Tinker Town
With my newfound “free time”, I’ve been catching up on my writing: two PG Phridays in a row, with ideas for many more to come. I also decided to “open source” my homelab setup, and since that’s a work in progress, it should see many commits in the future. And I’ve finally started working on the ol’ home lab in earnest. Definitely keeping myself busy!
On Phriday
The PG Phriday idea came to me, as always, because I’m a fan of terrible puns. Back when I worked at Peak6, I needed a topical and entertaining way to present Postgres-focused material to the devs so it would be memorable. It started out as presentations in one of the main conference rooms, usually focusing on tweaking query performance, avoiding common pitfalls, and other benign subjects. By the time I started writing about the more esoteric features, it had taken on a life of its own.
When I transitioned to 2ndQuadrant, it got a lot harder to publish these. Not only was I contractually obligated to publish them on the company website, but all of them required proper review first. Being surrounded by some of the best and brightest in the Postgres industry certainly aided in that step, and yet it interrupted my process. The articles slowed down notably, and I felt bad about that.
By the time EDB took over, I tried to bring PG Phriday back once or twice, but it never really stuck. To do it properly, I needed to be tinkering with various aspects of the engine, features, common roadblocks, and so on. Job duties being what they were, I had essentially typecast myself into a High Availability corner. It was hard to find time to write about unrelated subjects and also push the articles through review and then marketing before publication. I love HA, but you can only write about it so many times before everything kinda blurs together. My last published article was a mix of experimentation and HA, and it inspired me to dig even further. Everything was set for a resurgence.
Laid off or not, I plan on keeping that momentum as long as I can. I now see that there’s a wealth of tangential subjects I’d overlooked until recently. Heck, if I simply focus on the multitude of concepts in my last publication, I’ll be set for the next few months. Being a mod in the Postgres Discord also provides a lot of insight into the most common issues people are hitting.
We’ll see where things go!
The Bone Lab
While I was at it, I created a Bonelab github repository to chronicle all of my weird experiments there. I’d debated using GitLab or another competitor instead just to even out the playing field a bit, but I can always change my mind. The point is to lay out my thought process and put all my cards on the table, and a repo of any kind certainly does that.
Currently the lab consists of a couple Proxmox systems with various drive configurations, but that could always change. I was using TrueNAS SCALE not so long ago, after all. There’s always a chance I decide I want to use XCP-NG instead.
Regardless, the current repository is just a tutorial on setting up Proxmox using my preferred layout, plus some instructions and automation scripts for deploying a K0s Kubernetes cluster. Most of the tutorials I found out in the wild were for K3s, and I wasn’t satisfied by that. My K3s deployment had some kind of mysterious overhead that caused the Coordinator nodes to sit at 20% CPU even when the cluster was completely idle. The K0s system doesn’t have that issue as far as I can tell.
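For the curious, a K0s deployment of this shape can be driven from a single k0sctl config file. The sketch below is an assumption about my layout, not a copy of what’s in the repo; the hostnames, IP addresses, key path, and k0s version are all placeholders:

```yaml
apiVersion: k0sctl.k0s.sh/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    # Controller (coordinator) node -- address and key path are placeholders
    - role: controller
      ssh:
        address: 10.0.0.10
        user: root
        keyPath: ~/.ssh/id_ed25519
    # Worker node(s) -- add more entries for a larger cluster
    - role: worker
      ssh:
        address: 10.0.0.11
        user: root
        keyPath: ~/.ssh/id_ed25519
```

With a file like that in place, `k0sctl apply --config k0sctl.yaml` bootstraps the whole cluster over SSH, which is a big part of K0s’s appeal for lab setups like this.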
Now it’s time to start delving into more arduous subjects like getting MinIO installed so I can deploy NeonDB. Then I’ll need to run some benchmarks to compare a standard Postgres cluster with 3 replicas against a NeonDB deployment with 3 replicas using shared storage. I’m actually a bit curious how that’ll turn out.
My next update to the repo will probably be a few simple Ansible playbooks to automate the most common changes I make to Proxmox systems once they’re installed. I have a private Ansible repo for various little tweaks and automations, so maybe I’ll just clean all of that up and upload the whole thing and document the roles so they’re more usable.
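To give a flavor of what those playbooks look like, here’s a minimal sketch of a common Proxmox post-install task: swapping the enterprise apt repository for the no-subscription one. The `proxmox` host group is a hypothetical name, and the Debian codename (`bookworm`) is an assumption about the Proxmox version:

```yaml
---
- name: Common Proxmox post-install tweaks (hypothetical sketch)
  hosts: proxmox
  become: true
  tasks:
    - name: Remove the enterprise repository (requires a subscription)
      ansible.builtin.file:
        path: /etc/apt/sources.list.d/pve-enterprise.list
        state: absent

    - name: Enable the no-subscription repository
      ansible.builtin.apt_repository:
        repo: "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription"
        state: present
        filename: pve-no-subscription

    - name: Refresh the package cache and upgrade
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
```

Keeping each tweak as its own idempotent task like this means the playbook can be re-run safely whenever a new node joins the basement.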
It’s definitely a work in progress.
Basement Dwellers
The basement hardware has finally reached a point of relative completion. The new addition has all of its drives, ZFS is fully provisioned, my NAS container is now migrated, and I just need to start really putting it through its paces. I even took a couple hours on Saturday to clean up the horrifying rat’s nest of wires between my patch panel, switches, and both servers.
I basically created a bunch of 2-foot-long cables from the existing cables using a Klein crimping tool and checked all of them with a Klein tester. My first crimp was bad, as I didn’t realize I should insert the wires into the plug upside-down, so the pins were all reversed. Still, I didn’t create a single bad cable after that, which is some kind of miracle. I replaced five of the patch cables and made an additional 10-inch cable to link the main switch to the 10GE switch. Thankfully both switches have Auto-MDIX, so I didn’t have to create a crossover cable.
I actually had five excess unmolested cables when everything was said and done. That should give some idea of just how long they were. It’s no wonder they combined into an anxiety-inducing tangled mess!
On the Prowl
The job hunt is going well so far, and I’m staying tight-lipped until I accept an offer. The thing about interviewing is that it’s a slow process, even when all signs are positive. Planning, research, scheduling, multiple rounds; it’s never been a straight path in the tech world. The only time I got an offer after a single interview was at TownNews, and that panel included the CEO.
I’ve conducted enough interviews in my time to know the score. I’ve always tried to ease the process for those facing my interrogation, but due diligence must be observed. The most rounds I’ve seen an applicant endure is four, and that’s probably the sweet spot. That list usually includes other techs, a manager, HR, and an executive. Finding that many holes in that many schedules is difficult even with modern calendar tools. Then there’s evaluating the feedback, decisions that need to go up and down the hiring tree, and if an offer comes out of that, lawyers and paperwork.
Combine that with the work I’m doing on my repo, research into various filesystems and cloud ecosystems to test with Postgres, and my writing, and I’m busier than I’ve ever been. And I couldn’t be happier. I’m energized. Stoked, even. Some days I wake up with two or three ideas I need to verify with research or some test in the lab. I literally had to start a list because I was losing track of all the cool stuff stimulating my overactive mind.
In many ways, being laid off is the best thing that has happened to me in years. Unencumbered by job duties and other obligations, EDB has unleashed a mad scientist into a technology candy store. I want to test and build LLMs, integrate them into Postgres as part of a RAG stack, back it with a distributed filesystem, integrate sharding, and add data balancing background workers. For starters.
I haven’t even had the urge to launch Steam and play a game since I got the news. Learning new stuff is my drug of choice, and always has been. I am here; I am ready. Alea iacta est.
Until Tomorrow