R720 or Bust

Ever since my previous foray into building a server, I've been trawling Lab Gopher for an upgrade. My preference would have been a Dell PowerEdge R720xd in the 3.5-inch format, since it can hold 12 full-size hard disks. But those are relatively rare, and deals were scarce.

Instead, I stumbled across a Dell PowerEdge R720 in the 2.5-inch format with an additional drive cage. While 2.5-inch drives offer lower capacity, I could use 16 of them if necessary. Since I still haven't come close to filling the 2TB I currently have thanks to ZFS compression, I'm not exactly pressed for storage. Still, with a server in hand, it was time to plan the upgrade.

Apples to Apples

Once the server arrived, the first order of business was taking stock. Here's a quick comparison of the new server and the old one.

|                | R710               | R720                |
|----------------|--------------------|---------------------|
| Processors     | 2x Intel X5600     | 2x Intel E5-2660 v2 |
| - Cores        | 2x6                | 2x10                |
| - Threads      | 24                 | 40                  |
| - Speed        | 2.8GHz             | 2.2GHz              |
| - Turbo        | 3.2GHz             | 3.0GHz              |
| - Cache        | 12MB               | 25MB                |
| Memory         | 64GB               | 128GB               |
| Drive Capacity | 8                  | 16                  |
| RAID           | PERC 6/i 512MB     | H710 1GB            |
| Remote         | iDRAC 6 Enterprise | iDRAC 7 Enterprise  |

Basically, the new server utterly annihilates the old one on pretty much every metric. That's to be expected when moving from an 11th-generation server to a 12th, but it's also slightly quieter. The only place it loses out is single-core clock speed, which is admittedly somewhat irksome. On the other hand, I haven't benchmarked it yet, and newer architectures tend to do more work per clock, so single-core performance may end up slightly ahead regardless.

Regardless, with that many cores, I have a bit more breathing room for my VMs. Who can argue with that?

When a Plan Comes Together

Given the hardware available, I want to squeeze as much out of it as possible without breaking the bank. That means I plan on combining these ingredients:

- The R720 itself, with its extra 2.5-inch drive cage
- A Dell H310 RAID controller
- A SilverStone PCIe adapter for two M.2 drives
- A Samsung 970 EVO M.2 NVMe drive
- An ADATA M.2 SATA drive
- 4x 1TB 2.5-inch drives

Wait… an H310 RAID controller!? Didn't the server arrive with the much better H710? Yes, but unlike the H310, the H710 doesn't support drive passthrough; every drive must be assigned to a RAID set. ZFS vastly prefers direct control over the underlying disk devices, so to make that possible, I actually had to downgrade the integrated RAID card. On the other hand, since it's a drop-in module, I now have a spare H710 controller with a 1GB BBU that I can pawn off. Score!
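
As a quick sanity check after the swap (a sketch; actual device names will vary), every disk should show up as its own raw device rather than hiding behind a single virtual RAID volume:

```bash
# Each physical drive should appear individually, with its real
# model and serial number, rather than as one virtual PERC volume.
lsblk -o NAME,MODEL,SERIAL,SIZE

# ZFS pools are best built from stable by-id paths, not /dev/sdX.
ls -l /dev/disk/by-id/ | grep -v part
```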

The goal here is to start by mounting the two M.2 drives onto the SilverStone adapter. The R720 is too old to boot from NVMe, so the Samsung 970 EVO isn't bootable. The ADATA, however, is a SATA device, and the SilverStone adapter allows it to run off the motherboard SATA header. This gives me a very solid OS boot drive along with a decoupled read/write cache layer that's roughly 5x faster than the Samsung 860 EVO I'm currently using in the old server.

That means I install Proxmox on the ADATA drive, format the 4x 1TB drives as two ZFS mirror sets, partition the 970 EVO, and add the partitions as ZIL (SLOG) and L2ARC devices. Once I've got everything configured properly, I can attach the server to my network and migrate the existing configuration and data. ZFS has an extremely capable snapshot export system that's orders of magnitude faster than rsync, since copy-on-write (COW) gives it immediate access to block-level deltas. The initial sync will take a while, but when I'm ready for the final migration, I can stop all my VMs, take a final snapshot, and initiate one last (and speedy) copy.
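
As a rough sketch of that layout (assuming a hypothetical pool named tank, placeholder /dev/disk/by-id paths for the four 1TB drives, and an illustrative partition split on the 970 EVO):

```bash
# Create the pool as two mirrored vdev pairs; substitute the real
# /dev/disk/by-id paths for the placeholder DRIVE names.
zpool create tank \
  mirror /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
  mirror /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4

# Carve the 970 EVO into a small SLOG partition and give the rest
# to L2ARC; the 16G split is illustrative, not a recommendation.
sgdisk -n1:0:+16G -t1:BF01 /dev/nvme0n1
sgdisk -n2:0:0    -t2:BF01 /dev/nvme0n1

# Attach the partitions as dedicated log (ZIL) and cache (L2ARC) devices.
zpool add tank log   /dev/nvme0n1p1
zpool add tank cache /dev/nvme0n1p2

# Keep the compression that's been stretching the current 2TB.
zfs set compression=lz4 tank
```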

Now, those who are familiar with Proxmox know I could add the new server as a second hypervisor in a fully configured cluster. In theory, that means I could use Proxmox itself to migrate everything over to the new server with almost no interruption. However, I'm using ZFS snapshots to maintain my long-term external backup device, and if I migrated the VMs and LXC containers that way, the new disks wouldn't share a snapshot history with those backups; I'd have to overwrite them from scratch. Performing a ZFS snapshot migration lets me keep everything going as-is.
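
The migration itself then boils down to two send/receive passes; a minimal sketch, assuming the same hypothetical tank pool on both machines and the new server reachable over SSH as r720:

```bash
# Initial seed: snapshot everything recursively and stream the full
# replication set to the new server. This is the slow pass.
zfs snapshot -r tank@migrate-base
zfs send -R tank@migrate-base | ssh r720 zfs receive -F tank

# Final cutover: stop the VMs, snapshot again, and send only the
# block-level deltas accumulated since the base snapshot. Fast.
zfs snapshot -r tank@migrate-final
zfs send -R -i tank@migrate-base tank@migrate-final | ssh r720 zfs receive -F tank
```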

Once all of that has been running for a while and I've shaken out any cobwebs, I'll swipe the drives from the R710 and finish the surgery by adding two more ZFS mirror sets. That gives me a total of 4TB of usable space, which I'll probably never fill no matter how many VMs and containers I throw at the system. Even if I do, I have room for eight more drives; it'll be a while before I need an external disk enclosure.
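
Expanding the pool at that point should be a single command (again with placeholder device paths):

```bash
# Stripe two more mirrored pairs into the existing pool; ZFS will
# favor the new, emptier vdevs for incoming writes automatically.
zpool add tank \
  mirror /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6 \
  mirror /dev/disk/by-id/ata-DRIVE7 /dev/disk/by-id/ata-DRIVE8
```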

So Far, So Good

At this point, the server is mostly ready. I've swapped the H710 out for the H310 and installed the SilverStone adapter along with the 970 EVO. Thanks to not reading the fine print, I failed to realize the Dell R720 can't boot from an NVMe SSD, PCIe adapter or not. On the one hand, that means I have to wait a bit longer to get everything going while the additional drive ships; on the other, I get to decouple the OS device from the ZFS read/write cache layers, which is probably safer in the long run.

Thankfully the ADATA was cheap, and the SilverStone adapter works for both NVMe and SATA M.2 devices. At worst, I would have also needed another M.2 adapter, and the R720 has seven PCIe slots: so much room for activities! Maybe someday in the distant future, I'll add some 10Gbit network cards just to fill an unused slot or two, who knows.

In any case, this makes me feel as though I tinker with and customize server hardware the same way the previous generation messed with cars. I basically did the equivalent of loading up a Corvette with a new engine and a boatload of nitrous. Now I just need another person to compare notes with for funsies.

Any takers?