After waiting three weeks for the knock-off drive trays I ordered for the R510, they still hadn't arrived. I went to check on things, and it turns out the seller is based in China, even though I specifically bought this particular unit because it was listed as being in Sydney. Maybe they really do ship it from Sydney, but this was taking too long! I wrote to them and, expecting to have to get my money back, ordered two more from another seller that is definitely in Australia (paying a bit more for the privilege). Those two caddies arrived earlier in the week, so this weekend I set about swapping the filesystem over to the new machine.
First things first: I wasn't sure the pool would import intact, so I backed up all the important stuff. This took far longer than it should have, because I plugged the USB hard disk into a USB2 port and rsync ran at a whopping 40MB/sec. Roughly 1.5TB at that rate is going to be slow no matter what.
With that out of the way, I took a backup of the configuration files just in case, powered down the FreeBSD machine and put the drives in the caddies. Booted the new machine up, imported the pool without issues, and set about moving things around. This was a precarious operation: there are numerous interdependent services on this machine, and I didn't want to lock myself out of it. It's our UniFi controller, for instance, and I wanted to replace that with a new, containerized controller without locking myself out of the network when the new one seized ownership of the WAP. In the end I managed to get it all done with hardly a hiccup.
This morning some files I wanted to move between datasets finally finished copying, so I started setting up services. The entire exercise is basically over-engineering 101: each "service" gets its own container, with several ZFS datasets for the data shared between them, the goal being that each container has only the minimal privileges it requires. A good example is my Nginx container, which has read-only access to the output of my blog's local preview; the actual writing is done by Pelican, which runs in a separate container... it's a very stripped-down environment running on Alpine Linux.
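That read-only access is just a `disk` device on the container; a sketch of how I'd wire it up (the dataset path and container names here are illustrative, not my exact ones):

```shell
# Give the nginx container read-only access to Pelican's output directory.
# Source path and container names are guesses for illustration.
lxc config device add nginx blog disk \
    source=/zeefus/storage/blog-output \
    path=/var/www/blog \
    readonly=true
```

With `readonly=true`, even root inside the Nginx container can't touch the files; only the Pelican container (which mounts the same dataset writable) can change them.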
I'm using LXD for everything. Why not Docker? Because I need LXD experience for work, and though I've made absolutely no effort to "do it right" here, I'm learning a bunch by stumbling into all sorts of weird corner cases. For instance, you can strap multiple ethernet interfaces onto an LXD container:
```shell
# lxc info unifi
Name: unifi
Location: none
Remote: unix://
Architecture: x86_64
Created: 2018/11/10 09:19 UTC
Status: Running
Type: persistent
Profiles: vlan_secure
Pid: 22906
Ips:
  eth0: inet    10.255.252.228  vethHAL5WC
  eth0: inet6   fe80::216:3eff:fe52:a748        vethHAL5WC
  eth1: inet    10.0.65.15
  eth1: inet6   fe80::216:3eff:fe64:1ca9
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
```
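For the record, attaching a second interface looks roughly like this (my eth0 actually comes in via the `vlan_secure` profile shown above; the parent interface name here is a guess):

```shell
# Attach a second NIC to the unifi container, bridged onto a host interface.
# "br-vlan65" is an illustrative bridge name, not necessarily mine.
lxc config device add unifi eth1 nic nictype=bridged parent=br-vlan65
```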
... but infuriatingly, Netplan does fucky things with them: it adds a default route from both interfaces if they're both configured via DHCP, so packets can come in on one interface and go out the other, which is an utter nightmare to diagnose (lots of time with tcpdump before I figured out what on earth was going on!). It's not the sort of thing I'd have learned by following one of the fifty thousand Docker tutorials out there.
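One way to avoid the duplicate default route, assuming eth0 is the one that should own it, is to tell Netplan not to install routes from the second interface's DHCP lease. A sketch (the filename varies by system):

```yaml
# /etc/netplan/50-cloud-init.yaml -- sketch only
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      dhcp4: true
      dhcp4-overrides:
        use-routes: false   # keep eth1's DHCP lease from adding a second default route
```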
I'll write up more details later, but the uid mapping for sharing datasets between containers is interesting too. It basically boils down to:
```shell
lxc config set samba raw.idmap 'both 1000 1000
both 1001 1001
both 1002 1002'
lxc config device add samba downloads disk source=/zeefus/storage/home path=/home
```
This means that uids 1000, 1001, and 1002 can all share files in the zeefus/storage/home dataset via the /home directory inside the container. Anything owned by root or another user on the host is immutable inside the container, which works out pretty great.
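A quick way to sanity-check the mapping from inside the container (the path under /home is illustrative):

```shell
# Mapped uids keep their numeric identity inside the container; anything
# owned by an unmapped host uid shows up as 65534 (nobody) and can't be written.
lxc exec samba -- ls -ln /home
lxc exec samba -- touch /home/some-root-owned-dir/test   # expect "Permission denied"
```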
Overall, I think it's pretty successful. I can upgrade one service without risking breaking another, and by snapshotting the containers, if I screw something up I can just roll back the upgrade. The only downside is that major version upgrades of the host should be fun, but that's what LTS is for.
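The snapshot workflow is about as simple as it gets (container and snapshot names are just examples):

```shell
# Take a snapshot before upgrading, roll back if the upgrade goes wrong.
lxc snapshot nginx pre-upgrade
# ... perform the upgrade, test the service ...
lxc restore nginx pre-upgrade    # only if something broke
lxc delete nginx/pre-upgrade     # otherwise, clean up the snapshot
```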