TrueNAS Migration
A Week-Long Journey of Infrastructure Transformation
The Mission
In January 2026, I undertook a major infrastructure project: converting my dual-purpose Proxmox server into a dedicated TrueNAS storage appliance while migrating all virtual machines to a separate hypervisor. This would give me proper separation between compute and storage, better resource allocation, and a more robust foundation for future growth.
The challenge? Do all of this without losing a single photo, document, or configuration file, and with minimal service downtime.
The Starting Point
- Dell OptiPlex 990: Running Proxmox with ZFS storage (2x 3TB drives in mirror) and hosting VM 400 (the application server running Nextcloud, Immich, and Paperless)
- Dell OptiPlex 9020: Secondary Proxmox host with limited storage
- Goal: Transform the 990 into a dedicated TrueNAS server and consolidate all VMs on the 9020
The End State
- Dell OptiPlex 990 (truenas): Dedicated TrueNAS SCALE server with ZFS storage and NFS exports
- Dell OptiPlex 9020 (pve): Primary Proxmox hypervisor running all VMs
- Architecture: Clean separation of compute and storage with NFS connecting them
The Journal
Day 1: Laying the Groundwork (January 11)
Sessions 1 & 2
The migration began with careful preparation. Before touching any production systems, I copied all user data to dedicated ZFS datasets with appropriate recordsizes for each workload type. Photos and large files got 1MB recordsize for sequential read performance; databases stayed at the default 128KB for random I/O.
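In outline, the staging looked something like the sketch below. The dataset names are placeholders, but the recordsize values are the point (the pool itself is the one later imported into TrueNAS as "NAS2"):

```
# Staging datasets with workload-appropriate recordsizes; dataset names are placeholders
zfs create -o recordsize=1M NAS2/photos        # large media files, sequential reads
zfs create -o recordsize=1M NAS2/files         # Nextcloud user files
zfs create NAS2/databases                      # keep the 128K default for random I/O
zfs get recordsize NAS2/photos NAS2/databases  # confirm the settings took
```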
With data safely staged, I installed Proxmox on the 9020 and migrated the simpler VMs first: Pi-hole (DNS), Home Assistant (smart home), Tailscale (VPN), and the metrics server. These migrated cleanly using Proxmox's remote migration feature.
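For reference, the tool for this is qm remote-migrate (still flagged experimental in recent Proxmox releases). A sketch, with placeholder credentials and hostname:

```
# Cross-host migration of VM 200 to the 9020; the API token, host, and
# fingerprint are placeholders for real cluster credentials
qm remote-migrate 200 200 \
  'host=pve-9020.lan,apitoken=PVEAPIToken=root@pam!migrate=SECRET,fingerprint=AA:BB:...' \
  --target-bridge vmbr0 \
  --target-storage local-lvm \
  --online
```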
VM 400, the application server, stayed behind. It had a 20GB ZFS zvol for its Docker data that couldn't easily migrate over the network to the 9020's limited storage. I needed to wait for hardware upgrades before tackling this one.
Status: VMs 101, 200, 300, 600 migrated. VM 400 remained on the 990. Validation period started.
Day 7: Hardware Upgrades and Preparation (January 17)
Session 3
A week of validation confirmed the migrated VMs were stable. Time to prepare for the main event.
I upgraded the 9020 with 32GB of RAM (matching the 990) and added a 1TB HDD as additional storage. The new drive became an LVM-thin pool called "hdd-storage" with 966GB available for VM 400.
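A sketch of that setup, with /dev/sdb standing in for whatever device name the new disk actually received:

```
# Turn the new 1TB HDD into an LVM-thin pool and register it with Proxmox
pvcreate /dev/sdb
vgcreate hdd-storage /dev/sdb
lvcreate -l 100%FREE --thinpool data hdd-storage   # older LVM may want slightly less than 100%
pvesm add lvmthin hdd-storage --vgname hdd-storage --thinpool data --content images,rootdir
```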
The validation period had given me confidence that the infrastructure was solid. Now came the hard part.
Day 7: The Incident (January 17)
Session 4
This session taught me hard lessons about assumptions and verification.
The plan was straightforward: create a tarball of VM 400's local data (Docker images, databases), migrate the VM, restore the data. What could go wrong?
What went wrong:
- Silent tar failure: The tarball I created was 797MB. It should have been 9GB+. I didn't use the -v flag, so I never saw that the Docker overlay2 directory (containing all the container image layers) failed to archive due to special files and hardlinks.
- Zvol destruction: During migration cleanup, I ran a command to remove an orphaned disk reference. On ZFS storage, this doesn't just remove the reference; it destroys the actual data. The original VM 400 data disk was gone. Forever.
What was saved:
- The PostgreSQL databases for Immich and Paperless were in the tarball (they're regular files)
- All user data (photos, documents, files) was already on separate ZFS datasets, untouched
- Docker images could be re-downloaded from registries
I expanded the VM's disk using GParted (the partition layout required manual intervention), extracted what I could from the tarball, and assessed the damage. The critical realization: my actual user data was safe. I had "only" lost Docker's cached image layers, which could be recovered.
Day 7: Recovery (January 17)
Session 5
With the initial panic subsiding, I methodically verified what I had:
- Immich database: 459MB intact, containing metadata for 18,649 photos and videos
- Paperless database: 69MB intact, containing 4 documents
- Docker configuration: Correct, pointing to the right data directory
I re-pulled all the Docker images from their registries. Then came another discovery: the Docker compose files expected databases at /mnt/storage/, but the restored data was at /mnt/local-data/. Docker had helpfully created fresh empty databases at the expected paths.
The fix was elegant: delete the empty directories Docker created and replace them with symlinks to the actual data locations. After restarting the database containers, both showed the magic words: "PostgreSQL Database directory appears to contain a database; Skipping initialization."
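In sketch form, with illustrative directory and service names:

```
# Replace the fresh empty databases Docker created with symlinks to the real data
docker compose down
rm -rf /mnt/storage/postgres                          # the empty database from first start
ln -s /mnt/local-data/postgres /mnt/storage/postgres  # point at the restored data
docker compose up -d
docker compose logs postgres | grep -i 'skipping initialization'
```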
Query results confirmed: 18,649 Immich assets, 4 Paperless documents. Everything was there.
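The checks looked something like the queries below; the container, database, and table names are assumptions about a typical Immich/Paperless compose stack, so adjust to taste:

```
# Row-count sanity checks against the restored databases
docker exec immich_postgres psql -U postgres -d immich \
  -c 'SELECT count(*) FROM assets;'
docker exec paperless_db psql -U paperless -d paperless \
  -c 'SELECT count(*) FROM documents_document;'
```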
Day 8: The Final Push (January 18)
Session 6
Before wiping the 990's OS to install TrueNAS, I ran every safety check I could think of:
- Full ZFS scrub: 2 hours, 0 errors
- Dataset verification: all sizes matched expectations
- Snapshot creation: safety snapshots for every dataset
- Drive serial number documentation: written down physically, not just digitally
- Backup staging: 549GB of Proxmox backups copied to the main pool (the backup drive would be reformatted)
- Clean pool export: ensuring TrueNAS would see a properly closed filesystem
With everything verified, I exported the ZFS pool and shut down the 990 for the last time as a Proxmox host.
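The ZFS side of that checklist condenses to a handful of commands (the snapshot name is my own convention):

```
zpool scrub NAS2                       # runs in the background
zpool status NAS2                      # poll until it finishes with 0 errors
zfs list -r -o name,used,avail NAS2    # record dataset sizes for later comparison
zfs snapshot -r NAS2@pre-truenas       # recursive safety snapshot of every dataset
zpool export NAS2                      # clean export so TrueNAS sees a closed pool
```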
Day 8: TrueNAS Installation (January 18)
Session 7
The TrueNAS installation was anticlimactic in the best way. I selected the correct drive (verified by serial number), ran through the installer, and configured the network.
The moment of truth: Storage → Import Pool → "NAS2" appeared in the list. One click, and 2.7TB of data was accessible. All datasets present. All snapshots intact. The ZFS import worked flawlessly.
I configured NFS shares for each dataset, set the ACL type to POSIX (TrueNAS defaults to NFSv4 ACLs, which can cause permission issues for containers accessing files over NFS), and created a new backup pool on the 1TB drive.
Back on VM 400, I configured fstab with NFS mount options optimized for each workload: async for read-heavy photo libraries, sync for document uploads where data integrity matters most. After mounting everything and starting the services:
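The entries looked roughly like this; the hostname is the new TrueNAS server, while the export paths and mount points are illustrative:

```
# /etc/fstab on VM 400 -- async for read-heavy media, sync where integrity matters
truenas:/mnt/NAS2/photos     /mnt/storage/photos     nfs  vers=4.2,async,hard,_netdev  0 0
truenas:/mnt/NAS2/documents  /mnt/storage/documents  nfs  vers=4.2,sync,hard,_netdev   0 0
```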
- Immich: Login works, all 18,649 photos visible, new uploads working
- Paperless: Login works, all documents accessible, OCR processing new scans
- Nextcloud: Files syncing, no issues
Migration complete.
Day 8: The Forgotten Step (January 18)
Session 8
A few hours after declaring victory, I discovered that Proxmox backups were failing silently. Investigation revealed I had marked "update backup storage configuration" as complete without actually doing it. The backup storage was still pointing to the old path from before TrueNAS.
The fix required creating a dedicated dataset on TrueNAS for Proxmox backups (best practice anyway), creating the NFS share, and reconfiguring the Proxmox storage. After verification, all five VMs backed up successfully.
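On the Proxmox side, registering the new share is a one-liner; the storage and dataset names below are illustrative:

```
pvesm add nfs truenas-backups \
  --server truenas \
  --export /mnt/NAS2/proxmox-backups \
  --content backup
pvesm status   # confirm the storage mounts and reports free space
```

The real test is then watching fresh vzdump files land under /mnt/pve/truenas-backups/dump/ after the next scheduled run.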
Lesson learned: "Verified backups running" should mean checking that recent backup files actually exist, not just that the job is configured.
Lessons Learned
Always Use Verbose Mode
When creating archives of critical data, always use tar -v to see what's actually being captured. Silent failures are the worst kind. Better yet, verify the archive contents before proceeding: tar -tzf file.tar.gz | grep expected-directory.
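A fuller version of that habit, with a placeholder archive name and source path:

```
# Archive verbosely, capture errors, and check the exit code before trusting the result
tar -czvf vm400-local.tar.gz -C /var/lib/docker . 2> tar-errors.log
echo "tar exited with $?"                        # non-zero means something failed to archive
tar -tzf vm400-local.tar.gz | grep -c overlay2   # are the image layers actually in there?
du -h vm400-local.tar.gz                         # sanity-check the size against expectations
```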
Understand Your Tools
The command qm set 400 --delete unused0 doesn't just remove a reference when the storage backend is ZFS. It destroys the actual zvol. The "point of no return" in my plan was in the wrong place. Always understand what commands actually do to your data, not just what they appear to do in the UI.
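A safer sequence before any destructive cleanup on ZFS-backed storage might look like the sketch below. The zvol name follows Proxmox's usual pattern but is hypothetical, and the off-pool copy matters because a same-pool snapshot may not survive the cleanup:

```
qm config 400 | grep unused                 # which volume does unused0 actually point at?
zfs list -t volume                          # find the backing zvol, e.g. rpool/data/vm-400-disk-1
zfs snapshot rpool/data/vm-400-disk-1@safety
zfs send rpool/data/vm-400-disk-1@safety > /backup/vm-400-disk-1.img   # off-pool copy first
qm set 400 --delete unused0                 # only now remove the reference (and with it the zvol)
```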
Separate Your Data
The reason this incident was recoverable was that user data (photos, documents, files) lived on separate ZFS datasets from application data (Docker images, caches). When the zvol was destroyed, I lost convenience (cached images) but not irreplaceable data.
Symlinks Before Containers
If a container expects data at path A but your data is at path B, create the symlink before starting the container. Docker will helpfully create fresh empty directories at missing paths, which can mask the fact that your data isn't connected.
Verify, Don't Assume
Marking a checklist item as complete feels good. Actually completing it is better. The forgotten backup configuration step could have led to a week of missed backups if I hadn't caught it the same day.
Final Architecture
The migration achieved its goals:
- TrueNAS Server (990): 32GB RAM for ZFS ARC cache, 2x 3TB mirror for data, 1TB for backups
- Proxmox Host (9020): 32GB RAM, running all VMs with NFS connections to storage
- Clean Separation: Compute and storage are independent; either can be upgraded or replaced without affecting the other
- Data Integrity: ZFS provides checksumming, snapshots, and scrubbing across all data
- Services Operational: Immich, Paperless, Nextcloud, Home Assistant, Pi-hole, Tailscale, and Grafana all running
Total downtime for user-facing services: approximately 4 hours during the VM 400 migration and recovery. No data was permanently lost.