From https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#chapter_zfs
# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 9.32M in 00:00:00 with 0 errors on Thu Apr 3 23:20:51 2025
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          nvme-eui.0025388581b66796-part3  ONLINE       0     0     0
# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   1.8T  0 disk
└─sda1          8:1    0   1.8T  0 part
zd16          230:16   0    32G  0 disk
├─zd16p1      230:17   0   100M  0 part
├─zd16p2      230:18   0   892M  0 part
└─zd16p3      230:19   0    31G  0 part
zd32          230:32   0    10G  0 disk
├─zd32p1      230:33   0   9.5G  0 part
├─zd32p2      230:34   0     1K  0 part
└─zd32p5      230:37   0   510M  0 part
nvme0n1       259:0    0 476.9G  0 disk
├─nvme0n1p1   259:1    0  1007K  0 part
├─nvme0n1p2   259:2    0     1G  0 part
└─nvme0n1p3   259:3    0 475.9G  0 part
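Before copying anything, it is worth double-checking which device node maps to which physical disk, for example by serial number. A quick sanity check (assuming smartmontools is installed, which it is on a standard Proxmox VE install):

# ls -l /dev/disk/by-id/ | grep -E 'sda|nvme0n1'
# smartctl -i /dev/sda | grep -i serial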
Duplicate the partition table from the existing drive onto the new drive (/dev/sda)
# sgdisk /dev/nvme0n1 -R /dev/sda
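To verify the clone, you can print both partition tables and compare them (sgdisk -p is read-only):

# sgdisk -p /dev/nvme0n1
# sgdisk -p /dev/sda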
Randomize the GUIDs on the new drive so the two disks are not identical
# sgdisk -G /dev/sda
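You can confirm the disk GUIDs now differ:

# sgdisk -p /dev/nvme0n1 | grep 'Disk identifier'
# sgdisk -p /dev/sda | grep 'Disk identifier'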
Use parted or fdisk to expand partition 3 to the full capacity of the new disk
# fdisk /dev/sda

Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 126F0F8E-624E-4F4D-8CD4-89F8B2EDE74A

Device       Start        End    Sectors   Size Type
/dev/sda1       34       2047       2014  1007K BIOS boot
/dev/sda2     2048    2099199    2097152     1G EFI System
/dev/sda3  2099200 1000215182  998115983 475.9G Solaris /usr & Apple ZFS

Command (m for help): d
Partition number (1-3, default 3): 3

Partition 3 has been deleted.

Command (m for help): n
Partition number (3-128, default 3): 3
First sector (2099200-3907029134, default 2099200):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-3907029134, default 3907028991):

Created a new partition 3 of type 'Linux filesystem' and of size 1.8 TiB.

Command (m for help): p

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 126F0F8E-624E-4F4D-8CD4-89F8B2EDE74A

Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    2099199    2097152    1G EFI System
/dev/sda3  2099200 3907028991 3904929792  1.8T Linux filesystem
Change the partition type back to "Solaris /usr & Apple ZFS" (type 157 in this fdisk version's list)
Command (m for help): t
Partition number (1-3, default 3): 3
Partition type or alias (type L to list all): 157

Changed type of partition 'Linux filesystem' to 'Solaris /usr & Apple ZFS'.

Command (m for help): p

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 126F0F8E-624E-4F4D-8CD4-89F8B2EDE74A

Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    2099199    2097152    1G EFI System
/dev/sda3  2099200 3907028991 3904929792  1.8T Solaris /usr & Apple ZFS

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
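As an alternative to the interactive fdisk session above, the delete / recreate / retype sequence can be done in one non-interactive sgdisk call. A sketch: BF01 is sgdisk's type code for Solaris /usr & Apple ZFS, and 0 for start and end means "use the defaults", i.e. the rest of the disk:

# sgdisk -d 3 -n 3:0:0 -t 3:BF01 /dev/sda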
Check which type of bootloader setup you have (GRUB / UEFI)
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
F0A5-6343 is configured with: uefi (versions: 6.8.12-4-pve, 6.8.12-9-pve)
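If the status output reports grub rather than uefi, the init step below should be told to install the GRUB variant instead (supported on current proxmox-boot-tool versions):

# proxmox-boot-tool init /dev/sda2 grub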
Format the new EFI system partition and initialize it, which copies the bootloader and kernels to the new disk
# proxmox-boot-tool format /dev/sda2
UUID="" SIZE="1073741824" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Formatting '/dev/sda2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.

# proxmox-boot-tool init /dev/sda2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="F84D-06C6" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sda" MOUNTPOINT=""
Mounting '/dev/sda2' on '/var/tmp/espmounts/F84D-06C6'.
Installing systemd-boot..
Created "/var/tmp/espmounts/F84D-06C6/EFI/systemd".
Created "/var/tmp/espmounts/F84D-06C6/EFI/BOOT".
Created "/var/tmp/espmounts/F84D-06C6/loader".
Created "/var/tmp/espmounts/F84D-06C6/loader/entries".
Created "/var/tmp/espmounts/F84D-06C6/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/F84D-06C6/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/F84D-06C6/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/F84D-06C6/loader/random-seed successfully written (32 bytes).
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/sda2'.
Adding '/dev/sda2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/F0A5-6343
        Copying kernel and creating boot-entry for 6.8.12-4-pve
        Copying kernel and creating boot-entry for 6.8.12-9-pve
Copying and configuring kernels on /dev/disk/by-uuid/F84D-06C6
        Copying kernel and creating boot-entry for 6.8.12-4-pve
        Copying kernel and creating boot-entry for 6.8.12-9-pve
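Afterwards, the status command should list both ESPs (F0A5-6343 and F84D-06C6 in this example) as configured:

# proxmox-boot-tool status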
Add the new disk to rpool as a mirror device. Important: you must attach partition 3 (/dev/sda3), not just the disk designation.
# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 9.32M in 00:00:00 with 0 errors on Thu Apr 3 23:20:51 2025
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          nvme-eui.0025388581b66796-part3  ONLINE       0     0     0

errors: No known data errors

# zpool attach rpool nvme-eui.0025388581b66796-part3 /dev/sda3

# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr 4 03:35:18 2025
        378G / 378G scanned, 853M / 378G issued at 35.5M/s
        841M resilvered, 0.22% done, 03:01:01 to go
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.0025388581b66796-part3  ONLINE       0     0     0
            sda3                             ONLINE       0     0     0  (resilvering)

errors: No known data errors
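The resilver runs in the background; progress can be followed with something like:

# watch -n 60 zpool status rpool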
# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 371G in 03:10:09 with 0 errors on Sat Apr 5 11:24:50 2025
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.0025388581b65b82-part3  ONLINE       0     0     0
            sda3                             ONLINE       0     0     0
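Before powering off, it may be worth confirming the firmware has a boot entry pointing at the new ESP (efibootmgr should be available on a standard UEFI Proxmox install):

# efibootmgr -v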
Once the resilver completes, power off and physically replace the old drive with the new one. On a UEFI system it should still boot from the new drive.
Once booted, the pool will show a degraded mirror with the old drive missing; it is safe to detach the stale device entry.
# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 371G in 03:10:09 with 0 errors on Sat Apr 5 11:24:50 2025
config:

        NAME                      STATE     READ WRITE CKSUM
        rpool                     DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            14929228184534084580  UNAVAIL      0     0     0  was /dev/disk/by-id/nvme-eui.0025388581b65b82-part3
            nvme0n1p3             ONLINE       0     0     0

errors: No known data errors

# zpool detach rpool 14929228184534084580

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   472G   367G   105G        -     1.35T    21%    77%  1.00x    ONLINE  -

If you expanded partition 3 to be larger, tell ZFS to grow the pool into the available space:

# zpool set autoexpand=on rpool
# zpool online -e rpool nvme0n1p3

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.82T   367G  1.46T        -         -     5%    19%  1.00x    ONLINE  -
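As a final check, verify the autoexpand setting and pool health, and optionally run a scrub to re-read and verify every block on the new disk:

# zpool get autoexpand rpool
# zpool scrub rpool
# zpool status rpool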