Since I had an old Windows laptop running as a Plex and file server for years, I thought it would be good to try something new. After researching options I decided to try FreeNAS. Since it has ZFS and I’m an old Sun guy – why not? Well…. After a few weeks I decided to abandon FreeNAS and roll my own using a ThinkCentre M93p Tiny. I’ll try to post some notes on how the build goes.
Raspberry Pi backup using fsarchiver and other tricks
So I ran into a few issues using the dd image backup I referenced in my prior post, Raspberry Pi 3 SDCard backup:
- The image is very large even though the data is not. For example, on a 32GB SD card I was getting a 12GB file. I only have 3GB of data, so that was a bummer.
- When it comes time to recover, I have to expand the gz image file to a full 32GB and then image it onto another SD device. There are tricks around this, I’m sure, but still.
- Since dd was reading 100% of the SD card (/dev/mmcblk0), even with compression it took a LONG time to create the image, 20 minutes or so. Since I’m backing up a live system, this was a real issue.
I did manage to figure out how to create a partial image if your partition sizes were smaller than the actual device. This seemed to work, but it was still storing 6.6GB of data, which was over double what I actually had:
Trimmed SD Image…
root@webpi:/mnt/usb# blockdev --getsize64 /dev/mmcblk0p1 /dev/mmcblk0p2
66060288
8929745920
root@webpi:/mnt/usb# echo `blockdev --getsize64 /dev/mmcblk0p1` `blockdev --getsize64 /dev/mmcblk0p2` + p | dc
8995806208
root@webpi:/mnt/usb# dd if=/dev/mmcblk0 conv=sync,noerror iflag=count_bytes count=8995806208 \
  | gzip > /mnt/usb/webpi.trimmed.img.gz
Still not good enough…. Anyway, I might have to tweak the count to make sure I’m not missing the last little piece of the last partition, since there is partition data sitting in front of the partitions.
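If I kept going down this road, one fix would be to compute the count from the end of the last partition instead of summing the partition sizes, which would also cover the partition table and any gap before the first partition. A rough, untested sketch – the awk parsing of the sfdisk -d output is my own assumption, so sanity-check the number before relying on it:

# Sketch only: end of last partition = start + size, in 512-byte sectors
BYTES=$(sfdisk -d /dev/mmcblk0 | awk '
  /start=/ {
    gsub(/[^0-9 ]/, "")          # keep only digits; fields become: <device digits> <start> <size> ...
    end = $2 + $3                # start + size, in sectors
    if (end > max) max = end
  }
  END { print max * 512 }')      # convert sectors to bytes
echo $BYTES                      # eyeball this against the partition table first
dd if=/dev/mmcblk0 conv=sync,noerror iflag=count_bytes count=$BYTES \
  | gzip > /mnt/usb/webpi.trimmed.img.gz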
So…
To remedy these issues, I researched other ways to back up. I came to the conclusion that fsarchiver was a decent fit: simple to use, and it only backs up data. The downside is that I would have to use another Linux system to reconstruct the SD card; I can’t just blast an image onto an SD card and call it good.
Here are the steps. Since the fsarchiver version I have doesn’t support vfat, I had to make a dd image of the 66MB vfat boot partition. Not a big deal. The newer fsarchiver supports vfat; I just didn’t want to install the packages needed to do a full compile for the latest.
Benefits: Much faster, taking about 5 minutes total. Much smaller data footprint – 3GB of data is stored in a 2.2GB image!
Downside: Not one image – you need to do some recovery with another Linux system with an SD card loaded. Since I have a Pi set up for VPN and such, that’s not a problem for me.
Disclaimer – I’m only posting this stuff to help me remember what I did and possibly help others who understand how not to shoot themselves in the foot. Please be very careful trying any of this. Depending on your situation it may not apply.
Raspberry Pi Backup using fsarchiver
- # Quiesce any major services that might write…
  service apache2 stop
  service mysql stop
  service cron stop
- # Save the Partition Table for good keeping…
  sfdisk -d /dev/mmcblk0 > /mnt/usb/webpi.backup.sfdisk-d_dev_mmcblk0.dump
- # Save the vfat boot partition
  dd if=/dev/mmcblk0p1 conv=sync,noerror | gzip > /mnt/usb/webpi.backup.dd_dev_mmcblk0p1.img.gz
- # Save the main OS image efficiently…
  fsarchiver savefs -A -j4 -o /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa /dev/mmcblk0p2
- # Restart the services…
  service cron start
  service mysql start
  service apache2 start
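For my own reference, the whole backup sequence fits in one small script that could be dropped in cron. This is just a sketch of the steps above; the DEST and STAMP variables and the date-stamped file names are my additions, so adjust devices, paths, and service names for your own setup:

#!/bin/bash
# Sketch: same commands as the steps above, wrapped in one script.
DEST=/mnt/usb
STAMP=$(date +%Y%m%d)

# Quiesce anything that writes heavily
service apache2 stop
service mysql stop
service cron stop

# Partition table, vfat boot partition, then the ext4 root filesystem
sfdisk -d /dev/mmcblk0 > $DEST/webpi.$STAMP.sfdisk-d_dev_mmcblk0.dump
dd if=/dev/mmcblk0p1 conv=sync,noerror | gzip > $DEST/webpi.$STAMP.dd_dev_mmcblk0p1.img.gz
fsarchiver savefs -A -j4 -o $DEST/webpi.$STAMP.dev_mmcblk0p2.fsa /dev/mmcblk0p2

# Restart the services
service cron start
service mysql start
service apache2 start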
Raspberry Pi Restore using fsarchiver
- # put a new SD card in a card reader and plugged it
  # into a raspberry pi - showed up as /dev/sdb
- # Restore the partition table
  sfdisk /dev/sdb < /mnt/usb/webpi.backup.sfdisk-d_dev_mmcblk0.dump
- # Restore the vfat partition
  gunzip -c /mnt/usb/webpi.backup.dd_dev_mmcblk0p1.img.gz | dd of=/dev/sdb1 conv=sync,noerror
- # Run fsarchiver archinfo to verify you have a fsarchiver file and
  # determine which partition you want to recover if you did multiple partitions
  fsarchiver archinfo /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa
  ====================== archive information ======================
  Archive type:                   filesystems
  Filesystems count:              1
  Archive id:                     5937792d
  Archive file format:            FsArCh_002
  Archive created with:           0.6.19
  Archive creation date:          2017-06-12_07-51-00
  Archive label:                  <none>
  Minimum fsarchiver version:     0.6.4.0
  Compression level:              3 (gzip level 6)
  Encryption algorithm:           none
  ===================== filesystem information ====================
  Filesystem id in archive:       0
  Filesystem format:              ext4
  Filesystem label:
  Filesystem uuid:                8a9074c8-46fe-4807-8dc9-8ab1cb959010
  Original device:                /dev/mmcblk0p2
  Original filesystem size:       7.84 GB (8423399424 bytes)
  Space used in filesystem:       3.37 GB (3613343744 bytes)
- # Run the restfs option for fsarchiver
  fsarchiver restfs /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa id=0,dest=/dev/sdb2
  filesys.c#127,devcmp(): Warning: node for device [/dev/root] does not exist in /dev/
  Statistics for filesystem 0
  * files successfully processed:....regfiles=59379, directories=6999, symlinks=5774, hardlinks=331, specials=80
  * files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
- # Run sync for warm fuzzies...
  sync;sync;sync
Worked like a CHAMP!
Netgear Stora NAS
Since I work helping companies manage their enterprise storage environments, I tend to be very particular about how I store my data at home. It needs to be resilient, redundant, and fast. Why? I’m a glutton for punishment. Most of the time I spend more than enough money on something I then have to manage and tweak constantly. No inexpensive NAS device has had all the features I wanted in an embedded device – until now.
A few weeks ago, I decided to try the Netgear Stora, and I’m very impressed with it. First, it’s a 1TB NAS device for $200 that performs. I have a gigabit network at home, and the Stora works very well with its 1Gbit network interface.
It supports USB drives directly and will automatically mirror (RAID1) if you install a second drive inside it, which was the main reason I tried it.
What’s so lovely is that it has a web interface for file manipulation that can be accessed easily from the internet. Who cares, right? While the 1Gbit network is fast, direct hard drive access is much faster. Usually, loading up a NAS means having a computer in the middle to upload the data from other disks onto the NAS device. With direct USB disk support and a web interface, I could migrate 700GB of data much faster than having a computer as the middleman. Since the Stora was doing the copying, I didn’t have to worry about network hiccups and file-share weirdness with larger files. Nice.
I just found out that while Netgear says the file system is proprietary, I was able to mount the internal mirrored drive on my computer as an XFS filesystem within an Ubuntu VM instance. AWESOME. If the Stora dies, I can still get to my data.
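For what it’s worth, the recovery looks roughly like this from the Ubuntu side. This is only a sketch: the device name is an assumption (check dmesg or lsblk for what the pulled drive actually shows up as), and depending on how the mirror is laid out you may need to assemble it with mdadm before the XFS mount works.

# Sketch only - /dev/sdb1 is an assumed device name for the pulled Stora drive.
# Mount read-only so nothing on the rescued disk gets modified.
sudo mkdir -p /mnt/stora
sudo mount -t xfs -o ro /dev/sdb1 /mnt/stora
ls /mnt/stora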
Optional RAID1, great network performance, USB disk support, internet support, media server support for my PS3 – all for $200. Good times.
Here’s a demo of it:
Sun Cheatsheet


I published my Sun Cheatsheet document to the world recently. It’s a compilation of Sun commands and processes that I documented over the years. Enjoy! http://docs.google.com/Doc?id=dhjhzg6x_3c6d658