Docker volume backup and restore the easy way.

I haven’t had to move Docker volumes around in a few years, but I finally had the need today. As usual, I searched for the process, knowing that most examples out there are… well… not very good. Just as I was about to resort to a manual job using an Ubuntu container, I found a great write-up by Jarek Lipski on Medium. Here’s how you back up using Alpine and tar. Also, make sure you “docker stop” the containers that use the volume so you get a consistent backup.

Which containers use a volume?

docker ps -a --filter volume=[some_volume]
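To stop everything that uses the volume before backing it up, a quick sketch (fill in the bracketed volume name, same placeholder style as above) is to feed the same filter to docker stop:

docker stop $(docker ps -q --filter volume=[some_volume])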

Backup using an alpine image with tar:

docker run --rm -v [some_volume]:/volume -v /tmp:/backup alpine tar -cjf /backup/[some_archive].tar.bz2 -C /volume ./

Restore:

docker run --rm -v [some_volume]:/volume -v /tmp:/backup alpine sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xjf /backup/[some_archive].tar.bz2"

Backup using loomchild/volume-backup

I love that Jarek also created an image, loomchild/volume-backup, to simplify the process further. Here’s how the image works:

docker run -v [volume-name]:/volume -v [output-dir]:/backup --rm loomchild/volume-backup backup [archive-name]

Restore:

docker run -v [volume-name]:/volume -v [output-dir]:/backup --rm loomchild/volume-backup restore [archive-name]

What’s great is this method allows inline copying of a volume from one system to another using ssh. Here’s an example Jarek provides:

docker run -v some_volume:/volume --rm --log-driver none loomchild/volume-backup backup -c none - |\
     ssh user@new.machine docker run -i -v some_volume:/volume --rm loomchild/volume-backup restore -c none -
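To sanity-check the transfer, you can list the restored volume’s contents on the new machine (just a quick spot check, reusing the same names as the example above):

ssh user@new.machine docker run --rm -v some_volume:/volume alpine ls -la /volume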

Add HEIC support to Nextcloud

From https://eplt.medium.com/5-minutes-to-install-imagemagick-with-heic-support-on-ubuntu-18-04-digitalocean-fe2d09dcef1

sudo sed -Ei 's/^# deb-src /deb-src /' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install build-essential autoconf libtool git-core
sudo apt-get build-dep imagemagick libmagickcore-dev libde265 libheif
cd /usr/src/ 
sudo git clone https://github.com/strukturag/libde265.git  
sudo git clone https://github.com/strukturag/libheif.git 
cd libde265/ 
sudo ./autogen.sh 
sudo ./configure 
sudo make  
sudo make install 
cd /usr/src/libheif/ 
sudo ./autogen.sh 
sudo ./configure 
sudo make  
sudo make install 
cd /usr/src/ 
sudo wget https://www.imagemagick.org/download/ImageMagick.tar.gz 
sudo tar xf ImageMagick.tar.gz 
cd ImageMagick-7*
sudo ./configure --with-heic=yes 
sudo make  
sudo make install  
sudo ldconfig
sudo apt install php-imagick
cd /usr/src/ 
sudo wget http://pecl.php.net/get/imagick-3.4.4.tgz
sudo tar -xvzf imagick-3.4.4.tgz
cd imagick-3.4.4/
sudo apt install php7.2-dev
sudo phpize
sudo ./configure
sudo make
sudo make install
sudo phpenmod imagick

A restart of apache2 should finish the job. Check with the phpinfo() call…

sudo systemctl restart apache2
php -r 'phpinfo();' | grep HEIC
You should see:
ImageMagick supported formats => 3FR, 3G2, 3GP, A, AAI, AI, ART, ARW, AVI, AVS, B, BGR, BGRA, BGRO, BIE, BMP, BMP2, BMP3, BRF, C, CAL, CALS, CANVAS, CAPTION, CIN, CIP, CLIP, CMYK, CMYKA, CR2, CRW, CUBE, CUR, CUT, DATA, DCM, DCR, DCRAW, DCX, DDS, DFONT, DJVU, DNG, DPX, DXT1, DXT5, EPDF, EPI, EPS, EPS2, EPS3, EPSF, EPSI, EPT, EPT2, EPT3, ERF, EXR, FAX, FILE, FITS, FLV, FRACTAL, FTP, FTS, G, G3, G4, GIF, GIF87, GRADIENT, GRAY, GRAYA, GROUP4, HALD, HDR, HEIC,...
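To double-check that both the new ImageMagick build and the PHP extension actually see HEIC (a quick sanity check; this assumes the freshly built binaries are first in your PATH):

identify -list format | grep -i heic
php -r 'print_r((new Imagick)->queryFormats("HEIC*"));'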

Password-less ssh in 2 Glorious Steps…

Local system – let’s call it alpha
Remote system we don’t want to have to enter passwords for – let’s call it foxtrot

Prep: Harden your existing ssh keys since RSA 1024 sucks. This will create a new 4096-bit version – ed25519 is actually preferred, so you can skip the RSA creation if you like. First, move the old keys out of the way:

me@alpha$ mv ~/.ssh/id_rsa ~/.ssh/id_rsa_legacy
me@alpha$ mv ~/.ssh/id_rsa.pub ~/.ssh/id_rsa_legacy.pub

Step 1: Generate new keys:

me@alpha$ ssh-keygen -t rsa -b 4096 -o -a 100   #RSA version

me@alpha$ ssh-keygen -o -a 100 -t ed25519 #Preferred ed25519 version

Step 2: Copy the ed25519 key to the remote system called foxtrot:

me@alpha$ ssh-copy-id -i ~/.ssh/id_ed25519.pub me@foxtrot

If ssh-copy-id is not available (PowerShell, etc.), manually copy the public key to the other host:

me@alpha$ cat ~/.ssh/id_ed25519.pub | ssh me@foxtrot "cat >> ~/.ssh/authorized_keys"


DONE! Now verify you can actually ssh without a password:

me@alpha$ ssh me@foxtrot
me@foxtrot:~$ hostname
foxtrot
me@foxtrot:~$

You can also check your ~/.ssh/authorized_keys file for duplicate or old entries, especially if you used old garbage RSA 1024 (or smaller) keys in the past.
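One quick way to eyeball what’s in there (run on foxtrot; this just prints the key type and comment of each entry so stale keys stand out):

me@foxtrot$ awk '{print $1, $NF}' ~/.ssh/authorized_keys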

Additional reference: manually copy the key (this will ask you for the password of the remote user):

me@alpha$ scp ~/.ssh/id_ed25519.pub me@foxtrot:~
me@foxtrot$ cat ~/id_ed25519.pub >> ~/.ssh/authorized_keys

Fancy way of doing the same thing (tee takes stdin and appends it to the file):

me@alpha$ cat ~/.ssh/id_ed25519.pub | ssh me@foxtrot "tee -a ~/.ssh/authorized_keys"

Wait… what about PowerShell?

ssh-copy-id isn’t available there, so you can use the following:

$publicKey = Get-Content $env:USERPROFILE\.ssh\id_ed25519.pub

ssh user@remotehost "mkdir -p ~/.ssh; echo '$publicKey' >> ~/.ssh/authorized_keys; chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys"

Thanks to the following sites for easily explaining this process:
https://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/
https://blog.g3rt.nl/upgrade-your-ssh-keys.html
https://www.ionos.com/digitalguide/server/security/using-ssh-keys-for-your-network-connection/

 

HomeLab Build

Since I had an old Windows laptop serving as a Plex and file server for years, I thought it would be good to try something new. After researching options I decided to try FreeNAS. Since it has ZFS and I’m an old Sun guy – why not? Well… after a few weeks I decided to abandon FreeNAS and roll my own using a ThinkCentre M93p Tiny. I’ll try to post some notes on how the build goes.

Raspberry Pi backup using fsarchiver and other tricks

So I ran into a few issues using the dd image backup I referenced previously in Raspberry Pi 3 SDCard backup:

  1. The image is very large even though the data is not. For example, on a 32GB SD card I was getting a 12GB file. I only have 3GB of data, so that was a bummer.
  2. When it comes time to recover, I have to expand the gz image file to a full 32GB and then image it onto another SD device. There are tricks around this, I’m sure, but still.
  3. Since dd was reading 100% of the SD card (/dev/mmcblk0), even with compression it took a LONG time to create the image – 20 minutes or so. Since I’m backing up a live system, this was a real issue.

I did manage to figure out how to create a partial image when the partition sizes are smaller than the actual device. This seemed to work, but it was still storing 6.6GB of data, which was over double what I actually had:

Trimmed SD Image…

root@webpi:/mnt/usb# blockdev --getsize64 /dev/mmcblk0p1 /dev/mmcblk0p2
66060288
8929745920
root@webpi:/mnt/usb# echo `blockdev --getsize64 /dev/mmcblk0p1` `blockdev --getsize64 /dev/mmcblk0p2` + p | dc
8995806208
root@webpi:/mnt/usb# dd if=/dev/mmcblk0 conv=sync,noerror iflag=count_bytes count=8995806208 \
| gzip > /mnt/usb/webpi.trimmed.img.gz

Still not good enough… Anyway, I might have to tweak the count to make sure I’m not missing the last little piece of the last partition, since there is partition metadata in front of the partitions.
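One way to tweak it (a rough sketch, untested here; it assumes fdisk’s default column layout and that /dev/mmcblk0p2 has no boot flag, otherwise the column number shifts) is to count up to the end sector of the last partition instead of summing the partition sizes, so the partition table and any gap in front of the partitions are covered:

END=$(fdisk -l /dev/mmcblk0 | awk '$1=="/dev/mmcblk0p2" {print $3}')   # end sector of the last partition
dd if=/dev/mmcblk0 conv=sync,noerror iflag=count_bytes count=$(( (END + 1) * 512 )) \
| gzip > /mnt/usb/webpi.trimmed.img.gz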

So…

To remedy a few issues, I researched other ways to back up. I came to the conclusion that fsarchiver was a decent fit: simple to use, and it only backs up data. The downside is I would have to use another Linux system to reconstruct the SD card – I can’t just blast an image write onto an SD card and call it good.

Here are the steps. Since my fsarchiver doesn’t support vfat, I had to make a dd image of the 66MB vfat boot partition. Not a big deal. The newer fsarchiver supports vfat; I just didn’t want to install the packages needed to do a full compile of the latest.

Benefits: Much faster – it takes 5 minutes total. Much smaller data footprint – 3GB of data is stored in a 2.2GB image!
Downside: Not one image – you need to do the recovery with another Linux system with an SD card loaded. Since I have a Pi set up for VPN and such, that’s not a problem for me.

Disclaimer – I’m only posting this stuff to help me remember what I did and possibly help others that understand how to not shoot themselves in the foot.  Please be very careful in trying any of this stuff.  Depending on your situation it may not apply.

Raspberry Pi Backup using fsarchiver

  1. # Quiesce any major services that might write…
    service apache2 stop
    service mysql   stop
    service cron    stop
  2. # Save the Partition Table for good keeping…
    sfdisk -d /dev/mmcblk0 > /mnt/usb/webpi.backup.sfdisk-d_dev_mmcblk0.dump
  3. # Save the vfat boot partition
    dd if=/dev/mmcblk0p1 conv=sync,noerror | gzip > /mnt/usb/webpi.backup.dd_dev_mmcblk0p1.img.gz
  4. # Save the main OS image efficiently…
    fsarchiver savefs -A -j4 -o /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa /dev/mmcblk0p2
  5. # Restart the services…
    service cron    start
    service mysql   start
    service apache2 start
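For convenience, here is the same backup sequence rolled into one small script (a sketch; the device and output paths just mirror the steps above – adjust for your own setup):

#!/bin/sh
# Sketch of the backup steps above in one script; paths match the examples.
set -e
OUT=/mnt/usb
service apache2 stop; service mysql stop; service cron stop
sfdisk -d /dev/mmcblk0 > $OUT/webpi.backup.sfdisk-d_dev_mmcblk0.dump
dd if=/dev/mmcblk0p1 conv=sync,noerror | gzip > $OUT/webpi.backup.dd_dev_mmcblk0p1.img.gz
fsarchiver savefs -A -j4 -o $OUT/webpi.backup_dev_mmcblk0p2.fsa /dev/mmcblk0p2
service cron start; service mysql start; service apache2 start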

Raspberry Pi Restore using fsarchiver

  1. # put a new SD card in a card reader and plugged it 
    # into a raspberry pi - showed up as /dev/sdb
  2. # Restore the partition table
    sfdisk /dev/sdb < /mnt/usb/webpi.backup.sfdisk-d_dev_mmcblk0.dump
  3. # Restore the vfat partition
    gunzip -c /mnt/usb/webpi.backup.dd_dev_mmcblk0p1.img.gz | dd of=/dev/sdb1 conv=sync,noerror
  4. # Run fsarchiver archinfo to verify you have a fsarchiver file and 
    # determine which partition you want to recover if you did multiple partitions
    fsarchiver archinfo /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa 
    ====================== archive information ======================
    Archive type:                   filesystems
    Filesystems count:             1
    Archive id:                     5937792d
    Archive file format:           FsArCh_002
    Archive created with:           0.6.19
    Archive creation date:         2017-06-12_07-51-00
    Archive label:                 <none>
    Minimum fsarchiver version:     0.6.4.0
    Compression level:             3 (gzip level 6)
    Encryption algorithm:           none
    ===================== filesystem information ====================
    Filesystem id in archive:       0
    Filesystem format:             ext4
    Filesystem label:
    Filesystem uuid:               8a9074c8-46fe-4807-8dc9-8ab1cb959010
    Original device:               /dev/mmcblk0p2
    Original filesystem size:       7.84 GB (8423399424 bytes)
    Space used in filesystem:       3.37 GB (3613343744 bytes)
  5. # Run the restfs option for fsarchiver
    fsarchiver restfs /mnt/usb/webpi.backup_dev_mmcblk0p2.fsa id=0,dest=/dev/sdb2
    filesys.c#127,devcmp(): Warning: node for device [/dev/root] does not exist in /dev/
    Statistics for filesystem 0
    * files successfully processed:....regfiles=59379, directories=6999, symlinks=5774, hardlinks=331, specials=80
    * files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
  6. # Run sync for warm fuzzies...
    sync; sync; sync
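Before pulling the card, you can optionally mount the restored root partition and spot-check a file or two (a quick sanity check; /mnt/check is just an example mount point):

mkdir -p /mnt/check
mount /dev/sdb2 /mnt/check
cat /mnt/check/etc/hostname
umount /mnt/check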

 

Worked like a CHAMP!

Living on a Raspberry Pi!

This feels a little weird!

I’ve been playing with the overclocking, and it really makes a difference!  The settings below look stable but make the proc very hot (over 85 degrees C).

From /boot/config.txt:

# Overclock settings – disabled until heat sink is added. 170327 SeanK
#arm_freq=1350
#core_freq=500
#over_voltage=4
#disable_splash=1
##force_turbo=1
#boot_delay=1
#sdram_freq=500

Also created a script to put the governor in ondemand mode and put it in the init.d directory.
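The original script isn’t shown here, but a minimal hypothetical sketch of that idea (the file name and loop are illustrative assumptions, not the actual script) would look something like:

#!/bin/sh
# /etc/init.d/ondemand-governor (hypothetical name) – switch every core to the
# ondemand cpufreq governor at boot.
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo ondemand > "$gov"
done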

Netgear Stora NAS

 

Warning:  Nerd Content ahead!

Since I work helping companies manage their enterprise storage environments, I tend to be very anal about storing my data at home.  It needs to be resilient, redundant, and fast.  Why?  I’m a glutton for punishment.  Most of the time, I spend more than enough money on something I then have to manage and tweak constantly.  No inexpensive NAS device has had all the features I wanted in an embedded device – until now.

A few weeks ago, I decided to try the Netgear Stora, and I’m very impressed with it.  Firstly, it’s a 1TB NAS device for $200 that performs.  I have a gig network at home, and the Stora works very well with its 1Gbit net interface.

It can support USB drives directly and will auto RAID1 if you install a second drive inside it, which was the main reason I tried it.

What’s so lovely is that it has a web interface for file manipulation that can be accessed easily from the internet. Who cares, right?  Well, while the 1Gbit network is fast, direct hard drive access is much faster.  Usually with a NAS device you need a computer in the middle to upload data from other disks onto it.  With direct USB disk support and a web interface, I could migrate 700GB of data much faster than with a computer as the middleman.  Since the Stora was doing the copying, I didn’t have to worry about network hiccups and file-share weirdness with larger files.  Nice.

I just found out that while Netgear says the file system is proprietary, I was able to mount the internal mirrored drive on my computer as an XFS filesystem within an Ubuntu VM instance.  AWESOME.  If the Stora dies, I can still get to my data.

Optional RAID1, Great Net Performance, USB Disk Support, Internet Support, Media Server support for my PS3 – all for $200. Good times.


Goodbye Sun Microsystems…

So Oracle is in the final stages of buying Sun Microsystems Inc., a company I adored for years. It’s too bad to see Sun go, and as with all other Oracle buyouts, I’m sure not much will be left of the original idea of Sun. It’s sad to see, but after watching Sun, the premier UNIX environment of the late ’90s, go through its demise in the early 2000s, the writing was on the wall.

I remember distinctly being at a good friend’s house discussing a plan we had to get in the car, drive to Menlo Park, CA, and tell the then-CEO exactly how to get back on track:

  • Start advertising – that was a big one. Sun Rays – awesome. Who knew they were awesome outside of Sun? No one. I was at the 2002 Winter Olympic Games supporting the timing computers, which were Sun (again, who knew?), and it was absurd that Sun didn’t want to be seen as a sponsor of the games. Dumb. On top of that, I got to see how daily random blackouts were wreaking havoc on the Windows NT machines the press were using, only to think, “If they only knew how easy support would have been if Sun Rays were here.”
  • Quality needs to be #1 again – patches need to be solid again. Stop pushing code out the door to satisfy delivery plans. Make it right the first time.
  • Understand that product support should be seen as an opportunity for improvement and customer satisfaction, not simply as an operational cost. There was so much red tape internally at Sun that it all but guaranteed unhappy customers.

Sun deserves its fate. I hope Solaris is nurtured into a bigger and better product with fewer bugs, and that ZFS can deliver on the promises it made 5 years ago and has yet to achieve. Xen virtualization is nice, but it lacks the migration and recovery options VMware has. Project Blackbox, I’m sure, will morph into a “Database in a Box” concept.

Oracle will hopefully take the CoolThreads sun4v technology to the next level.  The old powerhouse SPARC sun4u procs should rest in peace like the Z80 and 68k series procs: great procs, but power hungry, and without the market share it’s too costly to keep up.

Best of luck, Oracle. I wish you the best. Be gentle with the ones you buy.