Here are some notes about backing up particular applications.
Handling logical volumes turns out to be a bit of a trick: use the Finnix distribution's startup code to turn LVM on and off. This results in distribution-specific code for the first stage of restoration, generated in make.fdisk. To edit it, search make.fdisk for "Hideous".
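For orientation, that generated code amounts to activating the volume groups before the logical volumes are rebuilt, then deactivating them again before the reboot. A minimal sketch (not the actual code emitted by make.fdisk) looks something like this:
# Sketch only: bring LVM up for the first stage restoration
vgscan                 # rebuild the list of volume groups
vgchange -a y          # activate all volume groups
# ... first stage restoration of the logical volumes happens here ...
vgchange -a n          # deactivate them again before rebooting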
LVM required the addition of two new LVM-specific scripts, make.lvs and mount.lvs. They are only generated and used if there are logical volumes present.
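The generated scripts are specific to your machine, but roughly speaking make.lvs recreates the logical volumes and puts file systems on them, and mount.lvs mounts them so the second stage restore can reach them. A hypothetical sketch, with volume group, logical volume, and mount point names invented for illustration:
# make.lvs (sketch): recreate a logical volume and its file system
lvcreate -L 4G -n usr vg00        # hypothetical names and size
mke2fs -j /dev/vg00/usr
# mount.lvs (sketch): mount it under the restoration target
mount /dev/vg00/usr /target/usr   # /target is a hypothetical mount point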
SELinux is disabled on the test machines, and /selinux is not backed up in any of these scripts. At a guess, you should probably disable SELinux after the first stage restoration, and you will probably have some SELinux-specific tasks to perform before turning it back on.
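On Fedora-style systems, one way to do that (my assumption, not something these scripts do for you) is to turn SELinux off in its configuration file and request a full relabel before turning it back on:
# Sketch only, assuming a Fedora-style /etc/selinux/config.
# Turn SELinux off on the restored system before the second stage restore:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# When re-enabling it later, ask for a full relabel on the next boot:
touch /.autorelabel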
The default bootloader in Fedora is the Grand Unified Bootloader (GRUB). It has to be reinstalled at the end of the first stage restoration, or you won't be able to boot thereafter. To preserve it for first stage restoration, make the following changes:
Edit the penultimate stanza of restore.metadata:
# Now install the boot sector.
# chroot $target /sbin/lilo -C /etc/lilo.conf
chroot $target /sbin/grub-install /dev/hda
Add the following stanza to save.metadata:
# Grub requires these at installation time.
if [ -d usr/share/grub ] ; then
    # Red Hat/Fedora
    crunch usr.share.grub usr/share/grub
fi
if [ -d usr/lib/grub ] ; then
    # SuSE
    crunch usr.lib.grub usr/lib/grub
fi
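crunch is a helper function defined elsewhere in save.metadata; judging from how it is called above, it tars and compresses the named directory into the $zip directory. If your copy lacks it, a stand-in consistent with that usage (a sketch, not the original function) would be:
# sketch of a crunch-like helper: crunch <archive base name> <directory>
crunch () {
    tar cf - $2 | gzip -c > $zip/$1.tar.gz
}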
If you run Tripwire or any other application that maintains a database of file metadata, rebuild that database immediately after restoring.
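With the open source Tripwire, for example, rebuilding means re-initializing the database once the restored system is up (shown as an illustration; your policy and key file locations may differ):
# rebuild the Tripwire database against the freshly restored files
tripwire --init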
Squid is an HTTP proxy and cache. As such it keeps a lot of temporary data on the hard drive, and there is no point in backing that up. Insert "--exclude /var/spool/squid" into the appropriate tar command in your second stage backup script (a sketch of such a command follows the restore example below). Then get Squid to rebuild its directory structure for you: tack a command for Squid to initialize itself onto the tail end of the second stage restore script. Here is how I did it over SSH in restore:
ssh $target "mkdir /var/spool/squid ; chown squid:squid /var/spool/squid;\
/usr/sbin/squid -z;touch /var/spool/squid/.OPB_NOBACKUP"
The last command creates a file of length 0 called .OPB_NOBACKUP. This is for the benefit of Arkeia; it tells Arkeia not to back up below this directory.
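As for the backup side mentioned above, the exact tar command depends on your own second stage backup script; a hypothetical example of where the exclusion goes might be:
# hypothetical second stage backup command with the Squid spool excluded
# (directory list and destination file are illustrations only)
tar cf - --exclude /var/spool/squid /etc /home /usr /var | gzip -c > /mnt/backup/stage2.tar.gz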
These notes are based on testing with Arkeia 4.2.
Arkeia is a backup and restore program that runs on a wide variety of platforms. You can use Arkeia as part of a bare metal restoration scheme, but there are two caveats.
The first is probably the more problematic: absent a more elegant solution, you have to hand-select the directories to restore in the navigator at restoration time. The reason is that Arkeia apparently has no mechanism for skipping files already present on the disk, nothing analogous to tar's -k option. If you simply allow a full restore, the restore will crash as Arkeia over-writes a library which is in use at restore time, e.g. lib/libc-2.1.1.so. Hand-selecting the directories to restore is at best dicey, so I recommend against it.
The second caveat is that you have to back up the Arkeia data dictionary and/or programs. To do that, modify the save.metadata script by adding Arkeia to the list of directories to save:
# arkeia specific:
tar cf - usr/knox | gzip -c > $zip/arkeia.tar.gz
You must back up the data dictionary this way because Arkeia does not back up its own data dictionary. This is one of my complaints about Arkeia, and I have solved it in the past by saving the data dictionary to tape with The TOLIS Group's BRU.
The data dictionary will be restored automatically by the script restore.metadata.
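For reference, the restore side amounts to unpacking that archive onto the target before Arkeia itself is started. A sketch of such a line, assuming the same $zip and $target conventions used elsewhere in these scripts:
# unpack the saved Arkeia data dictionary and programs onto the target
gzip -dc $zip/arkeia.tar.gz | ( cd $target ; tar xpf - )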
Amanda (The Advanced Maryland Automatic Network Disk Archiver) works quite well with this set of scripts. Use the normal Amanda backup process, and build your first stage data as usual. Amanda stores the data on tape in GNU tar or cpio format, and you can recover anything from individual files to entire backup images. The nice thing about recovering entire images is that you can then use variants of the scripts in this HOWTO to restore from the images, or directly from tape. I was able to restore my test machine with the directions from W. Curtis Preston's Unix Backup & Recovery. For more information on it, see the Resources. The Amanda chapter from the book is online.
I made two changes to the script restore. First, I changed it to accept a file name as an argument. Then, since Amanda's amrestore decompresses the data as it restores it, I rewrote it to cat the file into the pipe instead of decompressing it.
The resulting line looks like this:
cat $file | ssh $target "umask 000 ; cd / ; tar -xpkf - "
where $file is the script's argument, the image recovered from the tape by amrestore.
Since the command line arguments to tar (the k flag) prohibit overwriting, restore images in the reverse of the order in which they were made: most recent first.
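In practice that means something like the following, with image file names recovered by amrestore invented for illustration:
# hypothetical image names; restore the newer incremental first,
# then the older full dump, so tar's -k flag keeps the newer files
./restore vger._usr.20050102.1
./restore vger._usr.20050101.0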
Amanda does require setting ownership by hand if you back up the Amanda data directory with save.metadata. Something like:
bash# chown -R amanda:disk /var/lib/amanda
You can also add that line to your scripts for second stage restoration, such as restore.
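If you fold it into a second stage restore script that drives the target over SSH, as the examples above do, the hypothetical line would be:
# fix ownership of the Amanda data directory on the restored system
ssh $target "chown -R amanda:disk /var/lib/amanda"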
OK, NTFS isn't an application. It is a file system used by the Microsoft operating system Windows NT and its descendants, including Windows 2000 and Windows XP. You can back it up and restore to it from Linux with ntfsclone, one of the NTFS utilities in the ntfsprogs suite, available from http://www.linux-ntfs.org/.
These scripts will create NTFS partitions, but will not put a file system on them. It is not clear from the docs whether ntfsclone will lay down a file system on a virgin partition or not.
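For what it is worth, a sketch of how ntfsclone is typically used to save and restore a partition image (the device name and file names here are assumptions; adjust them to your layout):
# save the NTFS partition to a compressed special image
ntfsclone --save-image -o - /dev/hda1 | gzip -c > ntfs.img.gz
# restore the image onto the partition created during the first stage
gzip -dc ntfs.img.gz | ntfsclone --restore-image --overwrite /dev/hda1 -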