it's actually not that easy… but also not that hard to back up larger files reliably and encrypted (!) over the internet yourself.
0. set up ssh on your backup server, so the backup user can log in without being prompted for a password (safer and more reliable); i will not cover in detail here how to do this.
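a minimal sketch of what that key setup usually looks like (the key file name and port 4444 are assumptions, adapt to your server):

# on the machine that sends the backups: generate a key pair
# -N "" sets an empty passphrase, needed for unattended cron jobs
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519_backup

# copy the public key to the backup user on the remote server
ssh-copy-id -i ~/.ssh/id_ed25519_backup.pub -p 4444 admin@RemoteBackupServer.com

# test: this should print the remote hostname without asking for a password
ssh -i ~/.ssh/id_ed25519_backup -p 4444 admin@RemoteBackupServer.com hostname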
1. split the files into smaller files (e.g. 1 GB chunks)
i would create a new screen session on the server and run the commands in there (so you can log off/close the terminal and it will continue):
screen -S split # create a new screen session named "split"

# compress the large file and split it into 1 GB (1073741824 byte) chunks
gzip -c LARGEBACKUP.file | split --suffix-length=5 -b 1073741824 - LARGEBACKUP.file.split.gz

# generate sha512sum/sha256sum/md5sum over the chunks
# to check if the transferred files were ok (the list will also be transferred)
sha512sum LARGEBACKUP.file.split.gz* > sha512sums.txt
press Ctrl+A, then D to detach from the screen session
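on the remote side, once everything has arrived, you can verify the chunks against the checksum list and reassemble the original file; a minimal sketch (assumes the file names from the split command above):

# verify all transferred chunks against the checksum list
sha512sum -c sha512sums.txt

# reassemble: concatenate the chunks in order (the glob sorts them correctly) and decompress
cat LARGEBACKUP.file.split.gz* | gunzip -c > LARGEBACKUP.file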
2. create the backup script
that tests whether an rsync process is already running and restarts it if not.
vim /scripts/onlineBackup.sh
fill with this content (--bwlimit=65 means: do not use more than 65 KByte/s for upload):
#!/bin/bash
# check if an rsync process (other than the rsync daemon) is already running
ps cax | grep -v rsyncd | grep -v grep | grep rsync > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running."
else
    echo "Process is not running."
    # backup data first (more important)
    /share/MD0_DATA/BACKUP/SERVER4/ERGODENT_DAILY/onlineBackup.sh
    # now backup the rest (backups of 3d / 3d server and all the other vms)
    # here port -p4444 is used
    # do not use more than 65 KByte/s: --bwlimit=65
    rsync -vv --bwlimit=65 --progress --update -r --partial --checksum --inplace --no-whole-file --exclude=LARGEBACKUP.file -e 'ssh -p4444' /sync/this/directory/ admin@RemoteBackupServer.com:/to/this/directory/
fi
3. create a cronjob that runs every night and checks if an rsync process is still going on… if yes, it does not start another process… if no, it starts another online backup process.
crontab -e # start editing crontab

# check if the online backup process is running, restart it if not, every midnight
0 0 * * * /share/MD0_DATA/onlineBackup.sh
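an alternative to the ps/grep check would be to let cron start the script through flock, which guarantees only one instance runs at a time (a sketch, assuming util-linux flock is installed; the lock file path is an assumption):

# flock -n exits immediately if another instance still holds the lock
0 0 * * * flock -n /var/lock/onlineBackup.lock /share/MD0_DATA/onlineBackup.sh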
next level: a major improvement of the script would be:
after every backup: check if the files were transferred successfully
4. log in remotely, run an md5 check of all transferred files, compare to the md5sum list from the server -> if differences are spotted -> e-mail the admin both files.
… i modified the rsync command above so it compares checksums (--checksum) automatically… so this might not be needed anymore, but it would be a good secondary check.
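a minimal sketch of such a secondary check (host, port, directories and the mail command are assumptions, adapt to your setup):

#!/bin/bash
# recompute the checksums on the remote side
ssh -p4444 admin@RemoteBackupServer.com 'cd /to/this/directory/ && sha512sum LARGEBACKUP.file.split.gz*' > remote_sums.txt

# compare the remote checksums to the local sha512sums.txt
if ! diff -u sha512sums.txt remote_sums.txt > sums.diff; then
    # differences spotted -> e-mail both lists to the admin
    mail -s "online backup: checksum mismatch" admin@example.com < sums.diff
fi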
it would also be good to check the disks for bad sectors regularly, check the readability of the files, and send an e-mail to the admin if problems occur.
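a sketch of such a health check (that smartmontools is installed, /dev/sda is the backup disk, and a working mail command are all assumptions here):

#!/bin/bash
# SMART health check of the backup disk
if ! smartctl -H /dev/sda | grep -q "PASSED"; then
    echo "disk /dev/sda failed the SMART health check" | mail -s "backup disk: SMART check failed" admin@example.com
fi

# readability check: read every backup file once, collect I/O errors
find /to/this/directory/ -type f -exec cat {} + > /dev/null 2> read_errors.txt
if [ -s read_errors.txt ]; then
    mail -s "backup: unreadable files detected" admin@example.com < read_errors.txt
fi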
Incremental backups?
let’s assume a client computer gets a nasty virus that encrypts all files it can get and then asks for ransom…
let’s say your users did not report that problem for 3 days… now the backups “backed up” all those nicely encrypted files, overwriting, in the worst case, useful files on the backup.
Not good. So you definitely need incremental backups, out of reach of hackers and viruses.
Rsync Can Do Hard Links
Although I personally like the method of using rsync with hard links, rsync has an option that does the hard links for you, so you don’t have to create them manually. Most modern Linux distributions have a fairly recent rsync that includes the very useful option --link-dest=. This option allows rsync to compare the file copy to an existing directory structure and lets you tell rsync to copy only the changed files (an incremental backup) relative to the stated directory and to use hard links for other files.
I’ll look at a quick example of using this option. Assume you have a source directory, SOURCE, and you do a full copy of the directory to SOURCE.full:
[laytonjb@home4]$ rsync -avh --delete /home/laytonjb/TEST/SOURCE/ /home/laytonjb/TEST/SOURCE.full
sending incremental file list
created directory /home/laytonjb/TEST/SOURCE.full
./
Open-MPI-SC13-BOF.pdf
PrintnFly_Denver_SC13.pdf
easybuild_Python-BoF-SC12-lightning-talk.pdf
sent 12.31M bytes received 72 bytes 24.61M bytes/sec
total size is 12.31M speedup is 1.00
You can then create an incremental backup based on that full copy using the following command:
rsync -avh --delete --link-dest=/home/laytonjb/TEST/SOURCE.full /home/laytonjb/TEST/SOURCE/ /home/laytonjb/TEST/SOURCE.1
Rsync checks which files it needs to copy relative to SOURCE.full when it creates SOURCE.1 and hard-links the unchanged files, creating the incremental copy.
To better use this approach, you would want to implement the backup rotation scheme discussed in the previous section. The script might look something like this:
rm -rf backup.3
mv backup.2 backup.3
mv backup.1 backup.2
mv backup.0 backup.1
# note: a relative --link-dest path is resolved relative to the destination directory (backup.0/), hence the ../
rsync -avh --delete --link-dest=../backup.1/ source_directory/ backup.0/
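you can quickly verify that the hard links worked: an unchanged file should have the same inode number in both copies, and du should show that the incremental copy takes almost no extra space ("somefile" is a hypothetical file name):

# same inode number = hard link = no extra disk space used
ls -i backup.1/somefile backup.0/somefile

# du counts hard-linked data only once, so backup.0 should appear tiny here
du -sh backup.1 backup.0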
Source: http://www.admin-magazine.com/Articles/Using-rsync-for-Backups/%28offset%29/2
have phun & save backups to remote places