Two servers for external use (dissemination of information) with the following setups:
2 AMD Opteron 240 1.4 GHz CPUs
RIOWORKS HDAMB DUAL OPTERON motherboard
4 KINGSTON 512MB PC3200 REG ECC RAM
80GB MAX 7200 UDMA 133 HD
6 200GB WD 7200 8MB HD
ASUS 52X CD-A520 CDROM
1.44mb floppy drive
Antec 4U22ATX550EPS 4u case
2 AMD Palomino MP XP 2000+ 1.67 GHz CPUs
Asus A7M266-D w/LAN Dual DDR
4 Kingston 512mb PC2100 DDR-266MHz REG ECC RAM
Asus CD-A520 52x CDROM
1 41 GB Maxtor 7200rpm ATA100 HD
6 120 GB Maxtor 5400rpm ATA100 HD
1.44mb floppy drive
ATI Expert 2000 Rage 128 32mb
IN-WIN P4 300ATX mid tower case
Enermax P4-430ATX power supply
Desktop and terminal hardware
We have identified at least two kinds of users of our cluster: those who need permanent local processing power and disk space in conjunction with the cluster to speed up processing, and those who need only the cluster's processing power. The former are assigned "desktops", which are essentially high-performance machines, and the latter are assigned dumb "terminals". Our desktops are usually dual- or quad-processor machines (the current high-end CPU being a 1.6 GHz Opteron) with as much as 10 GB of RAM and over 1 TB of local disk space. Our terminals are essentially machines where a user can log in and then run jobs on our farm. In this setup, people may also use laptops as dumb terminals.
Miscellaneous/accessory hardware
We generally use/prefer Viewsonic monitors, Microsoft Intellimouse mice, and Microsoft Natural keyboards. These generally have worked quite reliably for us.
Putting-it-all-together hardware
For visual access to the nodes, we initially used KVM switches with a cheap monitor to connect to and "look" at all the machines. While this was a nice solution, it did not scale. We currently wheel a small monitor around and hook up cables as needed. What we need is a small hand-held monitor that can plug into the back of the PC (operated with a stylus, like a Palm).
For networking, we generally use Netgear and Cisco switches.
Costs
Our vendor is Hard Drives Northwest ( http://www.hdnw.com). For each compute node in our cluster (containing two processors), we paid about $1500-$2000, including taxes. Generally, our goal is to keep the cost of each processor to below $1000 (including housing it).
Software
Operating system: Linux, of course!
The following kernels and distributions are what are being used:
Kernel 2.2.16-22, distribution KRUD 7.0
Kernel 2.4.9-7, distribution KRUD 7.2
Kernel 2.4.18-10, distribution KRUD 7.3
Kernel 2.4.20-13.9, distribution KRUD 9.0
Kernel 2.4.22-1.2188, distribution KRUD 2004-05
These distributions work very well for us since updates are sent to us on CD and there's no reliance on an external network connection for updates. They also seem "cleaner" than the regular Red Hat distributions, and the setup is extremely stable.
We use our own software for parallelising applications but have experimented with PVM and MPI. In my view, the overhead of these pre-packaged programs is too high, and I recommend writing application-specific code for the tasks you perform (though that's just one person's view).
Costs
Linux and most software that runs on Linux are freely copiable.
Set up, configuration, and maintenance
Disk configuration
This section describes disk partitioning strategies. Our goal is to keep the logical (virtual) structure identical across machines. We're finding that fixed physical mappings to this logical structure are not sustainable as hardware and software (the operating system) change. Currently, our strategy is as follows:
farm/cluster machines:
partition 1 on system disk - swap (2 * RAM)
partition 2 on system disk - / (remaining disk space)
partition 1 on additional disk - /maxa (total disk)
servers:
partition 1 on system disk - swap (2 * RAM)
partition 2 on system disk - / (4-8 GB)
partition 3 on system disk - /home (remaining disk space)
partition 1 on additional disk 1 - /maxa (total disk)
partition 1 on additional disk 2 - /maxb (total disk)
partition 1 on additional disk 3 - /maxc (total disk)
partition 1 on additional disk 4 - /maxd (total disk)
partition 1 on additional disk 5 - /maxe (total disk)
partition 1 on additional disk 6 - /maxf (total disk)
partition 1 on additional disk(s) - /maxg (total disk space)
desktops:
partition 1 on system disk - swap (2 * RAM)
partition 2 on system disk - / (4-8 GB)
partition 3 on system disk - /spare (remaining disk space)
partition 1 on additional disk 1 - /maxa (total disk)
partition 1 on additional disk(s) - /maxb (total disk space)
Note that in the case of servers and desktops, maxg and maxb can be a single disk or a conglomeration of disks.
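To make the farm-node scheme above concrete, here is the sort of input one might feed to a reasonably recent sfdisk. This is our illustration, not part of the original setup: the device names are placeholders, and the 2 GB swap assumes a machine with 1 GB of RAM.

```
# sfdisk /dev/hda  -- system disk of a farm node
,2G,S    # partition 1: swap (2 * RAM, assuming 1 GB of RAM here)
,,L      # partition 2: / (remaining disk space)

# sfdisk /dev/hdb  -- additional disk, mounted as /maxa
,,L      # partition 1: the whole disk
```

The server and desktop layouts differ only in splitting the system disk into a fixed-size / and a /home or /spare partition, and in repeating the additional-disk line for each extra disk.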
Package configuration
Install a minimal set of packages for the farm. Users are allowed to configure desktops as they wish, provided the virtual structure described above is kept the same.
Operating system installation and maintenance
Personal cloning strategy
I believe in having a completely distributed system. This means each machine contains a copy of the operating system. Installing the OS on each machine manually is cumbersome. To optimise this process, I first set up and install one machine exactly the way I want it. I then create a gzipped tar file of the entire system and place it on a bootable CD-ROM, which I use to clone each machine in my cluster.
The commands I use to create the tar file are as follows:
tar -czvlps --same-owner --atime-preserve -f /maxa/slash.tgz /
I use a script called go that takes a machine number as its argument and untars the slash.tgz file on the CD-ROM and replaces the hostname and IP address in the appropriate locations. A version of the go script and the input files for it can be accessed at: http://www.ram.org/computing/linux/linux/cluster/. This script will have to be edited based on your cluster design.
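A minimal sketch of what such a go-style script might look like is below. Everything machine-specific here is invented for illustration: the node naming scheme, the 192.168.1.x range, and the target paths are assumptions, not the author's actual layout (his real script is at the URL above).

```shell
#!/bin/sh
# Hypothetical "go"-style clone sketch. Usage: go <machine-number>
N="${1:-7}"                 # machine number (defaults to 7 for demonstration)
NEWHOST="node$N"            # derive the hostname from the machine number
NEWIP="192.168.1.$N"        # derive the IP address the same way
echo "cloning machine $N as $NEWHOST ($NEWIP)"
# The real script would then unpack the image and patch the machine's
# identity, along these lines (commented out so the sketch is safe to run):
#   tar -xzpf /mnt/cdrom/slash.tgz -C /target
#   sed "s/TEMPLATE_HOST/$NEWHOST/g" -i /target/etc/sysconfig/network
#   sed "s/TEMPLATE_IP/$NEWIP/g" -i /target/etc/sysconfig/network-scripts/ifcfg-eth0
```

The point is simply that a machine number is the only input; everything else about the clone is derived from it.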
To make this work, I use Martin Purschke's Custom Rescue Disk ( http://www.phenix.bnl.gov/~purschke/RescueCD/) to create a bootable CD image containing the .tgz file representing the cloned system, as well as the go script and other associated files. This is burned onto a CD-ROM.
There are several documents that describe how to create your own custom bootable CD, including the Linux Bootdisk HOWTO ( http://www.linuxdoc.org/HOWTO/Bootdisk-HOWTO/), which also contains links to other pre-made boot/root disks.
Thus you have a system where all you have to do is insert a CD-ROM, turn on the machine, have a cup of coffee (or a can of Coke), and come back to see a fully cloned machine. You then repeat this process for as many machines as you have. This procedure has worked extremely well for me, and if you have someone else doing the physical work of inserting and removing CD-ROMs, it's ideal. In my system, I specify the IP address by specifying the number of the machine, but this could be completely automated through the use of DHCP.
Rob Fantini ( rob@fantinibakery.com) has contributed modifications of the scripts above that he used for cloning a Mandrake 8.2 system accessible at http://www.ram.org/computing/linux/cluster/fantini_contribution.tgz.
Cloning and maintenance packages
FAI
FAI ( http://www.informatik.uni-koeln.de/fai/) is an automated system to install a Debian GNU/Linux operating system on a PC cluster. You can take one or more virgin PCs, turn on the power and after a few minutes Linux is installed, configured and running on the whole cluster, without any interaction necessary.
SystemImager
SystemImager ( http://systemimager.org) is software that automates Linux installs, software distribution, and production deployment.
DHCP vs. hard-coded IP addresses
If you have DHCP set up, then you don't need to reset the IP address and that part of it can be removed from the go script.
DHCP has the advantage that you don't muck around with IP addresses at all provided the DHCP server is configured appropriately. It has the disadvantage that it relies on a centralised server (and like I said, I tend to distribute things as much as possible). Also, linking hardware ethernet addresses to IP addresses can make it inconvenient if you wish to replace machines or change hostnames routinely.
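For those who do go the DHCP route, tying each node's IP address to its hardware address looks something like the following in dhcpd.conf. The MAC address, host name, and addresses here are invented for illustration:

```
host node1 {
    hardware ethernet 00:50:8B:D4:2C:01;
    fixed-address 192.168.1.1;
    option host-name "node1";
}
```

One such host block per node gives you stable addresses without touching the cloned image, at the cost of the centralised server and MAC bookkeeping described above.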
Known hardware issues
The hardware in general has worked really well for us. Specific issues are listed below:
The AMD dual 1.2 GHz machines run really hot. Two of them in a room increase the temperature significantly. Thus, while they might be okay as desktops, cooling and power consumption become serious considerations when they are used as part of a large cluster. The AMD Palomino configuration described previously seems to work really well, but I definitely recommend getting two fans in the case; this solved all our instability problems.
Known software issues
Some tar executables apparently don't create tar files the way they're supposed to (especially in terms of referencing and de-referencing symbolic links). The solution I've found is to use a tar executable that does, like the one from Red Hat 7.0.
Performing tasks on the cluster
This section is still being developed as the usage on my cluster evolves, but so far we tend to write our own sets of message passing routines to communicate between processes on different machines.
Many applications, particularly in the computational genomics areas, are massively and trivially parallelisable, meaning that perfect distribution can be achieved by spreading tasks equally across the machines (for example, when analysing a whole genome using a technique that operates on a single gene/protein, each processor can work on one gene/protein at a time independent of all the other processors).
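A round-robin distribution of this kind takes only a few lines of shell. In this sketch the node names, the analyse command, and the gene list are placeholders, not our actual tools; the echo merely prints what would be run.

```shell
#!/bin/sh
# Assign the i-th gene to machine (i mod NODES); each node then works
# independently, which is all a trivially parallel task requires.
NODES=4
assign_node() { echo "node$(( $1 % NODES ))"; }
i=0
for gene in gene001 gene002 gene003 gene004 gene005; do
    echo "would run: ssh $(assign_node $i) analyse $gene"
    i=$(( i + 1 ))
done
```

Because each gene/protein is independent, there is no communication between nodes at all, and speedup is essentially linear in the number of processors.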
So far we have not found the need to use a professional queueing system, but obviously that is highly dependent on the type of applications you wish to run.
Rough benchmarks
For the single most important program we run (our ab initio protein folding simulation program), using the Pentium 3 1 GHz processor machine as a frame of reference, on average:
Xeon 1.7 GHz processor is about 22% slower
Athlon 1.2 GHz processor is about 36% faster
Athlon 1.5 GHz processor is about 50% faster
Athlon 1.7 GHz processor is about 63% faster
Xeon 2.4 GHz processor is about 45% faster
Xeon 2.7 GHz processor is about 80% faster
Opteron 1.4 GHz processor is about 70% faster
Opteron 1.6 GHz processor is about 88% faster
Yes, the Athlon 1.5 GHz is faster than the Xeon 1.7 GHz, since the Xeon executes only six instructions per clock (IPC) whereas the Athlon executes nine (you do the math!). This is, however, a highly nonrigorous comparison, since the executables were compiled on the machines themselves (so the quality of the math libraries, for example, will have an impact) and the supporting hardware is different.
Uptimes
These machines are incredibly stable, both in terms of hardware and software, once they have been debugged (usually some machines in a new batch have hardware problems), running constantly under very heavy loads. One common example is given below. Reboots have generally occurred only when a circuit breaker is tripped.
Clientless SSL VPN (WebVPN) allows for limited but valuable secure access to the corporate network from any location. Users can achieve secure browser-based access to corporate resources at anytime. This document provides a straightforward configuration for the Cisco Adaptive Security Appliance (ASA) 5500 series to allow Clientless SSL VPN access to internal network resources.
The SSL VPN technology can be utilized in three ways: Clientless SSL VPN, Thin-Client SSL VPN (Port Forwarding), and SSL VPN Client (SVC Tunnel Mode). Each has its own advantages and unique access to resources.
1. Clientless SSL VPN
A remote client needs only an SSL-enabled web browser to access http- or https-enabled web servers on the corporate LAN. Access is also available to browse for Windows files with the Common Internet File System (CIFS). A good example of http access is the Outlook Web Access (OWA) client.
2. Thin-Client SSL VPN (Port Forwarding)
A remote client must download a small, Java-based applet for secure access of TCP applications that use static port numbers. UDP is not supported. Examples include access to POP3, SMTP, IMAP, SSH, and Telnet. The user needs local administrative privileges because changes are made to files on the local machine. This method of SSL VPN does not work with applications that use dynamic port assignments, for example, several FTP applications.
Refer to Thin-Client SSL VPN (WebVPN) on ASA using ASDM Configuration Example in order to learn more about the Thin-Client SSL VPN.
3. SSL VPN Client (SVC-Tunnel Mode)
The SSL VPN Client downloads a small client to the remote workstation and allows full, secure access to the resources on the internal corporate network. The SVC can be downloaded permanently to the remote station, or it can be removed after the secure session ends.
Clientless SSL VPN can be configured on the Cisco VPN Concentrator 3000 and on specific Cisco IOS® routers with Version 12.4(6)T and higher. Clientless SSL VPN access can also be configured on the Cisco ASA at the Command Line Interface (CLI) or with the Adaptive Security Device Manager (ASDM). Using ASDM makes configuration more straightforward.
Clientless SSL VPN and ASDM must not be enabled on the same ASA interface. It is possible for the two technologies to coexist on the same interface if changes are made to the port numbers. It is highly recommended that ASDM is enabled on the inside interface, so WebVPN can be enabled on the outside interface.
Refer to SSL VPN Client (SVC) on ASA Using ASDM Configuration Example in order to know more details about the SSL VPN Client.
Clientless SSL VPN enables secure access to these resources on the corporate LAN:
OWA/Exchange
HTTP and HTTPS to internal web servers
Windows file access and browsing
Citrix Servers with the Citrix thin client
The Cisco ASA adopts the role of a secure proxy for client computers which can then access pre-selected resources on the corporate LAN.
This document demonstrates a simple configuration with ASDM to enable the use of Clientless SSL VPN on the Cisco ASA. No client configuration is necessary if the client already has an SSL-enabled web browser. Most web browsers already have the capability to invoke SSL/TLS sessions. The resultant Cisco ASA command lines are also shown in this document.
The information in this document is based on these software and hardware versions:
Cisco ASA Software Version 7.2(1)
Cisco ASDM 5.2(1)
Note: Refer to Allowing HTTPS Access for ASDM in order to allow the ASA to be configured by the ASDM.
Cisco ASA 5510 series
The information in this document was created from the devices in a specific lab environment. All the devices used in this document began with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
At this stage, you can enter https://<inside_IP_address> in a web browser to access the ASDM application. Once ASDM has loaded, begin the configuration for WebVPN.
This section contains the information needed to configure the features described within this document.
Note: Use the Command Lookup Tool (registered customers only) to obtain more information about the commands used in this section.
Clientless SSL VPN macro substitutions let you configure users for access to personalized resources that contain the user ID and password or other input parameters. Examples of such resources include bookmark entries, URL lists, and file shares.
Note: For security reasons, password substitutions are disabled for file-access URLs (cifs://).
Note: Also for security reasons, use caution when you introduce password substitutions for web links, especially for non-SSL instances.
These macro substitutions are supported:
CSCO_WEBVPN_USERNAME - SSL VPN user login ID
CSCO_WEBVPN_PASSWORD - SSL VPN user login password
CSCO_WEBVPN_INTERNAL_PASSWORD - SSL VPN user internal resource password
CSCO_WEBVPN_CONNECTION_PROFILE - SSL VPN user login group drop-down, a group alias within the connection profile
CSCO_WEBVPN_MACRO1 - Set through RADIUS/LDAP vendor-specific attribute
CSCO_WEBVPN_MACRO2 - Set through RADIUS/LDAP vendor-specific attribute
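As an illustration (the server and parameter names here are invented, not taken from the Cisco document), a personalized bookmark might embed the macros directly in the URL; the security appliance substitutes the logged-in user's values when the bookmark is used:

```
https://intranet.example.com/login.asp?user=CSCO_WEBVPN_USERNAME&pwd=CSCO_WEBVPN_PASSWORD
```

Per the notes above, avoid the password macros in cifs:// URLs (where they are disabled) and in non-SSL links.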
In order to know more about macro substitutions, refer to Clientless SSL VPN Macro Substitutions.
Use this section to confirm that your configuration works properly.
Establish a connection to your ASA device from an outside client to test this: https://ASA_outside_IP_Address
The client receives a Cisco WebVPN page that allows access to the corporate LAN in a secure fashion. The client is allowed only the access that is listed in the newly created group policy.
Authentication: A simple login and password was created on the ASA for this lab proof of concept. If single, seamless sign-on to a domain is preferred for the WebVPN users, refer to this URL:
ASA with WebVPN and Single Sign-on using ASDM and NTLMv1 Configuration Example
This section provides information you can use to troubleshoot your configuration.
Note: Do not interrupt the Copy File to Server command or navigate to a different screen while the copy process is in progress. If the operation is interrupted, it can cause an incomplete file to be saved on the server.
Note: Users can upload and download new files with the WebVPN client, but the user is not allowed to overwrite existing files in CIFS over WebVPN with the Copy File to Server command. When the user attempts to replace a file on the server, the user receives this message: "Unable to add the file."
Follow these instructions to troubleshoot your configuration.
In ASDM, choose Monitoring > Logging > Real-time Log Viewer > View. When a client connects to the ASA, note the establishment and termination of SSL and TLS sessions in the real-time logs.
In ASDM, choose Monitoring > VPN > VPN Statistics > Sessions. Look for the new WebVPN session. Be sure to choose the WebVPN filter and click Filter. If a problem occurs, temporarily bypass the ASA device to ensure that clients can access the desired network resources. Review the configuration steps listed in this document.
The Output Interpreter Tool (registered customers only) (OIT) supports certain show commands. Use the OIT to view an analysis of show command output. Note: Refer to Important Information on Debug Commands before the use of debug commands.
show webvpn ?—There are many show commands associated with WebVPN. In order to see the use of show commands in detail, refer to the command reference section of the Cisco Security Appliance.
debug webvpn ?—The use of debug commands can adversely impact the ASA. In order to see the use of debug commands in more detail, refer to the command reference section of the Cisco Security Appliance.
Problem:
Only three WebVPN clients can connect to the ASA/PIX; the connection for the fourth client fails.
Solution:
In most cases, this issue is related to a simultaneous login setting within the group policy.
Use this illustration to configure the desired number of simultaneous logins. In this example, the desired value was 20.
In the administration console, you can enable restrictions so that Postfix does not accept messages from incoming SMTP clients that exhibit non-standard or otherwise disapproved behavior. These restrictions provide some protection against ill-behaved spam senders. By default, SMTP protocol violators (that is, clients that do not greet with a fully qualified domain name) are restricted. DNS-based restrictions are also available.
Important: Understand the implications of these restrictions before you implement them. You may want to receive mail from people outside of your mail system, but those mail systems may be poorly implemented. You may have to compromise on these checks to accommodate them.
The MTA attempts to deliver a message, and if a Zimbra user's mailbox exceeds its quota, the Zimbra mailbox server temporarily sends the message to the deferred queue, to be delivered when the mailbox has space. The MTA server's bounce queue lifetime is set to five days. The deferred queue tries to deliver a message until this bounce queue lifetime is reached, after which the message is bounced back to the sender. You can change the default through the CLI with the zmlocalconfig bounce_queue_lifetime parameter.
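For instance, the lifetime could be inspected and lowered to three days with zmlocalconfig, run as the zimbra user. This is a sketch; verify the value syntax and the restart command against your ZCS release before relying on it:

```
zmlocalconfig bounce_queue_lifetime        # show the current value
zmlocalconfig -e bounce_queue_lifetime=3d  # e.g. lower it to three days
zmmtactl restart                           # restart the MTA to apply the change
```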
Clam AntiVirus software is bundled with the Zimbra Collaboration Suite as the virus protection engine. The Clam anti-virus software is configured to block encrypted archives, to send notification to administrators when a virus has been found, and to send notification to recipients alerting them that a mail message with a virus was not delivered.
Zimbra utilizes SpamAssassin to control spam. SpamAssassin uses predefined rules as well as a Bayes database to give each message a numerical score. Zimbra converts this score to a percentage, treating a SpamAssassin score of 20 as 100%. Any message tagged between 33% and 75% is considered spam and delivered to the user's Junk folder. Messages tagged above 75% are always considered spam and are discarded.
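The score-to-percentage mapping works out as in this small sketch. The helper function is ours, just to make the arithmetic concrete (integer arithmetic for illustration); the thresholds are the ones quoted above.

```shell
#!/bin/sh
# spaminess: convert a SpamAssassin score to Zimbra's percentage,
# where a score of 20 counts as 100%.
spaminess() { echo $(( $1 * 100 / 20 )); }
spaminess 7     # 35 -> between 33% and 75%, so tagged and sent to Junk
spaminess 16    # 80 -> above 75%, so discarded
```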
How well the anti-spam filter works depends on recognizing what is considered spam or not considered spam (ham). The SpamAssassin filter can learn what is spam and what is not spam from messages that users specifically mark as Junk or Not Junk by sending them to their Junk (Spam) folder in the web client or via Outlook for ZCO and IMAP. A copy of these marked messages is sent to the appropriate spam training mailbox. The ZCS spam training tool, zmtrainsa, is configured to automatically retrieve these messages and train the spam filter.
In order to correctly train the spam/ham filters, when ZCS is installed, spam/ham cleanup is configured on only the first MTA. The zmtrainsa script is enabled through a crontab job to feed mail that has been classified as spam or as non-spam to the SpamAssassin application, allowing SpamAssassin to ‘learn’ what signs are likely to mean spam or ham. The zmtrainsa script empties these mailboxes each day.
Initially, you may want to train the spam filter manually to quickly build a database of spam and non-spam tokens: words or short character sequences that are commonly found in spam or ham. To do this, you can manually forward messages as message/rfc822 attachments to the spam and non-spam mailboxes. When zmtrainsa runs, these messages are used to teach the spam filter. Make sure you add a large enough sampling of messages to these mailboxes: to get accurate scores, at least 200 known spam messages and 200 known ham messages must be identified.
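When run by hand, zmtrainsa can also be pointed at a specific mailbox. A sketch follows; the account names are invented, and the exact argument forms may vary across ZCS versions:

```
# run as the zimbra user
zmtrainsa spam.training@example.com spam
zmtrainsa ham.training@example.com ham
```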
The policy daemon runs after you set the bits in steps 1 and 3 above and then restart Postfix. The postfix_policy_time_limit key is needed because the Postfix spawn(8) daemon by default kills its child process after 1000 seconds, which is too short for a policy daemon that may run as long as an SMTP client is connected to an SMTP process.
If you have your Zimbra installation on its own logical volume, you can use this script:
#!/bin/bash
time=`date +%Y-%m-%d_%H-%M-%S`
# Modify the following variables according to your installation
#########################################
# backup_dir - directory to backup to
backup_dir=/path/to/backups/$time
# vol_group - the Volume Group that contains $zimbra_vol
vol_group=PUT_VOL_GROUPNAME_HERE
# zimbra_vol - the Logical Volume that contains /opt/zimbra
zimbra_vol=PUT_ZIMBRA_VOLNAME_HERE
# zimbra_vol_fs - the file system type (ext3, xfs, ...) in /opt/zimbra
zimbra_vol_fs=PUT_ZIMBRA_FILE_SYSTEM_TYPE_HERE
# lvcreate and lvremove commands path -
lvcreate_cmd=`which lvcreate`
lvremove_cmd=`which lvremove`
# Do not change anything beyond this point
#########################################
# Test for an interactive shell
if [[ $- != *i* ]]
then say() { echo -e $1; }
# Colors, yo!
GREEN="\e[1;32m"
RED="\e[1;31m"
CYAN="\e[1;36m"
PURPLE="\e[1;35m"
else say() { true; } # Do nothing
fi
# Output date
say $GREEN"Backup started at "$RED"`date`"$GREEN"."
# Stop the Zimbra services
say $CYAN"Stopping the Zimbra services..."
say $PURPLE" This may take several minutes."
/etc/init.d/zimbra stop
# Create a logical volume called ZimbraBackup
say $GREEN"Creating a LV called ZimbraBackup:"$PURPLE
# NOTE: the snapshot only needs room for blocks that change during the backup;
# increase -L if your system is busy
$lvcreate_cmd -L1000M -s -n ZimbraBackup /dev/$vol_group/$zimbra_vol
# Create a mountpoint to mount the logical volume to
say $GREEN"Creating a mountpoint for the LV..."
# WARNING: this is insecure!
mkdir -p /tmp/ZimbraBackup
# Mount the logical volume to the mountpoint
say $GREEN"Mounting the LV..."
# WARNING: remove nouuid option if the filesystem is not formatted as XFS !!!
mount -t $zimbra_vol_fs -o nouuid,ro /dev/$vol_group/ZimbraBackup /tmp/ZimbraBackup/
# Start the Zimbra services
say $CYAN"Starting the Zimbra services..."
# WARNING: it's safer not to put this command in background
/etc/init.d/zimbra start &
# For testing only
#say $RED"Press Enter to continue...\e[0m"
#read input
# Create the current backup
say $GREEN"Creating the backup directory and backup..."
mkdir -p $backup_dir
tar zcvf $backup_dir/zimbra.backup.tar.gz /tmp/ZimbraBackup/zimbra/ > /dev/null 2>&1
# Unmount /tmp/ZimbraBackup and remove the logical volume
say $GREEN"Unmounting and removing the LV."$PURPLE
umount /tmp/ZimbraBackup/
$lvremove_cmd --force /dev/$vol_group/ZimbraBackup
# Done!
say $GREEN"Zimbra backed up to "$CYAN$backup_dir$GREEN"!"
say $GREEN"Backup ended at "$RED"`date`"$GREEN".\e[0m"
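A script like the one above is typically scheduled from root's crontab; for example (the script path, log file, and timing are placeholders):

```
# root's crontab (crontab -e): run the LVM backup nightly at 02:00
0 2 * * * /root/bin/zimbra-lvm-backup.sh >> /var/log/zimbra-backup.log 2>&1
```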
ZCS Tools currently contains a Cold Backup script written in Perl. It also supports backup rotation. This script does not use LVM.
Currently: zimbraColdBackup-Ver0.02beta
#!/usr/bin/perl
use strict;
use warnings;
use POSIX;
use IO::Scalar; # for building up output on a string
use Proc::ProcessTable; # for killing processes
use File::Path; # for removing directories
use File::Rsync; # for syncing using rsync
use Mail::Mailer; # for sending email
#############################################################################
# Please make changes below to suit your system and requirements
# absolute path to 'rsync' on your system
my $rsync = '/usr/bin/rsync';
# absolute path to zimbra directory
my $zimbra = '/opt/zimbra';
# absolute path to backup directory. ensure that it exists!
my $backup_dir = '/backup';
# do you want to rotate backups?
my $rotate_bak = 1; # 1 = yes, 0 = no
# if yes, after how many days?
# make sure that you don't specify '0'. Specifying zero will delete even
# the latest backup - that is, the backup taken today - and you will end
# up with no data!
my $rotate_days = 7;
# do you want to send the backups to a remote location? (using rsync)
my $send_remote = 0; # 1 = yes, 0 = no
# if you would like to use Rsync to send to remote location:
# please enter the destination server below
# (before using the script, make sure that you have password-less and
# passphrase-less SSH login set up using public/private key cryptography;
# this script will provide neither the SSH password nor the passphrase)
my $ssh_server = 'host.domain.com'; # SSH server IP or hostname
my $remote_path = '/backups'; # path on remote server to send backup
# Finally:
# Do you want to have the results of backup emailed?
my $send_result = 1; # 1 = yes, 0 = no
# if yes, to whom should it be emailed?
my $to_email = 'smile@sis.net.in';
# CC email (optional: you can leave this empty)
my $cc_email = '';
# BCC email (optional: you can leave this empty)
my $bcc_email = '';
# Sender / From email (it will look like the email arrived from this person)
my $from_email = 'root@localhost';
# That's it!
# Don't edit below this line unless you know what you're doing
#############################################################################
my $zimbra_user = 'zimbra';
my $prog_name = $0;
# properties of this program
# name of program
my $progname = "zimbraColdBackup";
# version number
my $version = "0.2Beta";
# revision number (independent of version number)
my $revision = "30";
# license under which distributed
my $license = qq(
#############################################################################
# Program Name $progname #
# Program Version $version #
# Program Revision $revision #
# #
# This script can be used to backup Zimbra Collaboration Suite 4.0.0 GA #
# #
# Most recent version of this script can be downloaded from: #
# http://sourceforge.net/projects/zcstools/ #
# #
# Copyright (C) 2006 Chintan Zaveri #
# E-mail: smile\@sis.net.in #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License version 2, as #
# published by the Free Software Foundation. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License along #
# with this program; if not, write to the Free Software Foundation, Inc., #
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. #
#############################################################################
);
# usage text
my $usage = qq(
Usage:
zimbraColdBackup OPTION
help|usage|?|--help|--usage
All of these options, or providing no options will print this text
about usage.
overview|--overview
What this script does.
install|installation|--install|--installation
These options will display a text on installation of this script.
confirm|--confirm
These options will run the backup procedure.
version|--version
These options will print the Name, Version and Revision Number for
this script.
license|licence|lisense|lisence|--license|--licence|--lisense|--lisence
These options will print the License under which this program is
distributed.
);
# overview text
my $overview = qq(
Overview:
This script can be used to take off-line backup of Zimbra Collaboration Suite.
The following is the series of actions undertaken by this script:
1. Stop Zimbra
2. Backup Zimbra in the specified local directory using Rsync
3. Start Zimbra
Optionally, if you specified, this script will also do the following:
1. Rotate the backups
2. Send the backup to another system using Rsync
3. Email the results of backup
);
# installation text
my $installation = qq(
Installation:
It is fairly easy to install this script. The installation requires you to do
the following:
1. Install all required Perl modules
2. Configure this script
3. Run it once - Test it!
4. Schedule it using crontab
1. Install all required Perl modules
The best way to do this is by running the script. Once you run the script, you
would receive an error message similar to the following:
Can't locate Mail/Mailer.pm in \@INC (\@INC contains:
/usr/lib/perl5/5.8.5/i386-linux-thread-multi /usr/lib/perl5/5.8.5
/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.4/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.3/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.2/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.1/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi
/usr/lib/perl5/site_perl/5.8.5 /usr/lib/perl5/site_perl/5.8.4
/usr/lib/perl5/site_perl/5.8.3 /usr/lib/perl5/site_perl/5.8.2
/usr/lib/perl5/site_perl/5.8.1 /usr/lib/perl5/site_perl/5.8.0
/usr/lib/perl5/site_perl
/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.4/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.3/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.2/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.1/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.0/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.8.5 /usr/lib/perl5/vendor_perl/5.8.4
/usr/lib/perl5/vendor_perl/5.8.3 /usr/lib/perl5/vendor_perl/5.8.2
/usr/lib/perl5/vendor_perl/5.8.1 /usr/lib/perl5/vendor_perl/5.8.0
/usr/lib/perl5/vendor_perl .) at ./ZimbraColdBackup.pl line 9.
BEGIN failed--compilation aborted at ./zimbraColdBackup.pl line 9.
In the first line you can see that it is unable to locate Mail/Mailer.pm
To install this module, just type the following:
perl -MCPAN -e 'install Mail::Mailer'
This command will install the Mail::Mailer module.
Remember, the "/" must be converted to "::" when providing the command and
the ".pm" must be removed.
You may see several such errors, one per missing module. Just install each
relevant module in the same way.
2. Configure the script
Once the modules are installed, you need to open this script in a text editor,
such as "vi" or "nano". Please enter correct values against the variables at
the top of the script. Once you open it in a text editor, it will become
self-explanatory.
3. Run it once - Test it!
After you have configured the script, just run: "./zimbraColdBackup confirm". It
should run and do as promised. If it doesn't, first make sure it has been
configured properly; if it still fails, let me know.
4. Schedule it using crontab
Create a cron job using the command 'crontab -e' to run the script at fixed
intervals.
);
# parse the arguments
# if there are no arguments print usage and die
die $usage, @_ if ( $#ARGV + 1 < 1 );
# what was the argument? (ignore any additional arguments)
my $option = $ARGV[0];
# select action
if ( ( $option =~ /^(--)?help$/ ) ||
( $option =~ /^(--)?usage$/ ) ||
( $option =~ /^\?$/)
) {
die $usage, @_;
}
elsif ( $option =~ /^(--)?overview$/ ) {
die $overview, @_;
}
elsif ( $option =~ /^(--)?install(ation)?$/ ) {
die $installation, @_;
}
elsif ( $option =~ /^(--)?confirm$/ ) {
1; # go ahead and run the script
}
elsif ( $option =~ /^(--)?version$/) {
die $progname, " Ver. ", $version, " Rev. ", $revision, "\n", @_;
}
elsif ( $option =~ /^(--)?li[sc]en[sc]e$/ ) {
die $license, @_;
}
else {
die "Invalid option: Please try again", $usage, @_;
}
# going ahead and running the script :-)
# check inputs
if ( ( $rsync eq "" ) || ( $rsync !~ /^\// ) ) {
die "Please provide an absolute path to 'rsync'", "\n", @_;
}
if (! ( -d $zimbra ) ) {
die "Please provide an absolute path to 'zimbra' directory", "\n", @_;
}
if (! ( -d $backup_dir ) ) {
die "Please provide an absolute path to backup directory", "\n", @_;
}
if ( $send_remote =~ /\D/ ) {
die "Please enter either '1' or '0' in \$send_remote", "\n", @_;
}
if ( $send_remote ) {
# check ssh params
if ( $ssh_server eq "" ) {
die "Please enter valid SSH server to rsync to.", "\n", @_;
}
}
if ( $rotate_bak =~ /\D/ ) {
die "Please enter either '1' or '0' in \$rotate_bak", "\n", @_;
}
if ( $rotate_days =~ /\D/ ) {
die "Please enter a valid number of days in \$rotate_days", "\n", @_;
}
if ( $send_result =~ /\D/ ) {
die "Please enter either '1' or '0' in \$send_result", "\n", @_;
}
if ( $send_result ) {
if ( ! $to_email ) {
die "Please enter valid email in \$to_email", "\n", @_;
}
if ( ! $from_email ) {
die "Please enter valid email in \$from_email", "\n", @_;
}
}
# if you reach here, everything is valid, please proceed
my $result = '';
my $res_fh = IO::Scalar->new ( \$result );
# now whatever output we want to build up, we will print to $res_fh
print $res_fh "Date: ",
POSIX::strftime ( '%m-%d-%Y, %A, %H:%M', localtime ( time ) ),
" Hours\n";
# current day, date, month, time, ...
my $current_time = POSIX::strftime ( '%m-%d-%Y-%A-%H-%M', localtime ( time ) );
my $since_epoch = time ( ); # seconds since epoch
my $bak_dir = $backup_dir; # we want to use the backup dir path later
$backup_dir .= '/'.$current_time.'-'.$since_epoch;
# Stop Zimbra
my $zmstopstat = system ( "su - zimbra -c '$zimbra/bin/zmcontrol stop'" );
if ( $zmstopstat ) {
print $res_fh "Stopping Zimbra: Some Problem Occurred. Please check.\n";
}
else {
print $res_fh "Stopping Zimbra: Success\n";
}
# Kill all lingering Zimbra processes
my $zimbra_uid = getpwnam ( $zimbra_user );
my $process_table = Proc::ProcessTable->new;
# Gracefully kill lingering processes: kill -15, sleep, kill -9
foreach my $process ( @{$process_table->table} ) {
if ( ( $process->uid eq $zimbra_uid ) ||
( ( $process->cmndline =~ /$zimbra_user/ ) &&
( $process->cmndline !~ /$prog_name/ ) ) )
{
kill -15, $process->pid; # thanks, merlyn
sleep 10; # not sure if there'll be buffering.
kill -9, $process->pid;
}
}
# Backup Zimbra using "rsync"
my $rsync_obj = File::Rsync->new ( {
'rsync-path' => $rsync,
'archive' => 1,
'recursive' => 1,
'links' => 1,
'hard-links' => 1,
'keep-dirlinks' => 1,
'perms' => 1,
'owner' => 1,
'group' => 1,
'devices' => 1,
'times' => 1
} );
my $zmrsyncstat = $rsync_obj->exec ( {
src => "$zimbra/",
dest => "$backup_dir"
} );
if ( $zmrsyncstat ) {
print $res_fh "Rsync Zimbra: Successfully created $backup_dir\n";
}
else {
print $res_fh "Rsync Zimbra: Some Problem Occurred. Please check.\n";
}
# Now that backup is done, start Zimbra
my $zmstartstat = system ( "su - zimbra -c '$zimbra/bin/zmcontrol start'" );
if ( $zmstartstat ) {
print $res_fh "Starting Zimbra: Some Problem Occurred. Please check.\n";
}
else {
print $res_fh "Starting Zimbra: Success\n";
}
print $res_fh "Zimbra was off-line for: ", time ( ) - $since_epoch, " seconds\n";
# Rotate backups
if ( $rotate_bak ) { # should we rotate backups?
# get a list of all files from the backup directory
opendir ( DIR, $bak_dir ) or die "can't opendir $bak_dir: $!";
while ( defined ( my $filename = readdir ( DIR ) ) ) {
# if $filename is . or .. do not remove it
if ( $filename !~ /\./ ) { # if this isn't there, you're dead
# if $filename is older than $rotation_days then delete it
my @filename_parts = split ( "-", $filename ); # to get epoch sec
# allowed age of backups
my $allowed_age = $since_epoch - ( 60 * 60 * 24 * $rotate_days );
# if the last part of $filename < allowed age
if ( ( $filename_parts[6] < $allowed_age ) && ($filename ne "") ) {
# delete it
my $zmrmtreestat = rmtree ( "$bak_dir/$filename" );
# print the status of removing
if ( $zmrmtreestat ) {
print $res_fh "Rotating Backup: Removed $bak_dir/$filename\n";
}
else {
print $res_fh "Rotating Backup: Can't delete $filename\n";
}
}
}
}
closedir ( DIR );
}
# Send to remote system
if ( $send_remote ) {
# Backup Zimbra using "rsync"
my $rem_rsync_obj = File::Rsync->new ( {
'rsync-path' => $rsync,
'archive' => 1,
'recursive' => 1,
'links' => 1,
'hard-links' => 1,
'keep-dirlinks' => 1,
'perms' => 1,
'owner' => 1,
'group' => 1,
'devices' => 1,
'times' => 1
} );
my $destination =
$ssh_server.':'.$remote_path.'/'.$current_time.'-'.$since_epoch;
my $zmremrsyncstat = $rem_rsync_obj->exec ( {
src => "$backup_dir/",
dest => "$destination"
} );
if ( $zmremrsyncstat ) {
print $res_fh "Remote Rsync: Successfully created $destination\n";
}
else {
print $res_fh "Remote Rsync: Some Problem Occurred. Please check.\n";
}
}
print $res_fh "The backup took: ", time ( ) - $since_epoch, " seconds\n";
# Send email report
if ( $send_result ) {
# send results by email
my $mailer = Mail::Mailer->new ( "sendmail" );
$mailer->open( {
'From' => $from_email,
'To' => $to_email,
'Cc' => $cc_email,
'Bcc' => $bcc_email,
'Subject' => 'Result of zimbraColdBackup'
} ) or die "Can't open: $!\n";
print $mailer $result;
$mailer->close();
}
# print results on std output
print $result;
NB: This script uses Mail::Mailer to send notifications via sendmail. Make sure that sendmail is installed on your system but not set to start at boot; otherwise it will conflict with the Zimbra MTA and prevent it from starting.
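A quick, guarded way to check both conditions is sketched below; the `/etc/rc*.d` glob is an assumption for SysV-init layouts, so adjust for your distro.

```shell
#!/bin/sh
# Report whether a sendmail binary is present and whether a sendmail init
# script is registered to start at boot (which would clash with the Zimbra
# MTA). The rc-directory glob below is an assumption for SysV-style systems.
if command -v sendmail >/dev/null 2>&1; then
    sendmail_found=yes
else
    sendmail_found=no
fi
boot_enabled=no
for rc in /etc/rc*.d/S??sendmail; do
    if [ -e "$rc" ]; then
        boot_enabled=yes
    fi
done
echo "sendmail binary present: $sendmail_found"
echo "sendmail starts at boot: $boot_enabled"
```

On a correctly prepared host the second line should report "no"; if it reports "yes", disable the sendmail service before scheduling this backup script.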
A Simple Shell Script Method
The following script can be called from the command line or crontab, and relies only on rsync, tar, and a scriptable FTP client. I used ncftp, but you can use another client and adjust the syntax accordingly. This script was written and tested on Ubuntu 6.06 LTS Server; I cannot confirm whether it needs modification to work on other distros, and would appreciate any feedback that helps make it more general.
#!/bin/bash
# Zimbra Backup Script
# Requires ncftp to run
# This script is intended to run from the crontab as root
# Date outputs and su vs sudo corrections by other contributors, thanks, sorry I don't have names to attribute!
# Free to use and free of any warranty! Daniel W. Martin, 5 Dec 2008
# Outputs the time the backup started, for log/tracking purposes
echo Time backup started = $(date +%T)
before="$(date +%s)"
# Live sync before stopping Zimbra to minimize sync time with the services down
# Comment out the following line if you want to try single cold-sync only
rsync -avHK --delete /opt/zimbra/ /backup/zimbra
# which is the same as: /opt/zimbra /backup
# Including the --delete option gets rid of files in the destination folder that don't exist at the source;
# this prevents logfiles and other extraneous bloat from building up over time.
# Now we need to shut down Zimbra to rsync any files that were/are locked
# whilst backing up when the server was up and running.
before2="$(date +%s)"
# Stop Zimbra Services
su - zimbra -c "/opt/zimbra/bin/zmcontrol stop"
sleep 15
# Kill any orphaned Zimbra processes
kill -9 `ps -u zimbra -o "pid="`
# Only enable the following command if you need all Zimbra user owned
# processes to be killed before syncing
# ps auxww | awk '{print $1" "$2}' | grep zimbra | kill -9 `awk '{print $2}'`
# Sync to backup directory
rsync -avHK --delete /opt/zimbra/ /backup/zimbra
# Restart Zimbra Services
su - zimbra -c "/opt/zimbra/bin/zmcontrol start"
# Calculates and outputs amount of time the server was down for
after="$(date +%s)"
elapsed="$(expr $after - $before2)"
hours=$(($elapsed / 3600))
elapsed=$(($elapsed - $hours * 3600))
minutes=$(($elapsed / 60))
seconds=$(($elapsed - $minutes * 60))
echo Server was down for: "$hours hours $minutes minutes $seconds seconds"
# Create a txt file in the backup directory containing the current Zimbra server
# version. Handy for knowing what version of Zimbra a backup can be restored to.
su - zimbra -c "zmcontrol -v > /backup/zimbra/conf/zimbra_version.txt"
# or examine your /opt/zimbra/.install_history
# Display Zimbra services status
echo Displaying Zimbra services status...
su - zimbra -c "/opt/zimbra/bin/zmcontrol status"
# Create archive of backed-up directory for offsite transfer
# cd /backup/zimbra
tar -zcvf /tmp/mail.backup.tgz -C /backup/zimbra .
# Transfer file to backup server
ncftpput -u <user> -p <password> <backup-host> / /tmp/mail.backup.tgz
# Outputs the time the backup finished
echo Time backup finished = $(date +%T)
# Calculates and outputs total time taken
after="$(date +%s)"
elapsed="$(expr $after - $before)"
hours=$(($elapsed / 3600))
elapsed=$(($elapsed - $hours * 3600))
minutes=$(($elapsed / 60))
seconds=$(($elapsed - $minutes * 60))
echo Time taken: "$hours hours $minutes minutes $seconds seconds"
One further note: I have observed some odd behavior in this and other scripts: when run from the command line they work flawlessly, but when run from crontab the script may get ahead of itself and, for example, try to FTP the file before tar has finished creating it, resulting in a useless backup. Loading the script into crontab with parameters that create a log file, for example
. /etc/zimbra.backup > /temp/zbackup.log 2>&1
seems to solve this problem (while creating the log, or showing the output on screen, the script seems to follow the sequence more carefully), while also giving you a line-by-line record of the backup procedure. In my installation, with just over 3 GB backed up, the logfile is 2.5 MB and is overwritten each night. NB: You may find that using su on your operating system has problems and that some services don't start or stop correctly. If that is the case, use 'sudo -u zimbra' in the following format for the commands:
sudo -u zimbra zmcontrol start
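For scheduling, a root crontab entry matching the logging approach described above might look like the following; the 02:00 run time is an assumption, and the script and log paths follow the earlier example.

```shell
# Edit root's crontab with: crontab -e
# Run the backup nightly at 02:00, logging all output (overwritten each run)
0 2 * * * . /etc/zimbra.backup > /temp/zbackup.log 2>&1
```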
A Simple Shell Script Method like above, but with rsync over ssh
#!/bin/bash
# Zimbra Backup Script
# Requires that you have ssh-keys: https://help.ubuntu.com/community/SSHHowto#Public%20key%20authentication
# This script is intended to run from the crontab as root
# Date outputs and su vs sudo corrections by other contributors, thanks, sorry I don't have names to attribute!
# Free to use and free of any warranty! Daniel W. Martin, 5 Dec 2008
## Adapted for rsync over ssh instead of ncftp by Ace Suares, 24 April 2009 (Ubuntu 6.06 LTS)
# the destination directory for local backups
DESTLOCAL=/backup/backup-zimbra
# the destination for remote backups
DESTREMOTE="yourserver:/backup/backup-zimbra"
# Outputs the time the backup started, for log/tracking purposes
echo Time backup started = $(date +%T)
before="$(date +%s)"
# a backup dir on the local machine. This will fill up over time!
BACKUPDIR=$DESTLOCAL/$(date +%F-%H-%M-%S)
# Live sync before stopping Zimbra to minimize sync time with the services down
# Comment out the following line if you want to try single cold-sync only
rsync -avHK --delete --backup --backup-dir=$BACKUPDIR /opt/zimbra/ $DESTLOCAL/zimbra
# which is the same as: /opt/zimbra /backup
# Including --delete option gets rid of files in the dest folder that don't exist at the src
# this prevents logfile/extraneous bloat from building up overtime.
# the backup dir will hold all files that changed or were deleted during the previous backup
# Now we need to shut down Zimbra to rsync any files that were/are locked
# whilst backing up when the server was up and running.
before2="$(date +%s)"
# Stop Zimbra Services
/etc/init.d/zimbra stop
#su - zimbra -c"/opt/zimbra/bin/zmcontrol stop"
#sleep 15
# Kill any orphaned Zimbra processes
#kill -9 `ps -u zimbra -o "pid="`
pkill -9 -u zimbra
# Only enable the following command if you need all Zimbra user owned
# processes to be killed before syncing
# ps auxww | awk '{print $1" "$2}' | grep zimbra | kill -9 `awk '{print $2}'`
# Sync to backup directory
rsync -avHK --delete --backup --backup-dir=$BACKUPDIR /opt/zimbra/ $DESTLOCAL/zimbra
# Restart Zimbra Services
#su - zimbra -c "/opt/zimbra/bin/zmcontrol start"
/etc/init.d/zimbra start
# Calculates and outputs amount of time the server was down for
after="$(date +%s)"
elapsed="$(expr $after - $before2)"
hours=$(($elapsed / 3600))
elapsed=$(($elapsed - $hours * 3600))
minutes=$(($elapsed / 60))
seconds=$(($elapsed - $minutes * 60))
echo SERVER WAS DOWN FOR: "$hours hours $minutes minutes $seconds seconds"
# Create a txt file in the backup directory containing the current Zimbra server
# version. Handy for knowing what version of Zimbra a backup can be restored to.
# su - zimbra -c "zmcontrol -v > $DESTLOCAL/zimbra/conf/zimbra_version.txt"
# or examine your /opt/zimbra/.install_history
# Display Zimbra services status
echo Displaying Zimbra services status...
su - zimbra -c "/opt/zimbra/bin/zmcontrol status"
# /etc/init.d/zimbra status # seems not to work
# backup the backup dir (but not the backups of the backups) to remote
rsync -essh -avHK --delete-during $DESTLOCAL/zimbra $DESTREMOTE
# Outputs the time the backup finished
echo Time backup finished = $(date +%T)
# Calculates and outputs total time taken
after="$(date +%s)"
elapsed="$(expr $after - $before)"
hours=$(($elapsed / 3600))
elapsed=$(($elapsed - $hours * 3600))
minutes=$(($elapsed / 60))
seconds=$(($elapsed - $minutes * 60))
echo Time taken: "$hours hours $minutes minutes $seconds seconds"
# end
Backup Shell Script with Compressed & Encrypted Archives
This script has the following features:
Easy to use!
Backups with or without strong encryption!
Compressed archives
Optional Off-site copying of archives after creation
MD5 checksums for integrity checks of archives
Weekly backup rotation - 1 full & 6 diffs per rotation
Email report on Full backup
Email notifications on errors
Backup file lists (attached to weekly full backup report)
Installer & Setup option for quick deployment (install needed software and setup env e.g. ssh pki auth and cronjobs)
You can follow development and find the latest usage information in the Zimbra forum under the thread "Yet Another Backup Script Community Version". To download the current version of the script, go to that forum thread; you will find a link to the file in the first post. If you need any help, you can contact me in the forum; I would be happy to help!
#!/bin/bash
## *** Info ***
# USAGE: -h or --help for help & usage.
# -f or --full for Full backup.
# -d or --diff for Diff backup.
# -V or --version for version info.
# --INSTALL for script install and setup.
#
# This is a backup script for the FOSS version of Zimbra mail server.
# The script is free and open source and for use by anyone who can find a use for it.
#
# THIS SCRIPT IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDERS AND/OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
# AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
# THE USE OF THIS DOCUMENT, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# CONTRIBUTORS:
# heinzg of osoffice.de (original author)
# Quentin Hartman of Concentric Sky, qhartman@concentricsky.com (refactor and cleanup)
#
# What this script does:
# 1. Makes daily off-line backups, with service downtime of under ~2 minutes.
# 2. Weekly backup cycle - 1 full backup & 6 diffs.
# 3. Predefined archive sizes, for writing backups to CD or DVD media...
# 4. Backup archive compression.
# 5. Backup archive encryption.
# 6. Backup archive integrity checks and md5 checksums creation.
# 7. Automated DR - Off-site copy of backup archives via ssh.
# 8. Install and setup function for needed software (Ubuntu Systems only)
# 9. Weekly eMail report & eMail on error - including CC address.
#
# This script makes use of following tools:
# apt-get, cron, dar, dpkg, mailx, md5sum, rsync, ssh, uuencode, wget, zimbra mta.
#
# We have opted to use a pre-sync directory to save on down time, but this
# requires a large amount of additional disk space.
# Hard drives are cheap today, though!
#
# What is still to come or needs work on:
# 1. Recovery option
# 2. Better documentation
##------- CONFIG -------#
# Edit this part of the script to fit your needs.
#--- Directories ---#
# Please add the trailing "/" to directories!
ZM_HOME=/opt/zimbra/ # where zimbra lives
SYNC_DIR=/tmp/fakebackup/ # intermediate dir for hot/cold syncs; must have at least as much free space as ZM_HOME consumes
ARCHIVEDIR=/Backup/zimbra_dars/ # where to store final backups
TO_MEDIA_DIR=/Backup/burn/ # where to put fulls for archiving to media
#--- PROGRAM OPTIONS ---#
RSYNC_OPTS="-aHK --delete --exclude=*.pid" # leave these unless you are sure you need something else
#--- ARCHIVE NAMES ---#
BACKUPNAME="Zimbra_Backup" # what you want your backups called
FULL_PREFIX="FULL" # prefix used for full backups
DIFF_PREFIX="DIFF" # prefix used for differential backups
BACKUPDATE=`date +%d-%B-%Y` # date format used in archive names
BACKUPWEEK=`date +%W` # Week prefix used for backup weekly rotation and naming
#--- ARCHIVE SIZE ---#
ARCHIVESIZE="4395M" # storage media size, for full-backup archiving
COMPRESS="9" # valid answers are 1 - 9 ( 9 = best )
#--- Encryption Options ---#
CRYPT="yes" # valid answers are "yes" or "no"
PASSDIR=/etc/`basename $0`/ # the directory the encryption hash is stored in.
PASSFILE="noread" # the file containing the password hash
#--- Log Settings ---#
EMAIL="your@mailaddress.local" # the address to send logs to
EMAILCC="" # another address to send to
LOG="/var/log/zim_backup.log" # log location
#--- SSH REMOTE DR COPY ---#
# This option will secure copy your archives to a remote server via 'scp'
DRCP="no" # valid answers are "yes" or "no"
SSHUSER="you" # recommend creating a user on the remote machine just for transferring backups
SSHKEY="rsa" # recommended answers are "rsa" or "dsa" but "rsa1" is also valid.
REMOTEHOST="remote.server.fqdn" # can use IP too
REMOTEDIR="/tmp/" # where you want your backups saved.
#--- Use Hacks? ---#
# Built in hacks to fix common problems
#Hack to start Stats, even run zmlogprocess if needed
STATHACK="yes" # valid answers are "yes" or "no"
## ~~~~~!!!! SCRIPT RUNTIME !!!!!~~~~~ ##
# Best you don't change anything from here on,
# ONLY EDIT IF YOU KNOW WHAT YOU ARE DOING
Snapshots
While the above three methods can be run in a rotating fashion, they are mainly full copies; as anyone with a 20 TB store knows, backups take up space.
Thus, this section is devoted to reducing the storage needed through incremental-ish snapshots.
-Labeled 'ish' because this is nothing like the Network Edition's method of hot/live backups, which uses redologs to restore to any given second.
-In short, so we don't confuse it with NE, please don't say "my incrementals aren't working" in the forums without first mentioning that you're using a FOSS method.
(Hence this section is labeled 'snapshots', to cut down on some confusion.)
Utilities to help you make rotating snapshots: rsnapshot, rdiff-backup, or an rsync script.
Another script being worked out: http://www.zimbra.com/forums/showthread.php?threadid=15275
Another: http://www.zimbra.com/forums/showthread.php?threadid=15963
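As a minimal sketch of what these tools do under the hood (a hedged example, not any one tool's actual implementation): unchanged files are hard-linked against the previous snapshot via rsync's --link-dest, and old snapshots are pruned. The paths and the 7-day retention are assumptions.

```shell
#!/bin/sh
# A sketch of rotating hard-link snapshots, in the spirit of rsnapshot.
# SNAPROOT and the 7-day retention are assumptions, not any tool's defaults.
SNAPROOT=/backup/snapshots
SRC=/opt/zimbra/

# Snapshot names are YYYY-MM-DD, so lexicographic order equals date order;
# this helper prints the snapshot names strictly older than a cutoff date.
older_than() {
    cutoff=$1; shift
    for name in "$@"; do
        if expr "$name" \< "$cutoff" >/dev/null; then
            echo "$name"
        fi
    done
}

if [ -d "$SRC" ] && [ -d "$SNAPROOT" ]; then
    today=$(date +%F)
    prev=$(ls -1 "$SNAPROOT" | sort | tail -n 1)
    if [ -n "$prev" ]; then
        # Unchanged files become hard links into the previous snapshot,
        # so each new snapshot only costs the space of what changed.
        rsync -aHK --delete --link-dest="$SNAPROOT/$prev" "$SRC" "$SNAPROOT/$today/"
    else
        rsync -aHK --delete "$SRC" "$SNAPROOT/$today/"
    fi
    cutoff=$(date -d '7 days ago' +%F)   # GNU date
    for old in $(older_than "$cutoff" $(ls -1 "$SNAPROOT")); do
        rm -rf "$SNAPROOT/$old"
    done
fi
```

Because every snapshot directory looks like a full copy, restoring is a plain `cp -Rp` of the chosen date, with none of the replay steps a true incremental chain requires.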
Emergency Repairs
Preparing to Back Up
Before we begin, make sure that you are logged in as a user that can perform the tasks outlined here.
It is always good practice to back up your copy of Zimbra in case of unforeseen circumstances.
To prevent changes to any Zimbra databases during the backup processes you may wish to use:
>su zimbra
>zmcontrol stop
to terminate Zimbra.
If you get some kind of error, you may want to make sure that Zimbra has completely stopped by running:
>ps auxww | grep zimbra
and kill any leftover processes, such as the logger.
Alternatively, as root, you could run the following command to kill all Zimbra-user-owned processes instantly (use wisely):
>pkill -9 -u zimbra
Make sure that the copy location has enough space to hold your backup copy (e.g. the /tmp folder probably isn't the best location).
Since all of the components Zimbra needs are stored in the Zimbra folder itself, you can simply copy the folder to a safe location.
It may be possible to create a cron job to do these tasks automatically.
Copy Command: cp -Rp /opt/zimbra [location path]
Depending on your hardware and the amount of data in your Zimbra installation, this process can take a while. Note: it is a very good idea to tag your backup with the version/build of Zimbra being backed up (e.g. 3.0.0_GA_156) and the date of the backup. You'll need this later.
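The cron-job idea above can be sketched like this; the destination directory, schedule, and version tag are assumptions (the tag reuses the 3.0.0_GA_156 example from the note above).

```shell
#!/bin/sh
# Hypothetical cron-driven cold copy: stop Zimbra, copy /opt/zimbra to a
# version- and date-tagged directory, restart. DEST is an assumed location.
DEST=/backup/zimbra-cold

# Build the backup directory name from the Zimbra version/build tag and the
# date, as the note above recommends.
tag_name() {
    echo "zimbra-$1-$2"
}

if [ -d /opt/zimbra ] && [ -d "$DEST" ]; then
    su - zimbra -c "/opt/zimbra/bin/zmcontrol stop"
    cp -Rp /opt/zimbra "$DEST/$(tag_name 3.0.0_GA_156 "$(date +%F)")"
    su - zimbra -c "/opt/zimbra/bin/zmcontrol start"
fi
# Example root crontab entry (crontab -e), nightly at 03:00:
# 0 3 * * * /root/zimbra-cold-copy.sh
```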
Restoring
Before restoring, you should make sure that all of the processes associated with the damaged/failed Zimbra installation are terminated. Failure to terminate all of the processes could have dire consequences. See "Preparing to Back Up" for additional info.
Rename your "broken" Zimbra installation.
You may be able to scavenge data, if needed. If you simply do not want the old data, you can skip this part.
>mv /opt/zimbra [new location i.e. /tmp/zimbra-old]
You may want to move it completely out of the /opt folder just to be safe.
Copy your backup Zimbra installation to the /opt folder and name it "zimbra".
>cp -rp [location of backup] /opt
>mv /opt/[backup name] /opt/zimbra
Restore to Existing/Backup Zimbra Server
In the event of a failure, you have two options: restore your /opt/zimbra folder to a server that is not currently running Zimbra, then download a dummy copy of Zimbra and run an upgrade to clean everything up and make it run correctly again (see next section); or restore the backup to an existing Zimbra server. The latter only works if the existing server is running the EXACT SAME VERSION of Zimbra as the backup you want to restore. This has been tested and seems to work well with CE 4.5.8, but did not work with 4.5.7. The existing server may be, for example, a backup email server that you keep current with last night's backup so it can replace the production server on short notice in the event of a disaster.
Simply set up your backup server identically to your production server (preferably the same OS, and necessarily the exact same version of Zimbra). Any modifications you made or packages you added to your production server for extra anti-spam protection, etc., should also be added to this server. Shut down Zimbra on the backup server, then copy /opt/zimbra from your backup to the backup server.
Start Zimbra. Everything should work. The advantage of this method is that you retain all your customizations (anti-spam modifications, for example) that would otherwise be lost in the "upgrade" method. It also lets you use a script to keep a backup server current without having to reinstall Zimbra each time the backup server is brought up to date, or before putting the backup server into production in the event of a failure.
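Keeping the standby current can be as simple as a nightly one-liner; the following is a sketch in which the standby hostname and both paths are placeholders, and Zimbra is assumed to be stopped on the standby while the copy runs.

```shell
#!/bin/sh
# Hypothetical nightly push to a standby server running the exact same
# Zimbra version. STANDBY and both paths are placeholders; Zimbra must be
# stopped on the standby while the copy runs.
STANDBY=standby.example.com
if [ -d /backup/zimbra ]; then
    rsync -aHK --delete -e ssh /backup/zimbra/ "root@$STANDBY:/opt/zimbra/"
fi
```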
Downloading a Dummy Copy of Zimbra
Now we need to know which build/version of Zimbra you were running. If you followed the backup instructions above, the backup folder should be tagged with the version/build you backed up.
You need to download the full install of the Zimbra version you were running. You may find all Zimbra releases at: Sourceforge.
If you don't know your version number, you can find it as follows. Method 1 - view the install_history file:
cat /opt/zimbra/.install_history
Method 2 - I don't think this will work unless you did a bunch of chroots:
zmcontrol -v
Both of the above methods came from a forum post.
Installing the Dummy Copy of Zimbra
Once you have downloaded the tar file, you will need to uncompress it by:
>tar -xvzf [name of file].tgz
This will create a folder in the directory named "zcs", cd to it and run the install script:
>cd zcs
>./install.sh
WARNING: Do not run the script with the -u option; you will remove all of your backup data. Also, you must run the script as root, NOT as the zimbra user.
The script will remove the existing packages and install them again. It will attempt to stop the Zimbra services and fail with "UPGRADE FAILED - exiting". This is okay; simply rerun the script, and it will install normally.
If you experience error 389, -1, or connection-refused errors, please search the forums. These errors are covered extensively.
Resetting Permissions
If you are up and running now, you may have one more hurdle to overcome: permissions.
You may need to reset the permissions on the message store by running the following:
>/opt/zimbra/libexec/zmfixperms
This is potentially a dangerous suggestion, but if you have trouble getting tomcat to start even after you run zmfixperms, try running (worked for ZCS 4.5.7 CE)
chown -R zimbra:zimbra /opt/zimbra
Of course, you must run
/opt/zimbra/libexec/zmfixperms
again after that. It appears that zmfixperms is supposed to chown zimbra:zimbra on something but misses it. This way, you chown EVERYTHING to zimbra:zimbra, and zmfixperms puts back the things that need to be owned by root, postfix, etc.
Disaster Recovery
In the unfortunate event of losing your complete server or installation, the following will get you up and running again. This has been tested successfully on v5.0.5.
The machine you are recovering to must have the same hostname as the original machine.
Once you have your OS and all prerequisites installed, download and install Zimbra as normal.
Once Zimbra is installed, stop all Zimbra services and move/rename the /opt/zimbra folder: