For any service provider it is important to maintain a reliable backup policy, as it enables prompt restoration of services after any force majeure event. Although the design of Sipwise C5 implies data duplication and high availability of services, we still strongly suggest that you configure a backup procedure. Sipwise C5 has a built-in solution that helps you back up the most crucial data. Alternatively, it can be integrated with any Debian-compatible backup software.
This is the most important data in the system. All subscriber and billing information, CDRs, user preferences, etc. are stored in the MySQL server. It is strongly recommended to keep up-to-date dumps of all the databases on the corresponding Sipwise C5 nodes.
The system configuration files such as /etc/mysql/sipwise.cnf and the /etc/ngcp-config/ directory should be included in the backup as well. We suggest backing up the whole /etc folder.
The /home/jail/home/cdrexport directory contains the exported CDRs. It depends on your call data retention policy whether or not to remove these files after exporting them to an external system.
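If you want to take a manual snapshot of this data before the built-in backup feature is configured, a minimal sketch could look like the following. It assumes that /etc/mysql/sipwise.cnf is a MySQL option file usable by mysqldump (otherwise pass the credentials explicitly) and that the target directory is only an example:
# dump all MySQL databases (assumes sipwise.cnf is a readable option file)
mysqldump --defaults-extra-file=/etc/mysql/sipwise.cnf --all-databases | gzip > /var/backup/mysql-all.sql.gz
# archive the system configuration, including /etc/ngcp-config/
tar -czf /var/backup/etc.tar.gz /etc
# archive the exported CDR files
tar -czf /var/backup/cdrexport.tar.gz /home/jail/home/cdrexport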
Sipwise C5 comes with an easy-to-use solution that creates daily backups of the most important data described above.
This functionality is disabled by default; it can be enabled and configured in the backuptools subsection of the config.yml file. Please refer to the “C.1.3 backup tools” section of the “Sipwise C5 configs overview” chapter for the backup configuration options.
Once you set the required configuration options, apply the changes:
ngcpcfg apply 'enable the backup feature'
ngcpcfg push all
Once you activate the feature, Sipwise C5 will create backups during off-peak time on the standby nodes and put them into the /var/backup/ngcp_backup directory. You can copy these files to your backup server using scp or ftp.
Info: Make sure that you have enough free disk space to store the backups for the specified number of days.
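For example, to copy the generated backup files to a remote backup server with scp (hostname and target path are only placeholders):
scp /var/backup/ngcp_backup/* backup.example.com:/srv/backups/ngcp/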
In the worst-case scenario, when the system needs to be recovered from a total loss, you only need 4 steps to get the services back online:
ngcpcfg apply 'restored the system from the backup'
ngcpcfg push all
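The restore procedure itself depends on how and where the backup files were stored. A minimal sketch, assuming a freshly installed Sipwise C5 node and daily backups of /etc and the MySQL databases copied back from the backup server (all file names below are illustrative), would be to restore the files and databases first and then run the ngcpcfg commands shown above:
# fetch the latest backups from the backup server (names are illustrative)
scp backup.example.com:/srv/backups/ngcp/etc.tar.gz /tmp/
scp backup.example.com:/srv/backups/ngcp/mysql-all.sql.gz /tmp/
# restore the system configuration, including /etc/ngcp-config/
tar -xzf /tmp/etc.tar.gz -C /
# restore all MySQL databases
zcat /tmp/mysql-all.sql.gz | mysql --defaults-extra-file=/etc/mysql/sipwise.cnf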
Important: All existing data will be wiped out! Use this script only if you want to clear all previously configured services and start the configuration from scratch.
To reset the database to its original state you can use a script provided by the CE:
* Execute ngcp-reset-db. It will assign new unique passwords for Sipwise C5 services and reset all services. The script will also create dumps of all Sipwise C5 databases.
In case of unresolvable database replication issues, or when MySQL data needs to be copied between a pair of hosts (usually the sp1 and sp2 nodes), there is a script for that: ngcp-sync-db. To synchronize the databases you need to run the script on the target host.
Definitions: local is the target host on which the script is executed and whose database will be overwritten; master is the source host whose database is copied.
Important: Your existing database on the local node will be completely wiped. The script provides the possibility to back up both the master and the local databases during the procedure.
You can run the script with -h or --help to check its options, or use man ngcp-sync-db.
If you run it without any options, it automatically determines the master hostname (e.g. if you run it on sp2, then sp2==local and sp1==master).
The script also requires MySQL credentials; if none are provided, it uses username=sipwise with the password taken from /etc/mysql/sipwise.cnf. You can specify the user and/or password for both the master and the local node.
Before it actually starts, the script prints a summary of the settings used for the procedure and asks for confirmation, to prevent accidental usage. Using the --force option suppresses the confirmation prompt.
By default no messages are printed on STDOUT (so that the script can be integrated into other tools); with the -v or --verbose option you enable verbose output, where all ongoing steps are printed to STDOUT.
There are two synchronization modes available: online and backup. By default the online mode is used: the procedure does not create any backups and everything happens on the fly. This is useful for large databases, where creating backups would require a substantial amount of free disk space. In backup mode the master database is first dumped into a backup file on the local node (default directory: /var/backup/ngcp-sync-db) and imported once the dump is complete.
A MySQL connection to both the master and the local database is an essential part of the procedure. By default the script tries to establish a direct MySQL connection; however, this may not be possible due to access restrictions. To overcome that, use the --ssh-tunnel option and specify a free local custom port (e.g. --ssh-tunnel=33125); in this case an SSH tunnel to the master is created and the database connection is established through it on behalf of localhost. (NOTE: public-key based SSH authentication is required for the tunnel, as the script does not support SSH credentials for security reasons.)
Backups may be created during synchronization for possible rollbacks. To create a backup of the local database, add --local-backup. A backup of the master database is created automatically only with --sync-mode=backup. Upon completion all created backups are deleted; if you need to keep them, use the --keep-backups option. (NOTE: if errors occur during synchronization, any backups that have been created are NOT deleted automatically. Therefore, if the script first failed with an error and afterwards completed successfully, you may want to remove the remaining backups from /var/backup/ngcp-sync-db manually.)
Normal online mode synchronization sp1 → sp2.
sp2> ngcp-sync-db
Normal backup mode synchronization sp1 → sp2.
sp2> ngcp-sync-db --sync-mode=backup
Forced online mode synchronization sp1 → sp2. USE WITH CARE as there will be no confirmation prompts.
sp2> ngcp-sync-db --force
Direct mysql db access is not possible. SSH tunnel is initialised to local port 33125 and forwards all connections 127.0.0.1:33125 → sp1:3306.
sp2> ngcp-sync-db --ssh-tunnel=33125
Custom mysql credentials for the master db connection (by default: sipwise:/etc/mysql/sipwise.cnf)
sp2> ngcp-sync-db --master-user=frank --master-pass=dbconnect
Normal online mode synchronization sp1 → sp2 with a local db backup that is kept afterwards (no master backup in this case, as it is only available with --sync-mode=backup).
sp2> ngcp-sync-db --local-backup --keep-backups
Normal online mode synchronization custom-node → sp2 with an SSH tunnel.
sp2> ngcp-sync-db --master-host=custom-node --ssh-tunnel=45001
Forced synchronization custom-node → sp2 with an SSH tunnel, backup sync mode, local backup, custom master and local db credentials and ports, as well as a different backup directory.
sp2> ngcp-sync-db --force --sync-mode=backup --master-host=custom-node --master-port=3308 --ssh-tunnel=45001 --master-user=frank --master-pass=dbconnect --local-user=john --local-pass=dblocal --local-backup --keep-backups --backup-dir=/home/barry/backups
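After a synchronization between sp1 and sp2 you may want to verify that MySQL replication has caught up again. This is a generic MySQL check and not a feature of ngcp-sync-db; it requires a database user with sufficient privileges:
sp2> mysql --defaults-extra-file=/etc/mysql/sipwise.cnf -e 'SHOW SLAVE STATUS\G' | grep -E 'Running|Seconds_Behind'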
Sipwise C5 offers an easy way to clean up, back up or archive old accounting data, i.e. CDRs, that is no longer necessary for further processing or must be deleted according to the law. There are some Sipwise C5 components designed for this purpose and they are commonly called cleanuptools. These are basically configurable scripts that interact with NGCP’s accounting and kamailio databases, or remove exported CDR files, in order to clean or archive the unnecessary data.
The configuration parameters of the cleanuptools are located in the main Sipwise C5 configuration file: /etc/ngcp-config/config.yml. Please refer to the config.yml file description (Cleanuptools Configuration Data, Section 1.7, “cleanuptools”) for configuration parameter details.
In case the system administrator needs to modify some configuration value, the new configuration must be activated in the usual way, by running the following commands:
> ngcpcfg apply 'Modified cleanuptools config'
> ngcpcfg push all
As a result, new configuration files will be generated for the accounting database and the exported CDR cleanup tools. Please read the detailed description of those tools in the subsequent sections of the handbook.
The Sipwise C5 system administrator can also select the time when cleanup scripts are
run, by modifying the schedule here: /etc/cron.d/cleanup-tools
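The entries below only illustrate what such a schedule might look like; check the actual /etc/cron.d/cleanup-tools file on your system for the real run times:
# illustrative schedule only
30 03 * * * root /usr/sbin/acc-cleanup.pl
45 03 * * * root /usr/sbin/cleanup-old-cdr-files.pl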
The script responsible for cleaning up the database is: /usr/sbin/acc-cleanup.pl
The configuration file used by the script is: /etc/ngcp-cleanup-tools/acc-cleanup.conf
An extract from a sample configuration file is provided here:
############
batch = 10000
archive-target = /var/backup/cdr
compress = gzip
username = dbcleaner
password = rcKamRdHhx7saYRbkJfP
host = localhost

connect accounting
time-column = from_unixtime(start_time)
backup-months = 2
backup-retro = 2
backup cdr

connect accounting
archive-months = 2
archive cdr

connect kamailio
time-column = time
cleanup-days = 90
cleanup acc

# Clean up after mediator by deleting old leftover acc entries and deleting
# old entries out of acc_trash and acc_backup
connect kamailio
time-column = time
cleanup-days = 30
cleanup acc_trash
cleanup acc_backup
The configuration file itself contains a detailed description of how the database cleanup script works. It consists of a series of statements, one per line, which are executed in sequence. A statement can either set a variable to some value, or perform an action.
There are 3 types of actions the database cleanup script can take: backup, archive and cleanup. These actions are discussed in the following sections.
A generic action is connecting to the proper database: connect <database name>
The database cleanup tool can create monthly backups of CDRs in the accounting database and store those data records in separate tables named cdr_YYYYMM. The instruction in the configuration file looks like: backup <table name>; by default and typically it is: backup cdr
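To see which monthly backup tables already exist, a generic MySQL query can be used (this is not part of the cleanup tool; credentials are assumed to be available in /etc/mysql/sipwise.cnf):
mysql --defaults-extra-file=/etc/mysql/sipwise.cnf accounting -e "SHOW TABLES LIKE 'cdr_%'"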
Configuration values that govern the backup procedure are:
time-column: Which column in the cdr table shows the month a CDR belongs to.
batch: How many records to process within a single SQL statement. If unset, or less than or equal to 0, all of them are processed at once.
backup-months: How many months' worth of records to keep in the cdr table (where current CDRs are stored) and not move into the monthly backup tables.
Important: Months are always processed as a whole, thus the value specifies how many months to keep AT MOST. In other words, if the script is started on December 15th and this value is set to "2", then all of December and November are kept, and all of October will be backed up.
backup-retro: How many months to process for backups, going backwards in time. Using the example above, with this value set to "3", the months October, September and August would be backed up, while any older records would be left untouched.
The database cleanup tool can archive (dump) old monthly backup tables. The statement used for this purpose is: archive <table name>; by default and typically it is: archive cdr. This creates an SQL dump of those tables created by the backup statement that have become too old, and drops them from the database afterwards. Archiving uses the following configuration values:
archive-months: Uses the same logic as the backup-months variable above. If set to "12" and the script was started on December 15th, it will start archiving with the December table of the previous year.
Important: Note that the sum of backup-months and archive-months effectively determines how long CDR data is kept in the accounting database before it is archived and dropped.
archive-target: Target directory for writing the SQL dump files into. If explicitly specified as "/dev/null", then no actual archiving will be performed; instead the tables will only be dropped from the database.
compress: If set to "gzip", then gzip the dump files after creation. If unset, do not compress.
host, username and password: As dumping is performed by an external command, those variables are reused from the connect statement.
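Should an archived month ever be needed again, the SQL dump can be loaded back manually. The following is only a sketch: the exact dump file name and compression depend on the archive-target and compress settings above:
# hypothetical restore of an archived monthly table
zcat /var/backup/cdr/cdr_202301.sql.gz | mysql --defaults-extra-file=/etc/mysql/sipwise.cnf accounting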
The database cleanup tool may also clean up database tables without performing a backup. In order to do that, the statement cleanup <table name> is used. Typically this has to be done in the kamailio database, for example:
cleanup acc
cleanup acc_trash
cleanup acc_backup
Basically the cleanup statement works just like the backup statement, except that it does not actually back up anything but simply deletes old records. Configuration values used by the procedure:
time-column: Gives the database column name that shows the time of CDR creation.
batch: The same as with the backup statement.
cleanup-days: Any record older than this many days will be deleted.
The script responsible for cleaning up exported CDR files is: /usr/sbin/cleanup-old-cdr-files.pl
The configuration file used by exported CDR cleanup script is: /etc/ngcp-cleanup-tools/cdr-files-cleanup.yml
A sample configuration file is provided here:
enable: no
max_age_days: 30
paths:
  - path: /home/jail/home/*/20[0-9][0-9][0-9][0-9]/[0-9][0-9]
    wildcard: yes
    remove_empty_directories: yes
    max_age_days: ~
  - path: /home/jail/home/cdrexport/resellers/*/20[0-9][0-9][0-9][0-9]/[0-9][0-9]
    wildcard: yes
    remove_empty_directories: yes
    max_age_days: ~
  - path: /home/jail/home/cdrexport/system/20[0-9][0-9][0-9][0-9]/[0-9][0-9]
    wildcard: yes
    remove_empty_directories: yes
    max_age_days: ~
The exported CDR cleanup tool simply deletes CDR files in the directories provided in the configuration file, if those have already expired.
Configuration values that define the files to be deleted:
enable: Enable (yes) or disable (no) exported CDR cleanup.
max_age_days: Gives the expiration time of the exported CDR files in days. There is a general value which may be overridden by a local value provided for a specific path. The local value is valid for that particular path only.
paths: An array of path definitions.
path: A path where CDR files are to be found and deleted; this may contain wildcard characters.
wildcard: Enable (yes) or disable (no) using wildcards in the path.
remove_empty_directories: Enable (yes) or disable (no) removing empty directories if those are found in the given path.
max_age_days: The local expiration time value for files in the particular path.