Backup, Recovery and Maintenance

Sipwise C5 Backup

For any service provider it is important to maintain a reliable backup policy, as it enables prompt service restoration after any force majeure event. Although the design of Sipwise C5 implies data duplication and high availability of services, we still strongly suggest configuring a backup procedure. The Sipwise C5 has a built-in solution that can help you back up the most crucial data. Alternatively, it can be integrated with any Debian-compatible backup software.

What data to back up

  • The database

This is the most important data in the system. All subscriber and billing information, CDRs, user preferences, etc. are stored in the MySQL server. It is strongly recommended to have up-to-date dumps of all the databases on corresponding Sipwise C5 nodes.

  • System configuration

The system configuration folder /etc/ngcp-config/ must be included in the backup as well. It contains the system-specific configuration (such as SSL keys), and you might also have local modifications. We suggest backing up the whole /etc folder to preserve the etckeeper history, which records who changed particular configuration files and when.

  • Exported CDRs (optional)

The /home/jail/home/cdrexport directory contains the exported CDRs. It depends on your call data retention policy whether or not to remove these files after exporting them to an external system.
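For the database item above, a manual dump could be sketched as follows. This is a sketch only: the credentials file is the one referenced elsewhere in this chapter, while the backup directory and filename scheme are assumptions; the built-in backup solution described below is the preferred way.

```shell
# Sketch only: dump all databases into one timestamped, compressed file.
# BACKUP_DIR and the filename scheme are assumptions; adjust to your setup
# (production would typically use a directory under /ngcp-data/backup).
BACKUP_DIR="${BACKUP_DIR:-/tmp/ngcp-mysql-backup}"
STAMP=$(date +%Y%m%d_%H%M%S)
DUMP_FILE="${BACKUP_DIR}/all-databases_${STAMP}.sql.gz"
mkdir -p "$BACKUP_DIR"
# --single-transaction gives a consistent dump of InnoDB tables without locking
if command -v mysqldump >/dev/null 2>&1 && [ -r /etc/mysql/sipwise_extra.cnf ]; then
    mysqldump --defaults-file=/etc/mysql/sipwise_extra.cnf \
        --all-databases --single-transaction --routines --events \
        | gzip > "$DUMP_FILE"
fi
```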

The built-in backup solution

The Sipwise C5 comes with an easy-to-use solution that creates daily backups of the most important data:

  • The system configuration files. The whole /etc directory is backed up.

  • Exported CDRs. The /home/jail/home/cdrexport directory with csv files.

  • All required databases on corresponding servers.

This functionality is disabled by default and can be enabled and configured in the backuptools subsection of the config.yml file. Please refer to the “C.1.3 backup tools” section of the “Sipwise C5 configs overview” chapter for the backup configuration options.

Once you set the required configuration options, apply the changes:

ngcpcfg apply 'enable the backup feature'
ngcpcfg push all

Once you activate the feature, Sipwise C5 will create backups during off-peak time on the standby nodes and put them into the /ngcp-data/backup/ngcp_backup directory, namespaced by a timestamp and the node pair name. By default it will also copy them to its peer node. It can also be configured to copy them to the 'mgmt' nodes, so that they keep a consolidated backup of the entire Carrier. You can copy these files to your backup server using scp or ftp.
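For example, a copy over scp could be sketched like this; the remote host and path are placeholders, not Sipwise defaults:

```shell
# Sketch only: push the backup sets to an external server over scp.
# backup.example.com and /srv/ngcp-backups are placeholders.
BACKUP_SRC=/ngcp-data/backup/ngcp_backup
REMOTE='backup@backup.example.com:/srv/ngcp-backups/'
# Only attempt the copy if a backup set actually exists on this node
if [ -d "$BACKUP_SRC" ]; then
    scp -r "$BACKUP_SRC"/. "$REMOTE"
fi
```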

Make sure that you have enough free disk space to store the backups for the specified number of days.
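A simple pre-flight check for free space could look like the following sketch; the threshold and default directory are assumptions:

```shell
# Sketch only: warn if the filesystem holding the backups is low on space.
# The 1024 MB threshold is an assumption; size it to your backup volume.
BACKUP_DIR="${BACKUP_DIR:-/tmp}"      # production: /ngcp-data/backup
MIN_FREE_MB=1024
# df -P gives POSIX output; -m reports sizes in 1 MB blocks
free_mb=$(df -Pm "$BACKUP_DIR" | awk 'NR==2 {print $4}')
if [ "$free_mb" -lt "$MIN_FREE_MB" ]; then
    echo "WARNING: only ${free_mb} MB free on ${BACKUP_DIR}" >&2
fi
```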

Recovery

In the worst case scenario, when the system needs to be recovered from a total loss, only a few steps are needed to get the services back online:

  • Install Sipwise C5 as explained in chapter 2.

  • Restore the /etc/ngcp-config/ directory from the backup, overwriting your local files.

  • Restore 'mysql.encryption.key' in constants.yml if the new MariaDB instance/binlogs were encrypted (the database is encrypted by default). See the detailed information in the MariaDB data restoration remarks.

  • Restore the database from the latest MySQL dump.

  • Apply the changes to bring the original configuration into effect:

ngcpcfg apply 'restored the system from the backup'
ngcpcfg push all

Reset Database

All existing data will be wiped out! Use this script only if you want to clear all previously configured services and start configuration from scratch.

To reset the database to its original state you can use a script provided by the CE: execute ngcp-reset-db. It will assign new unique passwords for Sipwise C5 services and reset all services. The script will also create dumps of all Sipwise C5 databases.

Synchronize database

In case of unresolvable database replication issues, or to copy MySQL data between a pair of hosts (usually the sp1 and sp2 nodes), you can use the ngcp-sync-db script.

To synchronize databases you need to run the script on your target host.

  • Definitions:

    • 'master' - remote/master host (the database is dumped from there)

    • 'local' - target/local host (the database is imported there)

  • Usage:

Your existing database on 'local' will be completely wiped. The script provides the possibility to back up both the 'master' and 'local' databases during the procedure.

You can run the script with -h or --help to check its options, or use man ngcp-sync-db.

If you run it without any options, it automatically determines the 'master' hostname (e.g. if you run it on 'sp2', then 'sp2'=='local' and 'sp1'=='master').

The script also requires MySQL credentials; if none are provided, it uses those from the file /etc/mysql/sipwise_extra.cnf. You can specify the user and/or password for both 'master' and 'local'.

Before starting, it prints a summary of the settings used for the procedure and shows a confirmation prompt to prevent accidental usage. The --force option suppresses the confirmation prompt. By default no messages are printed on STDOUT (making it suitable for integration into other tools); with the -v or --verbose option you enable debugging, where all the ongoing steps are printed to STDOUT.

There are 2 modes available for synchronization: 'online' and 'backup'. By default 'online' is used, where the procedure does not create any backups and everything happens on the fly. That is useful for large databases, where creating backups would require a considerable amount of free disk space. With the 'backup' mode, the 'master' db is first dumped into a backup file on 'local' (default directory: '/ngcp-data/backup/ngcp-sync-db') and imported once the backup is complete.

The MySQL connection to the 'master' and 'local' databases is an essential part of the procedure. By default the script tries to establish a direct MySQL connection, but that may not be possible due to access restrictions. To overcome this, use the --ssh-tunnel option and specify a free local port (e.g. --ssh-tunnel=33125); an SSH tunnel to 'master' is then created and used to establish the database connection on behalf of 'localhost'. (NOTE: public key based SSH negotiation is required for the tunnel, as the script does not support SSH credentials for security reasons.)

Backups may be created during synchronization to allow possible rollbacks. To create the 'local' db backup, add --local-backup. The 'master' db backup is automatically created only when using --sync-mode=backup. Upon successful completion all created backups are deleted; if you need to keep them, use the --keep-backups option. (NOTE: in case of errors during synchronization, any backups that were created are NOT automatically deleted. Therefore, if the script failed with an error and afterwards completed successfully, you may want to manually remove the remaining backups from /ngcp-data/backup/ngcp-sync-db.)

  • Examples:

Normal online mode synchronization 'sp1' → 'sp2'.

sp2> ngcp-sync-db

Normal backup mode synchronization 'sp1' → 'sp2'.

sp2> ngcp-sync-db --sync-mode=backup

Forced online mode synchronization 'sp1' → 'sp2'. USE WITH CARE as there will be no confirmation prompts.

sp2> ngcp-sync-db --force

Direct mysql db access is not possible. SSH tunnel is initialised to local port 33125 and forwards all connections 127.0.0.1:33125 → sp1:3306.

sp2> ngcp-sync-db --ssh-tunnel=33125

Custom mysql credentials for the 'master' db connection (by default: /etc/mysql/sipwise_extra.cnf)

sp2> ngcp-sync-db --master-user=frank --master-pass=dbconnect

Normal online mode synchronization 'sp1' → 'sp2' with the 'local' db backup and retaining the backup. (no 'master' backup in this case as it is only available with --sync-mode=backup).

sp2> ngcp-sync-db --local-backup --keep-backups

Normal online mode synchronization 'custom-node' → 'sp2' with ssh tunnel

sp2> ngcp-sync-db --master-host=custom-node --ssh-tunnel=45001

Forced synchronization 'custom-node' → 'sp2' with ssh tunnel, backup sync mode, local backup, custom 'master' and 'local' db credentials and ports, as well as a different backup dir.

sp2> ngcp-sync-db --force --sync-mode=backup --master-host=custom-node --master-port=3308 --ssh-tunnel=45001 --master-user=frank --master-pass=dbconnect --local-user=john --local-pass=dblocal --local-backup --keep-backups --backup-dir=/home/barry/backups

Accounting Data (CDR) Cleanup

Sipwise C5 offers ways to clean up, back up or archive old accounting data — i.e. CDRs — that is no longer necessary for further processing, or must be deleted according to the law. There are some Sipwise C5 components designed for this purpose and they are commonly called cleanuptools. These are configurable scripts that interact with NGCP’s accounting and kamailio databases, or remove exported CDR files, in order to clean or archive the unnecessary data.

Cleanuptools Configuration

The configuration parameters of cleanuptools are located in the main Sipwise C5 configuration file: /etc/ngcp-config/config.yml. Please refer to the config.yml file description: Cleanuptools Configuration Data for configuration parameter details.

In case the system administrator needs to modify some configuration value, the new configuration must be activated in the usual way, by running the following commands:

> ngcpcfg apply 'Modified cleanuptools config'
> ngcpcfg push all

As a result, new configuration files will be generated for the accounting database and the exported CDR cleanup tools. Please read the detailed description of those tools in subsequent sections of the handbook.

The Sipwise C5 system administrator can also select the time when cleanup scripts are run, by modifying the schedule here: /etc/cron.d/cleanup-tools
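For illustration, an entry in that file might look like the following. This is a sketch, not the generated content; the 03:00 schedule and the binary path are assumptions, so always check the file on your own system:

```
# Illustrative /etc/cron.d/cleanup-tools entry (format: minute hour
# day-of-month month day-of-week user command). The 03:00 schedule and
# the /usr/sbin path are assumptions; check the generated file itself.
0 3 * * * root /usr/sbin/ngcp-cleanup-acc
```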

Accounting Database Cleanup

The script responsible for cleaning up the database is: ngcp-cleanup-acc

The configuration file used by the script is: /etc/ngcp-cleanup-tools/acc-cleanup.conf

An extract from a sample configuration file is provided here:

############

batch = 10000
archive-target = /ngcp-data/backup/cdr
compress = gzip

username = dbcleaner
password = rcKamRdHhx7saYRbkJfP
host = localhost
port = 3306

redis-batch = 10000
redis-port = 6379

connect accounting
keep-months = 2
use-partitioning = yes
timestamp-column = cdr_start_time
backup cdr_cash_balance_data
backup cdr_time_balance_data
backup cdr_relation_data
backup cdr_tag_data
backup cdr_mos_data
backup cdr_export_status_data
backup cdr_group
timestamp-column = first_cdr_start_time
backup cdr_period_costs
timestamp-column = start_time
backup cdr

archive-months = 2
archive cdr_cash_balance_data
archive cdr_time_balance_data
archive cdr_relation_data
archive cdr_tag_data
archive cdr_mos_data
archive cdr_export_status
archive cdr_group
archive cdr_period_costs
archive cdr

cleanup-days = 1
use-partitioning = no
timestamp-column = cdr_start_time
cleanup int_cdr_cash_balance_data
cleanup int_cdr_time_balance_data
cleanup int_cdr_relation_data
cleanup int_cdr_tag_data
cleanup int_cdr_group
cleanup int_cdr_export_status
timestamp-column = start_time
cleanup int_cdr

connect kamailio
time-column = time
cleanup-days = 90
cleanup acc

connect-redis 21
connect kamailio
time-column = time_hires
cleanup-days = 3
cleanup-mode = mysql
cleanup-redis acc:entry::*

# Clean up after mediator by deleting old leftover acc entries and deleting
# old entries out of acc_trash and acc_backup
connect kamailio
time-column = time
cleanup-days = 30
cleanup acc_trash
cleanup acc_backup

maintenance = no

The configuration file itself contains a detailed description of how the database cleanup script works. It consists of a series of statements, one per line, which are executed in sequence. A statement can either set a variable to some value or perform an action.

There are 4 types of actions the database cleanup script can take:

  • backup database tables

  • archive database tables

  • cleanup database tables

  • cleanup redis databases

These actions are discussed in the following sections.

A generic action is connecting to the proper database: connect <database name>

Backup Database Tables

The database cleanup tool can create monthly backups of data in the accounting database tables by moving old records to separate tables named cdr_YYYYMM. The statement in the configuration file looks like: backup <table name>; by default and typically it is: backup cdr

Configuration values that govern the backup procedure are:

  • time-column: The name of the column in the table to use for determining which month a record belongs to. Must be a "datetime" column.

  • timestamp-column: The name of the column in the table to use for determining which month a record belongs to. Must be a "decimal(13,3)" column.

  • use-partitioning: If a table is partitioned using the time-column (timestamp-column) and the value is set to "yes", then moving/deleting records are instant operations, done by managing the partitions. Otherwise the usual method is used to delete/move records by chunks.

  • batch: How many rows to include per transaction when processing in chunks. If unset or <= 0, all rows are processed at once.

  • keep-months: How many months worth of records to keep in the table and not move into the monthly backup tables.

    IMPORTANT: Months are always processed as a whole and this specifies how many months to keep AT MOST. In other words, if the script is started on December 15th and this value is set to "2", then all of December and November are kept, and all of October will be moved out.
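The whole-month arithmetic can be illustrated with GNU date (a sketch only; the cleanup tool implements this logic internally):

```shell
# keep-months = 2, run date 2021-12-15: December and November stay in the
# live table; records before 2021-11-01 are moved to cdr_YYYYMM tables.
RUN_DATE=2021-12-15
KEEP_MONTHS=2
# First day of the oldest month that is kept (GNU date month arithmetic)
oldest_kept=$(date -d "${RUN_DATE%-*}-01 -$((KEEP_MONTHS - 1)) month" +%Y%m)
echo "oldest month kept in the live table: ${oldest_kept}"
```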

Archive Database Tables

The database cleanup tool can archive (dump) old monthly tables. The statement used for this purpose is: archive <table name>, by default and typically it is: archive cdr

This creates an SQL dump out of the older tables created by the backup statement and drops them from the database afterwards. Archiving uses the following configuration values:

  • archive-months: Uses the same logic as the "keep-months" variable. If set to "12" and the script was started on December 15th, it will start archiving with the December table of the previous year. Archiving continues month by month, going backwards in time, until the script encounters a missing table.

  • archive-target: Target directory for writing the SQL dump files. If explicitly specified as "/dev/null", then no actual archiving will be performed, but instead the tables will only be dropped.

  • compress: If set to "gzip", then gzip the dump files after creation. If unset, do not compress.

  • "host", "username" and "password": As dumping is performed by an external command, these variables are reused from the "connect" statement.

Cleanup Database Tables

The database cleanup tool may also clean up database tables without performing a backup. In order to do that, the statement cleanup <table name> is used. Typically this has to be done in the kamailio database, for example:

  • cleanup acc

  • cleanup acc_trash

  • cleanup acc_backup

The cleanup statement works exactly like the backup statement, except that it does not back anything up but only deletes old records. An additional configuration parameter is required by the cleanup procedure:

  • cleanup-days: Any record older than these many days will be deleted.

Cleanup Redis Databases

With the advent of persisting kamailio.acc record data in a redis keystore, a separate cleanup-redis <key pattern> statement was introduced. It will remove old redis entries, whose keys match the given redis SCAN pattern. Typically this has to be done for the redis 21 database (acc records), e.g.:

  • cleanup-redis acc:entry::*

connect-redis <redis database number> has to be used instead of connect <mariadb database name>, which is needed to initially connect before any backup, archive and cleanup operations.

The cleanup-redis statement works similarly to the cleanup statement, with the additional options below:

  • time-column: The name of the field in a redis entry denoting the record timestamp in epoch seconds.

  • cleanup-mode: If set to "delete", aged entries will simply be removed. If set to "mysql", they will be inserted into a database table first. The latter additionally requires connect to open the database.

  • redis-batch: Chunk size of redis entries with matching keys to look at. Note that redis processing always works in chunks, there is no partitioning.

  • cleanup-days: Any entry older than these many days will be deleted.
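Since time_hires stores the record timestamp in epoch seconds, the cutoff implied by cleanup-days can be sketched with GNU date:

```shell
# Sketch only: compute the epoch-seconds cutoff implied by cleanup-days = 3.
CLEANUP_DAYS=3
cutoff=$(date -d "${CLEANUP_DAYS} days ago" +%s)
echo "entries with a timestamp below ${cutoff} are candidates for removal"
```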

Exported CDR Cleanup

The script responsible for cleaning up exported CDR files is: ngcp-cleanup-cdr-files

The configuration file used by exported CDR cleanup script is: /etc/ngcp-cleanup-tools/cdr-files-cleanup.yml

A sample configuration file is provided here:

enable: no
max_age_days: 30
paths:
  -
    path: /home/jail/home/*/20[0-9][0-9][0-9][0-9]/[0-9][0-9]
    wildcard: yes
    remove_empty_directories: yes
    max_age_days: ~
  -
    path: /home/jail/home/cdrexport/resellers/*/20[0-9][0-9][0-9][0-9]/[0-9][0-9]
    wildcard: yes
    remove_empty_directories: yes
    max_age_days: ~
  -
    path: /home/jail/home/cdrexport/system/20[0-9][0-9][0-9][0-9]/[0-9][0-9]
    wildcard: yes
    remove_empty_directories: yes
    max_age_days: ~

The exported CDR cleanup tool deletes CDR files in the directories provided in the configuration file, if those have already expired.

Configuration values that define the files to be deleted:

  • enable: Enable (yes) or disable (no) exported CDR cleanup.

  • max_age_days: Gives the expiration time of the exported CDR files in days. There is a general value which may be overridden by a local value provided at a specific path. The local value is valid for the particular path only.

  • paths: an array of path definitions

    • path: a path where CDR files are to be found and deleted; this may contain wildcard characters

    • wildcard: Enable (yes) or disable (no) using wildcards in the path

    • remove_empty_directories: Enable (yes) or disable (no) removing empty directories if those are found in the given path

    • max_age_days: the local expiration time value for files in the particular path
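The path patterns above are ordinary shell globs; their effect can be checked with a scratch directory tree (a sketch with made-up paths):

```shell
# Sketch only: show what the YYYYMM/DD glob from the sample config matches.
base=$(mktemp -d)
mkdir -p "$base/202112/15" "$base/notes"   # only the first should match
for d in "$base"/20[0-9][0-9][0-9][0-9]/[0-9][0-9]; do
    echo "matched: $d"
done
```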

Managing packages

The Sipwise C5 uses Debian packages to deliver the code to servers. It is therefore important to keep the Debian package installation state and versions consistent across the nodes in the cluster. To achieve this, Sipwise C5 uses the Approx component.

Approx is a proxy server for Debian archive files. It fetches files from remote repositories on demand and caches them for local use. Files are always delivered from the Approx cache at the same version as previously delivered to the other cluster nodes.

All Approx cache files are stored in '/var/cache/approx/' on the first management type (MGMT) Carrier node (normally 'web01' on Carrier) and are shared between the 'web01a' and 'web01b' pair for high availability.

Two tools are available on the platform to maintain the Approx cache: ngcp-approx-cache and ngcp-approx-snapshots (call them with the '--help' option to see all the available functionality).

To provide all the necessary functionality, Approx distinguishes between two main types of files:

The first type, 'Repository metadata', is always 'frozen' and always returned from the Approx cache. Freezing it is enough to provide the same Debian packages to all cluster nodes. To update the Approx cache ('Repository metadata'), use 'ngcp-approx-cache'; to sync the Approx cache ('Repository metadata') between installations (e.g. LAB and PROD), use 'ngcp-approx-snapshots'.

The second type, 'Repository packages', is a local cache/mirror of remote Debian servers. Every time a server requests some package, it is looked up in the Approx cache storage and returned if available (for performance reasons). If the package files are missing, they are requested from the remote server, returned to the client and stored locally for future use. This approach speeds up the installation stage (all packages are available from the LAN) and makes it possible to reinstall the old package state in case a cluster node needs reinstallation (e.g. disaster recovery), as all packages are available locally even if they have disappeared from the Debian servers.

To provide such a separation, Sipwise C5 has two TCP ports to use:

  • Approx read-write (RW) port (by default '9999'). Managed by Approx itself.

  • Approx read-only (RO) port (by default '9998'). Managed by Nginx.

All requests towards the Approx RW port will overwrite the Approx cache (access to the Approx RW port is limited to the 'ha_int' interface only). Normally only ngcp-approx-cache uses the RW port. The APT source files '/etc/apt/sources.list.d/*.list' use only the RO port (the Approx host is 'web01' for Carrier and 'sp' for PRO installations):

root@web01a:~# grep -H 9998 /etc/apt/sources.list.d/*
/etc/apt/sources.list.d/debian.list:deb http://web01:9998/debian/ bullseye main contrib non-free
/etc/apt/sources.list.d/debian.list:deb http://web01:9998/debian-security/ bullseye-security main contrib non-free
/etc/apt/sources.list.d/debian.list:deb http://web01:9998/debian/ bullseye-updates main contrib non-free
/etc/apt/sources.list.d/debian.list:deb http://web01:9998/debian-debug/ bullseye-debug main contrib non-free
/etc/apt/sources.list.d/sipwise.list:deb [arch=amd64] http://web01:9998/autobuild/ release-trunk-bullseye main
/etc/apt/sources.list.d/sipwise.list:#deb-src http://web01:9998/autobuild/ release-trunk-bullseye main
root@web01a:~#

Approx does NOT support the secure HTTP protocol (HTTPS); all connections from NGCP servers towards Approx should use plain HTTP transport.

All connections from Approx to external servers use HTTPS by default and can be fine-tuned in case an HTTP/HTTPS proxy is in use. The proxy servers should be configured in config.yml:

bootenv:
  http_proxy: ''
  https_proxy: ''

The custom Approx repositories can be defined using the following section in config.yml:

bootenv:
  custom_repos:
  - enable: no
    name: my-example-repo
    url: https://example.com/debian
  - enable: yes
    name: my-example-repo2
    url: https://example.com/myrepo

Maintaining the Approx cache

It is recommended to create an Approx snapshot before updating the Approx cache.

The script 'ngcp-approx-cache' is designed to update the Approx cache and clean/manage it.

To update the Approx cache please use the following command:

ngcp-approx-cache --auto # you can add --force to skip all confirmations

The command above will update all the 'Repository metadata' to the latest available versions from the remote servers (pointed by APT source list files in '/etc/apt/sources.list.d/*.list').

The tool ngcp-approx-cache should be called on a management type (MGMT) Carrier node. It is enough to execute it once, on either node (web01a/sp1 or web01b/sp2).

To update all packages on the server from the Approx cache it is enough to call the usual Debian commands:

apt update && apt upgrade

Do not forget to update the DB/ngcp-config if NGCP packages have been updated, and to apply the configuration changes: ngcp-update-db-schema && ngcp-update-cfg-schema && ngcpcfg apply 'new packages'.

To prevent unnecessary downtime, always upgrade Debian packages on the inactive HA node.

Also, the tool 'ngcp-approx-cache' allows you to check the Approx cache consistency, clean stale packages and/or NGCP releases from the Approx cache, and more. See all the available functionality using the '--help' option:

ngcp-approx-cache --help

Maintaining the Approx snapshots

An Approx snapshot captures the state of the Approx cache at a particular point in time. Approx snapshots only contain the 'Repository metadata' parts. The tool 'ngcp-approx-snapshots' is designed to create/manage/export/import Approx snapshots between different installations (e.g. LAB and PROD). It allows syncing Debian package versions between systems and guarantees installation state consistency between production and the code tested by QA in the LAB, etc.

Usage example:

On the LAB system, update approx cache, install and test new packages:

root@web01a:~# ngcp-approx-cache --auto --force
...
root@web01a:~# ngcp-approx-snapshots --create
Creating approx snapshot '20210319002625'...
root@web01a:~#
root@web01a:~# apt update && apt upgrade && ... && ngcp-config apply ...
...
root@web01a:~# # HACK/FIX/TEST
...
root@web01a:~# ngcp-approx-snapshots --export 20210319002625
Exporting approx snapshot '20210319002625'...
Successfully exported approx snapshot: /tmp/tmp.71CS2yjrli/ngcp-approx-snapshot-20210319002625.gzip
root@web01a:~#

Copy snapshot from LAB to PROD (~40MB for mr9.4/buster):

root@web01a:~# scp /tmp/tmp.71CS2yjrli/ngcp-approx-snapshot-20210319002625.gzip PROD:/tmp/

Check the versions of packages on PROD, import new approx snapshot, switch to new approx snapshot:

root@sp1:/var/cache# apt-cache policy ngcp-templates-pro
ngcp-templates-pro:
  Installed: 9.4.1.1+0~mr9.4.1.1
  Candidate: 9.4.1.1+0~mr9.4.1.1
  Version table:
 *** 9.4.1.1+0~mr9.4.1.1 990
        990 http://sp:9998/sppro/mr9.4.1 buster/main amd64 Packages
        100 /var/lib/dpkg/status
root@sp1:/var/cache#

root@sp1:/var/cache# ngcp-approx-snapshots --import /tmp/ngcp-approx-snapshot-20210319002625.gzip
Importing approx snapshot '/tmp/ngcp-approx-snapshot-20210319002625.gzip'...
Successfully imported approx snapshot.
root@sp1:/var/cache#

root@sp1:/var/cache# ngcp-approx-snapshots --list
List of locally available snapshots:
20210319002625 | Created at 2021-03-19 00:26:25 on web01a
root@sp1:/var/cache#

root@sp1:/var/cache# ngcp-approx-snapshots --switch 20210319002625
Switching to approx snapshot...
WARNING: the active approx cache will be removed! (Snapshots created?)
Should we switch to approx snapshot '20210319002625'? (yes/no): yes
Removing the active approx cache '/var/cache/approx'...
Switching to the approx snapshot '20210319002625'
root@sp1:/var/cache#

root@sp1:/var/cache# ngcp-approx-snapshots --apt-update # to force local repository metadata update
...
root@sp1:/var/cache# apt-cache policy ngcp-templates-pro
ngcp-templates-pro:
  Installed: 9.4.1.1+0~mr9.4.1.1
  Candidate: 9.4.1.13+0~mr9.4.1.13
  Version table:
     9.4.1.13+0~mr9.4.1.13 990
        990 http://sp:9998/sppro/mr9.4.1 buster/main amd64 Packages
 *** 9.4.1.1+0~mr9.4.1.1 100
        100 /var/lib/dpkg/status
root@sp1:/var/cache#

root@web01a:~# apt update && apt upgrade && ... && ngcp-config apply

It is also useful to see the list of available approx snapshots (the tool ngcp-approx-snapshots has to be executed on the MGMT node):

root@web01a:~# ngcp-approx-snapshots --list
List of locally available approx snapshots:
 * 20210319000519 | Created at 2021-03-19 00:05:19 on sp1
   20210325081717 | Created at 2021-03-25 08:17:17 on web01b
   20210325101739 | Created at 2021-03-25 10:17:39 on web01a [LOCKED]
root@web01a:~#

The symbol "*" above shows the currently active snapshot (it implies that the content of /var/cache/approx/snapshot-info/created and /mnt/glusterfs/mgmt-share/approx_snapshots/20210319000519/snapshot-info/created is identical).

Example for locking/unlocking snapshots (to prevent deleting by mistake):

root@web01a:~# ngcp-approx-snapshots --list
List of locally available approx snapshots:
 * 20210319000519 | Created at 2021-03-19 00:05:19 on sp1
   20210325081717 | Created at 2021-03-25 08:17:17 on web01b
   20210325101739 | Created at 2021-03-25 10:17:39 on web01a [LOCKED]

root@web01a:~# ngcp-approx-snapshots --delete 20210325101739
ERROR: cannot remove locked snapshots '20210325101739'

root@web01a:~# ngcp-approx-snapshots --unlock 20210325101739
Unlocked snapshot '20210325101739'

root@web01a:~# ngcp-approx-snapshots --delete 20210325101739
Deleting approx snapshot '20210325101739'

root@web01a:~# ngcp-approx-snapshots --list
List of locally available approx snapshots:
 * 20210319000519 | Created at 2021-03-19 00:05:19 on sp1
   20210325081717 | Created at 2021-03-25 08:17:17 on web01b
root@web01a:~#

Also, it is possible to search for and show all package versions in all snapshots:

root@sp1:~# ngcp-approx-snapshots --search ngcp-templates-pro
Search results for the package 'ngcp-templates-pro':
   20210326204026 : 9.4.1.1+0~mr9.4.1.1 (mr9.4.1)
   20210326224730 : 9.4.1.2+0~mr9.4.1.2 (mr9.4.1)
    active approx : 9.4.1.3+0~mr9.4.1.3 (mr9.4.1)
    apt installed : 9.4.1.2+0~mr9.4.1.2
    apt candidate : 9.4.1.3+0~mr9.4.1.3
root@sp1:~#

See more details in 'ngcp-approx-snapshots --help'.