Deployment

This chapter provides step-by-step instructions on how to set up Sipwise C5 CARRIER/PRO nodes from scratch.

Installation Prerequisites

KVM

Use a USB keyboard and any monitor with a VGA connector (male three-row 15-pin DE-15). A mouse is not required.

Install Medium

The install CD provides the ability to easily install Sipwise C5 CE/PRO/Carrier, including automatic partitioning and installation of the underlying Debian system.

Burn the ISO to a CD with an application of your choice, or preferably write the ISO to a USB stick (all data on the stick will be wiped):

% dd if=sip_provider_mr12.5.1.iso of=/dev/sdX
Do not specify a partition (like /dev/sdb1), but only the disk itself (like /dev/sdb), since the ISO already provides a partition table.
A device prepared by dd-ing the ISO is NOT (U)EFI capable. To be able to boot the device via (U)EFI you need to set it up using grml2usb; see the following section for further details.
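If you want progress output and an explicit flush before unplugging the stick, a slightly more verbose variant of the same dd call might look like this (/dev/sdX is a placeholder for your USB stick):

lsblk -o NAME,SIZE,MODEL                                          # identify the correct target device first
dd if=sip_provider_mr12.5.1.iso of=/dev/sdX bs=4M status=progress
sync                                                              # flush caches before removing the stick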

Instructions for setting up (U)EFI capable USB device using grml2usb

Install grml2usb (see grml.org/grml2usb) or boot a Grml/Grml-Sipwise ISO to be able to use its grml2usb.

Create a FAT16 partition on the USB stick and enable the boot flag for the partition. The GParted tool is known to work fine:

GParted with settings for grml2usb

Then invoke as root or through sudo (adjust /dev/sdX accordingly for your USB device):

# grml2usb sip_provider_mr12.5.1.iso /dev/sdX1

That’s it.

This is what (U)EFI boot looks like:

Grml-Sipwise ISO in UEFI mode

whereas this is what BIOS boot looks like:

Grml-Sipwise ISO in BIOS mode

Instructions for setting up (U)EFI capable USB device on Mac OS

Invoke the Disk Utility (in German locales: "Festplatten-Dienstprogramm") and select the USB stick. Choose the partitioning tab, select "1 Partition" as the volume scheme, name it "SIPWISE" and choose MS-DOS (FAT) as the file system. Then "Apply" the settings.

Double-click the Grml-Sipwise ISO, then switch to a terminal and invoke "diskutil" to identify the device name of the ISO (/dev/disk2s1 in the commands below).

Finally mount the first partition of the ISO and copy the files to the USB device, like:

% mkdir /Volumes/GRML
% sudo mount_cd9660 /dev/disk2s1 /Volumes/GRML
% cp -a /Volumes/GRML/* /Volumes/SIPWISE/
% diskutil unmount /Volumes/SIPWISE/
% sudo umount /Volumes/GRML

The resulting USB stick should now be bootable in (U)EFI mode, meaning it should also boot on recent Mac OS systems (press the Option/Alt key during boot and choose "EFI Boot").

Network

The setup routine needs access to the Sipwise mirror of the public Debian repositories (https://debian.sipwise.com), to the Sipwise repositories (https://deb.sipwise.com) and to the Sipwise license server (https://license.sipwise.com).
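A rough way to verify that these hosts are reachable from the deployment network is a small loop like the following (purely illustrative; it only checks basic HTTPS connectivity, not repository contents):

for url in https://debian.sipwise.com https://deb.sipwise.com https://license.sipwise.com; do
  curl -sSI --connect-timeout 5 "$url" >/dev/null && echo "reachable:   $url" || echo "unreachable: $url"
done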

Installation CARRIER

This section describes how to perform a vanilla CARRIER installation.

CARRIER Hardware

Sipwise C5 CARRIER starts with a minimum deployment of 50,000 subscribers, requiring one chassis with two web servers, two db servers, two load balancers and two proxies. A fully deployed Sipwise C5 CARRIER for 200,000 subscribers fills the chassis up with 14 servers, containing two web servers, two db servers, two load balancers and eight proxies.

Power supply

Connect at least two power cords to the chassis power supplies.

Initial chassis configuration

Connect patch cords to the active CMM node on the rear of each chassis and connect them to the switch.

Chassis IP-management setup

By default the chassis will try to obtain an address via DHCP. If that does not work:

  • connect a laptop to the CMM and try the IP address printed on the label on the CMM

  • set up the IP address manually

Change the CMM password to a string of at least 12 characters, generated for example with the Linux pwgen tool.
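For example, pwgen can generate such a password:

pwgen -s 16 1                  # one random 16-character password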

Change the password for the switch modules (I/O).

Remember to download the FRU numbers for the CRM: Status → Table View → Export to CSV. It is also useful to copy the serial numbers (SN) into that CSV file.

Log in to the CMM and copy the MAC address of the first adapter of each node via:

  • Chassis Management → Reports → Mac Address, then copy the Mac1 column

On each compute node check and configure:

  • RAID setup: for deployments we use RAID1

  • Select Legacy boot mode (Boot → Legacy mode)

Bootstrapping first node

The first node you have to install is web01a. All other nodes use web01a as PXE boot server and as the source of Debian packages.

Insert the Install Medium (when using USB or CD), reboot the server, and when prompted in the top right corner, press F11 to access the Boot menu. Choose SATA Optical Drive when using a CD, or Hard drive C:Removable XXX when using a USB stick.

You will be presented with the Sipwise installer bootsplash:

Sipwise Bootsplash

Navigate down to mr12.5.1 CARRIER (web01a) and press <enter> to start the automatic installation process. This will take quite some time (depending on the network connection speed and the installation medium).

Once the Debian base system is installed and the ngcp-installer is being executed, you can log in via ssh to root@<ip> (the password is sipwise, as specified by the ssh boot option) and watch the Sipwise C5 installation process by executing tail -f /mnt/var/log/ngcp-installer.log -f /tmp/deployment-installer-debug.log. After the installation has finished and when prompted on the terminal to do so, press r to reboot. Make sure the Install Medium is ejected in order to boot into the newly installed system. Once up, you can log in with user root and password sipwise.
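For example (replace <ip> with the address shown on the installer screen):

ssh root@<ip>
tail -f /mnt/var/log/ngcp-installer.log -f /tmp/deployment-installer-debug.log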

Then you need to run the initial configuration for the first node:

It is strongly recommended to run ngcp-initial-configuration within a terminal multiplexer like screen.
screen -S ngcp
ngcp-initial-configuration

Network Configuration

For successful bootstrapping of all other nodes you have to fill in network.yml correctly. You can edit network.yml in your favorite text editor. If needed, add missing sections (for prx0Ya and prx0Yb) according to the low-level design architecture document. It is important to write the MAC address of the first adapter (MAC1) into the 'ha_int' section: only network adapters whose MAC addresses are listed in network.yml are able to boot over PXE and be provisioned via the API. Then apply your configuration changes using the following commands:

  ngcpcfg apply "Initial network configuration"
  ngcpcfg push --shared-only
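To double-check that the MAC addresses copied from the CMM really ended up in network.yml, a simple grep helps (the MAC address below is only a placeholder):

grep -i 'aa:bb:cc:dd:ee:01' /etc/ngcp-config/network.yml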

Deployment of the remaining nodes in the chassis

Prior to deploying additional nodes you must enable the ngcpcfg_api service. Run:

ngcpcfg set /etc/ngcp-config/config.yml 'bootenv.ngcpcfg_api.enable=yes' &&
       ngcpcfg commit 'Activate bootenv API in config.yml'
After the deployment is complete, ngcpcfg_api.enable must be set back to "no" due to security constraints!
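Once all nodes have been deployed, this can be done with the same mechanism, for example:

ngcpcfg set /etc/ngcp-config/config.yml 'bootenv.ngcpcfg_api.enable=no' &&
       ngcpcfg commit 'Deactivate bootenv API in config.yml'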

Power ON web01b from CMM web GUI (by default it will try to boot over PXE) and wait until web01b is deployed and reboots completely.

It is strongly recommended to run ngcp-initial-configuration within a terminal multiplexer like screen.

Run:

screen -S ngcp
ngcp-initial-configuration --join

When it is finished you can deploy all other A-nodes, even in parallel. After that you can deploy the remaining B-nodes.

All nodes but web01a should be configured with the '--join' option after the reboot:

It is strongly recommended to run ngcp-initial-configuration within a terminal multiplexer like screen.
screen -S ngcp
ngcp-initial-configuration --join

If your web01b is in another chassis you should follow the procedure described below:

Disconnect the patch cord from port EXT1 of SM1 which was connected to your office switch. Connect a cable between the two switch modules (EXT1 on SM1 of the first chassis and EXT1 on SM1 of the second chassis). Power ON web01a from the CMM web GUI and wait until web01a boots. Log in to the CMM on the second chassis and start the node for web01b (by default it will try to boot over PXE). Wait until web01b is deployed and reboots completely. Begin deploying all nodes in the A-chassis (you can turn them on even in parallel). After the deployment of the A-nodes completes, you can deploy the remaining nodes in the B-chassis.

Checking the installation:

SSH into web01a and run the following command:

ngcp-status --all

On PRX-nodes there are two MySQL instances running which handle the following replications: sp1↔sp2 (port 3306) and db01→localhost (port 3308). Check the replication of tables between the DB-node and the PRX-node with ngcp-mysql-replication-check:

root@prx01a:~# ngcp-mysql-replication-check -a -v
[prx01a] Replication slave is running on localhost:3306 from 'sp2'. No replication errors.
[prx01a] Replication slave is running on 127.0.0.1:3308 from 'db01a'. No replication errors.
[prx01a] Replication slave is running on 127.0.0.1:3308 from 'db01b'. No replication errors.

Installation PRO

The two nodes are installed one after the other by performing the following steps. The installation can be on bare hardware, virtualised (Proxmox) or in the cloud (Google Cloud).

PRO Hardware

This section is valid for bare-hardware installation only.

Hardware Specifications

Sipwise provides Sipwise C5 platform fully pre-installed on two Lenovo ThinkSystem SR250 servers. Their most important characteristics are:

  • 1x 6C 12T E-2246G CPU @ 3.60GHz

  • 64 GB RAM (DDR4 ECC)

  • 2x 480 GB Lenovo-branded SATA SSD

  • 1x 4-port Intel i350 add-on network card

Hardware Prerequisites

In order to put Sipwise C5 into operation, you need to rack-mount it into 19" racks.

You will find the following equipment in the box:

  • 2 servers

  • 2 pairs of rails to rack-mount the servers

  • 2 cable management arms

You will additionally need the following parts as they are not part of the distribution:

  • 4 power cables

    The exact type required depends on the location of installation, e.g. there are various forms of power outlets in different countries.
  • At least 2 CAT5 cables to connect the servers to the access switches for external communication

  • 1 CAT5 cable to directly connect the two servers for internal communication

Rack-Mount Installation

Install the two servers into the rack (either into a single one or into two geographically distributed ones).

The rails shipped with the servers fit into standard 4-Post 19" racks. If they do not fit, please consult your rack vendor to get proper rails.

The following figure shows the mounted rails:

Rack-mounted Rails
Figure 1. Rack-mounted Rails

Power Supply Cabling

Each server has two redundant Power Supply Units (PSU). Connect one PSU to your normal power circuit and the other one to an Uninterruptible Power Supply Unit (UPS) to gain the maximum protection against power failures.

The cabling should look like in the following picture to prevent accidental power cuts:

Proper PSU Cabling
Figure 2. Proper PSU Cabling

Use a 230V/16A power supply and connect a power cord with a C13 jack to each of the two servers, using either of the two power supply units (PSU) in the back of the servers.

Network Cabling

Internal Communication

The high availability (HA) feature of Sipwise C5 requires that a direct Ethernet connection between the servers is established. One of the network interfaces must be dedicated to this functionality.

External Communication

Remaining network interfaces may be used to make the servers publicly available for communication services (SIP, messaging, etc.) and also for their management and maintenance.

Internal Communication

Patch a cross-link with a straight CAT5 cable between the two servers by connecting the cable to the network interface assigned to the HA component by Sipwise. A direct cable is used for maximum availability because this connection is used by the servers to communicate with each other internally.

We strongly suggest against using a switch in between the servers for this internal interface. Using a switch is acceptable only if there is no other way to connect the two ports (e.g. if you configure a geographically distributed installation).
In case you are using a switch for the cross-link, make sure to enable portfast mode on Cisco switches. STP puts the port into learning mode for 90 seconds after it comes up for the first time. During this learning phase the link is technically up, but no traffic passes through, so the GCS service will detect the other node as dead during boot. The portfast mode tells the switch to skip the learning phase and go to the forwarding state right away: spanning-tree portfast [trunk].
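On a Cisco switch this is an interface-level setting; a minimal example (the interface name is only a placeholder) would be:

switch(config)# interface GigabitEthernet1/0/1
switch(config-if)# spanning-tree portfast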
External Communication

For both servers, depending on the network configuration, connect one or more straight CAT5 cables to the ports on the servers' network cards and plug them into the corresponding switch ports. Information about the proper server ports to be used for this purpose is provided by Sipwise.

Initial BIOS Configuration

Power on both servers, and when prompted on the top right corner, press F2 to access the BIOS menu.

Automatic Power-On Setting:

Navigate to System Security → AC Power Recovery and change the setting to On by pressing the <right> key. This will cause the server to immediately boot, as soon as it's connected to the power supply, which helps to increase availability (e.g. after an accidental shutdown, it can be remotely powered on again by power-cycling both PSUs simultaneously via an IP-PDU).

Go back with <esc> until prompted for Save changes and exit, and choose that option.

Bootstrapping first node

The first node you have to install is sp1. The sp2 node uses sp1 as PXE boot server and as the source of Debian packages.

Insert the Install Medium (when using USB or CD), reboot the server, and when prompted in the top right corner, press F11 to access the Boot menu. Choose SATA Optical Drive when using a CD, or Hard drive C:Removable XXX when using a USB stick.

You will be presented with the Sipwise installer bootsplash:

Sipwise Bootsplash

Navigate down to mr12.5.1 PRO (sp1) and press <enter> to start the automatic installation process. This will take quite some time (depending on the network connection speed and the installation medium).

Once the Debian base system is installed and the ngcp-installer is being executed, you can log in via ssh to root@<ip> (the password is sipwise, as specified by the ssh boot option) and watch the Sipwise C5 installation process by executing tail -f /mnt/var/log/ngcp-installer.log -f /tmp/deployment-installer-debug.log. After the installation has finished and when prompted on the terminal to do so, press r to reboot. Make sure the Install Medium is ejected in order to boot into the newly installed system. Once up, you can log in with user root and password sipwise.

Then you need to run the initial configuration for the first node:

It is strongly recommended to run ngcp-initial-configuration within a terminal multiplexer like screen.
screen -S ngcp
ngcp-initial-configuration

Network Configuration

The Sipwise C5 PRO pair uses eth0 for the public interface, and eth1 for a small, dedicated internal network on the cross-link for the GCS, replication and synchronization. Both of them are configured automatically at install time.

For the service to be used for SIP, RTP, HTTP etc. you need to configure a floating IP in the same network as you have configured on eth0. Put this IP address into the shared_ip array of the interface which contains the ext types in /etc/ngcp-config/network.yml.

ngcp-network --set-interface=eth0 --shared-ip=1.2.3.5

Once done, execute ngcpcfg apply "added shared ip", which will restart (among others) the GCM/CRM processes, which in turn will configure a virtual interface eth0:0 with your floating IP.
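To verify that the floating IP is up on the node that currently holds it, you can simply inspect eth0 (1.2.3.5 is the example address from above):

ip addr show eth0 | grep 1.2.3.5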

For successful bootstrapping of sp2 you have to fill in network.yml correctly. You can edit network.yml in your favorite text editor. It is important to write the MAC address of the second adapter (MAC2) into the internal interface section of sp2: only network adapters whose MAC addresses are listed in network.yml are able to boot over PXE and be provisioned via the API. Then apply your configuration changes using the following commands:

  ngcpcfg apply "Initial network configuration"
  ngcpcfg push --shared-only

After the configuration you can proceed to the second node.

You need to keep sp1 running and connected to the network in order to set up sp2 correctly, because the install procedure automatically synchronizes various configurations between the two nodes during the setup phase.

Setting up PRO sp2

Prior to deploying the additional node you must enable the ngcpcfg_api service. Run:

ngcpcfg set /etc/ngcp-config/config.yml 'bootenv.ngcpcfg_api.enable=yes' &&
       ngcpcfg commit 'Activate bootenv API in config.yml'
After the deployment is complete, ngcpcfg_api.enable must be set back to "no" due to security constraints!

Power ON sp2 server (by default it will try to boot over PXE) and wait until sp2 is deployed and reboots completely.

It is strongly recommended to run ngcp-initial-configuration within a terminal multiplexer like screen.

Run:

screen -S ngcp
ngcp-initial-configuration --join

Verify running Cluster configuration

After both sp1 and sp2 have been set up and are rebooted into the freshly installed system, one node should show cluster node active when logging into the machine, the other node should either have the default message of the day, or cluster node inactive.

Check the running processes on both nodes by executing ngcp-service summary on both of them.

Output of the active node:
# ngcp-service summary
Ok Service                        Managed    Started   Status
-- ------------------------------ ---------- --------- ----------
   approx                         managed    on-boot   active
   asterisk                       managed    by-ha     active
   coturn                         unmanaged  by-ha     inactive
   corosync                       managed    on-boot   active
   dhcp                           unmanaged  by-ha     inactive
   exim                           managed    on-boot   active
   glusterfsd                     managed    on-boot   active
   grafana                        managed    on-boot   active
   haproxy                        unmanaged  on-boot   inactive
   kamailio-lb                    managed    by-ha     active
   kamailio-proxy                 managed    by-ha     active
   kannel-bearerbox               unmanaged  by-ha     inactive
   kannel-smsbox                  unmanaged  by-ha     inactive
   monit                          managed    on-boot   active
   mysql                          managed    on-boot   active
   mysql_cluster                  unmanaged  on-boot   inactive
   ngcp-eaddress                  unmanaged  on-boot   inactive
   ngcp-faxserver                 managed    by-ha     active
   ngcp-license-client            managed    on-boot   active
   ngcp-lnpd                      unmanaged  on-boot   inactive
   ngcp-logfs                     managed    on-boot   active
   ngcp-mediator                  managed    by-ha     active
   ngcp-panel                     managed    on-boot   active
   ngcp-pushd                     unmanaged  by-ha     inactive
   ngcp-rate-o-mat                managed    by-ha     active
   ngcp-snmp-agent                managed    on-boot   active
   ngcp-voisniff                  managed    by-ha     active
   ngcp-websocket                 unmanaged  by-ha     inactive
   ngcp-witnessd                  managed    on-boot   active
   ngcpcfg-api                    managed    on-boot   active
   nginx                          managed    on-boot   active
   ntpd                           unmanaged  on-boot   inactive
   openvpn                        unmanaged  on-boot   inactive
   openvpn-vip                    unmanaged  by-ha     inactive
   pacemaker                      managed    on-boot   active
   prosody                        managed    by-ha     active
   redis                          managed    on-boot   active
   redis-master                   managed    by-ha     active
   rtpengine                      managed    by-ha     active
   rtpengine-recording            unmanaged  by-ha     inactive
   rtpengine-recording-nfs-mount  unmanaged  on-boot   inactive
   sems                           managed    by-ha     active
   sems-b2b                       unmanaged  by-ha     inactive
   slapd                          unmanaged  by-ha     inactive
   snmpd                          managed    on-boot   active
   snmptrapd                      unmanaged  on-boot   inactive
   ssh                            managed    on-boot   active
   sssd-nss                       unmanaged  on-boot   inactive
   sssd-pam-priv                  unmanaged  on-boot   inactive
   sssd-pam                       unmanaged  on-boot   inactive
   sssd-sudo                      unmanaged  on-boot   inactive
   sssd                           unmanaged  on-boot   inactive
   syslog                         managed    on-boot   active
   systemd-timesyncd              managed    on-boot   active
Output of the standby node:
# ngcp-service summary
Ok Service                        Managed    Started   Status
-- ------------------------------ ---------- --------- ---------
   approx                         managed    on-boot   active
   asterisk                       managed    by-ha     inactive
   corosync                       managed    on-boot   active
   coturn                         unmanaged  by-ha     inactive
   dhcp                           managed    by-ha     inactive
   exim                           managed    on-boot   active
   glusterfsd                     managed    on-boot   active
   grafana                        managed    on-boot   active
   haproxy                        unmanaged  on-boot   inactive
   kamailio-lb                    managed    by-ha     inactive
   kamailio-proxy                 managed    by-ha     inactive
   kannel-bearerbox               unmanaged  by-ha     inactive
   kannel-smsbox                  unmanaged  by-ha     inactive
   monit                          managed    on-boot   active
   mysql                          managed    on-boot   active
   mysql_cluster                  unmanaged  on-boot   inactive
   ngcp-eaddress                  unmanaged  on-boot   inactive
   ngcp-faxserver                 managed    by-ha     inactive
   ngcp-license-client            managed    on-boot   active
   ngcp-lnpd                      unmanaged  on-boot   inactive
   ngcp-logfs                     managed    on-boot   active
   ngcp-mediator                  managed    by-ha     inactive
   ngcp-panel                     managed    on-boot   active
   ngcp-pushd                     unmanaged  by-ha     inactive
   ngcp-rate-o-mat                managed    by-ha     inactive
   ngcp-snmp-agent                managed    on-boot   active
   ngcp-voisniff                  managed    by-ha     inactive
   ngcp-websocket                 unmanaged  by-ha     inactive
   ngcp-witnessd                  managed    on-boot   active
   ngcpcfg-api                    managed    on-boot   active
   nginx                          managed    on-boot   active
   ntpd                           unmanaged  on-boot   inactive
   openvpn                        unmanaged  on-boot   inactive
   openvpn-vip                    unmanaged  by-ha     inactive
   pacemaker                      managed    on-boot   active
   prosody                        managed    by-ha     inactive
   redis                          managed    on-boot   active
   redis-master                   managed    by-ha     inactive
   rtpengine                      managed    by-ha     inactive
   rtpengine-recording            unmanaged  by-ha     inactive
   rtpengine-recording-nfs-mount  unmanaged  on-boot   inactive
   sems                           managed    by-ha     inactive
   sems-b2b                       unmanaged  by-ha     inactive
   slapd                          unmanaged  by-ha     inactive
   snmpd                          managed    on-boot   active
   snmptrapd                      unmanaged  on-boot   inactive
   ssh                            managed    on-boot   active
   sssd-nss                       unmanaged  on-boot   inactive
   sssd-pam-priv                  unmanaged  on-boot   inactive
   sssd-pam                       unmanaged  on-boot   inactive
   sssd-sudo                      unmanaged  on-boot   inactive
   sssd                           unmanaged  on-boot   inactive
   syslog                         managed    on-boot   active
   systemd-timesyncd              managed    on-boot   active

If your output matches the output above, you’re fine.

Also double-check that the replication is up and running by executing mysql and issuing the query show slave status\G, which should NOT report any errors on either node.
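A quick, non-interactive way to check the relevant fields on each node is, for example:

mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Running:|Error'

Both Slave_IO_Running and Slave_SQL_Running should report Yes, and the error fields should be empty.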

Synchronizing configuration changes

Between sp1 and sp2 there is a shared glusterfs storage, which holds the configuration files. If you change any config option on the active server and apply it using ngcpcfg apply "my commit message", then execute ngcpcfg push to propagate your changes to the second node.

Note that ngcpcfg apply "my commit message" is implicitly executed on the other node if you push configurations to it.
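A typical configuration change on the active node therefore looks like:

ngcpcfg apply "my commit message"
ngcpcfg push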

What’s happening when booting the Sipwise Deployment ISO

What happens when booting the Sipwise Deployment ISO is roughly:

  • Grml ISO boots up

  • The Grml system checks for the netscript boot option (specified in the kernel command line which is visible at the boot splash when pressing <TAB>)

  • The URL provided as argument to the netscript= boot option will be downloaded and executed

  • The netscript provides all the steps that need to be executed depending on which boot options have been specified (see the following section for more information)

Deployment stages

If installing Debian or Debian plus ngcp this happens during deployment:

  • boot options get evaluated

  • if installing a PRO system and usb0 exists IPMI is configured with IP address 169.254.1.102

  • checking for known disks (based on a whitelist) to avoid accidental data loss; the installer exits if trying to install on an unknown disk type

  • starting ssh server for remote access

  • partition disk and set up partitions

  • make basic software selection using /etc/debootstrap/packages

  • run grml-debootstrap to install Debian base system

  • adjusting /etc/hosts of target system

  • adjusting /etc/udev/rules.d/70-persistent-net.rules of target system if installing a virtual PRO system

If installing ngcp the following takes place:

  • downloading the appropriate ngcp-installer version

  • executing ngcp-installer with settings as specified by boot options (see the following section for further information)

  • build the ngcp-rtpengine kernel module (as this can't be done automatically inside a chroot, where the kernel version of the deployment system doesn't necessarily match the kernel version of the installed system)

  • stop any running processes of the ngcp system that have been started during installation

  • copy generated log files to installed system

  • adjust /etc/hosts, /etc/hostname and /etc/network/interfaces of the installed system

  • kill any remaining ngcp system processes

  • ask for rebooting/halting the system

Important boot options for the Sipwise Deployment ISO

  • arch=i386 - install a 32bit Debian system instead of defaulting to a 64bit system (use this only for non-ngcp installations!)

  • debianrelease=… - use specified argument as Debian release (instead of defaulting to bookworm)

  • dns=… - use specified argument (IP address) as name server

  • ip=$IP::$GATEWAY:$NETMASK:$HOSTNAME:$DEVICE:off - use static IP address configuration instead of defaulting to DHCP

  • ngcpce - install CE flavour of ngcp

  • ngcphostname=… - use specified argument as hostname for the ngcp system

  • ngcpinstvers=… - use specified argument as ngcp-installer version (e.g. 0.7.3) instead of defaulting to latest version

  • ngcpsp1 - install PRO flavour of ngcp, system 1 (sp1)

  • ngcpsp2 - install PRO flavour of ngcp, system 2 (sp2)

  • ngcpppa=… - use Sipwise PPA repository during installation

  • nodhcp - disable DHCP (only needed if no network configuration should take place)

  • nongcp - do not install any ngcp flavour but only a base Debian system

  • puppetenv=… - install puppet software in target system and enable specified argument as its environment (note that the target system's hostname must be known to puppet's manifest)

  • ssh=… - start SSH server with specified password for users root and grml
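As an illustration only, an entry installing the PRO flavour on sp1 with a static IP configuration and SSH access during deployment could combine the options like this (all addresses are placeholders):

ngcpsp1 ssh=sipwise ip=192.0.2.10::192.0.2.1:255.255.255.0:sp1:eth0:off dns=192.0.2.53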

Logfiles

The settings assigned to the ngcp installer are available at /tmp/installer-settings.txt.

The log file of the Debian installation process is available at /tmp/grml-debootstrap.log.

The log file of the deployment process is available at /tmp/deployment-installer-debug.log.

The log files of the ngcp installer are available at /mnt/var/log/ngcp-installer.log as long as the installation is still running or if the installation fails. Once the installation process has completed successfully, the file system of the target system is no longer mounted, so by default you can't access /mnt/var/log/ngcp-installer… any more. But the log files are also available inside /var/log/ of the installed system, so you can access them even after rebooting into the installed system. You can access the log files from the deployment ISO by executing:

root@spce ~ # Start lvm2
root@spce ~ # mount /dev/mapper/ngcp-root /mnt
root@spce ~ # ls -1 /mnt/var/log/{grml,ngcp}*.log
/mnt/var/log/grml-debootstrap.log
/mnt/var/log/deployment-installer-debug.log
/mnt/var/log/ngcp-installer.log

Debugging the deployment process

By default the deployment script enables an SSH server with password sipwise for the users root and grml, so you can log in already while the deployment is still running (or if it fails).

At the top of the system screen is a logo splash where you should find the IP address of the system so you can ssh to it:

Deployment ISO logo

If you think the problem might be inside the deployment script itself (which is available as /tmp/netscript.grml on the system), enable the "debugmode" boot option to run it under "set -x" and to enable timestamp tracing. The deployment process log is available at /tmp/deployment-installer-debug.log.

Custom Installation

The installation and configuration steps are separated. First, the hard disk drive or SSD is partitioned, Debian is bootstrapped and NGCP packages are installed. Then the server is rebooted. Afterwards, it is necessary to run the ngcp-initial-configuration tool.

After the reboot you can modify the /etc/ngcp-installer/config_deploy.inc file to tune the configuration process.