Instances configuration
The network.yml structure
Instances are defined in the network.yml file. To start, add a new configuration block called 'instances:' at the end of the file.
Let’s have a look at the example instances configuration block below. Some of the configuration details are intentionally omitted for simplicity, in order to first create a high-level understanding:
instances:
- name: instance_lb_1
service: kamailio-lb
host: sp1
label: lb
status: online
interfaces:
...
connections:
...
databases:
nosql:
...
sql:
...
- name: instance_proxy_1
...
- name: instance_sems_1
...
Instances are defined as a list of elements, where each element corresponds to an individual instance, regardless of its type. Each element requires the following parameters to be configured:
- name: an arbitrary name to give to the instance. Only letters, digits and '_' characters are allowed.
- service: the type of service the instance has to run. Choose one from the 'Instances Supported' table.
- host: the name of the host where the instance should run by default (i.e. sp1, lb01a, …)
- label: a label assigned to the instance. Choose the value listed in the 'Instances Supported' table corresponding to the 'service' you selected.
- status: the status of the instance. Choose one of: 'online' / 'offline' / 'inactive'.
- interfaces: a list of the interfaces assigned to the instance. See the Interfaces section for more details.
- connections (optional): a list of connections to other instances or services. See the Connections section for more details.
- databases (optional): a list of the database connections required for the instance. See the DB Connections section for more details.
Once a list of instances is configured, apply these changes and then push to all the other nodes:
ngcpcfg apply 'added new instances'
ngcpcfg push all
Please remember that applying this configuration can trigger a short service disruption. If the instances are being configured on a production platform, plan this as maintenance outside of business hours.
Instances operation
While the changes are being applied, you will notice that the number of generated configuration files increases compared to before. This is because the ngcpcfg framework generates a specialized configuration for each single instance using the same base templates.
The system will also add a new service with the specific instance name and set it to the active state by default, unless the status is explicitly defined as 'inactive'.
For example, if the 'kamailio-lb' instance has been defined with the name 'A' and the host 'sp1', then:
- a new folder '/etc/kamailio/lb_A' is created. This folder contains the configuration files needed for proper operation of the newly added instance, with specific values for settings such as 'listen=', 'alias=' etc.
- a new service 'kamailio-lb-A' is automatically started on host 'sp1', using the dedicated configuration files stored under the '/etc/kamailio/lb_A/' folder.
The status of the new instance can be checked using the command:
ngcp-service status kamailio-lb-A
A new dedicated folder will be added on all of the nodes/locations of the cluster, to allow this instance to start anywhere in case of failover.
Template customization for instances
Template customization is also available for instantiated services. To generate a dedicated version of an existing template for a single instance, copy the original .tt2 file to .customtt.tt2.inst-$INSTANCE_NAME, where '$INSTANCE_NAME' refers to the actual name of the instance.
Example: We’ll create /etc/ngcp-config/templates/etc/kamailio/lb/kamailio.cfg.customtt.tt2.inst-A and use it for our customized configuration. In this example, we’ll append a comment at the end of the template.
cd /etc/ngcp-config/templates/etc/kamailio/lb
cp kamailio.cfg.tt2 kamailio.cfg.customtt.tt2.inst-A
echo '# This is my last line comment for the instance config' >> kamailio.cfg.customtt.tt2.inst-A
ngcpcfg apply 'my commit message'
The ngcpcfg command will generate /etc/kamailio/lb_A/kamailio.cfg from our custom template instead of the generic one:
tail -1 /etc/kamailio/lb_A/kamailio.cfg
# This is my last line comment for the instance config
Similar to traditional customtt files, users have to update all .customtt.tt2 files manually every time the corresponding .tt2 file is upgraded, as customtt files take precedence over newly unpacked .tt2 files.
Remember the configuration file precedence (highest to lowest):
- *.customtt.tt2.inst-$INSTANCE_NAME
- *.customtt.tt2.$NGCP_HOSTNAME
- *.customtt.tt2.$NGCP_PAIRNAME
- *.customtt.tt2.$NGCP_NODENAME
- *.customtt.tt2
- *.tt2.inst-$INSTANCE_NAME
- *.tt2.$NGCP_HOSTNAME
- *.tt2.$NGCP_PAIRNAME
- *.tt2.$NGCP_NODENAME
- *.tt2
Check the ngcpcfg framework documentation or the ngcpcfg script man page for a full description of all the supported template files.
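The precedence lookup can be illustrated with a short sketch. The following Python snippet is an illustration only (ngcpcfg implements this resolution internally); it returns the first candidate template name, in precedence order, that exists in a given set of files:

```python
# Illustrative sketch of template precedence resolution: all customtt
# variants are tried before any plain .tt2 variant, and within each
# group the instance-specific name wins over host/pair/node-specific
# and generic names.

def resolve_template(base, existing, instance=None, hostname=None,
                     pairname=None, nodename=None):
    """base: template base name, e.g. 'kamailio.cfg'.
    existing: set of file names present in the templates directory."""
    candidates = []
    for kind in ("customtt.tt2", "tt2"):
        if instance:
            candidates.append(f"{base}.{kind}.inst-{instance}")
        if hostname:
            candidates.append(f"{base}.{kind}.{hostname}")
        if pairname:
            candidates.append(f"{base}.{kind}.{pairname}")
        if nodename:
            candidates.append(f"{base}.{kind}.{nodename}")
        candidates.append(f"{base}.{kind}")
    for name in candidates:
        if name in existing:
            return name
    return None

files = {"kamailio.cfg.tt2", "kamailio.cfg.customtt.tt2.inst-A"}
# The instance-specific customtt wins over the generic .tt2:
print(resolve_template("kamailio.cfg", files, instance="A"))
```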
Interfaces
Each instance always has to be reachable independently, regardless of the node/location it currently runs on. Therefore, each instance has dedicated floating IP address(es) that can migrate between cluster node pairs.
This floating IP address always sticks to its instance and thereby provides IP connectivity to it.
This implies that while migrating to instances, administrators have to reserve a sufficient number of IP addresses in the subnets the services will be listening on. For example: the kamailio-lb service listens on the 'sip_int' interface to receive internal SIP traffic and on the 'sip_ext' interface to receive external traffic. That means a new IP has to be reserved on both the 'sip_int' and 'sip_ext' subnets to be able to start a new kamailio-lb instance. The same approach applies to the other types of instances, with the exception of Proxy and Sems-b2b instances, which only need private IP addresses.
Here is a list of the interface types to be defined for each instance:
| Service name | Interface type | Used for |
|---|---|---|
| kamailio-lb | sip_int | internal SIP messages |
| kamailio-lb | sip_ext | external SIP messages |
| kamailio-proxy | sip_int | internal SIP messages |
| sems-b2b | sip_int | internal SIP messages |
| sems-b2b | rtp_int | internal RTP messages |
| asterisk | sip_int | internal SIP and RTP messages |
As mentioned before, each instance has a parameter called 'host' that defines on which node the instance should run by default.
In a normal state, the system will always try to run the instance on the specified node. If the specified node is down (for any reason, e.g. maintenance), the instance is automatically migrated to the pair node. When the default node returns to its normal state, the instance is migrated back together with the floating IP address belonging to it.
On any node where the instance is meant to run (or could run in case of failover), interface(s) with the same name and subnetwork must be defined.
This means that if, for example, a kamailio-proxy instance with the interface 'neth1' and IP 192.168.1.1 is defined, then on any node it could possibly run on, an interface named 'neth1' with a subnet that includes this IP must be defined. For example, the subnet '192.168.1.0/24' would satisfy this requirement.
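Whether a planned instance IP actually falls inside a node's subnet can be verified with a quick sanity check using Python's standard ipaddress module (this is not part of the ngcpcfg tooling, just a convenient check):

```python
import ipaddress

# Check that the planned instance IP belongs to the subnet configured
# on the corresponding interface of every node the instance may run on.
instance_ip = ipaddress.ip_address("192.168.1.1")
node_subnet = ipaddress.ip_network("192.168.1.0/24")

# True: 192.168.1.0/24 includes 192.168.1.1
print(instance_ip in node_subnet)
```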
As shown in the Instances Interfaces table, certain instances can require more than one interface. However, the same interface can be used for two or more types of connections. This mainly depends on the network topology and on the system administrator’s decision on how to define and interconnect these instance elements.
It is important to define the following parameters for each instance interface:
- name: the name of the host’s interface that has to be used
- ip: the IP address to assign to this instance’s interface
- type: a list of service types assigned to this interface. See the Service Types section for more details.
An example of how the definition of 'instance_lb_1' can look after adding the interfaces:
- name: instance_lb_1
service: kamailio-lb
host: sp1
label: lb
status: online
interfaces:
- name: neth2
ip: 192.168.1.211
type:
- sip_ext
- name: neth1
ip: 192.168.255.211
type:
- sip_int
connections:
...
databases:
nosql:
...
sql:
...
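A minimal sanity check for such an instance definition can be sketched in Python. This is illustrative only: the key names follow the parameters listed above, and ngcpcfg performs its own validation when the configuration is applied:

```python
# Illustrative check that an instance definition carries the required
# keys described above. Not part of the ngcpcfg tooling.
REQUIRED_INSTANCE_KEYS = {"name", "service", "host", "label", "status", "interfaces"}
REQUIRED_INTERFACE_KEYS = {"name", "ip", "type"}

def validate_instance(instance):
    """Return a list of problems found in one instance definition (a dict)."""
    problems = [f"missing key: {k}"
                for k in sorted(REQUIRED_INSTANCE_KEYS - instance.keys())]
    if instance.get("status") not in ("online", "offline", "inactive"):
        problems.append("status must be 'online', 'offline' or 'inactive'")
    for iface in instance.get("interfaces", []):
        problems += [f"interface missing key: {k}"
                     for k in sorted(REQUIRED_INTERFACE_KEYS - iface.keys())]
    return problems

instance = {
    "name": "instance_lb_1", "service": "kamailio-lb", "host": "sp1",
    "label": "lb", "status": "online",
    "interfaces": [{"name": "neth2", "ip": "192.168.1.211", "type": ["sip_ext"]}],
}
print(validate_instance(instance))  # [] -> the definition is complete
```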
Connections between instances
In a standard Sipwise C5 system, all services work in a fixed, predefined order. This means there is no way to make a specific kamailio-proxy work with a specific kamailio-lb. The stack is always Lb→Proxy→Sems-b2b, regardless of whether this is a PRO or Carrier grade setup.
A decision was made to give more flexibility in this regard:
- to provide mobility and scalability (sharding) to the instance-based services
- to allow the system administrator to design their own internal topology (in terms of interconnections between the instances)
This opens up completely new capabilities for the Sipwise C5 system, because it makes it possible to create a dedicated path for call flow/routing, i.e. to define which specific LB, Proxy and Sems-b2b instances are engaged in processing a SIP call.
The list of connections between instances that can be defined:
| Service name | Connection | Scope | Multiple links | Fallback |
|---|---|---|---|---|
| kamailio-lb | proxy | dispatch the call to internal proxies | yes | yes |
| kamailio-proxy | b2b | dispatch the call to b2b | no | no |
| kamailio-proxy | voicemail | dispatch the call to the voicemail server | no | no |
| sems-b2b | lb | select the default LB for outbound registration messages | no | no |
| sems-b2b | proxy | select the default proxy for sems-generated messages | no | no |
| asterisk | proxy | dispatch the outgoing fax | no | no |
By default, instances will automatically try to connect to each other, looking for other instances running on the same node.
Even though this method can be useful for the very first configuration of the system, it can lead to certain obstacles, in particular when more than one instance is active by default on the same node.
Structure of instances connections
This is a list of options/parameters which build up instance connections to other instances/hosts:
- name: the name of the connection to be defined. Available options are: lb, proxy, b2b, voicemail
- algorithm: the algorithm used to dispatch a call in case multiple links are defined and supported, see the 'Instances Connections' table. Available options are: hash, hash_ruri, round_robin, random, serial, weight, parallel.
- links: a list of connections
  - type: defines whether the connection is directed to an instance (type 'instance') or to a standard service (type 'host')
  - name: for 'type: instance' this is the name of the instance to connect to; for 'type: host' it is the host where the service runs
  - interfaces: a list of interfaces where the remote instance/host can be reached
    - name: the name of the instance’s/host’s interface that has to be used
    - type: the interface type
Here is an example of how the connections for the new kamailio-lb instance can be configured:
- name: instance_lb_1
service: kamailio-lb
host: sp1
label: lb
status: online
interfaces:
- name: neth2
ip: 192.168.1.211
type:
- sip_ext
- name: neth1
ip: 192.168.255.211
type:
- sip_int
connections:
- name: proxy
algorithm: random
links:
- type: instance
name: instance_proxy_1
interfaces:
- name: neth1
type: sip_int
- type: instance
name: instance_proxy_2
interfaces:
- name: neth1
type: sip_int
- type: host
name: prx01
interfaces:
- name: neth1
type: sip_int
databases:
nosql:
...
sql:
...
Where:
- a connection to the 'proxy' has been defined
- since kamailio-lb supports multiple connections and fallback definitions, the 'random' selection algorithm has been defined
- links allowing to reach the 'proxy' have been defined: two of them towards the instances 'instance_proxy_1' and 'instance_proxy_2', and one to the default kamailio-proxy service running on the prx01 host
In all of the defined links, the two instances and the host are reachable on the 'neth1' interface of type 'sip_int', and all of these links will be used by 'instance_lb_1' to distribute calls.
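Conceptually, the 'random' algorithm with fallback works as sketched below. This is an illustration of the selection idea only, not the actual implementation inside the service; the link names are taken from the example above:

```python
import random

# Illustrative sketch: pick one of the configured proxy links at
# random, as the 'random' algorithm would; the remaining links stay
# available as fallback targets if the chosen one fails.
links = ["instance_proxy_1", "instance_proxy_2", "prx01"]

chosen = random.choice(links)
fallbacks = [link for link in links if link != chosen]
print("dispatch to:", chosen, "| fallbacks:", fallbacks)
```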
Connections to databases
The concept of instance connections is also applied to the NoSQL/SQL database backends, and is configurable from a dedicated block of the instance definition called 'databases:'. The setup can define connections towards NoSQL and SQL databases using the instance-based configuration (network.yml), thereby pointing to the desired databases a particular instance must be connected to.
Currently the backend for the SQL database is implemented using MariaDB, and for the NoSQL database using KeyDB (analogous to Redis).
Note that if instances are enabled, templating will try to collect the proper values for:
- db.central.${hostname} / nosql.central.${hostname}
- db.replicatedpair.${hostname} / nosql.replicatedpair.${hostname}
- db.replicatedcentral.${hostname}
from:
- 'instances.${name}.databases.sql'
- 'instances.${name}.databases.nosql'
These are then used to build the configuration files for the instances, as well as the /etc/hosts file, which contains a list of name translations for the databases. This makes it possible to have host records for connections towards the NoSQL/SQL databases, including the local one. Using this approach, the NoSQL database (KeyDB) and the SQL database (MariaDB) can listen on internal IP addresses (not on loopback interfaces), avoid using floating IP addresses, and still be reachable by the instances.
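The mapping from an instance's database connections to the templated host names can be pictured with a short sketch. The values and the derivation are hypothetical examples following the naming scheme listed above; the real records are generated by ngcpcfg:

```python
# Illustrative sketch: derive the database host names that templating
# resolves for a node, from an instance's 'databases:' block.
# Values and the suffix mapping here are hypothetical examples.
hostname = "sp1"
databases = {
    "nosql": [{"name": "sp1", "port": 6379, "type": "db_replicated_pair"}],
    "sql": [],
}

# Assumed mapping from connection type to host-name component.
SUFFIX = {
    "db_central": "central",
    "db_replicated_pair": "replicatedpair",
    "db_replicated_central": "replicatedcentral",
}

records = []
for family in ("nosql", "sql"):
    prefix = "nosql" if family == "nosql" else "db"
    for conn in databases[family]:
        records.append(f"{prefix}.{SUFFIX[conn['type']]}.{hostname}")

print(records)  # ['nosql.replicatedpair.sp1']
```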
Only one of the database types can be located standalone: 'db_central'. The 'db_replicated_central' and 'db_replicated_pair' types are always local.
If the database connections are not defined, the Active/Active setup with instances will still work. Hence, configuring the databases is not strictly required.
A list of the connections towards databases that can be defined:
| Service name | NoSQL type | SQL type |
|---|---|---|
| kamailio-lb | db_replicated_pair | |
| kamailio-proxy | db_central, db_replicated_pair | db_central, db_replicated_pair, db_replicated_central |
| sems-b2b | db_central, db_replicated_pair | db_central, db_replicated_pair, db_replicated_central |
'db_central' can be located on a separate node serving the role of the central SQL/NoSQL database, while 'db_replicated_pair' / 'db_replicated_central' work locally on the node.
This is a list of options/parameters which build up instance connections to databases:
- nosql:
  - name: the name of the connection to be defined; equal to the node/location value.
  - port: the port of the NoSQL DB to connect to
  - type: the type of the connection: 'db_central', 'db_replicated_pair', 'db_replicated_central'
- sql:
  - name: the name of the connection to be defined; equal to the node/location value.
  - port: the port of the SQL DB to connect to
  - type: the type of the connection: 'db_central', 'db_replicated_pair', 'db_replicated_central'
Here is an example of how the database connections for the new kamailio-lb instance can be configured:
- name: instance_lb_1
service: kamailio-lb
host: sp1
label: lb
status: online
interfaces:
- name: neth2
ip: 192.168.1.211
type:
- sip_ext
- name: neth1
ip: 192.168.255.211
type:
- sip_int
connections:
- name: proxy
algorithm: random
links:
- type: instance
name: instance_proxy_1
interfaces:
- name: neth1
type: sip_int
- type: instance
name: instance_proxy_2
interfaces:
- name: neth1
type: sip_int
- type: host
name: prx01
interfaces:
- name: neth1
type: sip_int
databases:
nosql:
- name: sp1
port: 6379
type: db_replicated_pair
sql: []
The kamailio-lb service requires only one NoSQL connection of the 'db_replicated_pair' type.
Disable default services
Once the most important/required configuration steps are done and the migrated standard services are no longer doing any significant work, they can be safely moved into offline mode by executing the following commands:
ngcpcfg set /etc/ngcp-config/config.yml "kamailio.lb.status: offline"
ngcpcfg set /etc/ngcp-config/config.yml "kamailio.proxy.status: offline"
ngcpcfg set /etc/ngcp-config/config.yml "b2b.status: offline"
ngcpcfg set /etc/ngcp-config/config.yml "asterisk.status: offline"
ngcpcfg apply "Turn off lb, proxy, b2b, asterisk standard services"
ngcpcfg push all
Please remember that applying this configuration can trigger a short service disruption. Plan this as maintenance outside of business hours.