The end of life (EOL) date for this module is 31 January 2020.

Starting a cluster

Single Node Cluster

Before starting, each node needs its network settings configured in config/application.conf, as described in Configuration.

Then run the node using the provided oacluster.sh script:

> ./oacluster.sh

There will be a short delay while the node initialises.

2013-08-23 15:36:44,234  INFO - ClusterCast - Waiting for all cluster members: 1
2013-08-23 15:36:48,905  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host1:2551] - Node [akka.tcp://ClusterSystem@host1:2551] is JOINING, roles []
2013-08-23 15:36:49,911  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host1:2551] - Leader is moving node [akka.tcp://ClusterSystem@host1:2551] to [Up]

At this point, API clients can connect to the server using the configured hostname.

Note: The hostname passed to API clients must match the hostname set in application.conf exactly.
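
As a hypothetical sketch, the relevant settings in config/application.conf might look like the following (the exact key layout is described in Configuration; the values here are illustrative):

```
hostname = "host1"   # externally visible hostname; API clients must use this exact value
port = 2551
```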

Multi-node Cluster

When running a multi-node cluster, the seed-nodes setting must contain at least two nodes from the cluster. These can include the current node itself:

seed-nodes = ["akka.tcp://ClusterSystem@myhost.com:2551", "akka.tcp://ClusterSystem@myotherhost.com:2551"]

The seed-nodes setting must be identical on every cluster node.
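
For example, in a two-node cluster on myhost.com and myotherhost.com, both nodes share the same seed-nodes and differ only in hostname (a sketch using the setting names above; see Configuration for the exact layout):

```
# On myhost.com:
hostname = "myhost.com"
port = 2551
seed-nodes = ["akka.tcp://ClusterSystem@myhost.com:2551", "akka.tcp://ClusterSystem@myotherhost.com:2551"]

# On myotherhost.com:
hostname = "myotherhost.com"
port = 2551
seed-nodes = ["akka.tcp://ClusterSystem@myhost.com:2551", "akka.tcp://ClusterSystem@myotherhost.com:2551"]
```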

Run the cluster node process on each machine. There will be a short delay while the nodes join.

Seen from the perspective of a single node in a two-node cluster, the log will look something like the following. The exact ordering of events depends on the order in which the nodes are started:

2013-09-24 11:54:52,975  INFO - ClusterCast - Waiting for all cluster members: 1
2013-09-24 11:54:57,782  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host1:2551] - Node [akka.tcp://ClusterSystem@host1:2551] is JOINING, roles []
2013-09-24 11:54:58,769  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host1:2551] - Leader is moving node [akka.tcp://ClusterSystem@host1:2551] to [Up]
2013-09-24 11:55:05,339  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host1:2551] - Node [akka.tcp://ClusterSystem@host2:2551] is JOINING, roles []
2013-09-24 11:55:05,766  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host1:2551] - Leader is moving node [akka.tcp://ClusterSystem@host2:2551] to [Up]
2013-09-24 11:55:05,776  INFO - ConnectionConfigActor - Cluster changed to 2 nodes. Min: 1. Redistributing gateway connections

On the other node, the log will look similar to the following:

2013-09-24 11:55:05,380  INFO - ClusterCast - Waiting for all cluster members: 1
2013-09-24 11:55:05,398  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host2:2551] - Welcome from [akka.tcp://ClusterSystem@host1:2551]
2013-09-24 11:55:06,207  INFO - ConfigActor - Starting to monitor config file C:\develop\source\openaccess\cluster\modules\node\src\main\resources\settings.conf
2013-09-24 11:55:06,235  INFO - ConnectionConfigActor - Cluster changed to 2 nodes. Min: 1. Redistributing gateway connections

At this point, API clients can connect to any node in the cluster. When running multiple nodes it is recommended that you specify a minimum number of nodes as described in Configuration.

Dynamically adding nodes

To add more nodes to a running cluster, configure hostname, port, and seed-nodes on the new node as described in Configuration, then start it.
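
As a sketch, a new node host4 joining an existing cluster might be configured as follows (setting names as above; the values are illustrative):

```
hostname = "host4"
port = 2552
seed-nodes = ["akka.tcp://ClusterSystem@host1:2551", "akka.tcp://ClusterSystem@host2:2551"]
```

The node is then started with ./oacluster.sh as usual.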

If the new node has successfully joined the cluster, you should see output similar to this:

2013-09-26 11:37:44,861  INFO - ClusterCast - Waiting for all cluster members: 3
2013-09-26 11:37:44,932  INFO - Cluster(akka://ClusterSystem) - Cluster Node [akka.tcp://ClusterSystem@host4:2552] - Welcome from [akka.tcp://ClusterSystem@host1:2551]
2013-09-26 11:37:45,878  INFO - ConnectionConfigActor - Cluster changed to 4 nodes. Min: 3. Redistributing gateway connections

Multiple node cluster using single binary

It is also possible to run a multi-node cluster on a single machine using the same binary. This can be useful for distributing data processing across all CPU cores.

Specify the number of instances when running the oacluster.sh script:

> ./oacluster.sh --start all 4

This will start four cluster node processes in the background. Logs will be written to workspace/logs/oacluster-node-[1-4].log.

Example output:

[user@localhost node]$ ./oacluster.sh --start all 4
Started instance 1 with pid 26713 on port 2551 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-1.log
Started instance 2 with pid 26770 on port 2552 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-2.log
Started instance 3 with pid 26853 on port 2553 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-3.log
Started instance 4 with pid 26947 on port 2554 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-4.log

All nodes use the same config, but each instance increments the port number. seed-nodes should still be configured using an externally visible IP address or hostname. See Multi-node Cluster.
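
The port-per-instance scheme can be sketched as follows. This assumes, based on the example output above, that instance N listens on base port + N - 1:

```shell
#!/bin/sh
# Sketch: compute the listen port for each local instance, assuming
# instance N listens on BASE_PORT + N - 1 (matches the example output above).
BASE_PORT=2551
INSTANCES=4
i=1
while [ "$i" -le "$INSTANCES" ]; do
  port=$((BASE_PORT + i - 1))
  echo "instance $i -> port $port"
  i=$((i + 1))
done
```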

Node status

You can check the running status of each process using --process-status:

[user@localhost node]$ ./oacluster.sh --process-status
Instance 1 is running with pid 26713 on port 2551 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-1.log
Instance 2 is running with pid 26770 on port 2552 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-2.log
Instance 3 is running with pid 26853 on port 2553 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-3.log
Instance 4 is running with pid 26947 on port 2554 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-4.log
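
If a script needs the pid of a particular instance, one approach (assuming the line format shown in the example output above) is to parse the --process-status output:

```shell
#!/bin/sh
# Sketch: extract the pid from a --process-status line.
# The line format is assumed to match the example output above.
status_line='Instance 2 is running with pid 26770 on port 2552 logging to /opt/openaccess-node/latest/workspace/logs/oacluster-node-2.log'
pid=$(printf '%s\n' "$status_line" | sed -n 's/.*pid \([0-9][0-9]*\).*/\1/p')
echo "$pid"
```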

You can also find out the cluster status for a particular node:

[user@localhost node]$ ./oacluster.sh --cluster-status 2
Cluster status for instance 2:

Members:
        Member(address = akka.tcp://ClusterSystem@localhost:2551, status = Up)
        Member(address = akka.tcp://ClusterSystem@localhost:2552, status = Up)
        Member(address = akka.tcp://ClusterSystem@localhost:2553, status = Up)
        Member(address = akka.tcp://ClusterSystem@localhost:2554, status = Up)
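
To monitor cluster health from a script, one option (assuming the member line format shown in the example above) is to count members in the Up state:

```shell
#!/bin/sh
# Sketch: count members in the Up state from --cluster-status output.
# The member line format is assumed to match the example above.
status='Member(address = akka.tcp://ClusterSystem@localhost:2551, status = Up)
Member(address = akka.tcp://ClusterSystem@localhost:2552, status = Up)
Member(address = akka.tcp://ClusterSystem@localhost:2553, status = Up)
Member(address = akka.tcp://ClusterSystem@localhost:2554, status = Up)'
up_count=$(printf '%s\n' "$status" | grep -c 'status = Up')
echo "Up members: $up_count"
```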

Stopping nodes

To stop all nodes run:

[user@localhost node]$ ./oacluster.sh --kill all

For more details on the oacluster.sh script, see oacluster.sh.