The end of life (EOL) date for this module is 31 January 2020.

Cluster Performance (defunct)

Performance Guidelines

You can adjust the values in the calculator below to see how resource usage changes. Please read below for information about how this data was gathered.

Note: the testing involved starting all clients at once. Better results can be achieved by staggering client startup.

[Calculator inputs: OA clients, DataSet subscriptions per client. Outputs: CPU (%), Memory (GB), Network Send (kbps), Network Recv (kbps).]

Methodology

The performance data was gathered by running a four node Open Access cluster on two Rackspace 8GB Standard machines in the Rackspace cloud.

The cluster was connected to 30 gateways, each with 16 probes, 4 entities per probe, and 5 samplers per entity, where each sampler was a 4x4 toolkit updating every second. 40% of the cells in each gateway were in either warning or critical state.
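As a back-of-the-envelope sketch of the monitored surface this topology implies (assuming "4x4 toolkit" means a dataview of 4 rows by 4 columns, i.e. 16 cells per sampler):

```python
# Estimated cell counts for the test topology described above.
# Assumption: a "4x4 toolkit" sampler publishes a 4-row x 4-column
# dataview, i.e. 16 cells per sampler.
gateways = 30
probes_per_gateway = 16
entities_per_probe = 4
samplers_per_entity = 5
cells_per_sampler = 4 * 4

samplers_per_gateway = probes_per_gateway * entities_per_probe * samplers_per_entity
cells_per_gateway = samplers_per_gateway * cells_per_sampler
total_cells = gateways * cells_per_gateway

print(samplers_per_gateway)    # 320 samplers per gateway
print(cells_per_gateway)       # 5120 cells per gateway
print(total_cells)             # 153600 cells across the cluster
print(int(total_cells * 0.4))  # 61440 cells in warning or critical
```

Updating every second, this works out to roughly 150,000 cell updates per second flowing into the cluster during each test.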

Each Open Access client subscribed to a number of unique paths that directly matched an individual cell.

Each variable (number of clients, paths per client, etc.) was then varied in turn. For each value, the cluster was started and allowed one minute to settle and connect to the gateways. We then started the Open Access client(s) and waited five minutes. Finally, the cluster and clients were shut down before the next test.

For each test, the CPU usage, memory usage, and network ingress and egress were recorded to create a performance profile. The first minute of each series was discarded so that node startup did not skew the results. For each profile, the maximum, minimum, and average of each metric were recorded. Finally, we fitted a simple linear regression to the points from the profiles.
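The profiling steps above can be sketched as follows. This is a minimal illustration, not the actual tooling used: the one-sample-per-second assumption, the 60-sample warm-up cut-off, and the example numbers are all mine.

```python
import statistics

def profile(series, warmup=60):
    """Summarise a metric series sampled once per second,
    discarding the first `warmup` samples (node startup)."""
    settled = series[warmup:]
    return {
        "min": min(settled),
        "max": max(settled),
        "avg": statistics.mean(settled),
    }

def linear_fit(xs, ys):
    """Simple (ordinary least squares) linear regression:
    returns slope and intercept of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical example: average CPU (as a fraction of one core)
# measured at several client counts.
clients = [10, 20, 30, 40]
avg_cpu = [0.05, 0.09, 0.13, 0.17]
slope, intercept = linear_fit(clients, avg_cpu)
print(slope, intercept)  # ~0.004 CPU per client, ~0.01 baseline
```

The per-client slope from such a fit is what drives the calculator at the top of this page: multiply by the number of clients and add the baseline.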

[Figures: CPU and network usage profiles as the number of subscribed paths varies, and as the number of clients varies.]

Naturally, we also needed to establish how much load could be placed on the cluster before issues occurred. During each test, both the cluster's built-in Self-Monitoring and our own Geneos monitoring were used to verify the integrity and stability of the cluster.

We encountered a number of failure modes which are detailed in Cluster Failure Cases.