OP5 Monitor ["OP5 Monitor"]
["OP5 Monitor > Slim Poller"]["User Guide"]

Set up Slim Poller


The Slim Poller is a scaled-down version of the poller. For more information on its contents and limitations, see Slim Poller in Scalable monitoring.

To deploy a Slim Poller you need to have:

  • A good knowledge of Docker and the Docker ecosystem.
  • A working Docker environment or container orchestration tool.

Note: ITRS does not support Docker or any of its services, such as swarm clusters and container orchestration. For more information on how to build and manage Docker services, refer to the Docker and Kubernetes documentation.

Container overview

The Slim Poller consists of two containers, which include the following components:

  • slim-poller_naemon-core — contains Naemon, the Merlin Eventbroker Module and check plugins; it is responsible for executing all checks.
  • slim-poller_naemon-merlin — contains the Merlin daemon; the daemon communicates with the Eventbroker Module in the slim-poller_naemon-core container, and is responsible for sending all check results back to the master server.

The two containers have a one-to-one relationship. You cannot set up multiple slim-poller_naemon-core containers with one slim-poller_naemon-merlin container.

Before you begin

Before you add a Slim Poller to a master server, ensure the following:

  • Any peers are fully connected and synchronised. For guidance, see Check cluster state information in Scale up your monitoring environment.
  • The host groups which you want the Slim Poller to monitor exist on the master.
  • Folder /opt/monitor/.ssh exists on the master; if not, create it with the following commands:
    mkdir -p /opt/monitor/.ssh
    chown monitor /opt/monitor/.ssh
    chmod 700 /opt/monitor/.ssh				

Install SSH keys

The Slim Poller container must have SSH keys installed that are authorized at the master server. One way to achieve this is to create a new Docker image from the Slim Poller image.

Caution: The SSH key added in this image must also be added to /opt/monitor/.ssh/authorized_keys on the designated master server and all peers.

The following is an example Dockerfile that installs the SSH keys:

FROM op5com/slim-poller_naemon-core:8.3.0

COPY --chown=monitor:root id_rsa /opt/monitor/.ssh/id_rsa
COPY --chown=monitor:root id_rsa.pub /opt/monitor/.ssh/authorized_keys

RUN chmod 600 /opt/monitor/.ssh/id_rsa
RUN chmod 644 /opt/monitor/.ssh/authorized_keys
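As a sketch of how you might produce the key files referenced by the Dockerfile above, you can generate a passwordless key pair with ssh-keygen (the key size is an assumption):

```shell
# Generate a passwordless RSA key pair with the filenames expected by
# the COPY lines in the example Dockerfile above.
ssh-keygen -t rsa -b 4096 -N "" -f ./id_rsa
```

You can then build the derived image with docker build -t <your_image_tag> . in the same directory, and append the contents of id_rsa.pub to /opt/monitor/.ssh/authorized_keys on the designated master server and all peers.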

Automatic setup

Beginning with OP5 Monitor 8.3.x, you can make use of the following Kubernetes features:

  • Autoscaling in Kubernetes. For more information, see Horizontal Pod Autoscaler.

  • The kubectl command, to manually increase the number of running Slim Pollers.

Setup for autoscaling

To achieve autoscaling, the following components are set up:

  • A container entry script that connects to a designated master, and registers on the cluster to all relevant masters and peers.

  • The cluster_update module is set up to detect any changes to the cluster. A connection is established with the designated master, and the cluster configuration is updated.

  • Slim Pollers are identified by UUID on the master, but not towards peers.

  • The address of each Slim Poller must be an address that is reachable from within the Kubernetes cluster; for example, the pod IP. It is not necessary for this IP to be reachable from masters outside the Kubernetes cluster.

A number of environment variables are used to configure this, such as the master IP, poller hostgroup, and so on. For more information, see Setting environment variables.

Setting environment variables

For autoscaling to work correctly, you need to set up the following environment variables:

  • MASTER_ADDRESS: The address of the designated master node.
  • MASTER_NAME: Name of the master node.
  • Merlin port of the master node. By default, this is set to 15551.
  • The address that this poller should use. Use the Kubernetes pod IP.
  • POLLER_NAME: Name of the poller. In autoscaling, this name is generated by Kubernetes.
  • One or more hostgroups that the poller is responsible for. If there are multiple hostgroups, specify them in a comma-separated list. These hostgroups must exist on the master server prior to container startup.
  • FILES_TO_SYNC: Optional. Comma-separated list of paths to sync from the master server.
  • Log level of the various setup scripts; this does not change the log output of Naemon and Merlin. Optional. Possible values: debug, info, error, critical. Default value: info.


The following example shows a YAML file, example-autoscaling.yml, with the environment variables configured for autoscaling in Kubernetes:
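As a minimal sketch of what the env section of such a pod spec might contain (all literal values here are assumptions, not the original example-autoscaling.yml):

```yaml
# Sketch of the env section of a Slim Poller container spec.
# All literal values are assumptions; variable names follow the table above.
env:
  - name: MASTER_ADDRESS
    value: "192.0.2.10"           # address of the designated master node
  - name: MASTER_NAME
    value: "master1"
  - name: POLLER_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name  # generated by Kubernetes, per the table above
  - name: FILES_TO_SYNC
    value: "/opt/monitor/etc/resource.cfg"
# The poller address would typically be injected from the pod IP, e.g.
# valueFrom: { fieldRef: { fieldPath: status.podIP } }.
```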


Scaling to a higher number of replicas

After starting a Slim Poller deployment, you can scale up the replicas manually by using the kubectl command:

kubectl scale deployment.v1.apps/op5-slim-poller --replicas=2

You can also use Kubernetes autoscaling. For more information, see Horizontal Pod Autoscaler.

Example Kubernetes deployment file

See the example slim-poller-kubernetes.yml file:
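As a minimal sketch of such a deployment file, assuming the image names from the container overview above (the merlin image tag and volume name are assumptions, and environment variables are omitted for brevity):

```yaml
# Sketch: one pod running both Slim Poller containers, sharing an
# emptyDir so the Merlin daemon can reach the Naemon socket.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: op5-slim-poller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: op5-slim-poller
  template:
    metadata:
      labels:
        app: op5-slim-poller
    spec:
      containers:
        - name: naemon-core
          image: op5com/slim-poller_naemon-core:8.3.0
          volumeMounts:
            - name: naemon-socket
              mountPath: /var/run/naemon/
        - name: naemon-merlin
          image: op5com/slim-poller_naemon-merlin:8.3.0
          volumeMounts:
            - name: naemon-socket
              mountPath: /var/run/naemon/
      volumes:
        - name: naemon-socket
          emptyDir: {}
```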


Manual setup

Set up volumes and containers

Volume overview

In order to create stateless containers, you need to have a few volumes set up, as summarised below. These volumes are also included in the deployment examples.

  • Mounted at /var/run/naemon/ in both containers. Persistence: not required. Required: yes. Needed for sharing a Linux socket between the naemon-core and naemon-merlin containers.
  • Mounted at /opt/monitor/op5/merlin/. Persistence: required. Required: yes. The Merlin configuration files.
  • naemon-conf: mounted at /opt/monitor/etc/ in slim-poller_naemon-core. Persistence: recommended. Required: no. The Naemon configuration files. If no changes to the default configuration are required, you can omit this volume; Merlin will fetch the required object configuration from the master at startup.
  • status: mounted at /opt/monitor/var/status/ in slim-poller_naemon-core. Persistence: required. Required: yes. Saves the Naemon status files, which contain state information such as comments, downtimes, and acknowledgements.

Docker Compose quick start

To get started with Docker Compose:

  1. Save the Docker Compose YAML configuration below into a file called docker-compose.yml in a new directory on your system.
  2. Run the following command to start the Slim Poller containers:
    docker-compose up
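As a minimal docker-compose.yml sketch following the volume layout described under Volume overview (the volume names and the merlin image name are assumptions):

```yaml
# Sketch: both Slim Poller containers with named volumes for the
# Naemon socket, Merlin configuration, and status files.
version: "3"
services:
  naemon-core:
    image: op5com/slim-poller_naemon-core:8.3.0
    volumes:
      - naemon-socket:/var/run/naemon/
      - merlin-conf:/opt/monitor/op5/merlin/
      - status:/opt/monitor/var/status/
  naemon-merlin:
    image: op5com/slim-poller_naemon-merlin:8.3.0
    volumes:
      - naemon-socket:/var/run/naemon/
      - merlin-conf:/opt/monitor/op5/merlin/
volumes:
  naemon-socket:
  merlin-conf:
  status:
```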

Deployment examples

Note that the following deployment examples have a specific version tag set on the Slim Poller container images. Ensure that your setup uses the same version tag as the version of your OP5 Monitor master.


For Kubernetes/OpenShift









Configure the master

In a load-balanced environment with peered masters, you must perform these steps on the master and all of its peers.

  1. Add the Slim Poller to the master using the following command, replacing the variables between angled brackets (<>) with your own values:
    mon node add <poller_name> type=poller hostgroup=<selected_hostgroups> connect=no notifies=no takeover=no address=<poller_IP>

    Note: If the master is on the same network as the Slim Poller and can monitor the same components as the Slim Poller, then you do not need to specify takeover=no.

    For example:

    mon node add poller type=poller hostgroup=pollergroup connect=no notifies=no takeover=no address=<poller_IP>
  2. Restart the master.
    mon restart

Success: You can now see file /var/cache/merlin/config/<POLLER_NAME>.cfg on your master server. If the file does not exist, the master cannot correctly generate the poller configuration.

Configure the Slim Poller

To configure the Slim Poller:

  1. Open a shell on the slim-poller_naemon-core container.
  2. Use the following convenience script to set up the master server on the Slim Poller, replacing the variables between angled brackets (<>) with your own values:
    setup.sh --master-name <master_name> --master-address <master_IP> --poller-name <poller_name>	

    If you prefer to set up the master server manually using mon commands, see Poller configuration.

Note: Beginning with OP5 Monitor 8.3.x, you can enable node identification using UUID instead of IP. For guidance, see UUID identification in Scale up your monitoring environment.

Plugin state retention

Some plugins save state between invocations; for example, check_by_snmp_cpu saves retention data at /var/check_by_snmp_cpu. Other plugins might require a login file saved on the system; for details, see Plugin reference.

For cases such as these, we recommend that you define a persistent storage volume in your deployment files. As a starting point for creating new persistent storage, you can use the status volume from the Deployment examples.
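In a Kubernetes deployment, such persistent plugin state might be sketched like this (the volume and claim names are assumptions):

```yaml
# Sketch: extra persistent storage for plugin retention data,
# added to the naemon-core container spec.
volumeMounts:
  - name: plugin-state
    mountPath: /var/check_by_snmp_cpu
volumes:
  - name: plugin-state
    persistentVolumeClaim:
      claimName: slim-poller-plugin-state
```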