Slim Poller

Introduction

The OP5 Monitor Slim Poller is a small, containerised version of OP5 Monitor. It includes only the components needed to run checks and sync the results to the Master server. This allows you to run a smaller, more efficient version of OP5 Monitor in cases where you do not need your Poller to have a user interface or an HTTP API. Furthermore, it facilitates the deployment of OP5 Monitor into existing cloud or container infrastructures, to monitor the services within them.

Prerequisites

To use the Slim Poller, you need to have:

  • a good knowledge of Docker and the Docker ecosystem
  • a working Docker environment or container orchestration tool

Note: ITRS does not support Docker or any of its services, such as swarm clusters and container orchestration. For more information on how to build and manage Docker services, refer to the Docker and Kubernetes documentation.

Limitations

The Slim Poller includes only a subset of the full OP5 Monitor functionality, so it has the following limitations.

User interface: There is no user interface included with a Slim Poller deployment, so you must use the user interface of the Master server instead.

HTTP API: As there is no HTTP API included, all access to the API must go through the Master server.

Notifications: The Slim Poller does not include a mail server, so all notifications must be sent from the Master server (see Notify through master in Distributed Monitoring for more details).

Plug-ins: Only the most frequently used plug-ins are included with the Slim Poller. See Plug-ins for more information.

Passive mode: The Slim Poller must be set up in passive mode, which means communication is one-way, from the Poller to the Master. The Slim Poller must be able to reach the Master (on ports 22 and 15551); it is not possible to deploy the Slim Poller in scenarios where the Master can reach the Slim Poller but the Slim Poller cannot reach the Master. Running a Poller in passive mode also means that you cannot use the Test this check feature (found in the Master server user interface) on the Slim Poller.

Poller mode only: A Slim Poller can only be used in poller mode. It is not possible to set up a Slim Poller as a peer to a Master OP5 Monitor node. It is, however, possible to set up Poller peers.

No autoscaling: Autoscaling a Slim Poller container is currently not supported. However, it is possible to manually set up multiple Poller peers if required.

Unique IP: Each node in OP5 Monitor is identified by an IP address, so each Slim Poller must have a unique and static outgoing (source) IP. It is therefore recommended to set up node affinity (in Kubernetes, for example) to ensure that the deployment does not move between compute nodes. Similarly, if you run several Slim Poller installations in the same cluster, you must ensure that each deployment has a unique outgoing IP address.
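
To illustrate the node affinity recommendation, the following sketch pins the Slim Poller pod to a single compute node using a nodeSelector, which keeps its outgoing IP stable. The node name worker-1 is a placeholder; substitute a node from your own cluster.

```yaml
# Sketch only: pin the Slim Poller Deployment to one compute node.
# "worker-1" is a placeholder node name (see: kubectl get nodes).
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-1
```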

Component/container overview

The Slim Poller consists of the following two containers:

  • slim-poller_naemon-core

    This container contains Naemon, the Merlin Naemon Eventbroker Module and check plug-ins; it is responsible for executing all checks.

  • slim-poller_naemon-merlin

    This container contains the Merlin daemon; the daemon communicates with the Eventbroker Module in the slim-poller_naemon-core container, and is responsible for sending all check results back to the Master server.

The two containers have a one-to-one relationship (it is not possible to set up multiple slim-poller_naemon-core containers with one slim-poller_naemon-merlin container).

Volumes

In order to create stateless containers, you need to have a few volumes set up, as summarised below. These volumes are also included in the deployment examples.

ipc

Containers: slim-poller_naemon-core and slim-poller_naemon-merlin
Mount point: /var/run/naemon/
Persistence: Not required
Required: Yes
Description: Shares a Unix socket between the naemon-core and naemon-merlin containers.

merlin-conf

Containers: slim-poller_naemon-core and slim-poller_naemon-merlin
Mount point: /opt/monitor/op5/merlin/
Persistence: Yes
Required: Yes
Description: The Merlin configuration files.

naemon-conf

Container: slim-poller_naemon-core
Mount point: /opt/monitor/etc/
Persistence: Recommended
Required: No
Description: The Naemon configuration files. If no changes to the default configuration are required, you can omit this volume; Merlin fetches the required object configuration from the Master at startup.

ssh-conf

Container: slim-poller_naemon-core
Mount point: /opt/monitor/.ssh/
Persistence: Yes
Required: Yes
Description: Holds the SSH keys required to connect to the Master server.

status

Container: slim-poller_naemon-core
Mount point: /opt/monitor/var/status/
Persistence: Yes
Required: Yes
Description: Saves the Naemon status files, which contain state information such as comments, downtimes and acknowledgements.

Deployment examples

Note that the following deployment examples set a specific version tag on the Slim Poller container images. You must use the same version tag as the version of your OP5 Monitor Master.

Docker Compose

Docker Compose quickstart

To get started with Docker Compose:

  1. Save the Docker Compose YAML configuration below into a file called docker-compose.yml in a new folder on your system.
  2. Run the following command to start the Slim Poller containers:
    docker-compose up

docker-compose.yml

version: "3.1"
services:
  naemon-core:
    depends_on:
      - naemon-merlin
    volumes:
      - ipc:/var/run/naemon
      - merlin-conf:/opt/monitor/op5/merlin
      - naemon-conf:/opt/monitor/etc
      - ssh-conf:/opt/monitor/.ssh
      - status:/opt/monitor/var/status/
    image: op5com/slim-poller_naemon-core:8.1.0
  naemon-merlin:
    volumes:
      - ipc:/var/run/naemon
      - merlin-conf:/opt/monitor/op5/merlin
    image: op5com/slim-poller_naemon-merlin:8.1.0
volumes:
  ipc:
  merlin-conf:
  naemon-conf:
  ssh-conf:
  status:
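
To run the Compose example in the background and confirm that both services started, you can use standard Docker Compose commands (a sketch; the service names match the file above):

```shell
docker-compose up -d                    # start both containers detached
docker-compose ps                       # both services should show as Up
docker-compose logs -f naemon-merlin    # watch Merlin connect to the Master
```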

Kubernetes/OpenShift

slim-poller.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: op5-slim-poller
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: op5-slim-poller
  template:
    metadata:
      labels:
        app: op5-slim-poller
    spec:
      volumes:
        - name: ipc
          emptyDir: {}
        - name: merlin-conf
          persistentVolumeClaim:
            claimName: merlin-conf-pvc
        - name: naemon-conf
          persistentVolumeClaim:
            claimName: naemon-conf-pvc
        - name: ssh-conf
          persistentVolumeClaim:
            claimName: ssh-conf-pvc
        - name: status
          persistentVolumeClaim:
            claimName: status-pvc
      containers:
        - image: op5com/slim-poller_naemon-core:8.1.0
          name: naemon-core
          resources: {}
          volumeMounts:
            - name: ipc
              mountPath: /var/run/naemon
            - name: merlin-conf
              mountPath: /opt/monitor/op5/merlin
            - name: naemon-conf
              mountPath: /opt/monitor/etc
            - name: ssh-conf
              mountPath: /opt/monitor/.ssh
            - name: status
              mountPath: /opt/monitor/var/status
        - image: op5com/slim-poller_naemon-merlin:8.1.0
          name: naemon-merlin
          resources: {}
          volumeMounts:
            - name: ipc
              mountPath: /var/run/naemon
            - name: merlin-conf
              mountPath: /opt/monitor/op5/merlin
      restartPolicy: Always
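
The PersistentVolumeClaims referenced by the Deployment must exist before the pod can be scheduled, so apply the PVC manifests first. A sketch of the apply order, using the file names from this section:

```shell
kubectl apply -f merlin-conf-pvc.yaml \
              -f naemon-conf-pvc.yaml \
              -f ssh-conf-pvc.yaml \
              -f status-pvc.yaml
kubectl apply -f slim-poller.yaml
kubectl get pods -l app=op5-slim-poller   # wait for the pod to reach Running
```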

merlin-conf-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: merlin-conf-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi

naemon-conf-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: naemon-conf-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

ssh-conf-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssh-conf-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi

status-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: status-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 150Mi

Configuring the Slim Poller

Master server configuration

The following steps explain how to add the Slim Poller to the Master using mon shell commands.

Before you begin

Before you run the Master server configuration commands, ensure that:

  • The hostgroups which you want the Slim Poller to monitor exist on the Master.
  • Folder /opt/monitor/.ssh exists on the Master; if not, create it with the following commands:
    mkdir -p /opt/monitor/.ssh
    chown monitor /opt/monitor/.ssh
    chmod 700 /opt/monitor/.ssh				

Configure the Master server

Configure the Master server as described in the following steps.

  1. Add the Slim Poller to the Master using the following command, replacing the variables between angled brackets <> with your own values.
    mon node add <POLLER_NAME> type=poller hostgroup=<SELECTED_HOSTGROUPS> connect=no notifies=no takeover=no address=<POLLER_IP>

    Note: If the Master is on the same network as the Slim Poller and can monitor the same components as the Slim Poller, you do not need to specify takeover=no.

    Here is an example of a command with variables replaced with real values:

    mon node add poller type=poller hostgroup=pollergroup connect=no notifies=no takeover=no address=192.168.1.2
  2. Restart the Master.
    mon restart

If the Slim Poller was added successfully, the file /var/cache/merlin/config/<POLLER_NAME>.cfg now exists on your Master server. If the file does not exist, the Master was unable to generate the Poller configuration correctly.
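
For example, with the Poller name used in step 1, you can check for the generated file directly on the Master (a sketch):

```shell
ls -l /var/cache/merlin/config/poller.cfg   # "poller" is the example name from step 1
```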

Slim Poller configuration

Configure the Slim Poller as described in the following steps.

  1. Open a shell on the naemon-core container.
  2. Sync the SSH keys with the Master server, using the following command, replacing the variable between angled brackets <> with your own values:
    mon sshkey push <MASTER_IP>	
  3. Use the following convenience script to set up the Master server on the Poller, replacing the variables between angled brackets <> with your own values:
    setup.sh --master-name <MASTER_NAME> --master-address <MASTER_IP> --poller-name <POLLER_NAME>	

    If you prefer to set up the Master server manually using mon commands, follow the Poller steps in the Poller configuration document.

You have now completed the Slim Poller setup and synced the initial configuration from the Master server.
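
Mirroring the Master-side example, here is a sketch of the Poller-side commands with example values substituted. The Master IP and Poller name match the earlier Master configuration example; the Master name "master" is illustrative.

```shell
# Run inside the naemon-core container
mon sshkey push 192.168.1.2
setup.sh --master-name master --master-address 192.168.1.2 --poller-name poller
```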

Plug-ins

Supported plug-ins

To ensure the smallest possible image size, only the most frequently used plug-ins, listed in the table below, are included with the Slim Poller. You can install custom plug-ins using the procedure Install custom plug-ins.

check_apt check_dns check_imap check_nntp check_simap
check_aws check_docker check_ircd check_nntps check_smtp
check_breeze check_dummy check_jabber check_nrpe check_snmp
check_by_snmp_cpu check_dummyv2 check_json check_nt check_snmpif
check_by_snmp_disk_io check_elasticquery check_k8s check_ntp check_spop
check_by_snmp_extend check_file check_ldap check_ntp_peer check_ssh
check_by_snmp_load_avg check_file_age check_ldaps check_ntp_time check_ssmtp
check_by_snmp_memory check_flexlm check_load check_nwstat check_swap
check_by_snmp_procs check_fping check_log check_overcr check_swarm
check_by_ssh check_ftp check_mailq check_pgsql check_tcp
check_clamd check_host check_mrtg check_ping check_time
check_cluster check_hpjd check_mrtgtraf check_pop check_traffic
check_dbi check_http check_mssql_health check_procs check_udp
check_dhcp check_icmp check_mysql check_radius check_ups
check_dig check_ide_smart check_mysql_health check_real check_users
check_disk check_iferrors check_mysql_query check_rpc check_wave
check_disk_smb check_ifoperstatus check_nagios check_sensors check_wmi_plus

Install custom plug-ins

Custom Docker image

Creating a custom Docker image on top of the image provided for the OP5 Monitor Slim Poller is the recommended way to install custom plug-ins on the Slim Poller. It is a flexible approach which enables plug-in dependencies to be installed as well.

To create a custom Docker image, you need to define a new Dockerfile. The new image must be based on the naemon-core image. After you define the base image in the Dockerfile, you can install the plug-in and its dependencies as required.

The example Dockerfile below installs plug-in check_puppetdb.

FROM op5com/slim-poller_naemon-core

ARG PLUGIN_PATH=/opt/plugins/custom

WORKDIR /tmp

# Make sure the custom plug-ins path exists
RUN mkdir -p $PLUGIN_PATH

# Install the plug-in dependencies and git
RUN yum install -y ruby rubygems rubygem-json git

# Check out the plug-in source code from git
RUN git clone https://github.com/xorpaul/check_puppetdb.git

# Install the plug-in, then clean up
RUN cp ./check_puppetdb/check_puppetdb.rb $PLUGIN_PATH \
    && rm -rf ./check_puppetdb/

You can find more information on Dockerfiles at docs.docker.com.

Build image

After you define your new custom Docker image, build and tag the image using the following command, replacing the variables between angled brackets <> with your own values:

docker build -f <path/to/dockerfile> -t <image_name> .

Depending on how you deploy the Slim Poller, you might need to upload or push the image to your environment, or you can upload it to hub.docker.com.

Change deployment files

After you make the new image available in your environment, change your deployment files to use this new image instead of the naemon-core image.

Updates

After each new release of the Slim Poller, you must rebuild your custom image to ensure you have the newest changes. You can use the following commands to do this manually.

docker pull op5com/slim-poller_naemon-core
docker build . --no-cache			
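
If you tag the rebuilt image, you can push it to your registry and roll it out by updating the image reference in your deployment files. A sketch; the registry and tag names are placeholders:

```shell
docker build . --no-cache -t my-registry/slim-poller_naemon-core:custom-8.1.0
docker push my-registry/slim-poller_naemon-core:custom-8.1.0
```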

Plug-in state retention

Some plug-ins save state between each invocation of the plug-in, such as plug-in check_snmp_by_cpu, which saves retention data at location /var/check_snmp_by_cpu. Other plug-ins, such as check_aws, might require a login file saved on the system.

For cases such as these, we recommend that you define a persistent storage volume in your deployment files. As a starting point for creating new persistent storage, you can use the status volume from the deployment examples included above.
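
For example, in the Docker Compose deployment above, persisting the retention directory mentioned in this section could look like the following sketch. The volume name plugin-state is illustrative.

```yaml
# Sketch (Docker Compose): persist plug-in state across container restarts.
# The mount path matches the retention location cited above; the volume
# name "plugin-state" is illustrative.
services:
  naemon-core:
    volumes:
      - plugin-state:/var/check_snmp_by_cpu
volumes:
  plugin-state:
```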