
Netprobe Overview

Introduction

This topic introduces the general characteristics of the Netprobe in the Geneos architecture.

For more information on the top-level overview of Geneos, see the Geneos Product Overview.

What is a Netprobe?

A Netprobe is a lightweight monitoring agent deployed on every node you want to manage. It is the active data-collection agent in the Geneos framework, so at least one Netprobe must be running in the environment.

Netprobes comprise the instrumentation layer of the Geneos solution, and are called probes in the Gateway Setup Editor.

Plug-ins

Plug-ins are run locally by the Netprobe to collect data. Sampled data and updates are passed from the Netprobe to the connected Gateway using a protocol designed to minimise network traffic. The Netprobe can also execute control-type functions upon receiving commands from the Gateway.

The Netprobe has a wide variety of plug-ins for you to choose from, depending on the platform, application, and statistics that you wish to monitor. To see the plug-ins supported by the Netprobe, see the list of plug-ins on the Geneos Home Page.

Samplers

A sampler is a specific configuration of a plug-in; using samplers, you can run multiple instances of a plug-in with different configurations. Samplers are applied at the Managed Entities level.
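
For example, the following simplified Gateway setup fragment (the sampler, probe, and entity names are illustrative) defines a sampler that runs the CPU plug-in and applies it to a Managed Entity:

<samplers>
	<sampler name="cpuSampler">
		<plugin>
			<cpu/>
		</plugin>
	</sampler>
</samplers>
<managedEntities>
	<managedEntity name="linux-box">
		<probe ref="probe1"/>
		<sampler ref="cpuSampler"/>
	</managedEntity>
</managedEntities>

A second sampler could wrap the same plug-in with a different configuration and be referenced from the same or another Managed Entity.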

Supported platforms

For a complete list of supported platforms and other compatibilities, see the Geneos Compatibility Matrix.

Netprobe features

The Netprobe has certain key features that define how you use it within Geneos.

Netprobe setup

After a Netprobe is installed and running on a machine, the next step is to set it up to connect to a Gateway. Depending on the mode that the Netprobe is running in, as well as its relationship with a Gateway, you can set up a Netprobe by itself or from the Gateway.

For more information on the Netprobe setup settings, see Netprobe setup.

Command-line options

There are some command-line options that can be passed to the Netprobe installer for Windows platforms, or directly to the Netprobe binary.

For the full list of both installer and binary command-line options, see Netprobe Command-line Options.
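
For example, on Linux platforms the Netprobe binary is commonly started with its listening port as an argument. This is only a sketch: the binary name netprobe.linux_64 and the port 7036 shown here may differ in your installation.

# Start the Netprobe listening on port 7036 and write output to a log file (illustrative)
./netprobe.linux_64 7036 > netprobe.log 2>&1 &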

Internal commands

Gateway commands are the primary method of interaction between the Gateway and connected users. Commands are invoked by users through a controlling process (such as Active Console), which prompts the Gateway to perform a given operation.

For more information on the internal commands that apply to the Netprobe, see Netprobe commands in Gateway Commands.

Security features

The following options help keep the Netprobe secure:

Transport Layer Security

Geneos components can communicate using Transport Layer Security (TLS) as well as TCP/IP. This is configured using command-line options for a listening Netprobe, and using the XML setup file for floating and self-announcing Netprobes. See Secure Communications.
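
For example, a listening Netprobe might be started in secure mode with a certificate and key supplied on the command line. This is a sketch only: the -secure, -ssl-certificate, and -ssl-certificate-key options are assumed here, so check Secure Communications for the exact option names and behaviour in your version.

# Illustrative secure start-up of a listening Netprobe
./netprobe.linux_64 7036 -secure -ssl-certificate netprobe.pem -ssl-certificate-key netprobe.key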

Variables

Variables refer to settings on the Netprobe that relate to the system or platform it is running on.

  • On non-Windows platforms, these are set as environment variables in the shell from which the Netprobe is launched.
  • On Windows platforms, these are set in the registry.

For more information on applying Netprobe variables, see Netprobe variables.
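
For example, on Linux you could enable the DISCOVERY_DEBUG variable described later in this topic by exporting it in the shell before launching the Netprobe (the binary name and the value are illustrative):

# Set a Netprobe variable in the launching shell, then start the Netprobe
export DISCOVERY_DEBUG=true
./netprobe.linux_64 7036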

Whitelisting

Where a whitelist is defined, the Netprobe restricts the plug-ins and commands that can be run on it to only those listed.

Whitelists are defined in the Netprobe setup file.

For more information on whitelisting, see Netprobe Whitelist.

Netprobe types

Netprobes have different modes of operation, which are referred to as their type. It is useful to configure Netprobes as different types to suit the application environment.

The virtual Netprobe is an exception, as it is a completely different type of Netprobe.

Normal Netprobe

In normal mode, the Netprobe remains passive, and only starts to monitor data when it receives a configuration from an incoming Gateway connection.

In this mode, every Netprobe needs to be separately specified in the Gateway setup file. In addition, the setup needs to be manually updated if the Netprobe is migrated to a different server.

Most client implementations begin with Netprobes running in normal mode.

Floating Netprobe

In floating mode, a Netprobe and Managed Entity are configured on the Gateway, but without connection details.

On starting, the Netprobe informs the Gateway of its host name and listen port. In turn, the Gateway connects to the Netprobe. This allows a Netprobe to automatically follow an application that migrates between servers.

For more information on using floating Netprobes, see Manage floating Netprobes.
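
As a sketch only, a floating Netprobe setup file follows the same pattern as the self-announcing example shown later in this topic, but does not supply a Managed Entity. The element names below are indicative and should be checked against Manage floating Netprobes:

<?xml version="1.0" encoding="ISO-8859-1"?>
<netprobe
	compatibility="1"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:noNamespaceSchemaLocation="http://schema.itrsgroup.com/GA2011.2-110303/netprobe.xsd">
	<floatingProbe>
		<enabled>true</enabled>
		<retryInterval>60</retryInterval>
		<probeName>float-probe-01</probeName>
		<gateways>
			<gateway>
				<hostname>192.168.101.52</hostname>
				<port>7039</port>
			</gateway>
		</gateways>
	</floatingProbe>
</netprobe>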

Self-announcing Netprobe

With Self-Announcing Netprobes, no Netprobe or Managed Entity is configured on the Gateway. Instead, you configure the Netprobe setup file to specify both the Netprobe and Managed Entity names, along with one or more type names that correspond to types in the Gateway setup file. This configuration allows Netprobes to start up on any hosts and immediately be configured with default monitoring. In addition, the Netprobe can fetch its setup file from a remote source.

For more information on using Self-Announcing Netprobes, see Manage Self-Announcing Netprobes.

Virtual Netprobe

Virtual Netprobes do not require you to install or run a Netprobe. Instead, virtual Netprobes appear in the Gateway directory structure as regular Netprobes, but do not make a TCP connection to an external Netprobe process.

Virtual Netprobes are intended to be used for configuring Gateway plug-ins and for creating user-defined dataviews using Compute Engine.

Auto-discovery

Beginning with Geneos 5.1, auto-discovery is available to Netprobes.

You can enable auto-discovery by invoking the -discovery command-line option.
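
For example, a Netprobe might be started with auto-discovery enabled and pointed at a local setup file. This is a sketch only: the -setup option is assumed here, and the exact arguments accepted by -discovery are described in Netprobe Command-line Options.

# Illustrative start-up with auto-discovery enabled
./netprobe.linux_64 7036 -discovery -setup netprobe.setup.xml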

Discovery executable

Auto-discovery works by running a discovery executable, which extracts JSON metadata from its environment. For example:

{
	"netprobe": {
		"discovered-properties": {
			"OS": "Linux",
			"Version": "RHEL 7.1 x64",
			"Location": "AMER",
			"DataCenter": "NY4",
			"Applications": "ION_Gateway, Pricing Engine",
			"ION_HOME": "/opt/ion",
			"Database": "MySQL",
			"MySQL_HOME": "/opt/mysql",
			"JAVA_HOME": "/usr/jre8"
		}
	}
}

You can tailor the executable to your Geneos implementation. Templates are provided with the Netprobe, and a minimal sketch follows the list below:

  • netprobe/templates/discovery.tmpl.sh for Linux and other platforms.
  • netprobe\templates\discovery.tmpl.bat for Windows.
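
As an illustration only, a minimal Linux discovery script could derive a few properties from the host and print them as JSON in the format shown above; a tailored version would typically add application-specific properties:

#!/bin/sh
# Minimal illustrative discovery script: prints discovered properties as JSON on stdout
cat <<EOF
{
	"netprobe": {
		"discovered-properties": {
			"hostname": "$(hostname)",
			"OS": "$(uname -s)"
		}
	}
}
EOF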

JSON metadata

The JSON metadata returned by the executable is then used to update the Netprobe setup file through the use of macros.

If you specify a setup URL with the ? character, then the Netprobe appends the JSON metadata as query parameters and pulls the setup file from the URL specified.

Local Netprobe setup file

The following example shows a raw Netprobe setup file:

<?xml version="1.0" encoding="ISO-8859-1"?>
<netprobe
	compatibility="1"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:noNamespaceSchemaLocation="http://schema.itrsgroup.com/GA2011.2-110303/netprobe.xsd">
	<selfAnnounce>
				<enabled>true</enabled>
				<retryInterval>60</retryInterval>
				<requireReverseConnection>false</requireReverseConnection>
				<probeName>SAN-[[$HOSTNAME]]</probeName>
				<managedEntity>
						<name>me-san-centos6</name>
						<attributes>
								<attribute name="OS">[[$OS]]</attribute>
						</attributes>
						<types>
							<type>type</type>
						</types>
				</managedEntity>
				<gateways>
						<gateway>
								<hostname>192.168.101.52</hostname>
								<port>12912</port>
						</gateway>
				</gateways>
	</selfAnnounce>
</netprobe>

The following example shows a resolved Netprobe setup file:

<?xml version="1.0" encoding="ISO-8859-1"?>
<netprobe
	compatibility="1"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:noNamespaceSchemaLocation="http://schema.itrsgroup.com/GA2011.2-110303/netprobe.xsd">
	<selfAnnounce>
				<enabled>true</enabled>
				<retryInterval>60</retryInterval>
				<requireReverseConnection>false</requireReverseConnection>
				<probeName>SAN-ATSCENTOS63</probeName>
				<managedEntity>
						<name>me-san-centos6</name>
						<attributes>
								<attribute name="OS">Linux</attribute>
						</attributes>
						<types>
							<type>type</type>
						</types>
				</managedEntity>
				<gateways>
						<gateway>
								<hostname>192.168.101.52</hostname>
								<port>12912</port>
						</gateway>
				</gateways>
	</selfAnnounce>
</netprobe>

URL query parameters

Consider the following JSON metadata:

{
	"netprobe": {
		"discovered-properties": {
			"hostname": "ATSCENTOS64",
			"OS": "Linux"
		}
	}
}

The following example shows a URL from which the Netprobe attempts to retrieve a setup file, using the JSON metadata as query parameters:

http://localhost:8080?hostname=ATSCENTOS64&OS=Linux

Discovery debug

If you enable the DISCOVERY_DEBUG environment variable, then the Netprobe XML setup, both raw and resolved, is saved in the Netprobe log. For more information, see DISCOVERY_DEBUG in Netprobe variables.

Collection Agent

Note: Beginning with Geneos 5.1, the Collection Agent is included in the Netprobe binaries for Windows and generic Linux platforms.

What is the Collection Agent?

The Collection Agent is a piece of software that runs dynamic plug-ins. Dynamic plug-ins can do the following:

  • Discover what is available to monitor.
  • Collect metadata about the monitored applications.
  • Collect dimensional data.

Collection Agent plug-ins are separate from the Collection Agent binary, so they can be downloaded and upgraded independently of the Collection Agent itself.

The Collection Agent is written in Java, so the host running it must have a supported version of Java installed.

For more information on supported Java versions and platforms, see the 5.x Compatibility Matrix.

Why do we need the Collection Agent?

The Collection Agent’s dynamic plug-ins have several advantages. One key advantage is that dimensions are attached to every data point, making the metric, log, and event data self-describing. Because the data is self-describing, these plug-ins require far less configuration in Geneos.

Examples of dimensions for a metric include the application's name, the host name, its IP address, and so on. This information is used to show the metric in the right place in the Active Console State Tree.

Collection Agent data items are also strongly typed and have a unit of measure, which makes aggregation and analytics on the data much more efficient.
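
Conceptually (this is an illustration of the data model only, not an actual wire format), a dimensional data point carries its dimensions, type, and unit of measure alongside the value:

{
	"name": "orders.processed",
	"type": "counter",
	"unit": "orders",
	"value": 42,
	"dimensions": {
		"app": "pricing-engine",
		"host": "nyc-app01",
		"ip": "192.168.101.60"
	}
}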

A Collection Agent and Netprobe can be used together to dynamically collect, identify, and visualise application metrics in a constantly changing environment.

Additionally, this solution makes some improvements over standard Netprobes:

  • Allows applications to push metrics and logs without loss. The Collection Agent's log-processing plug-ins persist logs on disk until they are published to the Netprobe, so no data is lost if the Netprobe is unavailable.
  • Uses a dimensional data model which retains more information about the data being sent.

Collection Agent components

The Collection Agent gathers application data points and reports them to Geneos for visualisation. An instance is deployed to each host running monitored applications.

Collection Agent has the following sub-components:

  • Collectors
  • Workflow pipelines
  • Reporters

Collectors

Collectors gather application-specific data points from one or more monitored applications. They are packaged as JAR files. Core Collection Agent collectors are offered out of the box.

There are three collectors:

  • StatsD — receives metrics from instrumentation libraries; a usage sketch follows this list. There are two libraries that send data that can be consumed by this collector:
    • StatsD client Java library
    • StatsD client Python library
  • Kubernetes metrics collector (KubernetesMetricsCollector) — listens to Kubernetes API.
  • Kubernetes log collector (KubernetesLogCollector) — locates and reads application logs.
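
For example, any application or script can emit a metric to the StatsD collector using the standard StatsD line protocol over UDP. This is illustrative only: 8125 is the conventional StatsD port and may differ in your Collection Agent configuration.

# Send a counter increment to a StatsD collector listening on UDP port 8125 (illustrative)
echo "pricing.requests:1|c" | nc -u -w1 localhost 8125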

Workflow pipelines

A workflow is composed of pipelines, which have processors to perform operations on the data collected from the applications and services. A workflow receives data from collectors, enriches this data, and sends it to a reporter. Filtering and enriching of data is configured per pipeline. There is a pipeline for each class of data:

  • Metrics
  • Logs
  • Events

Once data has been processed, the pipeline sends it to a reporter. Each pipeline can send data to a single reporter.

Each workflow pipeline is backed by a store in which data points are buffered before being sent to a reporter. Each store is configured with a maximum capacity. The store can be either in memory or on disk.

Processors

A pipeline configuration is composed of processors. When data crosses the boundary from a collector, it becomes a message. Messages are then modified by processors in stages as they move through a pipeline.

There are four types of processors:

  • Enrichment processors
  • Pass filter processors
  • Stop filter processors
  • Throttle processors

Reporters

Reporters publish data from workflows. For example, data can be published to Geneos where it is visualised.

There are three reporters:

  • Logging reporter — logs data points to stdout.
  • TCP reporter — allows the Collection Agent to communicate with the Netprobe.
  • Kafka reporter

Multiple instances of the same reporter can exist.

Collection Agent plug-ins

The functionality of the Collection Agent can be extended using plug-ins. A Collection Agent plug-in is a JAR file that contains one or more collector, processor, and reporter components that facilitate data collection from a specific source.

The following plug-ins are available:

  • StatsD plug-in — provides a StatsD server that allows custom metrics to be collected from any application instrumented with a StatsD client.
  • Kubernetes plug-in — provides a suite of collectors and processors necessary for collecting logs, metrics, and events in a Kubernetes or OpenShift environment.