
Publish to Kafka

Publishing data

Introduction

The Gateway can publish data to external systems using a dynamically loaded adapter.

From Geneos v4.12, only an adapter for publishing using Apache Kafka is supported. The adapter is provided as a shared object in the lib64 directory (lib64/geneos-adapter-kafka.so). If you need to run the Gateway from a directory that is not the parent of lib64, please ensure that the Kafka adapter can be located either via the LD_LIBRARY_PATH environment variable or via the adapter.library.path setting. See publishing > additionalSettings.

The Gateway can use the Kafka adapter to publish data to a Kafka cluster (minimum version 0.9), which can be read by any system that implements a Kafka subscriber. In a Hot Standby configuration, only the active Gateway publishes to the Kafka cluster, and data consumers are isolated from any failover.

The Kafka adapter supports SSL encryption and authentication, and this must be configured both on the Kafka cluster and in the Gateway setup file. See publishing > additionalSettings for more details.

When Publishing is enabled, the Gateway streams the following fixed set of data:

  • directory information — data about each probe, managed entity, and dataview known to the Gateway.
  • metrics — all dataview cells and headlines.
  • metadata — severity, snooze, and user assignment information.

This data is published using JSON-based formats which are described below.

Configuration

publishing

This section defines the parameters controlling publishing to external systems.

publishing > enabled

This setting enables or disables publishing. By default publishing is turned off; however, if a publishing section exists it is turned on. This setting allows that behaviour to be overridden, so that publishing is disabled while the configuration is retained for later use.

Mandatory: No
Default: true

Note: Publishing and Gateway Hub can be enabled at the same time. Errors generated by attempting to publish using Publishing do not affect the operation of Gateway Hub. Similarly, errors generated by attempting to publish using Gateway Hub do not affect the operation of Publishing.

publishing > adapter

The adapter section allows the adapter to be selected and holds settings specific to the selected adapter.

publishing > adapter > kafka

Settings for the Kafka adapter.

publishing > adapter > kafka > topicPrefix

Specifies the first part of the topic names under which data is published, and the first part of the topic name to which the Kafka adapter subscribes for requests, such as metrics snapshots. By default, the publishing topics used include geneos-probes and geneos-raw.table (and eight other names beginning geneos-), and the request topic is geneos-requests. This setting allows the topics to be changed to, for example, my-test-probes, my-test-raw.table, and so on.

Mandatory: No
Default: geneos-

publishing > adapter > kafka > brokerList

Specifies a comma-separated list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The adapter makes use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of brokers. Please refer to the description of bootstrap.servers in the Kafka documentation for more information.

Mandatory: No
Default: localhost:9092

publishing > additionalSettings

This advanced configuration option allows additional settings to be specified. These settings either control the way the adapter is loaded or are interpreted by the adapter itself.

Settings are entered using the same syntax as a Java properties file; each line specifies a key-value pair.

Settings with the prefix adapter. control the way the adapter is loaded. The settings available are:

  • adapter.library.path — allows you to specify the location of the shared library that implements the adapter. By default, this is loaded from the lib64 sub-directory of the Gateway's working directory; you can specify an alternate location, either as a path relative to the Gateway's working directory or as an absolute path.
  • adapter.gateway.name — used with the topic prefix to construct a unique Kafka consumer group id for the Gateway. The default group id is <topicprefix><gatewayname>, e.g. geneos-DevGateway. You can override the Gateway name part with this setting, but there is generally no need to do so.
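For example, an additionalSettings block that combines an adapter setting with a pass-through librdkafka setting might look like this (the path and client id are illustrative, not defaults):

```properties
# Load the adapter from a non-default location (illustrative path)
adapter.library.path=/opt/geneos/gateway/lib64
# Passed through to librdkafka as the client.id option
kafka.client.id=my-gateway-publisher
```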

The Kafka adapter uses the librdkafka library to connect to Kafka and supports transparent pass-through of settings to the library. Any global setting (other than callbacks) defined in the librdkafka documentation can be used by prefixing their names with kafka..

For example, the key kafka.client.id refers to the librdkafka option documented as client.id.

Note: The Kafka adapter discards incoming messages when the librdkafka publishing queue is full. When publishing is resumed, the adapter reports the number of discarded messages. The length of the queue can be modified by including a kafka.queue.buffering.max.messages setting here.

SASL PLAIN authentication

To use SASL PLAIN for authentication on a normal connection, use the following template, replacing the username and password with your credentials:

kafka.sasl.mechanism=PLAIN
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.username=<username>
kafka.sasl.password=<password>

Note: Username and password are cleartext.

SSL encryption

To use SSL encryption for the connection, the SSL protocol must be enabled and the certificate used to sign the Kafka brokers' public keys must be trusted by the Kafka adapter. Use the following template, replacing the location with the path to your CA certificate:

kafka.security.protocol=ssl
kafka.ssl.ca.location=<location of CA certificate PEM file>

SSL encrypted connection with SASL PLAIN authentication

To use SSL encryption with SASL PLAIN for authentication, use the following template, replacing the location, username, and password with your credentials:

kafka.sasl.mechanism=PLAIN
kafka.security.protocol=SASL_SSL
kafka.ssl.ca.location=<location of CA certificate PEM file>
kafka.sasl.username=<username>
kafka.sasl.password=<password>

Note: Username and password are cleartext.

SSL encryption with SSL authentication

To use SSL encryption with SSL authentication, the Kafka adapter must be able to present a certificate which is signed using a certificate trusted by the broker(s). Use the following template, replacing the locations with the paths to your certificate files:

kafka.security.protocol=ssl
kafka.ssl.ca.location=<location of CA certificate PEM file>
kafka.ssl.certificate.location=<location of client certificate>
kafka.ssl.key.location=<location of private key for client certificate>

Use Kerberos to connect to Kafka on Linux 64-bit Gateways

Linux 64-bit Gateways can connect to Kafka using Kerberos. Use the following template:

kafka.sasl.kerberos.service.name=<service name>
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.kerberos.keytab=<keytab file>
kafka.sasl.kerberos.principal=<Kerberos principal>

The default value of kafka.sasl.kerberos.service.name is kafka. Use this value unless the Service Principal Names for the Kafka cluster have been set up with a different service name.

Use your credentials for the kafka.sasl.kerberos.keytab and kafka.sasl.kerberos.principal fields. The keytab file must encode the password for the username specified as kafka.sasl.kerberos.principal. You can generate the keytab file using the Unix utility ktutil or the Windows Server built-in utility ktpass.exe.
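For example, a keytab can be created interactively with the MIT Kerberos ktutil utility. The encryption type and key version number shown here are assumptions; adjust them to match your Kerberos realm:

```shell
$ ktutil
ktutil:  addent -password -p <Kerberos principal> -k 1 -e aes256-cts-hmac-sha1-96
Password for <Kerberos principal>:
ktutil:  wkt <keytab file>
ktutil:  quit
```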

Note: If you are using Kerberos to connect to Kafka, and the Gateway's working directory is not the package directory, you must set the SASL_PATH environment variable. It must point to the sasl2 directory inside the lib64 directory of the Gateway package. Ensure you have also set the LD_LIBRARY_PATH environment variable or the adapter.library.path setting to locate the required libraries.

Test your connection to Kerberos

Before you test Kerberos with your Gateway, use the commands kinit and klist to test that the keytab file can connect to your Kerberos server and a valid ticket is returned. Use the following template for kinit, replacing the service name, broker hostname, keytab file, and Kerberos principal with your credentials:

$ kinit -S <service name>/<broker hostname> -k -t <keytab file> <Kerberos principal>

Mandatory: No

publishing > secureSettings

This section allows settings that cannot be set in cleartext, such as passwords, to be encrypted in the Gateway setup file.

publishing > secureSettings > setting > name

The name of the secure setting.

Mandatory: Yes

publishing > secureSettings > setting > value

The value of the secure setting.

This can either be:

  • stdAES — AES 256-bit encryption. The password is entered in the Set password box.
  • var — a reference to a variable, defined in the Operating Environment, that provides a password. References to invalid variables, or to variables that are not AES-encrypted, are treated as errors.

Strategies

  • Name — Specify a name to uniquely identify the strategy. Default: New Strategy.

  • Targets — Set of dataviews that the strategy applies to.

    For each strategy you must specify an XPath that identifies the dataitems that the strategy applies to.

    If the filter option is selected, then the target XPaths must point to one or more Netprobes, Managed Entities, or samplers. XPaths pointing to dataviews or individual cells, or that reference run-time values (for example severity or connection state), are not supported.

    If the pivotDataview or schedule option is selected, then the target must be a dataview; this is checked when the setup is validated.

    The ancestors and descendants of all matching dataitems are also published.

  • Options — Specify what type of strategy to use. The following options are available:

  • filter — publish only the dataitems specified in the targets, their ancestors and their descendants. The metrics, severity messages, snooze messages and user assignment messages of all dataitems not explicitly or implicitly targeted are not published. Severity is propagated through published data based only on included dataitems.

  • pivotDataview — pivots the target dataviews such that the rownames of the original dataview become column names and the adjacent columns become rows. Pivot behaviour is determined by the settings in the publisher tab of target samplers.

  • schedule — publish only at increments of the specified interval.


Schedule

  • Every — Number of intervals between publishing operations.

  • Interval — Size of the interval. The following options are available: Days, Hours, or Minutes.

  • Starting at — Specify a starting time for the schedule.

  • Timezone — Specify the timezone used to set the schedule.

Strategy Group

Strategies may be grouped for organisational purposes. This has no effect on their application, but is useful for categorising strategies and making them easier to find in the navigation tree.

You must specify a name when creating a strategy group.

kafkacat

kafkacat is an open source utility written and maintained by the author of the librdkafka library used by Geneos. kafkacat is shipped with Linux 64-bit Gateways to ease the testing of connecting to your Kafka infrastructure. For more information about kafkacat, see https://github.com/edenhill/kafkacat.

To ensure that kafkacat uses the same Kafka, SSL, and SASL libraries as the Gateway, kafkacat must be run with the following environment variables:

  • LD_LIBRARY_PATH — This must point at the lib64 library supplied as part of the Gateway bundle.
  • SASL_PATH — This must point at the sasl2 directory in the Gateway lib64 directory.
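For example, assuming the Gateway package is installed in /opt/geneos/gateway (an illustrative path), you can set these variables and ask kafkacat to list the brokers and topics it can see:

```shell
export LD_LIBRARY_PATH=/opt/geneos/gateway/lib64
export SASL_PATH=/opt/geneos/gateway/lib64/sasl2
kafkacat -b localhost:9092 -L
```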

Enable Kafka debug logging

To enable Kafka debug logging, follow these steps:

  1. Open your GSE.
  2. Click Operating environment in the Navigation tree.
  3. Select the Debug tab.
  4. Open the drop-down list next to Publishing, and tick adapter.
  5. Click Publishing in the Navigation tree.
  6. Select the Advanced tab.
  7. In Additional settings, add kafka.debug= followed by the debug categories. We recommend it is set to topic,protocol.
    • For a full set of debug categories, see the librdkafka documentation. The queue and all categories are extremely verbose.
  8. Click Save current document.

Standardised formatting

Some data can vary in format from dataview to dataview or even from row to row or column to column. Standardised Formatting allows normalisation of this data to a standard format for downstream systems.

Currently only the dateTime type is supported. The dateTime formatter converts date-times, dates, and times to ISO 8601 format, as shown below:

Type                            Format
Date-time                       2012-07-27T19:12:00Z
Date-time with microseconds     2012-07-27T19:12:00.123456Z
Date                            2012-07-27

Note the adjustment to UTC. Cells containing only times are assumed to be for the current day and formatted as a date-time.
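As a sketch of this normalisation (not the Gateway's actual implementation), a value in the ctime-style format used by the formats.json examples later in this document can be re-emitted in ISO 8601 like so:

```python
from datetime import datetime

def to_iso8601(value: str, fmt: str) -> str:
    """Parse a source date-time string and re-emit it in ISO 8601."""
    parsed = datetime.strptime(value, fmt)
    # The real formatter also adjusts from the source timezone to UTC;
    # that step is omitted in this sketch.
    return parsed.strftime("%Y-%m-%dT%H:%M:%SZ")

# A ctime-style timestamp, as matched by the format "%a %b %d %H:%M:%S %Y"
print(to_iso8601("Fri Jul 27 19:12:00 2012", "%a %b %d %H:%M:%S %Y"))
# → 2012-07-27T19:12:00Z
```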

There are two types of standardised formatting: System supplied formatting and User specified formatting.

System supplied formatting

The Gateway provides a set of converters for cells in Geneos plugins.

These are defined in a file located in <gateway resources directory>/standardisedformats/formats.json. By default, the gateway resources directory is <gateway directory>/resources, but you can modify this using a command line option.

The data in this file is in JavaScript Object Notation (JSON).

It is recommended that this file be left unchanged. However, in unusual circumstances it may be preferable to update the formats.json file rather than generate a set of user specified formatting.

The file has a number of entries under the "formats" label at the start. Taking one as an example, it breaks down as follows:

{
  "formats" : [
    {
      "type"   : "datetime",
      "format" : "%a %b %d %H:%M:%S %Y",
      "region" : "gateway",
      "plugin" : "Gateway-gatewayData",
      "cell"   : { "row" : "licenseExpiryDate", "column" : "value" }
    },
    ...
  ]
}
Element Description
type

Data type of the data being formatted.

Currently the only supported type is "datetime".

format

Specifies the expected format of the data.

See the following section for a description of the time parsing codes.

Where you need several of these to accommodate different servers that may generate dates in differing formats, use the "formats" element described below.

formats

Where a value may arrive in a number of different formats, an array of alternative formats may be specified.

{
...
"formats" : ["%a %b %d %H:%M:%S %Y", "%s"]
...
}
region

Timezone information for the formatter.

For our purposes this should be "gateway" or "netprobe", depending on the location of the sampler.

The formatter will use the timezone of the specified component. Where the Netprobe's timezone is not known, it falls back to the Gateway's timezone.

plugin The name of the plugin this formatter applies to. The formatter applies to all dataviews created by the plugin unless you restrict this.
dataview

Only applies to variables within the named dataview.

{
...
"dataview" : "overview"
...
}
cell

Targets the formatting of a specific row / column pair.

For more flexibility, you can target a range of columns.

{
...
"cell"  : { "row" : "licenseExpiryDate", "column" : "Value"}
...
}

cols

An array of regular expressions used to target columns within a dataview.

{
...
"cols"     : ["applied|changed"]
...
}

This targets all column headings containing "applied" or "changed" as part of the column name, e.g. "appliedDate".

headlines An array of regular expressions used to target headlines within a dataview. Works like cols above and can be used in conjunction with it.

Note: Some "formats" entries may end with a special format called "raw". This prevents errors from being logged where none of the formats were able to process the input.

User specified formatting

Certain dataviews contain cells where the format of the cell cannot be determined. This may be because the data is generated by a user script, or it may be because a Geneos plugin is extracting data whose format is determined by the environment of the server on which the plugin is running.

The standardised formatting tab on samplers allows the user to describe the originating format of variables (cells and headlines). This allows them to be transformed into standardised format. You can also specify which cells in the sampler the format is applied to by using the dataview name and the applicability sections.

Configuration

samplers > sampler > publishing > standardisedFormatting

See Standardised formatting in Publish to Kafka.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview

Provides a list of standardised formatting variables to apply to one or more dataviews in the sampler. When the name of the dataview is set, the variable definitions are restricted to the named dataview. If the name is left unset, the variable definitions apply to all dataviews belonging to the sampler.

Mandatory: No
Default: Not set

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > Name

Name of the dataview.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable

Variable definition specifying type and applicability of variable.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type

Specifies the type of the variable. Currently only dateTime is supported.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime

Specifies that the variable is a date-time (includes dates and times which are assumed to be for the current day).

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > formats > format

Specifies the expected format of the data. See the following section for a description of the time parsing codes. There can be several of these to accommodate that different servers may generate dates in differing formats.

Mandatory: Yes

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > exceptions > exception

Specifies exceptions to Standardised Formatting. If the text specified here matches the value of a published cell, no formatting is applied and no error is logged. This is for instances where dataviews have invalid values such as a blank string ("") or "N/A" as a date.

For data published via Publishing using Kafka, the data is published unchanged.

For data published to Gateway Hub, any exceptions are published as "N/A". This allows the Gateway Hub to recognise and process the data.

Mandatory: No

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > ignoreErrorsIfFormatsFail

If set to true and none of the formats can translate the data, the errors generated are suppressed. This can be used to specify a NULL format that overrides a system-provided one with no formatting. This can be useful, for example, if you do not care about formatting these values and the output is in an unexpected format due to locale.

Mandatory: No
Default: false

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > type > dateTime > overrideTimezone

The default timezone is that of the dataview: if the Probe has a timezone set, the date is interpreted in that timezone; if not, the Gateway's timezone is used. This setting allows you to explicitly set the timezone in which the data is being received.

Mandatory: No

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability

Allows matching of formats to dataview variables.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > headlines

Allows you to match variables to headline variables.

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > headlines > regex

A regular expression to match against the headline names.

Expression Effect
date Matches all headlines containing date.
date$ Matches all headlines ending in date.
^date$ Matches the headline date.

All values within the matching headlines will have the formatter applied for publishing.

Mandatory: No
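To illustrate how these anchors behave, here is a small demonstration using Python's re module with hypothetical headline names (not drawn from any real plugin):

```python
import re

# Hypothetical headline names, for demonstration only
headlines = ["date", "startdate", "dateFormat"]

print([h for h in headlines if re.search(r"date", h)])    # contains "date"
print([h for h in headlines if re.search(r"date$", h)])   # ends in "date"
print([h for h in headlines if re.search(r"^date$", h)])  # exactly "date"
```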

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > columns > regex

A regular expression to match against the column names.

Expression Effect
date Matches all columns containing date.
date$ Matches all columns ending in date.
^date$ Matches the column date.

All values within the matching columns will have the formatter applied for publishing.

Mandatory: No

samplers > sampler > publishing > standardisedFormatting > dataviews > dataview > variables > variable > applicability > cells

Maps to a specific row and column. These mappings are well suited to dataviews made from name/value pairs, and take priority on lookup over column regular expressions. As these mappings are specific, the row and column names are not regular expressions.

Mandatory: No

operatingEnvironment > debug > DirectoryManager > showStandardisedFormattingErrors

Turns on error reporting for errors found processing inputs to standardised formats. If this is not enabled and errors are found then a summary of the number of errors found is logged at 10 minute intervals.

Publishing message formats overview

The Gateway can be configured to publish metrics, directory data and metadata in JSON format.

The Gateway supports publishing to a Kafka cluster, as a Kafka "producer client". This provides a more resilient mechanism for publishing and allows other systems to consume Geneos data by implementing a Kafka consumer client.

The configuration settings used to enable publishing are described above. The remainder of this document describes the message formats used.

Message presentation

Although the format of the message payload sent by the Gateway is the same in each case, the different publishing platforms support different mechanisms for selecting and identifying messages. They also use different terminology to refer to similar concepts.

In this document we use the following terms:

  • Topic
    A string identifying the category of message, for example "raw.headlines". Messages within a topic have similar schemas (in fact, apart from metrics messages, all messages in a topic have the same schema.)
  • Key
    A string identifying a subset of a topic, for example the key "gatewayLoad.gatewayLoad..Gateway.vp.Ad-hoc GW" might appear within the topic "raw.headlines". All messages with the same key will have the same schema, but there are better ways to infer the schema than by inspecting the key.
    All messages which have a sequencing relationship to each other will have the same key: keys may be used to sub-divide topics for load balancing or to filter messages of interest for a particular application.
    Directory message topics are not subdivided in this way; they use an empty string as a key.
  • Payload
    The actual data to be consumed by a client system. The payload is a JSON object, described in detail at Message payloads and Appendix: Payload schema below.

Kafka topics, keys and messages

When the Kafka adapter is used:

  • The message topic is prefixed with a configurable string (the default is "geneos-") and used as the Kafka topic.
  • The message key is used to allocate messages to partitions within the Kafka topic and is available as the message key to Kafka clients.
  • The message payload becomes the message body.

Message key format (subject to change)

In principle, the message key is an opaque string used to allocate messages to partitions in Kafka. Applications needing to filter messages should ignore the key and use the "target" object within the payload.

In practice, "what is the format of the message key?" is a frequently asked question. At present, it is roughly equivalent to a reversed XPath delimited by dots:

<dataview>.<sampler>.<type>.<managed entity>.<probe>.<gateway>

For example (note that if a component name is not applicable, it will be omitted, but the following dot will not):

CPU.CPU..myEntity.theProbe.Ad-hoc GW
....theProbe.Ad-hoc GW

The number of name components (and hence the number of dots) is the same for all metrics and metadata messages.
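Although applications are advised to use the "target" object in the payload rather than the key, the dotted structure can be illustrated by splitting a key into its components (a sketch only; the key format is subject to change):

```python
# An example key, in the form <dataview>.<sampler>.<type>.<managed entity>.<probe>.<gateway>
key = "CPU.CPU..myEntity.theProbe.Ad-hoc GW"

# An empty component means that part is not applicable (here, the type).
# Note: this naive split breaks if a component name itself contains a dot.
dataview, sampler, type_, managed_entity, probe, gateway = key.split(".")
print(probe, gateway)
# → theProbe Ad-hoc GW
```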

Message payloads

A JSON schema for the publishing message payloads is provided as Appendix: Payload schema.

The examples in this section are pretty-printed: the messages published by Gateway have no redundant whitespace.

Directory messages

Messages on the probes, managedEntities and dataviews topics provide information about the corresponding items in the Gateway directory hierarchy. They are published as these items are created and deleted and as their run-time attributes are updated.

Messages in this category are also retransmitted when publishing setup is reconfigured (for example to change the Kafka topic prefix).

Probes

Example messages

A message sent when a Netprobe comes up:

{
  "data": {
    "timestamp": "2015-07-01T16:18:23.263Z",
    "name": "theProbe",
    "gateway": "Ad-hoc GW",
    "osType": "Linux",
    "HostName": "linux-dev",
    "Port": "7036",
    "ConState": "Up",
    "OS": "Linux 2.6.18-371.4.1.el5",
    "Version": "GA3.1.1-150515"
  },
  "operation": "update"
}

A message describing a virtual probe:

{
  "data": {
    "timestamp": "2015-10-20T09:15:04.732Z",
    "name": "vp",
    "gateway": "Ad-hoc GW",
    "osType": "Virtual",
    "ConState": "Up"
  },
  "operation": "update"
}

Points to Note:

    • The properties whose names start with capital letters are configuration ('HostName', 'Port') or run-time ('ConState', 'OS', 'Version') parameters. Of these, only 'ConState' is applicable to virtual probes.
    • When a connection is established to a probe, several update messages will be published, one for each run-time parameter.

Managed Entities

Example message
{
  "data": {
    "timestamp": "2016-05-25T15:02:18.255Z",
    "name": "theEntity",
    "probe": "theProbe",
    "gateway": "Ad-hoc GW",
    "attributes": {
      "Team": "Middleware",
      "Purpose": "Testing"
    }
  },
  "operation": "update"
}

Note: An update message is published whenever a managed entity attribute is changed, added or deleted.

Dataviews

Example message
{
  "data": {
    "timestamp": "2016-05-27T12:51:16.009Z",
    "dataview": "CPU",
    "sampler": "CPU",
    "pluginName": "CPU",
    "type": "Default Samplers",
    "managedEntity": "basics",
    "probe": "theProbe",
    "gateway": "Ad-hoc GW",
    "topicSuffix": "CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
    "availableTopics": [
      "enriched.table.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "enriched.headlines.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "raw.table.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW",
      "raw.headlines.CPU.CPU.Default Samplers.basics.theProbe.Ad-hoc GW"
    ]
  },
  "operation": "create"
}

Note: Update messages are only sent on the dataviews topic in response to a client request or a change in Publishing configuration.

Metrics messages

Raw and enriched forms of metrics data

Metrics data is available in two forms:

  • Raw form — includes only the rows and cells provided by the data source (normally a Netprobe, but possibly a Gateway plugin, or, in the case of Gateway sharing, an exporting Gateway.) In this case the published sample time is provided by the data source.
  • Enriched form — also includes additional rows and cells configured on the publishing Gateway and (usually) populated by rules. In this case the published sample time is the time on the publishing Gateway that the dataview was last updated (either by receiving an update or by processing a rule.)
    When new data for a row is provided by the data source and then values in that row are computed by Gateway rules, an update message will be published for each of these changes as they occur: first the change from the data source and then each change made by a rule.

Headline data

  • Raw headline messages are published on the topic raw.headlines.
  • Enriched headline messages are published on the topic enriched.headlines.

A raw message:

{
  "data": {
    "sampleTime": "2016-05-27T12:54:29.685Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "theProbe",
      "managedEntity": "basics",
      "type": "Default Samplers",
      "sampler": "CPU",
      "dataview": "CPU",
      "filter": {
        "osType": "Linux",
        "pluginName": "CPU"
      }
    },
    "samplingStatus": "OK",
    "numOnlineCpus": "2",
    "loadAverage1Min": "0.00",
    "loadAverage5Min": "0.00",
    "loadAverage15Min": "0.00",
    "numPhysicalCpus": "1",
    "HyperThreadingStatus": "DISABLED",
    "numCpuCores": "2"
  },
  "operation": "create"
}

This message represents the headlines of a dataview.

The 'target' property identifies the dataview. The 'filter' property of the target provides additional information which may help a consumer of the message to determine which properties should appear in the rest of the message. For example, the Geneos CPU plugin on Windows publishes a dataview that has different headlines and column names from the CPU plugin on Linux.

The properties that follow 'target' are the headlines for the dataview. 'samplingStatus' is always present as the next property after 'target'; in some cases it is the only headline present.

An enriched message for the headlines of the same dataview:

{
  "data": {
    "sampleTime": "2016-05-27T12:54:29.877Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "theProbe",
      "managedEntity": "basics",
      "type": "Default Samplers",
      "sampler": "CPU",
      "dataview": "CPU",
      "filter": {
        "osType": "Linux",
        "pluginName": "CPU"
      }
    },
    "samplingStatus": "OK",
    "numOfflineCpus": "0",
    "numOnlineCpus": "2",
    "loadAverage1Min": "0.00",
    "loadAverage5Min": "0.00",
    "loadAverage15Min": "0.00",
    "numPhysicalCpus": "1",
    "HyperThreadingStatus": "DISABLED",
    "numCpuCores": "2"
  },
  "operation": "update"
}

This message represents the headlines of the same dataview. The differences from the "raw" message are:

  • computed headlines are included: in this example there is a computed headline called "numOfflineCpus" (a meaningless number included for the sake of this example);
  • the 'sampleTime' is the time the Gateway generated the message.
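A consumer of these messages might separate the dataview identity from the headline values along these lines (a sketch against an abbreviated version of the raw example above; not an official client API):

```python
import json

# Abbreviated raw headlines message, as received from the raw.headlines topic
raw_message = '''{
  "data": {
    "sampleTime": "2016-05-27T12:54:29.685Z",
    "target": {"gateway": "Ad-hoc GW", "probe": "theProbe",
               "managedEntity": "basics", "type": "Default Samplers",
               "sampler": "CPU", "dataview": "CPU",
               "filter": {"osType": "Linux", "pluginName": "CPU"}},
    "samplingStatus": "OK",
    "numOnlineCpus": "2"
  },
  "operation": "create"
}'''

payload = json.loads(raw_message)
data = payload["data"]

# Everything in "data" apart from the sample time and the target is a headline.
headlines = {k: v for k, v in data.items() if k not in ("sampleTime", "target")}
print(data["target"]["dataview"], headlines)
# → CPU {'samplingStatus': 'OK', 'numOnlineCpus': '2'}
```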

Table (cell) data

  • Raw table messages are published on the topic raw.table.
  • Enriched table messages are published on the topic enriched.table.

Example messages

A raw message:

{
  "data": {
    "sampleTime": "2016-05-27T12:59:56.085Z",
    "target": {
      "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "basics",
      "type": "Default Samplers", "sampler": "CPU", "dataview": "CPU",
      "filter": { "osType": "Linux", "pluginName": "CPU" }
    },
    "name": "Average_cpu",
    "row": {
      "type": "",
      "state": "",
      "clockSpeed": "",
      "percentUtilisation": "0.97 %",
      "percentUserTime": "0.53 %",
      "percentKernelTime": "0.18 %",
      "percentWaitTime": "0.25 %",
      "percentIdle": "99.03 %"
    }
  },
  "operation": "update"
}

This message represents a row from a dataview. As in the case of headline messages, the 'target' property identifies the dataview and provides additional information to help determine which properties should appear in the rest of the message.

The 'name' property identifies the row; the 'row' property contains the cell data. Because this is a "raw" message, no computed cells are shown and the 'sampleTime' property is the time the netprobe published the sample.
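
As a sketch of consuming such a payload (assuming the message has already been received from Kafka as a JSON string; the consumer itself is omitted), the row can be decoded and identified with standard JSON tools:

```python
import json

# A payload as it might arrive on the raw.table topic (abbreviated).
payload = '''{
  "data": {
    "target": {"gateway": "Ad-hoc GW", "probe": "theProbe",
               "managedEntity": "basics", "type": "Default Samplers",
               "sampler": "CPU", "dataview": "CPU",
               "filter": {"osType": "Linux", "pluginName": "CPU"}},
    "name": "Average_cpu",
    "row": {"percentUtilisation": "0.97 %", "percentIdle": "99.03 %"}
  },
  "operation": "update"
}'''

msg = json.loads(payload)
t = msg["data"]["target"]
# Build a readable path for the row from the target components.
path = "/".join([t["gateway"], t["probe"], t["managedEntity"],
                 t["sampler"], t["dataview"], msg["data"]["name"]])
print(path)  # Ad-hoc GW/theProbe/basics/CPU/CPU/Average_cpu
print(msg["data"]["row"]["percentUtilisation"])  # 0.97 %
```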

An enriched message for the same table row

{
  "data": {
    "sampleTime": "2016-05-27T12:59:56.283Z",
    "target": {
      "gateway": "Ad-hoc GW", "probe": "theProbe", "managedEntity": "basics",
      "type": "Default Samplers", "sampler": "CPU", "dataview": "CPU",
      "filter": { "osType": "Linux", "pluginName": "CPU" }
    },
    "name": "Average_cpu",
    "row": {
      "workDone": "3194.4816",
      "type": "",
      "state": "",
      "clockSpeed": "",
      "percentUtilisation": "0.97 %",
      "percentUserTime": "0.53 %",
      "percentKernelTime": "0.18 %",
      "percentWaitTime": "0.25 %",
      "percentIdle": "99.03 %"
    }
  },
  "operation": "update"
}

This message represents the same row from the same dataview. The differences from the "raw" message are:

  • computed columns are included: in this case the cell data includes a computed column called "workDone" (a meaningless number computed for the sake of this example);
  • the 'sampleTime' is the time the Gateway generated the message.
Sample times for imported data

Where Gateway sharing is used, the "raw form" messages for imported metrics data may include rows, columns or headlines added by the exporting Gateway. For each update, the sample time will reflect the original source of the data.

That is, when a new dataview sample is provided by a Netprobe, the exporting Gateway first forwards that sample, with the sample time provided by the Netprobe, and then, if a rule is triggered, sends a further update for computed data, using its own time as the sample time. The raw form published by the importing Gateway will therefore include sample times from both the Netprobe and the exporting Gateway. Although all timestamps are UTC, if the operating system clocks are not synchronised, the sample times in the "raw form" published messages may be inconsistent: for example, computed updates may be published with earlier sample times than the data from which they are calculated.

Points of Note:

  • All dataview cells and headlines are published as strings. Although data may be provided as strings or numbers by plugins, some plugins may vary the data type of a column from one row to the next or from one sample to the next. If the Gateway were to attempt to use the current data type of a cell when formatting JSON messages, the implied schema might vary from one message to the next.
  • Certain cells are marked as dateTime cells by the Gateway, using Standardised Formatting (see Standardised formatting). If a cell is marked as a dateTime cell, its contents are reformatted before being published. The format used is ISO 8601, for example "1970-02-01T09:00:00.123456". The fractional part is optional and may extend to microsecond precision, depending on the data. A dateTime can also be sent as a pure date, e.g. "1970-02-01".
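
Because the fractional part is optional and a pure date is also possible, consumers should parse dateTime cells flexibly. A minimal Python sketch using only the standard library (the three example values are taken from the forms described above):

```python
from datetime import datetime

# All three dateTime cell variants parse with datetime.fromisoformat
# (Python 3.7+): microsecond precision, no fraction, and a pure date.
for cell in ("1970-02-01T09:00:00.123456",
             "1970-02-01T09:00:00",
             "1970-02-01"):
    dt = datetime.fromisoformat(cell)
    print(dt.isoformat())
```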

Metadata messages

These messages provide information about severity, snooze status and user assignment of data items at all levels of the directory hierarchy.

Severity messages

Severity messages are published on the topic metadata.severity.

Example messages

Here are two of the messages generated when the status of a dataview headline changed from "WARNING" to "OK":

{
  "data": {
    "timestamp": "2016-05-25T15:02:18.357Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "sampler": "probeData",
      "type": "",
      "dataview": "probeData",
      "headline": "samplingStatus",
      "filter": {
        "osType": "Virtual",
        "pluginName": "Gateway-probeData"
      }
    },
    "severity": "OK",
    "active": true,
    "snoozed": false,
    "snoozedParents": 2,
    "userAssigned": false,
    "value": {
      "cell": "OK"
    }
  },
  "operation": "update"
}

{
  "data": {
    "timestamp": "2016-05-25T15:02:18.358Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "filter": {
        "osType": "Virtual"
      }
    },
    "severity": "OK",
    "active": true
    "snoozed": false,
    "snoozedParents": 2,
    "userAssigned": false,
  },
  "operation": "update"
}

Here are some examples of severity messages for dataview items resulting from a severity change:

 {
   "data": {
     "timestamp": "2016-07-19T14:51:15.451Z",
     "target": {
       "gateway": "Ad-hoc GW",
       "probe": "vp",
       "managedEntity": "Gateway",
       "sampler": "gateway",
       "type": "",
       "dataview": "gateway",
       "row": "databaseHost",
       "column": "value",
       "filter": {
         "osType": "Virtual",
         "pluginName": "Gateway-gatewayData"
       }
     },
     "severity": "OK",
     "active": true,
     "snoozed": false,
     "snoozedParents": 2,
     "userAssigned": false,
     "value": {
       "cell": "dbhost"
     }
   },
   "operation": "update"
 }

 {
   "data": {
     "timestamp": "2016-07-19T14:51:15.446Z",
     "target": {
       "gateway": "Ad-hoc GW",
       "probe": "vp",
       "managedEntity": "m",
       "sampler": "gw",
       "type": "",
       "dataview": "gw",
       "row": "releaseAge",
       "column": "value",
       "filter": {
         "osType": "Virtual",
         "pluginName": "Gateway-gatewayData"
       }
     },
     "severity": "CRITICAL",
     "active": true,
     "snoozed": false,
     "snoozedParents": 2,
     "userAssigned": false,
     "value": {
       "cell": "4 days",
       "number": 4
     }
   },
   "operation": "create"
 }

 {
   "data": {
     "timestamp": "2016-07-18T15:47:34.288Z",
     "target": {
       "gateway": "Ad-hoc GW",
       "probe": "vp",
       "managedEntity": "m2",
       "sampler": "gw",
       "type": "",
       "dataview": "gw",
       "row": "licenseExpiryDate",
       "column": "value",
       "filter": {
         "osType": "Virtual",
         "pluginName": "Gateway-gatewayData"
       }
     },
     "severity": "OK",
     "active": true,
     "snoozed": false,
     "snoozedParents": 2,
     "userAssigned": false,
     "value": {
       "cell": "2021-01-31T00:00:00Z",
       "dateTime": "2021-01-31T00:00:00Z"
     }
   },
   "operation": "update"
}

Points of Note:

  • A severity message is generated for each data item in the Gateway directory hierarchy whose status is changed.
  • The 'target' property identifies the data item to which the message refers. Only the path components that apply to that item are present. For example, the 'type' property is present in the first example, even though it is empty, but none of the properties from 'sampler' onwards are present in the second example.
  • 'create' messages are never published for severity metadata, because all data items are created in active state and with undefined severity.
  • 'delete' messages are only published if the item being deleted is inactive or has a severity other than "UNDEFINED".
  • The "value" field is made up of "cell" which is the original cell value. If a number is detected then the optional "number" field is also present. If the value is identifiable is a date-time then the "dateTime" field is present. As with all date-times published this is formated to ISO 8601 format. This provides metadata to downstream systems that require type information.

Snooze messages

Snooze messages are published on the topic metadata.snooze.

Example messages

Here are the messages generated when a managed entity is snoozed and unsnoozed:

{
  "data": {
    "timestamp": "2016-05-27T14:51:10.000Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "sampler": "gatewayLoad",
      "type": "",
      "filter": {
        "osType": "Virtual"
      }
    },
    "snoozed": {
      "snoozed": true,
      "snoozedBy": "ActiveConsole1",
      "comment": "Simple example",
      "period": "Manual"
    }
  },
  "operation": "update"
}

{
  "data": {
    "timestamp": "2016-05-27T14:51:10.000Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "sampler": "gatewayLoad",
      "type": "",
      "filter": {
        "osType": "Virtual"
      }
    },
    "snoozed": {
      "snoozed": false,
      "unsnoozedBy: "ActiveConsole1"		
    }
  },
  "operation": "update"
}

Here is an example of snoozing a headline, using some more complex options:

{
  "data": {
    "timestamp": "2016-05-27T14:54:51.000Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "sampler": "gatewayLoad",
      "type": "",
      "dataview": "gatewayLoad",
      "headline": "samplingStatus",
      "filter": {
        "osType": "Virtual",
        "pluginName": "Gateway-gatewayLoad"
      }
    },
    "snoozed": {
      "snoozed": true,
      "snoozedBy": "ActiveConsole1",
      "comment": "Complex options",
      "period": "Until",
      "untilSeverity": "OK",
      "untilTime": "2016-05-27T15:54:51.000Z",
      "untilValue": "NOTE: Stats collection is disabled"
    }
  },
  "operation": "update"
}

Points of Note:

  • A snooze message is generated only for the data item which is snoozed or unsnoozed, not for its ancestors or descendants.
  • The 'target' property identifies the data item to which the message refers. Only the path components that apply to that item are present.
  • When an item is snoozed, the properties of the 'snoozed' object depend on the form of the snooze command used.
  • When an item is unsnoozed, the 'snoozed' object contains only the 'snoozed' boolean.
  • Only 'update' messages are published for snooze metadata, because, if an item is deleted and recreated, its snooze status will be preserved.

User assignment messages

User assignment messages are published on the topic metadata.userassignment.

Example messages

Here are the messages generated when a probe is assigned and unassigned:

{
  "data": {
    "timestamp": "2016-05-27T15:09:39.000Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "filter": {
        "osType": "Virtual"
      }
    },
    "userAssignment": {
      "userAssigned": true,
        "assignedTo": "Ann Administrator",
        "assignedBy": "Joe Bloggs",
      "comment": "",
      "period": "Manual"
    }
  },
  "operation": "update"
}

{
  "data": {
    "timestamp": "2016-05-27T15:10:30.677Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "filter": {
        "osType": "Virtual"
      }
    },
    "userAssignment": {
      "userAssigned": false,
      "unassignedBy: "John Doe"
    }
  },
  "operation": "update"
}

Here is an example of assigning a table cell, using some more complex options:

{
  "data": {
    "timestamp": "2016-05-27T15:17:15.000Z",
    "target": {
      "gateway": "Ad-hoc GW",
      "probe": "vp",
      "managedEntity": "Gateway",
      "sampler": "probeData",
      "type": "",
      "dataview": "probeData",
      "row": "theProbe",
      "column": "security",
      "filter": {
        "osType": "Virtual",
        "pluginName": "Gateway-probeData"
      }
    },
    "userAssignment": {
      "userAssigned": true,
      "assignedTo": "John Doe",
      "assignedBy": "Ann Administrator",
      "comment": "Please set up secure config",
      "period": "Until a change in value",
      "untilValue": "INSECURE"
    }
  },
  "operation": "update"
}

Points of Note:

  • The 'target' property identifies the data item to which the message refers. Only the path components that apply to that item are present.
  • When an item is assigned, the properties of the 'userAssignment' object depend on the form of the user assignment command used.
  • When an item is unassigned, the 'userAssignment' object contains only the 'userAssigned' boolean.
  • Only 'update' messages are published for user assignment metadata, because, if an item is deleted and recreated, its user assignment status will be preserved.

Features common to all types of message

Timestamps

Timestamps are formatted using the following ISO 8601 format: YYYY-MM-DDThh:mm:ss.sssZ. Fractional seconds are shown to millisecond precision, and all timestamps are in UTC. For raw metrics (see Raw and enriched forms of metrics data in the Message Payloads section), the timestamp shown is the sample time reported by the sampler. Otherwise the timestamp, including the sample time shown for enriched metrics, is the time on the Gateway host when the message is formatted.

Operation

The possible values for the operation property of the message payload are as follows:

  • create
    The data item described has been added to the configuration, or, in the case of a table row, re-created with a new set of columns.
  • update
    The data item has been updated in some way, including, for the headlines topics, the addition or removal of a headline.
  • snapshot
    The current state of the data item is being sent in response to a client request or a change in Publishing configuration.
  • repeat
    The current state of the data item is being sent as a result of the use of the "Schedule" publishing strategy.
  • delete
    The data item described has been removed from the configuration, or, in the case of a table row, is about to be re-created with a new set of columns.
    When a 'delete' message is sent, the properties of the 'data' part of the message reflect the last known state of the item.
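
As a hypothetical consumer-side sketch, a client maintaining a local cache of table rows could dispatch on the operation property, treating 'snapshot' and 'repeat' like updates since all non-delete operations carry current state:

```python
def apply_message(cache, msg):
    """Apply one published table message to a local row cache keyed by
    the dataview path plus row name. 'create', 'update', 'snapshot' and
    'repeat' all carry current state; 'delete' removes the entry."""
    data = msg["data"]
    t = data["target"]
    key = (t["gateway"], t["probe"], t["managedEntity"],
           t["sampler"], t["dataview"], data["name"])
    if msg["operation"] == "delete":
        cache.pop(key, None)
    else:
        cache[key] = data["row"]

target = {"gateway": "G", "probe": "p", "managedEntity": "m",
          "sampler": "s", "dataview": "d"}
cache = {}
apply_message(cache, {"operation": "create",
                      "data": {"target": target, "name": "row1",
                               "row": {"col": "1"}}})
apply_message(cache, {"operation": "delete",
                      "data": {"target": target, "name": "row1",
                               "row": {"col": "1"}}})
print(len(cache))  # 0
```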

Request/reply messages

In addition to the publish/subscribe mechanism used for the messages described so far, the Kafka adapter supports a mechanism by which clients can send requests to the Gateway and receive data in reply.

To send a request using Kafka, the client publishes a request to the Geneos request topic. The name of this topic is normally 'geneos-requests'; the 'geneos-' prefix is the same as the (configurable) prefix used for the publishing topics. The Gateway does not acknowledge the request. All the Gateways that publish to a given Kafka cluster using the same topic prefix will receive and act on the request.

Resend directory request

A client can request that the Gateway re-send the directory information by sending the following request:

{"request":"resend-directory"}

The Gateway resends all the data for the directory topics. This is sent via the normal Kafka publishing topics.

Snapshot metrics request

A client can request that the Gateway provide a snapshot of selected metrics data by sending a request of the following form:

{"request":"snapshot-metrics","target":{"key":"value","key":"value"},"match":"exact"}

The JSON object provided as the value of "target" specifies the dataviews for which to provide a snapshot.

For example, to request a snapshot of all dataviews from Linux CPU samplers in managed entities with attribute "Region" set to "London" and attribute "Division" set to "FIXED INCOME", a client could send this request:

{
 "request":"snapshot-metrics",
 "target":{
           "attributes":{"Region":"London","Division":"FIXED INCOME"},
           "osType":"Linux",
           "pluginName":"CPU"
          },
 "match":"exact"
}

To request a snapshot of all dataviews from Gateway plugins, one could use this request:

{
 "request":"snapshot-metrics",
 "target":{
           "pluginName":"Gateway*"
          },
 "match":"wildcard"
}

The Gateway will send a snapshot of the metrics (raw and enriched, headlines and table data) for all dataviews which match the target specification. This is sent via the normal Kafka publishing topics.
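
A client-side sketch of building such a request (the serialized bytes would then be published to the request topic, normally 'geneos-requests', using any Kafka producer library; the producer call itself is omitted here, and the helper function name is illustrative):

```python
import json

def snapshot_metrics_request(target, match="exact"):
    """Serialize a snapshot-metrics request, ready for publishing to
    the Geneos request topic (normally 'geneos-requests')."""
    return json.dumps({"request": "snapshot-metrics",
                       "target": target,
                       "match": match}).encode("utf-8")

payload = snapshot_metrics_request(
    {"attributes": {"Region": "London", "Division": "FIXED INCOME"},
     "osType": "Linux",
     "pluginName": "CPU"})
print(payload.decode())
```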

Target syntax

Keys and values are all case-sensitive. The following keys are supported:

  • gateway — Gateway name.
  • probe — Probe name.
  • osType — Probe operating system type, e.g. "Linux", "Windows". To select virtual probes, use "Virtual".
  • managedEntity — Managed entity name.
  • attributes — Collection of name/value pairs of managed entity attributes.
  • sampler — Sampler name.
  • pluginName — Name of the plugin, e.g. "FKM".
  • dataview — Dataview name.

The osType identifies the operating system on which the probe is running. This can be found in the properties of a managed entity that has a running probe.
Active Console identifies the type of operating system based on a numeric value:

  • 0 — Unknown
  • 1 — Other
  • 2 — Windows
  • 3 — Solaris
  • 4 — Linux
  • 5 — HPUX
  • 6 — AIX_OS
  • 7 — Solaris_x86

Exact vs wildcard matching

If the match key in the snapshot request has the value wildcard, then values may include the wildcard characters '*' and '?'. As wildcards, '*' matches zero or more unspecified characters and '?' matches exactly one unspecified character. '*' and '?' can be escaped (so that they match a literal asterisk or question mark) by preceding them with a backslash, '\'. Note that to encode a backslash in JSON, it needs to be doubled.

If the match key in the snapshot request has the value exact, then all characters match themselves.
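
The wildcard rules above can be expressed as a translation to a regular expression; a minimal sketch (the function name is illustrative, and this mirrors the documented behaviour rather than the Gateway's actual implementation):

```python
import re

def wildcard_to_regex(pattern):
    """Translate a snapshot-request wildcard pattern to a compiled regex.
    '*' matches zero or more characters, '?' exactly one; a backslash
    escapes the following character so it matches literally."""
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == "\\" and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # escaped literal
            i += 2
            continue
        if c == "*":
            out.append(".*")
        elif c == "?":
            out.append(".")
        else:
            out.append(re.escape(c))
        i += 1
    return re.compile("".join(out) + r"\Z")  # anchor at end of value

print(bool(wildcard_to_regex("Gateway*").match("Gateway-gatewayData")))  # True
print(bool(wildcard_to_regex(r"\*lit").match("*lit")))                   # True
print(bool(wildcard_to_regex("CPU?").match("CPU")))                      # False
```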

Snapshot request for events

A client can request that the Gateway provide a snapshot of selected event data.

The request can be for separate snapshots of severity, snooze, and user assign events.

The snapshot response contains the severity, snooze, or userAssign state of all matching target data items (i.e. dataviews and cells), and also the state of their parent data items (i.e. sampler, managed entity, gateway).

The form and syntax of the request are very similar to the Snapshot metrics request; the only difference is the value of the 'request' property.

The request values are:

  • snapshot-severity
  • snapshot-snooze
  • snapshot-userassignment

For example, to make a request for severity events use:

{
 "request":"snapshot-severity",
 "target":{
          [YOUR key:value PAIRS HERE]
          },
 "match":"exact"
}

Snapshot request for all events and metrics

A client can request that the Gateway provide a snapshot of all event (severity, snooze, and userAssign) and metrics data. To make a request for this data, send a request in the following form:

{
 "request":"snapshot-all",
 "target":{
           [YOUR key:value PAIRS HERE]
          },
 "match":"exact"
}

Appendix: Payload schema

{
  "id": "http://schema.itrsgroup.com/geneos/publishing.json",
  "$schema": "http://json-schema.org/draft-04/schema#",

  "description": "Schema for messages published by Gateway.",

  "type": "object",
  "required": [ "data", "operation" ],
  "additionalProperties": false,

  "properties": {
    "data": {
      "type": "object",
      "oneOf": [
        { "$ref": "#/definitions/directoryData" },
        { "$ref": "#/definitions/metricsData" },
        { "$ref": "#/definitions/metadataData" }
      ]
    },
    "operation": {
      "enum": [ "create", "update", "delete", "repeat", "snapshot" ]
    }
  },

  "definitions": {
    "directoryData": {
      "oneOf": [
        { "$ref": "#/definitions/probeData" },
        { "$ref": "#/definitions/managedEntityData" },
        { "$ref": "#/definitions/dataviewData" }
      ]
    },
    "probeData": {
      "description": "Schema for messages in the \"probes\" topic.",
      "properties": {
        "timestamp": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "name": {
          "type": "string",
          "description": "Unique name for the probe."
        },
        "gateway": {
          "type": "string",
          "description": "Name of the publishing gateway."
        },
        "osType": {
          "enum": [ "Linux", "Windows", "AIX", "Unknown", "Virtual" ],
          "description": "Host operating system type."
        },
        "HostName": {
          "type": "string",
          "description": "Configuration parameter."
        },
        "Port": {
          "type": "string", "pattern": "^[0-9]+$",
          "description": "Configuration parameter; a port number as a string."
        },
        "ConState": {
          "type": "string",
          "description": "Status of probe connection. For example,  'Up', 'Down', 'Unreachable', 'Rejected'."
        },
        "OS": {
		  "type": "string",
          "description": "Run time parameter: The actual operating system of the probe."
        },
        "Version": {
          "type": "string",
          "description": "Run time parameter: The probe version."
        }
      },
      "required": [ "timestamp", "name", "gateway", "osType", "ConState" ],
      "additionalProperties": false
    },
    "managedEntityData": {
      "description": "Schema for messages in the \"managedEntities\" topic.",
      "properties": {
        "timestamp": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "name": {
          "type": "string",
          "description": "unique name for the entity."
        },
        "probe": {
          "type": "string",
          "description": "probe providing metrics for the entity."
        },
        "gateway": {
          "type": "string",
          "description": "Name of the publishing gateway."
        },
        "attributes": {
          "type": "object",
          "description": "Attributes configured for the entity.",
          "additionalProperties": { "type": "string" }
        }
      },
      "required": [ "timestamp", "name", "probe", "gateway", "attributes" ],
      "additionalProperties": false
    },
    "dataviewData": {
      "description": "Schema for messages in the \"dataviews\" topic.",
      "properties": {
        "timestamp": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "dataview": {
          "type": "string",
          "description": "Data view name, unique within sampler, type and entity."
        },
        "sampler": {
          "type": "string",
          "description": "Name of sampler providing data view."
        },
        "pluginName": {
          "type": "string",
          "description": "Name of plug-in implementing sampler."
        },
        "type": {
          "type": "string",
          "description": "Type containing sampler (may be empty string)."
        },
        "managedEntity": {
          "type": "string",
          "description": "Managed entity containing sampler."
        },
        "probe": {
          "type": "string",
          "description": "Probe hosting the sampler."
        },
        "gateway": {
          "type": "string",
          "description": "Name of the publishing gateway."
        },
        "topicSuffix": {
          "type": "string",
          "description": "Dataview path used as key for metrics messages relating to this dataview."
        },

      },
      "required": [ "timestamp", "dataview", "sampler", "pluginName", "type", "managedEntity", "probe", "gateway", "topicSuffix", "availableTopics" ],
      "additionalProperties": false
    },
    "metricsData": {
       "oneOf": [
        { "$ref": "#/definitions/enrichedHeadlineData" },
        { "$ref": "#/definitions/enrichedTableData" },
        { "$ref": "#/definitions/rawHeadlineData" },
        { "$ref": "#/definitions/rawTableData" }
      ]
    },
    "targetForMetrics": {
      "description": "Geneos path information identifying the dataview containing the metrics.",
      "properties": {
        "gateway": {
          "type": "string"
        },
        "probe": {
          "type": "string"
        },
        "managedEntity": {
          "type": "string"
        },
        "type": {
          "type": "string"
        },
        "sampler": {
          "type": "string"
        },
        "dataview": {
          "type": "string"
        },
        "filter": {
          "type": "object",
          "description": "Useful additional information that helps determine the set of metrics to expect.",
          "required": [ "osType", "pluginName" ],
          "properties": {
            "osType": {
              "type": "string",
              "description": "The operating system of the probe hosting the dataview."
            },
            "pluginName": {
              "type": "string",
              "description": "The name of the plug-in implementing the sampler."
            }
          }
        }
      },
      "required": [ "gateway", "probe", "managedEntity", "type", "sampler", "dataview", "filter" ],
      "additionalProperties": false
    },
    "enrichedHeadlineData": {
      "description": "Schema for messages in the \"enriched.headline\" topic.",
      "properties": {
        "sampleTime": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetrics"
        },
        "samplingStatus": {
          "type": "string",
          "description": "Sampling status of the dataview. Will be followed by other headlines, including those computed by rules."
        }
      },
      "required": [ "sampleTime", "target", "samplingStatus" ],
      "additionalProperties": { "type": "string" }
    },
    "enrichedTableData": {
      "description": "Schema for messages in the \"enriched.table\" topic.",
      "properties": {
        "sampleTime": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetrics"
        },
        "name": {
          "type": "string",
          "description": "Name of the row."
        },
        "row": {
          "type": "object",
          "description": "Metrics data for the row, including values computed by rules",
          "additionalProperties": { "type": "string" }
        }
      },
      "required": [ "sampleTime", "target", "name", "row" ],
      "additionalProperties": false
    },
    "rawHeadlineData": {
      "description": "Schema for messages in the \"raw.headline\" topic.",
      "properties": {
        "sampleTime": {
          "type": "string", "format": "date-time",
          "description": "Sample time reported by probe."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetrics"
        },
        "samplingStatus": {
          "type": "string",
          "type": "string",
          "description": "Sampling status of the dataview. Will be followed by other headlines, excluding those computed by rules."
        }
      },
      "required": [ "sampleTime", "target", "samplingStatus" ],
      "additionalProperties": { "type": "string" }
    },
    "rawTableData": {
      "description": "Schema for messages in the \"raw.table\" topic.",
      "properties": {
        "sampleTime": {
          "type": "string", "format": "date-time",
          "description": "Sample time reported by probe."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetrics"
        },
        "name": {
          "type": "string",
          "description": "Name of the row."
        },
        "row": {
          "type": "object",
          "description": "Metrics data for the row, excluding values computed by rules",
          "additionalProperties": { "type": "string" }
        }
      },
      "required": [ "sampleTime", "target", "name", "row" ],
      "additionalProperties": false
    },
    "metadataData": {
       "oneOf": [
        { "$ref": "#/definitions/severityData" },
        { "$ref": "#/definitions/snoozeData" },
        { "$ref": "#/definitions/userAssignmentData" }
      ]
    },
    "targetForMetadata": {
      "description": "Geneos path information identifying the item to which metadata applies.",
      "properties": {
        "gateway": {
          "type": "string"
        },
        "probe": {
          "type": "string"
        },
        "managedEntity": {
          "type": "string"
        },
        "type": {
          "type": "string"
        },
        "sampler": {
          "type": "string"
        },
        "dataview": {
          "type": "string"
        },
        "headline": {
          "type": "string"
        },
        "row": {
          "type": "string"
        },
        "column": {
          "type": "string"
        },
        "filter": {
          "type": "object",
          "description": "Allows the same criteria to be used when filtering metadata as for metrics.",
          "required": [ "osType" ],
          "properties": {
            "osType": {
              "type": "string",
              "description": "The operating system of the probe."
            },
            "pluginName": {
              "type": "string",
              "description": "The name of the plug-in implementing the sampler. Present when dataview is also present."
            }
          }
        }
      },
      "required": [ "gateway" ],
      "dependencies": {
        "managedEntity": [ "probe" ],
        "type": [ "managedEntity", "sampler" ],
        "sampler": [ "managedEntity", "type" ],
        "dataview": [ "sampler" ],
        "headline": [ "dataview" ],
        "row": [ "dataview", "column" ],
        "column": [ "dataview", "row" ],
        "filter": [ "probe" ]
      },
      "additionalProperties": false
    },
    "severityData": {
      "description": "Schema for messages in the \"metadata.severity\" topic.",
      "properties": {
        "timestamp": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetadata"
        },
        "severity": {
          "enum": [ "UNDEFINED", "OK", "WARNING", "CRITICAL", "UNKNOWN" ],
          "description": "String representing the \"state/@severity\" attribute of the item."
        },
        "active": {
          "type": "boolean",
          "description": "The value of the \"state/@active\" attribute of the item."
        },
        "snoozed": {
          "type": "boolean",
          "description": "Whether the data item is currently snoozed."
        },
        "snoozedParents": {
          "type": "integer",
          "description": "The number of parent data items that are snoozed."
        },
        "userAssigned": {
          "type": "boolean",
          "description": "Whether a user is assigned to the data item."
        },
        "value": {
          "type": "object",
          "description": "Value of the dataitem.",
          "properties": {
            "cell": {
              "type": "string",
              "description": "Value of the cell as is."
            },
            "dateTime": {
              "type": "string", "format" : "date-time",
              "description": "The value of the cell as an ISO formatted date-time"
            },
            "number" : {
              "type": "number",
              "description": "The value of the cell as double."
            }
          },
          "required": ["cell"]
        }
      },
      "required": [ "timestamp", "target", "severity", "active" ],
      "additionalProperties": false
    },
    "snoozeData": {
      "description": "Schema for messages in the \"metadata.snooze\" topic.",
      "properties": {
        "timestamp": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetadata"
        },
        "snoozed": {
          "oneOf": [
            {
              "description": "Schema for \"snoozed\" object for an item which is not snoozed",
              "type": "object",
              "properties": {
                "snoozed": {
                  "enum": [ false ]
                },
                "snoozedBy": {
                  "type": "string",
                  "description": "Name of the user/system that unsnoozed the item."
                }
              },
              "required": [ "snoozed" ],
              "additionalProperties": false
            },
            {
              "description": "Schema for \"snoozed\" object for an item which is snoozed",
              "type": "object",
              "properties": {
                "snoozed": {
                  "enum": [ true ]
                },
                "snoozedBy": {
                  "type": "string",
                  "description": "Username of the user who snoozed the item."
                },
                "comment": {
                  "type": "string",
                  "description": "User entered comment."
                },
                "period": {
                  "enum": [ "Manual", "SeverityTo", "SeverityFrom", "Time", "DateTime", "ValueChanges", "SeverityToOrTime", "Until", "Unknown" ],
                  "description": "Type of snooze period."
                },
                "untilSeverity": {
                  "enum": [ "UNDEFINED", "OK", "WARNING", "CRITICAL", "UNKNOWN" ],
                  "description": "String representing the associated severity for the \"SeverityFrom\", \"SeverityTo\", \"SeverityToOrTime\" and \"Until\" period types."
                },
                "untilTime": {
                  "type": "string", "format": "date-time",
                  "description": "Time when the data item will become unsnoozed."
                },
                "untilValue": {
                  "type": "string",
                  "description": "Value of the data item when snoozed, for the \"ValueChanges\" or \"Until\" period types."
                }
              },
              "required": [ "snoozed", "snoozedBy", "comment", "period" ],
              "additionalProperties": false
            }
          ]
        }
      },
      "required": [ "timestamp", "target", "snoozed" ],
      "additionalProperties": false
    },
    "userAssignmentData": {
      "description": "Schema for messages in the \"metadata.userassignment\" topic.",
      "properties": {
        "timestamp": {
          "type": "string", "format": "date-time",
          "description": "Time message is published."
        },
        "target": {
          "type": "object",
          "$ref": "#/definitions/targetForMetadata"
        },
        "userAssignment": {
          "oneOf": [
            {
              "description": "Schema for \"userAssignment\" object for an item which is not assigned to a user.",
              "type": "object",
              "properties": {
                "userAssigned": {
                  "enum": [ false ]
                },
                "unassignedBy": {
                  "type": "string",
                  "description": "Username of the user who unassigned the item."
                }
              },
              "required": [ "userAssigned" ],
              "additionalProperties": false
            },
            {
              "description": "Schema for \"userAssignment\" object for an item which is assigned to a user.",
              "type": "object",
              "properties": {
                "userAssigned": {
                  "enum": [ true ]
                },
                "assignedTo": {
                  "type": "string",
                  "description": "Username of the user the item is assigned to."
                },
                "assignedBy": {
                  "type": "string",
                  "description": "Username of the user who assigned the item."
                },
                "comment": {
                  "type": "string",
                  "description": "User entered comment."
                },
                "period": {
                  "enum": [
                    "Manual",
                    "UntilOk",
                    "Until severity changes to specified",
                    "Until severity changes from specified",
                    "Until a specific date / time",
                    "Until severity changes to specified or until a specific date / time",
                    "Until a change in value",
                    "Unknown"
                  ],
                  "description": "Type of user assignment period."
                },
                "severity": {
                  "enum": [ "UNDEFINED", "OK", "WARNING", "CRITICAL", "UNKNOWN" ],
                  "description": "Severity which the item should change to (\"UntilOk\", \"Until severity changes to specified\") or from (\"Until severity changes from specified\") for the user assignment to lapse."
                },
                "untilTime": {
                  "type": "string",
                  "description": "Time at which user assignment will lapse."
                },
                "untilValue": {
                  "type": "string",
                  "description": "Value from which item should change for user assignment to lapse."
                }
              },
              "required": [ "userAssigned", "assignedTo", "comment", "period" ],
              "additionalProperties": false
            }
          ]
        }
      },
      "required": [ "timestamp", "target", "userAssignment" ],
      "additionalProperties": false
    }
  }
}
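The "snoozed" object in the "metadata.snooze" schema above is discriminated by its boolean "snoozed" field: the set of accompanying properties differs depending on whether the item is snoozed or unsnoozed. As an illustration, the following sketch shows how a Kafka consumer might summarise such a message after JSON-decoding it. The payload values and the `describe_snooze` helper are hypothetical; only the field names come from the schema.

```python
import json


def describe_snooze(message: dict) -> str:
    """Summarise the "snoozed" object of a "metadata.snooze" message."""
    snoozed = message["snoozed"]
    if not snoozed["snoozed"]:
        # Unsnoozed variant: only "snoozed" is required; "snoozedBy" is optional.
        return f'unsnoozed by {snoozed.get("snoozedBy", "unknown")}'
    # Snoozed variant: "snoozedBy", "comment" and "period" are required.
    parts = [f'snoozed by {snoozed["snoozedBy"]} ({snoozed["period"]})']
    if "untilTime" in snoozed:
        parts.append(f'until {snoozed["untilTime"]}')
    if snoozed.get("comment"):
        parts.append(f'comment: {snoozed["comment"]}')
    return ", ".join(parts)


# Illustrative payload (values are hypothetical; "target" contents omitted).
raw = """{
  "timestamp": "2021-06-01T12:00:00Z",
  "target": {},
  "snoozed": {
    "snoozed": true,
    "snoozedBy": "admin",
    "comment": "Planned maintenance",
    "period": "Time",
    "untilTime": "2021-06-01T13:00:00Z"
  }
}"""
print(describe_snooze(json.loads(raw)))
# → snoozed by admin (Time), until 2021-06-01T13:00:00Z, comment: Planned maintenance
```

The same discriminator pattern applies to the "userAssignment" object in the "metadata.userassignment" schema, keyed on its boolean "userAssigned" field.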