HTTP-JSON Plugin
Overview
The HTTP-JSON plugin enables the Collection Agent to ingest JSON over HTTP/HTTPS and publish the resulting metrics, attributes, and events into Geneos. Incoming JSON can be passed through unchanged when already in the expected format or transformed using an embedded JQ program to reshape and enrich data, including timestamp handling.
The plugin is designed for integrating REST endpoints, exporters, webhooks, and custom scripts, with configurable request/response limits (timeouts and maximum payload size) and support for common deployment patterns in both secure and non-secure environments.
This plugin includes two separate collectors:
- HTTP-JSON Push Collector - a collector that receives JSON payloads over HTTP.
- HTTP-JSON Pull Collector - a collector that queries JSON payloads over HTTP.
Refer to Deploy the HTTP-JSON Plugin collectors on how to set up and configure the two collectors.
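As an informal illustration of the push model, the following self-contained Python sketch posts a schema-conformant JSON document to a push collector endpoint. A stub HTTP server stands in for the collector so the snippet runs on its own; the endpoint details (path `/ingest`, `Content-Type: application/json`) simply mirror the sample configuration in this guide.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []

class StubCollector(BaseHTTPRequestHandler):
    """Stands in for the HTTP-JSON Push Collector so the example is self-contained."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

server = HTTPServer(("127.0.0.1", 0), StubCollector)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A document already in the schema the collector expects, so no JQ transform is needed.
doc = [{
    "name": "some.gauge.name",
    "dimensions": {"host.name": "some.host.com"},
    "gauge": {"value": 98.9, "unit": "%"},
}]
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/ingest",
    data=json.dumps(doc).encode(),
    headers={"Content-Type": "application/json"},  # satisfies require-content-type
    method="POST",
)
status = urllib.request.urlopen(req).status
server.shutdown()
```

The pull collector inverts this flow: instead of a publisher sending documents, the collector itself issues the HTTP request on a schedule.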
Intended audience
This guide is intended for users who need to ingest JSON payloads from HTTP sources and transform them into Collection Agent data points.
As a user, you should be familiar with HTTP protocol concepts such as methods, headers, and authentication to properly configure the collectors. Additionally, knowledge of JQ is required to transform JSON payloads into the JSON schema required by the collectors.
Prerequisites
The HTTP-JSON plugin requires the following versions of Geneos components:
- Gateway and Netprobe 7.8.x or higher.
- Collection Agent 6.x or higher.
Note
Starting from Collection Agent 5.x, Java 21 is the minimum required version.
The HTTP-JSON binaries are packaged with Netprobe, and are stored in the collection_agent folder. Alternatively, you can download separate binaries for the HTTP-JSON plugin from ITRS Downloads.
Deploy the HTTP-JSON Plugin collectors
The HTTP-JSON Push Collector and HTTP-JSON Pull Collector support Collection Agent publication into Geneos using dynamic managed entities. Setting up these collectors in Geneos involves three primary steps:
- Set up your Collection Agent plugin.
- Configure your mappings.
- Configure your other Dynamic Entities in the Gateway; see Create Dynamic Entities in Collection Agent setup for a more detailed procedure.
Set up your Collection Agent plugin
Use one of the following options to configure the plugin.
- Setting up your collector in the Gateway Setup Editor by adding the following configuration in Dynamic Entities > Collectors. The available collectors for the HTTP-JSON plugin are `HttpJsonPushCollector` and `HttpJsonPullCollector`. For more information on how to set up your collectors, see Collectors in Dynamic Entities.
- Adding the following configuration to the `collection-agent.yml` file on your local machine.
For HTTP-JSON Push Collector:
```yaml
collectors:
  - type: plugin
    class-name: HttpJsonPushCollector
    # Optional but recommended name.
    name: http-push
    # Required. Port on which to receive JSON documents.
    port: 8080
    # Optional. Acceptor thread pool size (default = 2).
    acceptor-thread-pool-size: 2
    # Optional. Worker thread pool size (default = 4).
    worker-thread-pool-size: 4
    # Optional. URL suffix.
    # If specified, the publisher must publish using a URL ending in the specified path.
    path: /ingest
    # Optional. Acceptable HTTP methods. Defaults to [ POST, PUT ].
    # If specified, only JSON documents received via one of the specified methods are accepted.
    methods: [ POST, PUT ]
    # Optional. Maximum content length. Defaults to 1 MiB. Must be in the range [1 KiB, 8 MiB].
    max-content-length: 1048576
    # Optional. Defaults to false.
    # When set, the HTTP 'Content-Type' header must be set to 'application/json',
    # otherwise requests are dropped.
    require-content-type: false
    # Optional. JQ configuration.
    # If not specified, received JSON documents are passed straight through, which implies
    # they must be produced externally in the correct format, including any timestamps in
    # ISO-8601 format.
    jq:
      # Specify EITHER 'path' (external file) OR 'content' (inline), not both.
      # Option 1: External JQ file.
      path: ./path/to/jq_program.jq
      # Option 2: Inline JQ statement (use a YAML multiline string with |).
      content: |
        .[] | {
          name: .metric_name,
          dimensions: .tags,
          gauge: { value: .value }
        }
      # The following options customise the interpretation of timestamps output by
      # the JQ program when it cannot produce ISO-8601 / UTC timestamps.
      # Optional. Defaults to "iso-8601".
      # Valid options are: "iso-8601", "custom", "epoch-seconds", "epoch-millis", "epoch-nanos".
      timestamp-type: iso-8601
      # Optional. Must be specified when "timestamp-type" is "custom".
      # Must be a valid timestamp format (see below).
      timestamp-format: yyyy-MM-dd HH:mm:ss
      # Optional. May be specified when "timestamp-type" is "custom".
      timezone-id: "America/New_York"
    # Optional TLS configuration.
    tls-config:
      # Required.
      cert-file: /path/to/cert_file.pem
      # Required.
      key-file: /path/to/private_key.pem
      # Optional. Specify when mTLS is required.
      trust-chain-file: /path/to/trust_chain.pem
```
For HTTP-JSON Pull Collector:
```yaml
collectors:
  - type: plugin
    class-name: HttpJsonPullCollector
    # Optional but recommended name.
    name: http-pull
    # Optional. Polling interval in milliseconds. Default is 60000 (1 minute).
    collection-interval: 60000
    # Required. List of targets to poll. At least one polling target must be specified.
    targets:
      # Optional. Host component of the HTTP URL. Defaults to 'localhost'.
      - host: localhost
        # Required. Port component of the HTTP URL.
        port: 8080
        # Required. Path and query (if any) components of the HTTP URL.
        path: /resource
        # Optional. HTTP method. Defaults to GET. Supports GET, POST, PUT.
        method: GET
        # Optional. Request body (for POST/PUT requests). Ignored for GET.
        # Provide EITHER inline JSON in 'body.content' OR the path to a file in 'body.path', not both.
        # The default Content-Type is application/json unless overridden in headers.
        body:
          # Inline JSON body.
          content: '{"query": "select * from metrics"}'
          # External file containing the body.
          #path: ./path/to/body.json
        # Optional. Custom request headers (e.g., for authentication or API keys).
        # The default headers are Connection: close and Accept-Encoding: gzip;
        # both can be overridden.
        headers:
          Authorization: "Bearer ${env:HTTP_PULL_BEARER_TOKEN}"
          X-API-Key: ${env:API_KEY}
          X-Custom-Header: "custom-value"
        # Optional. Connection and poll timeout in milliseconds. Default is 10000.
        timeout: 10000
        # Optional. Maximum content length. Defaults to 1 MiB. Must be in the range [1 KiB, 8 MiB].
        max-content-length: 1048576
        # Optional. JQ configuration.
        # If not specified, received JSON documents are passed straight through, which implies
        # they must be produced externally in the correct format, including any timestamps in
        # ISO-8601 format.
        jq:
          # Specify EITHER 'path' (external file) OR 'content' (inline), not both.
          # Option 1: External JQ file.
          path: ./path/to/jq_program.jq
          # Option 2: Inline JQ statement.
          content: |
            [.metrics[] | {
              name: .name,
              dimensions: .labels,
              gauge: { value: .value }
            }]
          # The following options customise the interpretation of timestamps output by
          # the JQ program when it cannot produce ISO-8601 / UTC timestamps.
          # Optional. Defaults to "iso-8601".
          # Valid options are: "iso-8601", "custom", "epoch-seconds", "epoch-millis", "epoch-nanos".
          timestamp-type: iso-8601
          # Optional. Must be specified when "timestamp-type" is "custom".
          # Must be a valid timestamp format (see below).
          timestamp-format: yyyy-MM-dd HH:mm:ss
          # Optional. May be specified when "timestamp-type" is "custom".
          timezone-id: "America/New_York"
        # Optional TLS configuration.
        tls-config:
          # Required.
          cert-file: /path/to/cert_file.pem
          # Required.
          key-file: /path/to/private_key.pem
          # Optional. Specify when mTLS is required.
          trust-chain-file: /path/to/trust_chain.pem
```
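As a rough sketch (the helper name is illustrative, not part of the product), a pull target's `host`, `port`, and `path` fields compose into the polled URL like this, with `localhost` mirroring the documented default for `host`:

```python
# Illustrative only: compose a pull target's request URL from its configuration fields.
def target_url(port: int, host: str = "localhost", path: str = "/") -> str:
    return f"http://{host}:{port}{path}"

url = target_url(8080, path="/resource")
```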
Example HTTP-JSON Push Collector configuration
Using this HTTP-JSON Push Collector configuration:
```yaml
port: 9000
acceptor-thread-pool-size: 2
worker-thread-pool-size: 4
path: /ingest
methods: [ POST, PUT ]
max-content-length: 1048576
require-content-type: false
jq:
  path: /opt/netprobe/jq_program.jq
  timestamp-type: iso-8601
  timestamp-format: yyyy-MM-dd HH:mm:ss
  timezone-id: "America/New_York"
```
This configuration will return:
Note
This result is also based on the Example JQ program used to translate the example JSON document pushed into the collector.
Example HTTP-JSON Pull Collector configuration
Using this HTTP-JSON Pull Collector configuration:
```yaml
targets:
  - host: 127.0.0.1
    port: 8090
    path: /status
    # Optional. HTTP method. Defaults to GET. Supports GET, POST, PUT.
    method: GET
    # Optional. Request body (for POST/PUT requests). Ignored for GET.
    # Provide EITHER inline JSON in 'body.content' OR the path to a file in 'body.path', not both.
    # The default Content-Type is application/json unless overridden in headers.
    #body:
      # Inline JSON body.
      #content: '{"query": "select * from metrics"}'
      # External file containing the body.
      #path: ./path/to/body.json
    headers:
      Authorization: "Bearer ${env:HTTP_PULL_BEARER_TOKEN}"
    timeout: 10000
    max-content-length: 1048576
    jq:
      path: /opt/netprobe/jq_program.jq
      timestamp-type: iso-8601
      timestamp-format: yyyy-MM-dd HH:mm:ss
      timezone-id: "America/New_York"
```
This configuration will return:
Note
This result is also based on the Example JQ program used to translate the example JSON document pulled into the collector.
Mapping configuration
The HTTP-JSON plugin publishes data through the Collection Agent. To display that data in the Gateway as Dynamic Entities and Dataviews, you must configure a dynamic entity mapping. The mapping defines which metric dimension keys are used to:
- Identify and group metrics into an entity.
- Name or partition Dataviews within that entity.
Because the plugin can accept arbitrary upstream JSON, these keys are entirely determined by your payload and/or your JQ transform. In practice, you choose dimension labels that:
- Are stable over time (do not change per sample).
- Uniquely identify the thing you’re monitoring (entity key).
- Provide meaningful sub-grouping (dataview key), such as service, endpoint, or component name.
Example mapping configuration
In the example JQ program, the plugin emits dimensions containing hostname and servicecheckname. Configure the mapping with the following options in the Geneos items:
- Entity key: `hostname` (required) — to create one Dynamic Entity per host.
- Dataview key: `servicecheckname` (optional) — to create a separate Dataview per service check under that host.
HTTP Authorization
Configure the authentication by setting the Authorization header under a target’s headers map.
Bearer authentication
```yaml
headers:
  Authorization: "Bearer ${env:HTTP_PULL_BEARER_TOKEN}"
```
Basic authentication
Provide basic authentication as a Base64-encoded token in accordance with RFC 7617. Use the following format for the Authorization header:
```
Authorization: Basic <base64(username:password)>
```
To generate the Base64-encoded token, use the following command:
```shell
echo -n 'username:password' | base64
```
Then reference the encoded value in your configuration:
```yaml
headers:
  Authorization: "Basic ${env:BASIC_AUTH_B64}"
```
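If you prefer to generate the token programmatically rather than with the shell command above, a small Python equivalent (the helper name is illustrative) produces the same RFC 7617 value:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build an RFC 7617 Basic Authorization header value."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("username", "password")
```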
HTTP Security Headers
Both collectors capture and publish HTTP security headers as Entity Attributes for compliance, audit, and security monitoring purposes. The following headers are collected when present:
- `Authorization` (credentials masked as `<scheme> ******`)
- `User-Agent`
- `X-Forwarded-For`
- `Strict-Transport-Security`
- `Content-Security-Policy`
The collectors publish these headers using the naming convention http_header_<header-name> (for example, http_header_authorization, http_header_user_agent).
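The convention can be sketched as a small transform — lowercase the header name and replace hyphens with underscores — inferred from the examples above (an illustration, not the plugin's actual code):

```python
def attribute_name(header: str) -> str:
    """Map an HTTP header name to the published Entity Attribute name."""
    return "http_header_" + header.lower().replace("-", "_")
```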
JSON schema
High level description
Regardless of the input, the JQ program must output a JSON document that adheres to the following:
- An array of JSON objects, with each object representing a single data point.
- Each JSON object must contain the following fields:
  - `"name"` — the data point name.
  - `"dimensions"` — the data point dimensions object representing the emitting entity.
  - An object specifying type-specific information, named one of: `"gauge"`, `"counter"`, `"status"`, `"attribute"`, `"log"`.
- Each JSON object may contain the following fields:
  - `"timestamp"` — the data point timestamp in ISO-8601 / UTC format (for example, `"2024-07-13T08:56:34Z"`).
  - `"properties"` — the data point properties object for the emitting entity.
The following example shows a valid JSON document specifying a single gauge data point:
```json
[
  {
    "name": "some.gauge.name",
    "dimensions": {
      "host.name": "some.host.com",
      "service.name": "some.service"
    },
    "properties": {
      "important.property": "something.important"
    },
    "timestamp": "2024-07-13T08:56:34Z",
    "gauge": {
      "value": 98.9,
      "unit": "%"
    }
  }
]
```
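A lightweight Python sketch of these rules (illustrative only, not the collector's actual validator) can help check documents before sending them:

```python
# Check the documented schema rules: required "name" and "dimensions",
# and exactly one type-specific object per data point.
TYPE_FIELDS = {"gauge", "counter", "status", "attribute", "log"}

def validate_datapoint(dp: dict) -> bool:
    if "name" not in dp or "dimensions" not in dp:
        return False
    return len(TYPE_FIELDS & dp.keys()) == 1

doc = [{
    "name": "some.gauge.name",
    "dimensions": {"host.name": "some.host.com"},
    "timestamp": "2024-07-13T08:56:34Z",
    "gauge": {"value": 98.9, "unit": "%"},
}]
ok = all(validate_datapoint(dp) for dp in doc)
```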
JSON definitions
“name” field
The "name" field represents the name of the data point series. The combination of "name" and "dimensions" must be globally unique.
“dimensions” object
The "dimensions" object represents the identity of the entity or resource emitting the data point series. The combination of "name" and "dimensions" must be globally unique.
Dimension ordering
The plugin preserves the insertion order of dimensions as they appear in the JSON input. The plugin uses SequencedMap<String, String> for dimension storage, which maintains the order in which dimensions were added.
For example:
```json
{
  "dimensions": {
    "datacenter": "us-west",
    "application": "web-service",
    "cluster": "prod-cluster",
    "instance": "server-01"
  }
}
```
In this example, the dimensions are accessible in the exact order they appear in the JSON:
- `datacenter` (first)
- `application`
- `cluster`
- `instance` (last)
The plugin preserves this ordering throughout the data processing pipeline. You can rely on this ordering for consistent dimension iteration and display.
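For example, Python's `json` module also preserves key order on parsing, so the documented ordering survives a round trip (illustration only):

```python
import json

payload = (
    '{"dimensions": {"datacenter": "us-west", "application": "web-service", '
    '"cluster": "prod-cluster", "instance": "server-01"}}'
)
# dicts preserve insertion order, mirroring the SequencedMap behaviour described above.
order = list(json.loads(payload)["dimensions"])
```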
“properties” object
The "properties" object represents additional information about the specific data point being exported.
“timestamp” field
The "timestamp" field represents the sample timestamp of the specific data point. By default, timestamps must be in UTC and expressed in ISO-8601 format.
JQ has limited non-platform-specific date/time processing. If you cannot emit a timestamp in UTC and/or ISO-8601 format, you can use the following alternative formats:
- Epoch nanoseconds (numeric)
- Epoch milliseconds (numeric)
- Epoch seconds (numeric)
- Custom (string) with valid formats specified here.
For custom timestamp formats, if the timestamp represents non-UTC time, you can specify a timezone as either:
- An abbreviation such as `"PST"`
- A full name such as `"America/Los_Angeles"`
- A custom ID such as `"GMT-8:00"`
You specify the timestamp format as part of the jq section of the plugin’s configuration. Refer to the collector configuration sections for more information.
If no "timestamp" field is present, the collector uses the current time.
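If your source emits epoch-seconds timestamps (such as the `last_check` field in the example payload later in this guide), another option is to pre-convert them to ISO-8601 UTC before publishing. A minimal Python sketch:

```python
from datetime import datetime, timezone

def epoch_seconds_to_iso(epoch: int) -> str:
    """Convert epoch seconds to the collector's default ISO-8601 / UTC format."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

iso = epoch_seconds_to_iso(1721117325)
```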
Type object
Each JSON object must contain exactly one of the following type fields:
| Object | Description |
|---|---|
| `"gauge"` | |
| `"counter"` | |
| `"status"` | `"value"`: String representing the status. |
| `"attribute"` | |
| `"log"` | |

See the example JSON document below for the fields carried by each type.
Example JSON document
The following example shows a JSON document specifying data points (associated with the same entity/resource by dimensions) of each type:
```json
[
  {
    "name": "some.gauge.name",
    "dimensions": {
      "host.name": "some.host.com",
      "service.name": "some.service"
    },
    "properties": {
      "some.property.key": "some.property.value"
    },
    "timestamp": "2024-07-13T08:56:34Z",
    "gauge": {
      "value": 98.9,
      "unit": "%"
    }
  },
  {
    "name": "some.counter.name",
    "dimensions": {
      "host.name": "some.host.com",
      "service.name": "some.service"
    },
    "properties": {
      "some.property.key": "some.property.value"
    },
    "timestamp": "2024-07-13T08:56:34Z",
    "counter": {
      "value": 98,
      "duration": 10,
      "unit": "s"
    }
  },
  {
    "name": "some.status.name",
    "dimensions": {
      "host.name": "some.host.com",
      "service.name": "some.service"
    },
    "properties": {
      "some.property.key": "some.property.value"
    },
    "timestamp": "2024-07-13T08:56:34Z",
    "status": {
      "value": "OK - nothing to see here"
    }
  },
  {
    "name": "some.numeric.attribute.name",
    "dimensions": {
      "host.name": "some.host.com",
      "service.name": "some.service"
    },
    "properties": {
      "some.property.key": "some.property.value"
    },
    "timestamp": "2024-07-13T08:56:34Z",
    "attribute": {
      "value": 128,
      "unit": "MB"
    }
  },
  {
    "name": "some.log.name",
    "dimensions": {
      "host.name": "some.host.com",
      "service.name": "some.service"
    },
    "properties": {
      "some.property.key": "some.property.value"
    },
    "timestamp": "2024-07-13T08:56:34Z",
    "log": {
      "severity": "info",
      "message": "Something happened"
    }
  }
]
```
Example JQ program
Given the following JSON document:
```json
[
  {
    "info": "opsview_resultsexporter 2024-07-16 08:08:48",
    "message": {
      "hostname": "opsview",
      "servicecheckname": "Opsview - Autodiscovery Manager - Status",
      "current_state": 0,
      "problem_has_been_acknowledged": false,
      "is_hard_state": true,
      "check_attempt": 1,
      "last_check": 1721117325,
      "execution_time": 0.214479,
      "stdout": "METRIC OK - CPU Usage is 0.00%, Memory Usage is 2.00%, Memory Used is 160.53MB, Child Count is 0, Uptime is 22h 42m ",
      "perf_data": {
        "CPU": "0.00%",
        "Memory": "2.00%",
        "Memory_Usage": "160.53MB",
        "Children": "0"
      },
      "metadata": {
        "hostname_run_on": "opsview-appliance"
      }
    }
  },
  {
    "info": "opsview_resultsexporter 2024-07-16 08:08:48",
    "message": {
      "hostname": "opsview",
      "servicecheckname": "Opsview - TimeSeries Enqueuer - Status",
      "current_state": 0,
      "problem_has_been_acknowledged": false,
      "is_hard_state": true,
      "check_attempt": 1,
      "last_check": 1721117325,
      "execution_time": 0.417679,
      "stdout": "METRIC OK - CPU Usage is 0.00%, Memory Usage is 0.70%, Memory Used is 56.36MB, Child Count is 1, Uptime is 22h 34m ",
      "perf_data": {
        "CPU": "0.00%",
        "Memory": "0.70%",
        "Memory_Usage": "56.36MB",
        "Children": "1"
      },
      "metadata": {
        "hostname_run_on": "opsview-appliance"
      }
    }
  }
]
```
You can use the following JQ program to translate it into the required internal format:
```jq
def get_timestamp:
  scan("\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}") |
  sub(" "; "T") |
  sub("$"; "Z");

def get_dimensions:
  if .servicecheckname == null
  then
    { hostname: .hostname }
  else
    { hostname: .hostname, servicecheckname: .servicecheckname }
  end;

def status_metric($timestamp; $dimensions):
  def get_name($default):
    if . == null then $default else . end;
  {
    name: .servicecheckname | get_name("host_check"),
    timestamp: $timestamp,
    dimensions: $dimensions,
    status: {
      # Currently, CA drops status metrics with values > 64 chars in length.
      value: .stdout | scan(".{0,64}")
    }
  };

def perf_metrics($timestamp; $dimensions):
  def get_gauge:
    . as $value_unit
    | ($value_unit | scan("[0-9.+-]+")) as $value
    | ($value | length) as $value_length
    | {
        value: $value,
        unit: $value_unit[$value_length:]
      };
  . | to_entries | .[] | {
    name: .key,
    timestamp: $timestamp,
    dimensions: $dimensions,
    gauge: .value | get_gauge
  };

.[] as $in
| ($in.info | get_timestamp) as $timestamp
| ($in.message | get_dimensions) as $dimensions
| ($in.message | status_metric($timestamp; $dimensions)) as $status_metric
| [
    $status_metric,
    ($in.message.perf_data | perf_metrics($timestamp; $dimensions))
  ]
```
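The `get_gauge` helper's value/unit split can be mirrored outside JQ as well; this Python sketch (illustrative, not part of the plugin) uses the same `"[0-9.+-]+"` pattern to separate the numeric value from its trailing unit:

```python
import re

def get_gauge(value_unit: str) -> dict:
    """Split a perf_data value such as '160.53MB' into value and unit."""
    value = re.search(r"[0-9.+-]+", value_unit).group()
    # Like the JQ version, this assumes the numeric part sits at the start.
    return {"value": value, "unit": value_unit[len(value):]}

gauge = get_gauge("160.53MB")
```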
Units of measurement
The following tables list the acceptable unit symbols and their corresponding units of measurement, which come from ISO 80000-13.
Storage size (bits)
| Symbol | Code | Factor |
|---|---|---|
| bit | bits | 1.0 |
| kbit | kilobits | 1000.0 |
| Mbit | megabits | 1000000.0 |
| Gbit | gigabits | 1000000000.0 |
| Tbit | terabits | 1000000000000.0 |
Storage size (bytes)
| Symbol | Code | Factor |
|---|---|---|
| B | bytes | 1.0 |
| kB | kilobytes | 1000.0 |
| KiB | kibibytes | 1024.0 |
| MB | megabytes | 1000000.0 |
| MiB | mebibytes | 1048576.0 |
| GB | gigabytes | 1000000000.0 |
| GiB | gibibytes | 1073741824.0 |
| TB | terabytes | 1000000000000.0 |
| TiB | tebibytes | 1099511627776.0 |
| PB | petabytes | 1000000000000000.0 |
| PiB | pebibytes | 1125899906842624.0 |
| EB | exabytes | 1000000000000000000.0 |
| EiB | exbibytes | 1152921504606846976.0 |
Network throughput (bits)
| Symbol | Code | Factor |
|---|---|---|
| bit/s | bits per second | 1.0 |
| kbit/s | kilobits per second | 1000.0 |
| Mbit/s | megabits per second | 1000000.0 |
| Gbit/s | gigabits per second | 1000000000.0 |
| Tbit/s | terabits per second | 1000000000000.0 |
Network throughput (bytes)
| Symbol | Code | Factor |
|---|---|---|
| B/s | bytes per second | 1.0 |
| kB/s | kilobytes per second | 1000.0 |
| KiB/s | kibibytes per second | 1024.0 |
| MB/s | megabytes per second | 1000000.0 |
| GB/s | gigabytes per second | 1000000000.0 |
| TB/s | terabytes per second | 1000000000000.0 |
Rate per second
| Symbol | Code | Factor |
|---|---|---|
| /s | per second | 1.0 |
Rate per minute
| Symbol | Code | Factor |
|---|---|---|
| /min | per minute | 1.0 |
Duration
| Symbol | Code | Factor |
|---|---|---|
| ns | nanoseconds | 1.0 |
| µs | microseconds | 1000.0 |
| ms | milliseconds | 1000000.0 |
| s | seconds | 1000000000.0 |
| min | minutes | 60000000000.0 |
| h | hours | 3600000000000.0 |
| d | days | 86400000000000.0 |
Temperature
| Symbol | Code | Factor |
|---|---|---|
| °C | degrees Celsius | 1.0 |
Clock speeds
| Symbol | Code | Factor |
|---|---|---|
| Hz | hertz | 1.0 |
| MHz | megahertz | 1000000.0 |
| GHz | gigahertz | 1000000000.0 |
Fractional representation
| Symbol | Code | Factor |
|---|---|---|
| % | percent | 1.0 |
| — | fraction | 100.0 |
CPU usage (Kubernetes)
| Symbol | Code | Factor |
|---|---|---|
| — | cores | 1000000000.0 |
| — | nanocores | 1.0 |
| — | microcores | 1000.0 |
| — | millicores | 1000000.0 |
Epoch time
| Symbol | Code | Factor |
|---|---|---|
| ns | epoch nanoseconds | 1.0 |
| ms | epoch milliseconds | 1000000.0 |
Length
| Symbol | Code | Factor |
|---|---|---|
| m | metres | 1.0 |
| km | kilometres | 1000.0 |
Electric potential
| Symbol | Code | Factor |
|---|---|---|
| V | volts | 1.0 |
| kV | kilovolts | 1000.0 |
Electric current
| Symbol | Code | Factor |
|---|---|---|
| A | amperes | 1000.0 |
| mA | milliamperes | 1.0 |
Energy
| Symbol | Code | Factor |
|---|---|---|
| J | joules | 1.0 |
| kJ | kilojoules | 1000.0 |
Power
| Symbol | Code | Factor |
|---|---|---|
| W | watts | 1.0 |
Weight
| Symbol | Code | Factor |
|---|---|---|
| kg | kilograms | 1000.0 |
| g | grams | 1.0 |
Currency
| Symbol | Code | Factor |
|---|---|---|
| AUD | AUD | 1.0 |
| CAD | CAD | 1.0 |
| CHF | CHF | 1.0 |
| CNY | CNY | 1.0 |
| DKK | DKK | 1.0 |
| EUR | EUR | 1.0 |
| GBP | GBP | 1.0 |
| HKD | HKD | 1.0 |
| JPY | JPY | 1.0 |
| NOK | NOK | 1.0 |
| NZD | NZD | 1.0 |
| SEK | SEK | 1.0 |
| SGD | SGD | 1.0 |
| USD | USD | 1.0 |
| ZAR | ZAR | 1.0 |
Byte-seconds
| Symbol | Code | Factor |
|---|---|---|
| — | byte-seconds | 1.0 |