Collection Agent configuration reference
Overview
The Collection Agent configuration reference describes how to set up collectors, reporters, workflows, and plugins.
Caution
Before you upgrade to the latest version of Geneos and Collection Agent, read Upgrade to Collection Agent 3.x, which outlines the breaking changes that may affect your upgrade.
Configuration reference
The following example YAML file may require changes to match your project's configuration:
# Collection Agent Configuration Reference
# Directory containing plugin artifacts. Required.
plugin-directory: /usr/local/lib/geneos/plugins
# Agent monitoring and self-metrics.
# This section is optional.
monitoring:
# Optional. Defaults to true.
enabled: true
# Health and metrics reporting interval in milliseconds. Defaults to 10 seconds.
reporting-interval: 10000
# The agent will listen on an HTTP port so that an external system can probe its health.
# In Kubernetes, this can be used in conjunction with the readiness/liveness probes.
# 200 is returned if the agent is started, 500 otherwise.
health-probe:
# Optional. Defaults to true.
enabled: true
# HTTP listen port, defaults to 8080.
listen-port: 8080
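# For example, a Kubernetes liveness probe that targets this port could look like the
# following sketch. This is illustrative only; the probe path and timing values are
# assumptions and are not part of this reference.
#
#   livenessProbe:
#     httpGet:
#       path: /
#       port: 8080
#     initialDelaySeconds: 10
#     periodSeconds: 10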
# Agent self metrics.
self-metrics:
# Whether to enable self metric collection (optional, defaults to true).
enabled: true
# Dimensions to add to all self metrics from this agent (optional).
dimensions:
custom: value
#
# A collector creates data points and submits them to the workflow.
#
collectors:
# Collector type (all collectors are of type 'plugin').
- type: plugin
# Optional. Defaults to true.
enabled: true
# Optional name used in logging. If omitted, an auto-generated name will be assigned.
name: statsd
# Simple class name of the collector in the plugin jar.
class-name: StatsdServer
# Data point processors applied to data points published from this collector.
# This optional processing chain allows for manipulating and/or filtering data points prior
# to workflow publication. This is the recommended way to perform edge processing, when applicable, so that
# unneeded data can be dropped before incurring workflow overhead.
processors:
# For example, drop all events collected from statsd. See "workflow -> common -> processors" section for
# details on each type of processor.
- type: drop-filter
matchers:
- type: kind
kind: generic-event
# Additional properties are specific to each collector type. See plugin's configuration reference for details.
listen-port: 8125
#
# A reporter receives data points from the workflow and sends them to a remote target.
# At least one reporter must be configured.
#
reporters:
# Each reporter has these common configuration settings:
- type: [ logging | tcp | routing | plugin ]
# Optional. Defaults to true.
enabled: true
# Reporter name. Referenced from a pipeline's 'reporter' setting.
name: myReporterName
# Persist all data points sent to this reporter. It's intended for testing purposes only
# and is disabled by default.
recording:
enabled: false
# Directory where the recording is saved. If undefined, a directory with the name of the reporter
# will be created in the current working directory.
directory: /var/lib/geneos/recording
# Maximum number of data points to record. Recording will stop when capacity is reached.
# Default value shown.
capacity: 1000000
# Optional. Enables store-and-forward based reporting.
# This is typically only used when a reporter is being used as a routing reporter destination.
store-and-forward:
# Mandatory. Root directory for store and forward persisted messages.
directory: /var/lib/geneos/collection-agent
# Optional. Store capacity. Defaults to 8192 (8 Ki messages).
capacity: 8192
# Optional. Max store file length in bytes. Defaults to 16777216 (16 MiB).
max-file-length: 16777216
# Logging reporter that simply logs each data point to stdout. This is intended for testing purposes only.
- type: logging
name: logging
# Log level at which each data point is logged. Can be: error, info (default), warn, debug or trace.
level: info
# TCP reporter that sends data points over a TCP connection.
- type: tcp
name: myTcpReporter
# The TCP server hostname. Default value shown.
hostname: localhost
# The TCP server port. Default value shown.
port: 7137
# The TCP server connection timeout in milliseconds. Default value shown.
connection-timeout-millis: 10000
# The TCP server write timeout in milliseconds. Default value shown.
write-timeout-millis: 10000
# Routing reporter which can be used to route data points via 1 of N other reporters based on matching criteria.
# Note that for other reporters to be valid routing destinations they must:
# - appear in the configuration before this reporter
# - have a 'name' attribute by which they can be referenced.
- type: routing
# Optional but recommended component name.
name: router
# Optional. Routing type: [ first (route only via first matching route) | all (route via all matching routes)].
# Defaults to 'first'.
route-type: first
# List of possible routes.
# The routes are searched in the order specified for the first match.
routes:
# Mandatory. Destination reporter name.
- reporter: destination-reporter-1
# Optional: Match condition type: 'any' (logical OR) | 'all' (logical AND) over the list of matchers.
# Defaults to 'any'.
match: any
# Mandatory. List of matchers.
matchers:
# Mandatory. The data point field to be matched: [ name | namespace | dimension | property ].
# If the match type is 'dimension' or 'property' then the 'key' attribute is also required.
- type: name
# Matching regular expression.
pattern: promtest.*
- type: dimension
# Mandatory when type is 'dimension' or 'property'. Specifies dimension key containing value to match.
key: dimension_key
pattern: dimension_value
- reporter: destination-reporter-2
match: any
matchers:
- type: name
pattern: jvm.*
- reporter: destination-reporter-3
match: any
matchers:
# Will match any data point, so this can be used as a catch-all for any previously unmatched data points.
- type: name
pattern: .*
# External/custom reporters are defined using the 'plugin' type.
- type: plugin
name: myCustomReporter
# Simple class name of the reporter in the plugin jar.
class-name: CustomReporter
# Additional properties are specific to each reporter type. See plugin's configuration reference for details.
custom-prop: asdf
#
# Workflow settings for controlling the flow of data points from plugins to reporters.
# This section is optional - default settings are used if omitted.
#
workflow:
# Directory to store pipeline persistence.
# Required only if at least one pipeline uses 'disk' store type.
# The directory must be writable.
store-directory: /var/lib/geneos/collection-agent
# Pipelines.
#
# A pipeline exists for each class of data (metrics/logs/events/attributes)
#
# Each pipeline is enabled by default if omitted from the configuration.
#
# At least one pipeline must be enabled. A runtime error will occur if a plugin attempts delivery to a pipeline
# that is not configured.
#
# Metrics pipeline.
metrics:
# Reporter to which all data points on this pipeline are sent.
# This property is optional if there is only one reporter configured. Otherwise the value is required and
# must correspond to the 'name' of a reporter defined above.
reporter: logging
# Optional. Defaults to true.
enabled: true
# Optional. Whether internal resources are pooled or not. Defaults to false. Does not apply in pass-through mode.
# Resource pools consume more static memory but result in less garbage collection and therefore less CPU load.
pooling: false
# Number of retries after initial delivery fails. Defaults to 3. For infinite retries set to -1.
# The interval between consecutive retries for the same message increases from 1 second up to 120 seconds.
max-retries: 3
# Optional pass through mode configuration (disabled by default).
#
# In pass through mode, there is no buffering between collectors and the pipeline - data points pass through the
# pipeline on the thread of the collector. This means that collector threads are directly coupled to the behavior of
# the eventual reporter. The nature of the coupling is determined by whether the reporter is synchronous or
# asynchronous and whether pipeline retries are enabled. It is therefore possible that a collector thread becomes
# blocked awaiting reporter completion.
pass-through:
# Optional. Defaults to false.
enabled: false
# Optional. Enable fire and forget mode (disabled by default) on this pipeline.
# Only applicable when pass-through mode is enabled and max-retries is 0.
#
# This option can be used to achieve higher throughput when best effort reporting (i.e. no failure notifications
# or retries) is an acceptable tradeoff.
fire-and-forget: false
# Optional. Only applies when the workflow is in 'pass-through' mode. Defaults to 'parallel'.
#
# Defines whether multiple threads are allowed to enter the pipeline concurrently or not.
# Serial mode can be used in the case of stateful pipeline processors.
concurrency: [ parallel | serial ]
# Store settings.
#
# Data points are stored either in memory or on disk before delivery to a reporter.
#
# If a reporter's target becomes unavailable, data points are queued until either the store is full or
# the reporter target becomes available again.
#
# Plugins are informed when a store becomes full and are free to handle the situation in a way that makes
# sense for that plugin (e.g. dropping the message if not critical, or waiting for the store to re-open before
# collecting any more data).
store:
# Store type.
#
# Permitted values:
# 'memory': A circular, fixed-size, in-memory store that provides no persistence. The oldest data point
# is removed when adding to a full store, therefore this store never rejects new data points
# and will begin to drop data if a slow reporter cannot keep up.
#
# 'disk': A fixed-size store that is persisted to disk. Requires the workflow 'store-directory' setting
# to be configured.
#
# For the metrics pipeline, it is recommended (and the default) to use a memory store, as metric data is
# generally non-critical and loses relevance if delayed.
#
type: memory
# Maximum number of data points to hold before the store is considered full and new data points are rejected.
# The default capacity for a memory store is 8192 data points and 10,000,000 data points for a disk store.
capacity: 8192
# Custom processing of data points on this pipeline. Processors can manipulate, enrich and/or filter
# data points before reporting.
#
# See the 'common' pipeline for more details.
processors:
- type: enrichment
name: metrics-enricher
dimensions:
custom_dimension: value
# Logs pipeline.
logs:
reporter: logging
store:
# For logs, it is recommended (and the default) to use a disk store if data loss is not tolerable.
type: disk
# Maximum size (in bytes) of one store file. Only applicable when store type is "disk".
# The value must be a multiple of 4096.
# Optional - default value is 128MB for logs and 16MB for events.
max-file-length: 134217728
# For logs, it is recommended (and the default) to retry infinitely if data loss is not tolerable.
max-retries: -1
# Events pipeline.
events:
reporter: logging
store:
# For events, it is recommended (and the default) to use a disk store if data loss is not tolerable.
type: disk
# For events, it is recommended (and the default) to retry infinitely if data loss is not tolerable.
max-retries: -1
# Attributes pipeline.
attributes:
reporter: logging
store:
# For attributes, it is recommended (and the default) to use a disk store if data loss is not tolerable.
type: disk
# For attributes, it is recommended (and the default) to retry infinitely if data loss is not tolerable.
max-retries: -1
# Common pipeline.
#
# This is a unique pipeline that only has data-point processors (there is no reporter). The processors are applied
# to data points on all pipelines, before any pipeline-specific processors are applied.
common:
# Data-point processors.
#
# Processors can manipulate, enrich and/or filter data points before reporting. They are applied before
# a data point is saved in the pipeline's store.
#
processors:
# Enrichment processor. Adds dimensions and/or properties to all data points.
- type: enrichment
# Optional. Defaults to true.
enabled: true
# Optional name used in logging. If omitted, an auto-generated name will be assigned.
name: enricher
# Whether to overwrite an existing dimension or property with the same name (defaults to false)
overwrite: false
# Dimensions to add
dimensions:
node_name: ${env:NODE_NAME}
# Properties to add
properties:
prop: value
# Translation processor.
#
# Translates:
# - data point names
# - dimension and/or property key/values
#
- type: translation
# Translate data point name via a search and replace operation (optional).
name-translation:
# The search regular expression.
# The name is not modified unless it matches this pattern.
# The pattern may contain group captures which may be referenced in the 'replace' pattern.
search: search-pattern
# The replace regular expression.
# May contain group references from the 'search' pattern.
replace: replace-pattern
# List of dimension and/or property key/value translators (optional).
key-value-translations:
# First translator.
#
# The source key/value.
- from:
# Either 'dimension' or 'property'.
type: dimension
# Source dimension or property name.
name: dim1
# Optional source value search pattern.
search: search-pattern
# Whether or not to delete the source key/value (default is true).
delete: true
# The target key/value.
# If the 'from' specifies 'delete' then the 'to' section may be omitted.
to:
# Either 'dimension' or 'property'.
type: property
# Target dimension or property name.
name: prop1
# Optional target value replace pattern
replace: replace-pattern
# Whether or not to overwrite the target if it already exists (default is true).
overwrite: true
# Second translator.
- from:
type: dimension
name: dim2
delete: true
to:
type: property
name: prop2
overwrite: true
# Drop filter processor. Drops data points that match the configured criteria.
- type: drop-filter
# One or more match criteria.
# For a data point to be dropped, all configured criteria must match, otherwise the data point
# will be forwarded. If no matchers are configured, all data points will be forwarded.
matchers:
# Match by data point name, either exactly or via regex.
- type: name
# Exact match
name: kubernetes_node_cpu_usage
# Regex match (only one of 'name' or 'name-pattern' can be configured)
name-pattern: kubernetes_.*
# Match by data point dimension key and either an exact value or a regex pattern.
- type: dimension
key: namespace
# Exact value match
value: finance
# Regex match (only one of 'value' or 'value-pattern' can be configured)
value-pattern: ns.*
# Match by data point property key and either an exact value or a regex pattern.
- type: property
key: someProperty
# Exact value match
value: someValue
# Regex match (only one of 'value' or 'value-pattern' can be configured)
value-pattern: value.*
# Match by data point type. Value kinds are: [attribute|counter|gauge|generic-event|log-event|histogram]
- type: kind
kind: counter
# Forward filter processor. Forwards data points that match the configured criteria.
# This behaves inversely to "drop-filter" above but is configured identically.
- type: forward-filter
# One or more match criteria.
# For a data point to be forwarded, all configured criteria must match, otherwise the data point
# will be dropped. If no matchers are configured, all data points will be dropped.
# See "drop-filter" for details on each type of matcher.
matchers:
- type: name
pattern: myCounter
# Normalize processor. Normalizes dimension names for consistency in subsequent processing and reporting.
- type: normalize
# Optional name used in logging. If omitted, an auto-generated name will be assigned.
name: normalize
# Dimension normalization settings.
dimensions:
# Default overwrite behavior, can be overridden per mapping. Defaults to false.
overwrite: false
# Dimension mappings.
mappings:
# Old dimension name.
- from: project
# New dimension name.
to: namespace
# Whether to overwrite if a dimension already exists with the same name. Defaults to parent setting.
overwrite: false
# External/custom processors are defined using the 'plugin' type.
- type: plugin
# Optional name used in logging. If omitted, an auto-generated name will be assigned.
name: kube-enricher
# Simple class name of the processor in the plugin jar.
class-name: KubernetesEnricher
# Additional properties are specific to each processor type. See plugin's configuration reference for details.
custom-prop: abc
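As a starting point, a minimal configuration assembled from the settings above might look like the following sketch: a single statsd collector reporting to a logging reporter, with the optional monitoring and workflow sections left at their defaults. Adjust the plugin directory and listen port for your environment.

# Minimal sketch: one collector, one reporter, default workflow settings.
plugin-directory: /usr/local/lib/geneos/plugins

collectors:
  # Statsd collector listening on the default UDP port.
  - type: plugin
    name: statsd
    class-name: StatsdServer
    listen-port: 8125

reporters:
  # Logging reporter; with only one reporter configured, the pipelines' 'reporter'
  # setting can be omitted.
  - type: logging
    name: logging
    level: info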
["Geneos"]
["Geneos > Netprobe"]
["Technical Reference"]