Exporting results
Easily and securely export high volumes of event data in real time to Splunk and other SIEM/analytics platforms.
Highly scalable Opsview Monitor is a solution for monitoring, aggregating, visualizing, alerting on, and drilling into event data from across arbitrarily large, complex, multi-location, premise- and cloud-based enterprise IT estates, making it the “single pane of glass” that IT operations needs to work efficiently.
Opsview Monitor can also work as a take-off point, providing data to other analytics, Security Information and Event Management (SIEM), bulk storage, and other systems. The Opsview Monitor Results Exporter provides a complete, easy-to-use toolkit for extracting, filtering, and reformatting raw data directly from Opsview Monitor’s message queue, and forwarding it to Splunk (SaaS or local) analytics, Enterprise Security, or a host of other SIEM platforms via syslog or HTTP.
Getting started
The Results Exporter is installed and managed using Opsview Deploy. To use the Results Exporter, you first need to install the Results Exporter component, and then configure an output.
The following are the currently supported outputs for the Results Exporter, along with information on how to configure them:
- Syslog Outputs — sends information to a local or remote syslog server.
- File Outputs — sends information to a file on the local filesystem.
- HTTP Outputs — sends information over HTTP to a remote server. This includes predefined supported types:
- Splunk Event Collector — over unverified SSL.
- Certified Splunk Event Collector — over verified SSL.
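As a sketch of how these fit together, a single configuration file can declare several outputs at once, each nested under its own type key. The output names below (my_syslog, my_backup_file, my_splunk) are arbitrary examples; the parameter values mirror those used in the sections that follow:

```yaml
# /opt/opsview/deploy/etc/user_results_exporter.yml
# Hypothetical combined configuration: one output of each supported type.
opsview_results_exporter_outputs:
  syslog:
    my_syslog:
      protocol: udp
      host: 192.168.1.1
      port: 514
  file:
    my_backup_file:
      path: '/var/log/results_export.log'
  http:
    my_splunk:
      type: splunk
      parameters:
        host: '192.168.1.1'
        port: 8088
        token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'
```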
Syslog output
Results can be exported to a local or remote syslog server by configuring a syslog output, under the opsview_results_exporter_outputs variable in the /opt/opsview/deploy/etc/user_results_exporter.yml file. For example:
opsview_results_exporter_outputs:
syslog:
my_syslog_output:
protocol: udp
host: 192.168.1.1
port: 514
log_facility: user
log_level: info
log_format: '[opsview-resultsexporter] %(message)s'
log_date_format: '%Y-%m-%d %H:%M:%S'
Configuring a syslog output
The following options may be specified for your syslog output:
Required
None. If declared with no options, this will log to /dev/log with all default settings.
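For instance, a minimal syslog output with no options might be declared as an empty mapping; this is a sketch (the output name minimal_syslog is an arbitrary example), and should log locally to /dev/log with all defaults:

```yaml
opsview_results_exporter_outputs:
  syslog:
    minimal_syslog: {}
```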
Options
| Parameter Name | Type | Description | Example | Default |
|---|---|---|---|---|
| host | str | The hostname of the syslog server. If specified, port must also be supplied. | host: '192.168.1.1' | Logs locally to /dev/log |
| port | int | The port of the syslog server. If specified, host must also be supplied. | port: 514 | Logs locally to /dev/log |
| protocol | str | The transport protocol to use if using remote logging (not local /dev/log), either udp or tcp. It is recommended to use udp. | protocol: tcp | protocol: udp |
| log_facility | str | The facility used for syslog messages. Supported logging facilities: auth, authpriv, cron, lpr, mail, daemon, ftp, kern, news, syslog, user, uucp, local0 - local7 | log_facility: local7 | log_facility: user |
| log_level | str | The log level used for syslog messages. Supported logging levels (highest priority to lowest): critical, error, warning, notice, info, debug | log_level: error | log_level: info |
| log_date_format | str | The format of the date in syslog messages. Can use any options listed in the Log Date Format Strings table below. | log_date_format: '(%Y) %m %d' | log_date_format: '%Y-%m-%d %H:%M:%S' |
| log_format | str | The format of syslog messages. Can use any options listed in the Log Format Strings table below; the %(asctime)s format option will match the format declared in log_date_format, if it has been specified. | log_format: 'msg: %(message)s' | log_format: '[opsview_resultsexporter %(asctime)s] %(message)s' |
| filter | | See the Filtering section for more details. | | |
| fields | | See the Field Mapping section for more details. | | |
Log date format strings
| Directive | Meaning |
|---|---|
| %a | Locale’s abbreviated weekday name. |
| %A | Locale’s full weekday name. |
| %b | Locale’s abbreviated month name. |
| %B | Locale’s full month name. |
| %c | Locale’s appropriate date and time representation. |
| %d | Day of the month as a decimal number [01,31]. |
| %H | Hour (24-hour clock) as a decimal number [00,23]. |
| %I | Hour (12-hour clock) as a decimal number [01,12]. |
| %j | Day of the year as a decimal number [001,366]. |
| %m | Month as a decimal number [01,12]. |
| %M | Minute as a decimal number [00,59]. |
| %p | Locale’s equivalent of either AM or PM. |
| %S | Second as a decimal number [00,61]. |
| %U | Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0. |
| %w | Weekday as a decimal number [0(Sunday),6]. |
| %W | Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0. |
| %x | Locale’s appropriate date representation. |
| %X | Locale’s appropriate time representation. |
| %y | Year without century as a decimal number [00,99]. |
| %Y | Year with century as a decimal number. |
| %Z | Time zone name (no characters if no time zone exists). |
| %% | A literal ‘%’ character. |
Log format strings
| Directive | Meaning |
|---|---|
| %(message)s | The logged message. |
| %(name)s | Name of the logger used to log the call. |
| %(levelname)s | Text logging level for the message (‘DEBUG’, ‘INFO’, ‘NOTICE’, ‘WARNING’, ‘ERROR’, ‘CRITICAL’). |
| %(asctime)s | Time when the log record was created. |
File outputs
Results can be exported to a file on the local system by configuring a file output, under the opsview_results_exporter_outputs variable in the /opt/opsview/deploy/etc/user_results_exporter.yml file. For example:
opsview_results_exporter_outputs:
file:
my_results_export_file:
path: '/var/log/results_export.log'
Configuring a file output
The following options can be specified for your file output:
Required
| Parameter Name | Type | Description | Examples |
|---|---|---|---|
| path | str | The path to the local file where this output will log messages. Note: The component will run as the opsview user, so the opsview home directory will be substituted for ~. | path: '/var/log/resultsexporter.log' path: '~/logs/my_file' |
Optional
| Parameter Name | Type | Description | Example |
|---|---|---|---|
| format_type | str | The format type of the messages logged to the file - see the Formatting Messages section for more details. | format_type: json |
| filter | | See the Filtering section for more details. | |
| fields | | See the Field Mapping section for more details. | |
| message_format | | The format of the messages logged to the file - see the Formatting Messages section for more details. | |
HTTP outputs
Results can be exported via HTTP to an external service by configuring an HTTP output, under the opsview_results_exporter_outputs variable in the /opt/opsview/deploy/etc/user_results_exporter.yml file. For example:
opsview_results_exporter_outputs:
http:
my_http_output:
type: custom
endpoint: 'http://www.mywebsite.com/resultscollector'
headers:
Username: 'username'
Password: 'pass'
Configuring a custom HTTP output
Required
Parameter Name | Type | Description | Examples |
---|---|---|---|
endpoint | str | The endpoint where this output will send requests. By default, if no port is specified in the endpoint string, the component will attempt to connect to port 80. If the scheme in the endpoint string is not https, the component will default to http. | endpoint: 'http://www.mywebsite.com:8000/resultscollector' |
Optional
| Parameter Name | Type | Description | Examples | Default |
|---|---|---|---|---|
| headers | dict | The headers to be included in the HTTP request. | headers: {Authorization: 'Basic YWxhZGRpbjpvcGVuc2VzYW1l', Content-Type: 'text/html; charset=utf-8'} | headers: {} |
| type | str | The HTTP output type. If not custom then refer to the table below instead for options. | type: custom | |
| body | str | The format of the request body. The %(data)s format string will be replaced by the data being sent in each post request (your messages after message formatting). For JSON format, the messages will be concatenated into a JSON array before being substituted into your specified body format. | body: '{"my_data": %(data)s}' body: 'data_prefix %(data)s data_suffix' | body: '%(data)s' |
| ssl_options | dict | The ssl options to be used. Currently supported options: insecure (bool), cert_reqs (str), ssl_version (str), ca_certs (str), ciphers (str), keyfile (str), certfile (str) | ssl_options: {insecure: False, cert_reqs: CERT_REQUIRED, ssl_version: PROTOCOL_TLS, ca_certs: '/path/to/ca_certs', ciphers: 'HIGH+TLSv1.2:!MD5:!SHA1', keyfile: '/path/to/keyfile', certfile: '/path/to/certfile'} | ssl_options: {insecure: True, cert_reqs: CERT_NONE, ssl_version: PROTOCOL_TLS, ca_certs: null, ciphers: null, keyfile: null, certfile: null} |
| format_type | str | The format type of the messages logged to the file - see the Formatting Messages section below for more details. | format_type: json | |
| filter | | See the Filtering section below for more details. | | |
| fields | | See the Field Mapping section below for more details. | | |
| message_format | | The format of the messages logged to the file - see the Formatting Messages section below. | | |
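Putting several of the optional settings together, a custom HTTP output that posts JSON-formatted messages over verified HTTPS might look like the following sketch. The endpoint, header values, and certificate path are placeholders, not real values:

```yaml
opsview_results_exporter_outputs:
  http:
    my_custom_output:
      type: custom
      # Placeholder endpoint; a non-default port may be given explicitly.
      endpoint: 'https://www.mywebsite.com:8443/resultscollector'
      headers:
        Authorization: 'Basic YWxhZGRpbjpvcGVuc2VzYW1l'
        Content-Type: 'application/json'
      format_type: json
      # %(data)s is replaced by the formatted messages in each POST.
      body: '{"my_data": %(data)s}'
      ssl_options:
        insecure: False
        cert_reqs: CERT_REQUIRED
        ca_certs: '/path/to/ca_certs'
```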
Configuring a predefined HTTP output
The following options can be specified for an HTTP output with a predefined type: splunk, splunk-cert
splunk
Results can be exported via unverified HTTPS to Splunk by configuring an HTTP output with type: splunk under the opsview_results_exporter_outputs variable in the /opt/opsview/deploy/etc/user_results_exporter.yml file. For example:
opsview_results_exporter_outputs:
http:
my_splunk_output:
type: splunk
parameters:
host: '192.168.1.1'
port: 8088
token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'
The following parameters are required for an HTTP Splunk output:
Parameter Name | Type | Description | Example(s) |
---|---|---|---|
host | str | The hostname/IP Address of your Splunk Server where you have set up Splunk HTTP Event Collection. | host: '192.168.1.1' |
port | int | The port specified in the Global Settings of your Splunk HTTP Event Collectors. | port: 8088 |
token | str | The token relating to your specific Splunk HTTP Event Collector. | token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b' |
splunk-cert
Results can be exported via HTTPS to Splunk (using a client certificate) by configuring an HTTP output with type: splunk-cert under the opsview_results_exporter_outputs variable in the /opt/opsview/deploy/etc/user_results_exporter.yml file. For example:
opsview_results_exporter_outputs:
http:
my_splunk_output:
type: splunk-cert
parameters:
host: '192.168.1.1'
port: 8088
token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'
ca_certs: '/mycerts/ca.crt'
keyfile: '/mycerts/client.key'
certfile: '/mycerts/client.crt'
The following parameters are required for an HTTP Splunk output:
Parameter Name | Type | Description | Example(s) |
---|---|---|---|
host | str | The hostname/IP Address of your Splunk Server where you have set up Splunk HTTP Event Collection. | host: '192.168.1.1' |
port | int | The port specified in the Global Settings of your Splunk HTTP Event Collectors. | port: 8088 |
token | str | The token relating to your specific Splunk HTTP Event Collector. | token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b' |
ca_certs | str | The path to your CA (Certificate Authority) Certificate(s). | ca_certs: '/mycerts/ca.crt' |
certfile | str | The path to your client certificate. | certfile: '/mycerts/client.crt' |
keyfile | str | The path to the private key for your client certificate. | keyfile: '/mycerts/client.key' |
Note
If your client certificate and key are both within the same .pem file, then you can simply list that file path for both certfile and keyfile.
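For instance, if certificate and key were combined in a single file at a hypothetical path /mycerts/client.pem, both parameters could point at it:

```yaml
opsview_results_exporter_outputs:
  http:
    my_splunk_output:
      type: splunk-cert
      parameters:
        host: '192.168.1.1'
        port: 8088
        token: '103a4f2f-023f-0cff-f9d7-3413d52b4b2b'
        ca_certs: '/mycerts/ca.crt'
        # Hypothetical combined certificate-and-key file used for both.
        keyfile: '/mycerts/client.pem'
        certfile: '/mycerts/client.pem'
```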
Field mapping
The Results Exporter allows you to transform the messages as they are exported, by specifying exactly which message fields should be present in the exported result, so you can remove details from the messages you do not want or need:
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
fields:
- hostname
- servicecheckname
- stdout
| Parameter Name | Type | Description | Example | Default |
|---|---|---|---|---|
| fields | list | The field mapping to apply to your output. | fields: ['hostname', 'servicecheckname', 'current_state', 'problem_has_been_acknowledged', 'is_hard_state', 'check_attempt', 'last_check', 'execution_time', 'stdout', 'perf_data', 'metadata'] | |
Specifying fields
The fields of messages being exported are fully configurable. To select the fields that should be included in exported messages, list the keys under the fields section of an output. For example, to include the hostname, servicecheckname, current_state and stdout fields, add them to the fields section:
opsview_results_exporter_outputs:
file:
message_backup:
fields:
- hostname
- servicecheckname
- current_state
- stdout
Simple examples
# display the host name and the host state of each result:
fields:
- hostname
- host_state
# display the servicecheck name, the current state of the servicecheck, and the stdout message of each result
fields:
- servicecheckname
- current_state
- stdout
Mapping and renaming fields
Users can also specify custom keys which are given values based on a mapping. For example, to retrieve the host_state as a new field named host_state_string, with value UP or DOWN instead of 0 or 1:
opsview_results_exporter_outputs:
output_name:
fields:
- host_state_string:
host_state:
0: "UP"
1: "DOWN"
In this example, the value of host_state determines behavior as below:

| Value of host_state | Behaviour |
|---|---|
| 0 | host_state_string will be added to result with value 'UP' |
| 1 | host_state_string will be added to result with value 'DOWN' |
| Anything else | host_state_string will not be added to result |
A default value can also be specified. For example, if the value of the hostname field does not match web-server or email-server, the value will be set to AllCompany. If a default value is not specified and the key does not match any of the keys provided, the field will be omitted from the output.
opsview_results_exporter_outputs:
output_name:
fields:
- department:
hostname:
web-server: "Engineering"
email-server: "BS"
default: "AllCompany" # default value declared here and used if no match for source name
- check_state
- stdout
This example results in behavior as below:
| Value of hostname | Behaviour |
|---|---|
| web-server | department will be added to result with value Engineering. |
| email-server | department will be added to result with value BS. |
| Anything else | department will be added to result with value AllCompany. |
Fields can also be added where the value is always constant.
opsview_results_exporter_outputs:
output_name:
fields:
- department:
default: "AllCompany" # default value is always used as there is no source name
- check_state
- stdout
This example results in behavior as below:
| Value of hostname | Behaviour |
|---|---|
| Anything | department will be added to result with value AllCompany. |
Mapped values can refer to any (original) message fields, by using the format syntax %(<field>)s, as shown in the example below.
opsview_results_exporter_outputs:
output_name:
fields:
- priority_msg:
check_state:
0: "%(servicecheckname)s HAS PRIORITY: LOW (OK)"
2: "%(servicecheckname)s HAS PRIORITY: HIGH (CRITICAL)"
default: "%(servicecheckname)s HAS PRIORITY: MEDIUM (%(check_state)s)"
- check_state
- stdout
This example results in behavior as below:
| Value of check_state (service check name is “Server Connectivity”) | Behaviour |
|---|---|
| 0 | priority_msg will be added to result with value Server Connectivity HAS PRIORITY: LOW (OK). |
| 2 | priority_msg will be added to result with value Server Connectivity HAS PRIORITY: HIGH (CRITICAL). |
| Anything else (here called X) | priority_msg will be added to result with value Server Connectivity HAS PRIORITY: MEDIUM (X). |
This allows message fields to be renamed easily if required, by providing a one-to-one mapping with the original message field. For example, to rename the hostname field to name:
opsview_results_exporter_outputs:
output_name:
fields:
- name:
default: "%(hostname)s"
This example results in behavior as below:
| Value of hostname | Behaviour |
|---|---|
| Anything (here called X) | name will be added to result with value X. |
Note
If you change your mapping values, then you should review all your filters to ensure they will work as expected.
Field operations
You can optionally apply a field operation to create a message field value, or transform an existing retrieved value. The currently supported field operations are listed below.
“replace” operation
The replace[<old>][<new>] operation will replace any occurrence of the old string with the new string (which can be empty). This must follow the name of a message field, after a “field operation pipe” |>. For example:
opsview_results_exporter_outputs:
output_name:
fields:
- name:
default: "%(hostname |> replace[server][website])s"
This example results in behavior as below:
| Example Value of hostname | Behaviour |
|---|---|
| server-1 | name will be added to result with value website-1. |
The old and new strings cannot contain the literal [ and ] characters.
“dal_fetchall” and “dal_fetchone” operations
These allow you to query data elsewhere in your Opsview system to enhance your outputs.
The dal_fetchall[<query>] operation will return all data from the configured query, while the dal_fetchone[<query>] operation will return only the first result. Queries are constructed in the following format:
<data model type>(<selector>).<data model field>
Where the selector is either * (all instances of the data model type) or of the format model.<data model field> = <existing message field> (filter so only data models with matching fields are selected).
The currently supported data model types and relevant fields are:
- RuntimeHosts — Hosts currently in the Opsview system, only updated when Apply Changes is carried out.
  - host_id — The ID of the Host in the system; this matches the host_id message field for results.
  - host_name — Host name; this matches the hostname message field for results.
  - network_address — Host primary address.
  - description — Host description.
  - interface_count — The number of SNMP interfaces monitored on the Host.
  - service_count — The number of Host Services on the Host.
  - child_count — The number of Hosts that have the Host as a parent.
  - notes — 1 if the Host has notes added, else 0.
  - monitored_by — The ID of the Monitoring Cluster monitoring the Host.
  - icon_filename — The name of the icon applied to the Host in the Opsview UI.
  - host_group_id — The ID of the closest parent Host Group of the Host.
  - host_group_parent_id — The ID of the grandparent Host Group of the Host.
  - host_group_name — The name of the closest parent Host Group of the Host.
  - host_group_notes — 1 if the parent Host Group has notes, else 0.
  - host_group_hierarchy — A string representation of the full Host Group path of the Host using names.
  - host_group_id_hierarchy — A string representation of the full Host Group path of the Host using IDs.
  - host_group_hierarchy_list — A list representation of the full Host Group path of the Host using names.
  - hashtag_names — A list of all Hashtags applied to the Host, ordered alphabetically.
- RuntimeServicechecks — Host Services currently in the Opsview system, only updated when Apply Changes is carried out.
  - service_object_id — The ID of the Host Service in the system; this matches the object_id message field for Service Check results.
  - service_name — The name of the Service Check for the Host Service; this matches the servicecheckname message field for Service Check results.
  - has_performance_data — 1 if the Host Service currently has performance data, else 0.
  - notes — 1 if the Host Service has notes added, else 0.
  - service_group_id — The ID of the Service Group the Service Check is in.
  - service_group_name — The name of the Service Group the Service Check is in.
  - hashtag_names — A list of all Hashtags applied to the Host Service, ordered alphabetically.
  - host_id — The ID of the Host in the system that the Host Service is on.
  - host_name — The name of the Host the Host Service is on; this matches the hostname message field for results.
As an example, if two Hosts exist in the Opsview system:
- “hostA” with description “example host”.
- “hostB” with description “another host”.
And the mapping is configured:
opsview_results_exporter_outputs:
output_name:
fields:
- description:
default: "%(dal_fetchone[RuntimeHosts(model.host_name = hostname).description])s"
This would result in behaviour as below:
| Example Value of hostname | Behaviour |
|---|---|
| hostA | description will be added to result with value example host. |
| hostB | description will be added to result with value another host. |
However, if the mapping was configured:
opsview_results_exporter_outputs:
output_name:
fields:
- descriptions:
default: "%(dal_fetchall[RuntimeHosts(*).description])s"
Then for every message, the descriptions field would be added with the value ["example host", "another host"] or ["another host", "example host"] (as a string).
When selecting lists of values as above, it can be useful to export them as an actual list. To enable this, use the %(...)l field operation format:
opsview_results_exporter_outputs:
output_name:
fields:
- descriptions:
default: "%(dal_fetchall[RuntimeHosts(*).description])l"
Now, for every message, the descriptions field would be added with a list value ["example host", "another host"] or ["another host", "example host"]. How this list is represented during export depends on the selected format_type.
As another example, if a Host existed in the Opsview system:
- “hostA” with Service “service1”, in Service Group “Example Services”
And the mapping is configured:
opsview_results_exporter_outputs:
output_name:
fields:
- service_group:
default: "%(dal_fetchone[RuntimeServicechecks(model.service_object_id = object_id).service_group_name])s"
Then for a Service Check result from “service1” on “hostA”, the field “service_group” would be added with value “Example Services”.
As an additional example, the parent Host Group of a Host can be retrieved. If “hostA” is in Host Group “My Servers” and the mapping is configured:
opsview_results_exporter_outputs:
output_name:
fields:
- parent_host_group:
default: "%(dal_fetchone[RuntimeHosts(model.host_id = host_id).host_group_name])s"
Then for any Service Check or Host Check result from “hostA”, the field “parent_host_group” would be added with value “My Servers”.
Reusing fields
To avoid duplication of fields specifications between a number of output sections, the fields can be defined once using the & (YAML anchor) operator and reused multiple times using the * (anchor reference) operator.
opsview_results_exporter_outputs:
fields: &default_fields
- hostname
- servicecheckname
- current_state
- problem_has_been_acknowledged
- is_hard_state
- check_attempt
- last_check
- execution_time
- stdout
syslog:
local_syslog_server:
fields: *default_fields
file:
message_backup:
fields: *default_fields
Alternatively, anchors can be declared as a list, and can have optional names for clarity, as in this example:
opsview_results_exporter_custom_fields:
fields:
- basic_fields: &basic_fields
- host_state
- servicecheckname
- last_check
- stdout
    - &service_check_name_and_stdout
- servicecheckname
- stdout
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
fields: *basic_fields
file:
message_backup:
fields: *service_check_name_and_stdout
http:
remote-api:
fields:
- host_state
- is_hard_state
Performance data fields
The Results Exporter component exposes the individual metrics within the performance data of each message. To include the raw performance data string in your exported message, include the perf_data_raw field within your list of fields. For example:
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
fields:
- hostname
- stdout
- perf_data_raw
To include the entire performance data as a nested structure within your message, include the perf_data field:
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
fields:
- hostname
- stdout
- perf_data
To include some of the nested fields, but not all, you can specify specific named metrics as below:
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
fields:
- hostname
- stdout
- perf_data.rta
- perf_data.rtmin
- perf_data.rtmax
Supported fields
| Field | Type | Example | Description |
|---|---|---|---|
| check_attempt | int | 1 | The current check attempt number (0 < check_attempt < max_check_attempts). |
| current_state | int | 0 | Current state of the check. |
| downtime_depth | int | 0 | The number of active downtimes an object is currently included in (0 indicates not in downtime). |
| early_timeout | bool | false | Set if the execution of the plugin timed out. |
| end_time | float | 1543504202.051796 | The epoch time when the plugin execution completed. |
| execution_time | float | 4.0345320702 | How long the plugin took to execute. |
| host_id | int | 26 | The Opsview Host ID for the host relating to this host/service check. |
| host_state | int | 0 | The state the host is currently known to be in. |
| hostname | string | ldap-cache1.opsview.com | Name of the Opsview Monitor host that produced the result message. |
| init_time | float | 1543504198.002325 | The time when the execution request message was created. |
| is_flapping | bool | false | Has flapping been detected (repeated OK -> non-OK results over subsequent checks). |
| is_hard_host_state | bool | true | Is the host in a HARD or SOFT state. |
| is_hard_state | bool | true | Is the check in a HARD or SOFT state. |
| is_hard_state_change | bool | true | Has this result just changed from SOFT to HARD. |
| is_passive_result | bool | true | Is this result from a passive or an active check. |
| is_state_change | bool | false | Has a change of state been detected. |
| last_check | int | 1543504198 | Integer epoch time of when the check last ran. |
| last_hard_state | int | 0 | The value of the last HARD state. |
| last_hard_state_change | int | 1543434256 | Epoch value of when the check last changed to a HARD state. |
| last_state_change | int | 1543486858 | Epoch value of when the check last changed state (SOFT or HARD). |
| latency | float | 0.0149388313 | The difference between when the check was scheduled to run and when it actually ran. |
| max_check_attempts | int | 3 | Number of check attempts before a SOFT error is counted as HARD. |
| object_id | int | 953 | The Opsview Object ID number for this host/service. |
| object_id_type | string | service | Whether this is a host or a service check. |
| perf_data_raw | string | rta=0.034ms;500.000;1000.000;0; pl=0%;80;100;; rtmax=0.113ms;;;; rtmin=0.011ms;;;; | The performance data returned from the host/service check. |
| perf_data | | | Adds the entire nested structure of perf_data metrics to the message (JSON/YAML/XML), or shorthand for adding each of the metrics as a string to the message (KVP). See Formatting Messages for examples. |
| perf_data.some-metric-name | | | Adds that metric to the nested structure of perf_data metrics and adds the nested structure to the message if not already present (JSON/YAML/XML), or adds that individual metric as a string to the message (KVP). |
| prev_state | int | 0 | The state returned by the previous check. |
| problem_has_been_acknowledged | bool | false | Has a non-OK state been acknowledged. |
| servicecheckname | string | TCP/IP | Service Check name, or null for a host check. |
| start_time | float | 1543504198.017264 | The time the plugin started executing. |
| stdout | string | PING OK - Packet loss = 0%, RTA = 14.85 ms | Output from the plugin. |
| timestamp | float | 1543504202.057018 | Time the message was created. |
| metadata | | | Adds the nested structure of metadata metrics to the message (JSON/YAML/XML), or shorthand for adding each of the metrics as a string to the message (KVP). This currently includes: hostname_run_on (the name of the host machine that the service check was run on). |
| ack_ref | | | Internal use. |
| broadcast | | | Internal use. |
| job_ref | | | Internal use. |
| message_class | | | Internal use. |
| message_source | | | Internal use. |
| ref | | | Internal use. |
Filtering
A powerful feature of the Results Exporter is the ability to select exactly which messages you want to be exported via each output, using a filter string. Any message that meets the condition specified in the filter will be processed and exported by that output:
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
filter: '(hostname == "opsview") && (servicecheckname == "Connectivity - LAN")'
Parameter Name | Type | Description | Example(s) |
---|---|---|---|
filter | str | The filter string to apply to your output. | filter: '(current_state !~ "OK")' |
Specifying filters
Using a filter allows you to focus on exporting the results that are important and ignore anything that is not relevant to your chosen reporting or searching tool.
Inbuilt filters
Operator | Description |
---|---|
(empty string) | Allows every message. |
* | Allows every message. |
!* | Allows no messages. |
^ | Hold filter: holds all messages. The output enters a dormant state where it no longer processes any messages but continues to collect them. This means that when a new filter is applied to the output, it will work through all the messages built up while the hold filter was applied. |
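As an illustrative sketch, a hold filter could be applied to a file output while a downstream system is unavailable; messages accumulate until the filter is changed back (the output name message_backup follows the earlier examples):

```yaml
opsview_results_exporter_outputs:
  file:
    message_backup:
      path: '/var/log/results_export.log'
      filter: '^'   # hold filter: collect messages but do not process them yet
```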
A filter string can also consist of a comparison between a key within the message and a value (the value for the key within each message being filtered).
More complex filters can be written by combining filter strings using the && (logical and) and || (logical or) operators.
Supported comparisons
Operator | Description |
---|---|
== | is equal to |
!= | is not equal to |
>= | is greater than or equal to |
<= | is less than or equal to |
~ | contains |
!~ | does not contain |
< | is less than |
> | is greater than |
@ | matches (regex) |
!@ | does not match (regex) |
Supported fields
Within your filter, you can refer to any field listed as supported in Field Mapping, with the exception of perf_data, perf_data_raw and any extracted performance data fields. These are not supported by the filter.
Simple filter examples
# allow every message
filter: ''
# allow every message
filter: '*'
# allow no messages
filter: '!*'
# only allow messages where the hostname contains "opsview."
filter: '(hostname ~ "opsview.")'
# only allow messages related to the LAN Connectivity service check
filter: '(servicecheckname == "Connectivity - LAN")'
# only allow messages where the service state is anything except OK - where current_state values have not been remapped
filter: '(current_state != 0)'
# only allow messages where the service state is anything except OK - where current_state values have been remapped
filter: '(current_state !~ "OK")'
# only allow messages where the hostname is in IPv4 address form
filter: '(hostname @ "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$")'
# only allow messages where the host ID is 500 or above
filter: '(host_id >= 500)'
# only allow messages where state type is hard
filter: '(is_hard_state == True)'
# only allow messages where the service check name is null
filter: '(servicecheckname == null)'
Complex filter examples
# only allow messages where the hostname contains "opsview." and relates to the LAN Connectivity service check
filter: '(hostname ~ "opsview.") && (servicecheckname == "Connectivity - LAN")'
# only allow messages where the hostname contains "opsview." and relates to the LAN Connectivity service check,
# and the state type is HARD and the service state is anything except OK
filter: '(hostname ~ "opsview.") && (servicecheckname == "Connectivity - LAN") && (check_state_type == "HARD") && (current_state !~ "OK")'
# only allow messages where the hostname is in IPv4 address form
filter: '(hostname @ "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$")'
# only allow messages from the 'Disk: /' and 'Disk: /tmp' service checks, and only when the host ID is 500 or above
filter: '((servicecheckname == "Disk: /") || (servicecheckname == "Disk: /tmp")) && (host_id >= 500)'
Note
It is advised to surround your filters with single quotes.
filter: '(hostname @ "^\d{2}-\d{4}$")'
If your filter is surrounded in double quotes, you will need to escape backslashes in your regular expressions:
filter: "(hostname @ '^\\d{2}-\\d{4}$')"
Reusing filters Copied
To avoid duplicating filter specifications across multiple output sections, a filter can be defined once using the `&` (YAML anchor) operator and reused multiple times using the `*` (anchor reference) operator.
opsview_results_exporter_outputs:
filter: &default_filter '(hostname == "Test Host")'
syslog:
local_syslog_server:
filter: *default_filter
Alternatively, anchors can be declared as a list, and can have optional names for clarity, as in this example:
opsview_results_exporter_custom_filters:
filter:
- opsview_host: &opsview_host
'(hostname ~ "opsview")'
- not_ok: &not_ok
'(current_state !~ "OK")'
opsview_results_exporter_outputs:
syslog:
local_syslog_server:
filter: *opsview_host
file:
message_backup:
filter: *not_ok
Multi-line filters Copied
Multi-line filters are possible as long as the entire filter is quoted; this can add clarity for complex filters, as seen below:
opsview_results_exporter_outputs:
http:
splunk:
filter:
'(servicecheckname == "CPU Statistics")
||
(servicecheckname == "Connectivity - LAN")'
Formatting Messages Copied
The Results Exporter allows you to declare the format type of the exported messages for file and HTTP outputs, as well as adding any additional information to each message using a format string:
opsview_results_exporter_outputs:
file:
logfile:
format_type: json
message_format: '{"my_message": %(message)s}'
| Parameter Name | Type | Description | Example(s) |
|---|---|---|---|
| `format_type` | str | Supported formats: `kvp`, `json`, `yaml`, `xml`. | `format_type: xml` |
| `message_format` | str | The format of each message being exported. The `%(message)s` format string will be expanded into the exported message, formatted in the markup language specified by the `format_type` field. | `message_format: '<mytag />%(message)s'` |
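The `%(message)s` placeholder follows standard printf-style named substitution. A minimal Python sketch of how the expansion might work under that assumption (this is illustrative, not the component's actual code):

```python
import json

# Hypothetical expansion of message_format: the result message is first
# rendered in the chosen format_type (json here), then substituted into
# the message_format string via printf-style named formatting.
message_format = '{"my_message": %(message)s}'
result = {"host_state": 0, "hostname": "My Host"}

rendered = json.dumps(result)                      # format_type: json
exported = message_format % {"message": rendered}  # expand %(message)s
print(exported)  # {"my_message": {"host_state": 0, "hostname": "My Host"}}
```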
Format types Copied
The list of currently supported format types is as follows:
- XML
  - Lists will be represented with a nested `_item` suffix tag, so a field `a` of value `[1,[2]]` becomes `<a><a_item>1</a_item><a_item><a_item_item>2</a_item_item></a_item></a>`.
  - Empty lists will be represented as empty tags, `<a />`.
  - Maps will be added using nested tags, so a field `a` of value `{b: 1, c: 2}` becomes `<a><b>1</b><c>2</c></a>`.
- JSON
- KVP (Key Value Pairs)
  - Lists will be flattened and added as duplicate keys, so a field `a` of value `[1,[2]]` becomes `a=1, a=2`.
  - Empty lists will not be added.
  - Maps will be added using `.` nesting, so a field `a` of value `{b: 1, c: 2}` becomes `a.b=1, a.c=2`.
- YAML
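The KVP flattening rules described above can be sketched in a few lines of Python; the `to_kvp` function is an illustrative reimplementation of the stated rules, not the exporter's own code:

```python
# Illustrative KVP flattening following the rules described above:
# lists become duplicate keys, maps nest with ".", empty lists are dropped.
def to_kvp(field, value):
    pairs = []
    if isinstance(value, list):
        for item in value:
            pairs.extend(to_kvp(field, item))            # flatten, duplicate key
    elif isinstance(value, dict):
        for key, sub in value.items():
            pairs.extend(to_kvp(f"{field}.{key}", sub))  # dotted nesting
    else:
        pairs.append(f"{field}={value}")
    return pairs

print(", ".join(to_kvp("a", [1, [2]])))          # a=1, a=2
print(", ".join(to_kvp("a", {"b": 1, "c": 2})))  # a.b=1, a.c=2
print(to_kvp("a", []))                           # [] - empty lists add nothing
```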
Example job result message Copied
{
"host_state": 0,
"hostname": "My Host",
"servicecheckname": null,
"perf_data": "rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;",
"stdout": "OK - default msg"
}
Formatted into kvp Copied
[opsview_resultsexporter 2019-01-01 17:53:02] host_state=0, hostname="My Host",
servicecheckname=None, perf_data_raw="rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;",
perf_data.rta="1ms", perf_data.rtmin="0.5ms", perf_data.rtmax="2ms", perf_data.pl="20%", stdout="OK - default msg"
Formatted into json Copied
{
"info": "opsview_resultsexporter 2019-01-01 17:53:02",
"message": {
"host_state":0,
"hostname": "My Host",
"servicecheckname": null,
"perf_data_raw": "rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;",
"perf_data": {
"rta": "1ms",
"rtmin": "0.5ms",
"rtmax": "2ms",
"pl": "20%"
},
"stdout":"OK - default msg"
}
}
Formatted into XML Copied
<result>
<info>opsview_resultsexporter 2019-01-01 17:53:02</info>
<message>
<host_state>0</host_state>
<hostname>My Host</hostname>
<servicecheckname />
<perf_data_raw>rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;</perf_data_raw>
<perf_data>
<rta>1ms</rta>
<rtmin>0.5ms</rtmin>
<rtmax>2ms</rtmax>
<pl>20%</pl>
</perf_data>
<stdout>OK - default msg</stdout>
</message>
</result>
Formatted into YAML Copied
- info: opsview_resultsexporter 2019-01-01 17:53:02
message:
host_state: 0
hostname: My Host
servicecheckname: null
perf_data_raw: rta=1ms;50;100;; rtmin=0.5ms;;;; rtmax=2ms;;;; pl=20%;60;90;;
perf_data:
rta: 1ms
rtmin: 0.5ms
rtmax: 2ms
pl: 20%
stdout: OK - default msg
Housekeeping Copied
On startup, the Results Exporter component attempts to detect and delete any excess `messagequeue` channels that are not used by outputs, to prevent a buildup of messages in the system. This means that if an output is deleted or commented out in the configuration and the component is restarted, the corresponding queue will be deleted.
To build up messages for an output without exporting them, use the hold filter. For more information, see Filtering.
Troubleshooting Copied
The Results Exporter may encounter issues when processing messages, depending on the configured filter or field mapping.
If the component cannot process a message, it will log a `WARNING` message and continue processing other messages. To get more details about the issues encountered during message processing, turn on `DEBUG` logging.
The following error messages may indicate problems with the configured filter or field mapping:
- `Error during expression evaluation` — indicates a problem while processing a filter.
- `Error during field mapping` — indicates an issue processing a field mapping.
You should review your filters and field mappings, paying particular attention to any configured field operations, to see if there are any cases where operations may be invalid on some messages.
Here are some examples of potential problems:
- Running operations that only apply to either Host or Service Checks on both types, for example, querying `RuntimeServicechecks` on a Host Check result. In these cases, consider filtering on the result messages before processing takes place.
- Using `dal_fetchone` to fetch a single result from a query that is not guaranteed to always return at least one result.
- Using `dal_...` field operations to query or extract model fields that are unsupported.
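For instance, if a field mapping queries `RuntimeServicechecks`, Host Check results (which carry a null `servicecheckname`) can be excluded before processing with a filter along these lines (the output name is illustrative, and this assumes `!= null` comparisons behave as the null example above suggests):

```yaml
opsview_results_exporter_outputs:
  syslog:
    my_syslog_output:                          # example output name
      filter: '(servicecheckname != null)'     # drop Host Check results
```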