Message Tracker configuration

Plug-In Configuration

Adapter Settings

These settings configure the internal adapters that the plug-in will create when it runs.
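For orientation, the sketch below shows how the settings described in this section might be arranged in a gateway setup file. It is an illustrative outline only, assuming that element names follow the setting paths used throughout this section; the adapter name, checkpoint name, and file path are placeholder values.

<adapters>
  <adapter>
    <name>orderGatewayIn</name>                <!-- placeholder adapter name -->
    <checkpoint>OrderEntry</checkpoint>        <!-- checkpoint the messages are associated with -->
    <source>
      <file>
        <filename>logs/apache*.log</filename>  <!-- relative paths are evaluated from the Netprobe working directory -->
      </file>
    </source>
    <formatType>
      <data>...</data>                         <!-- elided; see adapters > adapter > formatType -->
    </formatType>
    <tagMapping>
      <data>...</data>                         <!-- elided; see adapters > adapter > tagMapping -->
    </tagMapping>
  </adapter>
</adapters>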

adapters > adapter > name

The name of the adapter.

Mandatory: Yes

adapters > adapter > checkpoint

The name of the checkpoint that the messages this adapter reads are associated with.

Mandatory: Yes

adapters > adapter > source

The source of the data for this adapter. Currently only files are supported as data sources.

Mandatory: Yes

adapters > adapter > source > file > filename

This setting specifies the file to read message data from. The value of this setting specifies the location of the file. Relative paths are evaluated from the Netprobe working directory.

If a filename contains wildcard characters, then the adapter will automatically check for the creation of newer files matching the wildcard pattern. When a new file is detected (provided the current file has been scanned to the end) the adapter will switch to monitoring the newer file. The check for new files is performed only periodically, to stop the system from overusing resources. (See fileReadControl > newFileCheckWaitTime.)

There are two types of wildcard specification which can be used:

  1. Globbing using * and ? characters, as typically used in a UNIX shell session. e.g. apache*.log
  2. Filename date generation using date-stamping within the filenames. e.g. app<today%Y%m%d>.log

Note

The <yesterday> and <tomorrow> date codes are not currently supported; see the Filename date generation section below.

Mandatory: Yes

adapters > adapter > source > file > matchAllFiles

This setting makes the adapter match multiple files instead of only the latest file. This is useful, for example, if a FIX engine stores each conversation in a separate file instead of in one file.

Mandatory: No

adapters > adapter > source > file > multilineMessages

This setting specifies that messages span multiple file lines. If this is the case, a regular expression that identifies the lines that start a message must be provided. The end of the message is then detected by additional optional settings.

Mandatory: No

adapters > adapter > source > file > multilineMessages > startPattern

Specifies the regex used to detect the start of a message. The line containing this pattern will be added to the message contents, and all subsequent lines until the end of the message. The end of the message is determined either by the endPattern regex if specified, or by detecting the start of a new message using this start pattern.

Note

The start pattern applies to a single line that starts the message. Character sequences spanning lines cannot be matched.

Mandatory: Yes

adapters > adapter > source > file > multilineMessages > messageEnd

Configures the end-of-message rules for the file. The end of the message is detected either by a file line which matches a configured endPattern, or by finding a new message with the startPattern.

When using only a startPattern, if a new message is not found then the current partial message will be held for the time specified by the maxWaitTimeMs setting for more data to become available, before then being processed. This is to prevent issues when the adapter has read to the end of a log file while the logging process is still writing data.

Mandatory: No

Default: If not specified, the default is maxWaitTimeMs with a value of 1000 ms.

adapters > adapter > source > file > multilineMessages > messageEnd > maxWaitTimeMs

The maximum wait time specifies how long the adapter will wait (in milliseconds) for more data of a partial message before processing it.

A partial message can occur when the adapter reads to the end of a log file before the logging process has finished writing. This can also happen for the last message in the file, since there will not be a following “start message” line which matches the startPattern.

Mandatory: No

Default: 1000 ms

adapters > adapter > source > file > multilineMessages > messageEnd > endPattern

A regex which indicates the end of a message. The file line matching this regex will also be added to the message.

Note

The end pattern applies to a single line that ends the message. Character sequences spanning lines cannot be matched.

Mandatory: No
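As an illustration of the multi-line settings above, the fragment below shows one way a multilineMessages block might be written. The element names follow the setting paths in this section; the regular expressions are placeholder patterns, not values taken from this documentation.

<multilineMessages>
  <startPattern>^\d{2}:\d{2}:\d{2}\.\d{3} </startPattern>  <!-- placeholder: a line starting with an HH:MM:SS.mmm timestamp begins a message -->
  <messageEnd>
    <endPattern>^END OF MESSAGE$</endPattern>              <!-- placeholder end-of-message marker; this line is included in the message -->
    <maxWaitTimeMs>1000</maxWaitTimeMs>                    <!-- hold a partial message for up to 1 second for more data -->
  </messageEnd>
</multilineMessages>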

adapters > adapter > source > glLogFile > filename

Specifies the location of the GL P10 log file to read messages from. Globbing and wildcards can be used, as in the file source > filename setting.

Mandatory: Yes

adapters > adapter > source > glLogFile > truncatedLines

This setting informs the file reader that the GL log file contains truncated lines (i.e. lines are written out to a fixed line width of 80 characters), as produced by some versions of GL. An example of this output is shown below.

Note

This setting does not perform auto-detection, therefore only enable it if the file is known to be truncated.
-10 06:31:35:371 (00.TRC.00087) DataReply : 2919
                DataHead {type: 'X' - [0:0]->[0:0] (0) }
                <---------------------------------->
                <-------------- REPLY ------------->
                <---------------------------------->
                Fixed field = { key : 0 - chainage : 0 - noUti : 0 - typeMessage : 79 'O'
                                typeReply : 74 'J' - index : -1 - nbRequestReply : 1 - classOrder : 79 'O'
                                command : 32 '.' - exchangeName : 'SEHK' - exchangeLabel : ''
                                GLID : '0008-1-01-01' - mnemo : 'ETS' }
<... message continues ...>

Mandatory: No

Default: false

adapters > adapter > filters

The adapter applies a set of filters to the messages. Messages that do not pass all the filters are ignored.

Mandatory: No

adapters > adapter > filters > message

This specifies a single regular expression that is applied to the entire message. The filter can be inverted.

Mandatory: No

adapters > adapter > filters > tags > tag

Multiple tag filters can be defined on an adapter. The tag name and the regular expression both have to be supplied. The filter can be inverted.

Mandatory: No
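A hedged sketch of a filters block is shown below. The filters > message and filters > tags > tag paths are taken from the settings above, but the child element names used here (regex, name, invert) are assumptions made for illustration only; check the gateway setup schema for the exact structure.

<filters>
  <message>
    <regex>NewOrderSingle</regex>   <!-- assumed child element: keep only messages matching this pattern -->
  </message>
  <tags>
    <tag>
      <name>35</name>               <!-- assumed child element: tag name to test (e.g. FIX tag 35) -->
      <regex>^D$</regex>            <!-- assumed child element: regular expression applied to that tag value -->
      <invert>false</invert>        <!-- assumed child element: set true to invert the filter -->
    </tag>
  </tags>
</filters>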

adapters > adapter > formatType

This specifies the type of message to read. ITRS provides a supported set of format types; each one has its own documentation. The format type can be specified as data, or as a static variable which can then be shared by more than one adapter.

Mandatory: Yes

adapters > adapter > formatType > messageTrackerFormatType

A reference to a Message Tracker Format Type static variable.

Mandatory: No

See the Gateway 2 Reference Guide for information on configuring Message Tracker Format Type static variables.

adapters > adapter > formatType > data

Specifying a format type without using a static variable.

Mandatory: No

adapters > adapter > formatType > data > timestamp

This specifies the location and format of the message timestamp.

Mandatory: Yes

adapters > adapter > formatType > data > timestamp > regexPattern

This specifies the location of the timestamp within the message. Data is extracted by applying the regular expression to the message and extracting the first group (the data that matches the part of the regular expression within the first brackets). The system uses Perl regular expressions.

More information can be found at http://www.perl.com/doc/manual/html/pod/perlre.html

Mandatory: Yes

adapters > adapter > formatType > data > timestamp > format

This specifies the format of the timestamp. The Time Format codes are available in the Time Formatting Codes section at the end of this document.

Mandatory: Yes
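Putting the two timestamp settings together, the sketch below shows a timestamp definition that would extract and parse a value such as 22-Aug-2008 06:31:35.371 from the start of a message. The element names come from the setting paths above; the regular expression and format string are illustrative.

<timestamp>
  <!-- the first capture group ( ... ) is the extracted timestamp text -->
  <regexPattern>^(\d{2}-\w{3}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3})</regexPattern>
  <!-- parsed using the Time Formatting Codes; %f reads ".371" including the period -->
  <format>%d-%b-%Y %H:%M:%S%f</format>
</timestamp>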

adapters > adapter > tagMapping

This specifies a set of mappings from the tags provided by the formatType to a normalised message. This mapping is composed of two parts: IDs and attributes.

It can be specified as data, or as a static variable which can then be shared by more than one adapter.

Mandatory: Yes

adapters > adapter > tagMapping > messageTrackerTagMapping

A reference to a Message Tracker Tag Mapping static variable.

Mandatory: No

See the Gateway 2 Reference Guide for information on configuring Message Tracker Tag Mapping static variables.

adapters > adapter > tagMapping > data

Specifying a tag mapping without using a static variable.

Mandatory: No

adapters > adapter > tagMapping > data > IDs > item

This creates a named ID from a set of tags. These tags will be user defined if the format type selected was Regex. The tags will be defined by the format type reader for other format types. For example, when using FIX the tags will be the numeric FIX tags. Please see the ITRS-provided documentation for the format type being used. An ID should be unique for the message. Two messages are considered to be the same message if they share an ID with the same name and the same value.

Mandatory: No

adapters > adapter > tagMapping > data > IDs > item > format

This allows the tags to be combined in a user-specified way. If the format is not specified then the tag values are concatenated, separated by spaces.

Mandatory: No

adapters > adapter > tagMapping > data > attributes > item

This creates a named attribute from a set of tags. These tags will be user defined if the format type selected was Regex. The tags will be defined by the format type reader for other format types. For example, when using FIX the tags will be the numeric FIX tags. Please see the ITRS-provided documentation for the format type being used.

Mandatory: No

adapters > adapter > tagMapping > data > attributes > item > format

This allows the tags to be combined in a user-specified way. If the format is not specified then the tag values are concatenated, separated by spaces.

Mandatory: No
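The sketch below illustrates a tagMapping > data block with one ID and one attribute. It assumes a Regex-style format type where the tag names are user defined; the child elements shown for naming the item and listing its tags (name, tags > tag) are assumptions for illustration, while format follows the setting described above (see also User defined Tag combining later in this document).

<tagMapping>
  <data>
    <IDs>
      <item>
        <name>orderKey</name>          <!-- assumed element: name of the resulting ID -->
        <tags>
          <tag>session</tag>           <!-- assumed: tag names defined by the format type -->
          <tag>orderNumber</tag>
        </tags>
        <format>%s-%.8d</format>       <!-- e.g. session "LSE1" and order 42 become "LSE1-00000042" -->
      </item>
    </IDs>
    <attributes>
      <item>
        <name>side</name>              <!-- assumed element: name of the resulting attribute -->
        <tags>
          <tag>side</tag>
        </tags>
      </item>
    </attributes>
  </data>
</tagMapping>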

adapters > adapter > realTimeTracking

If this setting is present (even if no sub-settings are configured) then the adapter will pass the message to the real time checkpoint as defined.

Mandatory: No

Default: Unset

adapters > adapter > realTimeTracking > trackingData > description

This setting specifies which attribute or ID to use as a message description. The message description is used in the Lost Messages view. If not set then it will default to the first message ID defined on the current checkpoint.

Mandatory: No

Default: First Message ID

adapters > adapter > realTimeTracking > trackingData > category

This setting specifies which attribute to use to categorise the Latency messages. The category description is used in the Latency view. If not set then the messages will be uncategorised.

Mandatory: No

adapters > adapter > realTimeTracking > remotePlugin

This setting should be present if the adapter is reading messages but sending them to a remote real time checkpoint. If this is the case, the host, managed entity, and sampler of the plug-in hosting the real time checkpoint must be provided.

Mandatory: No

adapters > adapter > realTimeTracking > remotePlugin > host

Host and listen port of the Netprobe that is providing the real time checkpoint. The form of the input is host:port

Currently the Message Tracker plugin does not support connecting via HTTPS, so cannot send messages to a netprobe that has been started with the -secure flag.

Mandatory: Yes (if remotePlugin exists)

adapters > adapter > realTimeTracking > remotePlugin > managedEntityName

Name of the managed entity that is providing the real time checkpoint.

Mandatory: Yes (if remotePlugin exists)

adapters > adapter > realTimeTracking > remotePlugin > samplerName

Name of the sampler that is providing the real time checkpoint.

Mandatory: Yes (if remotePlugin exists)
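A minimal sketch of a remotePlugin block using the three settings above; the host, port, managed entity, and sampler names are placeholder values.

<realTimeTracking>
  <remotePlugin>
    <host>probehost01:7036</host>                      <!-- host:listen-port of the Netprobe providing the checkpoint -->
    <managedEntityName>Trading Gateway</managedEntityName>
    <samplerName>messageTracker</samplerName>
  </remotePlugin>
</realTimeTracking>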

adapters > adapter > record

If this setting is set to true then the message will be logged to the database specified in the database section.

Real Time Checkpoint Settings

These settings configure the real time checkpoints that this sampler will track messages for.

realTimeCheckpoints > checkpoint > name

The name of the real time checkpoint that the plug-in should monitor. This name must be unique for all checkpoints which will be running on a particular Netprobe (i.e. across different samplers). This is so that external adapters (which are configured with the checkpoint name) send data to the correct checkpoint.

realTimeCheckpoints > checkpoint > children

The name of the child checkpoints that this real time checkpoint expects messages to go to.

An optional host can be provided, if the checkpoint resides on a different probe. Messages are passed between probes using Geneos’s EMF protocol. If the probe running the plugin is started with the secure flag, then the communications will be sent over a secure channel. It is essential that all probes that are connected together via checkpoints use the same transport. Thus they should all use secure communications (set the -secure flag on start up) or they should all use open communications.

realTimeCheckpoints > checkpoint > parents

The names of the parent checkpoints that this real time checkpoint expects messages to come from.

You can specify the list of parents using a stringList variable. For more information, see environments > environment > var > stringList in User Variables and Environments.

realTimeCheckpoints > checkpoint > messageDelivery

The message delivery expectations for the message.

Setting Description
TO ALL CHILDREN The message is expected to be seen at all the child checkpoints, or it will be considered lost.
TO ONE CHILD The message is expected to be seen at one of the child checkpoints. It will only be considered lost if it is delivered to no checkpoints.
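To illustrate the checkpoint settings above, the sketch below defines one checkpoint with a parent and two children. The element names follow the setting paths in this section; the checkpoint names are placeholders, and the singular child/parent element names are assumptions for illustration.

<realTimeCheckpoints>
  <checkpoint>
    <name>OrderRouter</name>                          <!-- must be unique across all checkpoints on this Netprobe -->
    <parents>
      <parent>OrderEntry</parent>                     <!-- messages are expected to arrive from here -->
    </parents>
    <children>
      <child>ExchangeA</child>                        <!-- an optional host can also be provided if the checkpoint is on another probe -->
      <child>ExchangeB</child>
    </children>
    <messageDelivery>TO ONE CHILD</messageDelivery>   <!-- lost only if delivered to no child -->
  </checkpoint>
</realTimeCheckpoints>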

Real Time Tracking Settings

These settings are used if the plug-in is configured to perform Real Time Tracking. There is no point in setting them if only database recording of messages is being performed, or if the real time checkpoints being used are on a separate probe.

average > interval

The period of time over which to calculate average latency statistics. Units: Seconds

Mandatory: No

Default: 60 seconds

timeDisplayUnits

The units to use when displaying latency calculations.

Mandatory: No

Default: seconds

Setting Description
Seconds Latency measurements will be displayed in seconds.
Milliseconds Latency measurements will be displayed in milliseconds.

listSizes > pendingListSize

The maximum number of messages that will be held awaiting confirmation from Child/Parent or the current Checkpoint.

Mandatory: No

Default: 5000

listSizes > reportingListSize

The maximum number of failed messages of a type that will be held.

Mandatory: No

Default: 1000

heartbeatPeriod

The time period between heartbeats. Units: Seconds

Mandatory: No

Default: 1

timeoutPeriods > message

This setting specifies a Lost Message Timeout in seconds. The timeout is measured using the system time (or wall time) between the plug-in first seeing the message and the current time when evaluating lost messages (e.g. the time when a sample occurs). Lost messages are displayed in the Lost Messages view.

Messages older than the configured timeout are considered lost and will be displayed in the Lost Messages view. The reportingListSize setting controls how many lost messages will be retained for display. Lost messages are no longer tracked by the plug-in for further acknowledgements or latency calculations.

The Lost Message timeout is also related to the Unacknowledged and Slow timeouts. The diagram below describes the possible transitions between message views. When a message is seen at a checkpoint, it is tracked until it becomes either Acknowledged (not shown in the diagram) or Lost.

Users may optionally configure an Unacknowledged timeout, which will display messages older than the timeout until that message is either Acknowledged or Lost. The optional Slow timeout will additionally display Acknowledged messages where the latency is greater than specified.

[Diagram: possible transitions between the message views.]

Units: Seconds

Mandatory: No

Default: 10 seconds

timeoutPeriods > messageUnacknowledged

This setting specifies an Unacknowledged Message Timeout in seconds. The timeout is measured using the system time (or wall time) between the plug-in first seeing the message and the current time (i.e. sample time) when displaying messages as unacknowledged. When this setting is configured the Unacknowledged Messages view is enabled.

The Unacknowledged Message timeout must be less than the Lost Message timeout. Messages seen at a checkpoint will be displayed as Unacknowledged once they are older than the specified timeout. If they remain unacknowledged (no acknowledgements from child checkpoints are sent) then they will be marked as lost when the lost timeout elapses.

This setting only applies to unacknowledged messages; it has no effect on slow messages. See the lost message timeout setting for a diagram describing how these settings are related.

Mandatory: No

Default: No unacknowledged messages displayed.

timeoutPeriods > messageLatency

This setting specifies a Slow Message Timeout to millisecond precision. If configured the Slow Message view will be enabled, which will display only acknowledged messages where the latency exceeds the slow message timeout. This latency is computed using the message timestamps between the parent and child checkpoints the message is seen at.

In the case where ALL child checkpoints must acknowledge a message, the message will only be marked as slow if ALL child checkpoints have acknowledged the message and ANY child checkpoint latency is greater than the timeout value. (If not all child checkpoints respond then the message will eventually be marked as lost after the lost message timeout elapses).

See the lost message timeout setting for a diagram describing how these settings are related.

Mandatory: No

Default: No slow messages displayed.

timeoutPeriods > connection

Time that has to elapse with no communication before a connection to a Parent or Child plug-in is considered to have dropped. The time is in seconds.

No communication means that the sampler has not heard a heartbeat message within this period. If a Netprobe process is killed or dies, the timeout period is ignored and the status is updated immediately. Units: Seconds

Mandatory: No

Default: 10 seconds

timeoutPeriods > maxShownUnacknowledMsgsPerQueue

The number of unacknowledged messages that should be shown on each checkpoint’s queue. Limiting the number with this setting reduces the size of the dataview should a checkpoint fail completely.

If this is unset the number is unlimited.

Mandatory: No

Default: Unlimited

Recorder Settings

These settings are used if the plug-in is configured to log messages to a database. There is no point in setting them if only Real Time Tracking is being performed by the plug-in.

database

This selects the database in which to store the Messages that pass through the adapters. Only adapters with the adapters > adapter > record setting set to true will record data. The database must support the ITRS Latency Schema.

Mandatory: No

database > Disable logging

Enable this option to stop collecting data.

Mandatory: No

database > vendor

This selects the database vendor of the database used to store the data. The supported databases are MySQL and Oracle.

Mandatory: Yes (if database has been selected)

database > vendor > mysql > serverName

Name of the machine hosting the database.

Mandatory: Yes (if database has been selected)

database > vendor > mysql > databaseName

Name of the database.

Mandatory: Yes (if database has been selected)

database > vendor > mysql > port

Port the Database server listens for requests on.

Mandatory: No

Default: 3306

database > vendor > oracle > databaseName

This setting specifies the name of the Oracle database connection to use.

The connection should be specified as part of the Oracle client library installation, defined in the TNSNAMES.ORA configuration file.

Mandatory: Yes

database > userName

Username used when logging into the database.

Mandatory: No

database > password

Password used when logging into the database. This can be specified either as plaintext, or encrypted using Geneos standard encryption.

Mandatory: No
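A sketch of a database block using the MySQL settings above; the server, database, and credential values are placeholders.

<database>
  <vendor>
    <mysql>
      <serverName>dbhost01</serverName>
      <databaseName>geneos_latency</databaseName>
      <port>3306</port>                 <!-- default MySQL port -->
    </mysql>
  </vendor>
  <userName>mtracker</userName>
  <password>changeme</password>         <!-- may also be supplied encrypted using Geneos standard encryption -->
</database>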

initialConnectSamplingDelay

Specifies the number of samples to wait when the plug-in initialises, before connecting to the database. You may wish to use this setting if the plug-in takes several samples to detect and process all log files on startup, to improve database logging performance.

Mandatory: No

Default: 1 sample

persistenceFile

The name of the file used to store the state of the adapters when the probe stops, so that no data is lost.

Mandatory: No

Default: ..cache

Advanced Settings

fileReadControl > fileCheckWaitTime

The time period a source file (of an adapter) will wait after reaching the end of the file, before trying to read the file again.

Note

A source file may be shared between multiple adapters.

This setting effectively controls how frequently an adapter will poll for new data, when it reaches the end of a log file. A smaller interval means that more CPU will be consumed performing the checks, but new data added to the log file will be detected quicker.

Latency calculations will not be affected by this setting. Units: milliseconds

Mandatory: No

Default: 100 milliseconds

fileReadControl > newFileCheckWaitTime

The time period a source file (of an adapter) will wait after reaching the end of the file, before checking for a new file to replace the current one.

Note

A source file may be shared between multiple adapters.

This setting controls how frequently an adapter will poll for new data sources (log files) when it reaches the end of the current file. This is particularly relevant for files specified with wildcard characters. More frequent polling (caused by decreasing the wait time) will cause an increase in CPU usage.

Latency calculations will not be affected by this setting. Units: milliseconds

Mandatory: No

Default: 2000 milliseconds

fileReadControl > maxLinesPerMinute

This setting controls the maximum number of lines (per minute) that Message Tracker will read from a configured file, and is applied globally for all files read by the plug-in.

This means that the total maximum number of lines read by the plug-in (per minute) will be the value of this setting, multiplied by the number of files that have been configured.

Note

A source file may be shared between multiple adapters, if the adapters are configured with the same file name / pattern.

Mandatory: No

Default: 0 (no maximum)

fileReadControl > consecutiveReads

The consecutiveReads settings control the message reading speed from a file source. By limiting the speed, users can avoid peaks in CPU usage during busy times, fine-tune the proportions of CPU time given to different adapters, or reduce the load caused by database logging.

If set, then the file source will pause reading for delayFor milliseconds, after every delayAfter messages.

Mandatory: No

fileReadControl > consecutiveReads > delayAfter

The file source will pause reading for delayFor milliseconds, after every delayAfter messages.

Note

A file source may be shared between several adapters.

This setting affects messages read from a file consecutively, which assumes that the plug-in does not reach the end of the file (e.g. due to a surge in messages being logged). Decreasing the value of this setting causes fewer messages to be processed before the delay is applied. You may want to do this if you have peak times where many messages are logged to a file. Configuring the consecutiveReads setting can then limit the amount of work done, to normalise CPU usage.

Latency calculations will not be affected by this setting.

Mandatory: Yes (if consecutiveReads is set)

fileReadControl > consecutiveReads > delayFor

The file source will pause reading for delayFor milliseconds, after every delayAfter messages.

Note

A file source may be shared between several adapters.

This setting is applied when a number of consecutive messages are read from the file. The delay means that no message processing will happen for this file source for the configured time. During the delay other adapters may continue to run, and so the delay can be used to balance CPU load if one adapter is starving the others due to high rates of message logging. It can also be used to limit CPU load overall, or slow down message processing to the speed of your database.

Latency calculations will not be affected by this setting.

Unit: milliseconds

Mandatory: Yes (if consecutiveReads is set)
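For example, the fragment below (element names taken from the setting paths above, values illustrative) pauses each file source for 50 milliseconds after every 500 consecutive messages, which caps sustained throughput at roughly 10,000 messages per second per file source.

<fileReadControl>
  <consecutiveReads>
    <delayAfter>500</delayAfter>   <!-- messages read consecutively before pausing -->
    <delayFor>50</delayFor>        <!-- pause length in milliseconds -->
  </consecutiveReads>
</fileReadControl>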

fileReadControl > catchupAdapter > overscanTime

An overscan time setting should only be required if non-unique IDs mode has been enabled, due to the alterations to database logging in this mode. For more details, please see the Coping with non-unique IDs section.

This setting controls how many extra messages a catch-up adapter will read. The overscan time is specified in seconds, and is compared against the message timestamps in the file.

Note

This is a minimum value, and an extra message may be read since the message timestamp is not available until the message is processed.

e.g. Suppose that the overscanTime is set to 5 (seconds). This means that when the catch-up adapter finishes scanning the initial portion of the file, it will then read the current timestamp from messages in the file, and continue processing more messages until 5 seconds has elapsed as indicated by the message timestamps.

Mandatory: No

Default: 0 seconds (no overscan)

resetTime

The time of day at which the message counters reset for the Latency view and the Admin Adapters view. All lost messages are removed at the end of each day.

Mandatory: No

Default: 00:00

dateOffset

This offset is applied to the read date. It allows data to be read from files for previous days that only have a timestamp. A day offset of 2 will move the day back 2 days.

Mandatory: No

Default: 0

hideAdminViews

This will hide the Admin Checkpoints and Admin Links views. It will also hide the Admin Adapters view, but only if there is at least one real time checkpoint configured.

Mandatory: No

Default: false

permitNonUniqueOrderIds

This setting enables the MessageTracker tracking logic to cope with messages which do not have unique order IDs. Enabling this setting is subject to several pre-requisites as discussed in the Coping with non-unique IDs section, and will also decrease performance slightly. Therefore, the setting should only be enabled if required.

Mandatory: No

Default: false

Coping with non-unique IDs

The MessageTracker plug-in logic assumes that messages contain a unique ID which it then uses to tie messages together, both for real-time tracking latency calculations and database logging.

By enabling the permitNonUniqueOrderIds setting, MessageTracker can cope with messages where the ID is re-used between unrelated messages, subject to the following restrictions.

To handle non-unique IDs MessageTracker relies on the message ordering to determine which messages go together. Therefore messages must be presented to MessageTracker in the correct order for it to function properly. One way to ensure this is to read all messages from the same source (i.e. a single log file, although multiple adapters can be used to read the same file) or to send messages in order via XML-RPC.

MessageTracker matches messages with the same ID by associating an acknowledgement (A) with the immediately preceding request (R). We expect that the message sequence will typically look like R A R A …, which MessageTracker will then match as R↔A R↔A.

For a sequence R R A A, MessageTracker will match this as R R↔A A. The first request will be marked as lost, and the second acknowledgement will be ignored for latency calculation, although any attribute changes will be logged to the database.

If logging messages to the database where the IDs are not unique, you must also configure a set of real time checkpoints (even if you do not intend to use real time tracking for latency calculations).

MessageTracker uses the checkpoint configuration (specifically, the parent checkpoints configured for each checkpoint) to correctly link messages together in the database. This is required so that the plug-in can match a given message with a message seen at the corresponding parent checkpoint, and not another unrelated message.

A catch-up adapter runs when the MessageTracker plug-in detects a file it should monitor where data has already been written to the file. This temporary adapter then processes only the initial file data to ensure that database logging is complete.

Catch-up adapters running in “non-unique IDs mode” also have an additional optional configuration setting overscanTime. This setting specifies how much extra time the catch-up adapter should run for once it has finished processing the initial data, using the timestamps in the source file.

It may be necessary to set this setting if messages typically have a long acknowledgement time. Consider the situation where a request is contained in the initial portion of a file handled by a catch-up adapter, and the acknowledgement is later in the file. The request will then only be acknowledged in the database if the catch-up adapter also processes the acknowledgement.

Note

However, increasing this setting will increase CPU usage, as more of the file will be processed.

Filename date generation

Message Tracker file definitions can be configured to generate a filename using the current date and time. The target filename is generated every sample, and if a file exists which matches this name the file will be monitored by the adapter (once the current file has been processed to the end).

Filenames can be generated using the <today> date code; yesterday and tomorrow are not currently supported for Message Tracker. The code is replaced in the filename with the appropriate date.

For example, if the current date is 22-Aug-2008 the following will be produced:

Filename Generated name
app<today>.log app20080822.log

The output format for dates can be controlled by placing format codes in the date tag. Examples of this usage are shown below.

Filename Generated name
app<today %d-%m-%Y>.log app22-08-2008.log
app<today %d%b%y>.log app22Aug08.log

A full list of the available Time Formatting Codes is given in the Time Formatting Codes section below.

Time Formatting Codes

The following formatting codes can be used within this plug-in:

On Unix systems, some conversion specifications can be modified by preceding the conversion specifier character with the E or O modifier, to indicate that an alternative format should be used. If the alternative format or specification does not exist for the current locale, the behaviour will be as if the unmodified conversion specification were used. The Single Unix Specification mentions %Ec, %EC, %Ex, %EX, %Ey, %EY, %Od, %Oe, %OH, %OI, %Om, %OM, %OS, %Ou, %OU, %OV, %Ow, %OW, %Oy, where the effect of the O modifier is to use alternative numeric symbols (say, roman numerals), and that of the E modifier is to use a locale-dependent alternative representation.

Specifier Replaced by
%a Abbreviated weekday name (e.g. Thu)
%A Full weekday name (e.g. Thursday)
%b Abbreviated month name (e.g. Aug)
%B Full month name (e.g. August)
%c Date and time representation (locale dependent)
%d Day of the month (01-31)
%f Milliseconds. (Reads between 0 and 3 digits). The period is part of this code, to read seconds and milliseconds use %S%f. If you do not want the period included in the code please use %Of which reads just the digits.
%g Microseconds. (Reads between 0 and 6 digits). The period is part of this code, to read seconds and microseconds use %S%g. If you do not want the period included in the code please use %Og which reads just the digits.
%H Hour in 24h format (00-23)
%I Hour in 12h format (01-12)
%j Day of the year (001-366)
%m Month as a decimal number (01-12)
%M Minute (00-59)
%p AM or PM designation
%qd Day of the month (1-31) (no preceding 0 or space)
%qm Month as a decimal number (1-12) (no preceding 0 or space)
%S Second (00-61)
%U Week number with the first Sunday as the first day of week one (00-53)
%w Weekday as a decimal number with Sunday as 0 (0-6)
%W Week number with the first Monday as the first day of week one (00-53)
%x Date representation (locale dependent)
%X Time representation (locale dependent)
%y Year, last two digits (00-99)
%Y Year, all digits (e.g. 2008)
%Z, %z Timezone name or abbreviation (e.g. CDT)
%% A % sign
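As a worked example of these codes, a log timestamp written as 22/08/2008 06:31:35.371 AM could be parsed with the format below, shown inside the formatType > data > timestamp > format element described earlier; the timestamp value itself is illustrative.

<timestamp>
  <format>%d/%m/%Y %I:%M:%S%f %p</format>  <!-- %I is the 12-hour hour, %f reads ".371" including the period, %p matches AM/PM -->
</timestamp>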

User defined Tag combining

Multiple tag values can be combined in a controlled fashion to produce an ID or attribute value. The format setting allows a large amount of control. Format markers can be embedded in the string. These are strings starting with a % character that match the following prototype:

%[width][.precision]specifier

The first format marker will be replaced by the first tag, the second by the second tag, and so on. The supported specifiers are listed in the table below.

The width and precision are numeric values that configure how the tag value will be represented. The width defines the minimum number of characters to be used for the value. If the data supplied is shorter, then the value will be padded with blank spaces.

The precision specifies the number of digits after the decimal point when used with the specifier f; it has no effect when used with the specifier s. When used with the specifier d, it specifies the minimum number of digits to be used when representing the value. If the value is shorter, the number will be padded with leading zeros.

The debug configuration section holds settings used for debugging the plug-in. Please contact ITRS support if you require assistance with this plug-in.

Specifier Output Example
d Decimal integer 234
f Decimal floating point number 22.67
s Raw string value sample
% A % followed by a second % will become a single % %
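As a worked example, suppose an ID is built from two tags whose values are the string "LSE1" and the number 42 (placeholder tag names and values). A format of %s-%.6d produces LSE1-000042: the s specifier inserts the string as-is, and the .6 precision pads the integer to six digits with leading zeros.

<format>%s-%.6d</format>   <!-- e.g. tag values "LSE1" and 42 produce "LSE1-000042" -->

With %10s the same string value would instead be padded with spaces to a minimum width of 10 characters, and %.2f applied to a numeric tag such as 3.14159 would produce 3.14.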

Database Schema

In order to record latency data, the following tables need to be added to the database schema. The tables can sit inside a normal Geneos schema, or they can sit in a separate schema.

The database vendors that are currently supported by Message Tracker are MySQL and Oracle.

Version 1 Schema

The following schema is used by GA2010.1 and earlier Netprobes for database logging.

-- Table structure for table `lat_attrib`
--
CREATE TABLE `lat_attrib` (
  `ID` int(11) NOT NULL,
  `NAME` varchar(128) collate latin1_bin NOT NULL,
  `VALUE` varchar(128) collate latin1_bin NOT NULL,
  PRIMARY KEY (`ID`,`NAME`),
  KEY `NAME` (`NAME`,`VALUE`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_id`
--
CREATE TABLE `lat_id` (
  `ID` int(11) NOT NULL,
  `NAME` varchar(128) collate latin1_bin NOT NULL,
  `VALUE` varchar(128) collate latin1_bin NOT NULL,
  PRIMARY KEY (`ID`,`NAME`),
  KEY `NAME` (`NAME`,`VALUE`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_msg`
--
CREATE TABLE `lat_msg` (
  `ID` int(11) NOT NULL auto_increment,
  `LOCK_COL` int(11) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- Table structure for table `lat_time`
--
CREATE TABLE `lat_time` (
  `ID` int(11) NOT NULL,
  `NAME` varchar(128) collate latin1_bin NOT NULL,
  `TIME_SEEN` double NOT NULL,
  PRIMARY KEY (`ID`,`NAME`,`TIME_SEEN`),
  KEY `NAME` (`NAME`,`TIME_SEEN`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_unqattr`
--
CREATE TABLE `lat_unqattr` (
  `NAME` varchar(128) collate latin1_bin NOT NULL,
  `VALUE` varchar(128) collate latin1_bin NOT NULL,
  PRIMARY KEY (`NAME`,`VALUE`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_version`
--
CREATE TABLE `lat_version` (
  `major` int(11) default NULL,
  `minor` int(11) default NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `lat_version` (`major`, `minor`) VALUES (1, 0);

Version 2 Schema for MySQL

The following database schema is used for logging to a MySQL database by GA2010.4 and later Netprobes.

-- Table structure for table `lat_chkpnt`
--
CREATE TABLE IF NOT EXISTS `lat_chkpnt` (
  `CP` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `NAME` varchar(128) COLLATE latin1_bin NOT NULL,
  `GATEWAY` varchar(384) COLLATE latin1_bin NOT NULL,
  `ENTITY` varchar(384) COLLATE latin1_bin NOT NULL,
  `SAMPLER` varchar(384) COLLATE latin1_bin NOT NULL,
  `CATCHUP` boolean NOT NULL,
  PRIMARY KEY (`CP`),
  KEY `CHKPNT` (`NAME`,`GATEWAY`,`ENTITY`,`SAMPLER`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin AUTO_INCREMENT=1;

-- Table structure for table `lat_attr`
--
CREATE TABLE IF NOT EXISTS `lat_attr` (
  `MID_CP` int(11) unsigned NOT NULL,
  `MID_ID` int(11) unsigned NOT NULL,
  `NAME` varchar(128) COLLATE latin1_bin NOT NULL,
  `VALUE` varchar(128) COLLATE latin1_bin NOT NULL,
  PRIMARY KEY (`MID_CP`,`MID_ID`,`NAME`),
  KEY `ATTR` (`NAME`,`VALUE`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_id`
--
CREATE TABLE IF NOT EXISTS `lat_id` (
  `MID_CP` int(11) unsigned NOT NULL,
  `MID_ID` int(11) unsigned NOT NULL,
  `NAME` varchar(128) COLLATE latin1_bin NOT NULL,
  `VALUE` varchar(128) COLLATE latin1_bin NOT NULL,
  PRIMARY KEY (`MID_CP`,`MID_ID`,`NAME`),
  KEY `ID` (`NAME`,`VALUE`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_time`
--
CREATE TABLE IF NOT EXISTS `lat_time` (
  `MID_CP` int(11) unsigned NOT NULL,
  `MID_ID` int(11) unsigned NOT NULL,
  `TIME_SEEN` decimal(16,6) NOT NULL,
  PRIMARY KEY (`MID_CP`,`MID_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_order`
--
CREATE TABLE IF NOT EXISTS `lat_order` (
  `OID_CP` int(11) unsigned NOT NULL,
  `OID_ID` int(11) unsigned NOT NULL,
  `MID_CP` int(11) unsigned NOT NULL,
  `MID_ID` int(11) unsigned NOT NULL,
  PRIMARY KEY (`OID_CP`,`OID_ID`,`MID_CP`,`MID_ID`),
  KEY `MSG` (`MID_CP`,`MID_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_unqattr`
--
CREATE TABLE IF NOT EXISTS `lat_unqattr` (
  `NAME` varchar(128) COLLATE latin1_bin NOT NULL,
  `VALUE` varchar(128) COLLATE latin1_bin NOT NULL,
  PRIMARY KEY (`NAME`,`VALUE`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

-- Table structure for table `lat_version`
--
CREATE TABLE IF NOT EXISTS `lat_version` (
  `major` int(11) NOT NULL,
  `minor` int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_bin;

INSERT INTO `lat_version` (`major`, `minor`) VALUES (2, 0);

Version 2 Schema for Oracle

The following schema definition is used for logging to an Oracle database. This feature is available in Netprobes with version GA2011.2 or later.

--
-- Table structure for table `lat_chkpnt`
--
CREATE TABLE "LAT_CHKPNT"
(
        "CP" NUMBER(10, 0) NOT NULL,
        "NAME" VARCHAR2(128 BYTE) NOT NULL,
        "GATEWAY" VARCHAR2(384 BYTE) NOT NULL,
        "ENTITY" VARCHAR2(384 BYTE) NOT NULL,
        "SAMPLER" VARCHAR2(384 BYTE) NOT NULL,
        "CATCHUP" NUMBER(1, 0) NOT NULL,
        CONSTRAINT "LAT_CHKPNT_PK" PRIMARY KEY (CP) ENABLE
);

CREATE INDEX "CHKPNT" ON "LAT_CHKPNT" (NAME, GATEWAY, ENTITY, SAMPLER);

CREATE SEQUENCE "LAT_CHKPNT_SEQ"
START WITH 1
INCREMENT BY 1
NOMAXVALUE;

CREATE TRIGGER "LAT_CHKPNT_TRG"
BEFORE INSERT ON "LAT_CHKPNT"
FOR EACH ROW
BEGIN
        SELECT LAT_CHKPNT_SEQ.NEXTVAL INTO :NEW.CP FROM DUAL;
END;
/

ALTER TRIGGER "LAT_CHKPNT_TRG" ENABLE;

--
-- Table structure for table `lat_attr`
--
CREATE TABLE "LAT_ATTR"
(
        "MID_CP" NUMBER(11) NOT NULL,
        "MID_ID" NUMBER(11) NOT NULL,
        "NAME" VARCHAR2(128 BYTE) NOT NULL,
        "VALUE" VARCHAR2(128 BYTE) NOT NULL,
        CONSTRAINT "LAT_ATTR_PK" PRIMARY KEY (MID_CP, MID_ID, NAME) ENABLE
);

CREATE INDEX "ATTR" ON "LAT_ATTR" (NAME, VALUE);

--
-- Table structure for table `lat_id`
--
CREATE TABLE "LAT_ID"
(
        "MID_CP" NUMBER(11) NOT NULL,
        "MID_ID" NUMBER(11) NOT NULL,
        "NAME" VARCHAR2(128 BYTE) NOT NULL,
        "VALUE" VARCHAR2(128 BYTE) NOT NULL,
        CONSTRAINT "LAT_ID_PK" PRIMARY KEY (MID_CP, MID_ID, NAME) ENABLE
);

CREATE INDEX "ID" ON "LAT_ID" (NAME, VALUE);

--
-- Table structure for table `lat_time`
--
CREATE TABLE "LAT_TIME"
(
        "MID_CP" NUMBER(11) NOT NULL,
        "MID_ID" NUMBER(11) NOT NULL,
        "TIME_SEEN" NUMBER(16,6) NOT NULL,
        CONSTRAINT "LAT_TIME_PK" PRIMARY KEY (MID_CP, MID_ID) ENABLE
);

-- Converts UNIX timestamp to Oracle TIMESTAMP
CREATE FUNCTION FROM_UNIXTIME(unixts IN NUMBER)
RETURN TIMESTAMP IS
        max_ts NUMBER := 2145916799;  -- max Oracle timestamp 2037-12-31 23:59:59
        min_ts NUMBER := -2114380800; -- min Oracle timestamp 1903-01-01 00:00:00
        unix_epoch TIMESTAMP := CAST(TO_DATE('19700101000000','YYYYMMDDHH24MISS') AS TIMESTAMP);
        oracle_ts TIMESTAMP;
BEGIN
        IF unixts > max_ts THEN
                RAISE_APPLICATION_ERROR(-20010, 'UNIX timestamp too large for 32-bit limit');
        ELSIF unixts < min_ts THEN
                RAISE_APPLICATION_ERROR(-20010, 'UNIX timestamp too small for 32-bit limit');
        ELSE
                oracle_ts := unix_epoch + NUMTODSINTERVAL(unixts, 'SECOND');
        END IF;
        RETURN oracle_ts;
END;
/

--
-- Table structure for table `lat_order`
--
CREATE TABLE "LAT_ORDER"
(
        "OID_CP" NUMBER(11) NOT NULL,
        "OID_ID" NUMBER(11) NOT NULL,
        "MID_CP" NUMBER(11) NOT NULL,
        "MID_ID" NUMBER(11) NOT NULL,
        CONSTRAINT "LAT_ORDER_PK" PRIMARY KEY (OID_CP, OID_ID, MID_CP, MID_ID) ENABLE
);

CREATE INDEX "MSG" ON "LAT_ORDER" (MID_CP, MID_ID);

--
-- Table structure for table `lat_unqattr`
--
CREATE TABLE "LAT_UNQATTR"
(
        "NAME" VARCHAR2(128 BYTE) NOT NULL,
        "VALUE" VARCHAR2(128 BYTE) NOT NULL,
        CONSTRAINT "LAT_UNQATTR_PK" PRIMARY KEY (NAME, VALUE) ENABLE
);

--
-- Table structure for table `lat_version`
--
CREATE TABLE "LAT_VERSION"
(
        "MAJOR" NUMBER NOT NULL,
        "MINOR" NUMBER NOT NULL
);

INSERT INTO "LAT_VERSION" ("MAJOR", "MINOR") VALUES (2, 0);
["Geneos"] ["Geneos > Netprobe"] ["Technical Reference"]

Was this topic helpful?