Google Cloud Platform
Overview
This Collection Agent plugin monitors Google services through Google Cloud Platform metrics. The metrics gathered depend on the plugin configuration and the Google service being monitored.
You can use the following collectors in the Collection Agent YAML file:
- GoogleCloudMonitoringCollector — monitors Google Cloud services. Supported services can be found at Monitored GCP Services.
- GoogleCloudPubSubMessageCollector — collects Google Cloud logs. Supported log metrics can be found at Google Cloud Platform logging resources.
- GoogleCloudBigQueryCollector — runs custom queries against Google Cloud BigQuery. The metrics produced depend on the queried tables and the plugin configuration. For example, you can use this collector to monitor Google Cloud cost and usage from Cloud Billing exports and other BigQuery data.
Prerequisites
Geneos environment
The GCP Collection Agent plugin requires the following versions of Geneos components:
- Gateway and Netprobe 7.1.x or higher. The same version must be used for the GSE schema.
- Collection Agent 5.0.0 or higher. To run a Collection Agent, see Collection Agent setup.
The GCP binaries are packaged with the Netprobe, and are stored in the collection_agent > plugins folder. Alternatively, you can download separate binaries for the GCP plugin from ITRS Downloads.
GCP environment
The GCP plugin requires valid Google credentials.
Google Application Default Credentials (ADC) searches for credentials in the following locations:
- Google Application Credentials environment variable
- User credentials set up by using the Google Cloud CLI
- The attached service account, returned by the metadata server
The order of the locations ADC checks for credentials is not related to the relative merit of each location.
Google Application Credentials environment variable
You can use the GOOGLE_APPLICATION_CREDENTIALS environment variable to provide the location of a credential JSON file. This JSON file can be one of the following types of files:
- A credential configuration file for workload identity federation. Workload identity federation enables you to use an external identity provider to access Google Cloud resources. For more information, see Authenticating by using client libraries, the gcloud CLI, or Terraform in the Identity and Access Management (IAM) documentation.
- A service account key. Service account keys create a security risk and are not recommended. Unlike the other credential file types, compromised service account keys can be used by a bad actor without any additional information. For more information, see Best practices for using and managing service account keys.
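If you use this method, setting the variable can be sketched as follows (the key file path is a placeholder; substitute the location of your own credential file):

```shell
# Point ADC at a credential JSON file before starting the Collection Agent.
# The path below is a placeholder, not a real file shipped with the plugin.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/gcp-credentials.json"
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```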
User credentials provided by using the gcloud CLI
You can provide user credentials to ADC by running the gcloud auth application-default login command. This command places a JSON file containing the credentials you provide (usually from your own Google Account) in a well-known location on your file system. The location depends on your operating system:
- Linux, macOS: $HOME/.config/gcloud/application_default_credentials.json
- Windows: %APPDATA%\gcloud\application_default_credentials.json
The credentials you provide to ADC by using the gcloud CLI are distinct from your gcloud credentials—the credentials the gcloud CLI uses to authenticate to Google Cloud.
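The well-known location can be resolved programmatically; a minimal sketch, assuming the default gcloud configuration directories listed above:

```shell
# Resolve the well-known ADC credential file path for the current OS.
case "$(uname -s)" in
  Linux|Darwin) adc="$HOME/.config/gcloud/application_default_credentials.json" ;;
  *)            adc="$APPDATA/gcloud/application_default_credentials.json" ;;
esac
echo "$adc"
```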
The attached service account
Many Google Cloud services let you attach a service account that can be used to provide credentials for accessing Google Cloud APIs. If ADC does not find credentials it can use in either the GOOGLE_APPLICATION_CREDENTIALS environment variable or the well-known location for Google Account credentials, it uses the metadata server to get credentials for the service where the code is running.
Using the credentials from the attached service account is the preferred method for finding credentials in a production environment on Google Cloud. To use the attached service account, follow these steps:
- Create a user-managed service account.
- Grant that service account the least privileged IAM roles possible.
- Attach the service account to the resource where your code is running.
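The steps above can be sketched with gcloud. This is a sketch only: the project, service account, VM name, zone, and role are all hypothetical, and the commands are printed rather than executed so you can review them before running:

```shell
# Sketch: create, grant, and attach a least-privilege service account.
# All names below are placeholders; roles/monitoring.viewer is one example
# of a least-privileged role for reading Cloud Monitoring data.
project="my-gcp-project"
sa="geneos-ca"
sa_email="${sa}@${project}.iam.gserviceaccount.com"

# Print the commands for review; run them once the names are adjusted.
cat <<EOF
gcloud iam service-accounts create ${sa} --project=${project}
gcloud projects add-iam-policy-binding ${project} --member=serviceAccount:${sa_email} --role=roles/monitoring.viewer
gcloud compute instances set-service-account my-vm --service-account=${sa_email} --zone=us-central1-a
EOF
```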
For more information, see Google Application Default Credentials.
Required Google Cloud Permissions
For all services, the following monitoring permissions are needed to gather metrics from Cloud Monitoring:
- monitoring.metricDescriptors.list
- monitoring.monitoredResourceDescriptors.list
- monitoring.timeSeries.list
Some services require additional permissions to get resource information (e.g. user labels):
- compute:
  - compute.disks.list
  - compute.instances.list
- file:
  - file.instances.list
- storage:
  - storage.buckets.list
- vpn:
  - compute.vpnGateways.list
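One way to grant these permissions together is a custom IAM role created with gcloud iam roles create. The following role definition file is a sketch: the title and description are assumptions, and the service-specific entries should be trimmed to the services you actually monitor:

```yaml
# Sketch of a custom role definition file for `gcloud iam roles create`.
title: Geneos GCP Monitoring
description: Permissions for the Geneos GCP Collection Agent plugin
stage: GA
includedPermissions:
  # Required for all services
  - monitoring.metricDescriptors.list
  - monitoring.monitoredResourceDescriptors.list
  - monitoring.timeSeries.list
  # Service-specific extras (include only the services you monitor)
  - compute.disks.list
  - compute.instances.list
```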
Required Pub/Sub permissions
- pubsub.subscriptions.get
- pubsub.subscriptions.pull
- pubsub.subscriptions.acknowledge
Required BigQuery permissions
- bigquery.jobs.create
- bigquery.tables.get
- bigquery.tables.getData
Note
These permissions can be granted using the predefined role roles/bigquery.dataViewer, combined with roles/bigquery.jobUser.
Configure Geneos to deploy the GCP plugin
The GCP plugin supports Collection Agent publication into Geneos using dynamic Managed Entities. Setting up this plugin in Geneos involves these primary steps:
- Configure your mappings.
- Configure your other Dynamic Entities in the Gateway. See Create Dynamic Entities in Collection Agent setup for a more detailed procedure.
Set up your Collection Agent plugin
Set up your collector in the Gateway Setup Editor by adding the following configuration in Dynamic Entities > Collectors. For more information, see Collectors in Dynamic Entities.
Below are the available collectors for the GCP plugin:
| Collectors | Description |
|---|---|
| GoogleCloudMonitoringCollector | Enables the GCP collector. By default, all supported services are monitored. If you want to monitor specific services, use the enabledServices option. |
| GoogleCloudPubSubMessageCollector | Enables the GCP logging collector. The logs are collected from the Pub/Sub service. Desired logs from the Cloud Logging API should be routed to a Pub/Sub topic with a corresponding subscriber ID. A list of subscriber IDs is required and added to the subscribers option. |
| GoogleCloudBigQueryCollector | Enables the GCP BigQuery collector, which executes queries in Google Cloud BigQuery and displays the data as configured. You can collect data from any tables you have access to, such as billing and cost data, usage and consumption data, or logs exported to BigQuery. The metrics produced depend on the BigQuery tables you query and how you configure the plugin. |
Reference configuration for Google Cloud Monitoring Collector
collectors:
# GCP collector configuration
- name: gcp
type: plugin
className: GoogleCloudMonitoringCollector
# Interval (in millis) between collections (optional, defaults to five minutes).
collectionInterval: 300000
# Project id to monitor (required)
projectId: itrsdev
# Google cloud service metrics to be gathered (optional, default to all supported services)
#enabledServices:
# - compute
# Plugin self monitoring (optional, disabled by default)
#selfMonitoring:
# Collection interval of the selfMonitoring metrics (required if selfMonitoring is used)
# interval: 300000
# Enable detailed selfMonitoring metrics reporting (optional, false by default)
# The two levels of monitoring are:
# - Detailed (true): API call counts are reported per monitored service, per resource, per specific API
# - Summary (false): API call counts are reported in total per monitored service
# detailedMode: false
# Google cloud service launch stage configuration (optional, default is GA)
#enabledStages:
# - ALPHA
The plugin contains an optional selfMonitoring configuration that enables self monitoring, which counts the number of Google API calls made. This feature is disabled by default and is enabled by setting the interval value, which also determines how often the self monitoring metrics are published.
By default, the plugin publishes a summary view containing only the total calls per service. If detailedMode is true, the plugin publishes a more detailed view that breaks down the API calls per service, per resource, and per API called.
An optional enabledStages configuration is also available to include specific metric launch stages.
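For example, a minimal fragment enabling self monitoring with detailed reporting (the interval value shown matches the default used in the reference configuration above):

```yaml
# Sketch: self monitoring enabled with detailed API call reporting
selfMonitoring:
  interval: 300000
  detailedMode: true
```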
Reference configuration for Google Cloud PubSub Message Collector
collectors:
# GCP collector configuration
- name: gcp
type: plugin
className: GoogleCloudPubSubMessageCollector
# Project id to monitor (required)
projectId: ${env:PROJECTID}
# Google cloud logs from subscriber list to be gathered (required)
subscribers:
- MySub
# Message consumption (optional, disabled by default)
#acknowledgeMessage: false
The plugin contains an optional acknowledgeMessage configuration to consume or acknowledge messages received by the collector. This is disabled by default.
Reference configuration for Google Cloud BigQuery Collector
Before using the GoogleCloudBigQueryCollector, ensure that you set up the required BigQuery permissions.
collectors:
# GCP collector configuration
- name: gcp
type: plugin
className: GoogleCloudBigQueryCollector
# Interval (in millis) between collections (optional, defaults to five minutes).
collectionInterval: 300000
# Project id to monitor (required)
projectId: ${env:PROJECTID}
# Array of query configurations. Multiple queries can be specified.
queryConfigs:
# Query to run
- query: ""
# Unique identifier of this query. This will be used as a Dimension
name: ""
# Column name from which the event timestamp will be obtained (optional)
# The configured column will not be available in the dataview
#timestampColumn: ""
# Query timeout in seconds. (Optional, default is no timeout)
#timeout: 5
# The timezone offset used to process TIMESTAMP (without time zone) data. (Optional, default is "+00:00" or UTC)
#timestampZoneOffset: "+08:00"
# Dimensions added to each datapoint. At least one dimension is required from either dbColumns or static key-value pairs.
dimensions:
# List of DB columns to use as datapoint dimensions.
dbColumns:
- column_name
# Map of static key-value pairs that will be added as a dimension to each datapoint.
static:
key1: "value1"
key2: "value2"
# List of DB columns that will be added as datapoints
dataPoints:
# Column Name
- column: db_column_name
# Datapoint type. Possible values: StatusMetric, Gauge, Counter, EntityAttribute
type: Gauge
# Unit. This is only applicable to gauge.
# Possible values are as described in Unit class. Will Default to Unit.NONE.
unit:
# Unit Source. Can either be column or static
source: static
# Unit Value. Column name for column source, value for static source.
value: BYTES
# Columns that are not specified in the dimensions or datapoints config will be treated as a datapoint.
# If the SQL data is of numeric SQL type, it will be treated as a Gauge by default.
# If it is a Date, Time, or Timestamp type, it will be treated as an EntityAttribute.
# Otherwise, it will be treated as a StatusMetric.
# Common query configuration that will be applied to all queries defined in queryConfigs.
#commonQueryConfig:
# Query timeout in seconds. Default is no timeout.
# Value inside queryConfig takes priority.
#timeout: 5
# The timezone offset used to process TIMESTAMP (without time zone) data. Default is "+00:00" or UTC.
# Value inside queryConfig takes priority.
#timestampZoneOffset: "+08:00"
#dimensions:
# dbColumns:
# - common_column_name
# static: {}
# commonKey1: "commonValue1"
#dataPoints:
# # Column Name
# - column: ""
# # Datapoint type. Possible values: StatusMetric, Gauge, Counter, EntityAttribute
# type: ""
# # Unit. This is only applicable to gauges.
# # Possible values are as described in Unit class.
# unit:
# source:
# value:
The collector runs each query defined in queryConfigs at the configured collectionInterval. Each query uses standard BigQuery SQL.
You can run multiple queries by adding more entries to queryConfigs. Each entry must have a unique name, which is used as a dimension to identify which query produced each row.
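The multi-query setup can be sketched as follows (the table names, column names, and queries are hypothetical):

```yaml
# Sketch: two independent queries, each identified by a unique name
queryConfigs:
  - query: "SELECT region, SUM(cost) AS total_cost FROM `my_project.billing.costs` GROUP BY region"
    name: "cost-by-region"
    dimensions:
      dbColumns:
        - region
  - query: "SELECT service, COUNT(*) AS job_count FROM `my_project.ops.jobs` GROUP BY service"
    name: "jobs-by-service"
    dimensions:
      dbColumns:
        - service
```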
Note
To monitor cost and usage with the GoogleCloudBigQueryCollector, configure your mappings using the provided template. For detailed steps, refer to Monitor Google Cloud cost and usage.
Supported timestampColumn values
The optional timestampColumn specifies the event timestamp. The configured column must contain a supported timestamp format and is not displayed in the dataview. The supported timestamp formats are:
- “2026-01-01”
- “2026-01-01T00:00:00Z”
- “2026-01-01T00:00:00+08:00”
- “2026-01-01T00:00:00.000”
- “2026-01-01T00:00:00”
- “2026-01-01 00:00:00”
- “2026-01-01 00:00:00.000”
If the value cannot be parsed, the event is still processed and the current event processing time is used as the timestamp.
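A sketch of a query configuration using timestampColumn (the table and column names are hypothetical):

```yaml
# Sketch: event_time supplies the event timestamp and is excluded
# from the dataview; region remains available as a dimension.
queryConfigs:
  - query: "SELECT event_time, region, error_count FROM `my_project.logs.daily_errors`"
    name: "daily-errors"
    timestampColumn: "event_time"
    dimensions:
      dbColumns:
        - region
```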
Dimension ordering and precedence
When dimensions are configured at both the commonQueryConfig level and individual queryConfig level, they are merged with specific ordering and precedence rules to ensure consistent dimension handling.
Dimension ordering
Dimensions are applied to datapoints in the following order:
- Common Query Config: Static Dimensions — static key-value pairs defined in commonQueryConfig.dimensions.static, following the order specified in the configuration.
- Query Config: Static Dimensions — static key-value pairs defined in queryConfig.dimensions.static, following the order specified in the configuration.
- Common Query Config: DB Column Dimensions — database columns defined in commonQueryConfig.dimensions.dbColumns, following the order specified in the configuration.
- Query Config: DB Column Dimensions — database columns defined in queryConfig.dimensions.dbColumns, following the order specified in the configuration.
This ordering is preserved using SequencedMap collections to maintain consistent dimension key ordering across all datapoints.
Note
Dimensions within each category appear in the exact order defined in your configuration. For a specific dimension ordering, list the static dimensions and DB columns in your desired sequence.
Value precedence
When the same dimension key is defined in both common and individual query configurations, the query-specific value takes precedence:
- Static dimensions — if the same static dimension key exists in both commonQueryConfig.dimensions.static and queryConfig.dimensions.static, the value from the individual query config is used.
- DB column dimensions — if the same database column is specified in both commonQueryConfig.dimensions.dbColumns and queryConfig.dimensions.dbColumns, the column appears only once in the final dimension list and is not duplicated.
Given this configuration:
commonQueryConfig:
  dimensions:
    static:
      environment: "production"   # 1st: first static dimension in common config
      region: "us-east-1"         # 2nd: second static dimension in common config
    dbColumns:
      - server_id                 # 4th: first DB column in common config
      - database_name             # 5th: second DB column in common config
queryConfigs:
  - query: "SELECT * FROM processes"
    dimensions:
      static:
        region: "us-west-2"       # Overrides common config value, keeps position 2
        service: "web-api"        # 3rd: first static dimension in query config
      dbColumns:
        - process_id              # 6th: first DB column in query config
        - user_name               # 7th: second DB column in query config
The resulting dimension order for each datapoint would be:
| Order | Dimension | Ordering rule |
|---|---|---|
| 1 | environment = "production" | The first static dimension under commonQueryConfig, following the dimension ordering. |
| 2 | region = "us-west-2" | The second static dimension under commonQueryConfig, but the value from queryConfigs takes precedence. |
| 3 | service = "web-api" | The first static dimension under queryConfigs, following the dimension ordering. |
| 4 | server_id = [value from DB] | The first DB column under commonQueryConfig, following the dimension ordering. |
| 5 | database_name = [value from DB] | The second DB column under commonQueryConfig, following the dimension ordering. |
| 6 | process_id = [value from DB] | The first DB column under queryConfigs, following the dimension ordering. |
| 7 | user_name = [value from DB] | The second DB column under queryConfigs, following the dimension ordering. |
This predictable ordering ensures that dimension keys appear in the same sequence across all datapoints for consistent metric aggregation and visualization.
Dimensions with blank or empty values default to N/A, and dimension values exceeding 256 characters are truncated. If a column listed in dimensions.dbColumns does not exist in the table schema, it is ignored.
Configure your mappings
Use one of the following options to configure your dynamic mappings:
- Add the templates/gcp_mapping.xml as an include file in your Gateway.
- Set up a custom mapping in Dynamic Entities > Mapping. For more information, see Mapping and mapping group in Dynamic Entities.
Monitor Google Cloud cost and usage
To monitor Google Cloud cost and usage data with the GoogleCloudBigQueryCollector using the packaged template templates/gcp_mapping.xml, perform the following steps:
- Assign an appropriate Cloud Billing IAM role. You can use either the Billing Account Costs Manager or the Billing Account Administrator role.
- Export Cloud Billing data to BigQuery in your Google Cloud project. Once the export is complete, note the BigQuery dataset and table where the billing data is stored.
- Update the templates/gcp_mapping.xml file by setting the effective date range and the billing table reference.
  - Date range (in the query WHERE clause): set usage_start_time to the period you want to report on. For example:
    WHERE cost_type != 'tax' AND cost_type != 'adjustment' AND usage_start_time >= '2026-02-01T00:00:00 US/Pacific' AND usage_start_time < '2026-03-01T00:00:00 US/Pacific' )
  - Table reference (in the query FROM clause): specify your project, dataset, and table:
    FROM `project_id.dataset_id.table_id`

The collector will then query the Cloud Billing data from BigQuery using this template.
Note
- Billing costs can take a few hours to appear in the BigQuery export and may take longer than 24 hours.
- The base SQL query comes from the Billing Reports Generate query feature in the Google Cloud console. You can change the query to suit your needs. See Example queries for Cloud Billing data export for more options.
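A comparable billing query can also be run directly as a GoogleCloudBigQueryCollector query configuration. The following is a sketch: the project, dataset, and table IDs are placeholders, and the column names assume the standard Cloud Billing BigQuery export schema:

```yaml
# Sketch: cost per service from a Cloud Billing export table
queryConfigs:
  - query: >
      SELECT service.description AS service, SUM(cost) AS total_cost
      FROM `project_id.dataset_id.table_id`
      WHERE cost_type != 'tax' AND cost_type != 'adjustment'
        AND usage_start_time >= '2026-02-01T00:00:00 US/Pacific'
        AND usage_start_time < '2026-03-01T00:00:00 US/Pacific'
      GROUP BY service
    name: "billing-cost-by-service"
    dimensions:
      dbColumns:
        - service
```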
Example Gateway-SQL Summary view
The GCP mapping template also includes a Gateway-SQL sampler named GCP that shows a summary view of your GCP project.
This view provides information on all the instances, disks, and networks under the monitored project.
Access GCP cloud through a proxy host
You can access GCP cloud through a proxy by adding the https.* properties to the JVM arguments. For example, to access the cloud through a proxy host and port, add the following properties:
-Dhttps.proxyHost=webcache.example.com -Dhttps.proxyPort=8080
For more information on adding JVM arguments, see Managed Collection Agent.
To learn more about the available properties to enable proxy access, see Java Networking and Proxies.