Configuration reference

ITRS provides sample configuration files for installing an Obcerv instance. You can download sample scenarios from Sample configuration files.

All advanced configuration settings are optional; you do not need to change any of the settings described here for a default installation of Obcerv.

Resource settings

Each Obcerv workload has a resources parameter that defines the resource requests and limits for the pods in that workload. We recommend setting these parameters for all workloads; otherwise, some pods may consume all available resources on your Kubernetes cluster.

The provided configuration examples set baseline requests and limits for all workloads. You can alter them as needed, for example:

kafka:
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"

Timescale volume sizing

By setting certain parameters, you can control the sizing and distribution of persistent data volumes for Timescale. The following are sample values:

- dataDiskSize: "50Gi"
- timeseriesDiskCount: 4
- timeseriesDiskSize: "10Ti"

There are two types of data stored in Timescale: timeseries data and non-timeseries data. By default, all data is stored on the data volume, which is sized by the dataDiskSize parameter, and timeseriesDiskCount is set to 0. However, timeseries data can instead be stored on isolated volumes that may use a different storage class and can be scaled when more storage is needed.

When the timeseriesDiskCount parameter is greater than 0, all timeseries data is distributed among the timeseries data volumes, each of which has the size defined by the timeseriesDiskSize parameter. This is helpful when a cloud provider constrains the maximum volume size (for example, AWS imposes a maximum volume size of 16Ti).
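
As a rough sketch, assuming these parameters sit under the timescale block of your parameters file, an isolated-volume configuration might look like this:

timescale:
  dataDiskSize: "50Gi"
  timeseriesDiskCount: 4
  timeseriesDiskSize: "10Ti"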

To calculate the total disk volume size utilised by Timescale, use the following formula:

dataDiskSize + (timeseriesDiskSize * timeseriesDiskCount)
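
For example, with the sample values above, the total is:

50Gi + (10Ti * 4) = 40Ti + 50Gi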

Timescale retention settings

To adjust metric data retention, update the timescale.retention parameter. Values use the format <number><unit>, for example 5d or 45m. The supported time units are:

- m  (minutes)
- h  (hours)
- d  (days)
- mo (months)
- y  (years)

For example:

timescale:
  retention:
    metrics:
      chunkSize: 3h
      retention: 60d
      compressAfter: 3h
  ...

Kafka retention settings

The kafka.defaultRetentionMillis parameter controls how long data is retained in Kafka topics. It defaults to 21600000 (6 hours).
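
For example, to retain data in Kafka for 12 hours instead of the default, you might set something like:

kafka:
  # 12 hours expressed in milliseconds
  defaultRetentionMillis: 43200000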

Timescale or Kafka on reserved nodes

For larger deployments, you may want to run Timescale, Kafka, or both on reserved Kubernetes nodes. You can achieve this by applying labels and taints to the nodes, and tolerations and a nodeSelector to the workloads.

The following example shows how to deploy Timescale on reserved nodes:

  1. Add a label to the Timescale nodes in the manifest or using kubectl (see the example after these steps):

    instancegroup: timescale-nodes
    
  2. Add a taint to the Timescale nodes in the manifest or using kubectl:

    dedicated=timescale-nodes:NoSchedule
    
  3. Set the following in your parameters file:

    timescale:
    
      nodeSelector:
        # only schedule on nodes that have this label
        instancegroup: timescale-nodes
    
      tolerations:
      # must match the tainted node setting 
      - key: dedicated
        operator: Equal
        value: timescale-nodes
        effect: NoSchedule
    
  4. (Optional) For Obcerv to collect pod logs from the reserved nodes, add the following tolerations so that the logs agent can run on these nodes:

    collection:
      daemonSet:
        tolerations:
        # must match the tainted node setting 
        - key: dedicated
          operator: Equal
          value: timescale-nodes
          effect: NoSchedule
        - key: dedicated
          operator: Equal
          value: kafka-nodes
          effect: NoSchedule
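
As an illustration of steps 1 and 2, the label and taint could be applied with kubectl roughly as follows (the node name is a placeholder):

kubectl label nodes <timescale-node-name> instancegroup=timescale-nodes
kubectl taint nodes <timescale-node-name> dedicated=timescale-nodes:NoSchedule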
    

Ingestion services

Obcerv supports ingesting data via two gRPC endpoints, both served on the same external hostname (as defined by ingestion.externalHostname) and port (443).

In production environments, you can disable one of the two ingestion services to slightly reduce the CPU and memory footprint of the ingestion pod. To do this, set either ingestion.internalEnabled or ingestion.otelEnabled to false; both default to true.

Note

You cannot disable both ingestion services at the same time.
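
For example, to keep only the OpenTelemetry endpoint enabled, your parameters file might include something like this (the hostname value is a placeholder):

ingestion:
  externalHostname: obcerv.example.com
  internalEnabled: false
  otelEnabled: true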

OpenTelemetry ingestion service

While raw traces are currently not stored, the OpenTelemetry ingestion service in the Obcerv Platform captures the following metrics (where <span> is the span name):

When pointing your instrumentation to the Obcerv OpenTelemetry API, make sure to:

For more information, refer to the instrumentation guide from OpenTelemetry.
