Parameters file for the Obcerv instance

You can refer to the following sample scenarios when planning your Obcerv installation.

Sample configuration for AWS with an ALB Ingress controller

This example is intended for installations with High Availability (HA) disabled. It assumes that the resource requests total approximately 19 cores and approximately 42 GiB of memory (with the collection agent DaemonSet running on 3 nodes), including Linkerd resources.

Disk requirements

Type                   Requirement
Timescale              1 TiB data disk; 30 GiB WAL disk
Kafka                  140 GiB
Loki                   30 GiB
Zookeeper              1 GiB
etcd                   1 GiB
Downsampled Metrics    Raw: 5 GiB; Bucketed: 5 GiB
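Purely as an illustration of how these sizes might be expressed in a parameters file, a disk-sizing fragment could look like the sketch below. The key names here are hypothetical; the actual keys are defined by the Obcerv chart's own values reference.

```yaml
# Hypothetical key names for illustration only -- consult the Obcerv chart's
# values reference for the real parameters. Sizes mirror the table above.
timescale:
  dataDiskSize: 1Ti
  walDiskSize: 30Gi
kafka:
  diskSize: 140Gi
loki:
  diskSize: 30Gi
zookeeper:
  diskSize: 1Gi
etcd:
  diskSize: 1Gi
downsampledMetrics:
  rawDiskSize: 5Gi
  bucketedDiskSize: 5Gi
```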

External ingestion requires the AWS Load Balancer Controller, version 2.3.0 or later. See AWS Load Balancer Controller.

The AWS Load Balancer Controller requires annotations on each of the ingresses configured below. Make sure to change the certificate ARN and the group names. The group name can be any unique value (for example, the same value that you set for externalHostname), but it must be identical for the apps and iam ingresses.
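As a sketch, the relevant AWS Load Balancer Controller annotations typically look like the fragment below. The certificate ARN and group name are placeholders, and where these annotations attach within the Obcerv parameters file depends on the chart's ingress settings.

```yaml
# Illustrative ALB annotations only. Replace the ARN placeholders with your
# certificate's ARN; the group.name must be identical for the apps and iam
# ingresses so they share one load balancer.
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>
alb.ingress.kubernetes.io/group.name: obcerv
```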

Sample configuration for AWS with NGINX Ingress controller

This example is intended for installations with High Availability (HA) disabled. It assumes that the resource requests total approximately 19 cores and approximately 42 GiB of memory (with the collection agent DaemonSet running on 3 nodes), including Linkerd resources.

Disk requirements

Type                   Requirement
Timescale              1 TiB data disk; 30 GiB WAL disk
Kafka                  140 GiB
Loki                   30 GiB
Zookeeper              1 GiB
etcd                   1 GiB
Downsampled Metrics    Raw: 5 GiB; Bucketed: 5 GiB

Sample configuration for AWS EC2 handling 100k metrics/sec (large)

This example is intended for installations with the following nodes:

In this sample scenario, the following are also assumed:

Disk requirements

Type                   Requirement
Timescale              16 TiB data disk for each replica (x3); 75 GiB WAL disk for each replica (x3)
Kafka                  400 GiB for each replica (x3)
Loki                   30 GiB for each replica (x1)
Zookeeper              1 GiB for each replica (x3)
etcd                   1 GiB for each replica (x3)
Downsampled Metrics    Raw: 5 GiB for each replica (x6); Bucketed: 5 GiB for each replica (x6)

The configuration references a StorageClass named io1-25, which uses the io1 volume type with 25 IOPS per GiB. You can create this class or change the configuration to use a class of your own, but it should offer similar performance. For a sample definition of io1-25, see Storage classes.
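A minimal sketch of such a StorageClass, assuming the EBS CSI driver is installed, might look like the following (the actual sample definition is in Storage classes):

```yaml
# Sketch of an io1-25 StorageClass matching "io1 with 25 IOPS per GiB".
# Assumes the AWS EBS CSI driver (ebs.csi.aws.com) is the provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io1-25
provisioner: ebs.csi.aws.com
parameters:
  type: io1
  iopsPerGB: "25"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```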

Sample configuration for AWS EC2 handling 50k metrics/sec (medium)

This example is intended for installations with the following nodes:

In this sample scenario, the following are also assumed:

Disk requirements

Type                   Requirement
Timescale              16 TiB data disk for each replica (x3); 50 GiB WAL disk for each replica (x3)
Kafka                  200 GiB for each replica (x3)
Loki                   30 GiB for each replica (x1)
Zookeeper              1 GiB for each replica (x3)
etcd                   1 GiB for each replica (x3)
Downsampled Metrics    Raw: 5 GiB for each replica (x6); Bucketed: 5 GiB for each replica (x6)

The configuration references a StorageClass named io1-25, which uses the io1 volume type with 25 IOPS per GiB. You can create this class or change the configuration to use a class of your own, but it should offer similar performance. For a sample definition of io1-25, see Storage classes.

Sample configuration for AWS EC2 handling 10k metrics/sec (small)

This example is intended for installations using the following nodes: 3 × c5.4xlarge (16 vCPU, 32 GiB memory).

In this sample scenario, it is assumed that the resource requests total approximately 30 cores and approximately 84 GiB of memory (with the collection agent DaemonSet running on 3 nodes), including Linkerd resources.

Disk requirements

Type                   Requirement
Timescale              8 TiB data disk for each replica (x3); 30 GiB WAL disk for each replica (x3)
Kafka                  140 GiB for each replica (x3)
Loki                   30 GiB for each replica (x1)
Zookeeper              1 GiB for each replica (x3)
etcd                   1 GiB for each replica (x3)
Downsampled Metrics    Raw: 5 GiB for each replica (x3); Bucketed: 5 GiB for each replica (x3)

The configuration references a StorageClass named io1-25, which uses the io1 volume type with 25 IOPS per GiB. You can create this class or change the configuration to use a class of your own, but it should offer similar performance. For a sample definition of io1-25, see Storage classes.
