Install
Install with defaults
helm install obcerv-app-query-service itrs/obcerv-app-query-service \
--version 2.3.0 -n <namespace> --wait
Install with overrides
The Query Service consists of three workloads, all of which are Kubernetes Deployments/ReplicaSets:
- obcerv-app-query-service-bff: the backend-for-frontend service that serves the Alerting and Overview UIs.
- obcerv-app-query-service-sink: the process that populates the database.
- obcerv-app-query-service-db: the PostgreSQL database that backs the Query Service.
- Create a chart config file named app.yaml containing content similar to:
bff:
  threadPoolSize: 20
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"
    limits:
      memory: "2Gi"
      cpu: "1"
sink:
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"
    limits:
      memory: "3Gi"
      cpu: "2"
db:
  resources:
    requests:
      memory: "4Gi"
      cpu: "1"
    limits:
      memory: "8Gi"
      cpu: "4"
- To install the chart, run:
helm install -f app.yaml obcerv-app-query-service itrs/obcerv-app-query-service \
--version 2.3.0 -n <namespace> --wait
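After the install completes, the three workloads can be checked with kubectl. A minimal sketch, assuming the deployment names listed above (exact names may vary by chart version):

```shell
# Confirm all three Query Service deployments are available
kubectl get deployments -n <namespace> | grep obcerv-app-query-service

# Wait for the BFF rollout to finish before using the Alerting and Overview UIs
kubectl rollout status deployment/obcerv-app-query-service-bff -n <namespace>
```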
Storage
The Persistent Volume Claims (PVC) used by the Query Service are:
| PVC | Mount |
|---|---|
| app-query-service-data | /data |
| app-query-service-wal | /wal |
The allocated storage can be changed by modifying the PVCs in Kubernetes, or the defaults can be overridden at install time by setting db.dataDiskSize and db.walDiskSize in the chart config file:
db:
  dataDiskSize: 20Gi
  walDiskSize: 5Gi
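To grow a volume after installation, the PVC can be patched directly, provided the underlying StorageClass has allowVolumeExpansion enabled. A hedged sketch using the PVC names from the table above (verify the exact names with kubectl get pvc first):

```shell
# Expand the data volume to 40Gi; requires a StorageClass with allowVolumeExpansion: true
kubectl patch pvc app-query-service-data -n <namespace> \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"40Gi"}}}}'
```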
Resource allocation
The app deploys a query service and a database with the following default resource allocations:
bff:
  threadPoolSize: 20
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"
    limits:
      memory: "2Gi"
      cpu: "1"
db:
  resources:
    requests:
      memory: "4Gi"
      cpu: "1"
    limits:
      memory: "8Gi"
      cpu: "4"
The following additional parameters are available:
- bff.threadPoolSize: determines how many parallel requests the internal gRPC service can handle. If request queuing occurs in any of the apps using this service, increase the number of threads to allow more parallelism.
- bff.resources.*: specifies the resource allocations for the query service container.
Batch processing
The following additional parameters for sink are available:
sink:
  batchSize: 1000
  queueSize: 100000
  attributeLookbackPeriod: P7D
- batchSize: since the Query Service inserts data into the database in batches, increasing the batch size may help improve the effective data insertion rate.
- queueSize: during its operation, the Query Service maintains three platform subscriptions: entities, signals, and entity attributes. The rate at which the Obcerv platform supplies this data can be greater than the rate at which the Query Service can insert it into its own database. Items that arrive from the Obcerv platform but cannot be immediately inserted into the Query Service database are added to a queue. Use this parameter to increase the queue size if the volume of data received from the Obcerv platform is causing the queue to fill up.
- attributeLookbackPeriod: an ISO-8601 duration that defines from what point in time historical Severity and Snooze attributes will be loaded into the database. A large volume of Severity and/or Snooze attribute updates may cause bootstrapping issues in the Query Service. Use this parameter to adjust the lookback period, which can reduce the total volume of historical data returned by the Obcerv platform.
Queue size
Since each queue has a different data source, the volume of data varies considerably between them. You can configure the queue size separately for entities, attributes, and signals.
The following additional parameters for sink are available:
sink:
  queue:
    entities: 100000
    attributes: 500000
    signals: 100000
- entities: the entity subscription queue size.
- attributes: the entity attribute subscription queue size.
- signals: the signals subscription queue size.
Uninstall
To uninstall, run:
helm uninstall obcerv-app-query-service -n <namespace>
Upgrade
To upgrade the Query Service, first uninstall it and then install the new version.
None of the data stored in the Query Service is authoritative; its state is rebuilt from the platform on reinstall.
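The upgrade sequence above can be sketched as follows, reusing the same chart config file (app.yaml) from the install steps:

```shell
# Remove the existing release; stored state is rebuilt from the platform afterwards
helm uninstall obcerv-app-query-service -n <namespace>

# Install the new version with the same overrides
helm install -f app.yaml obcerv-app-query-service itrs/obcerv-app-query-service \
  --version <new-version> -n <namespace> --wait
```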