Application - Kubernetes Opspack

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation.

What You Can Monitor

Opsview provides an all-in-one Kubernetes Opspack that can monitor a Kubernetes setup hosted locally or in the cloud. Monitor live usage metrics such as CPU, memory, disk, and network status from your cluster down to your individual pods. Additionally, this Opspack collects other useful metrics such as HTTP statistics, file descriptors, and more.

Host Templates

The following Host Templates are currently provided by this Opspack. Click the name of each Host Template to be taken to the relevant information page, including a full Service Check description and usage instructions.

Host Template | Description
Application - Kubernetes - Cluster | Monitor the status of your Kubernetes cluster
Application - Kubernetes - Namespace | Monitor the status of your Kubernetes namespace
Application - Kubernetes - Node | Monitor the status of a Kubernetes node
Application - Kubernetes - Pod | Monitor the status of a Kubernetes pod

Prerequisites

To access live usage metrics, you must install metrics-server on your cluster and follow the correct authentication setup for your host. It is assumed that kubectl is installed and configured for use with your cluster.
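Before continuing, you can verify that kubectl is configured for your cluster by querying it directly:

```shell
# confirm kubectl can reach the cluster's API server
kubectl cluster-info

# list the cluster nodes to confirm authentication works
kubectl get nodes
```

If either command fails, fix your kubeconfig before proceeding with the steps below.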

Set Up Kubernetes for Monitoring

Install Metrics Server

Local cluster

If you are using a local Kubernetes cluster, run the following commands from the location of your cluster:

git clone https://github.com/kubernetes-incubator/metrics-server.git

# deploy the latest metrics-server
cd metrics-server
kubectl create -f deploy/1.8+/
kubectl edit deploy -n kube-system metrics-server

When the edit window opens, add the following args to the metrics-server container definition (under spec.template.spec.containers):

args:
- --kubelet-insecure-tls  # only required if using self-signed certificates
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
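Once the deployment restarts, you can confirm that the Metrics Server is serving live usage data (it can take a minute or two for the first metrics to appear):

```shell
# these commands return CPU and memory usage once metrics-server is ready
kubectl top nodes
kubectl top pods --all-namespaces
```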

AWS/EKS

If you are using a Kubernetes cluster hosted on AWS / EKS, refer to the Installing Metrics Server on AWS guide.

Google Cloud Platform (GCP) or Microsoft Azure

If you are using a GCP or Azure Kubernetes cluster, the Metrics Server is installed and configured by default. Ensure you have set up the read-only service account and role bindings shown in the steps below.

Retrieve the API server address and port number

From the location of your cluster:

kubectl config view

This will give you a list of all the configuration information for your Kubernetes environment.

It will look something like:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://1.1.1.1:6443   # COPY THIS ADDRESS
  name: kubernetes

Note the server address under the cluster entry, shown above. The port may or may not be present; copy the entire URL (including the port, if present) to the API server address field of the KUBERNETES_CLUSTER_DETAILS variable.
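Alternatively, the server address for the current context can be extracted directly:

```shell
# print the API server URL (including the port, if present)
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```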

Set up an authentication mechanism

This Opspack supports client authentication through X509 Client Certs and Bearer Tokens.

Note

For more details, refer to the Kubernetes authentication strategies documentation.

Client authentication using X509 Client Certs

Client certificate authentication is enabled by supplying the CA path, client certificate and client key arguments in the KUBERNETES_CERTIFICATES variable.
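If your kubeconfig embeds the certificate data rather than referencing files on disk, one way to produce the required files is to decode them from the config. The jsonpath indices below assume the first cluster and user entries; adjust them for your setup:

```shell
# export the CA certificate, client certificate, and client key from kubeconfig
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode > ca.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 --decode > client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 --decode > client.key
```

The resulting file paths can then be supplied in the KUBERNETES_CERTIFICATES variable.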

Client authentication using Bearer Tokens

Set up a service account for authentication

To create a service account for authentication, copy and paste the following commands into your Kubernetes cluster terminal.

kubectl create sa opsview  # create the service account

# create the read only role
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: 'true'
  name: opsview-read-only
rules:
- apiGroups: ['*']
  resources: ['*']
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources: ['*']
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources: ['*']
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  - /api/*
  verbs:
  - get
  - list
  - watch
EOF

# bind the role to the service account
cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opsview-binding
subjects:
- kind: ServiceAccount
  name: opsview
  namespace: default
roleRef:
  kind: ClusterRole
  name: opsview-read-only
  apiGroup: rbac.authorization.k8s.io
EOF
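You can verify that the binding grants the expected read-only access using kubectl's impersonation support:

```shell
# should print "yes" for read verbs...
kubectl auth can-i list pods --as=system:serviceaccount:default:opsview

# ...and "no" for write verbs
kubectl auth can-i delete pods --as=system:serviceaccount:default:opsview
```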

Retrieve the bearer token for authentication

Local

If your Kubernetes environment has been set up locally, you will need to run the following commands:

# Note: on Kubernetes 1.24+ service account token secrets are no longer
# created automatically; in that case generate a token with:
#   kubectl create token opsview
SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.
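To confirm the token is valid before configuring Opsview, you can call the API server directly. This sketch assumes the CA certificate has been saved to ca.crt; with a self-signed certificate you have not exported, replace --cacert ca.crt with -k:

```shell
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# a successful request returns a JSON description of the core API versions
curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" "$APISERVER/api"
```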

AWS

If your Kubernetes environment has been set up on AWS, you will need to run the following commands:

Ensure you have the AWS CLI installed. For details on how to install the AWS CLI, refer to: Installing the AWS CLI

# update kubectl config with your AWS setup
aws eks --region YOUR_REGION update-kubeconfig --name YOUR_CLUSTER_NAME

# download the aws kubernetes config map
curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml

# edit the config map, replacing the rolearn variable with the Role ARN shown in your EKS dashboard
nano aws-auth-cm.yaml

# apply the config map
kubectl apply -f aws-auth-cm.yaml

APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)
echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.

To ensure communication between the cluster and nodes, AWS requires you to add inbound and outbound rules to the node pool's security group, allowing HTTPS connections on port 443 with a source of 0.0.0.0/0.
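As a sketch, the rules can also be added with the AWS CLI; sg-XXXXXXXX is a placeholder for the security group attached to your node pool:

```shell
# allow inbound HTTPS from any source
aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# allow outbound HTTPS to any destination
aws ec2 authorize-security-group-egress \
  --group-id sg-XXXXXXXX \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```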

Google Cloud Platform (GCP)

If your Kubernetes environment has been set up on GCP, you will need to run the following commands:

SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.

Microsoft Azure

If your Kubernetes environment has been set up on Azure, you will need to run the following commands:

Ensure you have the Azure CLI installed. For details on how to install the Azure CLI, refer to: Installing the Azure CLI

# login to azure
az login

# get kube config for azure
az aks get-credentials --resource-group YOUR_RESOURCE_GROUP --name YOUR_CLUSTER_NAME

SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.

["Opsview Cloud"] ["Opsview > Opspacks"] ["User Guide", "Technical Reference"]

Was this topic helpful?