Single Cluster Open Source Observability - OTEL Collector Monitoring

Objective

This pattern adds observability on top of an existing EKS cluster, including monitoring for ADOT collector health, using open source managed AWS services.

Prerequisites:

Ensure that you have installed the following tools on your machine:

  1. aws cli
  2. kubectl
  3. cdk
  4. npm
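
A quick way to confirm the tools are available on your PATH (exact output varies by version):

aws --version
kubectl version --client
cdk --version
npm --version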

You will also need:

  1. Either an existing EKS cluster, or you can set up a new one with Single New EKS Cluster Observability Accelerator
  2. An OpenID Connect (OIDC) provider associated with the above EKS cluster (Note: the Single EKS Cluster Pattern takes care of that for you)
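
If you are unsure whether an IAM OIDC provider is already associated with your cluster, the following sketch can help; it assumes eksctl is installed and <CLUSTER_NAME> is a placeholder for your cluster name:

# print the cluster's OIDC issuer URL
aws eks describe-cluster --name <CLUSTER_NAME> --query "cluster.identity.oidc.issuer" --output text

# the issuer should appear among the account's IAM OIDC providers once associated
aws iam list-open-id-connect-providers

# associate an IAM OIDC provider with the cluster if it is not already listed
eksctl utils associate-iam-oidc-provider --cluster <CLUSTER_NAME> --approve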

Deploying

  1. Edit ~/.cdk.json by setting the name of your existing cluster:
    "context": {
        ...
        "existing.cluster.name": "...",
        ...
    }
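If you are unsure of the exact cluster name, you can list the clusters in your target region first (a convenience check, not part of the pattern itself):

aws eks list-clusters --output table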
  2. Edit ~/.cdk.json by setting the kubectl role name; if you used Single New EKS Cluster Observability Accelerator to set up your cluster, the kubectl role name is provided in the output of the deployment on your command-line interface (CLI):
    "context": {
        ...
        "existing.kubectl.rolename":"...",
        ...
    }
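Optionally, you can confirm that the role exists before deploying; <KUBECTL_ROLE_NAME> below is a placeholder for the value you set in the context:

aws iam get-role --role-name <KUBECTL_ROLE_NAME> --query "Role.Arn" --output text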
  3. Amazon Managed Grafana workspace: To visualize the metrics collected, you need an Amazon Managed Grafana workspace. If you have an existing workspace, create the environment variables as described below. To create a new workspace, visit our supporting example for Grafana.

Note

For the URL https://g-xyz.grafana-workspace.us-east-1.amazonaws.com, the workspace ID would be g-xyz

export AWS_REGION=<YOUR AWS REGION>
export COA_AMG_WORKSPACE_ID=g-xyz
export COA_AMG_ENDPOINT_URL=https://g-xyz.grafana-workspace.us-east-1.amazonaws.com
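
If you do not have the workspace ID or endpoint URL handy, you can list your Amazon Managed Grafana workspaces (a convenience query; the field names come from the list-workspaces output):

aws grafana list-workspaces \
  --query "workspaces[].{id:id,name:name,endpoint:endpoint}" \
  --output table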

Warning

Setting up environment variables COA_AMG_ENDPOINT_URL and AWS_REGION is mandatory for successful execution of this pattern.

  4. GRAFANA API KEY: Amazon Managed Grafana provides a control plane API for generating Grafana API keys or Service Account Tokens.
# IMPORTANT NOTE: skip this command if you already have a service token
GRAFANA_SA_ID=$(aws grafana create-workspace-service-account \
  --workspace-id $COA_AMG_WORKSPACE_ID \
  --grafana-role ADMIN \
  --name cdk-accelerator-eks \
  --query 'id' \
  --output text)

# creates a new token
export AMG_API_KEY=$(aws grafana create-workspace-service-account-token \
  --workspace-id $COA_AMG_WORKSPACE_ID \
  -name "grafana-operator-key" \
  --seconds-to-live 432000 \
  --service-account-id $GRAFANA_SA_ID \
  --query 'serviceAccountToken.key' \
  --output text)

# alternative: create a workspace API key instead of a service account token
# (use only one of the two exports, not both)
export AMG_API_KEY=$(aws grafana create-workspace-api-key \
  --key-name "grafana-operator-key" \
  --key-role "ADMIN" \
  --seconds-to-live 432000 \
  --workspace-id $COA_AMG_WORKSPACE_ID \
  --query key \
  --output text)
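
Before storing the token, you can optionally verify it against the Grafana HTTP API (a quick sanity check, assuming curl is available; a JSON response rather than an authentication error means the token works):

curl -s -H "Authorization: Bearer $AMG_API_KEY" "$COA_AMG_ENDPOINT_URL/api/folders"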
  5. AWS SSM Parameter Store for GRAFANA API KEY: Update the Grafana API key secret in AWS SSM Parameter Store using the new Grafana API key created above. This will be referenced by the Grafana Operator deployment of our solution to access Amazon Managed Grafana from the Amazon EKS cluster.
aws ssm put-parameter --name "/cdk-accelerator/grafana-api-key" \
    --type "SecureString" \
    --value $AMG_API_KEY \
    --region $AWS_REGION
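
To confirm the parameter was stored correctly, you can read it back; the decrypted value should match the token you created:

aws ssm get-parameter --name "/cdk-accelerator/grafana-api-key" \
    --with-decryption \
    --region $AWS_REGION \
    --query "Parameter.Value" \
    --output text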
  6. Install project dependencies by running npm install in the main folder of this cloned repository.

  7. The actual settings for the dashboard URLs are expected to be specified in the CDK context. Generally, this is the cdk.json file in the current directory or ~/.cdk.json in your home directory.

Example settings: update the context in the cdk.json file located in the cdk-eks-blueprints-patterns directory:

   "context": {
    "fluxRepository": {
      "name": "grafana-dashboards",
      "namespace": "grafana-operator",
      "repository": {
        "repoUrl": "https://github.com/aws-observability/aws-observability-accelerator",
        "name": "grafana-dashboards",
        "targetRevision": "main",
        "path": "./artifacts/grafana-operator-manifests/eks/infrastructure"
      },
      "values": {
        "GRAFANA_CLUSTER_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/cluster.json",
        "GRAFANA_KUBELET_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/kubelet.json",
        "GRAFANA_NSWRKLDS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/namespace-workloads.json",
        "GRAFANA_NODEEXP_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodeexporter-nodes.json",
        "GRAFANA_NODES_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodes.json",
        "GRAFANA_WORKLOADS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/workloads.json",
        "GRAFANA_ADOTHEALTH_DASH_URL": "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/adot/adothealth.json"
      },
      "kustomizations": [
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/infrastructure"
        },
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/adot"
        }
      ]
    },
    "adotcollectormetrics.pattern.enabled": true
  }
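
After editing, a quick way to confirm cdk.json is still valid JSON (using node, which is already required for npm):

node -e "JSON.parse(require('fs').readFileSync('cdk.json', 'utf8'))" && echo "cdk.json is valid JSON"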
  8. Once all prerequisites are set, you are ready to deploy the pipeline. Run the following commands from the root of this repository to deploy the pipeline stack:
make build
make pattern existing-eks-opensource-observability deploy
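
Once the deployment completes, you can spot-check that the add-ons are running on the cluster; the grafana-operator namespace comes from the context above, while the Flux and ADOT namespaces below are typical defaults and may differ in your setup:

kubectl get pods -n grafana-operator
kubectl get pods -n flux-system
kubectl get pods -A | grep -i adot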

Visualization

The OpenTelemetry collector produces metrics to monitor the entire pipeline.

Log in to your Grafana workspace and navigate to the Dashboards panel. You should see a new dashboard named OpenTelemetry Health Collector under Observability Accelerator Dashboards.

This dashboard shows useful telemetry information about the ADOT collector itself, which can be helpful when you want to troubleshoot issues with the collector or understand how many resources it is consuming.
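
If you want to cross-check the dashboard against raw data, the collector also exposes its internal metrics in Prometheus format on its default telemetry port 8888; the namespace and pod name below are placeholders for your environment:

kubectl -n <ADOT_NAMESPACE> port-forward <ADOT_COLLECTOR_POD> 8888:8888 &
curl -s http://localhost:8888/metrics | grep -E "otelcol_(receiver|processor|exporter)"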

The diagram below shows an example data flow and the components in an ADOT collector:

ADOTCollectorComponents

In this dashboard, there are five sections. Each section has metrics relevant to the various components of the AWS Distro for OpenTelemetry (ADOT) collector:

Receivers

Shows the receiver’s accepted and refused rate/count of spans and metric points that are pushed into the telemetry pipeline.

Processors

Shows the accepted and refused rate/count of spans and metric points pushed into the next component in the pipeline. The batch metrics can help you understand how often metrics are sent to the exporter and the batch size.

receivers_processors

Exporters

Shows the exporter’s accepted and refused rate/count of spans and metric points that are pushed to any of the destinations. It also shows the size and capacity of the retry queue. These metrics can be used to understand whether the collector is having issues sending trace or metric data to the configured destination.

exporters

Collectors

Shows the collector’s operational metrics (memory, CPU, uptime). This can be used to understand how many resources the collector is consuming.

collectors

Data Flow

Shows the metrics and spans data flow through the collector’s components.

dataflow

Note: To read more about the metrics and the dashboard used, visit the upstream documentation here.

Disable ADOT health monitoring

Update the context in the cdk.json file located in the cdk-eks-blueprints-patterns directory:

   "context": {
    "adotcollectormetrics.pattern.enabled": false
  }
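
After changing the flag, re-run the deployment from the root of the repository for the change to take effect:

make pattern existing-eks-opensource-observability deploy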

Teardown

You can teardown the whole CDK stack with the following command:

make pattern existing-eks-opensource-observability destroy

If you set up your cluster with Single New EKS Cluster Observability Accelerator, you also need to run:

make pattern single-new-eks-cluster destroy