Monitor Istio running on Amazon EKS¶
This example demonstrates how to use the AWS Observability Accelerator Terraform modules together with EKS Blueprints (with the Tetrate Istio add-on) and EKS monitoring for Istio.
This example deploys the AWS Distro for OpenTelemetry Operator for Amazon EKS with its requirements and makes use of an existing Amazon Managed Grafana workspace. It creates a new Amazon Managed Service for Prometheus workspace unless you provide an existing one to reuse.
It uses the EKS monitoring module to provide an existing EKS cluster with an OpenTelemetry collector, curated Grafana dashboards, and Prometheus alerting and recording rules, with multiple configuration options for Istio.
Prerequisites¶
Ensure that you have the following tools installed locally:
- AWS CLI
- kubectl
- Terraform
- istioctl (used in the sample application section below)
Setup¶
This example uses a local Terraform state. If you need your state to be saved remotely, on Amazon S3 for example, visit the Terraform remote states documentation.
1. Clone the repo using the command below¶
git clone https://github.com/aws-observability/terraform-aws-observability-accelerator.git
2. Initialize terraform¶
cd examples/eks-istio
terraform init
3. Amazon EKS Cluster¶
To run this example, you need to provide your EKS cluster name. If you don't have a cluster ready, visit this example first to create a new one.
Add your cluster name with eks_cluster_id="..." to terraform.tfvars, or use an environment variable: export TF_VAR_eks_cluster_id=xxx.
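For reference, a minimal terraform.tfvars can be generated from the shell like this; the cluster name below is a placeholder, so substitute your own:
# Write a minimal variable definition file (the cluster name is hypothetical)
cat > terraform.tfvars <<'EOF'
eks_cluster_id = "my-eks-cluster"
EOF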
4. Amazon Managed Grafana workspace¶
To run this example you need an Amazon Managed Grafana workspace. If you have an existing workspace, create an environment variable:
export TF_VAR_managed_grafana_workspace_id=g-xxx
To create a new one, visit this example.
In the URL https://g-xyz.grafana-workspace.eu-central-1.amazonaws.com, the workspace ID would be g-xyz.
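If you are not sure of the workspace ID, you can list the workspaces in your account with the AWS CLI; the --query projection below is just one convenient option:
# List Amazon Managed Grafana workspaces with their IDs, names and endpoints
aws grafana list-workspaces \
  --query 'workspaces[*].{id:id,name:name,endpoint:endpoint}' \
  --output table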
5. Grafana API Key¶
Amazon Managed Grafana provides a control plane API for generating Grafana API keys. We will provide Terraform with a short-lived API key to run the apply or destroy command.
Ensure you have the necessary IAM permissions (CreateWorkspaceApiKey, DeleteWorkspaceApiKey).
export TF_VAR_grafana_api_key=`aws grafana create-workspace-api-key --key-name "observability-accelerator-$(date +%s)" --key-role ADMIN --seconds-to-live 1200 --workspace-id $TF_VAR_managed_grafana_workspace_id --query key --output text`
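The key created above is valid for 1200 seconds (20 minutes). If a later apply or destroy fails with an authentication error, re-run the command to generate a fresh key.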
Deploy¶
Simply run this command to deploy (if using a variable definition file)
terraform apply -var-file=terraform.tfvars
or, if you have set up environment variables, run
terraform apply
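If you would rather review the planned changes first, terraform plan accepts the same flags:
terraform plan -var-file=terraform.tfvars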
Additional configuration¶
For the purpose of the example, we have provided default values for some of the variables.
- AWS Region
Specify the AWS Region where the resources will be deployed. Edit the terraform.tfvars file and modify aws_region="...". You can also use an environment variable: export TF_VAR_aws_region=xxx.
- Amazon Managed Service for Prometheus workspace
If you have an existing workspace, add managed_prometheus_workspace_id=ws-xxx to terraform.tfvars or use an environment variable: export TF_VAR_managed_prometheus_workspace_id=ws-xxx.
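To look up the ID of an existing workspace, you can list your workspaces with the AWS CLI; the projection below is one possible choice:
# List Amazon Managed Service for Prometheus workspaces with IDs and aliases
aws amp list-workspaces \
  --query 'workspaces[*].{id:workspaceId,alias:alias}' \
  --output table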
Visualization¶
1. Grafana dashboards¶
Go to the Dashboards panel of your Grafana workspace. You will see a list of Istio dashboards under the Observability Accelerator Dashboards folder.
Open one of the Istio dashboards to view its visualizations.
2. Amazon Managed Service for Prometheus rules and alerts¶
Open the Amazon Managed Service for Prometheus console and view the details of your workspace. Under the Rules management tab, you will find the newly deployed rules.
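You can also confirm this from the CLI. As a quick check, assuming your workspace ID is available in TF_VAR_managed_prometheus_workspace_id:
# List the rule groups namespaces deployed to the workspace
aws amp list-rule-groups-namespaces \
  --workspace-id "$TF_VAR_managed_prometheus_workspace_id" \
  --query 'ruleGroupsNamespaces[*].name' \
  --output text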
Note
To set up your alert receiver with Amazon SNS, follow this documentation.
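As a minimal sketch, the receiving side can be as simple as an SNS topic (the topic name below is hypothetical); granting the workspace permission to publish to it is covered in the linked documentation:
# Create an SNS topic to use as an alert receiver (topic name is a placeholder)
aws sns create-topic --name observability-accelerator-alerts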
Deploy an example application to visualize metrics¶
In this section we will deploy Istio's Bookinfo sample application and extract metrics using the AWS OpenTelemetry collector. When you download and configure istioctl, samples are included in the Istio package directory. The deployment files for Bookinfo are found in the samples folder. Additional details can be found in Istio's Getting Started documentation.
1. Deploy the Bookinfo Application¶
- Using the AWS CLI, configure kubectl so you can connect to your EKS cluster. Update the placeholders with your region and EKS cluster name
aws eks update-kubeconfig --region <enter-your-region> --name <cluster-name>
- Label the default namespace for automatic Istio sidecar injection
kubectl label namespace default istio-injection=enabled
- Navigate to the Istio folder location. For example, if using Istio v1.18.2 in Downloads folder:
cd ~/Downloads/istio-1.18.2
- Deploy the Bookinfo sample application
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
- Connect the Bookinfo application with the Istio gateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
- Validate that there are no issues with the Istio configuration
istioctl analyze
- Get the DNS name of the load balancer for the Istio gateway
GATEWAY_URL=$(kubectl get svc istio-ingressgateway -n istio-system -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}')
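Before generating traffic, you can optionally verify that the sidecar injection worked and that the gateway has an address; the exact output shape may vary by version:
# Each Bookinfo pod should report 2/2 ready containers (application + sidecar)
kubectl get pods -n default
# The captured gateway URL should be a non-empty load balancer hostname
echo "$GATEWAY_URL"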
2. Generate traffic for the Istio Bookinfo sample application¶
For the Bookinfo sample application, visit http://$GATEWAY_URL/productpage in your web browser. To see trace data, you must send requests to your service. The number of requests needed depends on Istio's sampling rate, which can be configured using the Telemetry API. With the default sampling rate of 1%, you need to send at least 100 requests before the first trace is visible. To send 100 requests to the productpage service, use the following command:
for i in $(seq 1 100); do curl -s -o /dev/null "http://$GATEWAY_URL/productpage"; done
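If you prefer a steady stream of traffic while exploring the dashboards, a simple loop like this also works (stop it with Ctrl+C):
# Send one request per second to the product page until interrupted
while true; do
  curl -s -o /dev/null "http://$GATEWAY_URL/productpage"
  sleep 1
done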
3. Explore the Istio dashboards¶
Log back into your Amazon Managed Grafana workspace and navigate to the dashboard side panel. Click on the Observability Accelerator Dashboards
folder and open the Istio Service
Dashboard. Use the Service dropdown menu to select the reviews.default.svc.cluster.local
service. This gives details about metrics for the service, client workloads (workloads that are calling this service), and service workloads (workloads that are providing this service).
Explore the Istio Control Plane, Mesh, and Performance dashboards as well.
Destroy¶
To tear down and remove the resources created in this example:
kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
terraform destroy
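If you also want to undo the sidecar-injection label added earlier, the trailing dash on the label key removes it:
# Remove the istio-injection label from the default namespace
kubectl label namespace default istio-injection-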