This guide will walk you through the steps of optimizing an application running on a Kubernetes cluster using Akamas.

You will optimize Online Boutique, a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a sample, yet fully-fledged, web-based e-commerce application.

Architecture

What you will learn

What you will need

You can find all the configuration files used in this guide on Akamas' public GitHub repository.

Please clone the repository into the home directory of the Akamas instance by running the following command:

git clone https://github.com/akamaslabs/kubernetes-online-boutique.git

There are two possible architectural configurations, depending on whether you are using a dedicated cluster or running the application on Minikube inside your Akamas-in-a-box machine:

Scenario 1: Dedicated Cluster

In this scenario, you are running the application on a dedicated cluster; it will require at least one node with 4 CPUs and 8 GB of RAM. Akamas will run on a dedicated VM.

Architecture Overview

Scenario 2: Minikube Cluster

In this scenario, you are running the application on a Minikube cluster installed on your Akamas-in-a-box machine; this machine will need at least 8 CPUs and 16 GB of RAM (e.g., c5.2xlarge on AWS EC2). In the following section, Build Minikube Cluster, you will learn how to install it with a single command.

Architecture Overview

In both scenarios, you will install the following applications in the cluster: the target application (Online Boutique), the load generator (Locust), and the telemetry provider (Prometheus).

This section describes how to build a local Kubernetes cluster for the following Akamas optimization study. The local cluster will be installed in the Akamas-in-a-box machine, so this machine needs at least 8 CPUs and 16 GB of RAM (e.g., c5.2xlarge on AWS EC2).

Before proceeding with the installation of the Minikube cluster, please ensure that your Akamas-in-a-box host has all the following prerequisites:

Then make sure Akamas is running. You can verify that Akamas services are up and running by executing the following command:

akamas status

At this point, you can create the Minikube cluster with a single command by leveraging the script create-minikube-cluster.sh in the scripts folder of the cloned repo (as described in section Download artifacts) and passing the public IP of your machine as the CLUSTER_IP:

bash kubernetes-online-boutique/scripts/create-minikube-cluster.sh <CLUSTER_IP>
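
If you are unsure of the machine's public IP, on AWS EC2 you can retrieve it from the instance metadata service. This helper is a convenience, not part of the script, and assumes IMDSv1 is enabled (with IMDSv2-only instances, a session token is required first):

```shell
# Retrieve this EC2 instance's public IPv4 from the instance metadata service
CLUSTER_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
echo "$CLUSTER_IP"
```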

The command may take a few minutes and will output the message "Cluster created" once the installation process completes correctly.

Now the cluster should be up and running. You can verify this by running any kubectl command, such as:

kubectl get namespaces

NAME              STATUS   AGE
default           Active   92m
kube-node-lease   Active   92m
kube-public       Active   92m
kube-system       Active   92m

To get the target application (Online Boutique), the load generator (Locust), and the telemetry provider (Prometheus) installed, you need to use the three Kubernetes manifests available in the kube folder of your cloned repo. The corresponding kubectl commands must be issued from any terminal pointing to your cluster.

Notice that all three manifests refer to a label akamas/node=akamas to ensure that the corresponding pods are scheduled on the same node. For the sake of simplicity, run the following command to assign this label to the node you want to use for these pods (this is not needed for the Minikube cluster, which is already correctly configured):

kubectl label node <NODE> akamas/node=akamas
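
If you do not know the node name to use, you can first list the cluster nodes and pick one with enough capacity; the node name in the second command below is purely illustrative:

```shell
# List cluster nodes (with labels) to choose the target node
kubectl get nodes --show-labels

# Label the chosen node (the node name here is an example placeholder)
kubectl label node ip-10-0-1-23.ec2.internal akamas/node=akamas
```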

Install the target application

To install the Online Boutique application, you need to apply the boutique.yaml manifest to your cluster with the following command:

kubectl apply -f kubernetes-online-boutique/kube/boutique.yaml

This command will create the namespace akamas-demo and all the Deployments and Services of the Online Boutique inside that namespace. You can verify that all the pods are up and running with the command:

watch -d kubectl get pods -n akamas-demo

Wait until the output is similar to the following, then proceed:

NAME                                        READY   STATUS    RESTARTS   AGE
ak-adservice-76cd99dffc-x8srv               1/1     Running   0          3m26s
ak-cartservice-5fbb6b6444-lw2lp             1/1     Running   0          3m18s
ak-checkoutservice-5bfc7765f9-lw4nd         1/1     Running   0          3m31s
ak-currencyservice-86b4779f5f-cwc8r         1/1     Running   0          3m25s
ak-emailservice-5dd45d469c-nkpj5            1/1     Running   0          3m32s
ak-frontend-56ddf7478b-q2b5z                1/1     Running   0          3m23s
ak-paymentservice-5756458bb8-4zmrh          1/1     Running   0          3m28s
ak-productcatalogservice-7bb94dff65-4n9sh   1/1     Running   0          3m20s
ak-recommendationservice-7f89d7fdc8-4dnmk   1/1     Running   0          3m21s
ak-redis-cart-6cc66bb4c9-rgggv              1/1     Running   0          3m16s
ak-shippingservice-fc6bbc6d5-6l2nl          1/1     Running   0          3m29s
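
Instead of watching the pod list, you can also block until every pod in the namespace reports Ready, for example with kubectl wait (the 5-minute timeout below is an arbitrary choice, not a value from this guide):

```shell
# Block until all pods in the akamas-demo namespace are Ready, or fail after 5 minutes
kubectl wait --for=condition=Ready pods --all -n akamas-demo --timeout=300s
```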

Install the load generator

Then, to install Locust, you need to apply the loadgenerator.yaml manifest to your cluster:

kubectl apply -f kubernetes-online-boutique/kube/loadgenerator.yaml

You can verify that all the pods are up and running with the following command:

kubectl get pods -n akamas-demo -l app=ak-loadgenerator

The output should be similar to the following one:

ak-loadgenerator-5669ff6c96-nstcs           2/2     Running   0          10s

Install the telemetry provider

Finally, to install Prometheus, you need to apply the prometheus.yaml manifest:

kubectl apply -f kubernetes-online-boutique/kube/prometheus.yaml

You can verify that all the pods are up and running with the command:

kubectl get pods -n akamas-demo -l app=ak-prometheus

The output should be similar to the following one:

ak-prometheus-76b4b749b5-4g6wf                 1/1     Running   0          52s

As described in the section Architecture Overview, Akamas needs to communicate with the cluster to apply new configurations of the Online Boutique using the kubectl tool. Therefore, if you are optimizing your own Kubernetes cluster, you need to make sure that Akamas can interact with it.

First, you need to go to your Akamas-in-a-box machine and copy your kubeconfig file (i.e., ~/.kube/config) into the /akamas-config folder. Then, you have to ensure that the container named benchmark has the credentials to access the cluster. The credentials to pass may differ depending on your cluster provider. The examples below cover an EKS cluster as well as other providers.

EKS Cluster

Please copy and paste the following command, substituting your AWS credentials for the placeholders. It will create the file /akamas-config/envs containing the variables required to communicate with the cluster:

cat << EOF > /akamas-config/envs
AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
AWS_DEFAULT_REGION=<YOUR_AWS_DEFAULT_REGION>
EOF
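
To sanity-check these credentials before proceeding, you can verify that they resolve to a valid AWS identity. This assumes the AWS CLI is available inside the benchmark container; if it is not, skip this check:

```shell
# Confirm the credentials in /akamas-config/envs are valid by asking AWS for the caller identity
docker exec --env-file /akamas-config/envs benchmark \
  aws sts get-caller-identity
```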

Other Providers

Write an envs file in the /akamas-config directory, as above, containing all the environment variables needed to connect to your cluster, as in the example below.

cat << EOF > /akamas-config/envs
<ENV_KEY>=<ENV_VALUE>
...
EOF

The setup is now complete. You can now proceed to the next section to verify it.

To make sure that Akamas can interact with the target cluster, check that:

To check that the container can connect to your cluster, run the following command and verify that you can see your Kubernetes namespaces:

docker exec -it --env-file /akamas-config/envs benchmark bash -c "kubectl get namespaces --kubeconfig /kubeconfig/config"

Next, you need to check that you can reach Prometheus and the load generator from the benchmark container.

To verify that you can communicate with Prometheus, run the following command, substituting the CLUSTER_IP placeholder with your cluster's public IP:

docker exec -it benchmark bash -c "curl http://<CLUSTER_IP>:30900"

You should see the output:

<a href="/graph">Found</a>.
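
Beyond this redirect, you can also issue a simple PromQL query through Prometheus' HTTP API to confirm it is scraping targets; the built-in `up` metric reports 1 for every target being scraped successfully:

```shell
# Query the Prometheus HTTP API for the "up" metric of all scraped targets
docker exec -it benchmark bash -c \
  "curl -s 'http://<CLUSTER_IP>:30900/api/v1/query?query=up'"
```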

Lastly, to verify that you can connect to the load generator, run the following command, substituting the CLUSTER_IP placeholder with your cluster's public IP:

docker exec -it benchmark bash -c "curl http://<CLUSTER_IP>:30899/stats/reset"

You should see the output:

ok
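
You can also fetch the current load statistics from Locust's web API to confirm the load generator is fully responsive; the /stats/requests endpoint returns a JSON summary of the requests issued so far:

```shell
# Fetch Locust's current request statistics as JSON
docker exec -it benchmark bash -c \
  "curl -s http://<CLUSTER_IP>:30899/stats/requests"
```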

At this point, Akamas is correctly configured to interact with the target cluster, and you can start modeling and then optimizing your cluster in Akamas.

To model the Online Boutique inside Akamas, we need to create a corresponding System with its components in Akamas and also associate a Prometheus telemetry instance with the system to allow Akamas to collect the performance metrics.

The entire Akamas system, together with the Prometheus telemetry instance, can be installed with a single command by leveraging the create-system.sh script provided in the scripts folder.

This script requires the public IP address of the cluster (CLUSTER_IP) as an argument.

First, log in to Akamas with the following command:

akamas login --user <login> --password <password>

and then run the above-mentioned script in a shell where you have the Akamas CLI installed:

bash kubernetes-online-boutique/scripts/create-system.sh <CLUSTER_IP>

The script should return a message similar to: System created correctly.

At this point, you can access the Akamas UI and verify that the System and its components are listed in the Systems menu:

Notice that this System leverages the following Optimization Packs:

The next step is to create a workflow describing the steps executed in each experiment of your optimization study.

A workflow in an optimization study for Kubernetes is typically composed of the following tasks:

You can create the Akamas workflow with a single command by leveraging the script create-workflow.sh provided in the scripts folder. This script also requires the public IP of the cluster (CLUSTER_IP) as a parameter.

Your cloned repo contains the workflow.yaml file used by this script. Create the workflow by issuing the following command:

bash kubernetes-online-boutique/scripts/create-workflow.sh <CLUSTER_IP>

You can verify that this workflow has been created by accessing the corresponding Workflow menu in the Akamas UI:

Notice that all load testing scenarios launched by the performance test steps are in the kubernetes-online-boutique/optimization folder.

For this study, we define a goal of increasing the application throughput, while enforcing the constraints of keeping the latency below 500 ms and the error rate below 2%.

You can create the optimization study using the study.yaml file in the akamas/studies folder by issuing the Akamas command:

akamas create study kubernetes-online-boutique/akamas/studies/study.yaml

and then run it by issuing the Akamas command:

akamas start study 'Maximizing Kubernetes Online Boutique throughput while matching SLOs'

You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.

As the optimization study executes the different experiments, this chart will display more points and their associated score.

Let's now take a look at the results and benefits Akamas achieved in this optimization study. Keep in mind that you might achieve different results, as the actual best configuration may depend on your specific setup (i.e., operating system, cloud or virtualization platform, and hardware).

First of all, the best configuration was quickly identified, providing an application throughput speed-up of 39%.

Let's look at the best configuration from the Summary tab: this configuration specifies the right amount of CPU and memory for each microservice.

It's interesting to notice that Akamas adjusted the CPU and memory limits of every single microservice:

Besides increasing throughput, the configuration identified by Akamas also made the application run 63% faster (in terms of transaction response time) than the baseline, as we can see from the radar chart in the Configuration Analysis tab:

Also, notice that a couple of the identified configurations improved the application response time even more (up to 69%), while not representing the best overall configuration.

This optimization study shows how it is possible to tune a Kubernetes application made up of several microservices, which represents a complex challenge that typically requires days or weeks of effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study only took about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.

You have finished your first Akamas optimization of a Kubernetes application.

You can continue exploring Akamas' powerful goal-driven optimization capabilities by leveraging other quick guides or by trying to apply Akamas to your Kubernetes environment.

© Akamas Spa 2018-present. Akamas and the Akamas logo are registered trademarks of Akamas Spa.