In this guide, you'll learn how to optimize Konakart, a real-world Java-based e-commerce application, by leveraging the JMeter performance testing tool and the Prometheus monitoring tool.

What you'll learn

What you'll need

The following picture represents the high-level architecture of how Akamas operates in this scenario: JMeter generates the load against the Konakart application, the Prometheus server scrapes the JMeter and JMX exporters, and Akamas drives the experiments while collecting metrics through its Prometheus telemetry provider.

[Screenshot: high-level architecture of the Akamas, JMeter, Prometheus, and Konakart setup]

Akamas provides an out-of-the-box optimization pack called Web Application that comes in very handy for modeling typical web applications, as it includes metrics such as transactions_throughput and transactions_response_time, which you will use in this guide to define the optimization goal and to analyze the optimization results. These metrics will be gathered from JMeter thanks to the Akamas out-of-the-box Prometheus telemetry provider.

Let's create the system and its components.

The file system.yaml contains the following definition for our system:

name: konakart
description: The konakart eCommerce shop

Run the command to create it:

akamas create system system.yaml

Now, install the Web Application optimization pack from the UI:

[Screenshot: installing the Web Application optimization pack from the UI]

You can now create the component modeling the Konakart web application.

The file comp_konakart.yaml defines the component as follows:

name: konakart
description: The konakart web application
componentType: Web Application
properties:
  prometheus:
    instance: jmeter
    job: jmeter

As you can see, this component contains some custom properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels in the Prometheus queries to collect metrics for the correct entities. You will configure the Prometheus integration in the next sections.
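
For instance, the transactions_throughput query you will define later in the telemetry instance uses an $INSTANCE$ placeholder, which the provider resolves with this component's instance property:

# query template used by the telemetry instance (defined later in this guide)
sum(rate(Ratio_success{instance=~"$INSTANCE$"}[30s]))
# after substitution for the konakart component
sum(rate(Ratio_success{instance=~"jmeter"}[30s]))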

You can now run the command to create the component:

akamas create component comp_konakart.yaml konakart

You can now explore the result of your system modeling in the UI. As you can see, your konakart component is now populated with all the typical metrics of a web application:

[Screenshot: the konakart component populated with web application metrics in the Akamas UI]

Next, you will need to create a workflow that specifies how Akamas applies the parameters to be optimized, automates the launch of the JMeter performance tests, and collects metrics from the Prometheus telemetry. For now, you will create a simple automation workflow that executes a quick two-minute performance test to make sure everything is working properly.

The file workflow-baseline.yaml contains the definition of the steps to perform during the test:

name: konakart-baseline
tasks:
- name: Performance test
  operator: Executor
  arguments:
    command: "docker run --net=akamas_lab --rm --name jmeter -i -v $(pwd)/konakart-docker/jmeter:/tmp -w /tmp -p 9270:9270 chiabre/jmeter_plugins -t ramp_test_plan.jmx -JTARGET_HOST=target_host -JTHREADS=10 -JRAMP_SEC=120 -JRANDOM_DELAY_MAX_MS=0"
    host:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key

Please make sure to modify the workflow-baseline.yaml file, replacing the following placeholders with the correct references to your environment: target_host (the hostname or IP address of your target instance) and the username and key entries (the SSH credentials Akamas uses to connect to it).
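
For example, assuming your target instance is reachable at konakart.example.com (a hypothetical hostname used here only for illustration), you could replace all the occurrences in one shot:

# replace the target_host placeholder everywhere in the workflow file
sed -i 's/target_host/konakart.example.com/g' workflow-baseline.yaml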

Then create the workflow:

akamas create workflow workflow-baseline.yaml

To execute this workflow we'll use a simple Akamas study that includes a single step of type baseline. This type of step simply executes one experiment without leveraging Akamas AI - you will add the AI-driven optimization step later.

The study-baseline.yaml file defines the study as follows:

name: Baseline konakart
description: A first study to validate the automated testing process
system: konakart

goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput

workflow: konakart-baseline

steps:
- name: baseline
  type: baseline

Create the study:

akamas create study study-baseline.yaml

Now, you can run the study by clicking Start from the UI, or by executing the following command:

akamas start study 'Baseline konakart'

You should now see the baseline experiment running in the Progress tab of the Akamas UI.

Notice that you can also monitor JMeter performance tests live by accessing Grafana on port 3000 of your Konakart instance, then selecting the JMeter Exporter dashboard:

[Screenshot: the JMeter Exporter dashboard in Grafana]

Congratulations, you have successfully integrated JMeter and Akamas! You can relaunch the baseline study at any time by pressing the Start button again. If you want, you can also adjust the JMeter scenario settings in the workflow; see the Konakart setup guide for more details on the JMeter plans and variables you can set.

It is time now to configure Akamas telemetry to collect the relevant JMeter performance metrics. You will use the out-of-the-box Prometheus provider for that purpose.

The Prometheus telemetry provider collects metrics for a variety of technologies, including JVM and Linux OS metrics. Moreover, you can easily extend it to import additional metrics via custom PromQL queries. In this example, you are collecting the JMeter performance test metrics exposed by the JMeter Prometheus exporter already configured in the Konakart performance environment.

The file tel_prometheus.yaml defines the telemetry instance as follows - make sure to replace the target_host placeholder with the address of your Konakart instance:

provider: Prometheus

config:
  address: target_host
  port: 9090

metrics:
  - metric: transactions_throughput
    datasourceMetric: 'sum(rate(Ratio_success{instance=~"$INSTANCE$"}[30s]))'
  - metric: transactions_response_time
    datasourceMetric: 'avg((rate(ResponseTime_sum{instance=~"$INSTANCE$",code="200"}[30s])/(rate(ResponseTime_count{instance=~"$INSTANCE$",code="200"}[30s])>0)))'
  - metric: transactions_response_time_p90
    datasourceMetric: 'avg(ResponseTime{instance="$INSTANCE$", code="200", quantile="0.9"})'
  - metric: transactions_error_rate
    datasourceMetric: 'sum(rate(Ratio_failure{instance="$INSTANCE$"}[30s]) ) / sum(rate(Ratio_total{instance=~"$INSTANCE$"}[30s]))'
  - metric: users
    datasourceMetric: 'sum(jmeter_threads{instance=~"$INSTANCE$",state="active"})'

Now create a telemetry instance associated with the konakart system:

akamas create telemetry-instance tel_prometheus.yaml konakart
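
Before re-running the study, you can optionally verify that the JMeter metrics actually reach Prometheus. A minimal check, assuming the default ports shown above (9270 for the JMeter exporter, 9090 for Prometheus) and the standard /metrics endpoint:

# check the JMeter Prometheus exporter (only responds while a test is running)
curl -s http://target_host:9270/metrics | grep Ratio_success

# ask Prometheus to evaluate one of the queries used by the telemetry instance
curl -s -G 'http://target_host:9090/api/v1/query' --data-urlencode 'query=sum(rate(Ratio_success[30s]))'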

Now you can test the Prometheus integration by running the baseline study you created before once again (simply press the Start button again on the Study page). At the end of the experiment, you should see JMeter performance metrics such as transactions_throughput and transactions_response_time displayed as time series in the Metrics tab, and as aggregated metrics in the Analysis tab:

[Screenshot: JMeter metrics in the Metrics and Analysis tabs]

At this point, you can launch your JMeter performance tests from Akamas and see the relevant performance metrics imported from Prometheus.

Before starting the optimization, you need to also add the JVM component to your system.

First of all, install the Java optimization pack:

[Screenshot: installing the Java optimization pack from the UI]

The file comp_jvm.yaml defines the JVM as follows:

name: jvm
description: The JVM running the e-commerce platform
componentType: java-openjdk-11
properties:
  prometheus:
    instance: sut_konakart
    job: jmx

Notice that the jvm component has some additional properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels used in Prometheus queries to collect JVM metrics (e.g. JVM garbage collection time or heap utilization). The Prometheus telemetry provider collects these metrics out-of-the-box - no query needs to be specified.
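
Since these JVM metrics come from a JMX exporter scraped by Prometheus, you can optionally confirm that the jmx job is actually being scraped. A minimal check, assuming Prometheus is still listening on port 9090 as configured earlier:

# list the active scrape targets and look for the jmx job used by the jvm component
curl -s http://target_host:9090/api/v1/targets | grep '"job":"jmx"'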

You can create the JVM component as follows:

akamas create component comp_jvm.yaml konakart

You can now see all the JVM parameters and metrics from the UI:

[Screenshot: the JVM parameters and metrics in the Akamas UI]

At this point, your system is composed of the web application and JVM components you need to perform the optimization study.

You can now create a new workflow that you will use in your optimization study.

A workflow in an optimization study is typically composed of the following tasks: applying the configuration parameters recommended by Akamas (here, the JVM options), restarting the application to make the new configuration effective, and running the performance test.

The file workflow-optimize.yaml contains the pre-configured workflow; you only need to include the correct references to your environment:

name: konakart-optimize

tasks:
- name: Configure JVM options
  operator: FileConfigurator
  arguments:
    source:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key
      path: /home/ubuntu/konakart-docker/konakart/docker-compose.yml.templ
    target:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key
      path: /home/ubuntu/konakart-docker/konakart/docker-compose.yml

- name: Restart konakart
  operator: Executor
  arguments:
    command: "docker stack deploy --compose-file /home/ubuntu/konakart-docker/konakart/docker-compose.yml sut"
    host:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key

- name: Performance test
  operator: Executor
  arguments:
    command: "docker run --net=akamas_lab --rm --name jmeter -i -v /home/ubuntu/konakart-docker/jmeter:/tmp -w /tmp -p 9270:9270 chiabre/jmeter_plugins -t ramp_test_plan.jmx -JTARGET_HOST=target_host -JTHREADS=40 -JRAMP_SEC=300 -JRANDOM_DELAY_MAX_MS=0"
    host:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key

Please make sure to modify the workflow-optimize.yaml file, replacing the target_host placeholder and the username, key, and path entries with the correct references to your environment, as you did for the baseline workflow.

Once you have edited this file, you can then run the following command to create the workflow:

akamas create workflow workflow-optimize.yaml

In the optimization workflow you have just created, the FileConfigurator operator is used to automatically apply the configuration of the JVM parameters at each experiment. In order for this to work, you need to allow Akamas to set the parameter values being tested in each experiment. This is made possible by the Akamas templating approach: a template file contains parameter placeholders (such as ${jvm.*}), which the FileConfigurator operator reads from the source file and expands into the target file with the actual parameter values.

Therefore, you will now prepare the Konakart configuration file (a Docker Compose file).

First of all, you want to inspect the Konakart configuration file by executing the following command:

cat konakart-docker/konakart/docker-compose.yml

which should return the following output, where you can see that the JAVA_OPTS variable specifies a maximum heap size of 256 MB:

version: "3.8"
services:
  konakart:
    image: chiabre/konakart_jmx_exporter:latest
    environment:
      JAVA_OPTS: "-Xmx256M"
    deploy:
      resources:
...

To allow Akamas to apply new values for this hardcoded heap size (and for any other parameter being optimized) at each experiment, you need to prepare a new Konakart Docker Compose file, docker-compose.yml.templ, containing the Akamas parameter template.

First, copy the Docker Compose file to create the template, and rename the original so as to keep it as a backup:

cd konakart-docker/konakart
cp docker-compose.yml docker-compose.yml.templ
mv docker-compose.yml docker-compose.yml.orig

Now, edit the docker-compose.yml.templ file and replace the hardcoded value of the JAVA_OPTS variable with the Akamas parameter template:

version: "3.8"
services:
  konakart:
    image: chiabre/konakart_jmx_exporter:latest
    environment:
      JAVA_OPTS: "${jvm.*}"
...

Therefore, the FileConfigurator operator in your workflow will expand all the JVM parameters, replacing the template with the actual values provided by the Akamas AI-driven optimization engine.
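
As a purely illustrative sketch, the rendered docker-compose.yml for one experiment could then look something like the following (the actual options and their syntax depend on the parameters and values Akamas selects):

version: "3.8"
services:
  konakart:
    image: chiabre/konakart_jmx_exporter:latest
    environment:
      # example values only, chosen here for illustration
      JAVA_OPTS: "-Xmx512M -XX:NewSize=256M -XX:+UseParallelGC"
...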

At this point you are ready to create your optimization study!

In this guide, your goal is to optimize Konakart performance so that transaction throughput is maximized while the transaction response time stays within a 100 ms service-level objective (SLO).

This business-level goal translates into the following configuration for your Akamas study: a goal that maximizes konakart.transactions_throughput, an absolute constraint that keeps konakart.transactions_response_time at or below 100 ms, and a selection of JVM parameters (with their domains and constraints) for Akamas to explore.

The study-max-throughput-with-SLO.yaml provides the pre-configured study:

name: Optimize konakart throughput with response time SLO
description: Tune the JVM to increase transaction throughput while keeping good performance
system: konakart

goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
  constraints:
    absolute:
    - konakart.transactions_response_time <= 100        # 100ms service-level objective (SLO)

windowing:
  type: stability
  stability:
    metric: konakart.transactions_response_time
    width: 2
    maxStdDev: 1000000
    when:
      metric: konakart.transactions_throughput
      is: max

workflow: konakart-optimize

parametersSelection:
- name: jvm.jvm_gcType
- name: jvm.jvm_maxHeapSize
  domain: [32,1024]
- name: jvm.jvm_newSize
  domain: [16,1024]
- name: jvm.jvm_survivorRatio
- name: jvm.jvm_maxTenuringThreshold
- name: jvm.jvm_parallelGCThreads
  domain: [1,4]
- name: jvm.jvm_concurrentGCThreads

parameterConstraints:
- name: "JVM max heap must always be greater than new size"
  formula: jvm.jvm_maxHeapSize > jvm.jvm_newSize
- name: "JVM GC concurrent threads must always be less than or equal to parallel"
  formula: jvm.jvm_parallelGCThreads >= jvm.jvm_concurrentGCThreads

steps:
- name: baseline
  type: baseline
  values:
    jvm.jvm_maxHeapSize: 256

- name: optimize
  type: optimize
  numberOfExperiments: 50

Run the following command to create your study:

akamas create study study-max-throughput-with-SLO.yaml
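
Then start it, either by clicking Start from the UI or by executing:

akamas start study 'Optimize konakart throughput with response time SLO'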

Let's now take a look at the results and benefits Akamas achieved in this real-life optimization.

Application throughput increased by 30%

By optimally configuring the application settings (the JVM options), Akamas increased the application throughput by 30%:

[Screenshot: throughput improvement of the best configuration vs. the baseline]

The automatic optimization took 15 hours

Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts.

Akamas was able to find the optimal JVM configuration after a bit more than half a day of automatic tuning:

[Screenshot: study progress showing the optimization over time]

How did Akamas achieve that? A look at the best configurations

In the Summary tab, you can quickly see the optimal JVM configuration Akamas found:

[Screenshot: the best configuration in the Summary tab]

As you can see, without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters, such as the garbage collector type, the heap and new-generation sizing, and the number of parallel and concurrent GC threads.

Those are not easy settings to tune manually!

Besides increasing throughput, Akamas also made the application run 23% faster

Another very interesting side benefit is that the optimized configuration not only improved application throughput, but also made Konakart run 23% faster than the baseline (Configuration Analysis tab):

[Screenshot: response time improvements in the Configuration Analysis tab]

Also notice how the 3rd best configuration actually improved response time even more (26%).

The best configuration significantly increased application scalability and resiliency

The significant effects the optimal configuration had on application scalability can also be analyzed by looking at the over-time metrics (Metrics tab).

As you can see, the best configuration greatly increased the application scalability and its ability to sustain peak traffic volumes with very low response times. Also notice how Akamas automatically detected the peak throughput achieved by the different configurations while keeping the response time under 100 ms, as per the goal constraint.

[Screenshot: throughput and response time over time for the baseline and best configurations]

The best configuration made the application run more efficiently on the CPU (hence less costly on the cloud)

As a final but important benefit, the best configuration Akamas identified is also more efficient CPU-wise. As you can see by looking at the jvm.jvm_cpu_used metric, at peak load the CPU consumption of the optimized JVM was more than 20% less than the baseline configuration. This can translate to direct cost savings on the cloud, as it allows using a smaller instance size or container.

[Screenshot: jvm.jvm_cpu_used at peak load for the baseline and best configurations]

Congratulations, you have just done your first Akamas optimization of a real-life Java application in a performance testing environment with JMeter and Prometheus!

Here are a couple of next steps for you:
