
In this guide, you'll learn how to optimize Konakart, a real-world Java-based e-commerce application, by leveraging the Micro Focus LoadRunner Enterprise performance testing tool and the Prometheus monitoring tool.

What you'll learn

What you'll need

The following diagram shows the high-level architecture of how Akamas operates in this scenario.

Here are the main elements:

Architecture

You need a fully working LoadRunner Enterprise environment to run a load test on your target Konakart system. You can configure the out-of-the-box integration between Akamas and LRE following the integration guide.

Take note of the following IDs and configuration values while setting up your LRE artifacts:

You will need to set them in your workflow configuration.

Moreover, take note of the address, the database (schema) name, and the credentials of your InfluxDB external analysis server, since they will be required when configuring the telemetry instance.

To create the LoadRunner Enterprise test you will need a script that simulates user navigation on the Konakart website. You can find a working script in the repository.

Akamas provides an out-of-the-box optimization pack called Web Application that comes in very handy for modeling typical web applications, as it includes metrics such as transactions_throughput and transactions_response_time, which you will use in this guide to define the optimization goal and to analyze the optimization results. These metrics will be gathered from LRE thanks to the Akamas out-of-the-box LoadRunner Enterprise telemetry provider.

Let's start by creating the system and its components.

The file system.yaml contains the following description of the system:

name: konakart
description: The konakart eCommerce shop

Run the following command to create it:

akamas create system system.yaml

Add the Web Application component

The Web Application component is used to model the key metrics characterizing the performance of a typical web application (e.g. response time or transaction throughput).

Akamas comes with a Web Application optimization pack out-of-the-box. You can install it from the UI:


You can now create the component modeling the Konakart web application.

The comp_konakart.yaml file describes the component as follows:

name: konakart
description: The konakart web application
componentType: Web Application
properties:
  loadrunnerenterprise: ""

As you can see, this component contains the loadrunnerenterprise property that instructs Akamas to populate the metrics for this component leveraging the LoadRunner Enterprise integration.

Create the component by running:

akamas create component comp_konakart.yaml konakart

You can now explore the result of your system modeling in the UI. As you can see, your konakart component is now populated with all the typical metrics of a web application:


Add the JVM component

Before starting the optimization, you need to add the JVM component to your system.

First of all, install the Java optimization pack:


The comp_jvm.yaml file defines the component for the JVM as follows:

name: jvm
description: The JVM running the e-commerce platform
componentType: java-openjdk-11
properties:
  prometheus:
    instance: sut_konakart
    job: jmx

Notice how the jvm component has some additional properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels in the Prometheus queries that collect JVM metrics (e.g. JVM garbage collection time or heap utilization). Such metrics are collected out of the box by the Prometheus telemetry provider; no query needs to be specified.
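To give an idea of how these properties are used, the queries the provider runs are conceptually similar to the following PromQL (the metric names are those exposed by the JMX exporter's default JVM collectors; this is only an illustration, not the provider's exact queries):

```promql
# Heap memory currently used by the Konakart JVM
jvm_memory_bytes_used{instance="sut_konakart", job="jmx", area="heap"}

# Rate of time spent in garbage collection
rate(jvm_gc_collection_seconds_sum{instance="sut_konakart", job="jmx"}[1m])
```

The instance and job values in the component properties must match the labels under which Prometheus scrapes your Konakart target, otherwise the queries return no data.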

You can create the JVM component as follows:

akamas create component comp_jvm.yaml konakart

You can now see all the JVM parameters and metrics from the UI:


You have now successfully completed your system modeling!

It is now time to create the telemetry instances that Akamas will use to collect data from LoadRunner Enterprise (web application metrics) and Prometheus (JVM and OS metrics).

Prometheus telemetry instance

The Prometheus telemetry instance collects metrics for a variety of technologies, including JVM and Linux OS metrics. It can also be easily extended to import additional metrics (via custom PromQL queries). In this example, you are going to use Prometheus to import the JVM metrics exposed by the Prometheus JMX exporter.

First, update the tel_prometheus.yaml file replacing the target_host placeholder with the address of your Konakart instance:

provider: Prometheus
config:
  address: target_host
  port: 9090

And then create a telemetry instance associated with the konakart system:

akamas create telemetry-instance tel_prometheus.yaml konakart

LoadRunner Enterprise telemetry instance

As described in the LRE integration guide, you need an instance of InfluxDB running in your environment to act as an external analysis server for your LRE instance. Therefore, the telemetry instance needs to provide all the configurations required to connect to that InfluxDB server.

The file tel_lre.yaml is an example of an LRE telemetry instance. Make sure to replace the variables with the actual values of your configurations:

provider: LoadRunnerEnterprise
config:
  address: http://target_host
  port: target_influx_port
  username: influx_user
  password: influx_user_password
  database: influx_database_schema

and then create the telemetry instance:

akamas create telemetry-instance tel_lre.yaml konakart

You can now create a new workflow that you will use in your optimization study.

A workflow in an optimization study is typically composed of the following tasks:

To create the optimization workflow, update the workflow-optimize.yaml file, replacing the placeholders with the correct references to your environment:

name: konakart-optimize

tasks:
- name: Configure JVM options
  operator: FileConfigurator
  arguments:
    source:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key
      path: /home/ubuntu/konakart-docker/konakart/docker-compose.yml.templ
    target:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key
      path: /home/ubuntu/konakart-docker/konakart/docker-compose.yml

- name: Restart konakart
  operator: Executor
  arguments:
    command: "docker stack deploy --compose-file /home/ubuntu/konakart-docker/konakart/docker-compose.yml sut"
    host:
      hostname: target_host
      username: ubuntu
      key: /home/jsmith/.ssh/akamas.key

- name: Performance test
  operator: LoadRunnerEnterprise
  arguments:
    retries: 0
    address: http://lre_target_host
    username: lre_user
    password: lre_user_password
    project: lre_project
    domain: lre_project_domain
    tenantID: lre_tenant_id
    testId: test_id
    testSet: test_set_name
    timeSlot: test_max_duration
    verifySSL: false

Make sure to replace the placeholders with the correct references to your environment:

Regarding the LoadRunnerEnterprise operator, update the configuration above with the actual values of:

For more information about the configurations available for LoadRunner Enterprise, please refer to the dedicated LRE integration guide.
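Before creating the workflow, a quick check like the following helps spot any values you forgot to replace (the token list is illustrative; adjust it to the placeholders you actually used):

```shell
# Print any lines still containing placeholder tokens;
# no matches means every placeholder has been replaced
grep -nE 'target_host|lre_user|lre_project|test_id|test_set_name' workflow-optimize.yaml \
  || echo "No placeholders left"
```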

Once you have edited this file, run the following command to create the workflow:

akamas create workflow workflow-optimize.yaml

In the optimization workflow you have just created, the FileConfigurator operator is used to automatically apply the configuration of the JVM parameters at each experiment. In order for this to work, you need to allow Akamas to set the parameter values being tested in each experiment. This is made possible by the following Akamas templating approach:

Therefore, you will now prepare the Konakart configuration file (a Docker Compose file).

First of all, inspect the Konakart configuration file by executing the following command:

cat konakart-docker/konakart/docker-compose.yml

which should return the following output, where you can see that the JAVA_OPTS variable specifies a maximum heap size of 256 MB:

version: "3.8"
services:
  konakart:
    image: chiabre/konakart_jmx_exporter:latest
    environment:
      JAVA_OPTS: "-Xmx256M"
    deploy:
      resources:
...

To allow Akamas to apply a new heap size (and any other parameter selected for optimization) at each experiment in place of this hardcoded value, you need to prepare a new Konakart Docker Compose template file, docker-compose.yml.templ, containing the Akamas parameter template.

First, copy the Docker Compose file to create the template, keeping the original file as a backup:

cd konakart-docker/konakart
cp docker-compose.yml docker-compose.yml.templ
mv docker-compose.yml docker-compose.yml.orig

Now edit the docker-compose.yml.templ file and replace the hardcoded value of the JAVA_OPTS variable with the Akamas parameter template:

version: "3.8"
services:
  konakart:
    image: chiabre/konakart_jmx_exporter:latest
    environment:
      JAVA_OPTS: "${jvm.*}"
...
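If you prefer to apply this substitution non-interactively, a sed one-liner can do it (this assumes the JAVA_OPTS line matches the original file exactly as shown above):

```shell
# Replace the hardcoded heap option with the Akamas parameter template
sed -i 's|JAVA_OPTS: "-Xmx256M"|JAVA_OPTS: "${jvm.*}"|' docker-compose.yml.templ
```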

At each experiment, the FileConfigurator operator in your workflow will expand the ${jvm.*} template into all the JVM parameters, replacing it with the actual values provided by the Akamas AI-driven optimization engine.
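For illustration, after the template is expanded an experiment's docker-compose.yml might contain something like the following (the parameter values here are purely hypothetical examples of a configuration the optimizer could pick):

```yaml
version: "3.8"
services:
  konakart:
    image: chiabre/konakart_jmx_exporter:latest
    environment:
      JAVA_OPTS: "-Xmx512M -Xmn256M -XX:+UseParallelGC -XX:ParallelGCThreads=2"
...
```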

At this point, you are ready to create your optimization study!

In this guide, your goal is to optimize Konakart performance such that:

This business-level goal translates into the following configuration for your Akamas study:

You can simply take the following description of your study and copy it into a study-max-throughput-with-SLO.yaml file:

name: Optimize konakart throughput with response time SLO
description: Tune the JVM to increase transaction throughput while keeping good performance
system: konakart

goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
  constraints:
    absolute:
    - konakart.transactions_response_time <= 100    # 100ms service-level objective (SLO)

windowing:
  type: stability
  stability:
    metric: konakart.transactions_response_time
    width: 2
    maxStdDev: 1000000
    when:
      metric: konakart.transactions_throughput
      is: max

workflow: konakart-optimize

parametersSelection:
- name: jvm.jvm_gcType
- name: jvm.jvm_maxHeapSize
  domain: [32,1024]
- name: jvm.jvm_newSize
  domain: [16,1024]
- name: jvm.jvm_survivorRatio
- name: jvm.jvm_maxTenuringThreshold
- name: jvm.jvm_parallelGCThreads
  domain: [1,4]
- name: jvm.jvm_concurrentGCThreads

parameterConstraints:
- name: "JVM max heap must always be greater than new size"
  formula: jvm.jvm_maxHeapSize > jvm.jvm_newSize
- name: "JVM GC concurrent threads must always be less than or equal to parallel"
  formula: jvm.jvm_parallelGCThreads >= jvm.jvm_concurrentGCThreads

steps:
- name: baseline
  type: baseline
  values:
    jvm.jvm_maxHeapSize: 256

- name: optimize
  type: optimize
  numberOfExperiments: 50

and then run the following command to create your study:

akamas create study study-max-throughput-with-SLO.yaml

Let's now take a look at the results and benefits Akamas achieved in this real-life optimization. Since the optimization results do not depend on the load-testing tool you are using, and the scenario you run with LRE is equivalent to the JMeter one, the same considerations described in the Konakart tuning guide with JMeter also apply to this use case.

Congratulations, you have just completed your first Akamas optimization of a real-life Java application in a performance testing environment with LRE and Prometheus!

Here are a couple of next steps for you:

© Akamas Spa 2018-present. Akamas and the Akamas logo are registered trademarks of Akamas Spa.