As part of one of our projects, we needed to introduce performance/load testing. This kind of testing can be approached in many different ways, depending mostly on customer requirements and goals. The goal we set in our project was to perform API calls against a back-end server (HTTP requests) in a given scenario and measure response times. The scenario of HTTP requests was discussed with the customer beforehand and covered many different real user behaviors in the web application.
While developing these performance tests, we tried three different solutions for collecting, storing, and visualizing reliable results that would satisfy the customer. I will describe these three approaches below:
The first solution we came up with used JMeter's user-friendly GUI to create the request scenarios, which we then converted into a Taurus test (JMX file into a YAML file) for better integration into our pipeline.
Afterwards, the tests needed some manual modification, because the integrated converter is not always precise and sometimes omits test elements. Last but not least, we had to set up the Taurus-specific elements (for example, the number of users, run time, etc.).
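A Taurus configuration of this kind might look like the following sketch; the concurrency, durations, scenario name, and URL are illustrative values, not our actual setup:

```yaml
execution:
- executor: jmeter
  concurrency: 50     # number of virtual users
  ramp-up: 1m         # time to reach full concurrency
  hold-for: 10m       # steady-state run time
  scenario: user-flow

scenarios:
  user-flow:
    # requests converted from the original JMX scenario
    requests:
    - url: https://example.com/api/login
      method: POST
```

Taurus also ships a standalone converter (`jmx2yaml`) for this JMX-to-YAML step, though, as noted above, the output usually still needs a manual review.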
In our first solution, we chose Taurus as the execution software instead of sticking with JMeter all the way, because Taurus was much easier to integrate on the server via a Docker image. This solution, however, was not perfect: we needed to store the test result data for long periods of time (to create reports and analyze the data later if needed), and while BlazeMeter shows very nice visualizations of the test results, its data retention in the free version is only 7 days. Also, the user cannot customize which data will be visualized.
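Running Taurus from its official Docker image is roughly a one-liner of this shape (the config file name is illustrative):

```shell
# Mount the directory containing the YAML config and run it with the Taurus image
docker run --rm -v "$(pwd):/bzt-configs" blazemeter/taurus test.yml
```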
For our second solution, we replaced Taurus with the K6.io testing tool, and a local InfluxDB database gave us a proper way to store test result data on our own servers without any retention limit. This raw data can later be used in any way we choose, or analyzed and filtered, without modification by other tools.
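With raw samples in InfluxDB, later analysis is just a matter of querying. An InfluxQL query of roughly this shape does the kind of filtering we had in mind; the measurement, field, and tag names depend on the tool writing the data and are placeholders here:

```sql
-- Average response time per endpoint, in 1-minute buckets, over the last hour
-- ("http_reqs", "response_time", and "endpoint" are illustrative names)
SELECT MEAN("response_time")
FROM "http_reqs"
WHERE time > now() - 1h
GROUP BY "endpoint", time(1m)
```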
For data visualization, we used Grafana, as its integration with InfluxDB works very well and we already had some experience with it. Test result data was sent to the InfluxDB database in real time, so we could see results in Grafana nearly in real time too, as it supports a minimum automatic refresh interval of 5 seconds.
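Streaming results to InfluxDB while the test runs is built into K6.io via an output flag. An invocation might look like this (the database name and script file are illustrative):

```shell
# Run the test and stream k6 metrics into a local InfluxDB database named "k6"
k6 run --out influxdb=http://localhost:8086/k6 script.js
```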
A Grafana dashboard for K6.io tests can be found in the Grafana store. Of course, we had to make some small modifications to the dashboard to show the desired visualizations. In our case, however, the K6.io testing tool showed some minor problems: we suspected that some of the test results were slightly altered by the tool itself, and we did not receive enough information about each HTTP request sent (for example, error details). So we moved on to solution number 3…
As our third and final solution, we kept the InfluxDB and Grafana integration, but for the test execution software we chose JMeter.
We eventually found a way to integrate JMeter as a Docker image on our servers with the right plugins, and we found JMeter very reliable to use.
We only needed to solve one minor problem with its default memory allocation: only 256 MB, too little for our performance/load testing. This small memory allocation caused interruptions in the test executions. Fortunately, the Docker image usage of JMeter required just a small modification: we increased the allocation to a minimum of 2 GB and a maximum of 4 GB, which quickly solved all our problems.
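JMeter's startup script reads its heap settings from the `HEAP` variable, so a small override along these lines is usually enough; the image name, mount path, and test plan here are placeholders, and the exact mechanism depends on whether the image passes the environment variable through to JMeter's startup script:

```shell
# Override JMeter's default JVM heap: 2 GB minimum, 4 GB maximum,
# then run the test plan in non-GUI mode (-n) with the plan file (-t)
docker run --rm -e HEAP="-Xms2g -Xmx4g" \
  -v "$(pwd):/tests" our-jmeter-image \
  jmeter -n -t /tests/plan.jmx
```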
The Grafana store includes dashboards for JMeter tests too, and one of them was prepared specifically for the JMeter/InfluxDB integration. It fully satisfied our needs, as it was set up well for the measurements we wanted to see visualized.
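Wiring JMeter to InfluxDB goes through JMeter's Backend Listener (the `InfluxdbBackendListenerClient`), whose parameters look roughly like this; the URL, database, and application names are placeholders for our real values:

```properties
# Backend Listener parameters for the InfluxDB client
influxdbMetricsSender = org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl = http://localhost:8086/write?db=jmeter
application = performance-tests
measurement = jmeter
summaryOnly = false
samplersRegex = .*
```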
In summary, we found the JMeter/InfluxDB/Grafana combination to be the most reliable and customizable of the three solutions. It gave us the best way to execute tests, store raw results locally (on our own servers), and visualize those results on a timeline with graphs and with lists where values can be marked with threshold colors.
These visualizations of the test results helped us reveal some problematic areas of the tested application, and we hope they can help other developers find the sources of their apps' performance problems.