It's a common misconception that performance testing with Docker is a lengthy and tiring process. If I asked you to set up JMeter for performance testing on one virtual machine (VM), how long would it take you? And if you needed to set up the same on multiple virtual machines, how much longer would it take? It's widely agreed that this task is tedious and time-consuming.
Once you've decided to use JMeter as your performance testing tool, the next step is to set up the JMeter testing environment, which isn't very difficult but can be cumbersome. These are the steps typically followed to set up a JMeter performance testing environment:
- Procuring the Machine (Master/Slave)
- Downloading JDK
- Installing JDK
- Setting up the environment variable of JDK
- Downloading & extracting the JMeter package
- Downloading & setting up the required plugins
- Setting up the environment variable of JMeter
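For comparison, the manual route above amounts to something like the following on a single Ubuntu VM. This is only a sketch: the package name, JMeter version, and paths are assumptions you would adjust for your own distribution and setup.

```shell
# Manual JMeter setup on one Ubuntu VM (sketch; versions and paths are assumptions)
sudo apt-get update && sudo apt-get install -y openjdk-11-jdk    # download and install JDK
wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.5.tgz
tar -xzf apache-jmeter-5.5.tgz                                   # extract the JMeter package
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64              # JDK environment variable
export PATH="$PATH:$PWD/apache-jmeter-5.5/bin"                   # JMeter environment variable
jmeter -v                                                        # verify the installation
```

And then you repeat this on every load-generator machine — which is exactly the toil the Docker approach removes.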
If you follow this sequence, it will probably take almost an hour per machine. And if you encounter any issues while setting it up, it will take even longer. But here's a hack that can make your life easier and save you plenty of time. It might sound unbelievable, but you can complete this task in just five minutes for any geolocation with just a few clicks.
Yes, you read that right! There are many ways to accomplish this, and one of them is using a Docker/container-based JMeter solution.
What do you need?
(The following list is based on Azure cloud)
- Docker Desktop software: to build and push the Docker image.
- Azure CLI software: for connecting to Azure cloud from the local laptop where we are building the image.
- AzCopy software: to help download and upload files from Azure storage, such as .jmx, .jtl, and .log files.
- Azure Container Registry: to store the image in a ready-to-use condition.
- Azure Container Instance: to run the image or the test.
- InfluxDB: for live monitoring of the load test execution (additional step).
- Grafana Dashboard: for live monitoring of the load test execution (additional step).
Let's get started
Starting with the easiest part, we are going to create the folder structure in cloud storage. We will create three folders in a file share under Azure cloud storage:
- TestScript - to keep all .jmx files
- ConfigFiles - to keep the JMeterRun.sh, user.property, jmeter.sh, and TestData.csv files
- TestResult - to keep all .jtl and jmeter.log files
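If you prefer the CLI over the portal, the file share and the three folders can also be created with the Azure CLI. This is a sketch: the share name `foldername` and storage account name `storagename` are placeholders matching the URLs used later in this article — substitute your own.

```shell
# Create the file share and the three folders (placeholder names)
az storage share create --name foldername --account-name storagename
az storage directory create --share-name foldername --name TestScript --account-name storagename
az storage directory create --share-name foldername --name ConfigFiles --account-name storagename
az storage directory create --share-name foldername --name TestResult --account-name storagename
```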
Now, we will create an Azure Container Registry, which will act as the repository to keep Docker images.
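The registry can likewise be created from the CLI. The resource group, registry name, location, and SKU below are illustrative placeholders.

```shell
# Create a resource group and a container registry (placeholder names; Basic SKU as an example)
az group create --name myResourceGroup --location eastus
az acr create --resource-group myResourceGroup --name myjmeterregistry --sku Basic
```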
We are almost done with the Azure portal setup except for setting up the "Azure Container Instance," which we will do once we complete the Docker file.
Let's create the Dockerfile, which will contain instructions to download and install the following apps/software:
- Pull the Ubuntu OS base image
- Download and unzip JMeter
- Download and install the JMeter Plugin Manager
- Download and install JMeter plugins
- Download and unzip AZCopy
- Download the Master.sh file (we will talk about it in a few minutes)
- Execute the Master.sh
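Put together, the instructions above translate into a Dockerfile along these lines. Treat this as a sketch: the Ubuntu tag, JMeter version, plugin jar, and download URLs are assumptions you should pin and verify for your own build.

```dockerfile
# Sketch of the Dockerfile described above; versions, URLs, and paths are assumptions
FROM ubuntu:22.04

# JDK plus tools needed for the downloads below
RUN apt-get update && apt-get install -y openjdk-11-jdk wget unzip

# Download and unzip JMeter
RUN wget -q https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.5.tgz \
    && tar -xzf apache-jmeter-5.5.tgz -C /opt \
    && rm apache-jmeter-5.5.tgz

# Download and install the JMeter Plugins Manager (add any further plugin jars you need)
RUN wget -q -O /opt/apache-jmeter-5.5/lib/ext/jmeter-plugins-manager-1.9.jar \
    https://jmeter-plugins.org/get/

# Download and unzip AzCopy
RUN wget -q -O azcopy.tar.gz https://aka.ms/downloadazcopy-v10-linux \
    && tar -xzf azcopy.tar.gz --strip-components=1 -C /usr/local/bin --wildcards '*/azcopy' \
    && rm azcopy.tar.gz

# Bake Master.sh into the image and execute it on container start
WORKDIR /opt/apache-jmeter-5.5/bin
COPY Master.sh .
CMD ["sh", "Master.sh"]
```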
Once the Dockerfile is ready, we can build it using the `docker build` command and push it to the Azure Container Registry using the `docker push` command. Now, our Docker image is ready to use. However, so far, we have only created the JMeter infrastructure; it does not have any code or instructions to download the script, run the test, and most importantly, save the test results.
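The build-and-push step looks something like this (the registry and image names are placeholders for whatever you created in your Azure Container Registry):

```shell
# Build the image locally and push it to the Azure Container Registry
az acr login --name myjmeterregistry                       # authenticate Docker to ACR
docker build -t myjmeterregistry.azurecr.io/jmeter:5.5 .
docker push myjmeterregistry.azurecr.io/jmeter:5.5
```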
The reason there aren't any instructions to get the test script or run the test or save the result in the Dockerfile is that there is no benefit to hardcoding them into the Docker image. It should work dynamically, with one particular folder where we can simply upload the test script, and similarly, one particular folder where it can keep saving all the result files. To achieve all of this, I am going to create one more .sh file and name it JMeterRun.sh.
The Docker image will call the "Master.sh" file, which is an executable file downloaded from Azure storage. Master.sh will then download and call "JMeterRun.sh," which contains all the instructions to download the testing artifacts, execute the test, and upload the file back to cloud storage.
The reason for creating two .sh files (Master.sh and JMeterRun.sh) is that Master.sh is directly mapped and built into the image. This means that any change to Master.sh requires rebuilding the image. However, by creating JMeterRun.sh, which is called by Master.sh, we can make changes at any time and simply upload them to cloud storage, making it 100% dynamic.
Content of Master.sh
```shell
# The command below downloads the JMeterRun.sh file from Azure storage
azcopy copy "https://storagename.file.core.windows.net/foldername/JMeterRun.sh?<your secured path of cloud storage>" "/opt/apache-jmeter-5.5/bin"

# The command below runs JMeterRun.sh
sh JMeterRun.sh
```
Content of JMeterRun.sh
```shell
# Capture the current system time (used to tag the result files)
MyCurrentTime=$(date +"%Y%m%d_%H%M%S")

# Download the test data files
azcopy copy "https://storagename.file.core.windows.net/foldername/TestData.csv?<your secured path of cloud storage>" "/opt/apache-jmeter-5.5/bin"

# Download the user.property file; useful if we want to change any JMeter properties
azcopy copy "https://storagename.file.core.windows.net/foldername/user.property?<your secured path of cloud storage>" "/opt/apache-jmeter-5.5/bin"

# Download the jmeter.sh file; useful for playing around with the heap size
azcopy copy "https://storagename.file.core.windows.net/foldername/jmeter.sh?<your secured path of cloud storage>" "/opt/apache-jmeter-5.5/bin"

# Download the .jmx file
azcopy copy "https://storagename.file.core.windows.net/foldername/MyJMeterScript.jmx?<your secured path of cloud storage>" "/opt/apache-jmeter-5.5/bin"

# Run the JMeter test
sh jmeter.sh -n -t "/opt/apache-jmeter-5.5/bin/MyJMeterScript.jmx" -l "MyResult_${MyCurrentTime}.jtl" -j "MyJMeterLog_${MyCurrentTime}.log"

# Upload the .jtl and JMeter log files back to Azure storage
azcopy copy "/opt/apache-jmeter-5.5/bin/MyResult_${MyCurrentTime}.jtl" "https://storagename.file.core.windows.net/foldername/MyResult_${MyCurrentTime}.jtl?<your secured path of cloud storage>"
azcopy copy "/opt/apache-jmeter-5.5/bin/MyJMeterLog_${MyCurrentTime}.log" "https://storagename.file.core.windows.net/foldername/MyJMeterLog_${MyCurrentTime}.log?<your secured path of cloud storage>"
```
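As a side note, the timestamp placeholder used in the result file names above is easy to generate inside the script, and it is worth guarding the upload on the test's exit status so a failed run doesn't silently push empty results. A minimal, runnable sketch of that pattern — where `run_test` and the final `echo` are stand-ins for the real jmeter.sh and azcopy calls:

```shell
#!/bin/sh
# Sketch: tag result files with a timestamp and only "upload" on success
MyCurrentTime=$(date +"%Y%m%d_%H%M%S")
ResultFile="MyResult_${MyCurrentTime}.jtl"

run_test() {
    # stand-in for: sh jmeter.sh -n -t MyJMeterScript.jmx -l "$ResultFile"
    touch "$ResultFile"
}

if run_test; then
    echo "uploading $ResultFile"        # stand-in for the azcopy upload
else
    echo "test failed, skipping upload" >&2
fi
```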
As most of our setup is ready, let's create an Azure Container Instance and run our test. Here are the steps to follow:
- Go to the Azure portal
- Create a new resource
- Search for "Container instance"
- Select your Container instance
- Click on "Create"
- Provide your subscription and resource group name
- Give a name to your Container
- Select the geolocation where you want to deploy your image or from where you want to run the test
- Choose "Azure Container registry" as your image source
- Select the image you pushed earlier to the Azure Container Registry
- Select the size, which will allow you to choose the CPU and memory configuration
- Go to the networking tab and allow port 8086 to communicate with InfluxDB if you have already set it up with the default port
- Validate and create the Container instance
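The same creation can be scripted with the Azure CLI, which is handy when you need one instance per geolocation. All names, credentials, and the CPU/memory sizing below are placeholders for your own setup:

```shell
# CLI equivalent of the portal steps above (placeholder names; 2 CPU / 4 GB as an example)
az container create \
  --resource-group myResourceGroup \
  --name jmeter-eastus \
  --location eastus \
  --image myjmeterregistry.azurecr.io/jmeter:5.5 \
  --registry-login-server myjmeterregistry.azurecr.io \
  --registry-username <acr-username> \
  --registry-password <acr-password> \
  --cpu 2 --memory 4 \
  --restart-policy Never
```

The `--restart-policy Never` flag keeps the container from looping the test after it terminates.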
After creation, the Container instance will automatically run for the first time and terminate once all activities are done as per the instructions. In the future, if you need to run the test, simply hit the "run" button, and it will take care of everything; you will only need to upload a fresh script for another test from the same geolocation. If you want to run the test from a different geolocation, you will need to create a new Azure Container Instance there.
You can visit the Container page of your Azure Container instance and view the current status, events, properties, and logs.
My Azure setup for this article
Assuming that you are already aware of live monitoring using InfluxDB and Grafana and have the setup ready, you just need to add the backend listener in the test script with the correct hostname and port.
Using the Docker/container approach removes all the manual effort required to set up the infrastructure and completes the task within five minutes. It also avoids the issues that can arise during infrastructure setup and can be run in any geolocation offered by the cloud providers. Once you create your Dockerfile, it can be reused with any other cloud provider with minimal changes, as all major cloud providers offer similar features.