Performance testing

This page provides a summary of performance testing.

  • Performance testing

Performance Testing is a software testing process used to evaluate the speed, response time, reliability, scalability and resource usage of a software application under a workload. One of its main goals is to identify and eliminate performance bottlenecks in the application.

  • Why do performance testing?

The features and functionality supported by a software system are not the only concern. A software application's response time, reliability, resource usage and scalability matter as well.

Performance Testing is done to provide stakeholders with information regarding speed, stability and scalability. More importantly, it uncovers what needs to be improved before the product goes to market. Without performance testing, software is likely to suffer from issues such as running slowly while several users use it simultaneously, inconsistencies across different operating systems, and poor usability (user experience).

  • Types of performance testing

Common types of performance testing include load testing (behavior under an expected number of concurrent users), stress testing (behavior under extreme workloads, to find the breaking point), endurance testing (sustained expected load over a long period), spike testing (reaction to sudden large increases in load), volume testing (behavior when large volumes of data are processed) and scalability testing (how well the application scales up to support increased load).

  • Common problems

Most performance problems revolve around speed, response time, load time and poor scalability. Speed is often one of the most important attributes of an application: a slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to meet users' requirements and expectations. Common types of performance problems include the following:

  1. Long load time

  2. Poor response time

  3. Poor scalability

  4. Bottlenecking

Performance Testing Steps

  1. Identify the testing environment: Understand the details of the software, the network configuration and other important settings used during testing before beginning the testing process. This helps create more efficient tests, and it also helps identify possible challenges that testers may encounter during the performance testing procedures.

  2. Identify the performance acceptance criteria: This includes goals and constraints for throughput, response times and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints.

  3. Plan and design performance tests: Identify performance test scenarios that account for user variability, test data, and target metrics. These scenarios can be derived from the automated tests that already exist.

  4. Configure the test environment: Prepare the elements of the test environment and the instruments needed to monitor resources. In this case, the tools used to create and run the performance tests are JMeter, the BlazeMeter browser extension, and Taurus, as described in the following steps.

  5. Implement the test design: Create the performance tests. There are two ways to create a test in JMeter. The first is to create a script with Selenium and type all the code into a WebDriver Sampler in JMeter. For example:

Figure 1: Selenium script through JMeter
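As a minimal sketch of what such a WebDriver Sampler script looks like (written in JavaScript, one of the script languages the plugin supports; the URL and element locator below are illustrative placeholders, not taken from the original figure):

```javascript
// Minimal WebDriver Sampler script (JMeter Selenium/WebDriver plugin).
// WDS is the object the sampler injects: WDS.browser is the Selenium
// WebDriver instance, WDS.sampleResult records timing, WDS.log writes
// to the JMeter log.
var pkg = JavaImporter(org.openqa.selenium); // gives access to By, Keys, etc.

WDS.sampleResult.sampleStart();              // start the response-time clock
WDS.browser.get('https://example.com/');     // navigate to the page under test (placeholder URL)
var heading = WDS.browser.findElement(pkg.By.tagName('h1'));
WDS.log.info('Page heading: ' + heading.getText());
WDS.sampleResult.sampleEnd();                // stop the clock; JMeter records the sample
```

Everything between sampleStart() and sampleEnd() is measured as a single sample, so each user interaction to be timed separately should go in its own sampler.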

There is also a video tutorial that serves as a reference to get familiar with this method of testing in JMeter. This option is advisable for creating any kind of performance test except stress, volume and scalability testing: when the number of users increases, JMeter opens one browser instance per user, so at some point the machine will not be able to support that workload.

The second way is to create a recorded test. JMeter has this option, but it is not advisable to use it because steps go missing from the recording when interacting with the browser. Instead, the BlazeMeter browser extension can be used for this purpose. BlazeMeter has the option to download the recorded test as a .jmx file, which means it can be used in JMeter. There is also a good video explaining how to do it.

  6. Run the test: Execute and monitor the tests. Keep in mind that JMeter offers different listeners to monitor the test cases; the most important one to add is the Summary Report. Another consideration is that the JMeter UI cannot load the test plan (.jmx file) directly: it is necessary to execute the jmeter.bat file first, and once the UI is displayed the test plan can be loaded from it.

Besides loading the test in this way, there is another way to run it: execute the following command in a cmd window, where -n runs JMeter in non-GUI mode, -t points to the test plan, -l writes the results log, and -J passes a JMeter property (here, the chromedriver path):

```
jmeter -n -t ...path\Testplan.jmx -l ...path\log.csv -JChromeDriverPath=...path\chromedriver.exe
```

  7. Analyze, tune and retest: Consolidate, analyze and share test results. JMeter has a dashboard to present the analyzed data. However, the best way to present the report is with Taurus (a free and open-source framework for continuous testing that hides the complexities of running performance tests). Execute the following command in a cmd window; it is necessary to create a file with the .yml extension containing the information shown in Figure 2:

```
python -m bzt pathfileJMeterScript.jmx pathfileConfig.yml
```

Figure 2: Taurus file
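A minimal sketch of what such a Taurus configuration file can contain; the test plan name, scenario name and choice of reporting modules are illustrative assumptions, not taken from the original file:

```yaml
# Illustrative Taurus configuration (file and scenario names are placeholders).
execution:
- executor: jmeter          # run an existing JMeter test plan
  scenario: main

scenarios:
  main:
    script: Testplan.jmx    # path to the .jmx test plan

reporting:
- module: final-stats       # print summary statistics to the console
- module: blazemeter        # upload results to the interactive BlazeMeter report
```

The blazemeter entry under reporting is what produces the interactive report mentioned next.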

Using the BlazeMeter reporting module allows us to see the relevant information about the most important performance metrics of the executed test plan.

Finally, every performance test must be automated using a pipeline in Azure DevOps that contains all the needed steps. Azure DevOps will execute this task, but everything must run correctly. In addition, there are two steps the pipeline will need to take (a sketch of such a pipeline follows this list):

  1. Compare the results with the baseline (the baseline is the report created the first time the tests were run). If any metric degrades beyond its allowed percentage, this must break the pipeline.

  2. All of the information gathered (the .jtl files) must be saved in a single repository so that the history of performance testing can be seen and a dashboard can be created from it.
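As a minimal sketch of such a pipeline: the file names, paths and the baseline-comparison script (compare_with_baseline.py) below are assumptions for illustration, not part of the original document.

```yaml
# Illustrative azure-pipelines.yml (names and paths are placeholders).
trigger:
- main

pool:
  vmImage: 'windows-latest'

steps:
# Run the JMeter test plan in non-GUI mode, writing results to a .jtl file.
# Assumes JMeter is installed on the agent and available on PATH.
- script: jmeter -n -t tests\Testplan.jmx -l $(Build.ArtifactStagingDirectory)\results.jtl
  displayName: 'Run JMeter performance tests'

# Hypothetical comparison script: exits with a non-zero code if any metric
# degrades beyond its allowed percentage against the baseline report.
- script: python compare_with_baseline.py baseline.jtl $(Build.ArtifactStagingDirectory)\results.jtl
  displayName: 'Compare results with baseline'

# Keep the .jtl files so the performance-testing history is preserved
# and a dashboard can be built from it.
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'performance-results'
```

A non-zero exit code from the comparison script is what fails that step and therefore breaks the pipeline, as required by step 1 above.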

What performance testing metrics are measured

Metrics are needed to understand the quality and effectiveness of performance testing. Improvements cannot be made unless there are measurements.

    • Measurements: The data being collected, such as the seconds it takes to respond to a request.

    • Metrics: A calculation that uses measurements to define the quality of results, such as average response time (total response time divided by the number of requests; for example, 120 seconds of total response time across 400 requests gives an average of 0.3 seconds). The following metrics are often used in performance testing:

      • Response time

      • Wait time

      • Average load time

      • Peak response time

      • Error rate

      • Requests per second

      • Transactions passed/failed

      • Throughput

      • CPU utilization

      • Memory utilization

