Innominds Blog

6 Pitfalls to Avoid During Your Performance Test Execution

By Rakesh Reddy Ponnala

Testing is one of the most critical processes in application development; the success or failure of an application can hinge on it. The success of a typical performance test execution depends on several interdependent factors, including:

  • Test scripts
  • Test data
  • Load pattern setup
  • Configuration settings for load simulation
  • Load generating hardware and software

An oversight in any of these factors may lead to a failed load simulation, or even force the performance test execution to be aborted.

At Innominds, we know this can be a struggle, which is why we have put together some helpful guidance points for you to help ensure a smooth and successful load simulation during performance test executions.

1. Avoid Excessive Logging

Most load-testing tools offer several logging levels in their configuration. They provide options to log HTTP responses, informational messages such as script actions and parameter substitutions, warning messages and error messages.

Enable full logging only for debugging purposes; otherwise, set the logging option to “Log only on error.” During the actual execution of the performance test, keep logging to a minimum. The more the tool logs, the higher the resource utilization on the load generator machines. Extensive tool logging is one of the most common causes of load generator failures.
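The same principle can be sketched outside any particular tool. The snippet below (a minimal illustration, not tied to any specific load-testing product; the `configure_run_logging` helper and the `debug_mode` flag are our own naming) shows the “full logging while debugging, errors only during a real run” pattern using Python's standard `logging` module:

```python
import logging

def configure_run_logging(debug_mode: bool = False) -> logging.Logger:
    """Configure a logger for a load-test script.

    Full (DEBUG) logging is useful only while debugging the script;
    during an actual test run, log only errors to keep the overhead
    on the load generator minimal.
    """
    logger = logging.getLogger("loadtest")
    logger.handlers.clear()
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    # DEBUG while developing the script, ERROR during real execution.
    logger.setLevel(logging.DEBUG if debug_mode else logging.ERROR)
    return logger

log = configure_run_logging(debug_mode=False)
log.debug("parameter substituted: user_id=42")  # suppressed in a real run
log.error("HTTP 500 received from /checkout")   # always recorded
```

Commercial tools expose the same choice through run-time settings rather than code, but the trade-off is identical: every suppressed log line is CPU and disk I/O the generator can spend simulating users instead.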

2. Keep Appropriate Run Time Settings

Always ensure that appropriate run-time settings are configured as per the load pattern requirements. Even a minor misconfiguration can make the load simulation unrealistic and render the test results useless.

For example, consider a scenario where the scripts are configured with appropriate think times to achieve a realistic load simulation. If, while executing the performance test, the engineer leaves the “Simulate think time” setting disabled, the tool will ignore the think time incorporated into the scripts, and this will lead to an inaccurate load simulation against the application.

Having a checklist of the required run-time settings will help in avoiding misconfiguration of such settings.
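To make the think-time example concrete, here is a minimal sketch (our own illustrative code, not any tool's actual API; the `simulate` flag stands in for the tool's “Simulate think time” run-time setting):

```python
import random
import time

def think_time(seconds: float, simulate: bool = True,
               variation: float = 0.2) -> float:
    """Pause the way a real user would between actions.

    `simulate` mirrors the tool's "Simulate think time" setting:
    when it is off, no pause happens and every virtual user hammers
    the application back-to-back, which is unrealistically aggressive.
    Returns the pause actually applied so it can be inspected.
    """
    if not simulate:
        return 0.0
    # Randomize +/- `variation` around the recorded think time, as most
    # tools allow, so virtual users do not all pause identically.
    pause = seconds * random.uniform(1 - variation, 1 + variation)
    time.sleep(pause)
    return pause

think_time(5.0)                  # realistic: ~4-6 s pause between actions
think_time(5.0, simulate=False)  # misconfigured run: no pause at all
```

With the setting disabled, a script recorded with 5-second pauses generates many times its intended request rate, which is exactly why the results become useless.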

3. Provide A Sufficient Amount of Test Data

Always estimate the amount of test data required for each test script that is part of the test execution. In many cases, test-data records cannot be reused because of the nature of the data involved and the operations the scripts perform on it. Exhausting such test data leads to virtual user failures and, eventually, a drop in the user load. In some cases the drop is so significant that continuing the test execution becomes meaningless. Running out of test data a considerable way into the execution is even more painful. Provisioning enough test data up front keeps the execution from running into this situation.
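A simple sizing formula helps here. The sketch below is our own illustrative calculation (the function name and the 25% safety margin are assumptions, not a standard): multiply users by iterations by records consumed per iteration, then add headroom for pacing drift or an extended run.

```python
import math

def required_records(concurrent_users: int,
                     iterations_per_user: int,
                     records_per_iteration: int,
                     safety_margin: float = 0.25) -> int:
    """Estimate the non-reusable test-data records one script needs.

    The safety margin guards against extra iterations caused by
    pacing drift or a test run that is extended mid-execution.
    """
    base = concurrent_users * iterations_per_user * records_per_iteration
    return math.ceil(base * (1 + safety_margin))

# e.g. 200 users, 30 iterations each, 1 unique record per iteration:
print(required_records(200, 30, 1))  # 7500
```

Doing this per script, before the test, turns “we ran out of data two hours in” into a preventable planning item.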


4. Keep a Check on Load Generator Failures

One of the major issues that lead to a failed performance test execution is load generator failure. There can be many reasons for it, including excessive logging, a flawed estimate of the number of virtual users a load generator can support, and network issues between the load controller and the load generators. To avoid or reduce load generator failures, we suggest you:

  • Do not select excessive logging options
  • Make a proper, data-driven estimate of the number of virtual users each load generator can simulate
  • Set aside a few machines as backup load generators whenever possible
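The capacity estimate in the second point can be made systematic. The sketch below is a rough back-of-the-envelope model under stated assumptions (the function, the per-virtual-user costs, and the 30% headroom are all illustrative; calibrate the per-user figures with a small pilot run on your own tool):

```python
def max_vusers_per_generator(available_mem_mb: float,
                             mem_per_vuser_mb: float,
                             available_cpu_pct: float,
                             cpu_per_vuser_pct: float,
                             headroom: float = 0.3) -> int:
    """Rough capacity estimate for one load generator.

    Reserve `headroom` (30% here) of memory and CPU so the generator
    itself never becomes the bottleneck; capacity is whichever of the
    two resources runs out first.
    """
    usable_mem = available_mem_mb * (1 - headroom)
    usable_cpu = available_cpu_pct * (1 - headroom)
    by_mem = usable_mem // mem_per_vuser_mb
    by_cpu = usable_cpu // cpu_per_vuser_pct
    return int(min(by_mem, by_cpu))

# 16 GB RAM at ~10 MB per virtual user; ~0.05% CPU per virtual user:
print(max_vusers_per_generator(16000, 10, 100, 0.05))  # 1120
```

An estimate like this, checked against a pilot run, is what separates a “scientific estimate” from a guess that brings the generator down mid-test.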

5. Shield the Test Results From Corruption

One of the most common issues that make a performance test execution fail is corruption of the test results. Results can be corrupted for several reasons:

  • High resource utilization on load generator machines
  • Excessive logging, which in turn causes high resource utilization
  • Insufficient disk space on the load generators
  • Insufficient disk space on the controller machines
  • Load generator failure during the test
  • Network issues between load generators and controllers

Always make it a habit to back up the test results as soon as they are collected and merged by the load controller. In a few instances, results can be recovered from a failed load generator, provided another test has not been started in the meantime. In case of a load generator failure, try to recover the results from the generator before opting to re-execute the performance test.

6. Verify if Server Monitoring is in Place and Has Started

If you are monitoring the servers with custom scripts or with tools like Nmon, Perfmon or any other monitoring solution, verify that the monitoring is set up and running before beginning the performance test. Engineers often forget to start the monitoring on the servers, begin the test execution, and only later realize that performance metrics are not being collected. Partial metrics are of little use for analyzing the application’s performance, and collecting them properly will require you to execute another test.
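This check can also be scripted into a pre-test gate. The sketch below is illustrative only (the `monitoring_running` helper and the process-name list are assumptions; it checks the local process table, and for remote application servers you would run the same check over SSH):

```python
import subprocess

# Hypothetical list of monitor process names; match it to your tooling.
MONITOR_PROCESSES = ["nmon", "perfmon"]

def monitoring_running(process_names, ps_output=None):
    """Return {process_name: running?} for the expected monitors.

    `ps_output` can be injected for testing; by default the local
    process table is queried via `ps`.
    """
    if ps_output is None:
        ps_output = subprocess.run(["ps", "-eo", "comm"],
                                   capture_output=True, text=True).stdout
    names_seen = set(ps_output.split())
    return {name: name in names_seen for name in process_names}

status = monitoring_running(MONITOR_PROCESSES, ps_output="nmon\nsshd\n")
print(status)  # {'nmon': True, 'perfmon': False}
```

Refusing to start the load test until every entry in the status map is true converts a frequent human slip into an automatic gate.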

Making a checklist of the items above and verifying each one before starting a test execution will greatly improve the chances of a successful performance test. To learn more about the keys to performance test execution, or to talk with us about your particular project needs, contact us today.

Topics: Testing

Rakesh Reddy Ponnala

Performance Testing Lead, Innominds