
Why early performance testing helps deliver better Agile outcomes


Most organisations understand the benefits that Agile development brings, and recognise the importance of testing often and early. Performance testing, however, rarely happens until just before deployment. In this article, Lam Pham, Senior Test Team Manager at NashTech, discusses how to implement performance testing alongside Scrum Agile, including the use of test automation. 

Performance testing — now or later?

The business value offered by Agile development and DevOps — in terms of better-quality software and faster time to market — is well understood today. There’s also widespread recognition that testing often and early is critical to Agile approaches. However, performance testing is rarely included in sprints. Generally, it’s carried out on the release candidate just before deployment, when performance defects are most expensive to fix and most likely to put the release date at risk.

At NashTech, we recommend implementing performance testing alongside Scrum Agile. By conducting performance testing during sprints, and defining non-functional acceptance criteria for each story as part of the Definition of Done (DoD), performance can be proven to meet requirements before agreeing that any functionality is complete.

The top 3 challenges of performance testing within sprints

Despite the benefits of conducting performance testing within sprints, there are a number of challenges:  

  • How do you incorporate performance testing into the continuous integration (CI) pipeline? 
  • How do you performance-test an unfinished and frequently changing code base? 
  • How do you get meaningful results in an environment that’s much smaller than production? 

At NashTech, we’ve developed a 3-part strategy to address those challenges.

1. Performance testing in the CI pipeline

Performance testing plays a critical role in establishing acceptable quality levels for the end user, because it evaluates how the system behaves under a given workload. Shifting performance testing left — making sure it happens as close to development as possible — gives developers an early feedback loop for performance issues.

Given the growing need for faster release cycles, continuous integration (CI) and continuous delivery (CD) help improve product quality while reducing project cost.  

Plainly, then, performance testing is a necessary part of the CI/CD pipeline. Identifying the goals (including the volume metrics) and acceptance values in a specific environment is critical to successful performance testing. Depending on the goals, we identify the threshold for each module and set its DoD in the sprint. Performance testing is then executed when the new build is ready. And if something fails, we’re notified straightaway.
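As a concrete illustration, the sketch below shows one way a sprint-level performance gate might be wired into a CI job. It is a minimal example rather than a description of NashTech’s tooling: the endpoint, thresholds and sample size are hypothetical, and any load-testing tool that reports latency and error figures could feed the same check.

    import statistics
    import sys
    import time

    import requests  # assumed to be available in the CI image

    # Hypothetical acceptance values agreed in the sprint's Definition of Done.
    ENDPOINT = "https://staging.example.com/api/orders"  # placeholder URL
    P95_BUDGET_MS = 400       # 95th-percentile response-time budget
    ERROR_RATE_BUDGET = 0.01  # at most 1% failed requests
    SAMPLE_SIZE = 200         # requests sent per CI run

    def run_smoke_load():
        """Send a fixed number of requests and record latencies and failures."""
        latencies_ms, failures = [], 0
        for _ in range(SAMPLE_SIZE):
            start = time.perf_counter()
            try:
                response = requests.get(ENDPOINT, timeout=5)
                if response.status_code >= 500:
                    failures += 1
            except requests.RequestException:
                failures += 1
            latencies_ms.append((time.perf_counter() - start) * 1000)
        return latencies_ms, failures

    def main():
        latencies_ms, failures = run_smoke_load()
        p95 = statistics.quantiles(latencies_ms, n=100)[94]
        error_rate = failures / SAMPLE_SIZE
        print(f"p95={p95:.0f}ms error_rate={error_rate:.2%}")
        # A non-zero exit code fails the build, so the team is notified straight away.
        if p95 > P95_BUDGET_MS or error_rate > ERROR_RATE_BUDGET:
            sys.exit(1)

    if __name__ == "__main__":
        main()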

2. Performance testing on an unfinished and changing code base

There’s no single right answer for all situations: we have to adapt the performance-testing approach to the individual case. The following aspects will need to be considered. 

Measurements and metrics. You first need to understand the required performance measurements and metrics. To make them realistic, we take into account the system’s technology stack, the business rules, and how the application will be used in production. We then define the performance-testing goals and evaluation methods.

Taking too many measurements makes analysis difficult and may also degrade the application’s actual performance. That’s why it’s vital to understand which measurements and metrics are most relevant to achieving the testing goals.
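To keep the measurement set small and focused, the acceptance values can be captured as data rather than buried in test code. The snippet below is only an illustrative sketch; the module names and numbers are invented to show the shape of such a definition.

    # Hypothetical per-module acceptance values, limited to the few metrics
    # that actually matter for this sprint's testing goals.
    THRESHOLDS = {
        "search":   {"p95_ms": 300, "error_rate": 0.010, "min_rps": 50},
        "checkout": {"p95_ms": 500, "error_rate": 0.005, "min_rps": 20},
    }

    def evaluate(module, p95_ms, error_rate, rps):
        """Return True when the measured values satisfy the module's thresholds."""
        limits = THRESHOLDS[module]
        return (
            p95_ms <= limits["p95_ms"]
            and error_rate <= limits["error_rate"]
            and rps >= limits["min_rps"]
        )

    # Example: results measured for the checkout module in the latest run.
    print(evaluate("checkout", p95_ms=410.0, error_rate=0.002, rps=25.0))  # True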

Test early. Performance testing should be done as early as possible, at the level of individual components of the system. This is often cheaper than testing the whole system — and it can be conducted as soon as an individual component is developed. Each new build in the CI process triggers performance test execution. Before release, however, tests should also exercise the full technology stack, including production-like infrastructure and data volumes, so that the system is proven ready for its intended audience the moment it’s rolled out.
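At the component level, a lightweight timing test is often enough to catch regressions long before the whole system exists. The example below is a sketch: price_basket stands in for whichever component the sprint delivers, and the 50 ms budget is purely illustrative.

    import statistics
    import time

    def price_basket(items):
        """Placeholder for the component under test."""
        return sum(item["unit_price"] * item["quantity"] for item in items)

    def test_price_basket_stays_within_budget():
        basket = [{"unit_price": 9.99, "quantity": n} for n in range(1, 101)]
        samples_ms = []
        for _ in range(1000):
            start = time.perf_counter()
            price_basket(basket)
            samples_ms.append((time.perf_counter() - start) * 1000)
        p95 = statistics.quantiles(samples_ms, n=100)[94]
        # Failing the assertion fails the build for this component.
        assert p95 < 50, f"p95 {p95:.2f}ms exceeds the 50ms component budget"

    if __name__ == "__main__":
        test_price_basket_stays_within_budget()
        print("component budget respected")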

Test scenario. In general, we should test using the most realistic scenario possible. To that end, we base the scenario on as much information as possible about the test, use case, purpose and environment, gathered in advance. This kind of test is carried out once all the components are integrated.
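Once the components are integrated, the scenario can mirror the expected production mix of user actions. The sketch below uses Locust as one possible tool; the endpoints and the 3:1 browse-to-checkout weighting are assumptions standing in for whatever usage information has been gathered.

    # Requires the locust package (pip install locust); run with: locust -f scenario.py
    from locust import HttpUser, task, between

    class ShopperUser(HttpUser):
        # Pause one to five seconds between actions, roughly matching user think time.
        wait_time = between(1, 5)

        @task(3)  # browsing happens about three times as often as checkout
        def browse_catalogue(self):
            self.client.get("/api/products")  # hypothetical endpoint

        @task(1)
        def checkout(self):
            self.client.post("/api/orders", json={"product_id": 42, "quantity": 1})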

3. Ensuring meaningful results

There’s a common belief that performance testing can be done on a simple, low-profile environment, and that a prediction can then be made for the real environment based on analysis of the results. But there are many risks with this approach. In particular, the size and structure of the data could dramatically affect load test results.  

The closer the test environment is to the production environment in data size, structure and infrastructure, the more reliable the performance test results will be.  

Subject to our earlier point about metrics, we collect as much information as we can using the testing tools and monitoring services. 

Running regular automated performance tests is helpful for two main reasons: 

  • We have the data as soon as possible 
  • The more information we have, the easier it is to detect trends 

A better understanding of the system’s behaviour, coupled with faster identification of the pain points, allows us to more quickly provide an accurate forecast of how the system will perform in production.
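Because every automated run produces comparable numbers, trend detection can be as simple as comparing the latest result with a short history. The sketch below assumes a history.json file of per-build p95 values written by earlier runs; the file name and the 10% drift tolerance are illustrative.

    import json
    from pathlib import Path

    HISTORY_FILE = Path("history.json")  # hypothetical location of past results
    DRIFT_TOLERANCE = 0.10               # flag a rise of more than 10% over the baseline

    def record_and_check(current_p95_ms, window=5):
        """Append the latest p95 and report whether it drifts above the recent average."""
        history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []
        recent = history[-window:]
        baseline = sum(recent) / len(recent) if recent else current_p95_ms
        history.append(current_p95_ms)
        HISTORY_FILE.write_text(json.dumps(history))
        return current_p95_ms > baseline * (1 + DRIFT_TOLERANCE)

    if __name__ == "__main__":
        if record_and_check(current_p95_ms=430.0):
            print("Warning: p95 latency is trending upwards")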

Final thoughts

There are significant benefits to uncovering performance issues in each Agile sprint. If you leave performance testing until just before deployment, issues have to be fixed immediately, which may delay the release. 

As well as increasing the likelihood of your project going live on time, raising awareness of performance issues as early as possible leads to better-architected, more scalable applications.

Ready to know more?

To learn more about NashTech Software Testing Services, email info@nashtechglobal.com and a member of the team will be in touch.
