Testing and Agile methods

How do we reconcile testing and Agile methods when developing our web analytics solution? For several years, our engineers have worked in development cycles focused on continuous improvement. Join us behind the scenes of AT Internet’s R&D.

 

The main premise: Quality is vital  

As software creators, we want to provide our customers with products that will add value – not additional limitations and difficulties – to their initiatives. With this goal in mind, for the past several years, we’ve worked on improving our development cycles, our requirements for software quality, and how we combine testing best practices with Agile methods… all while remembering our customers’ objective: to collect and leverage digital analytics data as optimally as possible. 

Before investing in testing efforts, we first need to know which objectives we want to achieve with testing, and what they entail. This will enable us to define a clear testing strategy and to establish a shared understanding between different stakeholders. 

 

Objective #1: Identify defects  

Most often, software testing helps us detect the presence of bugs in a product. This is the most obvious and well-known objective when testing software.   

[Figure: where defects are introduced vs. the cost of fixing them, by development phase]

Source: Applied Software Measurement, Capers Jones, 1996

 

We have observed time and again that the cost of correcting these anomalies skyrockets after release, even though (as illustrated in the graph above) 85% of bugs are introduced during the implementation phase. So why not correct bugs immediately, while they are still cheap for us to fix and still invisible to our customers? This is why we invest in testing to identify defects… but it is not the only reason.

 

Objective #2: Boost confidence levels 

Software testing can sometimes help us reach a certain level of confidence in the quality of our products. Once we have evaluated a feature with all the relevant tests and have not found any defects, we are reassured and more confident about the forthcoming delivery (though we must always remember that testing can show the presence of bugs, never prove their absence). Successful non-regression tests are also largely reassuring when delivering new features or corrections.
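To give an idea of what such a non-regression test looks like in practice, here is a minimal sketch in Python with pytest; the parse_tracking_event helper and the scenario are hypothetical illustrations, not code from our actual product.

# Minimal sketch of an automated non-regression test (pytest).
# `parse_tracking_event` is a hypothetical helper used for illustration only,
# not our actual code.
from urllib.parse import parse_qs


def parse_tracking_event(query_string: str) -> dict:
    """Turn a raw tracking-hit query string into a flat dict of parameters."""
    parsed = parse_qs(query_string, keep_blank_values=True)
    # Keep only the first value of each parameter, as a plain string.
    return {key: values[0] for key, values in parsed.items()}


def test_empty_custom_variable_is_kept():
    """Regression guard for a previously fixed defect.

    Once a bug is corrected, a test like this stays in the suite forever:
    any change that reintroduces the defect fails the build instead of
    reaching customers.
    """
    event = parse_tracking_event("p=home::basket&x1=&site=12345")
    assert event["site"] == "12345"
    assert event["x1"] == ""  # the empty value must not be silently dropped

Once a guard like this is part of the automated suite, it runs on every build – which is precisely what makes a “green” result reassuring before a delivery.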

 

Objective #3: Provide information 

All types of tests, whatever their results, are a valuable source of information about our products. For example:

  • Non-functional indicators, such as how performance evolves from one delivery to the next (see the sketch below)
  • Lists of regressions or identified anomalies (known issues)
  • Indicators that highlight technical debt

All this information enables us to make the right decisions at the right time. Tests are crucial to avoid advancing blindly – they keep us ready to act appropriately, should any issues arise.
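As an illustration of the first point in the list above, a performance indicator can itself be tracked by a test that compares each run against a recorded baseline. The sketch below (Python with pytest) is purely illustrative: the aggregate_hits function, the baseline file and the 20% tolerance are assumptions, not a description of our real pipeline.

# Minimal sketch of a performance indicator tracked by a test (pytest).
# `aggregate_hits`, the baseline file and the 20% tolerance are illustrative
# assumptions only.
import json
import time
from pathlib import Path

BASELINE_FILE = Path("perf_baseline.json")  # recorded alongside the tests
TOLERANCE = 1.20  # fail if we become more than 20% slower than the baseline


def aggregate_hits(hits):
    """Toy stand-in for a real processing step: count hits per page."""
    counts = {}
    for hit in hits:
        counts[hit["page"]] = counts.get(hit["page"], 0) + 1
    return counts


def test_aggregation_does_not_get_slower():
    hits = [{"page": f"page-{i % 50}"} for i in range(200_000)]

    start = time.perf_counter()
    aggregate_hits(hits)
    elapsed = time.perf_counter() - start

    if BASELINE_FILE.exists():
        baseline = json.loads(BASELINE_FILE.read_text())["aggregate_hits"]
        # The result is information in itself: it tells us how the indicator
        # evolves from one delivery to the next.
        assert elapsed <= baseline * TOLERANCE, (
            f"aggregate_hits took {elapsed:.3f}s, baseline is {baseline:.3f}s"
        )
    else:
        # First run: record the baseline instead of failing.
        BASELINE_FILE.write_text(json.dumps({"aggregate_hits": elapsed}))

Whether this test passes or fails, the measured value is worth recording: plotted across deliveries, it becomes exactly the kind of indicator described above.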

It should also be noted that certain tests can actually prevent bugs in the product before any code is written. This is the case, for example, when we review stories against the INVEST criteria, or during architecture reviews.

 

The Agile approach to development 

agile-illustration-sprint-method

Agile methods are designed to regularly deliver fully functional product increments that focus on the value provided to customers. This rhythm has many advantages, including:

  • flexibility 
  • continuous adaptation to the needs of our customers 
  • the ability to gather regular feedback from our users
  • continuous improvement of our teams’ practices 

But this approach doesn’t come without a few constraints during the software testing process: 

  • Time spent on manual testing activities is incompatible with the necessary Agile pace: the short duration of a sprint does not allow testing phases to stretch over several days or weeks, let alone be outsourced.
  • It would be much too risky for the planned delivery date if we only learned the test results at the end of the sprint – we need visibility as early as possible in the sprint.
  • The succession of deliveries over many sprints also creates a major constraint: we need to be alerted quickly if a regression is introduced into something delivered in a previous sprint.

We must adapt our strategy to wisely automate that which can be automated, and to invest our testing efforts in the right places.  

 

Our Agile testing pyramid 

Now, let’s have a look at the (famous) “Agile testing pyramid”, which we’ve adapted to our context. We want to optimise the effort/gain ratio of our testing activities by structuring these different layers of testing, and we want our developers to receive feedback regularly and as early as possible. To do this, we’ve selected the following approach:

[Figure: our Agile testing pyramid]
  • The majority of our automated tests sit in the bottom layer, where they are quick to execute and cheap to implement (unit tests – see the sketch after this list)
  • Intermediate automated layers validate how components are assembled and how they are displayed in the interfaces (system/integration tests and system/solution tests)
  • We reserve a small share for manual testing, which simply serves to validate the feature’s delivery (ensure that the deliverable matches the initially expressed need)
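To make the difference in scope between the first two layers more concrete, here is a minimal, self-contained sketch in Python with pytest. The EventParser and InMemoryStore components are hypothetical stand-ins, not real product components.

# Minimal sketch of two layers of the testing pyramid (pytest).
# `EventParser` and `InMemoryStore` are hypothetical components used only to
# illustrate the difference in scope between the layers.


class EventParser:
    """Bottom-layer target: pure logic, ideal for fast, cheap unit tests."""

    def parse(self, raw: str) -> dict:
        page, _, site = raw.partition("|")
        return {"page": page, "site": site}


class InMemoryStore:
    """Second component, assembled with the parser in the intermediate layer."""

    def __init__(self):
        self.events = []

    def save(self, event: dict) -> None:
        self.events.append(event)


# Bottom layer: many fast unit tests focused on a single component.
def test_parser_splits_page_and_site():
    assert EventParser().parse("home|12345") == {"page": "home", "site": "12345"}


# Intermediate layer: fewer tests validating how components are assembled.
def test_parsed_events_end_up_in_the_store():
    parser, store = EventParser(), InMemoryStore()
    store.save(parser.parse("home|12345"))
    assert store.events == [{"page": "home", "site": "12345"}]

The point is the ratio between the layers: many small tests like the first one, fewer like the second, and only a thin layer of manual validation on top.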

 

Do away with the “edge effect” 

I remember a time when tests were seen as annoying and expensive, when teams had to stop everything after the release of a new feature in order to devote themselves entirely to correcting a major problem with that release: the infamous “edge effect” that everyone secretly expects, without knowing where it will occur… Things are different today: testing activities are an integral part of development. They make the product reliable and they also reassure everyone, so releases can be less stressful and more regular. How satisfying it is to see teams detect and correct regressions in the product before customers do!

 

It is with this mindset that we hope to continue providing highly reliable products which digital analysts can fully trust and count on. This kind of work is a long-term endeavour and requires continued effort, which is why we are always looking for meticulous minds who care about customer satisfaction and have an appetite for testing automation. So if this article speaks to you, drop us a line! 

Author

With more than 10 years of experience in software testing strategy and implementation in an Agile environment, Alexandre is responsible for industrialising development at AT Internet. His daily challenge: guide our dev teams through implementing tools and methods with the aim of guaranteeing regular and high-quality deliveries to our customers.
