Introduction to Agile Development  


Agile is a flexible software development methodology that divides planning, development, and testing into iterations, often called sprints. The duration of each sprint may vary, but it usually lasts 1 to 4 weeks. The application is delivered continuously throughout the sprints, and the goal of each sprint is to deliver and verify a specific piece of functionality of the application.

Once a specific piece of functionality is delivered and launched, it can be presented to the clients. This incremental delivery is considered the main advantage of the Agile methodology and contributes to many aspects of a project; the following are worth mentioning: 

Client Involvement 

Because the product is delivered piece by piece after each iteration, the client is more involved in the development process, provides constant feedback, and validates assumptions that may arise (due to a lack of documentation, contradictory statements, changing business requirements, etc.).

Quality Assurance and Testing 

Agile methodology allows for more advanced QA practices and procedures to be implemented, including the topic of this article (continuous testing), which significantly decreases the chances of a bug being introduced into a production environment. Testing is done throughout the entire development process, with each iteration having a new scope, which contributes to better software quality.

Change Requests 

Change requests during development, big or small, are often inevitable; however, the Agile methodology can handle them without significant additional effort. Even though there is initial planning at the beginning of the project, there is also specific planning and a task breakdown at the start of each new iteration, often called sprint planning, in which these change requests can be discussed and planned for implementation.

What is Continuous Testing? 


As the term implies, continuous testing involves continuously testing the application throughout the entire development process. This is best achieved by integrating automated tests into the software delivery process (CI/CD pipelines) and combining the code deployment and testing processes. This practice ensures that the automated tests will be executed immediately after each new build or deployment, giving us fast feedback about the quality of the code and detecting bugs and defects early in the development process. 

There are various ways to implement continuous testing. When deciding on test coverage, “the more, the better” is the approach often taken, including all automated test cases in the pipeline. This has its benefits but may also have a negative impact (as we will see further down the article). Either way, carefully selecting the test cases is undoubtedly a key point when considering continuous testing. 

What are the Benefits of Continuous Testing? 


Fast Feedback and Early Bug Detection 

By implementing continuous testing, the development team receives fast feedback on the quality of their code. Once the automated tests are done, a report is generated, signaling either that the tests passed and the delivery process can move forward, or that one or more tests failed, in which case the process is stopped and the failure reason is inspected.

Risk Reduction 

The application is continuously tested with a variety of tests, minimizing the risk of faulty code or defects.

Cost Effectiveness 

The entire goal of this workflow is to detect bugs early in the process, which directly contributes to the overall cost-effectiveness of the project; the sooner the bug is detected and resolved, the less it will cost.

Faster Delivery  

Product delivery is much faster because there is no need to notify the QA team and wait for them to start the testing process manually after each code change. Instead, the code is automatically tested right after each new application build, giving real-time feedback to the developers.

Transparency 

Since the automated tests are triggered and executed via the CI/CD pipeline and not locally by a developer or a tester, the testing results are also presented there. Anyone with proper access (including the stakeholders) can view the current and previous test runs and the results.

Best Practices of Continuous Testing 


Early Implementation 

To obtain the maximum benefits of Continuous Testing, it is essential that this technique be implemented, or at least planned, early in the development process. This will allow early bug detection and prevent faulty code in the project's codebase. Remember, the earlier the bug is detected, the less it costs to fix it.

Covering as Many Test Types as Possible 

A key aspect of the continuous testing technique is the choice of automated test types.
If we follow the previous practice and introduce continuous testing at the earliest development phase, we should start with unit and integration tests, as these can be written from the very beginning of development. 
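
As a minimal illustration, a unit test can be only a few lines long. The following pytest sketch assumes a hypothetical calculate_discount function standing in for any piece of business logic in the application.

```python
# test_discount.py - a minimal pytest unit test sketch.
# calculate_discount is a hypothetical example of application business logic.

def calculate_discount(price: float, customer_years: int) -> float:
    """Apply a 10% discount for customers with 3 or more years of history."""
    if customer_years >= 3:
        return round(price * 0.9, 2)
    return price


def test_discount_applied_for_loyal_customer():
    # Customers with 3 or more years of history get 10% off.
    assert calculate_discount(100.0, 3) == 90.0


def test_no_discount_for_new_customer():
    # New customers pay the full price.
    assert calculate_discount(100.0, 1) == 100.0
```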

As the application grows, we can introduce functional testing, which, depending on the application’s nature, can be API, UI, or both. In addition, we can add regression and smoke tests. 
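
For example, an API-level smoke test can be as small as the sketch below, which uses the requests library together with pytest; the base URL and the /health endpoint are hypothetical placeholders.

```python
# test_api_smoke.py - a minimal API smoke test sketch using requests and pytest.
# BASE_URL and the /health endpoint are hypothetical placeholders.
import os

import requests

BASE_URL = os.getenv("BASE_URL", "https://testing.example.com")


def test_health_endpoint_is_up():
    # Smoke test: the service responds and reports a healthy status.
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
```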

Performance and security tests should also be added once that is possible. They typically belong to a later phase of the development process, since a stable version of the application is needed to obtain valid results. 
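
As a rough sketch of what a later-phase performance test might look like, here is a minimal Locust user class; the endpoints and the load profile are hypothetical and would need tuning for a real application.

```python
# locustfile.py - a minimal Locust performance test sketch.
# The endpoints and the load profile are hypothetical placeholders.
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Simulated users wait 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def browse_products(self):
        self.client.get("/products")

    @task(1)
    def view_home(self):
        self.client.get("/")
```

In a pipeline, such a test would typically run in headless mode for a fixed duration, for example with something like: locust -f locustfile.py --headless -u 50 -r 5 --run-time 2m --host https://testing.example.com.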

A variety of testing types will increase the application’s testing coverage and the chance of bug detection. 

Choosing the Correct Tools

Choosing the correct tools is a very important aspect of the continuous testing process. The nature of the application needs to be analyzed properly, along with the version control system and the CI/CD pipelines, in order to choose the correct tools. All tools involved in this workflow need to be compatible with one another.

Implementing Continuous Testing in CI/CD pipelines 

At which stage of the CI/CD process do we trigger the automated tests? Usually this is right after deployment to the testing environment. While the tests are executing, further deployment (to stage/production) is paused. When all relevant tests pass, the next stage (deployment to stage/production) is triggered automatically. 

If any of the relevant tests fail, further deployment (to stage/production) is blocked. However, there should be an option to trigger the deployment manually even if there is a failure.  

This can be helpful in situations where a hotfix or an important emergency update must be released to the production environment and we are sure that the failure is a false positive, or we are simply willing to accept the risk of deploying a bug to production because the hotfix or emergency update has greater priority. 
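
The gating logic itself is normally expressed in the CI/CD tool's own configuration, but as a tool-agnostic sketch, the Python script below illustrates the idea: run the test suite, block promotion on failure, and allow an explicit manual override. The ALLOW_DEPLOY_ON_FAILURE environment variable is an assumed convention, not a standard.

```python
# run_test_gate.py - a tool-agnostic sketch of a test gate inside a delivery pipeline.
# ALLOW_DEPLOY_ON_FAILURE is a hypothetical manual-override flag.
import os
import subprocess
import sys


def main() -> int:
    # Run the automated test suite and produce a JUnit-style XML report artifact.
    result = subprocess.run(
        ["pytest", "tests/", "--junitxml=reports/results.xml"],
        check=False,
    )

    if result.returncode == 0:
        print("All tests passed - promotion to stage/production may proceed.")
        return 0

    # Manual override for hotfixes or emergency releases.
    if os.getenv("ALLOW_DEPLOY_ON_FAILURE") == "true":
        print("Tests failed, but the manual override is set - proceeding at our own risk.")
        return 0

    print("Tests failed - blocking further deployment.")
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

A non-zero exit code is what the pipeline interprets as "block the next stage".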

Setup Proper Reporting System 

The development and QA teams should monitor the reports generated by continuous testing. The reports must be easily accessible and should provide data about the test results. If any test fails, there should be a proper alerting system so the issue can be addressed immediately. The report should also contain information about the failure (ideally a recording of the failed test). 
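
Most test runners can emit a JUnit-style XML report that the pipeline stores as an artifact. A minimal sketch of turning such a report into an alert might look like the following; the report path and the webhook URL are hypothetical placeholders.

```python
# alert_on_failures.py - a sketch of parsing a JUnit-style XML report and alerting on failures.
# REPORT_PATH and WEBHOOK_URL are hypothetical placeholders.
import xml.etree.ElementTree as ET

import requests

REPORT_PATH = "reports/results.xml"
WEBHOOK_URL = "https://chat.example.com/webhook/qa-alerts"


def main() -> None:
    root = ET.parse(REPORT_PATH).getroot()
    # JUnit-style XML may wrap individual <testsuite> elements in a <testsuites> root.
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]

    failures = sum(int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites)
    if failures:
        message = f"Continuous testing: {failures} test(s) failed, deployment is blocked."
        requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)


if __name__ == "__main__":
    main()
```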

Test Case Maintenance 

The tests must be regularly updated and maintained to match new requirements or code changes. Existing tests will likely require maintenance as long as there is active development. 

Parallelization 

One of the continuous testing challenges is the execution time of the tests. The more testing coverage we have, the higher the execution time. This will become a problem when the Continuous Testing flow is integrated into the Continuous Integration and Continuous Delivery process, and it will cause a delay in the deployment process.  
To tackle this issue, parallel test execution should be introduced. This approach will significantly lower the execution time and avoid bottlenecks in the deployment process.
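
With pytest, for example, the pytest-xdist plugin (pytest -n auto) is a common way to spread tests across CPU cores. As a more general sketch, independent test suites can also be executed in parallel processes, as below; the suite paths are hypothetical placeholders.

```python
# run_suites_in_parallel.py - a sketch of running independent test suites in parallel.
# The suite paths are hypothetical placeholders.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

SUITES = ["tests/api", "tests/ui", "tests/regression"]


def run_suite(path: str) -> int:
    # Each suite runs in its own pytest process and writes its own report.
    report = f"reports/{path.replace('/', '_')}.xml"
    return subprocess.run(["pytest", path, f"--junitxml={report}"], check=False).returncode


def main() -> int:
    with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
        return_codes = list(pool.map(run_suite, SUITES))
    # Fail the gate if any suite failed.
    return 1 if any(return_codes) else 0


if __name__ == "__main__":
    sys.exit(main())
```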

Consider Headless Browser Execution! 

If we are developing a web application and have to execute tests in the browser, headless browser execution is another option to consider.  
Headless browser execution also lowers the execution time significantly, as the browser operates in the background without rendering the graphical interface. 
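
With Playwright's Python API, for instance, headless execution is a single launch option; the URL and the expected page title below are hypothetical placeholders.

```python
# test_login_page_headless.py - a minimal headless browser test sketch using Playwright.
# The URL and the expected title are hypothetical placeholders.
from playwright.sync_api import sync_playwright


def test_login_page_loads_headless():
    with sync_playwright() as p:
        # headless=True runs the browser without rendering a graphical interface.
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://testing.example.com/login")
        assert "Login" in page.title()
        browser.close()
```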

Tracking and Analyzing Metrics 

Once a continuous testing process is implemented, it is crucial to track metrics such as total execution time, the current test coverage of the application, the test types currently implemented, and the failure rate.
Why do we need to track all this? To improve the process, we need to detect its weaknesses; a minimal tracking sketch follows the list below.  

  • If the execution time is too high and is causing a delay in the deployment process, then we need to look into parallel test execution or identify other factors that contribute to the high execution time.  
  • If the test coverage is too low or we need to include some crucial test types/cases, we need to allocate more resources to create more automated tests and include them in the process.  
  • If some specific test cases have a very high failure rate, not because of bugs but because of other factors such as bad data, then we need to improve those tests and lower the failure rate to avoid false positives. 
  • If some specific test cases have a high failure rate because of bugs, then we need to inform the development team to pay more attention to that part of the code as we may have repeating issues.
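
A minimal sketch of such tracking, computed from each run's JUnit-style report, might look like the following; the report path and the metrics log file are hypothetical placeholders.

```python
# track_metrics.py - a sketch of collecting simple continuous testing metrics per run.
# REPORT_PATH and METRICS_LOG are hypothetical placeholders.
import csv
import datetime
import xml.etree.ElementTree as ET

REPORT_PATH = "reports/results.xml"
METRICS_LOG = "reports/metrics.csv"


def main() -> None:
    root = ET.parse(REPORT_PATH).getroot()
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]

    total = sum(int(s.get("tests", 0)) for s in suites)
    failed = sum(int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites)
    duration = sum(float(s.get("time", 0.0)) for s in suites)
    failure_rate = (failed / total * 100) if total else 0.0

    # Append one row per run so execution time and failure rate trends can be analyzed later.
    with open(METRICS_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), total, failed,
             f"{failure_rate:.1f}", f"{duration:.1f}"]
        )


if __name__ == "__main__":
    main()
```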

Implementing Continuous Testing in Agile and Overcoming the Challenges 


Every workflow comes with its own challenges, and continuous testing is no exception. Let’s review some of them:

Automation Coverage 

To have a successful continuous testing workflow, a significant number of test cases must be automated. This is time-consuming, and sometimes not all test cases can be automated.

Test Data 

Since the entire continuous testing workflow is automated, it is essential that we manage the test data correctly. The test data must be generated before and erased after each new execution, which can require some effort in end-to-end automation.
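
With pytest, for example, per-test data can be generated and erased with a fixture. The sketch below uses a small in-memory store as a stand-in for the application's real API or database.

```python
# A sketch of generating test data before, and erasing it after, each test (pytest fixture).
# InMemoryUserStore is a hypothetical stand-in for the system under test's data layer.
import pytest


class InMemoryUserStore:
    """Hypothetical stand-in for the application's API or database."""

    def __init__(self):
        self.users = {}

    def create_user(self, name: str) -> int:
        user_id = len(self.users) + 1
        self.users[user_id] = name
        return user_id

    def delete_user(self, user_id: int) -> None:
        self.users.pop(user_id, None)


store = InMemoryUserStore()


@pytest.fixture
def temporary_user():
    # Generate fresh test data before the test...
    user_id = store.create_user("ct-temp-user")
    yield user_id
    # ...and erase it afterwards so the next execution starts from a clean state.
    store.delete_user(user_id)


def test_temporary_user_exists(temporary_user):
    assert temporary_user in store.users
```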

Execution Time 

Depending on the application size and the automation coverage, the execution time might become a problem for the deployment process: since the deployment process waits for the test results, long test runs extend the overall deployment time.

False Positives 

False positives become a problem if they appear often. If continuous testing frequently blocks the deployment process because of tests that fail for reasons other than bugs or defects (improper test data, environment slowness or instability, a faulty testing script, etc.), this must be addressed.

Increased Maintenance 

Since the deployment process is stopped in case of a failure, the automated tests implemented as part of the continuous testing workflow require immediate maintenance whenever an issue appears.  

  • The pressure to automate as many tests as possible also adds to the increased maintenance.  
  • The more tests are automated, the higher the maintenance requirement. 

To implement a seamless and robust continuous testing workflow, one that makes a significant contribution to the overall quality of the application and the entire delivery process, we need to predict and address possible issues before they occur. How do we achieve this? 

Step 1: Choosing the Tools 


The first step is to choose the proper tools for creating automated test scripts. If we were choosing a tool for standard automated testing rather than continuous testing, we would only evaluate it based on the time and effort needed to build and run the automated tests and whether our QA or dev team is familiar with it. In our case, we need to consider other aspects as well, such as: 

  • The reporting capability (we cannot watch the automation or see the results locally; we must store each execution report as a test artifact and view it from the delivery pipeline). 
  • The level of detail in the report (in case of a failure, we must clearly see the failing step and the failure reason). 
  • The compatibility with the current CI/CD flow. 
  • The ability to execute test suites from the command line. 
  • The operating system support (we may build the tests on Windows machines, but the server where the tests are executed could be running Linux). 
  • The ability to run tests in parallel. 

All this must be evaluated before we even start with the creation of automated tests. 

It is not unexpected that different testing types have different tools for creating and executing the testing scripts. For example, unit tests can be created with JUnit, functional UI tests can be created and executed with Selenium, Playwright, or Cypress, performance tests with JMeter or Locust, API tests via Newman, and so on.  

Step 2: Developing the Automated Tests 


The next step is to start with the actual development of the automated tests. In this step, the QA/dev team must specify the automated test types and prioritize the most critical test cases.
If there are critical test cases that cannot be automated, they must be addressed and tested manually. That said, if the goal of the project is a full continuous testing workflow, then more resources should be allocated to automated test case creation to avoid gaps in automation coverage. 

A stable environment and valid test data are other prerequisites for a proper continuous testing workflow. The test automation environment must be stable and fully available during execution, and the data generated by the testing scripts must, in most cases, be erased after execution. Preferably, there should be a separate environment for automation. It is good practice for that environment to be created on each new execution rather than being up constantly, since we do not need it when tests aren’t running. 

Step 3: Implementing the Automation 


The third step is to implement the automation execution in the CI/CD pipeline and set up the workflow that officially initiates continuous testing. This step is very important because even minor errors here may lead to an incorrect implementation.  

Let’s say we trigger the automated tests on the testing environment before the new build is deployed. The tests would be executed on the old build and would not provide feedback for the new build. That is why the automated tests must be triggered right after the new build is deployed to the testing environment. 

Another key point here is that the deployment process must be stopped (to stage and production) until the execution is done and passed. 

Suppose the build moves to stage and production while tests are still running. In that case, the whole continuous testing setup will not be useful, as the initial goal is to detect bugs and defects before they reach the production environment. Additionally, in the case of a microservice infrastructure, it is good practice for the QA team to label the tests related to each microservice, to avoid running tests unrelated to the specific build. 

Let’s say we have five separate microservices and we trigger a new build for one service.
In this case, we do not need to run the test cases for all five microservices; instead, we should limit the execution to those test cases related to the specific microservice for which the new build was triggered. This saves resources and avoids overly long test execution, which ultimately speeds up the deployment process. 
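
With pytest, for instance, this kind of labeling can be done with markers; the service names below (orders, payments) are hypothetical.

```python
# test_services.py - a sketch of labeling tests per microservice with pytest markers.
# The marker names "orders" and "payments" are hypothetical service labels.
import pytest


@pytest.mark.orders
def test_order_total_is_positive():
    # Test case tied to the hypothetical "orders" microservice.
    assert 3 * 9.99 > 0


@pytest.mark.payments
def test_refund_amount_is_not_negative():
    # Test case tied to the hypothetical "payments" microservice.
    assert max(0.0, 25.0 - 30.0) == 0.0
```

The markers would be registered in the project's pytest configuration, and the pipeline for a given service would then select only its tests, for example with pytest -m orders.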

As the automation coverage grows and more test types are handled, the execution time will also grow, delaying the entire deployment process, so this approach becomes increasingly useful. 

When it comes to false positives, there is no generic guideline, as each case is very specific to the application. The only thing to keep in mind is that we do not need to automate scenarios at all costs. If some scenarios are not very stable, often require maintenance, or produce inconsistent results, those cases are best executed manually. 

Step 4: Automated Test Suites


The last challenge to overcome is the increased maintenance of the automated test suites that comes with an implemented continuous testing workflow.  

As we mentioned above, the more automated tests we have, the higher the maintenance requirement, but at the same time, the need for manual testing will be significantly lower, which means we will have the required resources allocated to automated test case maintenance.  

Manual testing often requires more time and resources to execute, so this seems like a good trade-off.

Standard Automation vs Continuous Testing 


Both standard automated testing and continuous testing have automated scripts at their core. In both workflows, we rely on their execution and reporting. We must prepare a specific testing environment and set of data to execute the tests. 

In such a general overview, the two look very similar, so what is the difference?
The key difference, and the main advantage of continuous testing, is the way the automation workflow is set up. 

In regular automation the scripts are executed manually when that is required (as part of functional or regression testing) and the results must be shared with other team members, so we still have a human factor in the entire process.  

On the other hand, the continuous testing workflow aims to remove the human factor as much as possible. The automated tests are triggered automatically throughout the entire software delivery process. The tests are integrated into the CI/CD pipelines and are part of the deployment process. On each new build, a whole set of different automated tests is triggered; if any test fails, an alerting system notifies the developers about the failure and the deployment process is stopped, preventing faulty code from moving further in the delivery process.

Conclusion 


Continuous testing is an extremely valuable asset in the Quality Assurance process and should always be considered when starting a new project, or even when implementing it into an existing one. However, it should not be implemented at all costs. The person responsible will have to carefully analyze whether continuous testing is a good fit for the project at hand. 

This workflow is usually more suitable for long-term projects; it may benefit short-term projects as well, but that needs to be analyzed first. 

The goal of this article is to point out the common challenges that may arise during continuous testing implementation and to offer ways to overcome them. Being prepared before an issue happens will significantly reduce the time and effort required to overcome it and achieve your goals. 

Author

Stefan Gulicoski

QA Engineer