

Best Practices in Automated Software Testing: Four Key Steps to Success

By Sergey Mikhalev

Overview

Most software programmers do not even think about testing what they have built until the final stage of development is complete. Unfortunately, this approach does not always ensure that the resulting application is flexible, maintainable and stable. Savvy developers, on the other hand, consider how the application will be tested right from the conceptual stage. Doing so sometimes changes the initial application vision and how it achieves the business objectives, but it surely produces a better overall software product and user experience.

Building a solid and mature test strategy requires vision, time, dedication and a roadmap. It also requires the engagement of all development team members in the process before any code gets written.

In this white paper, we will discuss the four key steps and best-practice tips for building an effective software testing strategy and plan, and why it is important to continually revisit and revise it.

Step 1. Automated or Manual Testing?

While automated testing appears to be more appropriate for current day technologies, in fact, most development efforts require a mix of automated and manual testing. Furthermore, automated testing is more appropriate for larger development projects than smaller ones. Here’s why.

Automated testing involves creating another software development project...on top of your current software development project. You will need developers to create the testing modules. And, they will need time to do it right – before any testing ever begins.

The developers you select to create the automated testing application up front should be as highly qualified as those who developed the actual application. Not only must they know the application thoroughly, but they also need to clearly understand the business model and have the imagination to create good test scenarios. Google refers to its test engineers as "product experts, quality advisers, and analyzers of risk." i

Finally, if you choose to do automated testing, it is always a good practice to split the work between one group of people who develop the test scenarios and another group that automates them. Without such a division of work, it will be very difficult to achieve honest and unbiased testing.

Best Practices #1:

  1. Think about automated testing as a development project which obviously requires developers.
  2. Separate the creation of the test scenarios from the automated testing phase itself. Those are different roles and should be done by different people as a form of checks and balances.

Step 2. What should be automated?

A common mistake in test automation is running all the tests through the UI or with record-and-playback tools. There are two key reasons not to do this: speed and stability. UI tests tend to be very slow, since every layer of code, including the UI itself, gets exercised. Creating preconditions only through the UI is also a big problem, because the same features end up being tested repeatedly instead of covering the complete feature set, and this is especially hard with record-and-playback tests. Such checks are easier to do manually: locate the problem, record the test, then try it again… rather than attempt to decipher what is going on inside generated tests.

The better approach is to automate test scenarios via the API of the application instead of the UI. All valuable tests of business transactions, functions and algorithms should be handled this way, using REST services, SOAP services or other such interfaces. If the application has no API and only a UI, there could be issues with its "internal quality." ii If faced with this situation, confer with the development team to resolve it.
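
For illustration, here is a minimal sketch of such an API-level test in Python; the test-environment URL, the /api/orders endpoint and the payload fields are hypothetical stand-ins, not part of any real application described in this paper.

    import requests

    BASE_URL = "https://test.example.com"  # hypothetical test environment

    def test_create_order_returns_calculated_total():
        # Hypothetical order payload; field names are illustrative only.
        payload = {"customer_id": 42, "items": [{"sku": "A-100", "qty": 2, "price": 9.99}]}

        response = requests.post(f"{BASE_URL}/api/orders", json=payload, timeout=10)

        # The business rule (the order total) is verified directly against the
        # API response, without driving any UI.
        assert response.status_code == 201
        assert response.json()["total"] == 19.98

A test like this runs quickly once the service is deployed, and it exercises the business logic without depending on any UI layout.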

It is still necessary to do UI testing. For web applications, it is essential that they work correctly in different browsers, on different versions of the operating system, on different platforms and even at different screen resolutions. UI testing should involve separate, dedicated test scenarios to ensure this.
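
As a sketch of what such a dedicated UI scenario might look like, the following example uses Selenium WebDriver with pytest to run the same check in two browsers; the page URL and element IDs are hypothetical, and the two browsers are only examples of the matrix you might cover.

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    @pytest.fixture(params=["chrome", "firefox"])
    def browser(request):
        # Run the same UI scenario in each supported browser.
        driver = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
        yield driver
        driver.quit()

    def test_login_form_renders(browser):
        browser.get("https://test.example.com/login")
        # Only UI concerns are checked here; business rules stay in API tests.
        assert browser.find_element(By.ID, "username").is_displayed()
        assert browser.find_element(By.ID, "password").is_displayed()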

Finally, you need to maintain and update your UI tests as the application and UI standards change, or the tests will fail in operation.

Best Practices #2:

  1. Automate business logic testing through the API.
  2. Create a separate, targeted UI test for all functions related directly to the UI.
  3. Always create very targeted test scenarios. Divide and conquer.

Step 3. What about a Unit Test?

A unit test suite is, in effect, a library of code samples that demonstrates how your code works; it is the first client of your code. A unit test (UT) is also an automated test. One difference between a UT, an API test and a UI test lies in scope: a UT covers a small piece of code, such as a class, a method in a class or a small component, while API and UI tests cover multiple parts of functionality. Another difference is that UTs always work against the internal project code base. That is why UTs enable you to test almost everything… even those functions not accessible directly from the API.
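
To make the difference in scope concrete, here is a minimal unit-test sketch in Python; the calculate_discount() function and its business rule are hypothetical stand-ins for one small piece of project code.

    import unittest

    def calculate_discount(total, is_loyal_customer):
        # Hypothetical rule: loyal customers get 10 percent off orders over 100.
        if is_loyal_customer and total > 100:
            return round(total * 0.10, 2)
        return 0.0

    class CalculateDiscountTests(unittest.TestCase):
        def test_loyal_customer_over_threshold_gets_discount(self):
            self.assertEqual(calculate_discount(200, True), 20.0)

        def test_new_customer_gets_no_discount(self):
            self.assertEqual(calculate_discount(200, False), 0.0)

    if __name__ == "__main__":
        unittest.main()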

The main advantage of unit tests is that they actually test the design of your code.

Ideally, each developer should create appropriate unit tests to properly check his or her own work throughout the development cycle. API and UI testing typically take place after the main code is done. If an API test fails without a corresponding UT failure, there probably are not enough UTs, and you should create the missing ones. Every bug fix should also come with a UT that reproduces the bug.
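
As a sketch, a bug-fix UT might look like the following, assuming a hypothetical defect in which parse_quantity() failed on input surrounded by whitespace.

    def parse_quantity(raw):
        # Fixed implementation: tolerate surrounding whitespace in the input.
        return int(raw.strip())

    def test_quantity_with_whitespace_is_parsed():
        # Reproduces the original failing input so the defect cannot silently return.
        assert parse_quantity(" 3 ") == 3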

It is very difficult to say what percentage of UT coverage is adequate for a software project. Reaching 100 percent coverage is almost impossible, and even that would not ensure that all possible variations have been tested. A good rule of thumb is that the size of the test code base should approximate that of your project code base. The most important measure of successful unit testing is how well your application performs once it passes the complete UT suite.

Best Practices #3:

  1. Create an appropriate set of unit tests to check the code properly, and write them at the same time as the main code, not after it.
  2. When fixing a bug, create a unit test that reproduces it.

Step 4. Evolving Your Automation Testing

You have to evolve your test strategy if you are going to be continually successful. Doing so requires an understanding of the characteristics of good test automation:

  1. It should be understandable.
  2. It should be consistent.
  3. It should be faster to write than to check manually.

The first is the most important. All the tests should be readable and understandable to everyone, not only to the person who actually wrote them. There is a simple way to check this: if a non-technical person 'gets' it, then it is most likely a good test.

Then it must be consistent. If a test fails, it should always fail; if it passes, it should pass every time. Otherwise you will spend a lot of time trying to analyze the results. That also means you need to be sure the tests reflect the actual state of the application. To do this, change your development process so that the tests run in it periodically.

Finally, the test framework should make it possible to write a test faster than checking the same behavior in the UI. Surely, executing and rerunning an automated test is always faster; the goal is to build such a framework or test ecosystem from the beginning.
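
One possible shape for such an ecosystem is a set of small helpers that create preconditions and perform actions through the API, so that a new scenario takes less time to write than to check by hand in the UI; the endpoints and field names below are hypothetical.

    import requests

    BASE_URL = "https://test.example.com"  # hypothetical test environment

    def create_customer(name):
        # Create a precondition through the API instead of clicking through the UI.
        resp = requests.post(f"{BASE_URL}/api/customers", json={"name": name}, timeout=10)
        resp.raise_for_status()
        return resp.json()["id"]

    def place_order(customer_id, sku, qty):
        resp = requests.post(
            f"{BASE_URL}/api/orders",
            json={"customer_id": customer_id, "items": [{"sku": sku, "qty": qty}]},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    def test_repeat_customer_order_is_accepted():
        # The scenario reads almost like the manual test plan it replaces.
        customer_id = create_customer("Test Customer")
        order = place_order(customer_id, "A-100", 1)
        assert order["status"] == "accepted"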

Best Practices #4:

  1. Keep your tests simple and readable so that non-technical staff can understand them.
  2. Run the tests periodically.
  3. Evolve your test framework to provide the ability to automate test scenarios faster than checking them manually.

______________________________
i - How Google Tests Software
ii - Martin Fowler, http://martinfowler.com/bliki/TradableQualityHypothesis.html and http://martinfowler.com/bliki/FlaccidScrum.html