Automated checks consist of a machine that executes checks or test cases automatically by reading their specification in some form: scripts in a general-purpose or tool-specific programming language, spreadsheets, models, etc. The goal of automating is to increase testers’ “bandwidth”: by automating certain repetitive processes, testers can devote themselves to other activities.
Here are just some of the benefits of automation:
- Run more tests in less time, speeding up time to market and increasing coverage
- Improved product image and increased user confidence
- Capability of multi-platform execution
- Evaluation of application performance in different versions and over time
- Systematic execution: the same things are always tested in the same way, without skipping any verification step
- Earlier detection of errors leads to lower correction costs
- Enhanced tester motivation, thanks to the time freed up for more challenging pursuits
- Facilitation of continuous integration
One of the desired objectives of automation is to receive feedback on the status of the software’s quality as soon as possible, and reduce costs not only associated with testing but also development.
It is well known that automating chaos only brings faster chaos. For this activity to succeed, it’s essential to carefully select which cases to automate at each level, picking those that promise the highest return on investment.
The typical problem is that when most people think of automation, what comes to mind is automating the user’s actions at the graphical interface level, but that is neither the only nor the best option. To understand why, take a look at Mike Cohn’s automation pyramid.
Cohn’s pyramid establishes that there are various levels of checks, indicating to which degree they should be automated. The ideal situation would be to have:
- Many automated unit tests/checks during development, since this is the primary point for detecting failures. If a feature fails at this point, tests/checks at the subsequent levels (integration, API, etc.) are likely to fail as well
- Some tests/checks at the API level and integration of components and services, which are the most stable candidates for automation
- Fewer automated GUI tests/checks, as they are harder to maintain, slower to execute than the others, and dependent on many other components
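To make the base of the pyramid concrete, here is a minimal sketch of a unit-level check using Python’s standard `unittest` module. The function under test, `apply_discount`, is a hypothetical example invented for illustration, not code from any real project:

```python
import unittest


# Hypothetical function under test: a price calculator with a discount rule.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # A 25% discount on 100.0 should yield 75.0.
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        # A 0% discount leaves the price unchanged.
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        # Out-of-range percentages are rejected rather than silently applied.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main(exit=False)
```

Checks like these run in milliseconds with no environment to set up, which is exactly why the pyramid puts so many of them at the base.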
Performing GUI tests/checks provides greater peace of mind, since they check functionality end to end, but it’s not advisable to aim for this being the only kind of automated check, nor the majority of the test set.
For reference, Google claims to have 70% of its automated checks at the unit level, 20% at the API level and only 10% at the GUI level.
The objective of this scheme is to achieve ever greater test coverage over time while investing the same amount of resources.
There is a very interesting problem that occurs at this level. The design and programming of unit test cases has always been a thorn in the software developer’s side. Unit testing is a fundamental step before adding a piece of code to the system, but there isn’t always enough time, enough resources, or the will to do it. While this level of testing is recognized as a good practice for improving code quality (and avoiding technical debt), it is also true that when designing and planning a programming task, things not considered absolutely fundamental are often left out, and so unit tests may fall by the wayside. At Abstracta, we strongly recommend not leaving them out.
Maybe this is the deeper problem: unit testing is not considered part of development and ends up being regarded as an optional support activity.
Clearly, unit automated checks are extremely helpful for a continuous integration scheme in which errors are identified as soon as possible.
Furthermore, it is essential to define a good strategy for automated checks following Mike Cohn’s pyramid: a strong base of unit tests, some tests at the service level, and only the most critical ones at the graphical interface level. It’s important to always consider the maintainability of the tests in order to sustain a good cost-benefit ratio.
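As a contrast with unit-level checks, an API-level check exercises a running service over HTTP. The sketch below is a self-contained assumption rather than any real service: it spins up a stub endpoint in-process (standing in for a deployed backend) and then verifies the status code and JSON payload the way a service-level check would:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


# Hypothetical service under test: a tiny HTTP stub standing in for a real backend.
class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging so check output stays readable.
        pass


def check_status_endpoint(base_url: str) -> bool:
    # API-level check: call the endpoint and verify both status code and payload.
    with urllib.request.urlopen(base_url + "/status") as resp:
        return resp.status == 200 and json.load(resp)["status"] == "ok"


# Bind to port 0 so the OS picks a free port, then run the server in a thread.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = check_status_endpoint(f"http://127.0.0.1:{server.server_address[1]}")
print(ok)
server.shutdown()
```

Checks at this level are slower than unit checks but far more stable than GUI automation, since they don’t depend on layout, locators, or rendering.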
Automated checks are the most important tests at the functional level. Truth be told, it is not possible to achieve continuous integration without them, so teams must seek to execute them frequently.