Introduction
Test automation is not a brand new thing. On the contrary, it is a well-known and fairly old part of the software development process, so a lot of people have both good and bad experience with it. As a result, there is a common set of best practices which has been formulated in different ways. Different people concentrate on different aspects of test automation: some concentrate on the technical parts while others pay more attention to higher-level things. As always, the truth is somewhere in between.
So, what are the best practices in test automation?
For this post I've collected several posts/articles covering this topic (the list can be found in the References section of this post) and I'll try to combine them into one common list of practices. I'll mainly concentrate on the top-most UI level of automation; however, a lot of practices are fully applicable to any level of automated testing. In order to list the practices in a structured way we should define the stages where each specific practice is applicable. This will also help us identify the proper sequence to apply practices in. Generally, we can define the following test automation stages:
- Analysis - the major stage where we define what to automate and the major approaches we should use for automation. The main outputs of this stage are:
- The scope of test automation
- Priorities for test automation
- Major list of technologies to use
- Planning - at this stage we define who should perform automation and when it should take place. It is based on the analysis stage results, where the scope and expectations should already be defined. Now we define how automation should work and set up the initial process. The outputs of this stage are:
- Finalized scope for test automation
- The list of roles performed during test automation
- Interaction between different roles within the project
- Entire process and standards
- Goals and metrics to measure how well we achieve those goals
- Timings
- Design - this is the stage where we start creating test scenarios. Here we create the initial scenarios which need to be covered with automation. As the output of this stage we get:
- The set of scenarios to automate
- The set of test data we need for automation
- Initial framework to use for test automation
- Implementation - this is the major stage where we define how automated tests should work and where the automated implementation is developed. The main output of this stage is a set of automated tests which are ready to use.
- Execution - this is the stage where we run the automated tests and process the results. The output of this stage is the list of passed/failed tests as well as the list of problems revealed.
So, let's list those practices based on different stages.
Analysis
1.1 Know your objective
Test automation can be involved in many different areas, aiming at different goals. It is also an investment which should somehow pay back. So, in order to get the expected payback we should clearly understand what exactly we want to achieve. This way we know what to automate, how, and which areas to pay attention to. Depending on the goals we may prefer a small but fast suite of tests or a wide range of tests covering a vast area of application functionality, and we should choose the approach accordingly. So, we definitely need to have some objectives first. This general practice can be split into details.
1.1.1 Define Scope of Automation
This is one of the most important objectives we should define. The scope helps us identify what we are going to automate and where we are at any point in time. It also tells us whether each specific module, functionality or other area should be taken into automation or not. Generally, the scope is the answer to the major question: what should be automated at all?
1.1.2 Perform a Cost-Benefit Analysis
Once we have the scope defined we can identify which tests we need to have automated, and one of the major factors here is priority. E.g. some part of the application is expected to be updated more frequently, thus it requires more frequent testing. Another part may be more critical for the business and so must be checked frequently. Some other parts can be covered fast with minimal effort. All those factors (in addition to many others which may appear) help to define what should be automated first, what's next, etc. For this purpose we should estimate the effort we put into automating each specific part and compare it to the benefits we get in return. If we can spend a small amount of time and cover a huge application area, why shouldn't we do that first and free our resources from that routine part? At the same time, if some application part is critical but its automation requires a lot of effort and resources, then we should probably take something else first. In any case we shouldn't blindly automate everything we find on our way; we should always care about the value each specific test can bring us.
1.1.3 Start with the easy things first
Easy things are easier to do and require less effort than complex things (that's what makes easy things easy). There are many reasons to start with the easy things first. Some of them are:
- When we just start automation we may not be aware of many tricky things we will encounter later. Easy things let us concentrate on the basic aspects of our test automation solution.
- Easy things are a better way to gain the necessary experience before going to more complicated things.
- We should never forget that test automation should bring value, and the value normally comes when tests are performed without human interaction. So, the more tests we develop, the more testing effort we can delegate to machines, the earlier we return our investment in test automation and the more value we bring with it. Easy things require less time and fewer resources for that.
1.1.4 Automation sanity
Automated testing brings the biggest value when it runs as frequently as possible and reasonable. Ideally, testing should be performed after any change we make to the application under test; this way we confirm the application's correctness every time it changes. But testing is time-consuming, so in most cases it's simply wasteful to run all tests after some minor change. The compromise is to have a small set of tests which takes a reasonably short time and covers a large area of the application under test (maybe with high-level checks). That forms a kind of smoke or sanity suite which can be run against any new build and can be a part of continuous integration. This is the way we always keep a hand on the pulse and get quick feedback in case of a major problem, when entire application functionality stops working.
1.1.5 Regression tests are good for test automation
Any automation is targeted at delegating routine operations from humans to machines (this is one of the major purposes of automation, though not the only one), and test automation is not an exception. The most routine and boring part of testing is regression testing, so it perfectly fits automation because:
- it is a repetitive activity where the application under test is expected to behave the same way as before
- it should be performed on regular basis
- any other tests which should be performed not just for the current version but for future releases as well eventually become regression tests
1.1.6 Plan to Automate Tests for Both Functional and Non-Functional Requirements
Functional testing is definitely an important part of testing, but it's not the only one. There are many other testing types which can be performed using the same techniques and approaches. As examples we can take such testing types as security, installation, compatibility and configuration testing. They also come down to interacting with the application under test (similar to what functional testing does) but they have somewhat different checkpoints and different run conditions. So, when we define what to automate we should not forget about other testing types, as at any point in time any of them can be the most critical to the entire system.
1.1.7 Tests that check UI mechanics vs Tests that check application functionality
This is mainly about UI-level testing. In an ideal world, the only thing we should check at the UI level is that the data is rendered properly and completely. But in practice we still have to create tests which replay user scenarios. Thus, we have two major groups of UI-level tests:
- Tests that verify the UI layout itself
- Tests that replay user scenarios via the UI
These two groups serve different purposes:
- Replaying user scenarios is the major exercise of the application UI: the main target of UI testing and the area where UI-level testing is the only appropriate solution
- Layout verification is a kind of fast certification of the UI descriptions in automated tests: a form of unit testing verifying that the window definitions you have inside your solution are still up to date with the actual UI. It is very typical that the UI changes from time to time, and we should be able to detect that as fast as possible.
1.2 Select the Right Automated Testing Tool
Choosing the proper test automation tool is one of the most fundamental things in an automated testing setup. If we do it wrong we may fail the entire test automation effort. So, it is important to know what to look for while making the selection.
1.2.1 Choose a testing tool that answers your Automation needs
Tool selection normally happens after we have defined our test automation objectives and we know our system requirements and the entire scope of testing. It means that before starting any selection we should already have the list of:
- system requirements
- testing types to perform
- list of technologies to use
- list of infrastructure systems
1.2.2 Choose a tool set that reproduces the actual user experience most closely
Test automation is normally based on simulating user actions, so that the system under test should not see any distinction between actual user interaction and automated tests. The key thing is that automated testing is just a simulation which starts at some certain level. It means there is a probability that the test automation tool bypasses some layer which is triggered when real users operate the system under test. Some examples:
- Some custom or complex UI controls may have additional handlers for user events (like setting focus, moving the mouse over the control, etc.). Test automation tools usually work by sending specific system messages which trigger only some of those events.
- Web services normally have a client API which is either generated (especially in the case of SOAP) or has a fairly standard usage interface. The point is that this interface itself may contain mistakes which lead to errors while trying to use the service. At the same time, if you take a look at popular web service testing tools you may notice that they usually ignore the client part and are mainly based on simulating the final requests, with all the bells and whistles around building them.
- Some GUI (including mobile UI) testing tools require a software module to be built into the application under test so that the tool can access all elements directly, having knowledge of their internal structure. Such an approach is typical for solutions like Calabash, Robotium, White. The same possibility existed in early versions of TestComplete, when it didn't support some technologies from outside the application under test. The point is that we have to make a custom build of the application under test, and this build is not a production-quality build by definition.
- It's very likely that we can perform actions which cannot be reproduced in real life (e.g. in some cases I was able to modify a read-only field)
- It's very likely that we miss problems on a production-like system, either because we skip the affected level or because the final build simply works differently
1.2.3 Select the automation tool which is familiar to your resources
Obviously, when you choose a test automation solution you should make sure that the people who are supposed to work with it will pick it up quickly. That saves time on training and gives some guarantee that the test automation will be done at all.
1.2.4 Use the same tools as the development team
It is definitely a good practice which brings several major benefits:
- Minimized effort on tool set setup and configuration, as most things are already done during development infrastructure setup
- Less infrastructure support, as we don't need to maintain development and testing infrastructure separately
- Technology alignment gives an extra possibility to access the system under test code and re-use some modules, constants and other assets. This way we may be less sensitive to some changes (e.g. if we re-use constants we no longer care if the actual value changes) or detect potentially breaking changes long before the entire test suite run starts (interface changes, for instance, may simply lead to compilation errors)
- There is an ability to involve developers in building test automation. They may share practices and help make the test automation solution more flexible to potential changes.
1.2.5 An automation tool is important, but it is not the solution of everything
Every tool has a restricted applicability area and a restricted set of use cases. So, we should not expect a tool to do everything. If something is missing, it may be the responsibility of other automation tools or simply out of the automated testing scope. The major thing is that we shouldn't try to do things the tool was not designed to do at all.
1.2.6 Train Employees
Of course, it's good when we have a solution and people who are already prepared to use it, but that is not the common case. People need to get familiar with the application under test as well as with the nuances of the test automation solution, and it takes some time to pick everything up. So, the time for training people in test automation should also be taken into account. Otherwise, we may end up in a situation where people fail test automation before they realize how to utilize it properly.
1.3 Automated Testing is not the replacement for Manual Testing
Frequently I see questions like "Why do you still have manual testing?" or "Can automation fully replace manual testing?". The most confusing part is that manual and automated testing are treated as the same kind of testing. Well, in some cases that can be true (e.g. for API-level testing, where testing initially means using some kind of API), but in the general case there are deviations.
1.3.1 Manual vs Automated - Testing vs Checking
Automated testing usually goes through a pre-defined set of steps and checkpoints, while manual testing is flexible in its checks and can be targeted at various areas depending on the nature of the changes under test. At the same time, all testing includes a lot of routine activities where automated testing is very helpful. This difference makes automated testing a good addition to the manual testing process, but not a replacement. In other words, the more routine testing is delegated to machines, the more time and resources are left for the more flexible and thorough testing performed by humans.
1.3.2 Arrange a proper manual test process
Again, we should not forget that testing is a wider process than just performing test scenarios. It also includes analysis, test design and exploratory activities which are very hard to automate, or at least hard enough to make their automation too expensive. Also, there may be areas which are simply not covered with automation. For all these reasons we should not forget to set up a manual testing process covering the activities which automated testing does not cover. And this process should include automated testing as an equal part, next to ordinary manual testing: test automation should not be something standalone, it should be a part of one unified, solid process.
1.4 Keep Realistic Expectations
A lot of test automation projects fail due to improper expectations. So, in order to be successful we should clearly understand what is really covered by test automation and what should be done by other means. With proper expectations we can properly estimate the value automated testing should bring us.
1.4.1 Do Not Expect Magic From Test Automation
Automated testing can definitely bring benefits to the entire testing process, but don't expect more than it can physically achieve. It may reach higher precision for high-volume calculations, it may run in 24/7 mode, it may guarantee that something which worked before still works the same way after additional changes. But do not expect automated testing to do more without additional preparation. E.g. people often mention high velocity and fast feedback as an advantage of automated testing. That's a bit far from the truth, as there are a lot of restricting factors preventing automated tests from running faster (the velocity of the application under test, time for locating specific elements, additional time losses for data processing, etc.). And in general, having our testing automated doesn't mean we can forget about it. It is still a process which requires maintenance, corrections, extensions and other activities around it.
1.4.2 Manual and exploratory testing are much better than automated testing at finding bugs
The number of bugs found is frequently considered the major metric of testing quality. That is not really correct, but it is very convenient and very convincing to compare the number of bugs found before and after introducing automated testing. And it often appears that automated testing catches fewer bugs than manual testing. Why does that happen? The thing is that most bugs are normally introduced with modifications or new functionality in the application under test. That part is normally poorly covered by test automation, since it is a not-yet-finalized and thus unpredictable area of the tested application; at the same time, such new changes are a good subject for exploratory testing. So, the most buggy area is usually the one not covered by automated testing. If we measure the number of bugs found, it is more correct to compare the number of bugs found by automated and manual testing together against the number of bugs found by manual testing only, within the same time frame, against the same test suite and with the same resources. In those dimensions we'll see that automated testing saves a lot of time and resources by bringing confidence that the most routine parts of the application under test definitely work as expected, so manual testing can concentrate on edge cases, critical paths and exploratory testing, where human intelligence is more required.
1.4.3 Keep Functional Test Automation Out of the Critical Path
It's good to have critical functionality testing automated, but we should not forget that automated testing performs a repetitive and very restricted set of checks. Also, automated testing sometimes provides a distorted picture of the actual application state due to various false positives or spontaneous connectivity and environment issues. And there are operations which are too risky to be performed unattended (e.g. exercising payments in production environments). All those situations should either be assisted or handled completely manually.
Planning
2.1 Segregate Your Automated Testing Skills And Efforts
As soon as the number of people involved in testing exceeds one person, we inevitably encounter the fact that people are different. They have different experience, skill sets and areas where they specialize best. As a result, there are areas where one person does better than others, while someone else is better in a different area.
On the other hand, software testing itself requires different areas of knowledge, which can mainly be split into the technical and business domains. In most cases these are pretty unrelated to each other, especially for software which is not targeted at software engineering people. Each of those areas requires its own specific understanding of things to be done.
In order to get the maximal output from the team we should make people collaborate in such a way that each person mainly works in the area where he/she is most effective, filling the gaps by involving other people where they are more specialized. E.g. business analysts are good at product knowledge and can be the major source for deciding whether behaviour is right or wrong, but they are not required to keep in mind all the tricky moves you can do with the application. Test designers are normally good at defining checkpoints, but they often lack clarity on whether each specific combination is really correct, and they may lack the technical skills needed for the test automation implementation part. Test automation engineers are good at the technical part but weaker at test design and the business domain. Of course, ideally all those roles should be merged; but each specialization requires quite a wide range of knowledge, and it takes a while to merge this knowledge conglomerate into one person. That's why we should split activities between different people, taking into account the areas they are better at.
So, this set of best practices is about building the team and distributing activities and roles.
2.1.1 Build the right team and invest in training them
Of course, in order to get things right we need the proper people, so building the right team is always good practice. Actually, producing valuable outcome is what makes a team good, no matter the area the team works in. Another thing is that this valuable outcome appears only after some time: if you hire new people, it's hard to expect them to work at full speed immediately. That's why it's said that nine women will not deliver one baby in one month. Every activity has a lower bound before we should expect some outcome, and this is applicable to a testing team as well: that lower bound is the set of resources invested into the team to bring them up to speed. In the case of test automation we should invest not just in learning the system under test but also in the tool set used for automation, as there's a huge variety of tools and they are quite different. So, training people is always an investment which should be taken into account.
2.1.2 Hire a Dedicated Automation Engineer or Team
Modern methodologies tend to mix roles; in the case of test automation it can be combined with test design or development. Also, a lot of test automation tools still have record-and-playback features or other visualized approaches intended to involve non-technical people in test automation. But reality shows that:
- The bigger test automation grows, the more effort goes into modifying the existing solution rather than developing something new. So fast and easy test creation doesn't bring good value anymore.
- The more features are being developed, the more often higher-priority activities push automation aside. That situation never happens with an independent automation team, but with mixed roles it is a pretty typical case.
2.1.3 Get executive management commitment
Test automation can yield very substantial results, but it requires a substantial commitment to be successful. Do not start without commitment in all areas, the most important being executive management. In other words, test automation is not something which exists on its own: the decision should come, and the solution should be driven, from different sides, and management is one of those sides.
2.2 You Should Not Automate Everything
This is one of the most important topics in automated testing. Of course, it's good to have as much testing automated as possible, but the automation itself takes a while to develop, execute and maintain. We put effort into it, and all the time we should keep in mind that any effort we put in should bring some value back. If the outcome is not valuable enough then maybe we shouldn't pay too much attention to it, at least at first. So, this group of practices is mainly about always taking into account the value we get after introducing automation.
2.2.1 Choose the automation candidates wisely
Automated testing requires some resources to be invested into:
- development - we definitely need to spend some time to create automated tests
- execution - each automated test also takes some time to be performed. Automated does not necessarily mean fast: these characteristics are not directly related. Also, execution may require some complicated infrastructure
- maintenance - tests may be flaky or unstable for many reasons, and maintenance also takes resources
At the same time, good automation candidates can be recognized by criteria such as:
- Priority - the higher the priority of a feature, the more important its test is
- Coverage - the higher coverage a test provides, the less area is left for other testing activities; small effort here can optimize the entire scope significantly
- Frequency of use - the more frequently a specific feature is used, the more routine is covered by automating it. Also, high frequency of use compensates the automation costs faster due to savings at execution time
- Resource costs - the cheaper a test is to create/execute/maintain, the more desirable it is for test automation
Generally speaking, since we always have constraints, we should choose the set of tests based on criteria which eventually bring value.
2.2.2 Not everything should be a UI test
Of course, the application UI is the major interface the end user interacts with, and we should always make sure that we see everything the end user is supposed to see. But applications usually contain multiple levels of abstraction, and a potential problem may appear at any of those levels. The lower the level where the problem lives, the more distortion we get while trying to detect its source from the UI. That's why we have not just high-level UI tests but also unit and integration tests covering the lower levels of application abstraction. Eventually such a distribution of tests was formulated in the form of the Test Pyramid, or in more general form as Levels, Pyramids & Quadrants.
The idea is that each application level should have a dedicated set of tests targeted just at that level, in order to get:
- better problem source localization
- faster feedback (lower-level tests are normally faster)
2.2.3 Sometimes you have to ask yourself, "Does an automated test really make sense here?"
During testing we may encounter edge cases which require tricky configuration, hardware manipulation or generally some critical operations that demand high attention to their completeness and recoverability. And at the same time such operations may turn out to be one-time or very rarely executed.
This is the situation where we definitely need to make sure that the resources we spend on creating/maintaining such tests are appropriate to the outcome we receive. E.g. if we need to spend a week on a test which may produce flaky results, and this test is needed only once or is rarely used, why would it be better than one-time manual testing?
Or what about testing functionality whose results are hard to predict or not stable enough (e.g. verification of various graphical images)? In some cases it may be cheaper to perform manual testing than to constantly maintain automated tests because of tiny changes. The problem here isn't just flaky tests but also the application functionality itself and the capabilities of existing tools.
Also, there may be cases when a test makes crucial changes to the environment, and if it fails you'll have to spend a lot of time repairing the environment. This also requires human attention, and sometimes it's better to perform such operations manually rather than delegate them to an automated test and pray that everything goes well.
Generally, when you encounter such cases you should always keep in mind that you are doing automation to make the entire testing process more efficient, not for entertainment. So, if automation doesn't bring more efficiency, it very likely doesn't make sense.
2.2.4 Do not run against all browsers and OS versions
A lot of applications are targeted to run against different browsers, operating systems, devices, etc., and all those configurations need to be tested. But each run takes resources (time, hardware, etc.), so we should make sure that the value we get is worth the effort we put in. E.g. what is the point of testing an ordinary web application against IE6 on Linux when no end users are expected to have such a configuration? On the other hand, there are popular combinations of configuration parameters; e.g. macOS users use the Safari browser more frequently. Also, based on various feedback, we may know the most popular configurations used by the customers of our application under test. So, in order to use our resources efficiently we should not blindly test against all possible combinations of configuration options, but rather cover the most popular configurations and spread the remaining parameter values across other test configurations so that each value is used at least once, as sketched below.
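To make the "each value at least once" idea concrete, here is a minimal sketch in Java. The browser and OS lists, and the popular combinations, are hypothetical examples, not from the original post; a real project would typically use a proper pairwise/combinatorial tool instead.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of reduced configuration coverage: list the popular
// combinations explicitly, then walk the parameter lists in parallel so that
// every remaining value shows up at least once (duplicates with the popular
// list are not removed here, for brevity).
public class ConfigCoverage {
    public static void main(String[] args) {
        List<String> browsers = List.of("Chrome", "Firefox", "Safari", "Edge");
        List<String> systems  = List.of("Windows", "macOS", "Linux");

        // Popular combinations first (e.g. taken from usage statistics).
        List<String> configs = new ArrayList<>(List.of(
                "Chrome on Windows", "Safari on macOS"));

        // Each value at least once, instead of the full 4 x 3 cross product.
        int n = Math.max(browsers.size(), systems.size());
        for (int i = 0; i < n; i++) {
            configs.add(browsers.get(i % browsers.size())
                    + " on " + systems.get(i % systems.size()));
        }
        configs.forEach(System.out::println);
    }
}
```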
2.3 Involve Test Automation as Early as Possible
Automated testing requires some resources to be spent before it starts paying back: framework development, infrastructure preparation, scenario definition and many other activities happen before we have at least some minimal set of tests to run. At the same time, the application under test also requires resources to be invested before the first working version appears. If application development starts together with test automation, all those fixed costs are spent at the same time, and automated testing starts serving its direct purpose much earlier.
Also, the earlier the application development stage, the fewer features are implemented, and as a result it requires fewer resources to cover everything possible. If we involve automated testing at some late stage of application development, when a lot of functionality is already implemented, we have to spend a lot of time to provide the same level of coverage: we have to cover new features, and some time must be spent on old functionality which hasn't been covered yet.
In some cases automated tests may spot problems which are hard for humans to spot (mainly related to complex or huge-volume calculations). The later we spot the real problem, the higher the probability that it is already visible to end users. That's why it is also important to involve such testing at an earlier stage.
2.4 Set measurable goals
Automated testing has its own specific goals, and this process can also be effective/valuable or not. We should be able to detect this, because if it is not effective or doesn't bring value it makes sense to avoid it. Thus we come to the necessity of having goals. But not all goals can be measured in true/false form; in some cases we are in some kind of interim state. In order to track such states we should be able to measure our progress/effectiveness/value and many other things which help us understand whether we are doing things right or wrong. With such knowledge we get a clearer picture of where we need improvement, which parts we can get rid of, etc. That's why we need to set some goals and find a way to measure them. The set of metrics can differ: it can be consolidated metrics measuring application under test quality, the quality of our tests themselves, coverage, progress and any other information which eventually comes down to some measure of confidence.
The major thing is that we should get data showing how good our automated testing is, and define some "red flag" indicators which alert us even before we actually encounter the real problem.
Design
3.1 Design Manual Tests in Collaboration with Test Automation Engineers
Although test scenarios have some kind of standard structure, each scenario item is normally expressed as free-form text, unlike automated tests, which are represented in a strict, structured programmatic form. This means there may be hundreds of ways to describe test cases that result in similar automated test implementations. So, the purpose of this practice is to combine the test design and test automation processes in such a way that tests are designed in the form most suitable for automation. E.g. several scenarios may go through the same flow but operate on different data; a data-driven approach is very useful to automate this.
Also, it simplifies traceability: if a test is initially designed for automation, it is easier to set up a 1:1 correspondence between the test scenario and its implementation.
In some cases there are ways to combine test design and test automation into one activity. That was the idea behind keyword driven approach.
So, this kind of best practice targets designing tests from the start in a way convenient for automation.
3.2 Establish a Test Automation Architecture
An automated testing solution should be built using similar approaches and practices as any other software. One of the most important things here is that the automated testing solution should not be a disordered mess of code and resources. In order to make the entire solution easy to use it should have some specific organization of components and resources. In other words, it needs an architecture.
3.2.1 Three levels of UI test automation
The entire test automation can be organized based on the Test Pyramid idea. UI-level testing in particular can be divided into additional separate levels, as sketched after this list:
- Core - this level contains basic libraries interacting with controls or some application types in general. The key feature of any component at this level is that it is applicable to any other application under test of the same type; it can actually be some kind of common library
- Routine - this level already contains application-specific functionality, but it mainly reflects the technical level of interaction. Some examples: navigation, filling in the fields of some specific form
- Business - this level reflects the application under test business functionality and mainly shows what to do rather than how
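Here is a minimal Java sketch of this three-level organization. All class names, methods and the credentials are hypothetical, shown only to illustrate how the levels stack.

```java
// Core: generic element interaction, reusable for any application of the
// same type (think of it as a common library).
class Core {
    void click(String locator) { /* generic click on an element */ }
    void type(String locator, String text) { /* generic text input */ }
}

// Routine: application-specific but purely technical ("how"): navigation,
// filling in the fields of a specific form.
class LoginRoutine extends Core {
    void fillLoginForm(String user, String password) {
        type("#user", user);
        type("#password", password);
        click("#submit");
    }
}

// Business: expresses domain actions ("what"), built on top of routines.
class AccountActions {
    private final LoginRoutine login = new LoginRoutine();

    void signInAsAdmin() {
        login.fillLoginForm("admin", "secret"); // hypothetical credentials
    }
}
```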
3.2.2 Tests Should be efficient to write
Writing automated tests is the major way of expanding application test coverage. In order to expand coverage efficiently we should be able to write tests fast, and this can be done with the help of existing code re-use: the bigger the code base we have, the more ways of interacting with the application under test are available. This is really important when our automated testing solution grows in size and we start spending more time on maintenance while still having to cover new features. So, having an efficient way to write tests is definitely one of the vital features.
3.2.3 Tests Should be easy to understand
During their entire life-cycle automated tests are changed, and they should be properly traced to actual test scenarios and requirements. In order to do this efficiently the tests must first of all be easy to understand; otherwise it becomes easier to re-write tests from scratch, which is far from the best solution.
3.2.4 Tests Should be relatively inexpensive to maintain
Automated testing is normally applied to projects that last longer than a few months. During that time the application under test grows with new features, and a lot of existing features are constantly updated. The key feature of automated testing is that it states application behaviour expectations based on some current state. Thus, if some part of the application changes, some tests start failing not because they are bad but because they have become outdated. This may show up at different levels, but the major thing to know is that the more automated testing we have, the more time we spend updating the existing solution rather than adding new tests. So, making tests maintainable is another vital aspect which should be taken into account while choosing test tools and building the architecture. This is the major reason why the record-and-playback approach doesn't work well as a long-term solution.
And finally, good maintainability is one of the key features which keeps automated testing competitive with manual testing: if you cannot maintain your tests you have to re-write them → re-writing tests takes as long as or longer than passing them manually → if automated testing requires more effort than manual testing, we should get rid of it and switch to manual testing. So, if you want to profit from automated testing, you should make it profitable in the long-term perspective.
3.2.5 Design the framework to be initially capable of parallel execution
Even if you think you would hardly ever decide to run tests in parallel, do not rule out such a possibility, because:
- You may also need to run some actions concurrently
- If you don't need something right away, it doesn't mean it will be useless in the future. Parallel runs are an effective way of optimizing time costs for long runs, and sooner or later they may become the most effective optimization available. A sketch of one common way to keep the framework parallel-ready follows.
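A minimal sketch of one widespread pattern for parallel-ready frameworks, assuming Selenium WebDriver: each test thread gets its own driver instance through a ThreadLocal, so tests never share mutable browser state.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Each thread lazily creates its own WebDriver; tests call DriverHolder.get()
// instead of holding a shared static driver, which is what usually breaks
// when a suite is later switched to parallel execution.
public class DriverHolder {
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(ChromeDriver::new);

    public static WebDriver get() {
        return DRIVER.get();
    }

    public static void quit() {
        DRIVER.get().quit(); // close this thread's browser
        DRIVER.remove();     // drop the reference so the thread can be reused
    }
}
```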
3.3 Create Good, Quality Test Data
It's not just important to reproduce all interactions automatically; it is also necessary to process all necessary combinations of the data the application under test operates with. The more complex an application is, the more varied resources it uses, and it is a very frequent case that such applications operate on large volumes of data. In such cases it is really hard to reproduce some scenarios if we don't have enough data. That's why proper test data is important for testing in general and automated testing in particular. Some major approaches to test data preparation can be found in this article. Regarding preparation techniques, the major approaches are:
- Self-prepared data - each test creates all necessary test data records on its own. If necessary, some data is randomly generated unless explicitly defined (see the builder sketch after this list)
- Pre-defined data - the test environment contains a set of initially prepared data items which are supposed to exist from the start, so tests do not care about their creation
- Mixed - the approach which combines both self-prepared and pre-defined data where each is more applicable, which gives tests some flexibility
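To make the self-prepared approach concrete, here is a minimal builder sketch in Java. CustomerBuilder, CustomerData and the idea of persisting through an API helper are hypothetical names introduced for illustration: every field gets a random but valid default unless the test overrides it explicitly.

```java
import java.util.UUID;

// Self-prepared data: the test owns its records and never collides with
// data left over from previous runs, because defaults are unique per call.
public class CustomerBuilder {
    private String name  = "user-" + UUID.randomUUID();
    private String email = UUID.randomUUID() + "@test.example";

    public CustomerBuilder withName(String name)   { this.name = name;   return this; }
    public CustomerBuilder withEmail(String email) { this.email = email; return this; }

    public CustomerData create() {
        // In a real suite this would persist the record via an API or DB helper.
        return new CustomerData(name, email);
    }
}

record CustomerData(String name, String email) {}
```

A test that cares only about the name would then write `new CustomerBuilder().withName("Alice").create()` and let everything else be generated.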
3.4 Know the application being tested
Domain knowledge and knowledge of the system under test are vital for testing. They are really helpful when you try to figure out what is covered and what still needs to be covered. They also allow filtering out checkpoints which are neither valuable nor realistic for the system. And eventually, they are useful when you analyze and interpret results: without proper knowledge of the domain and the system under test you'll hardly be able to interpret results properly.
Implementation
4.1 Create Automated Tests That Are Flexible to Changes in the UI
As mentioned before, the application under test changes on a regular basis during its life-cycle, which results in regular changes to automated tests. UI-level tests are not an exception, especially when we deal with applications where user experience plays an essential role; that is exactly the case when the UI changes quite frequently. In order to keep automated testing up to date within a reasonable time frame we should apply practices which provide flexibility to UI changes. In other words, we should minimize the maintenance effort caused by UI changes.
4.1.1 Use Strong Element Locators
Some UI changes are related to layout modifications or label text updates. The change itself is not essential, but it may impact a lot of tests which use the affected control. In order to survive such changes we should use object identifiers which do not depend on such text content. Every technology has its own specifics regarding element attributes, but in most cases there are attributes which are not bound to varying text and at the same time are pretty unique: various types of IDs, resource IDs, etc.
Also, it is worth paying attention to the permanent parts of identifiers: in some cases there is a fixed part alongside a varying part. A typical example is an editor window whose heading contains the name of the modified resource (the varying part, which changes as soon as we switch to another resource) and the application title (which is relatively stable).
In any case we should make sure we use identifiers which will not change even between two consecutive runs. A sketch contrasting brittle and strong locators follows.
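A minimal illustration using Selenium locator syntax; the element names and the data-test-id attribute are hypothetical, the point is only the contrast between text/layout-bound and attribute-bound identification.

```java
import org.openqa.selenium.By;

class Locators {
    // Brittle: breaks as soon as the layout or the label text changes.
    static final By BRITTLE = By.xpath("//div[3]/span[text()='Save changes']");

    // Strong: bound to a stable, unique attribute such as an ID...
    static final By STRONG = By.id("save-button");

    // ...or a dedicated test hook agreed upon with developers.
    static final By HOOK = By.cssSelector("[data-test-id='save-button']");
}
```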
4.1.2 Use object maps
Many UI elements are used multiple times across different tests. If the identifiers of an element change, we have to make updates in all the places where this element is used. In order to minimize modification costs we can define a logical name mapped to the actual identifier. That can be done in different forms: e.g. we can define a global map of page elements where each entry contains an alias and the corresponding actual object identifier (like the object repository in QTP, or Aliases and Name Mapping in TestComplete), or each specific element can be wrapped into an object instance where the actual identifier is defined as an attribute (as it is done in SilkTest and many other frameworks where controls are represented as class instances). In all those cases tests use just the logical names; as a result, if we need to update an object identifier we do it in one place per control, as in the sketch below.
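Here is a minimal object-map sketch in Java with Selenium locators; the page, the logical names and the identifiers are hypothetical.

```java
import java.util.Map;
import org.openqa.selenium.By;

// Tests refer to logical names only; when an identifier changes, the fix is
// made in exactly one place, here, instead of in every test.
class LoginPageMap {
    static final Map<String, By> ELEMENTS = Map.of(
            "User name field", By.id("username"),
            "Password field",  By.id("password"),
            "Sign in button",  By.cssSelector("button[type='submit']"));

    static By find(String logicalName) {
        return ELEMENTS.get(logicalName);
    }
}
```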
4.1.3 Re-Use system under test components
If the development and testing teams work together and use the same technology stack (which is also good practice), they may also share their code base; thus the testing solution may have a dependency on the system under test code. As soon as we can share the code we can re-use constants, enumerations or even interfaces. The trick here is that application changes are triggered by changes in the code. Since the test solution depends on that code, it picks up changes immediately and starts behaving differently without any additional modifications. That brings additional flexibility: the testing solution stays aligned with the system under test while the latter changes. A minimal sketch follows.
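A minimal Java sketch of the idea; OrderStatus is a hypothetical enum which in a real setup would live in the application module, with the test solution merely declaring a dependency on it (it is shown inline here so the sketch is self-contained).

```java
// Lives in the application code base in a real project.
enum OrderStatus { NEW, PAID, SHIPPED, CANCELLED }

class OrderStatusCheck {
    // If developers rename or reorder a status, the test picks the change up
    // at compile time instead of failing at run time with a stale literal.
    static String expectedShippedLabel() {
        return OrderStatus.SHIPPED.name();
    }
}
```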
4.2 Remove Uncertainty from Automated Tests
Usually, when we try to make flexible and universal methods to perform automated testing, we may end up in a situation where tests return different results in different circumstances. This is mainly related to different environments, different execution times, as well as previous execution results where some state was changed but wasn't recovered. This leads to problems which are hard to reproduce, or problems which exist but were not properly detected and/or confirmed. In order to close this gap we should make sure that every test performs the same actions, going through the same set of application states, each time it runs.
4.2.1 The test should always start from a single known state
The major source of test unpredictability is an unpredictable initial state. If we run our sequence with varying states of the application components we use, we may get different results on each test run. In order to mitigate this we need to drive the application to the initial state before running the test. Generally, this means starting the application, preparing the test resources needed and applying the specific configuration required by this test. The more varying dependencies we pin down, the more predictable test behaviour we get; and if we get an error, we have a better level of confidence that the application started working differently permanently rather than in some random case. A minimal sketch follows.
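A minimal sketch assuming JUnit 5; AppDriver is a hypothetical facade over the application under test, introduced only to show where the state reset belongs.

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CheckoutTest {
    // Hypothetical facade over the application under test.
    interface AppDriver {
        void restart();
        void applyConfig(String name);
        void loginAs(String user);
    }

    private final AppDriver app = obtainDriver();

    @BeforeEach
    void resetToKnownState() {
        app.restart();              // fresh application instance every time
        app.applyConfig("default"); // fixed configuration required by the suite
        app.loginAs("test-user");   // same user on every run
    }

    @Test
    void emptyCartShowsHint() { /* the actual checks go here */ }

    private static AppDriver obtainDriver() {
        return null; // placeholder: real code would build the facade here
    }
}
```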
4.2.2 Manage Test Data from Within the Test Script
Even if we manage to stabilize the test flow, there can still be discrepancies in test behaviour related to the fact that data may have been changed during previous runs or may simply expire. If we use data which becomes outdated quite frequently (each new run, or within a small range of time of up to a few weeks), it makes sense to add instructions which create that data right before the tests start, for example:
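A minimal Java sketch; DiscountApi and the discount scenario are hypothetical, the point is that quickly-expiring data is created inside the run instead of being stored in the environment.

```java
import java.time.LocalDate;

class FreshDataSetup {
    // Hypothetical helper around the application's API.
    interface DiscountApi {
        void create(String code, LocalDate expires);
    }

    static String createOneDayDiscount(DiscountApi api) {
        String code = "TEST-" + System.nanoTime();     // unique code per run
        api.create(code, LocalDate.now().plusDays(1)); // always valid today
        return code; // the test uses this fresh code instead of a stored one
    }
}
```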
4.2.3 Provide fast data reset capabilities
For more or less permanent data we have to make sure that we refresh it on a regular basis, in order to avoid piling up junk records and to make sure that all necessary data initially exists in the system. For this purpose we need procedures which reset the permanent data and the data storage to the initial state. For such data it is OK to run the reset once per suite, or to have a job scheduled once a day/week/month, depending on how frequently the data needs to be restored. A minimal sketch of such a reset follows.
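A minimal sketch assuming direct JDBC access to a dedicated test database; the connection string, credentials and table names are hypothetical, and real projects might restore a database snapshot or call a dedicated reset endpoint instead.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

class DataReset {
    // Run once per suite, or from a scheduled job.
    static void resetToBaseline() throws Exception {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:postgresql://test-db/app", "tester", "secret");
             Statement s = c.createStatement()) {
            s.execute("TRUNCATE orders, order_items CASCADE");             // drop junk records
            s.execute("INSERT INTO orders SELECT * FROM baseline_orders"); // re-seed initial data
        }
    }
}
```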
4.3 Review Automated Tests for Validity
The application under test changes on a regular basis → automated tests are updated on a regular basis → some changes affect not just specific interfaces but the entire flow → some tests may stop verifying the functionality they were targeted at. In order to mitigate this problem we should never forget to review our tests, to make sure they still perform the actions and verifications they were initially designed for.
4.4 Keep the Tests Short and Compact
In order to make tests easy to create, easy to maintain, easy to trace to the major target of the test and generally easy to understand, we should make our tests short and compact. The reasoning is based on the following principles:
- the shorter a test is, the fewer instructions it contains → fewer instructions to write → the faster it can be created
- the shorter a test is, the fewer instructions we need to update in case of modifications → the less time we spend on modifications
- a compact test uses the proper level of detail for each instruction, so the higher-level an instruction is, the bigger impact it may have → high-impact defects usually highlight the same step in many tests → it is easier to split defects by priority
- a big number of short tests gives more detailed information about each specific feature's coverage. E.g. for some business functionality, 10 small tests give a more detailed picture than one big test, even if the big test contains the same checks. Imagine one test fails: with 10 tests we have 9 other tests passing, pointing at the narrow case which really contains the problem and indicating that the feature meets 90% of expectations. If one big test fails, it only indicates that the feature doesn't meet 100% of expectations, which is still true but far less detailed.
- due to the big popularity of xUnit-like engines, the hard-assertion approach is also popular, meaning the test fails at the first mismatch spotted. Of course, we can use soft assertions accumulating all errors, but in many cases that would bring unnecessary noise, as the test might have failed on some blocking problem.
4.4.1 Narrow down the scope of each specific test
Design tests so that each of them verifies just one particular aspect of the system under test. It's not always possible and sometimes not reasonable, but generally such an approach gives a more or less clear picture of what exactly went wrong just by looking at the list of failed tests.
4.4.2 Try to re-use the same modules as frequently as possible
It is generally good practice to group a repetitive set of instructions into a higher-level module. In this case, if we need to change some common flow we can do it in one place. Also, it minimizes the number of instructions to write, and giving the module a meaningful name brings readability: instead of a sequence of small steps we observe the entire action we perform, as in the sketch below.
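A minimal Java sketch; CatalogPage, CartPage and the ordering flow are hypothetical page objects used only to show the shape of such a module.

```java
class OrderSteps {
    // Hypothetical page objects.
    interface CatalogPage { void search(String product); void addToCart(String product, int qty); }
    interface CartPage    { void checkout(); }

    private final CatalogPage catalog;
    private final CartPage cart;

    OrderSteps(CatalogPage catalog, CartPage cart) {
        this.catalog = catalog;
        this.cart = cart;
    }

    // One meaningful business action instead of the same low-level steps
    // repeated in every test that needs an order; a flow change is fixed here once.
    void placeOrder(String product, int quantity) {
        catalog.search(product);
        catalog.addToCart(product, quantity);
        cart.checkout();
    }
}
```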
4.4.3 Use Data-Driven approach for similar flow tests
In many cases we need to perform the same steps but exercise different sets of input and output data; the entire test flow is essentially common. For this purpose the data-driven approach is pretty handy, for example:
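A minimal data-driven sketch assuming JUnit 5 parameterized tests; the shipping rule and its data rows are hypothetical, standing in for a call to the real system under test.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShippingCostTest {
    // One flow, many data rows: adding a case means adding one line of data.
    @ParameterizedTest
    @CsvSource({
            "0,   0", // empty order ships for free
            "49,  5", // below the threshold: flat fee
            "50,  0", // at the threshold: free shipping
            "100, 0"
    })
    void calculatesShipping(int orderTotal, int expectedFee) {
        assertEquals(expectedFee, shippingCost(orderTotal));
    }

    // Stands in for the real system under test (hypothetical rule).
    private int shippingCost(int orderTotal) {
        return (orderTotal == 0 || orderTotal >= 50) ? 0 : 5;
    }
}
```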
4.4.4 Use different detail level modules
If during a test we need to pass several application states before verifying the things the test is targeted at, we mostly don't need detailed verifications and step instructions for those earlier states; normally they are covered in dedicated tests verifying the transitions between them. At the same time, the closer we are to the target state, the more detail we should put in. So, for the earlier stages we can use high-level modules which perform all necessary navigation, settings and anything else needed to reach the place the current test should verify. This way tests become more sensitive to the things they are supposed to verify, while staying compact enough to make it clear what exactly we perform to reach the proper state; and since higher-level modules combine multiple lower-level instructions, they are easier to write and maintain.
4.5 Optimize your tests
The more tests we have, the more time it takes to run them all. At some point we may face a situation where the entire execution time is not appropriate for fast feedback; even before that, there may be different signs that our tests are consuming too much time. It doesn't really matter exactly when it happens: the key thing is that it normally does happen, and we have to be ready for it. So, what should we do in this case?
4.5.1 Test things at proper level
There are cases when people focus mainly on UI-level automation and concentrate their testing efforts around the UI. It definitely exercises all potential end-user scenarios the way end users would perform them, so in terms of experimental cleanliness and accuracy it is a good approach. But what if we know our system well enough to know that, for instance, "those two actions will lead to similar results just because they internally use common controls and the same event handler"? At the UI level we mainly verify that the output is rendered properly with the proper layout; this is something we cannot guarantee at any lower level. So, it's more efficient to delegate logic verification to lower-level tests while UI tests concentrate mainly on the visual aspects of the application outcome.
Again we come back to the Test Pyramid. It is not just about proper focus; it is also about time savings. Usually lower-level tests are faster, as they interact with fewer external components and spend less time on data handling, network communication and many other things which eat execution time. So, a proper distribution of tests across levels may save time without any loss in test and code coverage.
4.5.2 Merge exhausting tests
It is good practice to make a dedicated test for each specific feature, but in high-level (especially UI-level) tests we may encounter a situation where two or more features are available at the same application state, and reaching that state takes much more time than the feature verifications, which take just moments. Imagine you need 10 minutes to reach some application state and you have to exercise actions on 2 buttons whose states are independent of each other, where each verification takes 2-3 seconds. When several tests spend essentially more time reaching a common application state than on the verification itself, these tests are exhausting. It isn't really profitable to spend 20 minutes running 2 such tests when they can be grouped into 1 test running 10 minutes plus a few extra seconds for the extra verifications.
4.5.3 Avoid frequent locating
UI operations are pretty time-consuming, and in some cases we interact with the UI without actual need. E.g. when we have a web page with a table and need to read all data items to pack them into some data structure, there are 2 major ways of doing this:
- Locate each data item individually and read the value from there
- Get page source and use some in-memory parser to get the data
Another example is avoiding unnecessary repetitive UI interaction by caching already retrieved data, or simply retrieving data once. E.g. we have a pop-up list with some items and we have to make sure that each element matches some value or expression. We can do it like this (this is just a pseudo-code sample):

```
for i = 0 to popup.getItems().size
    popup.getItem(i).matches(expression)
```

or we can do it like this:

```
var array = popup.getItems()
for i = 0 to array.size
    array[i].matches(expression)
```

which interacts with the UI only once and after that works with the in-memory object. Additionally, some programming languages have a for-each or similar loop operator whose key feature is that the list value is calculated only once, so the same example can be written like this:

```
for each value in popup.getItems()
    value.matches(expression)
```

In both of the last two examples the number of UI interactions is 1 regardless of the number of items the pop-up list has. Using such simple and basic knowledge we can save a lot of time.
4.5.4 Synchronization
The application under test generally doesn't respond to any command immediately: it takes some time to perform server-side operations, UI rendering, data processing, etc. But automated test instructions are simply a sequence of actions, so before running the next operation we should make sure the application has completed the previous one. We could place fixed-length pauses in the test, but it is much wiser to wait for a specific state: wait for some specific application state/event no longer than some specific timeout, and continue test execution as soon as the required state is reached or the event is fired, but no longer than that. This way we wait no longer than we actually need, for example:
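A minimal sketch using Selenium's explicit wait; the locator and the 10-second upper bound are hypothetical choices.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class Synchronization {
    static void waitForResults(WebDriver driver) {
        // 10 seconds is only the upper bound; execution continues the moment
        // the element appears, unlike a fixed sleep.
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("results")));
    }
}
```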
4.5.5 Locate elements wisely
In most cases a UI element can be located in many different ways, and element locating speed depends on the locating strategy. Usually elements defined by IDs are faster to find than ones defined by more complicated means like XPath. Also, locators can be simple (using one attribute) or complex (using multiple attributes or levels). So, when we define an element locator we should always keep in mind that the more complicated the locator is, the more time it takes to locate the element.
4.5.6 Use short paths to reach proper initial state
Different technologies provide mechanisms to open the application under test in a specific state. E.g. we can open a web application at the required page with the required parameters by specifying the proper URL; for mobile applications there is deep linking; and we should not forget about server-side operations which can be triggered directly from the test. The common feature of all these mechanisms is that they are normally faster than the equivalent operations via the application UI, and in some cases extremely faster. So, if it is not important for the current test how you reach some specific state, we can easily choose the faster way, for example:
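A minimal sketch with Selenium; the URL, its parameters and the flow it replaces are hypothetical.

```java
import org.openqa.selenium.WebDriver;

class ShortPath {
    static void openOrderEditor(WebDriver driver, long orderId) {
        // One navigation instead of: login -> dashboard -> orders -> search -> open.
        driver.get("https://app.example.com/orders/" + orderId + "/edit?tab=items");
    }
}
```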
Execution
5.1 Test Early And Frequently
It is important to detect problems as early as they appear. That makes them easier to fix, as at an early stage there is less functionality wrapped around the problem part which could be affected by the fix. Also, early detection still leaves a quite easy way of reverting changes in case of serious problems. That's why it is important to run tests early.
The more frequently we run tests, the easier we can detect the place where an error was initially introduced → the easier it is to localize the problem. On the other hand, running tests at the highest possible frequency is not a goal in itself. Ideally, we should be able to run our tests against every single commit as soon as it is available on the server; running tests more often than that would only demonstrate their stability. But having test execution performed against every published change is really something that automated testing can do better than manual.
Eventually we aim at fast and frequent feedback, which coincides with some of the practices listed before. There are several areas where we can apply this practice.
5.1.1 Use Sanity Tests as Part of Continuous Integration
There should always be a set of tests which is executed after each commit; this way we tightly localize the set of changes which may cause a problem. It works pretty well for unit tests, which are usually fast and take just a few minutes to perform. For higher-level tests which take longer to execute, we should select a small subset which runs within an acceptable amount of time and covers the basic functionality which must work at any cost. The main idea is that these tests are executed against every build we have, for example:
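A minimal sketch assuming JUnit 5 tags; the test names are hypothetical. CI then runs only the tagged subset on every commit, e.g. with Maven Surefire: `mvn test -Dgroups=smoke`.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class LoginSmokeTest {
    @Test
    @Tag("smoke") // part of the per-commit sanity subset
    void applicationStartsAndLoginPageOpens() { /* must pass on every build */ }

    @Test // untagged: executed only in the full regression run
    void rarePasswordRecoveryFlow() { /* longer, lower-priority scenario */ }
}
```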
5.1.2 Parallel tests
In some cases we have a huge number of tests which we cannot avoid and which take too long to run. Of course, we can decrease the scope to optimize run time, but this leads to coverage loss and, as a result, potential quality loss. So, in order to keep the same scope of tests but decrease run time, we can run tests in several parallel streams. The idea is not new and there are existing systems which already support it. The major difficulty here is test design: we should design the framework to be capable of parallel execution from the start, so that when we reach this point we can already use that possibility. Otherwise, we have to spend extra time making parallel runs possible.
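A common design element for parallel-ready UI frameworks is to keep one browser instance per test thread, so parallel tests never share state. A minimal sketch in Java with Selenium; ChromeDriver is just an example choice:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public final class DriverHolder {
    // One browser per test thread: tests running in parallel never
    // share a WebDriver instance, so they cannot interfere.
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(ChromeDriver::new);

    public static WebDriver get() {
        return DRIVER.get();
    }

    public static void quit() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}
```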
5.1.3 Run big test suites infinitely
If our test run takes up to 8 hours, we can simply use nightly runs to get regular feedback on the application status. But as soon as the run takes longer than that, we should do something else to get frequent feedback. One practice is to split the entire suite into sub-groups and run them continuously, so that we get all results at least once a day. An infinite run here means that as soon as a test suite completes and results are sent, a new run starts immediately. Of course, this requires infrastructure preparation, especially when we use CI solutions which require licenses. But even at the very beginning of the project we should recognize that test execution may take a long time and reserve capacity for that from the start.
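A rough sketch of such an infinite runner using the JUnit Platform Launcher API; the package name and the publish() step are hypothetical placeholders:

```java
import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;
import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;

public class InfiniteRunner {
    public static void main(String[] args) {
        Launcher launcher = LauncherFactory.create();
        LauncherDiscoveryRequest suite = request()
                .selectors(selectPackage("com.example.regression")) // hypothetical package
                .build();
        while (true) { // restart the suite as soon as the previous run completes
            SummaryGeneratingListener listener = new SummaryGeneratingListener();
            launcher.execute(suite, listener);
            publish(listener);
        }
    }

    // Hypothetical result publishing; here it just prints the summary.
    private static void publish(SummaryGeneratingListener listener) {
        listener.getSummary().printTo(new java.io.PrintWriter(System.out, true));
    }
}
```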
5.2 Do Not Rely Solely on Automation. Beware of Passing Tests
It is always good when some work is done for you automatically and you don't have to watch over it all the time. But having your tests automated doesn't mean you should completely forget about them as long as they pass. The machine does what it was programmed to do, but the program is written by human beings, who are still the primary source of software problems. Automated tests are no exception here. If we have automated tests running successfully, we should always keep in mind that:
- Automated tests may use tricks to simplify implementation, so they don't necessarily reproduce real user behaviour
- Some automated tests have incomplete verifications due to various limitations. A passing test doesn't mean that everything around the tested functionality works correctly.
- Some tests may contain no verifications at all: they are evergreen and pass no matter what the application does (see the sketch after this list). Mutation testing can detect such problems, but it is resource-consuming and prohibitively slow for system-level tests.
- In some cases we use various kinds of mocks which simulate actual component behaviour, but a mock is still a mock. Who knows what happens to the system under test when the real component changes.
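To make the "no verifications" point concrete, here is a small illustrative JUnit 5 example with a hypothetical Cart class: the first test is evergreen, the second actually verifies something.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CartTest {

    // "Evergreen" test: it exercises the code but asserts nothing,
    // so it passes even if addItem() is completely broken.
    @Test
    void addItemNoVerification() {
        Cart cart = new Cart();
        cart.addItem("book");
    }

    // The same scenario with an actual verification.
    @Test
    void addItemWithVerification() {
        Cart cart = new Cart();
        cart.addItem("book");
        assertEquals(1, cart.size());
    }

    // Hypothetical minimal class under test.
    static class Cart {
        private final java.util.List<String> items = new java.util.ArrayList<>();
        void addItem(String item) { items.add(item); }
        int size() { return items.size(); }
    }
}
```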
5.3 Use Automation For Other Purposes as Well
Automated testing is valuable when it is used frequently. But it can be even more valuable if we involve it outside of the test execution process. We can use automation in other areas, for example:
- Self-diagnostic system - in some cases automated tests can be embedded into the system under test so that they can be invoked outside of the testing process. E.g. it can be useful to run some tests against a client's system and provide detailed information to support teams.
- Automated environment or data setup - automated testing simulates interactions with the application under test. Such actions can also be used for environment setup when we have to populate a fresh system with test data. Or we can simply run some code to prepare data with complex relationships which are much easier to define programmatically, as in the sketch below.
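A minimal sketch of programmatic data setup over HTTP using Java's built-in HttpClient; the endpoint, payload and expected status code are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestDataSetup {
    // Seed a fresh environment through the application's API (hypothetical
    // endpoint) instead of clicking through registration forms in the UI.
    public static void createUser(String name) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"" + name + "\"}"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 201) {
            throw new IllegalStateException("Setup failed: " + response.statusCode());
        }
    }
}
```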
5.4 Watch out for Flaky tests
Flaky tests are the most annoying group of tests. They fail sometimes, but when we check them they pass. It is quite a frequent case when we see a test failing while it passes on the next run. The biggest danger here is that we stop paying enough attention to such flaky tests and may ignore a real problem spotted by one of them, assuming it is just an environment issue or a temporary glitch. That's why we should pay additional attention to unstable tests.
5.4.0 Use synchronization
This rule stands above all other rules in this group. We should always make sure that our tests handle the application state every time. There should be no immediate actions without verifying that the operation can be performed. This is especially necessary for UI-level tests, where each UI element becomes accessible only after some delay. In a slow environment such delays can be bigger, and if we don't handle them properly, the probability of false errors is higher. And this brings big distortion to the entire test run results.
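One common way to enforce this is to wrap every risky action in an explicit wait. A small illustrative helper with Selenium in Java; the 15-second timeout is an arbitrary assumption:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SafeActions {
    // Never click immediately: first wait until the element is actually
    // clickable, so slow environments don't produce false failures.
    static void safeClick(WebDriver driver, By locator) {
        new WebDriverWait(driver, Duration.ofSeconds(15))
                .until(ExpectedConditions.elementToBeClickable(locator))
                .click();
    }
}
```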
5.4.1 Automate failed tests re-run
A lot of spontaneously failing tests pass on the next run. Re-running them manually is a routine operation which takes time, and it is annoying to spend that time only to find out that there is no actual problem. So, in order to minimize such false problems, we re-run failing tests automatically. Some CI systems have this functionality built in; for specific needs we can use a custom solution. The main idea is that if we automatically re-run failed tests several times until they pass or fail permanently, we can filter out a lot of false problems and concentrate our attention on tests which show permanent errors, because the problem they spot is likely to be real.
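For example, TestNG provides the IRetryAnalyzer hook for exactly this purpose. A minimal sketch; the retry limit of 2 is an arbitrary choice:

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2;
    private int attempts = 0;

    // TestNG calls this after each failure; returning true re-runs the test.
    @Override
    public boolean retry(ITestResult result) {
        return attempts++ < MAX_RETRIES;
    }
}
```

Individual tests opt in with @Test(retryAnalyzer = RetryAnalyzer.class), so only the tests we consider flaky get the extra runs.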
5.4.2 Separate stable and unstable tests
Flaky tests usually indicate two potential sources of problems:
- Tests instability
- Some potential application problem which reproduces from time to time
In case of a floating application problem, we can collect the set of tests which potentially cover it and run them separately to get more representative statistics on where the problem actually happens. On the other hand, having a group of stable tests leads us to the next practice.
5.4.3 Keep test runs green by pruning known failures. It's important to know which tests have just started failing today.
The green/red scheme of test suite execution status is an easy-to-use system which gives easy-to-interpret results for a go/no-go decision. If a test suite is frequently red, people stop paying attention to the results. This is especially vital for unit tests, since many build systems integrate a testing stage and assume that a successful build includes a successful test run. Otherwise, people stop paying attention to failed tests at all, while normally a failed test indicates that something was broken. So, it is important to have a set of stable tests; when at least one of them starts failing, it clearly indicates that something broke recently.
5.4.4 Tests that are disabled must be re-added to manual test runs, or the coverage will be dropped
When we do our testing we should never forget about coverage. If we have automated tests, we normally don't execute them manually. But if we remove them from automated runs, it doesn't mean we should forget them altogether. The functionality under test should still be covered. If we no longer cover it automatically, we should cover it manually to keep the same level of coverage. Otherwise, we risk leaving some area untested → we may miss an actual bug.
Summary
This is a consolidated but still quite high-level list of best practices. Each specific tool or engine may add more details; the practices listed above are common to most of them. And yet, the more experience we get, the more additional practices we discover. So, this list is not set in stone, and there are more approaches and solutions which can make our test automation better.
And never forget that best practices are not something we MUST follow. They are mainly targeted at solving or avoiding specific problems. If you know another way of doing that, you can use it. Maybe it will also become someone's best practice.
References
- Automated Testing Best Practices by SmartBear Software Support
- Test Automation Tips and Best Practices by Testing Excellence
- 7 Key Best Practices of Software Test Automation by Nalashaa.com
- Test Automation - Best Practices on Quadrant 4 Blog
- Best Practices in Automation Testing by PIT Solutions
- Automating tests vs. test-automation by Markus Clermont
- 10 Best Practices and Strategies for Test Automation
- How to implement UI testing without shooting yourself in the foot by Gojko Adzic
- Test Automation Best Practices on bqurious.com Blog
- Top 10 Tips For Best Practices In Test Automation by Jayakumar Sadhasivam (reposted from Sanchari Banerjee, EFYTIMES News Network)
- Automation Testing - Best Practices - Part 1 by Ananya Das
- UI Test Automation Best Practices by Filip Czaja
- Best Practices for Functional Test Design
- Best Practices for Functional Test Script Development
- The Top 6 Automation Best Practices by Joe Colantonio
- 4 Reasons Why Test Automation Fails on Gallop.net
- Web test automation best practices by Guy Arieli
- Automation Best Practices: Building To Stand the Test of Time by Naman Aggarwal
- 5 Agile Test Automation Best Practices by QA Revolution
- 5 Tips To Maximize ROI Of Your Mobile Test Automation
- Test Automation Best Practices by Shikha Rav
- 5 Best Practices for Automated Browser Testing by Justin Klemm
- Improving Software Quality: Nine Best Practices for Test Automation (PDF) by Kenneth "Chip" Groder
- Considerations for Best Practices with Selenium by Brian Van Stone, QualiTest Group
- Best practices in test automation by Dilato
- Test Automation: Helpful Tips and Best Uses by Haley Kaufeldt
- B2G/QA/Automation/UI/Best Practices on Mozilla Wiki
- Best practices to maximize the RoI on Mastering Mobile Test Automation book by Feroz Pearl Louis, Gaurav Gupta
- How To Design An Effective Test Automation Framework by Sheshajee Dasari
- 6 Best Practices for Selenium on Grazitti Interactive
- Test Pyramid by Martin Fowler
- Test Automation Basics - Levels, Pyramids & Quadrants by Duncan Nisbet
- Database Testing - Properties of a Good Test Data and Test Data Preparation Techniques on softwaretestinghelp.com by Rizwan Jafri