Monday 11 May 2015

The Future of Test Automation Frameworks

It's always interesting to know the future so we can react to it properly. This is especially true in the world of technology, where something that was a subject of science fiction yesterday becomes observable reality today. Automated testing is no exception. We should be able to catch the right trend and be prepared for it; our professional growth depends on it. Should we stick to the technologies we use at the moment, or should we dig into areas which aren't well developed yet but still have big potential? To answer that question we need to understand how test automation has been evolving and which areas are promising. Based on that, we can identify what to expect next.

So, let's trace the evolution of test automation frameworks to see how they evolve and where we can grow.

Background

Before we go further we should clearly understand that test automation is not limited to specific tools emulating user clicks. It starts from utility scripts (shell scripts or batch files) doing routine operations for us, goes on to code-level testing (unit and integration tests), and reaches the system level where we interact with the system as end users. The catch is that end users are not necessarily human beings: for services, databases and other back-end components the "user" is still some other software. Only some high-level application components have a dedicated GUI whose interaction is left to a human being.

Secondly, test automation activities can be performed by different groups of people depending on which entities we cover. Code-level testing is normally handled by developers, while higher-level testing (system, integration) normally requires a dedicated QA team (or whatever name you give to it). Such testing levels require a different skill set, different knowledge and different requirements. For a QA team, automation may take up to 100% of its activities (in some cases there are specialized test automation teams whose members are sometimes called software developers in test). At the same time, test development is not the major activity for a development team and should preferably take no more than 30% of the overall working time. Since it's a subsidiary activity for developers, they don't need overly complicated abstractions; in such cases it's more efficient to keep the testing solution simple. For QAs there may be additional requirements, such as mapping tests to requirements with minimal maintenance effort when requirements change, which calls for additional abstractions. So, all the above examples (and many others) show that depending on the major activity performed we need testing solutions of different complexity.

Thirdly, a lot depends on the actual information to be provided. In the classical case we simply report whether the system meets some pre-defined acceptance criteria. But there are cases when specific criteria are either undefined or automated analysis would require too much effort. In such cases we can automate the most routine parts while the final decision is made by a human based on the results.

All the above means that no matter how advanced a test automation framework is, there are always cases when it is non-applicable or its use is not rational.

Test Automation Framework Approaches Evolution

So, how were test automation frameworks evolving? Well, this post is not dedicated to "software archaeology", so I don't want to dig into all the software that existed at the dawn of test automation, and I can't really say what came before what. For this post I'll use a classification based on complexity growth, which very likely correlates with the actual evolution of test automation tools, as they also went through all those enhancements. There are many different framework classifications based on complexity, and they reflect more or less the same things. So, for this post I'll highlight the following framework types:

  1. Linear - all actions are represented as a linear sequence of either commands or separate scripts
  2. Structured - all entities are more or less grouped under dedicated data structures
  3. Data-driven - a common flow is extended with an input data table, and the same flow is executed for each data row
  4. Keyword-driven - actual actions are abstracted with high-level text instructions so that tests read as plain text and reflect business requirements rather than technical implementation
  5. Hybrid - combines two or more of the above approaches
So, let's take a detailed look at each of the above approaches.

Linear

The linear framework is the simplest one. Each application function/module/section is represented as an independent script, and tests are constructed as a large hierarchical structure of these scripts. Alternatively, it can be a simple linear sequence of tests with some sub-modules, where all instructions are executed from top to bottom. A typical example is any shell script which mainly contains a sequence of commands, maybe some sub-scripts grouping common functionality, and (but not necessarily) some structure powered by an engine which defines the general flow of the entire test suite.

In most cases it is represented via scripts (shell scripts or batch files, as well as simple scripting constructions in VBScript or JScript). Probably because of that I often hear automated tests called "automated scripts". In the modern world they aren't necessarily scripts, but let's consider it a rudimentary artifact inherited from this framework approach. Typical code may look like this (in pseudo-code):

start myprogram.exe
engine get_top_window
engine click button "Start"
call create_user "user" "password"
call login "user" "password"
IF %ERRORLEVEL% > 0 echo "[ERROR] unable to login"
Since this approach is pretty simple it has been used from the very beginning. E.g. it was used by WinRunner, which was replaced by QTP a long time ago. A similar approach is applicable to STAF. Among modern tools, I can say SoapUI uses something similar: the entire suite is represented as a combination of isolated scripts powered by a main engine.

This approach is pretty simple and easy to implement. It is also very convenient for Record & Playback, which was (and probably still is) one of the greatest marketing features of a number of vendor test automation tools (even though a lot of test automation specialists eventually conclude that Record & Playback generally sucks and is not applicable for full-scale automation projects). However, some drawbacks were observed:

  • High dependency between tests, as the entire test suite is usually a single long script executed sequentially
  • Hard to pass values between steps, as in some cases each step is an individual script which knows nothing about the other scripts
  • Limited capabilities for running a subset of tests
  • Hard to handle errors properly, especially when we have to stop the current test and reset the application to its initial state

Structured

The drawbacks of the previous approach forced the addition of more abstractions representing tests, suites and pre/post-conditions, as well as more advanced techniques for exception handling and general test execution control. This is how structured test automation frameworks appeared.

Firstly, more attention was paid to the ability to group repetitive steps into functions. Then tests themselves started to be represented by dedicated code entities; e.g. SilkTest had dedicated keywords which could mark a specific function as a test case (testcase) or pre/post-condition (appstate). Then JUnit was introduced, together with the other engines of the xUnit family (its ports to other languages), which now represent a kind of market standard for test frameworks. Using OOP methodology, abstractions for test suites and test cases were introduced. Also, the inheritance mechanism reduced test automation effort, as a lot of common functionality could be re-used in sibling classes.

Generally, test automation solutions became more structured, and test execution became more controllable and recoverable.

Here is an example of tests using the structured framework approach:

package com.github.mkolisnyk.aerial.readers;

import java.io.File;
import java.util.ArrayList;

import org.junit.Assert;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.apache.commons.io.FileUtils;
import org.apache.http.HttpStatus;

import com.github.kristofa.test.http.Method;
import com.github.kristofa.test.http.MockHttpServer;
import com.github.kristofa.test.http.SimpleHttpResponseProvider;
import com.github.mkolisnyk.aerial.core.AerialTagList;
import com.github.mkolisnyk.aerial.core.params.AerialParamKeys;
import com.github.mkolisnyk.aerial.core.params.AerialParams;
import com.github.mkolisnyk.aerial.core.params.AerialSourceType;

public class AerialGitHubReaderTest {

    private static final int PORT = 51234;
    private static final String BASE_URL = "http://localhost:" + PORT;
    private MockHttpServer             server;
    private SimpleHttpResponseProvider responseProvider;
    private AerialGitHubReader reader = null;
    private AerialParams params;

    @Before
    public void setUp() throws Exception {
        responseProvider = new SimpleHttpResponseProvider();
        server = new MockHttpServer(PORT, responseProvider);
        server.start();
    }

    @After
    public void tearDown() throws Exception {
        server.stop();
    }

    private void initReader() throws Exception {
        params = new AerialParams();
        params.parse(new String[] {
                AerialParamKeys.INPUT_TYPE.toString(), AerialSourceType.JIRA.toString(),
                AerialParamKeys.OUTPUT_TYPE.toString(), AerialSourceType.FILE.toString(),
                AerialParamKeys.SOURCE.toString(), BASE_URL,
                AerialParamKeys.DESTINATION.toString(), "output",
                "repo:mkolisnyk/aerial state:open"
        });
        reader = new AerialGitHubReader(params, new AerialTagList());
        reader.open(params);
    }

    @Test
    public void testOpenGitHubReaderValidQueryShouldFillContentProperly()
            throws Exception {
        String[] expectedContent = new String[] {
                "Value 1",
                "Value 1-1",
                "Value 2",
                "Value 3"
        };
        String mockOutput = FileUtils.readFileToString(new File("src/test/resources/json/github_valid_output.json"));
        responseProvider
                .expect(Method.GET,
                        "/search/issues?q=repo:mkolisnyk/aerial+state:open&sort=created&order=asc")
                .respondWith(HttpStatus.SC_OK, "application/json", mockOutput);
        initReader();
        Assert.assertTrue(reader.hasNext());
        ArrayList<String> actual = new ArrayList<String>();
        while (reader.hasNext()) {
            actual.add(reader.readNext());
        }
        Assert.assertEquals(expectedContent.length, actual.size());
        for (String expected : expectedContent) {
            actual.remove(expected);
        }
        Assert.assertEquals(0, actual.size());
        reader.close();
        Assert.assertFalse(reader.hasNext());
        Assert.assertNull(reader.readNext());
    }
}
That was an example of a JUnit test where:
  • The class itself represents a test container and can be either an independent suite or a sub-suite within a bigger one.
  • The @Before/@After methods (setUp and tearDown) correspond to pre/post-conditions which make sure each test drives the application under test to its initial state regardless of the execution results.
  • The initReader method shows how to group repetitive code into functions.
  • The @Test method shows how the test itself can be moved into a separate entity.

Data-driven

Of course, there are cases when we have to run the same test flow against different data sets. So there was a need for the ability to bind a data source and for an engine which runs the same test for each data row. I wouldn't say this appeared only after structured frameworks were introduced; it existed even before. Anyway, this is another enhancement that decreases test automation effort for routine operations.

Below is an example of a data-driven test:

/**
 * .
 */
package com.github.mkolisnyk.aerial.expressions.value;

import java.util.Arrays;
import java.util.Collection;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import com.github.mkolisnyk.aerial.document.InputRecord;

/**
 * @author Myk Kolisnyk
 *
 */
@RunWith(Parameterized.class)
public class DateRangeValueExpressionTest {

    private DateRangeValueExpression expression;
    private InputRecord record;
    private boolean validationPass;

    public DateRangeValueExpressionTest(
            String description,
            InputRecord recordValue,
            boolean validationPassValue) {
        this.record = recordValue;
        this.validationPass = validationPassValue;
    }

    @Parameters(name = "Test date range value: {0}")
    public static Collection data() {
        return Arrays.asList(new Object[][] {
                {"All inclusive range",
                    new InputRecord("Name", "date", "[01-01-2000;02-10-2010], Format: dd-MM-yyyy", ""),
                    true
                },
                {"Upper exclusive range",
                    new InputRecord("Name", "date", "[01-01-2000;02-10-2010), Format: dd-MM-yyyy", ""),
                    true
                },
                {"All exclusive range",
                    new InputRecord("Name", "date", "(01-01-2000;02-10-2010), Format: dd-MM-yyyy", ""),
                    true
                },
                {"Lower exclusive range",
                    new InputRecord("Name", "date", "(01-01-2000;02-10-2010], Format: dd-MM-yyyy", ""),
                    true
                },
                {"Spaces should be handled",
                    new InputRecord("Name", "date", "( 01-01-2000 ; 02-10-2010 ], Format:   dd-MM-yyyy", ""),
                    true
                },
                {"Invalid order range",
                    new InputRecord("Name", "date", "[01-01-2010;02-10-2000], Format: dd-MM-yyyy", ""),
                    false
                },
                {"Empty range should cause the failure",
                    new InputRecord("Name", "date", "[02-10-2010;02-10-2010], Format: dd-MM-yyyy", ""),
                    false
                },
                {"Wrong format should cause the failure",
                    new InputRecord("Name", "date", "[10-02-2010;02-10-2010], Format: MM/dd/yyyy", ""),
                    false
                },
                {"Wrong but matching format should cause the failure",
                    new InputRecord("Name", "date", "[10-02-2010;02-10-2010], Format: MM-dd-yyyy", ""),
                    false
                },
        });
    }

    @Before
    public void setUp() throws Exception {
        expression = new DateRangeValueExpression(record);
    }

    @After
    public void tearDown() throws Exception {
    }

    @Test
    public void testParse() throws Exception {
        try {
            expression.validate();
        } catch (Throwable e) {
            Assert.assertFalse("This validation was supposed to pass",
                    this.validationPass);
            return;
        }
        Assert.assertTrue(
                "This validation was supposed to fail",
                this.validationPass);
        expression.parse();
        Assert.assertEquals(record.getValue().replaceAll(" ", ""),
                expression.toString());
    }

}
The data() method annotated with @Parameters provides the input data, so the test is run against each array item (data row).

Keyword-driven

And yet, all the above approaches still require programming skills. At the same time there is a trend towards closer collaboration within software project teams, with roles being combined. It's also generally useful to share artifacts with other people to give them an idea of what is covered by test automation and to get feedback from their point of view. Also, it seemed a good idea to involve non-technical people in test automation. This is one of the reasons why a lot of vendors implemented Record & Playback and presented it as one of their most valuable features. As I mentioned before, this feature can be useful in some cases, but for large-scale automation it brings more damage than profit. Thus, some companies started inventing other features. E.g. SmarteScript from SmarteSoft implemented its own approach to test representation where steps are table rows and columns correspond to elements, so that each cell contains the command to perform on a specific control at a specific step. Quite an interesting alternative.

But one of the most popular solutions was to assign a specific phrase, or keyword, to each actual implementation. So we have a resource (either a table or a plain text file) which contains just a sequence of keywords, and we have code linked to each specific keyword which implements the real actions to be performed during the test run. For the end user, tests might look like:

Command        Parameters
Start          myprogram.exe
Create User    "user";"password"
Login          "user";"password"
Verify Text    Successful
Well, with proper design this solution is pretty handy and definitely helps to involve non-technical people. However, it's hard to keep a proper and convenient design for large-scale projects, and we may run into problems similar to those of the linear approach, as visually the two don't differ much.

Typical keyword-driven solutions include dedicated frameworks such as Robot Framework and FitNesse. Also, some vendor tools like HP Unified Functional Testing and SmartBear TestComplete provide abilities for using keywords.

Hybrid

So, eventually, we come to the point when we see that none of the approaches is perfect and each of them has drawbacks. However, each approach also has advantages which can mitigate the gaps of the others. This is the reason to combine different framework types: if we use two or more of them within one framework, it can be treated as a hybrid framework.

One of the best-known hybrid framework examples is Cucumber. Here is a small example of a Cucumber scenario:

Scenario Outline: eating
  Given there are <start> cucumbers
  When I eat <eat> cucumbers
  Then I should have <left> cucumbers

  Examples:
    | start | eat | left |
    |  12   |  5  |  7   |
    |  20   |  5  |  15  |
From the above sample we can see that the test itself contains just readable words which correspond to code behind the scenes, which is the major feature of keyword-driven frameworks. At the same time, the Examples table is actually a data source, and the scenario is executed for each of its rows. And this is the data-driven approach.

When we look behind the scenes at the glue code, we may see something like:

...
@Given("^there are (\\d+) cucumbers$")
public void givenThereAreCucumbers(int number) {
    // TODO: add implementation
}
@When("^I eat (\\d+) cucumbers$")
public void whenIEatCucumbers(int number) {
    // TODO: add implementation
}
@Then("^I should have (\\d+) cucumbers$")
public void thenIShouldSeeCucumbersLeft(int number) {
    // TODO: add implementation
}
...
which definitely represents the structured framework approach.

Hybrid frameworks are now the most advanced and practically applicable solutions. Some features are more popular, some less. I'd say even more: it's a rare case when we see a clean implementation of one approach without any elements of another. The main reason is that the more advanced an approach is, the more practices it can absorb from lower-level approaches. At the same time, we shouldn't treat the present day as the golden age of test automation: technologies are still evolving and many new things appear on the market. Also, there are more advanced approaches which may already exist but are not so widespread due to various complexities. So, we now logically move to the next part of this post to see where we should grow.

What's going to be next?

The previous section gave an overview of the existing framework types. Again, this list is not set in stone and there may be several other deviations. And yet, there are common directions in the evolution of test automation frameworks:

  • A trend towards more structured representation of tests
  • A trend towards higher-level representation of tests which can potentially be shared with people outside the test automation team
The above trends may help identify where we can go next.

So, generally, test automation isn't concentrated only on simulating user actions. Being an automation of the process, it also automates routine activities like keeping requirements, tests and auto-tests aligned, so that each requirement change is automatically reflected in the automated tests; this minimizes the maintenance effort. This trend also shows that tool developers don't give up trying to make test design and test automation more visual. So, generally, everything moves closer to code-less automation. This brings us to several potential framework types which can form the next generation of test automation. Some of the candidates are:

  • Model-based frameworks
  • UI Driven Development (UDD) frameworks
  • Executable Requirements
Now let's try to take a closer look at all of them.

Model Based Frameworks

The entire application under test can be schematically represented as a directed graph where nodes are associated with application states and edges correspond to actions leading to new states (the original post illustrated this with a diagram of such a model).

The idea is that we do not define any specific test scenario. We describe the model, and the engine itself generates all the linear scenarios going from some initial state to the final one. This is not a new approach, and there are a lot of tools which already support it; the model-based testing entries in the references are a very good overview to get started.

Why can it potentially be promising? Well, it simply provides a visual and readable representation of the system under test, so everyone can see how it works. It can also optimize test automation effort, as automated tests are generated from the model: once we change something in the model, all the tests are updated automatically. Also, we get quite good coverage at all times; even adding a single state can increase the number of tests quite seriously (the more edges and nodes we have, the more paths can potentially go through the new state, and the more tests are generated as a result).

UDD Frameworks

The initial idea was described in the Future of Test Automation Tools & Infrastructure post by Anand Bagmar. The core point is that we use visual components instead of coding. E.g. we take the application source code and build a model of the UI, including screens, maybe some functionality, and their UI components (the original post illustrates this with a picture).

Once we have such a representation, it can serve as a kind of repository of test elements which we can drag & drop into our tests, selecting actions along the way. The major idea is to express interaction with the UI using a pre-defined set of visual interaction patterns we simply pick from a palette; everything else is driven by the engine. Well, it still looks like coding, just a bit more visual.

Executable Requirements

I described this approach in the Automated Testing: Moving to Executable Requirements post. The idea is: since we can already generate automated tests based on a test scenario description (e.g. the way Cucumber does it), why don't we generate the test scenarios themselves from some formal descriptions? In some cases it's quite doable, since for typical input formats we should perform a typical set of tests (positive, negative, boundary values). So, eventually, we do not write tests; we define requirements and specifications in some formalized representation, which becomes the source for generating test scenarios, which in turn become the source for generating automated test cases. All we have to do is write the glue code implementation (as we do for Cucumber and similar engines).

What advantages does it give us? Well, the BDD approach (which underlies executable requirements) itself decreases coding effort thanks to re-use of text instructions: the more we re-use, the more time we save on coding. Now imagine an even more compact structure which expands into the entire set of scenarios with all the necessary data generated. All we need to do is describe the set of data item relationships and the basic flows for success and error conditions. Since this is plain text, it can be stored anywhere text can be read; we can use a common documentation storage like Confluence and use that document as the requirements document. Also, this way we always get 100% coverage of requirements by tests and auto-tests.

I've started creating an engine performing that. More detailed documentation can be found on its official documentation page.

Who's gonna win?

Main trends observed

OK. These are just several approaches people consider to be the future of test automation. These approaches are pretty different, but there are some common things in all of them:

  • Test automation is seen as becoming more visual
  • Test automation is seen as targeting non-technical people more
  • Test automation is expected to be more aligned with the application under test, which becomes the source for generating some test resources
  • In general, more abstractions are expected above code level in test automation frameworks
The future will show what works and what doesn't. But we can observe how similar things played out in other areas to predict the future of test automation.

How does it work in the programming world?

Yes, the development world is the one we can use for predicting the future of test automation. Since development is the core part of a software project, it is where the biggest investment is targeted. So, let's see how similar ideas played out in development, as a lot of experiments took place in that area as well.

Firstly, visualization. It's far from a new thing in the software development world; there are long-standing lists of visual programming tools to prove it. Take the software from such a list, then go to any job site like monster.com, indeed.com or LinkedIn (whichever is more convenient for you) and try to find jobs related to those technologies. After that, compare the result with the number of jobs for Java, C#, JavaScript, Python or C++. You'll see that visual programming tools are far from mainstream. Why does this happen? There are many reasons; just some of them are:

  1. Visualization really works when it is domain-specific. Actually, this is not just about software development: musical notation is another example of information visualization, graphically representing the sounds to be played. A similar thing happens in software development. We may have specialized software for making games, producing video or modelling physical processes, but try to apply the same software to other domains and you'll see it's really hard. It's generally hard to define a common visual programming language applicable to everything.
  2. Most visual diagrams look great and are convenient to process only while they stay small. But what happens when the model becomes more complicated (the original post showed an example of a huge, tangled model diagram)? What if the system under test is bigger than that?
  3. Sometimes people say "a picture speaks a thousand words". Yeah, cool. But how many of you can paint pictures so that other people can understand all those "thousand words" you were going to say? Would the picture be more compact than the words? If it is some structure or object description, then maybe yes. But sometimes words are better than pictures, especially for instructions. If there are any doubts, make an experiment: take the first thousand words of this post and express them as a picture with a minimal number of words, say 50 (this limit is needed to avoid cheating, otherwise you could make a screenshot and present it as the picture). Then send it to other people and see whether they can recover the information from the picture.
  4. Since there is no common standard of graphical representation for everything, it's hard to perform activities like refactoring or maintenance, as the system should be readable and clear to anyone who works with it.
So, generally speaking, visualization is used in some areas and some parts of development, but it is not as universal as programming code, at least at the moment. Maybe something will change in the future, but for now that's the way it is. I have the same expectations for test automation: visualization will work in some areas, but it won't become mainstream in the near future, at least not before the same happens in development.

The second item is targeting non-technical people. Development has already been through that, and it was called Rapid Application Development. Tools supporting it still exist and are used quite effectively, but mainly they optimize routine parts like UI design; UI creation is usually treated as a kind of "monkey job". But apart from the UI there is a huge layer of business logic and various other levels which require advanced programming. And this is not something that's going to change soon across the entire development front, as here we again deal with domain specifics. Yes, maybe there are (or will be) specialized wizards targeted at specific areas, but for now development still requires programming.

Another side of it is the question: why do we need non-technical people to perform technical tasks? Seriously, why should we target something at people who aren't capable of doing it by default? Being a technical person means knowing HOW to do things, not just WHAT to do. And this HOW includes not just how to express specific instructions or behaviour, but also how to do it effectively, fitting style, architecture and performance requirements. There are many ways to do the same thing, but only a few of them are really good.

OK, going to the next item: test automation is targeted to be more aligned with the application under test. In this area things are pretty good in the development world. Developers write tests for the code they create and use them to make sure nothing is broken by the next change. This creates strong alignment between the tests and the application under test.

The same applies to the expectation of more abstractions above code level. Yes, testing frameworks, even for unit tests, have become more complicated, providing mocks and more advanced abilities to group and structure tests. Even more, with the growing popularity of WebDriver and similar code-based engines, development and test automation can be truly aligned in terms of technology stack.

This is how it works in the development world. As we can see, a lot of the "enhancements" above turned out to be more damaging than useful there. Maybe in some cases we simply haven't grown enough to make a serious shift, but for test automation it's a good indicator that they won't work here either.

So who is the winner?

And now the moment of truth: what kind of test automation approaches should we expect in the near future?

First of all, the test automation world isn't a monolith. There is code-level test automation, normally performed by developers, and system-level test automation, performed by dedicated QA/Test Automation/SDET teams. Of course, some project teams try to build a process where all roles are mixed, but to me that's just the opposite extreme to independent verification and testing, where development and testing teams are completely separated; the optimal distribution of activities is somewhere in the middle. So yes, close communication between the development and testing streams is something that works (and I don't see any reason for it to stop working in the future), but there will always be areas where one stream holds a monopoly. There should still be areas of responsibility: when responsibility is shared between everyone, it eventually turns out that no one is really responsible, and that is bad. So, I still expect the development and testing streams to have their dedicated areas of test automation: developers mainly work on code-level tests while the testing stream works on system-level tests.

Developer testing shouldn't go beyond the structured and data-driven approaches. There are several reasons for that:

  • Since developers already work with programming code, there's no need to switch to other technologies just to write more programming code.
  • The high-level abstractions provided by the keyword-driven approach don't bring real value to developers, as their tests target specific code rather than business requirements. Additionally, developers have smaller bandwidth for testing activities, so they have to spend their time effectively and look for simpler solutions rather than play with complicated frameworks.
  • Tests created by developers are not only meant to verify functionality; they also serve as sample working code. Developer tests mainly make sure that code works properly when called from other methods, and they let us test and debug it separately from the entire system.
So, for code-level testing I expect essential changes only if the programming paradigm changes (e.g. when a new paradigm is introduced and becomes mainstream); in that case testing should reflect it. Also, some new approaches may appear for relatively new but already popular technologies (mobile, cloud computing etc.): there's a lack of a normal tool set for testing in those areas, so it will be created or upgraded.

As for system-level testing (including UI), there is already a trend towards using the same technology stack as development. Mainly this happens because the new automation tools which become mainstream and eat an essential portion of the vendors' market are based on APIs of mainstream programming languages. This happened with the Web, and it is happening with mobile and with the back-end. Other areas won't stay apart, except maybe some niche technologies where vendors will keep their clear territory.

As for the framework types involved, what I don't really expect is the popularization of visual programming tools. They didn't take off in development after many years, so why should they take off in testing, which simply runs a few years behind? And how would they take off for non-UI testing, where in most cases using an API is more appropriate and more efficient?

Also, I don't expect people to give up on the keyword-driven approach, as alignment between requirements and tests is an essential yet routine part of the work. Maybe BDD solutions like Cucumber won't quite take off; well, then something else will appear. The keyword-driven approach isn't new, and from time to time people use it in different forms for system-level tests. This also reflects the fact that QA and test automation roles tend to merge into one, so that test automation is no longer a separate activity but just another "gun" which can shoot. It means there will still be a need to express test instructions in some readable form, and for now the Gherkin standard looks like the most appropriate candidate. Thus we bind test scenarios to automated tests; after that, the next logical step seems to be binding requirements/specifications to the test scenarios, as their synchronization is still a routine and quite boring part of test automation.

At the same time, structured framework types are still valid options and quite an optimal approach for test automation, so they won't disappear and will keep an essential share of test automation projects.

Anyway, this is just another attempt to predict the future by extrapolating the present, which is not quite right. Who knows? Maybe in a few years someone will invent another approach which will explode test automation and bring it to a completely new level. Will it be so, or will it be something different? Well, there's only one way to find out :-).

References

  1. Resource dedicated to test automation frameworks
  2. Automation Framework Architecture for Enterprise Products: Design and Development Strategy by Anish Shrivastava
  3. Types of Test Automation Frameworks
  4. Test Automation Framework Designs Presentation
  5. Pros and cons of different automation frameworks
  6. Types of Automation Frameworks by Taylor Reifurth
  7. The Evolution of Test Automation Frameworks
  8. Evolution of test automation frameworks - Master Chef's choice: from spaghetti to cucumber
  9. Future of Test Automation Tools & Infrastructure by Anand Bagmar
  10. The evolution of test automation by Narayana Maruvada
  11. Need for Test Automation & Evolution of Frameworks
  12. Java Unit Testing - JUnit & TestNG
  13. SlideShare: Test Automation 3.0 - Evolution of Rapid Application Testing
  14. Model-based testing (MBT) by Zoltan Micskei, Department of Measurement and Information Systems, Budapest University of Technology and Economics (BME)
  15. Conformiq: Model-Based Testing
  16. Model-Based Testing
  17. Applied Model Based Testing - Experiences & Examples by Simon Ejsing
