Tuesday, 16 December 2014

Automated Testing: Moving to Executable Requirements


Many test automation tools and frameworks aim to combine the technical and non-technical sides of test automation, mainly by merging test design with its automated implementation. This lets us delegate test automation to people without programming skills and makes the solution more efficient through maximal re-use of functionality. Most test automation tools are capable of supporting some sort of DSL which reflects the business specifics of the system under test. A lot of test automation software has been created to provide such capabilities, and the main thing they all achieve is blurring the border between manual and automated testing. As a result, we can usually control the correspondence between test scenarios and their automated implementation.

But that's not enough: we also have requirements, and whenever they change we have to spend time making sure they stay in line with the tests and their automated implementation. So we need an approach which combines requirements, tests and auto-tests into something uniform. One such approach is called Executable Requirements. In this post I'll describe the existing solutions and some possible directions to move in.

What Should Executable Requirements Look Like?

Before describing any existing approaches/solutions and potential ways to go, we should clearly identify what we actually expect in the end. We have to define the attributes or features that must be present before we can say we definitely use the Executable Requirements approach. So, what are those items?

There Should Be Requirements

This is an insanely obvious point, but it is really where we should start: in order to implement the Executable Requirements approach we need requirements. So what are requirements?

Requirements are a structured, formalized description of the software under development and the way it is supposed to work. Normally they are represented as a set of feature descriptions with their expected behaviour.

This is an improvised definition, but the main thing to know is that there are other approaches to defining expected system behaviour. E.g. we can define it using a graphical model, or base it on a similar application or on previous experience in general. All those approaches, and the ways to use them, are described in the ISO/IEC/IEEE DIS 29119-4:2013 standard.

So, we have requirements if:

  1. Every expected behaviour of the system under development is described
  2. The description is in textual form (possibly combined with some graphics, but the main information is stored as readable text)
  3. All descriptions fit good-requirements criteria, including Traceability, which is useful for covering requirements with tests

Requirements Should Be Executable

Well, it's another obvious point; otherwise it wouldn't be Executable Requirements. What do we expect here? In the context of the Executable Requirements approach, a requirement can be treated as executable if it is (or it contains) a shared part which can be executed by some software in order to test the system under development. To make a long story short, the requirement description is actually a kind of source code for automated testing, or at least it refers to some resource which is an automated test.

A lot of test management systems have a similar feature: a form for a requirement with a linked test identity and a linked automated test. Just take any of these:

  1. HP Quality Center
  2. Microsoft Test Manager
  3. Rational Quality Manager
  4. SilkCentral Test Manager

But that's not enough to make requirements executable. All those systems provide an interface for linking a requirement to some executable item, but they don't contain that resource itself. This means that if we change the requirement, none of the automated tests is affected. Even worse, we may link wrong or empty tests. We'll still be able to run everything linked to the requirement, but it may be completely irrelevant to it.

Tests Should Be Sensitive To Requirements Change

That's one of the most important things! Executable Requirements are not about linking requirements to some executable tests; they are about making the requirement itself the source of the executable test. Either the entire description or some part of it should be the source for test generation. This resolves the problem of keeping tests and requirements synchronized: if the requirements change, the tests are updated accordingly.

What Benefits Should We Obtain?

Well, all the above should serve some goals we want to reach. They are:

  1. Requirements, tests and auto-tests are always kept in sync, so our testing is always up to date with the requirements
  2. Tests are derived from requirements, so we always have nominal 100% requirements coverage
  3. Testing effort is minimized, as requirements definition, test design and automated test implementation collapse into one activity
  4. The human factor is minimized, as some extra activity is done automatically

Well, that sounds nice. But let's take a look at what we have in practice.

Existing Solutions Overview

Executable Requirements are not something new, and there are a number of tools/engines which implement something close to them. I don't want to review all of them; instead, I'll identify several major groups of solutions, joined by a common approach with common advantages and gaps.

FitNesse-like tools

This group of tools provides infrastructure in the form of a site (or an add-on to documentation storage software) with the ability to execute a specific macro or some code in the background, based on the text of a document; FitNesse itself is the canonical example.

All those tools have some things in common. They all provide an interface for working with the documentation, and those documents may contain parts linked to executable code. So, eventually, requirements are represented as documents (simple web pages) where some parts may run executable code linked to the page.


Pros:

  1. Requirements are represented as real documentation, well formatted and readable
  2. Everyone can easily trigger test execution


Cons:

  1. There's usually no direct integration with CI or source control, so executable code has to be deployed on the server somehow
  2. Tests aren't easily configurable or portable to different workstations; e.g. we can hardly run the same set of tests on multiple remote machines in parallel as part of a continuous delivery process
  3. A human has to trigger the run

Cucumber-like engines

Another family of engines still operates with a textual representation of executable tests, but pays more attention to the engine itself rather than to the infrastructure; Cucumber is the main representative.


Pros:

  1. Closely integrated with the technical side of the entire infrastructure
  2. Portable to different workstations
  3. Flexible to configure
  4. Test execution requires minimal human interaction


Cons:

  1. Although the source of the tests is still textual, it is not represented as a requirement document shared with everyone; the source is usually a separate resource
  2. Running the tests and making updates requires technical knowledge and access to the test solution sources

But Is That Really Executable Requirements?

My answer is: no. All the above solutions mainly provide a way to execute tests based on text instructions, plus an extra ability to combine tests and requirements into a single document. Well, that works pretty well for acceptance tests, which are normally few in number and have quite short scenarios.

But that's not enough for full-featured testing. Also, if we look at how requirements are defined, we'll see that they normally don't contain test scenarios. Maybe some use cases are available, but we won't find an entire test suite defined in the same document. So most existing solutions just provide an ability to combine requirement definitions with executable test descriptions. In other words, these are not Executable Requirements; they are executable test scenarios added to requirement descriptions. The part which is the source for automated tests doesn't belong to the requirement itself, which means the requirement part is not executable.

How Should Real Executable Requirements Look?

Yes, it seems existing solutions don't provide the true Executable Requirements approach. At the same time, they show the way it should look.

The main missing piece is the ability to generate tests based on the requirement definition, which is normally text. We know how to bind text to executable code (Cucumber and all similar solutions), but we need an additional step which would generate test scenarios based on requirement descriptions.
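To illustrate the binding part that Cucumber-like engines already solve, here is a minimal sketch (illustrative, not a real library; all names are mine) of how a text step can be mapped to executable code through a regex registry:

```python
import re

# Minimal sketch of the text-to-code binding used by Cucumber-like engines:
# each step definition is a regex mapped to a function; the engine matches
# a requirement line against the registry and calls the bound function.
STEP_REGISTRY = []

def step(pattern):
    """Decorator that registers a step implementation under a regex."""
    def register(fn):
        STEP_REGISTRY.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'I enter "(.+)" value as E-mail')
def enter_email(value):
    # A real solution would drive the UI here; this sketch just echoes
    # the field name and the captured value.
    return ("E-mail", value)

def run_step(line):
    """Find the first matching step definition and execute it."""
    for pattern, fn in STEP_REGISTRY:
        match = pattern.fullmatch(line)
        if match:
            return fn(*match.groups())
    raise LookupError("No step definition for: " + line)
```

With this in place, `run_step('I enter "john@test.com" value as E-mail')` dispatches to `enter_email`. The missing piece discussed above is not this dispatch, but the step that produces the scenario lines themselves.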

Is that something we can do? Actually, yes. When we design tests, many of the cases we write are typical. E.g. when we know that some record should have a unique combination of key fields, we definitely need an additional test which checks what happens when we try to create a non-unique record. Another example is validating input into a field which requires a specific format (e.g. an e-mail field). In this case we should check a value which matches the format, then a value which violates it, a very long string, special characters, spaces, and case sensitivity. The main point is that this is a typical check list applicable to many different cases.
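That check list can itself be mechanized. The following is a hypothetical helper (a sketch under my own naming, not an existing API) that derives the standard invalid variants from one value known to match a field's format; the e-mail pattern is the one used in the form example below:

```python
import re

# The e-mail format from the form example; fullmatch means the whole
# value must fit the pattern.
EMAIL_FORMAT = re.compile(r'(\w+)[@](\w+)[.](\w+)')

def format_check_list(valid_value):
    """Derive the typical check-list inputs from one known-valid value."""
    return {
        "valid": valid_value,           # should be accepted
        "empty": "",                    # should be rejected
        "very_long": valid_value * 50,  # repetition breaks this format
        "special_chars": "!#$%^&*",     # should be rejected
        "spaces": "a b@c d.e f",        # should be rejected
    }

def is_valid_email(value):
    return EMAIL_FORMAT.fullmatch(value) is not None
```

A generation engine could apply a helper like this to every format-constrained field and turn each variant into a test data row.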

OK, let's try to formalize it a bit. For most test scenarios we have common items such as:

  • Action to perform - the set of operations we perform as part of the test scenario before reaching the state where we compare the expected and actual state using a check list
  • Input data - the set of values we use during action execution; here we can identify the format and other constraints of every input value we should use
  • Expectations on success/failure - the check list of verifications for the positive or negative scenario
Let's see it with an example. Imagine we have a form with two text fields (e-mail and phone number) and a Submit button.

If we enter properly formatted information, we'll see the Welcome screen; otherwise we'll see an error message. E-mail and phone number fields usually have specific formats. So, based on the above classification, we can describe the form behaviour in the following way:
  • Representation 1
        I enter "<E-mail>" value as E-mail
     And enter "<Phone Number>" value as Phone Number
     And click on "Submit" button
        | Field        | Type   | Format                |
        | E-mail       | String | (\w+)[@](\w+)[.](\w+) |
        | Phone Number | String | \+\(\d{1,3}\) \d{8}   |
    On Success:
        I should see the Welcome screen is open
    On Failure:
        I should see the error message
    This still looks quite formalized. But look: this is not a dedicated test; it is something like a technical specification. We have the major operation flow, the data format definitions, and different behaviour depending on positive or negative input. These aren't tests yet; they are a basis for tests to be generated. Imagine we could generate the following tests from the above description:
  • Representation 2
    Feature: Sample Feature
      Scenario Outline: positive test
        When I enter "<E-mail>" value as E-mail
        And enter "<Phone Number>" value as Phone Number
        And click on "Submit" button
        Then I should see the Welcome screen is open
          | Phone Number   | ValidInput | E-mail        |
          | +(81) 23560730 | true       | |
      Scenario Outline: negative test
        When I enter "<E-mail>" value as E-mail
        And enter "<Phone Number>" value as Phone Number
        And click on "Submit" button
        Then I should see the error message
          | Phone Number                   | ValidInput | E-mail                         |
          |                                | false      |                  |
          | +(306) 48051823+(306) 48051823 | false      |                  |
          | \\+\\(\\d{1,3}\\) \\d{8}       | false      |                  |
          | +(81) 23560730                 | false      |                                |
          |                                | false      |                                |
          | +(306) 48051823+(306) 48051823 | false      |                                |
          | \\+\\(\\d{1,3}\\) \\d{8}       | false      |                                |
          | +(81) 23560730                 | false      | |
          |                                | false      | |
          | +(306) 48051823+(306) 48051823 | false      | |
          | \\+\\(\\d{1,3}\\) \\d{8}       | false      | |
          | +(81) 23560730                 | false      | (\\w+)[@](\\w+)[.](\\w+)       |
          |                                | false      | (\\w+)[@](\\w+)[.](\\w+)       |
          | +(306) 48051823+(306) 48051823 | false      | (\\w+)[@](\\w+)[.](\\w+)       |
          | \\+\\(\\d{1,3}\\) \\d{8}       | false      | (\\w+)[@](\\w+)[.](\\w+)       |
    This already looks like a test set covering different input options, and it is a runnable Cucumber feature. So, imagine you define a technical specification in the format of Representation 1, and there's an engine which generates Representation 2 from it. Representation 2, in turn, is a runnable test scenario which can be executed by Cucumber or any similar engine.

    If we have that, we don't need to define most routine scenarios ourselves, just the rules for scenario generation. We can wrap such a description into some standard form which can be used as a requirement or specification document. This way, requirements are the basis for test scenario generation, and test scenarios are the basis for generating automated tests. If we change something in the requirements, the whole automated solution changes and reacts to the modifications. That would make them not just requirements but truly Executable Requirements.
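As far as I know no such engine exists yet, but its core can be sketched. The code below is speculative (every function and parameter name is mine, not an existing API): it takes the action steps, the success/failure checks, and pre-computed data rows, and emits Gherkin text shaped like Representation 2:

```python
# Speculative sketch of the proposed generation engine: turn a structured
# requirement (steps, checks, data rows) into runnable Gherkin text in the
# shape of Representation 2. All names are illustrative, not a real API.
def generate_feature(name, steps, checks, valid_rows, invalid_rows):
    def outline(title, check, rows):
        columns = list(rows[0].keys())
        table = ["| " + " | ".join(columns) + " |"]
        table += ["| " + " | ".join(r[c] for c in columns) + " |" for r in rows]
        return (["  Scenario Outline: " + title]
                + ["    " + s for s in steps]
                + ["    Then " + check]
                + ["      " + t for t in table])

    lines = ["Feature: " + name, ""]
    lines += outline("positive test", checks["success"], valid_rows)
    lines += [""]
    lines += outline("negative test", checks["failure"], invalid_rows)
    return "\n".join(lines)
```

Feeding it the steps and generated data rows from the example above would produce a feature file that Cucumber (or a similar engine such as behave) can pick up and run.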


Everything described above is my opinion and my vision of future growth in the direction of Executable Requirements. Some of the elements were described only as an idea, but it is something we should definitely be able to reach, or at least it's the next step to take. It doesn't really matter which form it takes; the main thing is that we should be able to generate test scenarios based on requirements represented as structured, readable text.


    1. Can you suggest any tools available for automated testing based on executable requirements?

      1. It can be anything which allows you to represent your tests as some form of specification. It's a general approach rather than something tool-specific. Just google "executable requirement specifications" to get a list of examples