tag:blogger.com,1999:blog-25323027632158444162024-03-13T20:42:18.430+00:00Test Automation from insideTest automation shouldn't be a black box between the code and actual results. So, let's dig the code to see how it is made insideMykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.comBlogger62125tag:blogger.com,1999:blog-2532302763215844416.post-36441580780563008172017-12-31T19:47:00.001+00:002017-12-31T19:47:09.206+00:00Video Course: Data-Driven Testing in SeleniumThis is the Data-driven testing in Selenium video course created by me.
<div class="separator" style="clear: both; text-align: center;"><a href="https://www.packtpub.com/web-development/data-driven-testing-selenium-video" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://www.packtpub.com/sites/default/files/bookretailers/V08632_MockupCover.png" width="324" height="400" /></a></div>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-43912393985126541372017-10-25T13:12:00.001+01:002017-10-25T13:12:46.169+01:00Video Course: Automated UI Testing in AndroidThis is the Automated UI Testing in Android video course created by me. It is the continuation of a series of courses dedicated to step-by-step UI automation framework development.
Just click on the image below for more details.
<div class="separator" style="clear: both; text-align: center;"><a href="https://www.packtpub.com/application-development/automated-ui-testing-android-video" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://www.packtpub.com/sites/default/files/bookretailers/V08795_MockupCover.png" width="324" height="400" /></a></div>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-36201980490810552992017-05-02T11:33:00.000+01:002017-05-02T11:33:41.923+01:00Video Course: Automated UI Testing in C#This is the Automated UI Testing in C# video course created by me. It is the continuation of a series of courses dedicated to step-by-step UI automation framework development.
Just click on the image below for more details.
<div class="separator" style="clear: both; text-align: center;"><a href="https://www.packtpub.com/application-development/automated-ui-testing-c-video" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://www.packtpub.com/sites/default/files/bookretailers/V07255_MockupCover.jpg" width="324" height="400" /></a></div>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-30430578575668508002016-12-29T10:38:00.000+00:002016-12-29T10:38:15.468+00:00Video Course: Automated UI Testing in JavaOK. This is the Automated UI Testing in Java video course created by me. Just click on the image below for more details.
<div class="separator" style="clear: both; text-align: center;"><a href="https://www.packtpub.com/application-development/automated-ui-testing-java-video" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://www.packtpub.com/sites/default/files/bookretailers/V06247_MockupCover.jpg" width="324" height="400" /></a></div>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-78626634382454350262015-10-18T22:49:00.000+01:002016-10-02T20:22:30.858+01:00Cucumber JVM: Advanced Reporting 3. Handling FileNotFound or Invalid JSON Errors<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.passed {background-color:lightgreen;font-weight:bold;color:darkgreen}
.failed {background-color:tomato;font-weight:bold;color:darkred}
.undefined {background-color:gold;font-weight:bold;color:goldenrod}
</style>
<title>Cucumber JVM: Advanced Reporting 3. Handling FileNotFound or Invalid JSON Errors</title>
</head>
<body>
<h1>Introduction</h1>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-aXHbB9nPPeI/VUZUwaKi8TI/AAAAAAAAAr0/yiXtd5WALPo/s1600/cucumber.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-aXHbB9nPPeI/VUZUwaKi8TI/AAAAAAAAAr0/yiXtd5WALPo/s200/cucumber.png" /></a></div>
</p>
<p>
Since Advanced Cucumber JVM reporting was <a href="http://mkolisnyk.blogspot.com/2015/05/cucumber-jvm-advanced-reporting.html">initially introduced</a> and then <a href="http://mkolisnyk.blogspot.com/2015/06/cucumber-jvm-advanced-reporting-2.html">enhanced</a>, the most frequent question has been about the error that appears when the input file is not found or is improperly formatted.
Normally such an error is accompanied by output like:
<pre class="code">
java.io.IOException: Input is invalid JSON; does not start with '{' or '[', c=-1
at com.cedarsoftware.util.io.JsonReader.readJsonObject(JsonReader.java:1494)
at com.cedarsoftware.util.io.JsonReader.readObject(JsonReader.java:707)
at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.readFileContent(CucumberResultsOverview.java:81)
...
</pre>
or
<pre class="code">
java.io.FileNotFoundException
at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.readFileContent(CucumberResultsOverview.java:76)
at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.executeFeaturesOverviewReport(CucumberResultsOverview.java:189)
...
</pre>
As this is one of the most frequent questions, I decided to create a separate post explaining its causes and the way to fix them.
</p>
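<p>
If you hit one of these errors, a quick way to tell the two causes apart is to check the input file yourself before triggering report generation. Below is a minimal sketch replicating the checks that fail in the stack traces above; the class and method names here are mine for illustration, not part of the library:
</p>
<pre class="code">
import java.io.File;
import java.nio.file.Files;

public class ReportInputCheck {
    // Hypothetical pre-flight check: returns false when report generation
    // would fail with FileNotFoundException (missing or empty file) or with
    // "Input is invalid JSON" (content not starting with '{' or '[').
    public static boolean isUsableJsonReport(String path) throws Exception {
        File file = new File(path);
        if (!file.exists() || file.length() == 0) {
            return false;
        }
        String content = new String(Files.readAllBytes(file.toPath())).trim();
        return content.startsWith("{") || content.startsWith("[");
    }
}
</pre>
<p>
Running such a check right before report generation makes the failure reason obvious: either the path is wrong, or the file has not been written yet.
</p>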
<a name='more'></a>
<h1>Major causes of the problem</h1>
<p>
There are a few reasons why such errors appear. Mainly they are:
<ul>
<li> Report generation is called before the source report is generated
<li> Report generation is targeted at the wrong report location
</ul>
Let's take a closer look at all of those items.
</p>
<h2> Report generation is called before the source report is generated</h2>
<p>
Some people try to add report generation as part of hooks or other post-actions, which are mainly annotated with <b>@After</b>, <b>@AfterClass</b> or similar annotations. If you use any form of hooks or built-in post-actions, you should know two major things:
<ol>
<li> <b>Cucumber runner doesn't call default JUnit Before- and After- methods</b> - that's why there are pre-defined hooks with custom annotations for that.
<li> <b>The Cucumber Reporting solution uses reports generated after the entire Cucumber suite run completes, including post-condition events</b> - it means that if you add report generation code into an <b>@After</b> method, the required input file may already exist, but it won't be populated with data yet. The data appears only when Cucumber completes its execution entirely.
</ol>
So, in order to generate Advanced Cucumber reports properly
<div class="rule">
DO NOT add report generation instructions into hooks or any other standard after-methods.
</div>
</p>
<h2> Report generation is targeted at the wrong report location</h2>
<p>
In some cases we may use improper file references. E.g. we have a Cucumber test class like:
<pre class="code">
@RunWith(Cucumber.class)
@CucumberOptions(plugin = { "html:target/cucumber-html-report",
<span class="mark">"json:target/cucumber.json"</span>, "pretty:target/cucumber-pretty.txt",
"usage:target/cucumber-usage.json",
"junit:target/cucumber-results.xml"
},
features = { "./src/test/java/com/github/mkolisnyk/cucumber/features" },
glue = { "com/github/mkolisnyk/cucumber/steps" },
tags = {"@consistent"})
public class SampleCucumberTest {
}
</pre>
and then somewhere we generate a detailed report like:
<pre class="code">
CucumberDetailedResults results = new CucumberDetailedResults();
results.setOutputDirectory("target/");
results.setOutputName("cucumber-results-width");
<span class="mark">results.setSourceFile("./src/test/resources/cucumber.json");</span>
results.setScreenShotLocation("../src/test/resources/");
results.setScreenShotWidth("200px");
results.executeDetailedResultsReport(true, false);
</pre>
The highlighted code fragments should point to the same location. Otherwise we either get a FileNotFoundException or a report generated from the wrong data.
</p>
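<p>
One simple way to prevent such mismatches is to define the JSON report location exactly once and derive both values from it. This is just a sketch (the helper class is mine, not part of the library); note that because the field is a compile-time constant, the concatenated plugin string can even be referenced from the annotation:
</p>
<pre class="code">
public class ReportPaths {
    // Single source of truth for the Cucumber JSON output location
    // (hypothetical helper; adjust the path to your project layout).
    public static final String CUCUMBER_JSON = "target/cucumber.json";

    // Value for the CucumberOptions plugin entry
    public static final String JSON_PLUGIN = "json:" + CUCUMBER_JSON;

    // Value to pass to setSourceFile() during report generation
    public static String sourceFile() {
        return CUCUMBER_JSON;
    }
}
</pre>
<p>
With this in place, the plugin entry and the report source file cannot drift apart.
</p>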
<h1>Major Checkpoints and Resolution</h1>
<p>
So, what do we need to overcome all those problems? There are several steps you should go through to make sure that you include reporting in the proper way.
</p>
<h2>Make sure you use proper reporting library version</h2>
<p>
New features keep coming, and new versions are released as soon as a certain number of features has been implemented. At the moment this post is being written the current version is <b>0.0.11</b> and it can be included as a Maven dependency using the following construction:
<pre class="code">
<dependency>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>cucumber-reports</artifactId>
<version>0.0.11</version>
</dependency>
</pre>
or Gradle dependency:
<pre class="code">
'com.github.mkolisnyk:cucumber-reports:0.0.11'
</pre>
Also, you can find the latest version of the reporting library in the Maven Central repository via any available web viewers, e.g. <a href="http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22cucumber-reports%22">at search.maven.org</a> or <a href="http://mvnrepository.org/artifact/com.github.mkolisnyk/cucumber-reports">at mvnrepository.org</a>.
</p>
<h2>Use extended Cucumber reporting functionality</h2>
<p>
The library itself includes a Cucumber runner extension which triggers report generation at the proper time. Here is an example of Extended Cucumber runner use:
<pre class="code">
import org.junit.runner.RunWith;
import com.github.mkolisnyk.cucumber.runner.ExtendedCucumber;
import com.github.mkolisnyk.cucumber.runner.ExtendedCucumberOptions;
import cucumber.api.CucumberOptions;
@RunWith(<span class="mark">ExtendedCucumber.class</span>)
<span class="mark">@ExtendedCucumberOptions(jsonReport = "target/cucumber.json",
retryCount = 3,
detailedReport = true,
detailedAggregatedReport = true,
overviewReport = false,
toPDF = false,
outputFolder = "target")</span>
@CucumberOptions(plugin = { "html:target/cucumber-html-report",
"json:target/cucumber.json", "pretty:target/cucumber-pretty.txt",
"usage:target/cucumber-usage.json",
"junit:target/cucumber-results.xml"
},
features = { "./src/test/java/com/github/mkolisnyk/cucumber/features" },
glue = { "com/github/mkolisnyk/cucumber/steps" },
tags = {"@consistent"})
public class SampleCucumberTest {
}
</pre>
So, by using this extension we make sure our report processing is done only after the standard Cucumber reports have already been generated.
</p>
<h2>Make sure both CucumberOptions and ExtendedCucumberOptions refer to the same report files</h2>
<p>
Advanced reporting is mainly post-processing of standard Cucumber reports, so the values in the <b>CucumberOptions</b> and <b>ExtendedCucumberOptions</b> annotations must reference the same files consistently. Mainly this concerns the JSON file path: for <b>CucumberOptions</b> it is an output file, while the <b>ExtendedCucumberOptions</b> annotation uses the same file as input.
</p>
<p>
Additionally, make sure that
<div class="rule">
The usage report operates on an output file different from the one required by other reports. Also, the usage report is not generated for dry runs.
</div>
So, pay attention to which files you refer to.
</p>
<h1>Summary</h1>
<p>
By following the above rules and getting a basic understanding of how this solution works, we can avoid a lot of trivial problems. Also, the <a href="http://mkolisnyk.github.io/cucumber-reports/">Cucumber Reporting Documentation</a> pages are available and will be updated as new features arrive, so you can refer to them as well.
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com39tag:blogger.com,1999:blog-2532302763215844416.post-41564028725480088952015-09-13T23:52:00.001+01:002015-09-13T23:57:23.744+01:00Test Automation Best Practices<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.example {border:1px dashed black;background-color:#EEEEEE;margin:1em;padding:1em}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {border:1px solid black;background-color:#CCCCDD;}
td{border:1px solid black;}
table{border:1px solid black;border-collapse: collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.java {background-color:#b07219;font-weight:bold;color:black;border-radius:25px;border:1px solid black}
.csharp {background-color:#5a25a2;font-weight:bold;color:white;border-radius:25px;border:1px solid black}
.ruby {background-color:#701516;font-weight:bold;color:white;border-radius:25px;border:1px solid black}
.jscript {background-color:#f1e05a;font-weight:bold;color:black;border-radius:25px;border:1px solid black}
.python {background-color:#3581ba;font-weight:bold;color:white;border-radius:25px;border:1px solid black}
</style>
<title>Test Automation Best Practices</title>
</head>
<body>
<h1>Introduction</h1>
<p>
Test automation is not a brand new thing. On the contrary, it is a well-known and fairly old part of the software development process. Thus, a lot of people have had both good and bad experience with it. As a result, there is a common set of best practices which have been formulated in different ways. Different people concentrate on different aspects of test automation: some concentrate on technical parts, while others pay more attention to higher-level things. As always, the truth is somewhere in between.
</p>
<p>
So, what are the best practices in test automation?
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-U1PBU0Vviq8/VfX6EOj4AWI/AAAAAAAAAxY/XPR5RIiYX6w/s1600/Best-Practice.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-U1PBU0Vviq8/VfX6EOj4AWI/AAAAAAAAAxY/XPR5RIiYX6w/s640/Best-Practice.jpg" /></a></div>
For this post I've collected several posts/articles covering this topic (the list of them can be found in the <a href="http://mkolisnyk.blogspot.com/2015/09/test-automation-best-practices.html#refs">References</a> section of this post) and I'll try to combine them into one common list of practices. Mainly I'll concentrate on top-level UI automation; however, a lot of practices are fully applicable to any level of automated testing.
</p>
<a name='more'></a>
<p>
In order to list practices in a more structured way, we should define the stages where each specific practice is applicable. This will also help us identify the proper sequence to apply practices in. So, generally we can define the following test automation stages:
<ol>
<li> <b>Analysis</b> - the major stage where we define <b>what to automate</b> and the major approaches we should use for automation. The major outputs of this stage are:
<ul>
<li> The scope of test automation
<li> Priorities for test automation
<li> Major list of technologies to use
</ul>
<li> <b>Planning</b> - at this stage we define <b>who should perform automation</b> and <b>when it should take place</b>. It is based on the analysis stage results, where we should already have defined the scope and expectations. At this point we should also define how automation should work and set up the initial process. The outputs of this stage are:
<ul>
<li> Finalized scope for test automation
<li> The list of roles performed during test automation
<li> Interaction between different roles within the project
<li> Entire process and standards
<li> Goals and metrics to measure how well we achieve them
<li> Timings
</ul>
<li> <b>Design</b> - this is the stage where we start creating test scenarios. Here we create the initial scenarios which need to be covered by automation. As the output from this stage we get:
<ul>
<li> The set of scenarios to automate
<li> The set of test data we need for automation
<li> Initial framework to use for test automation
</ul>
<li> <b>Implementation</b> - this is the major stage where we define <b>how automated tests should work</b>. It is the stage where the automated implementation is developed. The major output of this stage is the set of automated tests which are ready to use.
<li> <b>Execution</b> - this is the stage where we run the automated tests and process the results. The output of this stage is the list of passed/failed tests as well as the list of problems revealed.
</ol>
Each of those stages depends on the results of the previous ones, so the earlier we resolve potential issues, the fewer problems we have at later stages. On the other hand, if we do something wrong at an earlier stage we may have serious problems later, and those problems will only accumulate. Thus, it is important not just to do things properly but to do them at the proper time.
</p>
<p>
So, let's list those practices based on different stages.
</p>
<h1>Analysis</h1>
<h2>1.1 Know your objective</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-w0PoOHJHtcA/VfX6ZOKurnI/AAAAAAAAAxg/e7_X2VZqHvw/s1600/Objective.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-w0PoOHJHtcA/VfX6ZOKurnI/AAAAAAAAAxg/e7_X2VZqHvw/s400/Objective.jpg" /></a></div>
Test automation can be applied in many different areas, aiming at different goals. It is also an investment which should pay back somehow. So, in order to get the expected payback, we should clearly understand what exactly we want to achieve. This way we know what to automate, how, and which areas to pay attention to. Depending on our goals, we may prefer a small but fast suite of tests or a wide range of tests covering a vast area of application functionality, and we should choose the approach accordingly. So, we definitely need some objectives first. But this general practice can be split into details.
</p>
<h3>1.1.1 Define Scope of Automation</h3>
<p>
This is one of the most important objectives we should define. The scope helps identify what we are going to automate and where we are at any point in time. It also gives us an idea of whether each specific module, functionality, or other area should be taken for automation or not. Generally, the scope is the answer to the major question: <b>What to automate at all?</b>
</p>
<h3>1.1.2 Perform a Cost-Benefit Analysis</h3>
<p>
Once we have the scope defined, we can identify which tests we need to have automated. One of the major factors here is the priority for automation. E.g. some part of the application is expected to be updated more frequently and thus requires more frequent testing. Another part can be more critical for the business, so it must be checked frequently. Some other parts can be covered quickly with minimal effort. All those factors (in addition to many others which may appear) help define what should be automated first, what next, etc. For this purpose we should estimate the effort we put into automating each specific part and compare it to the benefits we get afterwards. If we can spend a small amount of time and cover a huge application area, why shouldn't we do that first, freeing our resources from that routine part? At the same time, even if some application part is critical, if its automation requires a lot of effort and resources then we should probably take something else first. In any case, we shouldn't blindly automate everything we find in our way. We should always care about the value each specific test can bring us.
</p>
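<p>
The pay-back reasoning above can be put into a rough formula: if automating an area costs a fixed initial effort, and each automated run saves the difference between manual and automated execution effort, the break-even point is the number of runs after which the accumulated savings exceed the initial cost. A back-of-the-envelope sketch (all figures and names here are illustrative, not from any real project):
</p>
<pre class="code">
public class AutomationRoi {
    // Break-even estimate: number of regression runs after which
    // automation pays back its initial cost.
    // All inputs are in the same unit (e.g. person-hours).
    public static int breakEvenRuns(double automationCost,
                                    double manualRunCost,
                                    double automatedRunCost) {
        double savingPerRun = manualRunCost - automatedRunCost;
        if (savingPerRun <= 0) {
            return -1; // automation never pays back for this area
        }
        return (int) Math.ceil(automationCost / savingPerRun);
    }
}
</pre>
<p>
For example, if automating a module takes 40 hours, a manual pass takes 8 hours, and each automated pass costs 0.5 hours of maintenance and analysis, the investment pays back after the sixth run. An area where an automated run costs more than the manual one never pays back, which is exactly the kind of candidate to postpone.
</p>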
<h3>1.1.3 Start with the easy things first</h3>
<p>
Easy things require less effort than complex things (that is what makes them easy). There are many reasons to start with the easy things first. Some of them are:
<ul>
<li> When we just start automation we may not be aware of many tricky things we will encounter later, and easy things let us concentrate on the basic aspects of our test automation solution.
<li> Easy things are a better way to gain the necessary experience before moving on to more complicated things.
<li> We should never forget that test automation should bring value, and value normally comes from tests being performed without human interaction. The more tests we develop, the more testing effort we can delegate to machines, the earlier we return our investment in test automation, and the more value we bring with it. Easy things require less time and fewer resources for that.
</ul>
</p>
<h3>1.1.4 Automation sanity</h3>
<p>
Automated testing brings the biggest value when it runs as frequently as is possible and reasonable. Ideally, testing should be performed after any change we make to the application under test; this way we confirm the application's correctness every time it changes. But testing is time-consuming, so in most cases it is simply wasteful to run all tests after minor changes. The compromise solution is to have a small set of tests which takes a reasonably short time to run and covers a large area of the application under test (maybe with high-level checks). That forms a kind of smoke or sanity suite which can be run against any new build and can be part of continuous integration. This way we always keep a hand on the pulse and get quick feedback in case of a major problem where entire application functionality stops working.
</p>
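<p>
In Cucumber terms, such a sanity suite is usually just a separate runner restricted by a tag. A configuration sketch (it assumes a hypothetical <b>@smoke</b> tag on the relevant scenarios; the feature and glue paths mirror the examples elsewhere on this blog):
</p>
<pre class="code">
import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// Dedicated smoke suite: same features and glue as the full run,
// but only scenarios tagged @smoke, so it stays fast enough for CI.
@RunWith(Cucumber.class)
@CucumberOptions(plugin = { "json:target/cucumber-smoke.json" },
    features = { "./src/test/java/com/github/mkolisnyk/cucumber/features" },
    glue = { "com/github/mkolisnyk/cucumber/steps" },
    tags = { "@smoke" })
public class SmokeSuiteTest {
}
</pre>
<p>
The full regression suite then lives in its own runner without the tag restriction, and continuous integration can run the smoke suite on every build while scheduling the full suite less frequently.
</p>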
<h3>1.1.5 Regression tests are good for test automation</h3>
<p>
Any automation aims to delegate routine operations from humans to machines (this is one of the major purposes of automation, though not the only one). Test automation is no exception. And the most routine and boring part of testing is regression testing. So, it perfectly fits automation needs because:
<ul>
<li> it is a repetitive activity where the application under test is expected to behave the same way as before
<li> it should be performed on a regular basis
<li> any other tests which should be performed not just for the current version but for future releases as well will eventually become regression tests
</ul>
So, even if regression testing is not the number one priority, it is definitely a good candidate for automation.
</p>
<h3>1.1.6 Plan to Automate Tests for Both Functional and Non-Functional Requirements</h3>
<p>
Functional testing is definitely an important part of testing, but it is not the only one. There are many other testing types which can be performed using the same techniques and approaches, for example Security, Installation, Compatibility, and Configuration testing. They also involve interacting with the application under test (similar to what functional testing does), but they have slightly different checkpoints and run conditions. So, when we define what to automate, we should not forget about other testing types, as at any point in time any of them can be the most critical to the entire system.
</p>
<h3>1.1.7 Tests that check UI mechanics vs Tests that check application functionality</h3>
<p>
This is mainly about UI-level testing. In the ideal world, the only thing we should check at the UI level is that the data is rendered and displayed properly. But in practice we still have to create tests which replay user scenarios. Thus, we have two major groups of UI-level tests:
<ul>
<li> Tests that verify UI layout itself
<li> Tests that replay user scenarios via UI
</ul>
Both groups of tests play different role and can be needed at different stages. E.g. tests that verify UI layout can be used in the following areas:
<ul>
<li> Major exercise of application UI - the major target of UI testing and the area where UI-level testing is the only appropriate solution
<li> A kind of fast certification of the UI descriptions in automated tests - this is a form of unit test verifying that the window definitions inside your solution are still up to date with the actual UI. It is very typical for the UI to change from time to time, and we should be able to verify this as fast as possible.
</ul>
As seen from the above examples, UI verification itself is something we should do before running any testing related to functionality. Early reaction saves a lot of time on re-running failed tests, as we can bring all UI definitions up to date before running everything.
</p>
<h2>1.2 Select the Right Automated Testing Tool</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-A5yOAr7TxxA/VfX6lzIJEaI/AAAAAAAAAxo/SGa1m4Um3j8/s1600/Toolbox.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-A5yOAr7TxxA/VfX6lzIJEaI/AAAAAAAAAxo/SGa1m4Um3j8/s400/Toolbox.jpg" /></a></div>
<a href="http://mkolisnyk.blogspot.com/2010/05/choose-proper-test-automation-tool.html">Choosing a proper test automation tool</a> is one of the most fundamental things in an automated testing setup. If we do it wrong, the entire test automation effort may fail. So, it is important to know what to look for while making the selection.
</p>
<h3>1.2.1 Choose a testing tool that answers your Automation needs</h3>
<p>
Tool selection normally happens after we define our test automation objectives and know our system requirements and the entire scope of testing. It means that before starting any selection we should already have the list of:
<ul>
<li> system requirements
<li> testing types to perform
<li> list of technologies to use
<li> list of infrastructure systems
</ul>
and many other things we expect to automate. This should form a kind of checklist, and our test automation tool should fit it all. It is not necessarily just one tool; it can be a group of applications or libraries, each specializing in some area. The main thing is that it should cover all the requirements. Otherwise, we build a gap into our test automation from the start, which may result in fundamental mistakes that cannot be overcome simply because our tool set does not support certain functionality at all.
</p>
<h3>1.2.2 Choose tool set which reproduces actual user experience in the closest way</h3>
<p>
Test automation is normally based on simulating user actions, so that the system under test should not distinguish between actual user interaction and automated tests. The key thing is that automated testing is just a simulation which starts at a certain level. It means there is a chance the test automation tool bypasses some layer which would be triggered when real users operate the system under test. Some examples:
<ul>
<li> Some custom or complex UI controls may have additional handlers on user events (like setting focus, moving the mouse over, etc.). Usually, test automation tools work by sending specific system messages which trigger only some of those events.
<li> Web services normally have a client API which is either generated (especially in the case of SOAP) or has a quite typical usage interface. The main thing is that this interface itself may contain mistakes which lead to errors when trying to use the service. At the same time, if you take a look at popular web-service testing tools, you may notice that they usually ignore the client part and are mainly based around simulating the final requests, with all the bells and whistles around building them.
<li> Some GUI (including mobile UI) testing tools require a software module to be built into the application under test so that all elements can be accessed directly, given knowledge of their internal structure. This approach is typical for solutions like Calabash, Robotium, and White. Such a possibility also existed in early versions of TestComplete, when it didn't support some technologies from outside the application under test. The main thing is that we have to make a custom build of our application under test, and this build is not a production-quality build by definition.
</ul>
All the above situations lead to two major groups of test automation gaps:
<ul>
<li> It's very likely that we can perform some actions which cannot be reproduced in real life (e.g. in some cases I was able to modify a read-only field)
<li> It's very likely that we cannot see the problems on a production-like system, either because we skip this layer or because the final build simply works differently
</ul>
The first group just causes extra effort in error analysis and wasted time on fixing non-issues, while the second group is more dangerous, as this way we miss actual problems end users may see. So, before selecting any tool we should clearly identify <b>how end users will operate the system under test</b>. If it is some kind of API, then we should use the same or a similar API. If it is GUI interaction, we should make sure our test automation tool can run against the application that is a potential release candidate, without any additional builds. Any other deviations may be acceptable only if there is no proper way to do the things we need.
</p>
<h3>1.2.3 Select an automation tool your people are familiar with</h3>
<p>
Obviously, when you choose a test automation solution you should make sure that the people who are supposed to work with it will pick it up quickly. That saves training time and gives some guarantee that the test automation will be done at all.
</p>
<h3>1.2.4 Use the same tools as the development team</h3>
<p>
It is definitely a good practice which gives several major benefits:
<ul>
<li> Minimize efforts on tool set setup and configuration as most of things are already done during development infrastructure setup
<li> Less infrastructure support as we don't need to maintain development and testing infrastructure separately
<li> Technology alignment gives an extra possibility to access the system under test's code and re-use modules, constants, and other artifacts. This way we may be less sensitive to some changes (e.g. if we use shared constants we no longer care if the actual value changes) or detect potentially sensitive changes well before the entire test suite run starts (in the case of interface changes which simply lead to compilation errors)
<li> There is an ability to involve developers in building the test automation. They may share some practices and help make the test automation solution more resilient to potential changes.
</ul>
Unfortunately, it's not always possible but it's definitely very helpful. And since modern test automation solutions try to utilize mainstream development technologies such practice is not a rare case.
</p>
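As a tiny illustration of the shared-constants point above, here is a hedged sketch (all names are hypothetical, not from any real project): the test imports the same constant the application uses, so when developers change the value the expectation follows automatically.

```python
# Hypothetical application constants, normally living in the app's own
# module and imported by both production code and tests.
MAX_LOGIN_ATTEMPTS = 3


def remaining_attempts(failed_attempts: int) -> int:
    """Toy system-under-test logic: how many login attempts are left."""
    return max(MAX_LOGIN_ATTEMPTS - failed_attempts, 0)


def test_lockout_after_max_attempts():
    # The expectation references the shared constant instead of a
    # hard-coded "3", so a change in the application cannot silently
    # desynchronize the test.
    assert remaining_attempts(MAX_LOGIN_ATTEMPTS) == 0
    assert remaining_attempts(1) == MAX_LOGIN_ATTEMPTS - 1
```

If the development team later changes `MAX_LOGIN_ATTEMPTS` to 5, this test still states the right expectation without any edit.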
<h3>1.2.5 An automation tool is important, but it is not the solution of everything</h3>
<p>
Every tool has a restricted area of applicability as well as a restricted set of use cases. So, we should not expect tools to do everything. If something is missing, it may be the responsibility of another automation tool or simply something outside the scope of automated testing. The major thing is that we shouldn't try to make a tool do anything it is not supposed to do at all.
</p>
<h3>1.2.6 Train Employees</h3>
<p>
Of course, it's good when we have a solution and we already have people who are ready to use it. But that is not a common case. People need to get familiar with the application under test as well as with the nuances of the test automation solution, and it takes some time to pick up everything. So, the time for training people in test automation should also be taken into account. Otherwise, we may end up in a situation where people fail at test automation before they realize how to utilize it properly.
</p>
<h2>1.3 Automated Testing is not the replacement for Manual Testing</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-WSpPxs1V9n8/VfX6uLYIKnI/AAAAAAAAAxw/4foVHu-7Yi4/s1600/Man-vs-machine.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-WSpPxs1V9n8/VfX6uLYIKnI/AAAAAAAAAxw/4foVHu-7Yi4/s400/Man-vs-machine.jpg" /></a></div>
Frequently, I see questions like "Why do you still have manual testing?" or "Can automation fully replace manual testing?". The most confusing part is that both kinds of testing (manual and automated) are treated as the same kind of testing. Well, in some cases that can be true (e.g. for API level testing, where testing initially means using some kind of API). But in the general case there are some differences.
</p>
<h3>1.3.1 Manual vs Automated - Testing vs Checking</h3>
<p>
Automated testing usually goes through a pre-defined set of steps and check-points, while manual testing is flexible in its checks and can be targeted at various areas depending on the nature of the changes under test. At the same time, all testing includes a lot of routine activities where automated testing is very helpful. This difference makes automated testing a good addition to the manual testing process but not a replacement. In other words, the more routine testing is delegated to machines, the more time and resources are left for the more flexible and thorough testing performed by humans.
</p>
<h3>1.3.2 Arrange a proper manual test process</h3>
<p>
Again, we should not forget that testing is a wider process than just performing some test scenarios. It also includes analysis, test design and exploratory activities which are very hard to automate, or at least hard enough to make automation too expensive to be worthwhile. Also, there may be areas which are not covered by automation at all. For all these reasons we should not forget to set up a manual testing process covering the activities which automated testing does not cover. And this process should treat automated testing as an ordinary part of testing, just like manual testing. So, test automation should not be something standing apart; it should be a part of one unified, solid process.
</p>
<h2>1.4 Keep Realistic Expectations</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-4ySesV6CkQk/VfX60nzoqAI/AAAAAAAAAx4/MKszBhb8YH8/s1600/optimism-realism.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-4ySesV6CkQk/VfX60nzoqAI/AAAAAAAAAx4/MKszBhb8YH8/s400/optimism-realism.jpg" /></a></div>
A lot of test automation projects fail due to improper expectations of them. So, in order to be successful we should clearly understand what is really covered by test automation and what should be done by something else. With proper expectations we can properly determine the value automated testing should deliver.
</p>
<h3>1.4.1 Do Not Expect Magic From Test Automation</h3>
<p>
Automated testing can definitely bring some benefits to the entire testing process, but don't expect more than it can physically achieve. It may reach higher precision for high-volume calculations, it may run in 24/7 mode, it may guarantee that something working before still works the same way after additional changes are applied. But do not expect automated testing to do more without additional preparation. E.g. people often mention high velocity and fast feedback as an advantage of automated testing. That's a bit far from the truth, as there are a lot of factors preventing automated testing from running faster (the velocity of the application under test, time for locating specific elements, additional time lost on data processing etc.). And in general, having our testing automated doesn't mean that we can forget about it. It is still a process which requires maintenance, corrections, extensions and other activities around it.
</p>
<h3>1.4.2 Manual and exploratory testing are much better than automated one to find bugs</h3>
<p>
The number of bugs found is frequently considered the major metric of testing quality. That is not really correct, but it is very convenient and very convincing when we compare the number of bugs found before and after introducing automated testing. And it often turns out that automated testing catches fewer bugs than manual testing. Why does it happen? The thing is that most bugs are normally introduced during modifications or while adding new functionality to the application under test. This part is normally poorly covered by test automation, as it is a not-yet-finalized and thus unpredictable area of the tested application. At the same time, such new changes are a good subject for exploratory testing. So, the most bug-prone area is usually the one not covered by automated testing. Thus, if we measure the number of bugs found, it is more correct to compare the number of bugs found by automated and manual testing together against the number found by manual testing alone, within the same time frame, against the same test suite and with the same resources. In those dimensions we'll see that automated testing saves a lot of time and resources by bringing confidence that the most routine parts of the application under test definitely work as expected, so that manual testing can concentrate on edge cases, critical paths and exploratory work where human intelligence is more required.
</p>
<h3>1.4.3 Keep Functional Test Automation Out of the Critical Path</h3>
<p>
It's good when we have critical functionality testing automated. But we should not forget that automated testing performs a repetitive and very restricted set of checks. Also, automated testing sometimes provides a distorted picture of the actual state of the application under test due to various false positives or spontaneous connectivity or environment issues. And there are some operations which are too risky to be performed unattended (e.g. exercising payments on production environments). All those situations should either be assisted or handled completely manually.
</p>
<h1>Planning</h1>
<h2>2.1 Segregate Your Automated Testing Skills And Efforts</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-99yDeA8cWLE/VfX667r-d-I/AAAAAAAAAyA/4D0rsZ6583s/s1600/DivideTasks.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-99yDeA8cWLE/VfX667r-d-I/AAAAAAAAAyA/4D0rsZ6583s/s400/DivideTasks.gif" /></a></div>
</p>
<p>
As soon as the number of people involved in testing grows beyond one person, we inevitably encounter the fact that people are different. They have different experience, different skill sets and areas where they specialize best. As a result, there are areas where one person does better than others, while someone else is better in a different area.
</p>
<p>
On the other hand, software testing itself requires different areas of knowledge. Mainly, it can be divided into the technical and business domains. In most cases they are pretty independent of each other, especially for software which is not targeted at software engineering people. Each of those areas requires a specific understanding of the things to be done.
</p>
<p>In order to get maximum output from the team we should make people collaborate in such a way that each person mainly works in the area where he/she is most effective, filling the gaps by involving other people where they are more specialized. E.g. business analysts are good at product knowledge and can be the major source for deciding whether behaviour is right or wrong, but they are not expected to keep in mind all the tricky moves you can do with the application. Test designers are normally good at defining check points, but they often lack clarity on whether each specific combination is really correct, and they may lack the technical skills when we get to the test automation implementation part. Test automation engineers are good at the technical part but weaker at test design and the business domain. Of course, ideally all those roles would be merged. But each specialization requires quite a wide range of knowledge, and it takes a while to merge that conglomerate of knowledge into one person. That's why we should be able to split activities between different people, taking into account the areas they are better in.
</p>
<p>
So, this set of best practices is about building the team and activities/roles distribution.
</p>
<h3>2.1.1 Build the right team and invest in training them</h3>
<p>
Of course, in order to get things right we need the proper people. So, building the right team is always good practice. Actually, producing valuable outcomes is what makes a team good, no matter the area the team works in. Another thing is that valuable outcomes appear only after some time. If you hire new people, it's hard to expect them to work at full capacity immediately. That's why it's said that "nine women cannot deliver a baby in one month": every activity has a lower bound of invested effort before we should expect any outcome. This applies to a testing team as well: that lower bound is the set of resources invested into the team to bring them up to speed. In the case of test automation we should invest not just in learning the system under test but also in the tool set used for automation, as there is a huge variety of tools and they are quite different. So, training people is always an investment which should be taken into account.
</p>
<h3>2.1.2 Hire a Dedicated Automation Engineer or Team</h3>
<p>
Modern methodologies tend to mix roles. In the case of test automation it can be combined with test design or development. Also, a lot of test automation tools still have record-and-playback features or other visualized approaches in order to involve non-technical people in test automation. But reality shows that:
<ul>
<li> The bigger test automation grows, the more effort goes into modifying the existing solution rather than developing something new. So, fast and easy test creation no longer brings much value.
<li> The more features we have, the more often higher-priority activities crowd out automation work. That never happens with an independent testing team, but with mixed roles it is a pretty typical case.
</ul>
As a result, the test automation solution may end up in an unwanted state just because we didn't have the time or the right people who could spend the time on making the solution not just up to date but also robust, extensible and maintainable. Eventually, test automation brings unstable results highly distorted by false errors, and in the end nobody pays attention to it. This is the typical way test automation dies. In order to avoid this, there should be some people who are fully dedicated to building, running and maintaining the test automation solution.
</p>
<h3>2.1.3 Get executive management commitment</h3>
<p>
Test automation can yield very substantial results, but it requires a substantial commitment to be successful. Do not start without commitments in all areas, the most important being executive management. In other words, test automation is not something which exists on its own: the decision must come from, and the solution must be driven by, several sides, and management is one of them.
</p>
<h2>2.2 You Should Not Automate Everything</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-qtO1wkF4-30/VfX7B5314XI/AAAAAAAAAyI/FTanl__caws/s1600/automating_everything.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-qtO1wkF4-30/VfX7B5314XI/AAAAAAAAAyI/FTanl__caws/s400/automating_everything.png" /></a></div>
</p>
<p>
This is one of the most important topics in automated testing. Of course, it's good to have as much testing automated as possible, but the automation itself takes a while to develop, execute and maintain. So, we put some effort into it, and all the time we should keep in mind that any effort we put in should bring some value back. If the outcome is not so valuable, then maybe we shouldn't pay too much attention to it, at least at first. So, this group of practices mainly reflects the fact that we should always take into account the value we get after introducing such automation.
</p>
<h3>2.2.1 Choose the automation candidates wisely</h3>
<p>
Automated testing requires some resources to be invested in:
<ul>
<li> development - we definitely need to spend some time creating automated tests
<li> execution - each automated test also takes some time to perform. Automated doesn't necessarily mean fast; these two characteristics are not strongly related. Also, execution may require some complicated infrastructure
<li> maintenance - tests may be flaky or unstable for many reasons, and maintenance also takes resources
</ul>
So, the more automated tests we have, the more resources we require. At the same time, the testing cycle should normally stay at least at the same length; otherwise it becomes a big question whether we really need such automation. Thus we have growing resource costs for test automation on one hand, and on the other hand a limited capacity which tends to get exhausted after some time. In order to fit within the limit we should pay more attention to tests which have higher:
<ul>
<li> Priority - the higher the priority of a feature is, the more important the test is
<li> Coverage - the higher the coverage a test provides, the less area is left for other testing activities, so a small effort optimizes the entire scope
<li> Frequency of use - the more frequently a specific feature is used, the more routine is covered by automating it. Also, high frequency of use compensates automation costs faster through savings in execution time
<li> Resource costs - the cheaper a test is to create/execute/maintain, the more desirable it is for test automation
</ul>
All the above criteria should be taken into account when deciding what to automate first, what next, and so on. Even more, when we choose the set of tests to run, we should be able to find out which tests are really needed and which ones can be skipped.
</p>
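The four criteria above can be combined into a simple weighted score. This is only an illustrative sketch: the weights, the 1-5 rating scale and the candidate names are assumptions, not a prescribed formula; each team would calibrate its own.

```python
def automation_score(priority, coverage, frequency, cost):
    """Score an automation candidate; each input is a 1..5 rating.

    Cost is inverted (6 - cost) so that cheap tests score higher.
    Weights below are illustrative assumptions only.
    """
    weights = {"priority": 0.35, "coverage": 0.25,
               "frequency": 0.25, "cost": 0.15}
    return (weights["priority"] * priority
            + weights["coverage"] * coverage
            + weights["frequency"] * frequency
            + weights["cost"] * (6 - cost))

# Hypothetical candidates: (priority, coverage, frequency, cost)
candidates = {
    "login flow":        automation_score(5, 4, 5, 2),
    "yearly tax report": automation_score(3, 2, 1, 5),
}

# Automate the highest-scoring candidates first.
ordered = sorted(candidates, key=candidates.get, reverse=True)
```

Here the everyday "login flow" outranks the rarely used, expensive "yearly tax report", which matches the intuition behind the criteria list.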
<p>
Generally speaking, since we always have some constraints, we should be able to choose the set of tests based on criteria which eventually bring value.
</p>
<h3>2.2.2 Not everything should be a UI test</h3>
<p>
Of course, the application UI is the major interface the end user interacts with, and we should always make sure that we see everything the end user is supposed to see. But applications usually contain multiple levels of abstraction, and a potential problem may appear at any of those levels. The lower the level is, the more distorted the problem looks when we try to detect its source from the UI. That's why we have not just high-level UI tests but also unit and integration tests covering the lower levels of application abstraction. Eventually such a distribution of tests was formulated in the form of the <a href="http://martinfowler.com/bliki/TestPyramid.html">Test Pyramid</a> or, in a more general form, as <a href="http://www.duncannisbet.co.uk/test-automation-basics-levels-pyramids-quadrants">Levels, Pyramids &amp; Quadrants</a>.
</p>
<p>
The idea is that each application level should have a dedicated set of tests targeted just at that level in order to get:
<ul>
<li> better localization of the problem source
<li> faster feedback (lower level tests are normally faster)
</ul>
</p>
<h3>2.2.3 Sometimes you have to ask yourself, "Does an automated test really make sense here?"</h3>
<p>
During testing we may encounter edge cases which require tricky configuration, hardware manipulation or, generally, critical operations which require high attention to their completeness and recoverability. And at the same time such operations may turn out to be one-time or very rarely executed.
</p>
<p>So, this is the situation where we definitely need to make sure that the resources we spend on creating/maintaining such tests are proportionate to the outcome we receive. E.g. if we need to spend a week on a test which may produce flaky results but is needed only once or used rarely, well, why should it be better than one-time manual testing?</p>
<p>Or what about testing functionality which is hard to predict or whose results are not stable enough (e.g. verification of various graphical images)? In some cases it may turn out to be cheaper to perform manual testing than to constantly maintain automated tests just because of some tiny changes. The problem here isn't just flaky tests but also the application functionality itself and the capabilities of existing tools.
</p>
<p>
Also, there may be cases when a test makes crucial changes to the environment, and if it fails you'll have to spend a lot of time repairing that environment. This also requires more human attention, and sometimes it's better to perform such steps manually rather than delegate them to an automated test and pray that everything goes well.
</p>
<p>
Generally, when you encounter such cases you should always keep in mind that you are doing the automation in order to make the entire testing process more efficient, not for entertainment. So, if automation doesn't bring more efficiency, it very likely doesn't make any sense.
</p>
<h3>2.2.4 Do not run against all browsers and OS versions</h3>
<p>
A lot of applications are targeted to run on different browsers, operating systems, devices etc., and all those configurations need to be tested. But each run consumes resources (time, hardware, etc.), so we should make sure that the value we get is worth the effort we put in. E.g. what is the point of testing an ordinary web application against IE6 on Linux when no end users are expected to have such a configuration? On the other hand, there are popular combinations of configuration parameters, e.g. Mac OS users use the Safari browser more frequently. Also, based on various feedback we may know the most popular configurations used by the customers of our application under test. So, in order to use our resources more efficiently we should not blindly test against all possible combinations of configuration options but rather cover the most popular configurations. The remaining parameter values can be spread across additional test configurations so that each configuration parameter value is used at least once.
</p>
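The "each value at least once" reduction described above can be sketched in a few lines. This is an illustrative toy (the browser/OS lists and the "popular" pairs are assumptions standing in for real analytics data): start from the popular configurations, then append a minimal set of extra combinations until every parameter value appears somewhere.

```python
from itertools import zip_longest

browsers = ["Chrome", "Firefox", "Safari", "Edge"]
systems = ["Windows", "macOS", "Linux"]

# Assumed most popular pairs, e.g. taken from usage analytics.
popular = [("Chrome", "Windows"), ("Safari", "macOS")]

# Find parameter values the popular set does not touch yet.
missing_b = [b for b in browsers if b not in {b for b, _ in popular}]
missing_s = [s for s in systems if s not in {s for _, s in popular}]

# Pair leftover values together, padding the shorter list with a default.
extra = [(b or browsers[0], s or systems[0])
         for b, s in zip_longest(missing_b, missing_s)]

plan = popular + extra  # far fewer runs than the full cross-product
```

With 4 browsers and 3 operating systems the full matrix is 12 runs; this plan covers every value in 4, concentrating effort on the configurations users actually have.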
<h2>2.3 Involve Test Automation as Early as Possible</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-4qD06QcvnuA/VfX7ZHrVSDI/AAAAAAAAAyQ/P2_9M3CALNI/s1600/Time-Costs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-4qD06QcvnuA/VfX7ZHrVSDI/AAAAAAAAAyQ/P2_9M3CALNI/s400/Time-Costs.png" /></a></div>
Automated testing requires some resources to be spent before it starts paying back, mainly on framework development, infrastructure preparation, scenario definition and many other activities which happen before we have at least a minimal set of tests to run. At the same time, the application under test also requires some resources to be invested before the first working version appears. If application development starts together with test automation, all those fixed costs are incurred at the same time, and automated testing starts serving its direct purpose much earlier.
</p>
<p>
Also, the earlier the application development stage is, the fewer features are implemented. As a result, it requires fewer resources to cover everything possible. If we involve automated testing at some late stage of application development, when a lot of functionality is already implemented, we have to spend a lot of time to provide the same level of coverage, as we have to cover new features while also spending time on old functionality which hasn't been covered yet.
</p>
<p>
In some cases, automated tests may spot problems which are hard for humans to spot (mainly related to complex or high-volume calculations). The later we spot a real problem, the higher the probability that it is already visible to end users. That's why it's also important to involve such testing at an earlier stage.
</p>
<h2>2.4 Set measurable goals</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-1_A55PddtxM/VfX7fm1E1II/AAAAAAAAAyY/pEEqfXimq18/s1600/tape_measuring_success.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-1_A55PddtxM/VfX7fm1E1II/AAAAAAAAAyY/pEEqfXimq18/s400/tape_measuring_success.jpg" /></a></div>
Automated testing itself also has specific goals, and this process can also be effective/valuable or not. We should be able to detect this, as if it is not effective or doesn't bring value, it makes sense to avoid it. Thus, we come to the necessity of having goals. But not all goals can be measured in true/false form; in some cases we can be in some kind of interim state. In order to track such states we should be able to measure our progress/effectiveness/value and many other things which help us understand whether we do things right or wrong. With such knowledge we have a clearer picture of where we need an improvement, which parts we can get rid of etc. That's why we need to set some goals and find a way to measure them. The set of metrics can differ. It can be consolidated metrics for <a href="http://mkolisnyk.blogspot.com/2014/09/blog-post.html">measuring application under test quality</a>, measuring the quality of the tests themselves, coverage, progress and any other information which eventually comes down to some measure of confidence.
</p>
<p>
The major thing is that we should get data which shows us how good our automated testing is and define some "red flag" indicators which alert us even before we actually encounter a real problem.
</p>
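A "red flag" indicator can be as simple as a pass-rate threshold checked on every run. The sketch below is illustrative only: the threshold, run names and numbers are invented, and a real project would pick its own metrics and alerting.

```python
# Assumed project-specific threshold; below it we raise a flag.
PASS_RATE_RED_FLAG = 0.90


def pass_rate(passed, total):
    """Fraction of passed tests; 0.0 for an empty run."""
    return passed / total if total else 0.0


def red_flags(history, threshold=PASS_RATE_RED_FLAG):
    """Return the runs whose pass rate fell below the threshold."""
    return [run for run, (passed, total) in history.items()
            if pass_rate(passed, total) < threshold]


# Hypothetical run history: run id -> (passed, total)
history = {"run-41": (98, 100), "run-42": (85, 100), "run-43": (95, 100)}
alerts = red_flags(history)
```

Running this flags only `run-42` (85%), i.e. the metric surfaces a degrading run before someone notices failures by hand.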
<h1>Design</h1>
<h2>3.1 Design Manual Tests in Collaboration with Test Automation Engineers</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-cpyGRsLY9d0/VfX7mT87x_I/AAAAAAAAAyg/6Jwv-ptE1_0/s1600/manual-automated.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-cpyGRsLY9d0/VfX7mT87x_I/AAAAAAAAAyg/6Jwv-ptE1_0/s400/manual-automated.jpg" /></a></div>
Although test scenarios have some kind of standard structure, each scenario item is normally expressed as free-form text, unlike automated tests, which are represented in a strict and structured programmatic form. This means there may be hundreds of ways of describing test cases that lead to similar automated test implementations. So, the purpose of this practice is to combine the test design and test automation processes in such a way that tests are designed for automation in the most suitable form. E.g. several scenarios may go through the same flow but operate with different data; a data-driven approach is very useful to automate this.
</p>
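The data-driven idea just mentioned, one flow driven by many data rows, can be sketched like this. Everything here is a toy illustration (the `discount` function and coupon codes are invented), not tied to any specific framework, though real frameworks offer the same shape via parameterized tests.

```python
def discount(total, coupon):
    """Toy system under test: apply a percentage coupon to a total."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(total * (1 - rates.get(coupon, 0.0)), 2)


# Each row is one designed scenario: (input total, coupon, expected).
# Test designers can add rows without touching the flow logic.
test_data = [
    (100.0, "SAVE10", 90.0),
    (100.0, "SAVE25", 75.0),
    (100.0, "BOGUS", 100.0),  # unknown coupon leaves the price intact
]


def run_data_driven():
    """Drive the same flow through every data row; collect mismatches."""
    failures = []
    for total, coupon, expected in test_data:
        actual = discount(total, coupon)
        if actual != expected:
            failures.append((coupon, expected, actual))
    return failures
```

Because scenarios differ only in data, the manual scenario table and the automated table can literally be the same artifact, which is exactly the collaboration this section advocates.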
<p>
Also, it simplifies traceability. If a test is initially designed for automation, it is easier to establish a 1:1 correspondence between the test scenario and its implementation.
</p>
<p>
In some cases there are ways to combine test design and test automation into one activity. That was the idea behind the <a href="http://mkolisnyk.blogspot.com/2015/05/the-future-of-test-automation-frameworks.html#keyword-driven">keyword driven</a> approach.
</p>
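To make the keyword-driven idea concrete, here is a minimal toy dispatcher, purely illustrative (the keywords, the fake `state` and the scenario table are invented): scenarios are plain data that non-programmers could edit, and a small engine maps each keyword to an implementation function.

```python
state = {}  # stands in for the real application under test


def open_page(name):
    state["page"] = name


def type_text(field, value):
    state[field] = value


def expect(field, value):
    assert state.get(field) == value, f"{field!r} != {value!r}"


# The engine: keyword name -> implementation.
KEYWORDS = {"open": open_page, "type": type_text, "check": expect}

# This table could live in a spreadsheet maintained by test designers.
scenario = [
    ("open", "login"),
    ("type", "username", "guest"),
    ("check", "username", "guest"),
]


def run(steps):
    """Interpret a scenario: look up each keyword and execute it."""
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)
```

Designing a test then means writing rows in the table, which is the "test design and automation as one activity" point above.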
<p>
So, this kind of best practice targets designing tests from the start in a way convenient for automation.
</p>
<h2>3.2 Establish a Test Automation Architecture</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-E_odzWiJGaU/VfX7s0dyEwI/AAAAAAAAAyo/yLkpYUjLERE/s1600/TA-architecture.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-E_odzWiJGaU/VfX7s0dyEwI/AAAAAAAAAyo/yLkpYUjLERE/s400/TA-architecture.jpg" /></a></div>
An automated testing solution should be built using the same approaches and practices as any other software. One of the most important things is that an automated testing solution should not be a disordered mess of code and resources. In order to make the entire solution easy to use, it should have a specific organization of components and resources. In other words, it needs an architecture.
</p>
<h3>3.2.1 Three levels of UI test automation</h3>
<p>
The entire test automation can be organized based on the <a href="http://martinfowler.com/bliki/TestPyramid.html">Test Pyramid</a> idea. UI-level testing in particular can be divided into additional separate levels:
<ul>
<li> Core - this level contains basic libraries interacting with controls or with application types in general. The key feature of any component at this level is that it is applicable to any other application under test of the same type as the current one. It can effectively form a common library
<li> Routine - this level already contains application-specific functionality, but it mainly reflects purely technical interaction. Some examples: navigation, filling in the fields of a specific form
<li> Business - this level reflects the business functionality of the application under test and mainly expresses <b>what</b> to do rather than how
</ul>
Normally, each UI-level test should contain operations from the routine and business levels, with point verifications which can be taken from the lower levels. Such a structure is recommended, though we should admit it's not 100% realistic. However, by keeping to this approach we can satisfy other practices from the same chapter.
</p>
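The three levels above can be compressed into a small sketch. All class and locator names here are illustrative inventions; the real Core level would wrap an actual UI driver rather than log strings.

```python
class Core:
    """Core level: generic control interaction, reusable across apps.

    Here it only records actions; in reality it would drive a UI.
    """

    def __init__(self):
        self.log = []

    def click(self, locator):
        self.log.append(f"click {locator}")

    def fill(self, locator, text):
        self.log.append(f"fill {locator}={text}")


class LoginForm:
    """Routine level: application-specific but purely technical steps."""

    def __init__(self, core):
        self.core = core

    def submit(self, user, password):
        self.core.fill("#user", user)
        self.core.fill("#pass", password)
        self.core.click("#login-btn")


class Accounts:
    """Business level: expresses WHAT to do in domain terms."""

    def __init__(self, core):
        self.form = LoginForm(core)

    def log_in_as_admin(self):
        self.form.submit("admin", "secret")


core = Core()
Accounts(core).log_in_as_admin()  # a test reads like a business action
```

A test calling `log_in_as_admin()` never mentions locators, so UI changes stay confined to the lower levels.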
<h3>3.2.2 Tests Should be efficient to write</h3>
<p>
Writing automated tests is the major way of expanding application test coverage. In order to expand coverage more efficiently, we should be able to write tests as fast as possible. This can be done with the help of existing code re-use: the bigger the code base we have, the more ways of interacting with the application under test we have. This becomes really important when our automated testing solution grows in size and we start spending more time on maintenance while at the same time having to cover new features. So, having an efficient way to write tests is definitely one of the vital characteristics.
</p>
<h3>3.2.3 Tests Should be easy to understand</h3>
<p>
During their entire life-cycle, automated tests need to be changed and should be properly traced to the actual test scenarios and requirements. In order to do this efficiently, we should first make the tests easy to understand. Otherwise, it would be easier to re-write the tests from scratch, which is far from the best solution.
</p>
<h3>3.2.4 Tests Should be relatively inexpensive to maintain</h3>
<p>
Automated testing is normally applied to projects that last longer than a few months. During that time the application under test grows with new features, and a lot of existing features are constantly updated. The key feature of automated testing is that it states expectations about application behaviour based on some current state. Thus, if some part of the application is changed, some tests start failing not because they are bad but because they have become outdated. This may show up at different levels. But the major thing to know is that the more automated testing we have, the more time we spend on updating the existing solution rather than on adding new tests. So, making tests maintainable is another vital aspect which should be taken into account when choosing test tools and building the architecture. This is the major reason why the record-and-playback approach doesn't work well as a long-term solution.
</p>
<p>
And finally, good maintainability is one of the key features which keep automated testing competitive with manual testing: if you cannot maintain your tests, you have to re-write them; re-writing tests takes as long as or longer than passing them manually; and if automated testing requires more effort than manual testing, we should get rid of automated testing and switch to manual testing. So, if you want to gain profit from automated testing, you should make it profitable in the long term.
</p>
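One common maintainability tactic, sketched below with invented names, is to keep every locator in a single registry so that a UI change means one edit rather than a hunt through hundreds of tests. This is an illustrative pattern, not a specific library's API.

```python
# Single source of truth for locators; tests never hard-code selectors.
LOCATORS = {
    "login.user": "#username",
    "login.pass": "#password",
    "login.submit": "button[type=submit]",
}


def find(key):
    """All tests resolve locators through this one lookup point."""
    return LOCATORS[key]


# When the UI changes the submit button, only the registry is updated;
# every test that calls find("login.submit") picks up the fix for free.
LOCATORS["login.submit"] = "#sign-in"
```

The same principle generalizes: any value that the UI can change (a label, a URL path, a timeout) is cheapest to maintain when defined exactly once.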
<a id="3_2_5"></a><h3>3.2.5 Design framework to be initially capable for parallel execution</h3>
<p>
Even if you think that you would hardly ever decide to run tests in parallel, do not rule out such a possibility, as:
<ul>
<li> You may also need to run some actions concurrently
<li> If you don't need something right away, it doesn't mean it will be useless in the future as well. Parallel runs are an effective way of optimizing time costs in the case of long runs, and sooner or later they may become the most effective way of optimization
</ul>
The key thing is that it's not really hard to prepare the framework for parallel runs: mainly you should minimize the use of global objects. This requires some time at the beginning. But if you don't provide for this possibility from the start, and after some time you decide to run tests in parallel, it may happen that you have to re-work the entire test automation solution, as a lot of things were not taken into account. So, try to keep the possibility of parallel runs in mind.
</p>
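The "minimize global objects" advice often boils down to one mechanical step: replace a shared global (a driver, a session, a current user) with thread-local storage. A minimal sketch, with invented names standing in for a real driver or session object:

```python
import threading

# Thread-local container: each test thread sees only its own attributes,
# unlike a plain module-level global shared by everyone.
_context = threading.local()


def set_session(name):
    _context.session = name


def get_session():
    return _context.session


results = {}


def worker(name):
    set_session(name)  # no interference between parallel workers
    results[name] = get_session()


threads = [threading.Thread(target=worker, args=(f"s{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each worker reads back exactly the session it stored; with a plain global the four threads would overwrite each other. Building this habit in early is what makes later parallelization cheap.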
<h2>3.3 Create Good, Quality Test Data</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-cNYdtFvLTy4/VfX71Ig9x1I/AAAAAAAAAyw/twN1mjdNLC0/s1600/Test-data.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-cNYdtFvLTy4/VfX71Ig9x1I/AAAAAAAAAyw/twN1mjdNLC0/s400/Test-data.jpg" /></a></div>
</p>
<p>
It's not just important to reproduce all interactions automatically but it is also necessary to process all necessary combination of the data the application under test operates with. The more complex application is the more various resources it uses and it is very frequent case when such applications operate with some volumes of data. In such cases it is really hard to reproduce some scenarios if we don't have enough data. That's why proper test data is important for testing in general and automated testing in particular. Some major approaches of test data preparation can be found <a href="http://www.softwaretestinghelp.com/database-testing-test-data-preparation-techniques/">in this article</a>. In regards of test data preparation techniques major approaches are:
<ul>
<li> Self-prepared data - each test creates all necessary test data records on its own. Where necessary, data is randomly generated unless explicitly defined
<li> Pre-defined data - the test environment contains a set of initially prepared data items which are supposed to exist from the start, so tests do not have to care about their creation
<li> Mixed - an approach which combines both self-prepared and pre-defined data where each is more applicable, giving tests some flexibility
</ul>
</p>
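A sketch of the self-prepared approach with a hypothetical `make_user` helper: fields the test does not pin down explicitly are randomized, so repeated or parallel runs do not collide on unique values:

```python
import random
import string

def make_user(email=None, name="Test User"):
    """Self-prepared test data: values not fixed by the test are randomized
    so that unique fields (like email) don't clash between runs."""
    if email is None:
        suffix = "".join(random.choices(string.ascii_lowercase, k=8))
        email = f"user_{suffix}@example.com"
    return {"email": email, "name": name}
```

A test that cares about a specific email passes it in; every other test just calls `make_user()` and gets a fresh, non-conflicting record.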
<h2>3.4 Know the application being tested</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-L8ZsLK9gWK4/VfX9qtdWEpI/AAAAAAAAA0A/TlZr6yk8pFU/s1600/knowledge-exchange.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-L8ZsLK9gWK4/VfX9qtdWEpI/AAAAAAAAA0A/TlZr6yk8pFU/s400/knowledge-exchange.jpg" /></a></div>
Domain knowledge and knowledge of the system under test are vital for testing, especially when you try to figure out what is covered and what still needs to be covered. They also allow filtering out check-points which are neither valuable nor realistic for the system. And eventually, they are useful when you analyze and interpret results. Without proper knowledge of the domain and the system under test you'll hardly be able to interpret results properly.
</p>
<h1>Implementation</h1>
<h2>4.1 Create Automated Tests That Are Flexible to Changes in the UI</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-2dnnpkreVCY/VfX7-14XIMI/AAAAAAAAAy4/IoQ3Y6Cz6V0/s1600/flexible-changes.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-2dnnpkreVCY/VfX7-14XIMI/AAAAAAAAAy4/IoQ3Y6Cz6V0/s400/flexible-changes.jpg" /></a></div>
As mentioned before, the application under test changes on a regular basis during its life cycle. This results in regular changes to automated tests. UI-level tests are no exception, especially when we deal with applications where user experience plays an essential role; in such cases the UI changes quite frequently. In order to keep automated testing up to date within a reasonable time frame, we should apply practices which provide flexibility to UI changes. In other words, we should minimize the maintenance effort caused by UI changes.
</p>
<h3>4.1.1 Use Strong Element Locators</h3>
<p>
Some UI changes may be related to layout modifications or label text updates. The change itself is not essential, but it may impact a lot of tests which use the affected control. In order to withstand such changes we should use object identifiers which do not depend on such text content. Every technology has its specifics regarding element attributes, but in most cases there are attributes which are not bound to varying text and at the same time are reasonably unique: various types of IDs and resource IDs.
</p>
<p>
Also, it is worth paying attention to the permanent parts of identifiers. In some cases there is a fixed part alongside a varying part. A typical example is an editor window whose heading contains the name of the modified resource (the varying part, which changes as soon as we switch to another resource) and the application title (which is relatively stable).
</p>
<p>
In any case, we should make sure we use identifiers which will not be affected by even a trivial change, such as the next run.
</p>
<h3>4.1.2 Use object maps</h3>
<p>
Many UI elements are used multiple times across different tests. As soon as a specific element changes in a way that forces us to update its identifier, we have to make changes in every place where this element is used. In order to minimize modification costs we can map a logical name to the actual identifier. That can be done in different forms. E.g. we can define a global map of page elements where each entry contains an alias and the corresponding actual object identifier (like the object repository in QTP, or Aliases and Name Mapping in TestComplete). Alternatively, each element can be wrapped into an object instance where the actual identifier is defined as an attribute (as is done in SilkTest and many other frameworks where controls are represented by class instances). In all those cases tests use just logical names. As a result, if we need to update an object identifier, we do it in one place for each control.
</p>
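As a rough sketch of the first form, an object map can be as simple as a dictionary from logical names to locators. The aliases and locators below are made up for illustration; the `(strategy, value)` pairs mimic Selenium-style locators:

```python
# A minimal object map: tests refer to logical names; the actual
# locators live in one place, so a UI change means a one-line update.
OBJECT_MAP = {
    "login.username": ("id", "username"),
    "login.password": ("id", "password"),
    "login.submit":   ("css selector", "form#login button[type='submit']"),
}

def locator(alias):
    """Resolve a logical element name to its (strategy, value) pair."""
    return OBJECT_MAP[alias]
```

A test would then write something like `driver.find_element(*locator("login.submit"))`, so when the submit button's markup changes, only the map entry is touched.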
<h3>4.1.3 Re-Use system under test components</h3>
<p>
If the development and testing teams work together and use the same technology stack (which is itself a good practice), they can also share their code base. Thus, the testing solution may depend on the system under test's code. As soon as we can share the code, we can re-use constants, enumerations or even interfaces. The trick here is that application changes are driven by changes to the code. Since the test solution depends on that code, it picks up changes immediately and starts behaving differently without any additional modifications. This brings additional flexibility to the testing solution: it stays aligned with the system under test even while the system changes.
</p>
<h2>4.2 Remove Uncertainty from Automated Tests</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-CcXXgWjfces/VfX8EcU8YUI/AAAAAAAAAzA/yev1d_dTIUE/s1600/uncertainty.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-CcXXgWjfces/VfX8EcU8YUI/AAAAAAAAAzA/yev1d_dTIUE/s400/uncertainty.jpg" /></a></div>
Usually, when we try to make flexible and universal methods for automated testing, we may end up in a situation where tests return different results under different conditions. This is mainly related to different environments, different execution times, as well as previous execution results where some state was changed but wasn't restored. This leads to problems which are hard to reproduce, or to problems which exist but were not properly detected and/or confirmed. In order to close this gap we should make sure that every test performs the same actions, going through the same set of application states, each time it runs.
</p>
<h3>4.2.1 The test should always start from a single known state</h3>
<p>
A major source of test unpredictability is an unpredictable initial state. If we run our sequence with the application components in different states, we may get different results on each test run. In order to mitigate this we need to drive the application to its initial state before running the test. Generally, this means starting the application, preparing the test resources needed, and applying any specific configuration required by the test. The more varying dependencies we pre-define, the more predictable the test behaviour we get. And if we get an error, we have a better level of confidence that the application started behaving differently permanently, rather than in some random case.
</p>
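For example, a base test class can rebuild the initial state before every test instead of trusting whatever the previous test left behind. This is a sketch: `fresh_state` is a placeholder for real setup such as restarting the application, loading a data snapshot or applying configuration, and the `setup_method` name follows pytest's per-test hook convention:

```python
class BaseTest:
    """Every test starts from the same known state: the setup hook
    rebuilds it before each test instead of reusing leftovers."""

    def setup_method(self):  # invoked before every test method by pytest
        self.state = self.fresh_state()

    def fresh_state(self):
        # Placeholder for real initialization: restart the app,
        # reset test data, apply the configuration this suite needs.
        return {"logged_in": False, "cart": []}
```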
<h3>4.2.2 Manage Test Data from Within the Test Script</h3>
<p>
Even if we manage to stabilize the test flow, there can still be discrepancies in test behaviour related to the fact that data may have been changed during previous runs or may simply expire. If we use data which becomes outdated quite frequently (each new run, or within a small range of time covering up to a few weeks), it makes sense to add instructions that create that data before we start running tests.
</p>
<h3>4.2.3 Provide fast data reset capabilities</h3>
<p>
For more or less permanent data, we have to make sure that we refresh it on a regular basis in order to avoid accumulating junk records and to make sure that all necessary data is initially there in the system. For this purpose we need procedures which reset the permanent data and the data storage to their initial state. For such data it is OK to run the reset once per suite, or to have a job scheduled once a day/week/month depending on how frequently the data needs to be restored.
</p>
<h2>4.3 Review Automated Tests for Validity</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-8q7N-zr3mm8/VfX8Lv-OBSI/AAAAAAAAAzI/Y2TQo_mlpl8/s1600/CodeReview_bodbol.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-8q7N-zr3mm8/VfX8Lv-OBSI/AAAAAAAAAzI/Y2TQo_mlpl8/s400/CodeReview_bodbol.png" /></a></div>
The application under test changes on a regular basis → automated tests should be updated on a regular basis → some changes affect not just specific interfaces but the entire flow → some tests may stop verifying the functionality they were targeted at. In order to mitigate this problem, we should never forget to review our tests to make sure that they still perform the actions and verifications they were initially designed for.
</p>
<h2>4.4 Keep the Tests Short and Compact</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-QpDxS538r-o/VfX8RDvgmKI/AAAAAAAAAzQ/B8iswxquHq8/s1600/short-compact.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-QpDxS538r-o/VfX8RDvgmKI/AAAAAAAAAzQ/B8iswxquHq8/s400/short-compact.jpg" /></a></div>
In order to make tests easy to create, easy to maintain, easy to trace to their major target and generally easy to understand, we should keep them short and compact. The reasoning is based on the following principles:
<ul>
<li> the shorter a test is, the fewer instructions it contains → the fewer instructions to write → the faster it can be created
<li> the shorter a test is, the fewer instructions we need to update in case of modifications → the less time we spend on modifications
<li> a compact test uses the proper level of detail for each instruction, so the higher-level an instruction is, the bigger the impact it may have → high-impact defects usually highlight the same step in many tests → it is easier to split defects by priority
<li> a big number of short tests gives more detailed information about each specific feature's coverage. E.g. for some business functionality, 10 small tests give a more detailed picture than one big test, even if that big test involves the same checks. Imagine that one test fails. In the case of 10 tests we have the other 9 passing, indicating the narrow case which actually contains the problem and showing that the feature meets 90% of expectations. If the one big test fails, it only indicates that the feature doesn't meet 100% of expectations, which is still true, but this information is less detailed.
<li> due to the popularity of xUnit-like engines, the hard-assertion approach is also popular: the test fails at the first mismatch spotted. Of course, we can use soft assertions accumulating all errors, but in many cases that would bring unnecessary noise, as the test might have failed on some blocking problem.
</ul>
The list of advantages can be extended, but let's focus on practices that make tests short and compact.
</p>
<h3>4.4.1 Narrow down the scope of each specific test</h3>
<p>
Design tests so that each of them verifies just one particular aspect of the system under test. This is not always possible, and sometimes not reasonable, but generally this approach gives a more or less clear picture of what exactly went wrong just by looking at the list of failed tests.
</p>
<h3>4.4.2 Try to re-use the same modules as frequently as possible</h3>
<p>
It is generally good practice to group a repetitive set of instructions into a higher-level module. In this case, if we need to change some common flow, we can do it in one place. It also minimizes the number of instructions to write. Giving a meaningful name to the module improves readability: instead of a sequence of small steps, we can observe the entire action being performed.
</p>
<h3>4.4.3 Use Data-Driven approach for similar flow tests</h3>
<p>
In many cases we need to perform the same steps but exercise different sets of input and output data, so the entire test flow is common. For this purpose the <a href="http://mkolisnyk.blogspot.com/2015/05/the-future-of-test-automation-frameworks.html#data-driven">data-driven</a> approach is pretty handy.
</p>
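A minimal illustration of the idea with a made-up login-validation flow: the steps are written once and the data variations live in a table. In real projects a runner feature such as pytest's `parametrize` or JUnit's parameterized tests usually plays this role:

```python
# Data-driven style: one flow, many (input, expected) rows.
CASES = [
    # (username, password, expected_error)
    ("", "secret", "username required"),
    ("bob", "", "password required"),
    ("bob", "secret", None),
]

def validate_login(username, password):
    """Stand-in for the real flow under test."""
    if not username:
        return "username required"
    if not password:
        return "password required"
    return None

def run_data_driven():
    # The same steps are exercised once per data row.
    return [validate_login(u, p) == expected for u, p, expected in CASES]
```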
<h3>4.4.4 Use different detail level modules</h3>
<p>
If during testing we need to pass through several application states before verifying what the test is targeted at, we usually don't need detailed verification and step instructions for those earlier states; normally they are checked in dedicated tests covering the transitions between them. At the same time, the closer we are to the target state, the more detail we should put in. So for earlier stages we can use high-level modules which perform all the navigation, settings and anything else required before we reach the place the current test should verify. This makes tests more sensitive to the things they are supposed to verify. At the same time they stay compact enough to make it clear what exactly we do in order to reach the proper state, and since higher-level modules combine multiple lower-level instructions, they are easier to write and maintain.
</p>
<h2>4.5 Optimize your tests</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-diBwSzens3s/VfX8YMlpMII/AAAAAAAAAzY/ZFD2GQNrZaI/s1600/break-through-maze.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-diBwSzens3s/VfX8YMlpMII/AAAAAAAAAzY/ZFD2GQNrZaI/s400/break-through-maze.jpg" /></a></div>
The more tests we have, the more time it takes to run them all. At some point we may encounter a situation where the overall execution time is not appropriate for fast feedback. Even before that, there may be signs that our tests are using too much time. It doesn't really matter exactly when it happens; the key thing is that it normally does, and we have to be ready for it. So, what should we do in this case?
</p>
<h3>4.5.1 Test things at proper level</h3>
<p>
There are cases when people focus mainly on UI-level automation and concentrate testing efforts around the UI. Well, it definitely exercises all potential end-user scenarios the way end users would perform them, so in terms of experimental cleanliness and accuracy it is a good approach. But what if we know our system well enough to say, for instance, that "those two actions will lead to similar results just because they internally use common controls and the same event handler"? At the UI level we mainly verify that the output is rendered properly with the proper layout, which is something we cannot guarantee at any lower level. So it's more efficient to delegate logic verification to lower-level tests while UI tests concentrate mainly on the visual aspects of the application's outcome.
</p>
<p>
Again we come back to the <a href="http://martinfowler.com/bliki/TestPyramid.html">Test Pyramid</a>. It is not just about proper focus; it is also about time savings. Usually, lower-level tests are faster as they interact with fewer external components and spend less time on data handling, network communication and many other things which eat execution time. So, having a proper distribution of tests across levels may save time without any loss in test and code coverage.
</p>
<h3>4.5.2 Merge exhausting tests</h3>
<p>
It is good practice to make a dedicated test for each specific feature, but in high-level (especially UI-level) tests we may encounter a situation where 2 or more features are available in the same application state, while it takes much more time to reach that state than to verify each feature. Imagine you need 10 minutes to reach some application state and you have to exercise actions on 2 buttons whose states are independent of each other, where each verification takes 2-3 seconds. When reaching the common application state takes essentially more time than the verification itself, these tests are exhausting. It isn't really profitable to spend 20 minutes running 2 such tests when they can be grouped into 1 test running 10 minutes plus a few extra seconds for the extra verifications.
</p>
<h3>4.5.3 Avoid frequent element location</h3>
<p>
UI operations are pretty time-consuming, and in some cases we interact with the UI without actual need. E.g. when we have a web page with a table and we need to read all data items to pack them into some data structure, there are 2 major ways of doing this:
<ol>
<li> Locate each data item individually and read the value from there
<li> Get page source and use some in-memory parser to get the data
</ol>
Usually the second approach works much faster, as in-memory processing takes less time. It still depends on the volume of data to process, but this dependency is far less noticeable than UI interaction for the same number of elements.
</p>
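A sketch of the second approach using only Python's standard library: one page-source snapshot is parsed in memory instead of locating every cell through the UI. In a Selenium-based suite the snapshot would come from something like `driver.page_source`:

```python
from html.parser import HTMLParser

class TableReader(HTMLParser):
    """Collects all <td> cell texts from a single page-source snapshot,
    so the UI is queried once instead of once per cell."""

    def __init__(self):
        super().__init__()
        self.cells = []
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and data.strip():
            self.cells.append(data.strip())

def read_cells(page_source):
    reader = TableReader()
    reader.feed(page_source)
    return reader.cells
```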
<p>
Another example is when we can avoid unnecessary repetitive UI interaction by caching already retrieved data, or simply by retrieving data once. E.g. we have a pop-up list with some items and we have to make sure that each element matches some value or expression. We can do it like this (this is just a pseudo-code sample):
<pre class="code">
for i = 0 to popup.getItems().size
popup.getItem(i).matches(expression)
</pre>
or we can do it like this:
<pre class="code">
var array = popup.getItems()
for i = 0 to array.size
array[i].matches(expression)
</pre>
which interacts with the UI only once; after that it uses the in-memory object. Additionally, some programming languages have <b>for each</b> or similar loop operators whose key feature is that the list value is calculated only once, so the same example can be written like this:
<pre class="code">
for each value in popup.getItems()
value.matches(expression)
</pre>
In both of the latter examples the number of UI interactions is 1 regardless of the number of items in the pop-up list. Using such simple and basic knowledge we can save a lot of time.
</p>
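To make the difference measurable, here is a small self-contained Python sketch with a fake pop-up that counts how many times the UI is actually queried. The `FakePopup` class is invented for illustration; in a real framework the counter would correspond to driver round-trips:

```python
class FakePopup:
    """Stand-in for a UI pop-up list that counts UI queries."""

    def __init__(self, items):
        self._items = items
        self.ui_calls = 0

    def get_items(self):
        self.ui_calls += 1  # each call simulates a round-trip to the UI
        return list(self._items)

def check_all_naive(popup, predicate):
    # Queries the UI once for the length and once more per item.
    return all(predicate(popup.get_items()[i])
               for i in range(len(popup.get_items())))

def check_all_cached(popup, predicate):
    # Queries the UI exactly once; iteration happens in memory.
    return all(predicate(item) for item in popup.get_items())
```

Running both against the same two-item list shows 3 UI queries for the naive version versus 1 for the cached one, and the gap grows linearly with the item count.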
<h3>4.5.4 Synchronization</h3>
<p>
The application under test generally doesn't respond to a command immediately; normally it takes some time to perform server-side operations, UI rendering, data processing etc. But automated test instructions are simply a sequence of actions, so before running the next operation we should make sure the application has completed the previous one. We can put fixed-length pauses into the test, but it is much wiser to wait for a specific state: we should wait for a specific application state/event no longer than a specific timeout, and continue test execution as soon as the required state is reached or the event is fired, but no longer than that. This way we wait no longer than we actually need.
</p>
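A generic explicit wait can be sketched in a few lines; Selenium's `WebDriverWait` and similar helpers in other frameworks implement the same idea:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.2):
    """Polls `condition()` until it returns a truthy value, but no longer
    than `timeout` seconds: an explicit wait instead of a fixed sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result  # success: return as soon as the state is reached
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

A test would call it as `wait_until(lambda: element.is_displayed(), timeout=15)`: it returns immediately once the state is reached and fails only after the full timeout.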
<h3>4.5.5 Locate elements wisely</h3>
<p>
In most cases a UI element can be located in many different ways, and locating speed depends on the location strategy. Usually, elements defined by IDs are faster to find than ones defined by more complicated means such as XPath. Also, locators can be simple (using one attribute) or complex (using multiple attributes or levels). So, when we define an element locator, we should always keep in mind that the more complicated the locator is, the more time it takes to locate the element.
</p>
<h3>4.5.6 Use short paths to reach proper initial state</h3>
<p>
Different technologies provide mechanisms to open the application under test in a specific state. E.g. we can open a web application at the required page with the required parameters by specifying the proper URL. For mobile applications there is deep linking. Also, we should not forget about server-side operations which can be triggered directly from the test. What all these mechanisms have in common is that they are normally faster than the equivalent operations via the application UI, and in some cases dramatically faster. So, if it is not important for the current test how we reach some specific state, we can easily choose the faster way.
</p>
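For the web case, the short path often just means building the right URL. A tiny sketch with a hypothetical `deep_link` helper (the base URL and parameters below are made up for illustration):

```python
from urllib.parse import urlencode

def deep_link(base, path, **params):
    """Builds a URL that opens the application directly in the desired
    state instead of clicking through the UI to get there."""
    query = urlencode(params)
    return f"{base}/{path.lstrip('/')}" + (f"?{query}" if query else "")
```

A test would then navigate with something like `driver.get(deep_link(base, "search", q="shoes", page=2))` rather than typing into the search box and paging manually.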
<h1>Execution</h1>
<h2>5.1 Test Early And Frequently</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-CMJ5ZpxEtb0/VfX8grnZE5I/AAAAAAAAAzg/Rjl8M49rYF0/s1600/largebugs.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-CMJ5ZpxEtb0/VfX8grnZE5I/AAAAAAAAAzg/Rjl8M49rYF0/s400/largebugs.jpg" /></a></div>
It is important to detect problems as soon as they appear. At an early stage there is less functionality built around the problem area that could be affected by the fix, so problems are easier to fix. Also, early detection still leaves an easy way of reverting changes in case of serious problems. That's why it is important to run tests early.
</p>
<p>
The more frequently we run tests, the more precisely we can detect the point where an error was introduced → the easier it is to localize the problem. On the other hand, running tests with the highest possible frequency is not the goal in itself. Ideally, we should be able to run our tests against every single commit as soon as it is available on the server. We don't need higher frequency than that, as it would only demonstrate our tests' stability. But having tests executed against every published change is really something that automated testing can do better than manual testing.
</p>
<p>
Eventually, we aim at fast and frequent feedback, which coincides with some of the other practices listed before. There are several areas where we can apply this practice.
</p>
<h3>5.1.1 Use Sanity tests as part of Continuous Integration</h3>
<p>
There should always be a set of tests which is executed after each commit. This way we narrowly localize the set of changes which may have caused a problem. It works pretty well for unit tests, which are usually fast and take just a few minutes to run. For higher-level tests which take longer to execute, we should select a small sub-set which runs within an acceptable amount of time and covers the basic functionality which must work at any cost. The main idea is that they should be executed against every build we have.
</p>
<h3>5.1.2 Parallel tests</h3>
<p>
In some cases we may have a huge number of tests which we cannot avoid and which take too long to run. Of course, we can decrease the scope to optimize run time, but this leads to coverage loss and, as a result, potential quality loss. So, in order to keep the same scope of tests but decrease run time, we can run tests in several parallel streams. The idea is not new and there are existing systems which already support it. The major difficulty here is test design. That's why we should <a href="#3_2_5">design the framework to be capable of parallel execution from the start</a>, so that when we get here we can already use this possibility. Otherwise, we have to spend time making parallel runs possible.
</p>
<h3>5.1.3 Run big test suites infinitely</h3>
<p>
If our test run takes up to 8 hours, we can easily use nightly runs to get regular feedback on the application's status. But as soon as the run takes longer than that, we should do something else to keep the feedback frequent. One practice is to split the entire suite into sub-groups and run them infinitely, so that we get all results at least once a day. An infinite run here means that as soon as a test suite completes and results are sent, a new run starts immediately. Of course, this requires infrastructure preparation, especially when we use CI solutions which require licenses. But even at the very beginning of the project we should realize that test execution may take a long time, and we have to reserve some capacity for that from the start.
</p>
<h2>5.2 Do Not Rely Solely on Automation. Beware of Passing Tests</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-4oA7-E15Vto/VfX8oXfskTI/AAAAAAAAAzo/uRIASRbTAaw/s1600/Terminator2-5.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-4oA7-E15Vto/VfX8oXfskTI/AAAAAAAAAzo/uRIASRbTAaw/s400/Terminator2-5.jpg" /></a></div>
It is always good when some work is done for you automatically and you don't have to watch over it all the time. But having your tests automated doesn't mean you should completely forget about them as long as they pass. The machine does what it was programmed for, but the program is normally written by human beings, who are still the initial source of software problems. Automated tests are no exception here. If we have automated tests running successfully, we should always keep in mind that:
<ul>
<li> Automated tests may use some tricks to simplify implementation, so they don't necessarily reproduce real user behaviour
<li> Some automated tests may have incomplete verifications due to various limitations. So if a test passes, it doesn't mean that everything around the tested functionality works correctly.
<li> Some tests may contain no verifications at all; they can be evergreen. <a href="http://mkolisnyk.blogspot.com/2014/09/mutation-testing-overview.html">Mutation testing</a> can detect such problems, but it is resource-consuming, and system-level tests are already long-running.
<li> In some cases we use various kinds of mocks which simulate actual component behaviour, but a mock is still a mock. Who knows what will happen to the system under test when the real components change.
</ul>
So, while we should delegate some checks to machines, we should at the same time be able to perform cross-checks to confirm even the functionality which is already tested automatically. Just in case.
</p>
<h2>5.3 Use Automation For Other Purposes as Well</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-qy_u7PCJ-yU/VfX8ubmQ1nI/AAAAAAAAAzw/99wde44YCaI/s1600/multi-purpose-tool.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-qy_u7PCJ-yU/VfX8ubmQ1nI/AAAAAAAAAzw/99wde44YCaI/s400/multi-purpose-tool.jpg" /></a></div>
Automated testing is valuable when it is used frequently. But it can be even more valuable if we involve it somewhere outside the test execution process. We can use automation in other areas, such as:
<ul>
<li> <b>Self-diagnostic system</b> - in some cases automated tests can be embedded into the system under test so that they can be invoked outside the testing process. E.g. it can be useful to run some tests against a client's system and provide detailed information to support teams.
<li> <b>Automated environment or data setup</b> - automated testing simulates interactions with the application under test. In some cases such actions are useful for environment setup, when we have to populate a fresh system with test data. Or we can simply run some code to prepare data with complex relationships which are easy to define programmatically.
</ul>
Automation generally targets routine operations which require a lot of repetition. So, if such repetitive actions are needed outside of regular testing and they can be done using the existing automated testing solution, why not use it?
</p>
<h2>5.4 Watch out for Flaky tests</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-y3Khn7qWoUo/VfX80erBMWI/AAAAAAAAAz4/45ll_Ezych8/s1600/flaky-test.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-y3Khn7qWoUo/VfX80erBMWI/AAAAAAAAAz4/45ll_Ezych8/s400/flaky-test.jpg" /></a></div>
Flaky tests are the most annoying group of tests. They fail sometimes, but when we check them they pass. It is quite a frequent case to see a test failing and then passing on the next run. The biggest danger here is that we stop paying enough attention to such a flaky test, and we may ignore a real problem spotted by it, thinking it is something related to the environment or a temporary issue. That's why we should pay additional attention to unstable tests.
</p>
<h3>5.4.0 Use synchronization</h3>
<p>
This rule stands above all others in this group. We should always make sure that our tests handle the application state every time. There should be no immediate actions without verifying that the operation can be performed. This is especially necessary for UI-level tests, where each UI element becomes accessible after some delay. On a slow environment such delays can be bigger, and if we don't handle them properly, the probability of false errors is higher. And this brings a big distortion to the entire test run's results.
</p>
<h3>5.4.1 Automate failed tests re-run</h3>
<p>
A lot of spontaneously failing tests pass on the next run. Re-running them is a pretty routine operation which takes time, and it is annoying to spend that time only to find out there is no actual problem. So, in order to minimize such false problems, we can <a href="http://mkolisnyk.blogspot.com/2015/05/cucumber-jvm-junit-re-run-failed-tests.html">re-run failing tests automatically</a>. Some CI systems have this functionality; for specific needs we can build a custom solution. The main idea is that if we automatically re-run failed tests several times until they either pass or fail permanently, we can filter out a lot of false problems and concentrate our attention on the tests which show permanent errors, indicating that the problem they spot is real.
</p>
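The core of such a re-run mechanism fits in a few lines. This is a sketch; real CI plugins and runner extensions add scheduling and reporting on top of the same loop:

```python
def run_with_retries(test, attempts=3):
    """Re-runs a flaky test up to `attempts` times; the test is reported
    as failed only if every attempt fails."""
    last_error = None
    for _ in range(attempts):
        try:
            test()
            return True  # one passing attempt filters out the false alarm
        except AssertionError as error:
            last_error = error
    raise last_error  # permanent failure: the problem is likely real
```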
<h3>5.4.2 Separate stable and unstable tests</h3>
<p>
Flaky tests usually indicate 2 potential sources of problems:
<ul>
<li> Test instability
<li> A potential application problem which reproduces from time to time
</ul>
In the case of unstable tests, we simply should pay more attention to their implementation. Thus we get a group of tests to fix, while the others are treated as reliable.
</p>
<p>
In the case of a floating problem, we can take the set of tests which potentially cover it and run them separately to gather more representative statistics on where the problem actually happens. On the other hand, having a group of stable tests leads us to the next practice.
</p>
<h3>5.4.3 Keep test runs green by pruning known failures. It's important to know which tests have just started failing today.</h3>
<p>
The green/red scheme of test suite execution status is an easy-to-use system which gives easy-to-interpret results for making go/no-go decisions. If the test suite is frequently red, people stop paying attention to the results. This is especially vital for unit tests, where many build systems integrate a testing stage and assume that a successful build must include a successful test run. Otherwise, people stop paying attention to failed tests at all, even though normally a failed test indicates that something was broken. So, it is important to have a set of stable tests: when at least one of them starts failing, it clearly indicates that something broke recently.
</p>
<h3>5.4.4 Tests that are disabled must be re-added to manual test runs, or the coverage will be dropped</h3>
<p>
When we do our testing, we should never forget about coverage. If we have automated tests, we normally don't execute them manually. But if we remove them from automated runs, it doesn't mean we should forget about them altogether: the functionality under test should still be covered. If we don't cover it automatically, we should do it manually to keep the same level of coverage. Otherwise, we risk leaving some area untested → we may miss an actual bug.
</p>
<h1>Summary</h1>
<p>
This is a consolidated but still quite high-level list of best practices. Each specific tool or engine may add more details; the practices listed above are common to most of them. And the more experience we gain, the more additional practices we discover. So this list is not set in stone: there are more approaches and more solutions which make our test automation better.
</p>
<p>
And never forget that best practices are not something we MUST follow. They are mainly targeted at solving or avoiding specific problems. If you know another way of doing that, you can use it. Maybe it will become someone else's best practice.
</p>
<a id="refs"><h1>References</h1></a>
<p>
<ol>
<li> <a href="https://support.smartbear.com/articles/testcomplete/automated-testing-best-practices">Automated Testing Best Practices</a> by <a href="https://support.smartbear.com/">SmartBear Software Support</a>
<li> <a href="http://www.testingexcellence.com/test-automation-tips-best-practices/">Test Automation Tips and Best Practices</a> by <a href="http://www.testingexcellence.com/">Testing Excellence</a>
<li> <a href="http://nalashaa.com/7-key-best-practices-of-software-test-automation/">7 Key Best Practices of Software Test Automation</a> by <a href="http://nalashaa.com/">Nalashaa.com</a>
<li> <a href="http://www.qfor.com/blog/2014/07/test-automation-best-practices/">Test Automation - Best Practices</a> on <a href="http://www.qfor.com/blog/about-the-q4-blog/">Quadrant 4 Blog</a>
<li> <a href="http://www.pitsolutions.ch/blog/best-practices-in-automation-testing/">Best Practices in Automation Testing</a> by <a href="http://www.pitsolutions.ch/">PIT Solutions</a>
<li> <a href="http://googletesting.blogspot.co.uk/2007/10/automating-tests-vs-test-automation.html">Automating tests vs. test-automation</a> by <a href="http://googletesting.blogspot.co.uk/search/label/Markus%20Clermont">Markus Clermont</a>
<li> <a href="http://www.softwaretestinghelp.com/automation-testing-tutorial-7/">10 Best Practices and Strategies for Test Automation</a>
<li> <a href="http://gojko.net/2010/04/13/how-to-implement-ui-testing-without-shooting-yourself-in-the-foot-2/">How to implement UI testing without shooting yourself in the foot</a> by <a href="http://gojko.net/">Gojko Adzic</a>
<li> <a href="http://www.bqurious.com/test-automation-best-practices/">Test Automation Best Practices</a> on <a href="http://www.bqurious.com/">bqurious.com Blog</a>
<li> <a href="https://iamjayakumars.wordpress.com/2014/09/13/top-10-tips-for-best-practices-in-test-automation/">Top 10 Tips For Best Practices In Test Automation</a> by <a href="https://iamjayakumars.wordpress.com/">Jayakumar Sadhasivam</a> reposted from <a href="http://www.efytimes.com/e1/fullnews.asp?edid=147692">Top 10 Tips For Best Practices In Test Automation</a> Sanchari Banerjee, EFYTIMES News Network
<li> <a href="http://qa.siliconindia.com/qa-expert/Automation-Testing--Best-Practices--Part-1-eid-157.html">Automation Testing - Best Practices - Part 1</a> by <a href="http://qa.siliconindia.com/qa-expert/Ananya-Das--aid-145.html">Ananya Das</a>
<li> <a href="http://fczaja.blogspot.com/2011/01/ui-test-automation-best-practices.html">UI Test Automation Best Practices</a> by <a href="http://www.blogger.com/profile/12289949072596625867">Filip Czaja</a>
<li> <a href="http://docs.oracle.com/cd/E63029_01/books/TestGuide/TestGuide_AutoFuncTest9.html#wp1007176">Best Practices for Functional Test Design</a>
<li> <a href="http://docs.oracle.com/cd/E63029_01/books/TestGuide/TestGuide_AutoFuncTest10.html">Best Practices for Functional Test Script Development</a>
<li> <a href="http://joecolantonio.com/testtalks/06-joe-colantonio-the-top-6-automation-best-practices/">The Top 6 Automation Best Practices</a> by <a href="http://www.joecolantonio.com/">Joe Colantonio</a>
<li> <a href="http://www.gallop.net/blog/tag/test-automation-best-practices/">4 Reasons Why Test Automation Fails</a> on <a href="http://www.gallop.net">Gallop.net</a>
<li> <a href="http://jsystem.org/63/">Web test automation best practices</a> by <a href="">Guy Arieli</a>
<li> <a href="http://3qilabs.com/best-practices-for-achieving-automated-regression-testing-within-the-enterprise-building-automated-test-cases-to-stand-the-test-of-time-section-2/">Automation Best Practices: Building To Stand the Test of Time</a> by <a href="https://plus.google.com/100281099716791141676">Naman Aggarwal</a>
<li> <a href="http://qarevolution.com/5-agile-test-automation-best-practices/">5 Agile Test Automation Best Practices</a> by <a href="http://qarevolution.com/">QA Revolution</a>
<li> <a href="http://www.cigniti.com/blog/tag/test-automation-best-practices/">5 Tips To Maximize ROI Of Your Mobile Test Automation</a>
<li> <a href="http://www.1e.com/blogs/2014/12/10/test-automation-best-practices/">Test Automation Best Practices</a> by <a href="http://www.1e.com/blogs/author/shikharav/">Shikha Rav</a>
<li> <a href="https://ghostinspector.com/blog/5-best-practices-automated-browser-testing/">5 Best Practices for Automated Browser Testing</a> by <a href="https://ghostinspector.com/blog/author/justin/">Justin Klemm</a>
<li> <a href="http://www.intervise.com/wp-content/uploads/2015/01/Best_Practices_Test_Automation.12.14.pdf">Improving Software Quality: Nine Best Practices for Test Automation (PDF)</a> by <a href="">Kenneth "Chip" Groder</a>
<li> <a href="http://www.softwaretestingmagazine.com/knowledge/considerations-for-best-practices-with-selenium/">Considerations for Best Practices with Selenium</a> by Brian Van Stone, <a href="http://www.qualitestgroup.com/">QualiTest Group</a>
<li> <a href="http://www.dilatoit.com/node/185">Best practices in test automation</a> by <a href="http://www.dilatoit.com/company/about-us">Dilato</a>
<li> <a href="http://symbio.com/test-automation-helpful-tips-best-uses/">Test Automation: Helpful Tips and Best Uses</a> by <a href="http://symbio.com/author/haley/">Haley Kaufeldt</a>
<li> <a href="https://wiki.mozilla.org/B2G/QA/Automation/UI/Best_Practices">B2G/QA/Automation/UI/Best Practices</a> on <a href="https://wiki.mozilla.org/Main_Page">Mozilla Wiki</a>
<li> <a href="https://www.safaribooksonline.com/library/view/mastering-mobile-test/9781782175421/ch07s08.html">Best practices to maximize the RoI</a> on <a href="https://www.safaribooksonline.com/library/view/mastering-mobile-test/9781782175421/">Mastering Mobile Test Automation</a> book by Feroz Pearl Louis, Gaurav Gupta
<li> <a href="http://www.evoketechnologies.com/blog/test-automation-framework-design/">How To Design An Effective Test Automation Framework</a> by <a href="http://www.evoketechnologies.com/">Sheshajee Dasari</a>
<li> <a href="http://www.grazitti.com/resources/articles/178-selenium-6-best-practices.html">6 Best Practices for Selenium</a> on <a href="http://www.grazitti.com/index.php">Grazitti Interactive</a>
<li> <a href="http://martinfowler.com/bliki/TestPyramid.html">Test Pyramid</a> by <a href="http://martinfowler.com/">Martin Fowler</a>
<li> <a href="http://www.duncannisbet.co.uk/test-automation-basics-levels-pyramids-quadrants">Test Automation Basics - Levels, Pyramids & Quadrants</a> by <a href="http://www.duncannisbet.co.uk/blog">Duncan Nisbet</a>
<li> <a href="http://www.softwaretestinghelp.com/database-testing-test-data-preparation-techniques/">Database Testing - Properties of a Good Test Data and Test Data Preparation Techniques</a> on <a href="http://www.softwaretestinghelp.com">softwaretestinghelp.com</a> by Rizwan Jafri
</ol>
</p>
</body>
<!-- Post: WebDriving Test Automation, published 2015-06-30, by Mykola Kolisnyk -->
<html>
<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.passed {background-color:lightgreen;font-weight:bold;color:darkgreen}
.failed {background-color:tomato;font-weight:bold;color:darkred}
.undefined {background-color:gold;font-weight:bold;color:goldenrod}
</style>
<title>WebDriving Test Automation</title>
</head>
<body>
<h1>Introduction</h1>
<p>
<a href="http://www.seleniumhq.org/projects/webdriver/">WebDriver</a> has shown growing popularity for many years, and for at least the past 4 years it has been the number one player on the market of web test automation solutions. The growth hasn't stopped yet. WebDriver exposes an open interface, and its server-side API is documented as a <a href="https://w3c.github.io/webdriver/webdriver-spec.html">W3C Standard</a>. Thus many people can implement their own back-end API and expand the technology support. So WebDriver can become more than just another option in the <a href="http://mkolisnyk.blogspot.com/2013/03/web-test-tools-list.html">web test automation tools list</a>. It may become (if it hasn't already) the most widespread test automation platform, so that the term UI test automation may be replaced with another term like <b>Web-Driving</b>. In this post I'll describe where the WebDriver solution "web-drives" us to and what the areas of further growth are.
</p>
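Being an HTTP-based protocol keeps the bar for new client and server implementations low. For instance, creating a new session is a single POST request; the payload below follows the W3C draft (older Selenium servers expect the JSON Wire Protocol's `desiredCapabilities` form instead):

```
POST /session HTTP/1.1
Content-Type: application/json

{
  "capabilities": {
    "alwaysMatch": { "browserName": "firefox" }
  }
}
```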
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-5OpblttoKyQ/VZMYdbNbLHI/AAAAAAAAAvk/H4VWK0YywcE/s1600/Selenium-logo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-5OpblttoKyQ/VZMYdbNbLHI/AAAAAAAAAvk/H4VWK0YywcE/s320/Selenium-logo.png" /></a></div>
<a name='more'></a>
<h1>What makes it so popular?</h1>
<p>
It became popular for many different reasons. Support from various vendors alone is too little to make a solution competitive on the test automation market; there are many other things which made WebDriver far better than its rivals in the same area. The major factors are:
<ul>
<li> <b>It's free</b>. You may name other advantages which look more prominent than price, but I really doubt WebDriver could have gained even half of its popularity had it been priced competitively with vendor tools. It lacks many things almost every vendor tool has; moreover, it is not really fair to compare WebDriver with black-box solutions like UFT, TestComplete or similar, as these are systems of a different class. Nevertheless, knowledge of WebDriver is more in demand in the current test automation world than anything else, and its price was a very helpful factor in that.
<li> <b>It's simple</b>. Another thing which beats many competitors is its simplicity. You mainly need to know 20-30 methods to be able to do almost everything with your application under test. How to combine them is up to you, but the core functionality is small and simple.
<li> <b>It's easily adjustable to user needs</b>. You can use the programming language you like, your favourite IDE and whatever infrastructure the development teams already have. And this is not just about ports to different programming languages (as was done with Watir); it was part of the initial design. The client-server architecture gives us a common server side, so the only part which requires migration is the client code. Also, you don't really need any "bells and whistles" to combine your code into test suites, make checkpoints or do other routine work: existing test engines and build infrastructure already do that for us.
<li> <b>It's portable</b>. You can run your tests on different environments. Even more, you can pack your testing solution and distribute it as a binary or similar runnable resource which you can run wherever you want, without installing any specialized software.
</ul>
There may be many other good things about WebDriver; I've just mentioned some of them. The main point is that all of these factors have made WebDriver the most popular web test automation tool nowadays.
</p>
<h1>Expanding WebDriver outside of web</h1>
<p>
Despite the word <b>Web</b> in its name, WebDriver is not necessarily limited to web technologies. The idea is that no matter which UI type we operate with, we need a set of common actions to interact with each specific application and each specific control. Some of them are:
<ul>
<li> Start the application and set up communication with it
<li> Detect a control by some identifier or a combination of identifiers
<li> Verify element presence
<li> Click/tap on an element
<li> Enter some text
<li> Retrieve some attribute value
<li> Handle popup messages (which should be standard for each particular technology)
</ul>
All that leads to the fact that the WebDriver interface is applicable to other UI technologies. And that is what actually happened.
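As a sketch of that idea, the common actions above can be expressed as one technology-neutral interface. All names here are illustrative, not part of any real WebDriver binding; the fake implementation exists only so the contract can be exercised:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical technology-neutral driver contract (illustrative, not a real API). */
interface UiDriver {
    void start(String application);                 // start the application, set up communication
    boolean isPresent(String locator);              // verify element presence
    void click(String locator);                     // click/tap on the element
    void type(String locator, String text);         // enter some text
    String attribute(String locator, String name);  // retrieve an attribute value
}

/** In-memory fake used only to show the contract in action. */
public class FakeDriver implements UiDriver {
    private final Map<String, String> values = new HashMap<>();

    @Override public void start(String application) { values.clear(); }
    @Override public boolean isPresent(String locator) { return values.containsKey(locator); }
    @Override public void click(String locator) { /* no-op in this fake */ }
    @Override public void type(String locator, String text) { values.put(locator, text); }
    @Override public String attribute(String locator, String name) {
        return "value".equals(name) ? values.get(locator) : null;
    }

    public static void main(String[] args) {
        UiDriver driver = new FakeDriver();
        driver.start("demo-app");
        driver.type("id=search", "WebDriver");
        System.out.println(driver.attribute("id=search", "value")); // prints "WebDriver"
    }
}
```

A desktop, mobile or browser driver only has to implement such a contract against its own technology; the test code on top of it stays the same.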
<h2>.NET</h2>
<p>
Thus we've got the <a href="https://code.google.com/p/twin/">TWIN</a> framework for .NET, which implements the WebDriver interface but targets desktop applications. This framework doesn't seem to have been developed for a while (the latest changes were made in October 2012), but the idea is still alive.
</p>
<p>
There is another WebDriver <a href="https://winphonewebdriver.codeplex.com">implementation for Windows Phone</a>. Again, there have been no updates since 2013, but it is still another WebDriver implementation for .NET-based systems.
</p>
<p>
And the idea got a new implementation in the <a href="https://github.com/2gis/Winium">Winium</a> solution, which combines at least 3 modules:
<ul>
<li> <a href="https://github.com/2gis/Winium.Desktop">Winium for Desktop</a>
<li> <a href="https://github.com/2gis/Winium.StoreApps">Winium for Store Apps</a>
<li> <a href="https://github.com/2gis/winphonedriver">Windows Phone Driver</a>
</ul>
These ones are pretty fresh.
</p>
<h2>Android and iOS</h2>
<p>
The major mobile systems didn't suffer from a lack of attention either. WebDriver had separate implementations for the mobile browsers of both systems:
<ul>
<li> <a href="https://code.google.com/p/selenium/wiki/IPhoneDriver">IPhoneDriver</a>
<li> <a href="https://code.google.com/p/selenium/wiki/AndroidDriver">AndroidDriver</a>
</ul>
The above solutions were initially included in the set of WebDriver modules, but by now they have become deprecated. However, this area isn't left empty; they were replaced by:
<ul>
<li> <a href="http://ios-driver.github.io/ios-driver/">iOS Driver</a>
<li> <a href="http://selendroid.io/webview.html">Selendroid</a>
</ul>
And of course, it would be unfair to omit such a solution as <a href="http://appium.io">Appium</a>, which covers both platforms and is growing quite intensively.
</p>
<h2>Other tools</h2>
<p>
It is also a good indicator of popularity when an engine gets support from other vendor tools. E.g. TestComplete is targeted to <a href="http://www.joecolantonio.com/2014/12/16/testcomplete-10-5-embraces-open-source-tools-two-new-features-youre-going-to-love/">support WebDriver starting from version 10.5</a>. I'm not sure what they are trying to achieve with it; maybe there will be some benefit. The future will show, but the fact itself is worth attention.
</p>
<h1>Supported Technologies Market Share</h1>
<p>
All right, we now see that WebDriver goes beyond the web, which means it will take an even bigger portion of the test automation market. To estimate the size of this portion, we simply need to identify the market share of the different technologies and figure out which market segments WebDriver can be applied to. We can highlight 3 major technology areas where WebDriver can now be applied:
<ul>
<li> Browsers
<li> Mobile (including mobile browsers)
<li> Desktop
</ul>
Schematically, the coverage map looks like this:
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-JMtwe8Uz-DU/VZMYr6DAcpI/AAAAAAAAAvs/vMl3hYpKGk4/s1600/WebDriverCoverage.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-JMtwe8Uz-DU/VZMYr6DAcpI/AAAAAAAAAvs/vMl3hYpKGk4/s640/WebDriverCoverage.png" /></a></div>
So let's take a look at the market share for each of those segments and try to identify which part of it is covered by WebDriver and the engines based on it.
</p>
<h2>Browsers Market Share Coverage</h2>
<p>
Browsers are the initial WebDriver area, which nowadays covers not only desktop but also mobile browsers. Here is joint market share data for desktop and mobile browsers together:
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-D_YXXv4Lf3k/VZMfC1V-VbI/AAAAAAAAAwE/8Aq4ZunB1W8/s1600/001.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-D_YXXv4Lf3k/VZMfC1V-VbI/AAAAAAAAAwE/8Aq4ZunB1W8/s640/001.png" /></a></div>
and here is the more detailed market share for desktop browsers:
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-6yF2VnEPsB0/VZMfP55Y1WI/AAAAAAAAAwM/VEpWrirKFAA/s1600/002.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-6yF2VnEPsB0/VZMfP55Y1WI/AAAAAAAAAwM/VEpWrirKFAA/s640/002.png" /></a></div>
As can be seen, Safari is less represented on desktops and gets its share mainly from mobile devices.
</p>
<p>
Alternative data can be taken from the <a href="http://www.w3counter.com/globalstats.php">W3Counter stats</a> source:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-r16v1sXmCGQ/VZMY7K_SRMI/AAAAAAAAAv0/UhDcDLdIKV4/s1600/W3C-Counter-browser-market-share.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-r16v1sXmCGQ/VZMY7K_SRMI/AAAAAAAAAv0/UhDcDLdIKV4/s1600/W3C-Counter-browser-market-share.png" /></a></div>
Despite some differences in the numbers, the browser market shows common trends. Chrome is the obvious leader, while Safari, IE and Firefox still occupy an essential part of the market, going neck and neck with each other. As for WebDriver coverage, the charts above show 92.11%, 98.08% and 93.7% technology coverage respectively, which is substantial.
</p>
<h2>Mobile Market Share Coverage</h2>
<p>
As for the mobile market, we may find the following data:
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-y_uD89wX2H0/VZMffY-0eWI/AAAAAAAAAwU/j4nijvNtScc/s1600/003.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-y_uD89wX2H0/VZMffY-0eWI/AAAAAAAAAwU/j4nijvNtScc/s640/003.png" /></a></div>
or alternative source:
</p>
<p>
<a href="http://www.idc.com/prodserv/smartphone-os-market-share.jsp"><img src="http://www.idc.com/prodserv/smartphone-ms-img/chart-ww-smartphone-os-market-share.png" alt="IDC: Smartphone OS Market Share 2015, 2014, 2013, and 2012 Chart" style="width: 100%; max-width: 979px;"></a>
<table>
<tr>
<th>Period</th>
<th class="data-color-1">Android</th>
<th class="data-color-2">iOS</th>
<th class="data-color-3">Windows Phone</th>
<th class="data-color-4">BlackBerry OS</th>
<th class="data-color-5">Others</th>
</tr>
<tr class="mark">
<td>Q1 2015</td>
<td>78.0%</td>
<td>18.3%</td>
<td>2.7%</td>
<td>0.3%</td>
<td>0.7%</td>
</tr>
<tr>
<td>Q1 2014</td>
<td>81.2%</td>
<td>15.2%</td>
<td>2.5%</td>
<td>0.5%</td>
<td>0.7%</td>
</tr>
<tr>
<td>Q1 2013</td>
<td>75.5%</td>
<td>16.9%</td>
<td>3.2%</td>
<td>2.9%</td>
<td>1.5%</td>
</tr>
<tr>
<td>Q1 2012</td>
<td>59.2%</td>
<td>22.9%</td>
<td>2.0%</td>
<td>6.3%</td>
<td>9.5%</td>
</tr>
</table>
<p class="table-source">
<a href="http://www.idc.com/prodserv/smartphone-os-market-share.jsp">Source: IDC, May 2015</a>
</p>
The above charts and tables show that the major players are Android and iOS. Windows Phone has a few percent, while all other platforms in total occupy about 1% of the market. And it is only this 1% which isn't covered by WebDriver-based solutions; according to the observed trend, this number will decrease even more. Thus, we currently have about 99% coverage of the mobile market.
</p>
<h2>Desktop Market Share Coverage</h2>
<p>
For desktop software it is a bit more difficult to calculate reasonable parameters to spot the actual coverage, as we would have to list all major desktop GUI technologies and their popularity, and such information is hard to find. But at least the accessible market share can be estimated from the market share of desktop operating systems:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-vFcVpccSP68/VZMfoL0UIuI/AAAAAAAAAwc/yog2CIFD3C4/s1600/004.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-vFcVpccSP68/VZMfoL0UIuI/AAAAAAAAAwc/yog2CIFD3C4/s640/004.png" /></a></div>
Currently we can use WebDriver for WinForms/WPF applications, which are available on the Windows platform. So, according to the above graph, that is about 87% of the market share. Of course, I have to mention once again that this is not actual coverage but rather potential coverage. It is a big portion; however, OS X remains a fairly large uncovered part.
</p>
<h2>Summarizing market share data</h2>
<p>
So, the joint market share where WebDriver is applicable can be represented with the following table:
<table>
<tr>
<th>Technology area</th><th>% Market Share Covered</th>
</tr>
<tr>
<td>Browsers</td><td>90-95%</td>
</tr>
<tr>
<td>Mobile</td><td>~99%</td>
</tr>
<tr>
<td>Desktop</td><td>~87%</td>
</tr>
</table>
Well, it is really good coverage, and taking into account WebDriver's substantial advantages and its expansion into the most popular technology areas, this portion will only grow.
</p>
<h1>Where WebDriver can grow next?</h1>
<p>
However, WebDriver-based solutions still need a lot of development effort. Many things are not covered yet, and many "nice to have" features are missing, so there is a wide range of areas in which to grow. Just some of them are:
<ul>
<li> <b>Technology expansion</b> - as mentioned before, there are some quite essential technology areas where WebDriver-based solutions aren't represented. If such technologies become more popular, there will be developers who prepare solutions for testing them. WebDriver is well suited for that, as we already have a common interface for communication between the client and server parts; everything else is up to the technical implementation.
<li> <b>More unification</b> - currently all the previously mentioned WebDriver-based solutions are separate, independent projects which grow at different velocities. In the future it would be good to see a unified solution which combines all of these implementations and is provided as a single box.
<li> <b>Well-developed high-level API</b> - the WebDriver interface is the very core of a test automation framework. It provides the major operations, but we still have to create additional abstractions to represent different control types with an interface for interacting with them. Many times when I started an automation project I had to write such a layer of abstractions. After some time I came up with some kind of common framework, and very likely others did the same. But what if there were a unified higher-level library ported to different languages? It is something worth thinking about, especially as such a solution could be the basis for the next item.
<li> <b>A more or less unified solution for page object generation and UI element definition</b> - if you take a look at most of the big vendor solutions like QTP, SilkTest, IBM Functional Tester and many others, you will see they all have forms for displaying and modelling the UI object hierarchy, with the ability to store such an object map in some resource. That can be either a dedicated repository or just another portion of code. Nevertheless, they all have tools for visual modelling of page objects, where we can select proper identifiers for elements and define which elements should be generated and which should not. What do we have for WebDriver-based tools right now? Well, Appium has an inspector which can view the object hierarchy, and we shouldn't forget about Selenium IDE. What else? Almost nothing. So we definitely need something more solid in this area as well. Maybe it should be a set of plugins for different IDEs, or a dedicated UI tool based on some IDE (just like IBM Functional Tester was based on Eclipse, or Android Studio is based on IntelliJ IDEA).
</ul>
</p>
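The high-level API idea above can be sketched as typed control classes wrapping raw element access, so that test code talks about buttons and text fields rather than generic elements. Everything below is a hypothetical illustration, not an existing library; `Element` stands in for something like a WebDriver element handle:

```java
/** Hypothetical low-level element handle; in a real framework it would wrap a WebDriver element. */
interface Element {
    void click();
    void sendKeys(String text);
    String getAttribute(String name);
}

/** Typed controls expose only the actions that make sense for each control type. */
class Button {
    private final Element element;
    Button(Element element) { this.element = element; }
    void press() { element.click(); }
}

class TextField {
    private final Element element;
    TextField(Element element) { this.element = element; }
    void setText(String text) { element.sendKeys(text); }
    String getText() { return element.getAttribute("value"); }
}

public class ControlsDemo {
    public static void main(String[] args) {
        // In-memory element standing in for a real on-screen control.
        final String[] value = {""};
        Element stub = new Element() {
            @Override public void click() { }
            @Override public void sendKeys(String text) { value[0] = text; }
            @Override public String getAttribute(String name) { return value[0]; }
        };
        TextField field = new TextField(stub);
        field.setText("WebDriving");
        System.out.println(field.getText()); // prints "WebDriving"
    }
}
```

Once such a layer exists, a page object is just a collection of typed controls, which is exactly the kind of structure a visual page-object generator could emit.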
<h1>Conclusion</h1>
<p>
WebDriver has been the most popular UI test automation framework for many years already, and it has gone far beyond being just a web testing framework, occupying a wide range of modern technologies and taking a portion of the market from competitors. The approach itself has proven useful, and WebDriver still has huge potential to grow: not just in technology coverage but also in expanding libraries towards a higher-level API, various helper tools and other things which are typical for big vendor solutions. I expect to see the same around WebDriver.
</p>
</body>
</html>
<!-- Post: Cucumber JVM: Advanced Reporting 2. Detailed Report (HTML, PDF), published 2015-06-13, by Mykola Kolisnyk -->
<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.passed {background-color:lightgreen;font-weight:bold;color:darkgreen}
.failed {background-color:tomato;font-weight:bold;color:darkred}
.undefined {background-color:gold;font-weight:bold;color:goldenrod}
</style>
<title>Cucumber JVM: Advanced Reporting 2. Detailed Report (HTML, PDF)</title>
</head>
<body>
<h1>Introduction</h1>
<p>
<a href="http://mkolisnyk.blogspot.com/2015/05/cucumber-jvm-advanced-reporting.html">Previously</a> I described basic report samples based on <a href="https://github.com/cucumber/cucumber-jvm">Cucumber-JVM</a>. Since I use this reporting solution in practice quite frequently, the number of requirements and enhancements keeps growing, so since <a href="http://mkolisnyk.blogspot.com/2015/05/cucumber-jvm-advanced-reporting.html">last time</a> I have added some more features which can be of use.
</p>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-aXHbB9nPPeI/VUZUwaKi8TI/AAAAAAAAAr0/yiXtd5WALPo/s1600/cucumber.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-aXHbB9nPPeI/VUZUwaKi8TI/AAAAAAAAAr0/yiXtd5WALPo/s200/cucumber.png" /></a></div>
</p>
<p>
Some of new features are:
<ul>
<li> <a href="http://mkolisnyk.blogspot.com/2015/05/cucumber-jvm-junit-re-run-failed-tests.html">Cucumber extension to support failed tests re-run</a>
<li> Detailed results report which supports the following:
<ul>
<li> Detailed results report generation in HTML format
<li> Detailed results report generation in PDF format
<li> Screen shots are included into both of the above reports
</ul>
</ul>
All these new features are included in version <b>0.0.5</b>, so we can add the Maven dependency:
<pre class="code">
<dependency>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>cucumber-reports</artifactId>
<version>0.0.5</version>
</dependency>
</pre>
or the same thing for Gradle:
<pre class="code">
'com.github.mkolisnyk:cucumber-reports:0.0.5'
</pre>
</p>
<p>
Since the <a href="http://mkolisnyk.blogspot.com/2015/05/cucumber-jvm-junit-re-run-failed-tests.html">Cucumber failed tests re-run</a> functionality was described in an earlier post, here I'll concentrate on the detailed results report features.
</p>
<a name='more'></a>
<h1>Why do we need the detailed results report?</h1>
<p>
The first question which may appear here is: why do we need such a report? Most of the details can be retrieved from the standard Cucumber HTML report. Well, the main reason is that the existing HTML results report isn't enough, and in practice we need some additional features, such as:
<ul>
<li> The report itself should have minimal external dependencies (e.g. styles, scripts) so that it is easier to publish as a build artifact
<li> It is very useful for the report to show a screen shot for every error encountered
<li> The entire report should have a portable, self-contained format so that it can be sent via e-mail; the PDF format is quite useful for that
</ul>
Also, the standard HTML report looks pretty "standard" and empty. So I've added some features to make the new detailed report more convenient to read and to navigate through the list of test results.
</p>
<h1>Detailed Report Structure</h1>
<p>
The image below shows what the detailed report looks like:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-5QSfTkHrxMc/VXyLRPHrp2I/AAAAAAAAAvE/4lMBdGEFo9w/s1600/DetailedReport.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-5QSfTkHrxMc/VXyLRPHrp2I/AAAAAAAAAvE/4lMBdGEFo9w/s400/DetailedReport.png" /></a></div>
It contains 3 major sections:
<ul>
<li> Overview
<li> Table of Contents
<li> Detailed Results Report
</ul>
</p>
<h2>Overview section</h2>
<p>
The overview section contains aggregated data on the entire run results. In particular, it shows the number of passed/failed/skipped features, scenarios and steps, with the percentage of passed items relative to the total number of items. Here is what a typical overview section looks like:
<table><tr><th></th><th>Passed</th><th>Failed</th><th>Undefined</th><th>%Passed</th></tr><tr><th>Features</th><td class="passed">4</td><td class="failed">3</td><td class="undefined">0</td><td>57.14</td></tr><tr><th>Scenarios</th><td class="passed">47</td><td class="failed">6</td><td class="undefined">0</td><td>88.68</td></tr><tr><th>Steps</th><td class="passed">406</td><td class="failed">6</td><td class="undefined">28</td><td>92.27</td></tr></table>
</p>
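The %Passed column is just the passed count divided by the total number of items in the row, e.g. 4 passed features out of 7 give 57.14. A tiny helper (class and method names are mine, not part of the library) reproduces the numbers from the table above:

```java
public class ReportStats {
    /** Percentage of passed items, rounded to two decimals as shown in the overview table. */
    public static double percentPassed(int passed, int failed, int undefined) {
        int total = passed + failed + undefined;
        return Math.round(10000.0 * passed / total) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(percentPassed(4, 3, 0));    // features row: 57.14
        System.out.println(percentPassed(47, 6, 0));   // scenarios row: 88.68
        System.out.println(percentPassed(406, 6, 28)); // steps row: 92.27
    }
}
```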
<h2>Table of Contents section</h2>
<p>
The table of contents is a hyper-linked list of features and scenarios highlighting their execution status. It is a convenient addition to the report which makes it possible to navigate to each specific scenario result. Thus we have a fast way to get to the test we're interested in.
</p>
<h2>Detailed Results Report section</h2>
<p>
The detailed results report looks similar to the standard HTML report and simply contains the description of steps with their status and a detailed stack trace in case of error. Also, every scenario result contains a hyperlink back to the table of contents, so we can easily navigate to the scenario we need and then switch to any other scenario we're interested in.
</p>
<h1>How to Generate Detailed Report</h1>
<p>
The report generation is based on post-processing of the standard Cucumber JSON results report. So, before generating the detailed results report we should specify which results file to use as the source. Typical Java code for producing the report looks like:
<pre class="code">
CucumberDetailedResults results = new CucumberDetailedResults();
results.setOutputDirectory("target/");
results.setOutputName("cucumber-results");
results.setSourceFile("./src/test/resources/cucumber.json");
results.executeDetailedResultsReport(false);
</pre>
This code produces a detailed report stored at the following path: <b>target/cucumber-results-test-results.html</b>.
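The observable naming convention here is that the report lands at <i>outputDirectory</i> + <i>outputName</i> + "-test-results" plus the extension. A small sketch of that convention (the helper name is illustrative; the real generator composes the path internally):

```java
// Sketch of the observable output path convention:
// <outputDirectory><outputName>-test-results.<extension>
public class ReportPath {
    public static String resolve(String outputDirectory, String outputName, boolean toPdf) {
        return outputDirectory + outputName + "-test-results." + (toPdf ? "pdf" : "html");
    }

    public static void main(String[] args) {
        // Matches the setOutputDirectory/setOutputName values used above.
        System.out.println(resolve("target/", "cucumber-results", false));
    }
}
```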
</p>
<h2>Adding Screen Shots</h2>
<p>
Often it's very useful to have a screen shot on error to see what the real problem was. It's not a silver bullet but it's definitely helpful for analysis. Since the described report is produced by post-processing rather than as part of hooks, the way screen shots are generated is out of the report generator's scope. We simply assume that we already have some folder containing screen shot files which follow these rules:
<ul>
<li> Image files are of PNG format
<li> Each failed test contains only one screen shot, as we normally have only one error per failed test. Maybe in the future I'll add some additional options to handle multiple files, but currently this is the way things work.
<li> Screen shot file names are taken from the corresponding scenario IDs, where special characters like ';' or spaces are replaced with underscores.
</ul>
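The naming rule above can be sketched as a tiny helper. The exact regular expression is an assumption covering the characters mentioned in the rules, not the actual library code:

```java
// Sketch of the screen shot naming rule: the file name is the scenario ID
// with special characters (such as ';' or spaces) replaced by underscores,
// plus the .png extension. The character class here is an assumption.
public class ScreenShotName {
    public static String fromScenarioId(String scenarioId) {
        return scenarioId.replaceAll("[^A-Za-z0-9_.-]", "_") + ".png";
    }

    public static void main(String[] args) {
        System.out.println(fromScenarioId("my-feature;failing-scenario 1"));
    }
}
```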
Once we meet these requirements, all we need is to define the screen shot location. Thus, the Java code producing the detailed report now looks like:
<pre class="code">
CucumberDetailedResults results = new CucumberDetailedResults();
results.setOutputDirectory("target/");
results.setOutputName("cucumber-results");
results.setSourceFile("./src/test/resources/cucumber.json");
<span class="mark">results.setScreenShotLocation("../src/test/resources/");</span>
results.executeDetailedResultsReport(false);
</pre>
The highlighted part defines where to look for screen shots.
</p>
<h2>PDF Generation</h2>
<p>
And the last feature is the ability to generate a file in PDF format. Even when we generate an HTML report with all the above features, it is still hard to transport as it may contain references to image files. In order to make the report more portable (e.g. if we need to send it via e-mail), we simply need to convert the generated detailed HTML report to a PDF file. This flow is controlled by the parameter of the <b>executeDetailedResultsReport</b> method. If it is set to <b>true</b>, the PDF report is generated in addition to the HTML one; if it is <b>false</b>, only the HTML report is generated. So, in order to produce PDF output we should modify the previous code sample this way:
<pre class="code">
CucumberDetailedResults results = new CucumberDetailedResults();
results.setOutputDirectory("target/");
results.setOutputName("cucumber-results");
results.setSourceFile("./src/test/resources/cucumber.json");
results.setScreenShotLocation("../src/test/resources/");
results.executeDetailedResultsReport(<span class="mark">true</span>);
</pre>
The above example will produce a PDF file located at the following path: <b>target/cucumber-results-test-results.pdf</b>.
</p>
<h1>Summary</h1>
<p>
Those were the additional Cucumber reporting features introduced recently. So, we currently have reports which can be used as an e-mail message body, plus some additional reports which can be provided as attachments. This is a good foundation for quite a convenient notification system: we don't need to keep all our reports as build artifacts, and we now have one major report where we can get both overview and detailed information. The list of enhancements will grow as more requirements appear, but the functionality we have right now is already quite useful.
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com89tag:blogger.com,1999:blog-2532302763215844416.post-25233374258901871852015-05-26T00:56:00.000+01:002015-05-26T01:05:25.806+01:00Cucumber JVM + JUnit: Re-run failed tests<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.passed {background-color:lightgreen;font-weight:bold;color:darkgreen}
.failed {background-color:tomato;font-weight:bold;color:darkred}
.skipped {background-color:gold;font-weight:bold;color:goldenrod}
.undefined {background-color:gold;font-weight:bold;color:goldenrod}
</style>
<title>Cucumber JVM + JUnit: Re-run failed tests</title>
</head>
<body>
<p>
Automated tests should run reliably and provide predictable results. At the same time, there are spontaneous, temporary errors which distort the entire picture when reviewing test results. And it's really annoying when tests fail on some temporary problem and then pass on the next run. Of course, one thing is when we forgot to add a waiting timeout for an element to appear before interacting with it. But there are cases when the reason for such a temporary problem lies beyond the automated tests' implementation and is mainly related to the environment, which may cause delays or downtime for a short period. So, the normal reaction is to re-run the failed tests and confirm the functionality is fine. But this is a routine task and it doesn't really require any intellectual work to perform. It's simply additional logic which handles the test result state and triggers a repeated run in case of error. If a test fails permanently we'll still see the error, but if a test passes after a re-run then the problem doesn't require too much attention.
</p>
<p>
And generally, if you simply stumble once, it's not a reason to fail.
<div class="separator" style="clear: both; text-align: center;"><img border="0" src="http://1.bp.blogspot.com/-Gy-AMYS8UiA/VWO1l7qUMhI/AAAAAAAAAug/GW2sX-oUDAE/s1600/Skater.gif" /></div>
</p>
<p>
This problem is not new and it has already been resolved for many particular cases. E.g., here is a <a href="http://stackoverflow.com/a/20762914">JUnit solution example</a>. In this post I'll show you how to perform the same re-run for <a href="https://github.com/cucumber/cucumber-jvm">Cucumber-JVM</a> in combination with <a href="http://junit.org">JUnit</a>, as I'm actively using this combination of engines and it is quite popular. The solution from the previous link doesn't really fit Cucumber, as each specific JUnit test in Cucumber-JVM corresponds to a specific step rather than an entire scenario. Thus, the re-run functionality for this combination of engines looks a bit different. So, let's see how we can re-run our Cucumber tests in JUnit.
</p>
<a name='more'></a>
<h1>Major approach for re-run and areas of impact</h1>
<p>
The general idea is to update existing engine classes with some post-processing which does the following:
<ol>
<li> Handle test execution and status monitoring
<li> If the current test fails, initiate a re-run
<li> If the test still fails after some maximal number of re-runs, escalate the error
<li> If the test passes on a re-run, mark the corresponding scenario as passed
</ol>
Since this is about handling test runs themselves, the <a href="https://cucumber.io/docs/reference#hooks">hook mechanism</a> doesn't seem to be the best fit. Also, we have an extra difficulty in handling errors for scenarios and scenario outline examples, as they have slightly different hierarchical structures, so these scenario types should be handled separately.
</p>
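The four steps above can be sketched as a generic retry loop, independent of any Cucumber classes. This is a minimal sketch of the flow; the names and the retry limit are illustrative, not the actual runner code shown later:

```java
// Generic sketch of the retry flow: run the test, and on failure re-run it
// up to retryCount more times before escalating the error.
public class RetryRunner {
    public static void runWithRetry(Runnable test, int retryCount) {
        Throwable lastError = null;
        for (int attempt = 0; attempt <= retryCount; attempt++) {
            try {
                test.run();
                return; // test passed (possibly on a re-run) - treat as passed
            } catch (Throwable t) {
                lastError = t; // remember the failure and try again
            }
        }
        // still failing after retryCount re-runs - escalate the error
        throw new RuntimeException("Test failed after retries", lastError);
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        // A "flaky" test that fails twice, then passes on the third attempt.
        runWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("flaky");
        }, 3);
        System.out.println("passed after " + calls[0] + " attempts");
    }
}
```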
<p>
So, the major area of impact is the JUnit runner class for Cucumber plus some additional classes handling feature and specific scenario runs. Let's describe all those items in more detail.
</p>
<h1>Overriding existing Cucumber-JVM classes</h1>
<p>
There are 4 major classes which should be replaced with versions supporting re-run functionality. They are:
<ul>
<li> <a href="https://github.com/cucumber/cucumber-jvm/blob/master/junit/src/main/java/cucumber/runtime/junit/FeatureRunner.java">cucumber.runtime.junit.FeatureRunner</a> - responsible for handling data structures and entire processing of features
<li> <a href="https://github.com/cucumber/cucumber-jvm/blob/master/junit/src/main/java/cucumber/api/junit/Cucumber.java">cucumber.api.junit.Cucumber</a> - major Cucumber runner which is used in JUnit tests via <b>@RunWith</b> annotation
<li> <a href="https://github.com/cucumber/cucumber-jvm/blob/master/junit/src/main/java/cucumber/runtime/junit/ExamplesRunner.java">cucumber.runtime.junit.ExamplesRunner</a> - stores data and handles processing of each specific scenario outline row.
<li> <a href="https://github.com/cucumber/cucumber-jvm/blob/master/junit/src/main/java/cucumber/runtime/junit/ScenarioOutlineRunner.java">cucumber.runtime.junit.ScenarioOutlineRunner</a> - handles scenario outline processing. Unlike a single scenario, a scenario outline is another form of test suite within the bigger test suite represented by the feature.
</ul>
All the above classes should be either extended or simply replaced with enhanced classes doing mainly the same thing, plus additional handling of re-runs on failure. Actually, we should build additional classes with a structure similar to the above classes. The general class hierarchy can be represented with the UML diagram below:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-gzkDWLlXHCE/VWO2E_KJ14I/AAAAAAAAAuo/o9gHaPlEnx0/s1600/ExtensionDiagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-gzkDWLlXHCE/VWO2E_KJ14I/AAAAAAAAAuo/o9gHaPlEnx0/s640/ExtensionDiagram.png" /></a></div>
Classes with the red background are the custom classes we should add, while the other classes are existing Cucumber and JUnit classes. As you can see, the custom classes mirror the structure and relationships of the original classes. This way we keep consistent behaviour between the new and original runner classes, so that Cucumber tests normally run exactly the same way as before. All customization is related only to the re-run functionality and the appropriate data structure storage.
</p>
<p>
In this example, only the <a href="https://github.com/cucumber/cucumber-jvm/blob/master/junit/src/main/java/cucumber/runtime/junit/FeatureRunner.java">cucumber.runtime.junit.FeatureRunner</a> class could simply be extended. The other classes were implemented as replacements of the original classes, as in some cases we need to access internal data structures which cannot be reached through simple extension. So, let's take a look at each custom class separately.
</p>
<h2>Overriding FeatureRunner</h2>
<p>
All the sources below are pasted in full, with key parts highlighted in <span class="mark">yellow</span> and hyperlinked to a detailed explanation of each specific item. This way the code is explained and at the same time can simply be copy/pasted, ready to use.
</p>
<p>
The first class to describe is <b>ExtendedFeatureRunner</b>, which is an extension of the <b>FeatureRunner</b> class. This is the first class where the retry functionality is applied. It is mainly targeted at handling re-runs for scenarios; in the case of scenario outlines it doesn't do anything special but transfers control to other classes. This difference exists because each scenario is represented by a list of classes corresponding to test steps, while scenario outlines are containers for multiple scenarios, and scenario outline examples have a slightly different internal structure representing the scenario data. So, the code of the extended feature runner class looks like:
<pre class="code">
package com.github.mkolisnyk.cucumber.runner;
import java.util.ArrayList;
import java.util.List;
import org.junit.Assert;
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;
import cucumber.runtime.CucumberException;
import cucumber.runtime.Runtime;
import cucumber.runtime.junit.ExecutionUnitRunner;
import cucumber.runtime.junit.FeatureRunner;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberFeature;
import cucumber.runtime.model.CucumberScenario;
import cucumber.runtime.model.CucumberScenarioOutline;
import cucumber.runtime.model.CucumberTagStatement;
public class ExtendedFeatureRunner <a href="#1_1"><span class="mark">extends FeatureRunner</span></a> {
private final List<ParentRunner> children = new ArrayList<ParentRunner>();
<a href="#1_2"><span class="mark">private final int retryCount = 3;
private int failedAttempts = 0;
private int scenarioCount = 0;</span></a>
private Runtime runtime;
private CucumberFeature cucumberFeature;
private JUnitReporter jUnitReporter;
public ExtendedFeatureRunner(CucumberFeature cucumberFeature,
Runtime runtime, JUnitReporter jUnitReporter)
throws InitializationError {
super(cucumberFeature, runtime, jUnitReporter);
this.cucumberFeature = cucumberFeature;
this.runtime = runtime;
this.jUnitReporter = jUnitReporter;
buildFeatureElementRunners();
}
private void buildFeatureElementRunners() {
for (CucumberTagStatement cucumberTagStatement : cucumberFeature.getFeatureElements()) {
try {
ParentRunner featureElementRunner;
if (cucumberTagStatement instanceof CucumberScenario) {
featureElementRunner = new ExecutionUnitRunner(runtime, (CucumberScenario) cucumberTagStatement, jUnitReporter);
} else {
<a href="#1_3"><span class="mark">featureElementRunner = new ExtendedScenarioOutlineRunner(runtime, (CucumberScenarioOutline) cucumberTagStatement, jUnitReporter);</span></a>
}
children.add(featureElementRunner);
} catch (InitializationError e) {
throw new CucumberException("Failed to create scenario runner", e);
}
}
}
public final Runtime getRuntime() {
return runtime;
}
@Override
protected void runChild(ParentRunner child, RunNotifier notifier) {
notifier.fireTestStarted(child.getDescription());
try {
child.run(notifier);
<a href="#1_4"><span class="mark">Assert.assertEquals(0, this.getRuntime().exitStatus());</span></a>
} catch (AssumptionViolatedException e) {
notifier.fireTestAssumptionFailed(new Failure(child.getDescription(), e));
} catch (Throwable e) {
<a href="#1_5"><span class="mark">retry(notifier, child, e);</span></a>
} finally {
notifier.fireTestFinished(child.getDescription());
}
<a href="#1_6"><span class="mark">scenarioCount++;
failedAttempts = 0;</span></a>
}
<a href="#1_7"><span class="mark">public void retry(RunNotifier notifier, ParentRunner child, Throwable currentThrowable)</span></a> {
Throwable caughtThrowable = currentThrowable;
ParentRunner featureElementRunner = null;
boolean failed = true;
Class<? extends ParentRunner> clazz = child.getClass();
CucumberTagStatement cucumberTagStatement = this.cucumberFeature.getFeatureElements().get(scenarioCount);
<a href="#1_8"><span class="mark">if (cucumberTagStatement instanceof CucumberScenarioOutline) {
return;
}</span></a>
while (retryCount > failedAttempts) {
try {
featureElementRunner = new ExecutionUnitRunner(runtime, (CucumberScenario) cucumberTagStatement , jUnitReporter);
featureElementRunner.run(notifier);
Assert.assertEquals(0, this.getRuntime().exitStatus());
failed = false;
break;
} catch (Throwable t) {
failedAttempts++;
caughtThrowable = t;
this.getRuntime().getErrors().clear();
}
}
if (failed) {
notifier.fireTestFailure(new Failure(featureElementRunner.getDescription(), caughtThrowable));
}
}
<a href="#1_9"><span class="mark">@Override
protected List<ParentRunner> getChildren() {
return children;
}
@Override
protected Description describeChild(ParentRunner child) {
return child.getDescription();
}</span></a>
}
</pre>
The highlighted items represent the following features:
<ol>
<li> <a id="1_1"></a> Here we extend the <b>FeatureRunner</b> class, as we need to override just some specific methods while internal data structures are either accessible directly or can be overridden in the descendant class
<li> <a id="1_2"></a> Additional fields which handle internal state of re-run process. Those fields are:
<ul>
<li> <b>retryCount</b> - contains the number of retries to apply in case of a failed test. If the test doesn't show a positive result after <b>retryCount</b> tries, it is treated as failed
<li> <b>failedAttempts</b> - stores the number of failed attempts. This counter is reset for each new test and records how many times the current test has failed so far
<li> <b>scenarioCount</b> - stores the index of the current scenario within the feature. It's needed for the re-run functionality, as we need to re-create the same data structure from the initial feature description
</ul>
<li> <a id="1_3"></a> The <b>buildFeatureElementRunners</b> method code is fully copied from the parent class except for the highlighted part, where we initialize the runner for scenario outlines. As mentioned before, scenario outlines have a more complex structure than scenarios, so the re-run process is handled differently for them. That is another reason why the scenario outline runners are customized, as will be described later
<li> <a id="1_4"></a> The <b>this.getRuntime().exitStatus()</b> statement returns the exit status of the latest test run. It returns 0 in case of successful completion; any other value indicates the test failed
<li> <a id="1_5"></a> This is the place where we call the <b>retry</b> method. If an exception occurs (in our case an AssertionError on the test status check), we initiate the retry process
<li> <a id="1_6"></a> Once we have completed re-run processing, we reset the <b>failedAttempts</b> count to 0 and increment the index of the current scenario, as we are done with the previous one
<li> <a id="1_7"></a> Here is the definition of the <b>retry</b> method itself
<li> <a id="1_8"></a> This part of the code filters out scenario elements which are actually scenario outlines. Since we initialize scenario outline instances with the <b>ExtendedScenarioOutlineRunner</b> runner type, re-run handling for scenario outlines is customized there rather than in the current class
<li> <a id="1_9"></a> These methods simply override existing methods in order to pick up proper references to objects of the current class rather than the parent
</ol>
So, generally, this class handles re-runs for scenarios only. But that is already a big step (almost half of the entire work is done). So, let's integrate it into the major entry point.
</p>
<h2>Custom main runner class</h2>
<p>
And the major entry point for our re-run functionality is the main runner class, which is supposed to be used as the parameter to the <b>@RunWith</b> annotation. It is mainly a copy of the original <b>Cucumber</b> runner, with small deviations related to the fact that we now use the <b>ExtendedFeatureRunner</b> class instead of the original <b>FeatureRunner</b>. The code looks like:
<pre class="code">
package com.github.mkolisnyk.cucumber.runner;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;
import cucumber.runtime.ClassFinder;
import cucumber.runtime.Runtime;
import cucumber.runtime.RuntimeOptions;
import cucumber.runtime.RuntimeOptionsFactory;
import cucumber.runtime.io.MultiLoader;
import cucumber.runtime.io.ResourceLoader;
import cucumber.runtime.io.ResourceLoaderClassFinder;
import cucumber.runtime.junit.Assertions;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberFeature;
public class ExtendedCucumber <a href="#2_1"><span class="mark">extends ParentRunner<ExtendedFeatureRunner></span></a> {
private final JUnitReporter jUnitReporter;
<a href="#2_2"><span class="mark">private final List<ExtendedFeatureRunner> children = new ArrayList<ExtendedFeatureRunner>();</span></a>
private final Runtime runtime;
public ExtendedCucumber(Class clazz) throws InitializationError, IOException {
super(clazz);
ClassLoader classLoader = clazz.getClassLoader();
Assertions.assertNoCucumberAnnotatedMethods(clazz);
RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
ResourceLoader resourceLoader = new MultiLoader(classLoader);
runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader);
jUnitReporter = new JUnitReporter(runtimeOptions.reporter(classLoader), runtimeOptions.formatter(classLoader), runtimeOptions.isStrict());
addChildren(cucumberFeatures);
}
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
RuntimeOptions runtimeOptions) throws InitializationError, IOException {
ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
<a href="#2_3"><span class="mark">@Override
public List<ExtendedFeatureRunner> getChildren() {
return children;
}
@Override
protected Description describeChild(ExtendedFeatureRunner child) {
return child.getDescription();
}
@Override
protected void runChild(ExtendedFeatureRunner child, RunNotifier notifier) {
child.run(notifier);
}</span></a>
@Override
public void run(RunNotifier notifier) {
super.run(notifier);
jUnitReporter.done();
jUnitReporter.close();
runtime.printSummary();
}
private void addChildren(List<CucumberFeature> cucumberFeatures) throws InitializationError {
for (CucumberFeature cucumberFeature : cucumberFeatures) {
<a href="#2_4"><span class="mark">children.add(new ExtendedFeatureRunner(cucumberFeature, runtime, jUnitReporter));</span></a>
}
}
}
</pre>
The highlighted parts represent the following:
<ol>
<li> <a id="2_1"></a> This runner extends <b>ParentRunner<ExtendedFeatureRunner></b> instead of the <b>ParentRunner<FeatureRunner></b> class used by the original <b>Cucumber</b> runner
<li> <a id="2_2"></a> The list of child items now uses the <b>ExtendedFeatureRunner</b> class
<li> <a id="2_3"></a> These methods are simply overridden to match the entire interface (the parent class is an abstract generic class, so we should use proper types and implement the methods it requires)
<li> <a id="2_4"></a> The child elements are now initialized as instances of the <b>ExtendedFeatureRunner</b> class
</ol>
After the above change we can already use our new runner to deal with our Cucumber tests. If we use only simple scenarios, this is more than enough. Unfortunately, if there are scenario outlines this solution doesn't apply, as we simply skip any custom processing for them. So, let's implement re-run functionality which takes scenario outline specifics into account as well.
</p>
<h2>Re-running scenario outlines</h2>
<p>
The <b>Scenario outline</b> processing is handled by 2 major classes:
<ul>
<li> <b>ScenarioOutlineRunner</b> - stores and processes a scenario outline as a set of scenarios. So, it works similarly to <b>FeatureRunner</b>, but the main difference is that scenario outlines are parsed and processed differently, as under the hood Cucumber-JVM generates each scenario from the outline with the parameters inserted.
<li> <b>ExamplesRunner</b> - actually represents the runner for a specific scenario. So, in terms of re-run functionality, it is the major place where the re-run logic should be applied.
</ul>
Since the <b>ExamplesRunner</b> class contains more of the logic related to re-run functionality for scenario outlines, let's start with it. The code is:
<pre class="code">
package com.github.mkolisnyk.cucumber.runner;
import java.util.ArrayList;
import java.util.List;
import org.junit.Assert;
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;
import cucumber.runtime.Runtime;
import cucumber.runtime.junit.ExecutionUnitRunner;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberExamples;
import cucumber.runtime.model.CucumberScenario;
public class ExtendedExamplesRunner extends Suite {
<a href="#3_1"><span class="mark">private int retryCount = 3;</span></a>
private Runtime runtime;
private final CucumberExamples cucumberExamples;
private Description description;
private JUnitReporter jUnitReporter;
<a href="#3_2"><span class="mark">private int exampleCount = 0;
private static List<Runner> runners;
private static List<CucumberScenario> exampleScenarios;</span></a>
protected ExtendedExamplesRunner(Runtime runtime, CucumberExamples cucumberExamples, JUnitReporter jUnitReporter) throws InitializationError {
super(ExtendedExamplesRunner.class, buildRunners(runtime, cucumberExamples, jUnitReporter));
this.cucumberExamples = cucumberExamples;
this.jUnitReporter = jUnitReporter;
this.runtime = runtime;
}
private static List<Runner> buildRunners(Runtime runtime, CucumberExamples cucumberExamples, JUnitReporter jUnitReporter) {
runners = new ArrayList<Runner>();
exampleScenarios = cucumberExamples.createExampleScenarios();
for (CucumberScenario scenario : exampleScenarios) {
try {
<a href="#3_3"><span class="mark">ExecutionUnitRunner exampleScenarioRunner = new ExecutionUnitRunner(runtime, scenario, jUnitReporter);
runners.add(exampleScenarioRunner);</span></a>
} catch (InitializationError initializationError) {
initializationError.printStackTrace();
}
}
return runners;
}
<a href="#3_4"><span class="mark">
public final Runtime getRuntime() {
return runtime;
}
@Override
protected String getName() {
return cucumberExamples.getExamples().getKeyword() + ": " + cucumberExamples.getExamples().getName();
}
@Override
public Description getDescription() {
if (description == null) {
description = Description.createSuiteDescription(getName(), cucumberExamples.getExamples());
for (Runner child : getChildren()) {
description.addChild(describeChild(child));
}
}
return description;
}
@Override
public void run(final RunNotifier notifier) {
jUnitReporter.examples(cucumberExamples.getExamples());
super.run(notifier);
}
</span></a>
@Override
protected void <a href="#3_5"><span class="mark">runChild(Runner runner, RunNotifier notifier)</span></a> {
ParentRunner featureElementRunner = null;
featureElementRunner = (ExecutionUnitRunner)runner;
try {
featureElementRunner.run(notifier);
<a href="#3_6"><span class="mark">Assert.assertEquals(0, this.getRuntime().exitStatus());</span></a>
} catch (AssumptionViolatedException e) {
notifier.fireTestAssumptionFailed(new Failure(runner.getDescription(), e));
} catch (Throwable e) {
<a href="#3_7"><span class="mark">retry(notifier, featureElementRunner, e);</span></a>
} finally {
notifier.fireTestFinished(runner.getDescription());
}
<a href="#3_8"><span class="mark">exampleCount++;</span></a>
}
public void retry(RunNotifier notifier, ParentRunner child, Throwable currentThrowable) {
Throwable caughtThrowable = currentThrowable;
<a href="#3_9"><span class="mark">CucumberScenario scenario = exampleScenarios.get(exampleCount);</span></a>
ParentRunner featureElementRunner = null;
boolean failed = true;
int failedAttempts = 0;
while (retryCount > failedAttempts ) {
try {
<a href="#3_10"><span class="mark">featureElementRunner = new ExecutionUnitRunner(runtime, scenario, jUnitReporter);
featureElementRunner.run(notifier);
Assert.assertEquals(0, this.getRuntime().exitStatus());
failed = false;
break;</span></a>
} catch (Throwable t) {
failedAttempts++;
caughtThrowable = t;
this.getRuntime().getErrors().clear();
}
}
if (failed) {
notifier.fireTestFailure(new Failure(featureElementRunner.getDescription(), caughtThrowable));
}
}
}
</pre>
Here the highlighted items represent the following:
<ol>
<li> <a id="3_1"></a> Here is another place where <b>retryCount</b> is applied
<li> <a id="3_2"></a> Custom fields storing internal data. Some of these fields are available in the original class as well, but they are hidden in private fields. That's why I had to make a copy of the original class instead of a simple extension
<li> <a id="3_3"></a> In a scenario outline, each example item is actually a single scenario, so in this block we just initialize all child items as single scenarios
<li> <a id="3_4"></a> Here we override parent class methods to expose the current class fields as well as to customize some scenario outline specific features, such as the scenario outline name which includes the parameters row.
<li> <a id="3_5"></a> Overridden <b>runChild</b> method definition
<li> <a id="3_6"></a> Similar to the <b>FeatureRunner</b> extension, we identify the current test status by running the following statement:
<pre class="code">
Assert.assertEquals(0, this.getRuntime().exitStatus());
</pre>
<li> <a id="3_7"></a> If the above assertion throws an error, we invoke the <b>retry</b> method which handles the major re-run logic
<li> <a id="3_8"></a> After a scenario is processed, we advance the current scenario counter by one position. We store all generated scenarios in the <b>exampleScenarios</b> field, and the index of the currently running scenario in the <b>exampleCount</b> field. That is needed inside the <b>retry</b> method
<li> <a id="3_9"></a> And here is the <b>retry</b> method. Before re-running a scenario we should get a reference to the recently run scenario outline item. The key thing is that we shouldn't re-create the same scenario but get an exact reference to it; this way we keep our results more or less consistent. That's why we kept the <b>exampleScenarios</b> and <b>exampleCount</b> fields, so that the following statement:
<pre class="code">
CucumberScenario scenario = exampleScenarios.get(exampleCount);
</pre>
returns exactly the currently running scenario instance, which already has one failed result at this moment.
<li> <a id="3_10"></a> The re-run handling logic. We initialize a new runner based on the existing scenario instance and try to run it again, checking the result code. If the re-run passes, we break the loop and mark the test as successful; otherwise we trigger the final error.
</ol>
So, now we can handle re-runs at the scenario outline level, and we have to integrate our changes into the main runner. Actually, neither the original <b>ExamplesRunner</b> nor the recently written <b>ExtendedExamplesRunner</b> class is meant to be invoked from the main runner class directly. For this reason there is the container class <b>ScenarioOutlineRunner</b>, or its modification in the form of the <b>ExtendedScenarioOutlineRunner</b> class, which was already referenced in our code <a href="#1_3">here</a>. For the <b>ExtendedScenarioOutlineRunner</b> class we don't need too many changes; all we need is a reference to the customized <b>ExtendedExamplesRunner</b> class. Thus the code of this class is almost identical to the <b>ScenarioOutlineRunner</b> class and looks like:
<pre class="code">
package com.github.mkolisnyk.cucumber.runner;
import java.util.ArrayList;
import java.util.List;
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;
import cucumber.runtime.Runtime;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberExamples;
import cucumber.runtime.model.CucumberScenarioOutline;
public class ExtendedScenarioOutlineRunner extends
Suite {
private final CucumberScenarioOutline cucumberScenarioOutline;
private final JUnitReporter jUnitReporter;
private Description description;
public ExtendedScenarioOutlineRunner(Runtime runtime, CucumberScenarioOutline cucumberScenarioOutline, JUnitReporter jUnitReporter) throws InitializationError {
super(null, buildRunners(runtime, cucumberScenarioOutline, jUnitReporter));
this.cucumberScenarioOutline = cucumberScenarioOutline;
this.jUnitReporter = jUnitReporter;
}
private static List<Runner> buildRunners(Runtime runtime, CucumberScenarioOutline cucumberScenarioOutline, JUnitReporter jUnitReporter) throws InitializationError {
List<Runner> runners = new ArrayList<Runner>();
for (CucumberExamples cucumberExamples : cucumberScenarioOutline.getCucumberExamplesList()) {
<a href="#4_1"><span class="mark">runners.add(new ExtendedExamplesRunner(runtime, cucumberExamples, jUnitReporter));</span></a>
}
return runners;
}
@Override
public String getName() {
return cucumberScenarioOutline.getVisualName();
}
@Override
public Description getDescription() {
if (description == null) {
description = Description.createSuiteDescription(getName(), cucumberScenarioOutline.getGherkinModel());
for (Runner child : getChildren()) {
description.addChild(describeChild(child));
}
}
return description;
}
@Override
public void run(final RunNotifier notifier) {
cucumberScenarioOutline.formatOutlineScenario(jUnitReporter);
super.run(notifier);
}
@Override
protected void runChild(Runner runner, final RunNotifier notifier) {
super.runChild(runner, notifier);
}
}
</pre>
Here highlighted elements represent the following:
<ol>
<li> <a id="4_1"></a> As mentioned before, the major change we should include is that the current class contains a collection of <b>ExtendedExamplesRunner</b> elements. Everything else is taken from the original class. Of course, an extension could be more elegant, but in our case we had to make changes inside a private static method. Thus, I had to make an exact copy of the original scenario outline container class.
</ol>
These are the major changes we had to make in order to make our Cucumber tests re-runnable in case of error.
</p>
<h1>Running tests</h1>
<p>
Now it's time to use our newly extended Cucumber runner. Usage is exactly the same as for the original <b>Cucumber</b> class, and a sample test may look like this:
<pre class="code">
package com.github.mkolisnyk.cucumber.reporting;
import org.junit.runner.RunWith;
import com.github.mkolisnyk.cucumber.runner.ExtendedCucumber;
import cucumber.api.CucumberOptions;
<span class="mark">@RunWith(ExtendedCucumber.class)</span>
@CucumberOptions(
plugin = {"html:target/cucumber-html-report",
"json:target/cucumber.json",
"pretty:target/cucumber-pretty.txt",
"usage:target/cucumber-usage.json",
"junit:target/cucumber-results.xml"
},
features = {"./src/test/java/com/github/mkolisnyk/cucumber/features" },
glue = {"com/github/mkolisnyk/cucumber/steps" },
tags = { }
)
public class SampleCucumberTest {
}
</pre>
Our custom runner class inclusion is highlighted. Now we can run our tests as ordinary JUnit tests and get our test results.
</p>
<p>
Since we operate only with top-level Cucumber engine structures, we do not update anything specific to JUnit or to the standard reporting features, which record all our errors even if a successful re-run followed. Nevertheless, we can still trace whether our tests fail permanently or whether the error we spot is just temporary. The example below shows a fragment of the HTML report for one test which passed after some re-runs:
<div>
<section itemtype="http://cukes.info/microformat/scenario" class="blockelement scenario failed" itemscope=""> <details open="open"> <summary class="header"> <span class="keyword" itemprop="keyword">Scenario</span>: <span itemprop="name" class="name">Flaky test</span> </summary> <div itemprop="description" class="description"></div> <ol class="steps"><li itemtype="http://cukes.info/microformat/step" class="step passed"><span class="keyword" itemprop="keyword">Given </span><span class="name" itemprop="name">I am in the system</span></li><li itemtype="http://cukes.info/microformat/step" class="step failed"><span class="keyword" itemprop="keyword">When </span><span class="name" itemprop="name">I do something</span><pre class="error">java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at com.github.mkolisnyk.cucumber.steps.TestSteps.i_do_something(TestSteps.java:35)
at ✽.When I do something(Test.feature:15)
</pre></li><li itemtype="http://cukes.info/microformat/step" class="step skipped"><span class="keyword" itemprop="keyword">Then </span><span class="name" itemprop="name">I should see nothing</span></li></ol></details> </section><section itemtype="http://cukes.info/microformat/scenario" class="blockelement scenario passed failed" itemscope=""> <details open="open"> <summary class="header"> <span class="keyword" itemprop="keyword">Scenario</span>: <span itemprop="name" class="name">Flaky test</span> </summary> <div itemprop="description" class="description"></div> <ol class="steps"><li itemtype="http://cukes.info/microformat/step" class="step passed"><span class="keyword" itemprop="keyword">Given </span><span class="name" itemprop="name">I am in the system</span></li><li itemtype="http://cukes.info/microformat/step" class="step failed"><span class="keyword" itemprop="keyword">When </span><span class="name" itemprop="name">I do something</span><pre class="error">java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at com.github.mkolisnyk.cucumber.steps.TestSteps.i_do_something(TestSteps.java:35)
at ✽.When I do something(Test.feature:15)
</pre></li><li itemtype="http://cukes.info/microformat/step" class="step skipped"><span class="keyword" itemprop="keyword">Then </span><span class="name" itemprop="name">I should see nothing</span></li></ol></details> </section><section itemtype="http://cukes.info/microformat/scenario" class="blockelement scenario passed" itemscope=""> <details> <summary class="header"> <span class="keyword" itemprop="keyword">Scenario</span>: <span itemprop="name" class="name">Flaky test</span> </summary> <div itemprop="description" class="description"></div> <ol class="steps"><li itemtype="http://cukes.info/microformat/step" class="step passed"><span class="keyword" itemprop="keyword">Given </span><span class="name" itemprop="name">I am in the system</span></li><li itemtype="http://cukes.info/microformat/step" class="step passed"><span class="keyword" itemprop="keyword">When </span><span class="name" itemprop="name">I do something</span></li><li itemtype="http://cukes.info/microformat/step" class="step passed"><span class="keyword" itemprop="keyword">Then </span><span class="name" itemprop="name">I should see nothing</span></li></ol></details> </section>
</div>
So, in this report we should look at the latest run to see whether the test fails permanently. It's also useful to see the interim errors, as some temporary problems may be related not only to environment issues but may also reflect a real software problem which appears as a combination of many different factors.
</p>
<h1>Further improvement</h1>
<p>
The solution described above is not final and requires some improvements. E.g., as you might have seen from the above code samples, the number of retries is hard-coded, while it would be much more convenient to provide some configuration in the form of an annotation (or whatever we deem useful) to state how many times we want to re-run our tests.
</p>
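One possible way to make the retry count configurable is a custom runtime annotation on the test class, which the runner could read via reflection. The sketch below is purely illustrative: the <b>RetryCount</b> annotation name and the <b>retriesFor</b> helper are assumptions made for this example, not part of the code shown above:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation carrying the number of re-runs for a test class.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface RetryCount {
    int value() default 1;
}

public class RetryConfigDemo {
    // Sample test class declaring that it should be re-run up to 3 times.
    @RetryCount(3)
    static class SampleTest {
    }

    // Reads the retry count from the annotation, falling back to a single run.
    static int retriesFor(Class<?> clazz) {
        RetryCount annotation = clazz.getAnnotation(RetryCount.class);
        return annotation == null ? 1 : annotation.value();
    }

    public static void main(String[] args) {
        System.out.println(retriesFor(SampleTest.class)); // prints: 3
        System.out.println(retriesFor(Object.class));     // prints: 1
    }
}
```

The extended runner would then call something like <b>retriesFor(...)</b> on the class passed to its constructor instead of using a hard-coded constant.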
<p>
Also, even though we re-run our tests, we still see all previous errors. Thus we need additional reporting (maybe another extension as part of <a href="http://mkolisnyk.blogspot.co.uk/2015/05/cucumber-jvm-advanced-reporting.html">Advanced Reporting</a>) which cleans up the results to show only permanent errors. However, we shouldn't get rid of the current reports either, as they may show us temporary problems which are also useful to know about.
</p>
<p>
And we shouldn't forget that this solution is built for the combination of Cucumber-JVM and JUnit, while in some cases people use a different combination of engines even within the Java stack.
</p>
<p>
There are other improvements that could be made. But the aim of this post is to show one of the ways we can make our life simpler.
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com20tag:blogger.com,1999:blog-2532302763215844416.post-59122881497582201192015-05-11T01:08:00.000+01:002015-08-31T21:41:24.550+01:00The Future of Test Automation Frameworks<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>The Future of Test Automation Frameworks</title>
</head>
<body>
<p>
It's always interesting to know the future, to be able to react to it properly. This is especially true in the world of technology, where we constantly get something new, and something which was just a subject of science fiction yesterday becomes observable reality today. Automated testing is not an exception here. We should be able to catch the proper trend and be prepared for it. Actually, our professional growth depends on it. Should we stick to the technologies we use at the moment, or should we dig more into areas which aren't well developed yet but still have big potential? In order to answer that question we need to understand how test automation has been evolving and what the promising areas are. Based on that, we can identify what to expect next.
</p>
<p>
So, let's observe the evolution of test automation frameworks to see how they evolve and where we can grow.
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-SVcEzPBZFog/VU_w5VDDCfI/AAAAAAAAAs8/fL-u2DoZ9R8/s1600/BenderEvolution.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-SVcEzPBZFog/VU_w5VDDCfI/AAAAAAAAAs8/fL-u2DoZ9R8/s1600/BenderEvolution.png" /></a></div>
</p>
<a name='more'></a>
<h1>Background</h1>
<p>
Before we go further we should clearly understand that test automation is not limited to specific tools emulating user clicks. It starts from utility scripts (shell scripts or batch files) doing routine operations for us. Then it goes to the world of code-level testing (involving unit and integration tests) and up to the system level, where we interact with the system as end users. The catch here is that end users are not necessarily human beings. E.g., for services, databases and any other back-end stuff, the "user" is still some other software. Only some high-level application components have a dedicated GUI whose interaction is up to a human being.
</p>
<p>
Secondly, test automation activities can be performed by different groups of people depending on which entities we cover. Thus, code-level testing is normally handled by developers, while higher-level testing, like system and integration testing, normally requires a dedicated QA team (or whatever name you give it). Generally, it means that such testing levels require different skill sets, knowledge and requirements. E.g., for a QA team automation may take up to 100% of activities (in some cases there are specialized test automation teams whose members are sometimes called software developers in test). At the same time, test development is not the major activity for a development team and should preferably take no more than 30% of the entire working time. Also, since it's a subsidiary activity for developers, they don't need overly complicated abstractions; in such cases it's more efficient to keep the testing solution simple. For QAs, on the other hand, there may be additional requirements, such as providing a mapping between tests and requirements with minimal maintenance effort when requirements change, which calls for extra abstractions. So, all the above examples (and many others) show that, depending on the major activity performed, we need a different complexity of testing solution.
</p>
<p>
Thirdly, a lot depends on the actual information to be provided. In the classical case we simply report whether the system meets some pre-defined acceptance criteria. At the same time, there are cases when specific criteria are either undefined or automated analysis requires too much effort. In such cases we can automate the most routine parts while the final decision is made by a human based on analysis.
</p>
<p>
All the above items mean that no matter how advanced a test automation framework is, there are always cases when it is non-applicable or its use is not rational.
</p>
<h1>Test Automation Framework Approaches Evolution</h1>
<p>
So, how did test automation frameworks evolve? Well, this post is not dedicated to "software archaeology", so I don't want to dig into all the software that existed at the beginning of test automation, and I can't really say what came before and what came after. For this post I'll use a classification based on complexity growth, which very likely correlates with the actual evolution of test automation tools, as they also went through all those enhancements. There are many different framework type classifications based on complexity, and they reflect more or less the same things. So, for this post I'll highlight the following framework types:
<ol>
<li> Linear - all actions are represented as a linear sequence of either commands or separate scripts
<li> Structured - all entities are more or less grouped under dedicated data structures
<li> Data-driven - the common flow is extended with an input data table, and the same flow is executed for each data table row
<li> Keyword-driven - all actual actions are abstracted behind high-level text instructions so that tests can be represented as readable text and reflect business requirements rather than technical implementation
<li> Hybrid - combines 2 or more of the above approaches
</ol>
So, let's take a detailed look at each of the above approaches.
</p>
<h2>Linear</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-RAcxBzd1u94/VU_xFvSts9I/AAAAAAAAAtI/kCxdSWw1rAg/s1600/Linear.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-RAcxBzd1u94/VU_xFvSts9I/AAAAAAAAAtI/kCxdSWw1rAg/s320/Linear.png" /></a></div>
</p>
<p>
The linear framework is the simplest one. Each application function/module/section is represented as an independent script, and the tests are constructed as a large hierarchical structure of these scripts. Alternatively, it can be a simple linear structure of tests with some sub-modules, where all instructions are executed in sequence from top to bottom. A typical example is any shell script which mainly contains a sequence of commands, maybe some sub-scripts grouping common functionality and (but not necessarily) some structure, powered by an engine which defines the general flow of the entire test suite.
</p>
<p>
In most cases it is represented via scripts (shell scripts or batch files, as well as simple scripting constructions in VBScript or JScript). Mainly due to that, I often hear automated tests called "automated scripts". In the modern world they aren't necessarily scripts, but let's consider this a rudimentary artifact which came from this framework approach. Typical code may look like this (in pseudo-code):
<pre class="code">
start myprogram.exe
engine get_top_window
engine click button "Start"
call create_user "user" "password"
call login "user" "password"
IF %ERRORLEVEL% > 0 echo "[ERROR] unable to login"
</pre>
Since the approach is pretty simple, it has been in use from the very beginning. E.g., it was used by <a href="http://en.wikipedia.org/wiki/HP_WinRunner">WinRunner</a>, which was replaced by QTP a long time ago. A similar approach is applicable to <a href="http://staf.sourceforge.net/">STAF</a>. Among modern tools, <a href="http://www.soapui.org/">SoapUI</a> uses something similar: the entire suite is represented as a combination of isolated scripts powered by a main engine.
</p>
<p>
This approach is pretty simple and easy to implement. Also, it is very convenient for <b>Record & Playback</b> abilities, which were (and probably still are) one of the greatest marketing features of a number of vendor test automation tools (even taking into account that many test automation specialists eventually conclude that <b>Record & Playback sucks</b> and is not applicable for full-scale automation projects). However, some drawbacks were observed, like:
<ul>
<li> High dependency between tests, as the entire test suite is usually a single long script which is executed sequentially
<li> Hard to pass values between steps, as each step could be an individual script which knows nothing about the other scripts
<li> Limited capabilities for running a sub-set of tests
<li> Hard to handle errors properly, especially in cases when we had to stop the current test and reset the application to its initial state.
</ul>
</p>
<h2>Structured</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-DtBvR1SJ1_E/VU_xd6piUAI/AAAAAAAAAtM/A99BizXOxls/s1600/Structured.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-DtBvR1SJ1_E/VU_xd6piUAI/AAAAAAAAAtM/A99BizXOxls/s320/Structured.png" /></a></div>
</p>
<p>
All the drawbacks of the previous approach forced the addition of more abstractions representing tests, suites, and pre/post-conditions, as well as more advanced techniques for exception handling and general test execution control. This is how structured test automation frameworks appeared.
</p>
<p>
Firstly, more attention was paid to the ability to group repetitive steps into functions. Then tests themselves started to be represented by dedicated code entities; e.g., in <a href="http://borland.com/silktest">SilkTest</a> there were dedicated keywords which could mark a specific function as a test case (<b>testcase</b>) or pre/post-condition (<b>appstate</b>). Also, <a href="http://junit.org">JUnit</a> was introduced, together with the rest of the xUnit family (its ports to other languages), which now represents a kind of market standard for test frameworks. Using OOP methodology, it became possible to create abstractions for test suites and test cases. Also, the inheritance mechanism optimized the test automation effort, as a lot of common functionality could be re-used in sibling classes.
</p>
<p>
Generally, the test automation solution became more structured, and test execution became more controllable and recoverable.
</p>
<p>
Here is an example of tests using structured framework approach:
<pre class="code">
package com.github.mkolisnyk.aerial.readers;
import java.io.File;
import java.util.ArrayList;
import org.junit.Assert;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.apache.commons.io.FileUtils;
import org.apache.http.HttpStatus;
import com.github.kristofa.test.http.Method;
import com.github.kristofa.test.http.MockHttpServer;
import com.github.kristofa.test.http.SimpleHttpResponseProvider;
import com.github.mkolisnyk.aerial.core.AerialTagList;
import com.github.mkolisnyk.aerial.core.params.AerialParamKeys;
import com.github.mkolisnyk.aerial.core.params.AerialParams;
import com.github.mkolisnyk.aerial.core.params.AerialSourceType;
public class AerialGitHubReaderTest {
private static final int PORT = 51234;
private static final String BASE_URL = "http://localhost:" + PORT;
private MockHttpServer server;
private SimpleHttpResponseProvider responseProvider;
private AerialGitHubReader reader = null;
private AerialParams params;
<span class="done">
@Before
public void setUp() throws Exception {
responseProvider = new SimpleHttpResponseProvider();
server = new MockHttpServer(PORT, responseProvider);
server.start();
}
@After
public void tearDown() throws Exception {
server.stop();
}</span>
<span class="undone">
private void initReader() throws Exception {
params = new AerialParams();
params.parse(new String[] {
AerialParamKeys.INPUT_TYPE.toString(), AerialSourceType.JIRA.toString(),
AerialParamKeys.OUTPUT_TYPE.toString(), AerialSourceType.FILE.toString(),
AerialParamKeys.SOURCE.toString(), BASE_URL,
AerialParamKeys.DESTINATION.toString(), "output",
"repo:mkolisnyk/aerial state:open"
});
reader = new AerialGitHubReader(params, new AerialTagList());
reader.open(params);
}</span>
<span class="mark">@Test
public void testOpenGitHubReaderValidQueryShouldFillContentProperly()
throws Exception {
String[] expectedContent = new String[] {
"Value 1",
"Value 1-1",
"Value 2",
"Value 3"
};
String mockOutput = FileUtils.readFileToString(new File("src/test/resources/json/github_valid_output.json"));
responseProvider
.expect(Method.GET,
"/search/issues?q=repo:mkolisnyk/aerial+state:open&sort=created&order=asc")
.respondWith(HttpStatus.SC_OK, "application/json", mockOutput);
initReader();
Assert.assertTrue(reader.hasNext());
ArrayList<String> actual = new ArrayList<String>();
while (reader.hasNext()) {
actual.add(reader.readNext());
}
Assert.assertEquals(expectedContent.length, actual.size());
for (String expected : expectedContent) {
actual.remove(expected);
}
Assert.assertEquals(0, actual.size());
reader.close();
Assert.assertFalse(reader.hasNext());
Assert.assertNull(reader.readNext());
}</span>
}
</pre>
That was an example of a JUnit test where:
<ul>
<li> The class itself represents a test container and can be either an independent suite or a sub-suite within a bigger one.
<li> The <span class="done">green</span> highlighted part corresponds to pre/post-conditions which make sure that each test drives the application under test to its initial state regardless of the execution results
<li> The <span class="undone">red</span> highlighted text shows an example of how to group repetitive code into functions
<li> The <span class="mark">yellow</span> highlighted code shows an example of how a test itself can be moved into a separate entity
</ul>
</p>
<a id="data-driven"></a><h2>Data-driven</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-dFxiNEyjzSU/VU_xlllRguI/AAAAAAAAAtU/wdrAvzD0Q84/s1600/Data-driven.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-dFxiNEyjzSU/VU_xlllRguI/AAAAAAAAAtU/wdrAvzD0Q84/s320/Data-driven.png" /></a></div>
</p>
<p>
Of course, there are cases when we have to run the same test flow against different data sets. So, there was a need for the ability to bind a data source and provide an engine which runs the same test for each data row. I wouldn't say this appeared only after structured frameworks were introduced; it existed even before. Anyway, this is another enhancement that decreases test automation effort for routine operations.
</p>
<p>
Below is an example of data-driven test:
<pre class="code">
/**
* .
*/
package com.github.mkolisnyk.aerial.expressions.value;
import java.util.Arrays;
import java.util.Collection;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import com.github.mkolisnyk.aerial.document.InputRecord;
/**
* @author Myk Kolisnyk
*
*/
@RunWith(Parameterized.class)
public class DateRangeValueExpressionTest {
private DateRangeValueExpression expression;
private InputRecord record;
private boolean validationPass;
public DateRangeValueExpressionTest(
String description,
InputRecord recordValue,
boolean validationPassValue) {
this.record = recordValue;
this.validationPass = validationPassValue;
}
<span class="mark">
@Parameters(name = "Test date range value: {0}")
public static Collection<Object[]> data() {
return Arrays.asList(new Object[][] {
{"All inclusive range",
new InputRecord("Name", "date", "[01-01-2000;02-10-2010], Format: dd-MM-yyyy", ""),
true
},
{"Upper exclusive range",
new InputRecord("Name", "date", "[01-01-2000;02-10-2010), Format: dd-MM-yyyy", ""),
true
},
{"All exclusive range",
new InputRecord("Name", "date", "(01-01-2000;02-10-2010), Format: dd-MM-yyyy", ""),
true
},
{"Lower exclusive range",
new InputRecord("Name", "date", "(01-01-2000;02-10-2010], Format: dd-MM-yyyy", ""),
true
},
{"Spaces should be handled",
new InputRecord("Name", "date", "( 01-01-2000 ; 02-10-2010 ], Format: dd-MM-yyyy", ""),
true
},
{"Invalid order range",
new InputRecord("Name", "date", "[01-01-2010;02-10-2000], Format: dd-MM-yyyy", ""),
false
},
{"Empty range should cause the failure",
new InputRecord("Name", "date", "[02-10-2010;02-10-2010], Format: dd-MM-yyyy", ""),
false
},
{"Wrong format should cause the failure",
new InputRecord("Name", "date", "[10-02-2010;02-10-2010], Format: MM/dd/yyyy", ""),
false
},
{"Wrong but matching format should cause the failure",
new InputRecord("Name", "date", "[10-02-2010;02-10-2010], Format: MM-dd-yyyy", ""),
false
},
});
}
</span>
@Before
public void setUp() throws Exception {
expression = new DateRangeValueExpression(record);
}
@After
public void tearDown() throws Exception {
}
@Test
public void testParse() throws Exception {
try {
expression.validate();
} catch (Throwable e) {
Assert.assertFalse("This validation was supposed to pass",
this.validationPass);
return;
}
Assert.assertTrue(
"This validation was supposed to fail",
this.validationPass);
expression.parse();
Assert.assertEquals(record.getValue().replaceAll(" ", ""),
expression.toString());
}
}
</pre>
The highlighted area shows the code which provides the input data, so the test runs against each array item (data row).
</p>
<a id="keyword-driven"></a><h2>Keyword-driven</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-KpnExcqK_B0/VU_xt3HJQjI/AAAAAAAAAtc/Lt_49DSUfYU/s1600/Keyword-driven.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-KpnExcqK_B0/VU_xt3HJQjI/AAAAAAAAAtc/Lt_49DSUfYU/s320/Keyword-driven.png" /></a></div>
</p>
<p>
And yet, all the above approaches still require programming skills. At the same time, there is a trend toward closer collaboration within software project teams, with roles combining. Also, it's generally useful to share some artifacts with other people to give them an idea of what is covered by test automation and to get feedback from their point of view. Also, it seemed a good idea to involve non-technical people in test automation. This is one of the reasons why a lot of vendors introduced the <b>Record & Playback</b> ability and promoted it as one of their most valuable features. As I mentioned before, this feature can be useful in some cases, but for large-scale automation it does more harm than good. Thus, some companies started inventing other features. E.g., <a href="http://www.smartesoft.com/products_smartescript.php">SmarteScript</a> from SmarteSoft implemented its own approach to test representation, where steps are represented as table rows and columns correspond to elements, so that each cell contains the command to perform on a specific control at a specific step. Quite an interesting alternative.
</p>
<p>
But one of the most popular solutions was to assign a specific phrase, or keyword, to each actual implementation. So, we have some resource (either a table or a plain text file) which contains just a sequence of keywords, and we have actual code linked to each specific keyword which implements the real actions to be performed during the test run. For the end user, tests might look like:
<table>
<tr><th>Command</th><th>Parameters</th></tr>
<tr><td>Start</td><td>myprogram.exe</td></tr>
<tr><td>Create User</td><td>"user";"password"</td></tr>
<tr><td>Login</td><td>"user";"password"</td></tr>
<tr><td>Verify Text</td><td>Successful</td></tr>
</table>
Well, with proper design this solution is pretty handy and definitely helpful for involving non-technical people. However, it's hard to keep the design proper and convenient for large-scale projects, and we may run into the same problems as with the <b>Linear</b> approach, since visually they don't differ much.
</p>
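The keyword table above can be driven by a simple dispatch engine that maps each keyword to the code implementing it. The sketch below is a toy illustration under assumptions (the keyword names match the table, but the "actions" just record what was executed instead of driving a real application), not the design of any particular vendor tool:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class KeywordEngine {
    // Each keyword is linked to the code performing the real action.
    private final Map<String, Consumer<String>> actions = new HashMap<>();
    // For the sketch we just record what was executed.
    final List<String> executed = new ArrayList<>();

    public KeywordEngine() {
        actions.put("Start", p -> executed.add("start: " + p));
        actions.put("Create User", p -> executed.add("create user: " + p));
        actions.put("Login", p -> executed.add("login: " + p));
        actions.put("Verify Text", p -> executed.add("verify: " + p));
    }

    // Executes one row of the keyword table: a command plus its parameters.
    public void run(String command, String parameters) {
        Consumer<String> action = actions.get(command);
        if (action == null) {
            throw new IllegalArgumentException("Unknown keyword: " + command);
        }
        action.accept(parameters);
    }

    public static void main(String[] args) {
        KeywordEngine engine = new KeywordEngine();
        // The same table as in the text, row by row.
        engine.run("Start", "myprogram.exe");
        engine.run("Create User", "\"user\";\"password\"");
        engine.run("Login", "\"user\";\"password\"");
        engine.run("Verify Text", "Successful");
        System.out.println(engine.executed.size()); // prints: 4
    }
}
```

The point of the design is that the keyword table stays readable for non-technical people while all the technical detail lives behind the map of actions.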
<p>
Typical keyword-driven solutions are:
<ul>
<li> <a href="http://www.fitnesse.org/">FitNesse</a> - the Wiki server with additional abilities.
<li> <a href="http://www.greenpeppersoftware.com/en/">GreenPepper</a> - the <a href="https://www.atlassian.com/software/confluence">Confluence</a> add-on.
<li> <a href="http://concordion.org/">Concordion</a>
</ul>
Also, some vendor tools like <a href="http://www8.hp.com/uk/en/software-solutions/unified-functional-automated-testing/?jumpid=reg_r1002_uken_c-001_title_r0002">HP Unified Functional Testing</a> and <a href="http://smartbear.com/product/testcomplete/overview/">SmartBear TestComplete</a> provide abilities for using keywords.
</p>
<h2>Hybrid</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-BaK31UslrWI/VU_x1hr4lPI/AAAAAAAAAtk/09jMFpB8ykc/s1600/Hybrid.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-BaK31UslrWI/VU_x1hr4lPI/AAAAAAAAAtk/09jMFpB8ykc/s320/Hybrid.png" /></a></div>
</p>
<p>
So, eventually, we come to the point where we see that none of the approaches is perfect and each has drawbacks. However, each approach also has advantages which can mitigate the gaps in the others. This is the reason to combine different framework types: if we use 2 or more framework types together, the result may be treated as a <b>Hybrid</b> framework.
</p>
<p>
One of the best-known hybrid framework examples is <a href="http://cukes.info">Cucumber</a>. Here is a small example of a Cucumber scenario:
<pre class="code">
Scenario Outline: eating
  Given there are &lt;start&gt; cucumbers
  When I eat &lt;eat&gt; cucumbers
  Then I should have &lt;left&gt; cucumbers

  <span class="mark">Examples:
    | start | eat | left |
    | 12    | 5   | 7    |
    | 20    | 5   | 15   |</span>
</pre>
From the above sample we can see that the test itself consists of readable phrases which are mapped to code, which is the major feature of <b>Keyword-driven</b> frameworks. At the same time, the highlighted part is actually a data source, and the scenario is executed once per row of the data table, which is the <b>Data-driven</b> approach.
</p>
<p>
When we look behind the scenes at the glue code, we may see something like:
<pre class="code">
...
@Given("^there are (\\d+) cucumbers$")
public void givenThereAreCucumbers(int number) {
    // TODO: add implementation
}

@When("^I eat (\\d+) cucumbers$")
public void whenIEatCucumbers(int number) {
    // TODO: add implementation
}

@Then("^I should have (\\d+) cucumbers$")
public void thenIShouldSeeCucumbersLeft(int number) {
    // TODO: add implementation
}
...
</pre>
which definitely represents <b>Structured</b> framework approach.
</p>
<p>
Hybrid frameworks are now the most advanced and practically applicable solutions. Some features are more popular than others; moreover, it is a rare case to see a clean implementation of a single approach without elements of another one. The main reason is that the more advanced an approach is, the more practices it can absorb from lower-level approaches. At the same time, we shouldn't treat the present day as the golden age of test automation, as technologies are still evolving and many new things keep appearing on the market. Also, some more advanced approaches may already exist but are not widespread yet due to various complexities. So, now we logically move to the next part of this post to see where we should grow.
</p>
<h1>What's going to be next?</h1>
<p>
The previous section gave an overview of existing framework types. Again, this list is not set in stone and there may be other variations. And yet, there are common directions in which test automation frameworks evolve:
<ul>
<li> A trend towards a more structured representation of tests
<li> A trend towards a higher-level representation of tests which can potentially be shared with people outside the test automation team
</ul>
These trends may help identify where we can go next.
</p>
<p>
So, generally, test automation isn't concentrated only on simulating user actions. It also automates routine processes like requirements/tests/auto-tests alignment, so that each requirement change is automatically reflected in the automated tests, which minimizes maintenance effort. This trend also shows that tool developers don't give up trying to make test design and test automation more visual. Generally, everything moves closer to code-less automation. This brings us to several framework types which could become the next generation of test automation. Some potential candidates are:
<ul>
<li> Model-based frameworks
<li> UI Driven Development (UDD) frameworks
<li> Executable Requirements
</ul>
Now let's try to take a closer look at all of them.
</p>
<h2>Model Based Frameworks</h2>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-K8HvVvfSHWw/VU_x-F7q8AI/AAAAAAAAAts/he-U22oZxmQ/s1600/ModelBasedTesting.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-K8HvVvfSHWw/VU_x-F7q8AI/AAAAAAAAAts/he-U22oZxmQ/s320/ModelBasedTesting.png" /></a></div>
</p>
<p>
The entire application under test can be schematically represented as a directed graph where nodes are associated with application states and edges correspond to actions leading to a new state. Here is an example of such a model:
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-LJmBYEHKJfM/VU_yDuB7XSI/AAAAAAAAAt0/kavvoJ_8ehM/s1600/ModelBasedSample.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-LJmBYEHKJfM/VU_yDuB7XSI/AAAAAAAAAt0/kavvoJ_8ehM/s400/ModelBasedSample.png" /></a></div>
The idea is that we do not define any specific test scenario. We describe the model, and the engine itself generates all linear scenarios going from some initial state to the final one. This is not a new approach, and there are a lot of tools which already support it. Here is a <a href="http://mit.bme.hu/~micskeiz/pages/modelbased_testing.html">very good overview of Model-Based Testing solutions</a> to get started.
</p>
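<p>
The path-generation part can be sketched in a few lines: given the model graph, a depth-first traversal emits every action sequence from the initial state to the final one. This is a simplified sketch (the state and action names are made up for illustration; real MBT tools also handle cycles and coverage criteria):
<pre class="code">
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ModelPaths {

    // Adjacency list: state name -> list of {action, nextState} pairs.
    private final Map graph = new HashMap();

    public void edge(String from, String action, String to) {
        List edges = (List) graph.get(from);
        if (edges == null) {
            edges = new ArrayList();
            graph.put(from, edges);
        }
        edges.add(new String[] {action, to});
    }

    // Collects every action sequence leading from 'state' to 'goal'.
    // Assumes the model is acyclic; a real engine would also limit path length.
    public void paths(String state, String goal, String trail, List result) {
        if (state.equals(goal)) {
            result.add(trail.trim());
            return;
        }
        List edges = (List) graph.get(state);
        if (edges == null) {
            return;
        }
        for (Object item : edges) {
            String[] e = (String[]) item;
            paths(e[1], goal, trail + " " + e[0], result);
        }
    }

    public static void main(String[] args) {
        ModelPaths model = new ModelPaths();
        model.edge("Start", "login", "Home");
        model.edge("Start", "continue as guest", "Home");
        model.edge("Home", "logout", "End");
        List scenarios = new ArrayList();
        model.paths("Start", "End", "", scenarios);
        // Each collected entry is one generated linear test scenario.
        System.out.println(scenarios);
    }
}
</pre>
Each collected string is effectively one linear test scenario; adding a single edge to the model immediately changes the whole generated set.
</p>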
<p>
Why can it potentially be promising? It simply provides a visual and readable representation of the system under test, so everyone can see how it works. It can also optimize test automation efforts, as automated tests are generated based on the model: once we change something in the model, all tests are updated automatically. We also keep quite good coverage at all times; adding even a single state can increase the number of tests considerably (the more edges and nodes we have, the more paths can potentially go through the new state, and the more tests are generated as a result).
</p>
<h2>UDD Frameworks</h2>
<p>
The initial idea was described in the <a href="http://www.thoughtworks.com/insights/blog/future-test-automation-tools-infrastructure">Future of Test Automation Tools &amp; Infrastructure</a> post by <a href="http://www.thoughtworks.com/profiles/anand-bagmar">Anand Bagmar</a>. The core point is that we use visual components instead of coding. E.g. we take the application source code and build a model of the UI, including screens, maybe some functionality, and their UI components. Something like this (the picture is taken from the original post linked above):
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-r589EAJrybk/VU_yKLdviHI/AAAAAAAAAt8/aDSVDY4oJG8/s1600/UDDSample.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-r589EAJrybk/VU_yKLdviHI/AAAAAAAAAt8/aDSVDY4oJG8/s1600/UDDSample.png" /></a></div>
Once we have such a representation, it can serve as a repository of test elements which we can drag &amp; drop into our test, selecting actions for them. The major idea is to express actual interaction with the UI using a pre-defined set of visual interaction patterns we can simply choose from a palette. Everything else is driven by the engine. Well, it still looks like coding, just a bit more visual.
</p>
<h2>Executable Requirements</h2>
<p>
I described this approach in the <a href="http://mkolisnyk.blogspot.co.uk/2014/12/automated-testing-moving-to-executable.html">Automated Testing: Moving to Executable Requirements</a> post. The idea is that since we can already generate automated tests based on a test scenario description (e.g. the way Cucumber does it), why don't we generate the test scenarios themselves based on some formal descriptions? In some cases it's pretty doable, as for typical input formats we should run a typical set of tests (positive, negative, boundary values). So, eventually, we do not write tests but define requirements and specifications in some formalized representation, which becomes the source for generating test scenarios, which in turn become the source for generating automated test cases. All we have to do is write the glue code implementation (similar to what we do for Cucumber or similar engines).
</p>
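<p>
The "typical set of tests for typical input formats" part is quite mechanical. For example, a numeric requirement such as "age must be between 18 and 65" can be expanded into boundary, typical, and negative values. A simplified sketch (the class and method names are made up for illustration):
<pre class="code">
import java.util.ArrayList;
import java.util.List;

public class ScenarioGenerator {

    // Derives the standard test values for a numeric field defined only by its range.
    public static List valuesFor(int min, int max) {
        List values = new ArrayList();
        values.add(min);             // lower boundary (positive)
        values.add(max);             // upper boundary (positive)
        values.add((min + max) / 2); // typical value (positive)
        values.add(min - 1);         // just below the range (negative)
        values.add(max + 1);         // just above the range (negative)
        return values;
    }

    public static void main(String[] args) {
        // Requirement: "age must be between 18 and 65"
        System.out.println(valuesFor(18, 65)); // prints [18, 65, 41, 17, 66]
    }
}
</pre>
A full engine would expand each generated value into a complete scenario (e.g. a Gherkin Examples row), but the derivation rule itself stays this simple.
</p>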
<p>
What advantages can we get? Well, the BDD approach (which is behind executable requirements) itself decreases coding effort due to the re-use of text instructions: the more we re-use, the more time we save on coding. Now imagine an even more compact structure which expands into the entire set of scenarios with all necessary data generated. All we need to do is describe the data item relationships and the basic flows for success and error conditions. Since it is done in plain text, it can be stored anywhere text can be read. Thus we can use some common documentation storage like Confluence and use this document as the requirements document. Also, this way we always get 100% coverage of requirements by tests and auto-tests.
</p>
<p>
I've started creating an engine that does this. More detailed documentation can be found on <a href="http://mkolisnyk.github.io/aerial/">the official documentation page</a>.
</p>
<h1>Who's gonna win?</h1>
<h2>Main trends observed</h2>
<p>
OK. These are just several approaches people consider to be the future of test automation. They are pretty different, but there are some things common to them all:
<ul>
<li> Test automation is expected to become more visual
<li> Test automation is expected to target non-technical people more
<li> Test automation is expected to be more aligned with the application under test, using it as the source for generating test resources
<li> In general, more abstractions are expected above the code level in test automation frameworks
</ul>
The future will show what's going to work and what's not. But we can observe how similar ideas played out in other areas to predict the future of test automation.
</p>
<h2>How does it work in the programming world?</h2>
<p>
Yes, the development world is the one we can use to predict the future of test automation. Since development is the core part of a software project, it is where the biggest investments are targeted. So, let's see how similar ideas played out in development, as a lot of experiments took place in that area as well.
</p>
<p>
Firstly, visualization. It's far from a new thing in the software development world; you can start with <a href="http://en.wikipedia.org/wiki/Visual_programming_language">this</a> to make sure. Take the list of software from that link, then go to any job site like <b>monster.com</b>, <b>indeed.com</b> or <b>LinkedIn</b> (whichever is more convenient to you) and try to find jobs related to those technologies. After that, compare it with the number of jobs for Java, C#, JavaScript, Python, C++. You'll see that such visual programming tools are far from the mainstream. Why does this happen? There are many reasons; just some of them are:
<ol>
<li> Visualization really works when it is domain-specific. Actually, it's not just about software development: music notation is another example of information visualization, where we graphically represent the sounds to be played. A similar thing happens in software development. We may have specialized software for making games, producing video or modelling physical processes, but try to apply the same software to other domain areas and you'll see it's really hard. It is generally hard to define a common visual programming language applicable to everything.
<li> Most visual diagrams look great and are convenient to process only when they describe something small. But what if the model becomes more complicated, e.g. like this:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-UgoppsIvMoA/VU_yUCO36qI/AAAAAAAAAuE/PIEONQ6Hx_0/s1600/BigModel.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-UgoppsIvMoA/VU_yUCO36qI/AAAAAAAAAuE/PIEONQ6Hx_0/s400/BigModel.png" /></a></div>
What if the system under test is even bigger than that?
<li> Sometimes people say "a picture is worth a thousand words". Yeah, cool. But how many of you can paint pictures such that many people understand all those "thousand words" you were going to say? Would the picture be more compact than the words? Well, if it is some structure/object description, then maybe yes. But sometimes words are better than pictures, especially when you write instructions. If there are any doubts, you can run an experiment: take this post, take the first thousand words of it, and express them as a picture containing a minimal number of words, let's say 50 (the limit is needed to avoid cheating; otherwise you could just take a screenshot and present it as the picture). Then send it to some other people and see whether they can recover the information from that picture.
<li> Since there is no common standard of graphical representation for everything, it's hard to perform activities such as refactoring or maintenance, as the system should be equally readable and understandable for anyone who works with it.
</ol>
So, generally speaking, visualization is used in some areas and some parts of development, but it is not as universal as programming code, at least at the moment. Maybe something will change in the future, but for now it is as I described. I have the same expectations for test automation: visualization will work in some areas, but it won't become the mainstream in the near future, at least not before the same trend takes hold in development.
</p>
<p>
The second item is targeting non-technical people. Development already went through that, and it was called <a href="http://en.wikipedia.org/wiki/Rapid_application_development">Rapid Application Development</a>. Tools supporting it still exist and are used quite effectively, but mainly they optimize routine parts like UI design; thus, UI creation is usually treated as a kind of "monkey job". Apart from the UI, however, there is a huge layer of business logic and various other levels which require advanced programming. And this is not something that's going to change soon across the entire development front, as here we again deal with domain specifics. Yes, maybe there will be (or already are) specialized wizards targeted at specific areas, but for now development still requires programming.
</p>
<p>
Another part of it is the question: <b>why do we need non-technical people to perform technical tasks?</b> Seriously, why should we target something at people who aren't equipped to do it by default? Being a technical person means knowing <b>HOW</b> to do things, not just <b>WHAT</b> to do. And this <b>HOW</b> includes not just how to express specific instructions/behaviour, but also how to do it effectively to fit style, architecture and performance requirements. There are many ways to do the same things, but only a few of them are really good.
</p>
<p>
OK, on to the next item: test automation is expected to be more aligned with the application under test. In this area things are pretty good in the development world: developers write tests for the code they create and use them to make sure nothing is broken by the next change. This keeps tests highly aligned with the application under test.
</p>
<p>
The same holds for additional abstractions above the code level in test automation frameworks. Yes, testing frameworks, even for unit tests, become more sophisticated, providing mocks and more advanced abilities to group and structure tests. Moreover, with the growing popularity of WebDriver and similar code-based engines, development and test automation can be truly aligned in terms of technological stack.
</p>
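<p>
To make the "mocks" point concrete, here is what mocking libraries essentially automate, shown as a hand-rolled sketch (the interface and class names are made up for illustration):
<pre class="code">
import java.util.ArrayList;
import java.util.List;

public class HandRolledMockDemo {

    // The collaborator we want to isolate in a unit test.
    interface MailSender {
        void send(String to, String body);
    }

    // A hand-written mock: records calls instead of sending real mail.
    static class MockMailSender implements MailSender {
        final List sentTo = new ArrayList();

        public void send(String to, String body) {
            sentTo.add(to);
        }
    }

    // The code under test uses the collaborator only through the interface.
    static void notifyUser(MailSender sender, String user) {
        sender.send(user, "Your build has finished");
    }

    public static void main(String[] args) {
        MockMailSender mock = new MockMailSender();
        notifyUser(mock, "dev@example.com");
        // The test can now assert on the recorded interaction.
        System.out.println(mock.sentTo);
    }
}
</pre>
Mocking frameworks generate such substitutes on the fly, which is exactly the kind of above-the-code abstraction discussed here.
</p>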
<p>
This is how it works in the development world. As we can see, a lot of technical enhancements turned out to be more damaging than useful. Maybe in some cases we simply haven't grown enough to make a serious shift, but for test automation it's a good indicator that they wouldn't work there either.
</p>
<h2>So who is the winner?</h2>
<p>
And now the moment of truth. What kinds of test automation approaches should we expect in the near future?
</p>
<p>
First of all, the test automation world isn't monolithic. There is code-level test automation, normally performed by developers, and system-level test automation, performed by dedicated QA/Test Automation/SDET teams. Of course, some project teams try to build a process where all roles are mixed, but to me it's just the opposite extreme of independent verification and testing, where development and testing teams are completely separated. The optimal distribution of activities lies somewhere in the middle. So, yes, close communication between the development and testing streams is still something that works (and I don't see any reason for it to stop working in the future), but there will always be areas where one stream has a monopoly. There should still be areas of responsibility: when responsibilities are shared between everyone, it eventually turns out that no one is really responsible, and that is bad. So, I still expect the development and testing streams to have their dedicated areas of test automation: developers mainly work on code-level tests while the testing stream works on system-level tests.
</p>
<p>Developer testing shouldn't go beyond the structured and data-driven approaches. There are several reasons for that:
<ul>
<li> Since developers already work with programming code, there's no need to switch to other technologies just to write more code.
<li> Any high-level abstractions provided by the keyword-driven approach do not really bring value for developers, as their tests aren't directly targeted at the requirements but rather at specific code. Additionally, developers have less bandwidth for testing activities, so they have to spend their time effectively and look for simpler solutions rather than play with complicated frameworks.
<li> Tests created by developers are not just targeted at verifying functionality; they also serve as a kind of working sample code. Developer tests are mainly needed to make sure that when we start using some code within other methods, it works properly; additionally, they let us test and debug it separately from the entire system.
</ul>
So, for code-level testing I expect essential changes only in case of programming paradigm changes (e.g. when a new paradigm is introduced and becomes the mainstream); in that case testing should reflect it. Also, some new approaches may be expected for relatively new but already popular technologies (like mobile, cloud computing etc.). There's a lack of a normal tool set for testing in this area, so it will be created/upgraded.
</p>
<p>
As for system-level testing (including UI), there's already a trend to use a technological stack similar to development's. Mainly this happens because the new automation tools which become mainstream and eat an essential portion of the vendors' market are based on APIs for mainstream programming languages. This happened with the Web, and it is happening with mobile and the back-end. Other areas won't stay apart, except maybe some niche technologies where vendors will still have their clear territory.
</p>
<p>
As for framework types, <b>what I don't really expect is the popularization of visual programming tools</b>. They didn't take off in development after many years, so why should they take off in testing, which simply runs a few years behind? And how are they going to work for non-UI testing, where in most cases using an API is more appropriate and more efficient?
</p>
<p>Also, <b>I don't expect people to give up on the keyword-driven approach</b>, as aligning requirements with tests is an essential and routine part of the work. Maybe BDD solutions like Cucumber won't take off properly; well, then something else will appear. The keyword-driven approach isn't new, and from time to time people use it in different forms for system-level tests. This is also reflected in the fact that QA and test automation roles tend to be merged into one, so that test automation is no longer a separate activity but rather another "gun" which can shoot. It means there will still be a need to express test instructions in some readable form, and for now the Gherkin standard looks like the most appropriate candidate. Thus, we bind test scenarios to automated tests. <b>After that, the next logical step seems to be binding requirements/specifications to the test scenarios, as their synchronization is still a routine and quite boring part of test automation</b>.
</p>
<p>
At the same time, <b>structured framework types are still valid options and still quite an optimal approach for test automation</b>. They won't disappear, but will keep playing an essential part in test automation projects.
</p>
<p>
Anyway, this is just another attempt to predict the future by extrapolating the present, which is not entirely reliable. Who knows? Maybe in a few years someone will invent another approach which will explode test automation and bring it to a completely new level. Will it be so, or will it be something different? Well, there's only one way to find out :-).
</p>
<h1>References</h1>
<p>
<ol>
<li> <a href="http://www.automationframework.info/">Resource dedicated to test automation frameworks</a>
<li> <a href="http://www.oracle.com/technetwork/articles/entarch/shrivastava-automated-frameworks-1692936.html">Automation Framework Architecture for Enterprise Products: Design and Development Strategy</a> by Anish Shrivastava
<li> <a href="http://itknowledgeexchange.techtarget.com/qa-processes/types-of-test-automation-frameworks/">Types of Test Automation Frameworks</a>
<li> <a href="http://www.slideshare.net/saucelabs/test-automation-framework-designs">Test Automation Framework Designs Presentation</a>
<li> <a href="http://www.softwaretestingclub.com/forum/topics/pros-and-cons-of-different-automation-frameworks">Pros and cons of different automation frameworks</a>
<li> <a href="http://3qilabs.com/types-of-automation-frameworks/">Types of Automation Frameworks</a> by Taylor Reifurth
<li> <a href="http://www.qualitiasoft.com/resources/the-evolution-of-test-automation-frameworks/">The Evolution of Test Automation Frameworks</a>
<li> <a href="http://www.cognifide.com/blogs/quality-assurance/evolution-of-test-automation-frameworks-master-chefs-choice-from-spaghetti-to-cucumber/">Evolution of test automation frameworks - Master Chef's choice: from spaghetti to cucumber</a>
<li> <a href="http://www.thoughtworks.com/insights/blog/future-test-automation-tools-infrastructure">Future of Test Automation Tools & Infrastructure</a> by Anand Bagmar
<li> <a href="http://www.softwaretestingnews.co.uk/the-evolution-of-test-automation-2/">The evolution of test automation</a> by Narayana Maruvada
<li> <a href="http://blog.valuelabs.com/qa/test-automation-need-and-evaluation-of-frameworks/">Need for Test Automation & Evolution of Frameworks</a>
<li> <a href="http://www.ntu.edu.sg/home/ehchua/programming/java/JavaUnitTesting.html">Java Unit Testing - JUnit & TestNG</a>
<li> <a href="http://www.slideshare.net/cygnetinfotech/test-automation-3#14312603112661&fbinitialized">SlideShare: Test Automation 3.0 - Evolution of Rapid Application Testing</a>
<li> <a href="http://mit.bme.hu/~micskeiz/pages/modelbased_testing.html">Model-based testing (MBT)</a> by Zoltan Micskei on <a href="http://mit.bme.hu/~micskeiz/indexe.html">Department of Measurement and Information Systems in Budapest University of Technology and Economics (BME) site</a>
<li> <a href="http://www.conformiq.com/model-based-testing/">Conformiq: Model-Based Testing</a>
<li> <a href="https://msdn.microsoft.com/en-gb/library/ee620469.aspx">Model-Based Testing</a>
<li> <a href="http://appmbt.blogspot.co.uk/2011/05/application-of-model-based-testing-to_27.html">Applied Model Based Testing - Experiences & Examples</a> by <a href="http://www.blogger.com/profile/06167662704783766614">Simon Ejsing</a>
</ol>
</p>
</body>
Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com51tag:blogger.com,1999:blog-2532302763215844416.post-21521229214934361602015-05-06T23:17:00.000+01:002015-07-17T20:29:40.515+01:00Problem Solved: Cucumber-JVM running actions before and after test execution with JUnit<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
td{border:1px solid silver}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>Problem Solved: Cucumber-JVM running actions before and after test execution with JUnit</title>
<meta charset="UTF-8">
<meta name="description" content="Cucumber-JVM running actions before and after test execution with JUnit">
<meta name="keywords" content="cucumber, beforeall, afterall, junit">
<meta name="author" content="Mykola Kolisnyk">
</head>
<body>
<h1>Background</h1>
<p>
It is a frequent case that we need to perform some actions before and/or after the execution of an entire test suite. Mainly, such actions are needed for global initialization/cleanup, additional reporting, or some other kind of pre/post-processing. There may be many reasons for that, and some test engines provide this ability: e.g. <a href="http://testng.org">TestNG</a> has the <a href="http://testng.org/javadocs/org/testng/annotations/BeforeSuite.html">BeforeSuite</a> and <a href="http://testng.org/javadocs/org/testng/annotations/AfterSuite.html">AfterSuite</a> annotations, and <a href="http://junit.org">JUnit</a> has <a href="https://github.com/junit-team/junit/wiki/Test-fixtures">test fixtures</a> which may run before/after a test class (it's not really the same, but when we use <a href="https://cucumber.io/docs/reference/jvm#java">Cucumber-JVM</a> it's very close to what we need).
</p>
<h1>Problem</h1>
<p>
The problem appears when you want to add actions at the very earliest or very latest stages of test execution and you use Cucumber-JVM with JUnit. In my case I wanted to add some report post-processing for <a href="http://mkolisnyk.blogspot.co.uk/2015/05/cucumber-jvm-advanced-reporting.html">advanced Cucumber report generation</a>. <a href="https://github.com/junit-team/junit/wiki/Test-fixtures">JUnit fixtures</a> didn't help here, as an <b>AfterClass</b>-annotated method runs before Cucumber generates its final reports.
</p>
<p>
At the same time, the question of <a href="https://github.com/cucumber/cucumber-jvm/issues/515">adding @BeforeAll and @AfterAll hooks</a> was raised on the Cucumber side as well, and a solution was even proposed. Unfortunately, the authors decided to <a href="https://github.com/cucumber/cucumber-jvm/pull/678">revert those changes</a>, as there were cases where it didn't work.
</p>
<p>
So, the problem is that I need something to run after the entire Cucumber-JVM suite is done, but neither Cucumber nor JUnit gives me a built-in capability for doing this.
</p>
<h1>Solution</h1>
<a name='more'></a>
<p>
JUnit itself gives us the ability to customize test runner classes. Actually, Cucumber runs JUnit tests via the <a href="https://github.com/cucumber/cucumber-jvm/blob/master/junit/src/main/java/cucumber/api/junit/Cucumber.java">Cucumber JUnit runner</a>, which is already customized. But we can extend Cucumber's capabilities by wrapping the Cucumber runner inside our own custom runner. That doesn't seem to be the smoothest solution, as ideally this functionality should be provided by Cucumber itself. But if a solution looks ugly while things get even worse without it, then the solution is no longer ugly. So, the problem is solved in three major steps:
<ol>
<li> Create BeforeSuite and AfterSuite annotations
<li> Create wrapper on Cucumber JUnit runner
<li> Use it
</ol>
So, let's look at each step in more detail.
</p>
<h2>Create BeforeSuite and AfterSuite annotations</h2>
<p>
The <b>@BeforeSuite</b> and <b>@AfterSuite</b> annotations are simply parameterless annotations which are applied to specific methods. So, their implementation may look like:
<pre class="code">
package org.sample.cucumber.annotations;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD })
public @interface AfterSuite {
}
</pre>
and the same for <b>@BeforeSuite</b>:
<pre class="code">
package org.sample.cucumber.annotations;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD })
public @interface BeforeSuite {
}
</pre>
So, now we can annotate some methods. Let's go to the next step.
</p>
<h2>Create wrapper on Cucumber JUnit runner</h2>
<p>
The custom runner implementation is based on several ideas:
<ul>
<li> The extended wrapper eventually runs the Cucumber runner itself
<li> In addition to invoking the Cucumber runner, we look for methods annotated with <b>@BeforeSuite</b> and <b>@AfterSuite</b>. If there are such methods, we run them before/after invoking the actual Cucumber runner. The sample implementation looks like:
</ul>
<pre class="code">
package org.sample.cucumber;

import java.lang.annotation.Annotation;
import java.lang.reflect.Method;

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.RunNotifier;
import org.sample.cucumber.annotations.AfterSuite;
import org.sample.cucumber.annotations.BeforeSuite;

import cucumber.api.junit.Cucumber;

public class ExtendedCucumberRunner extends Runner {

    private Class&lt;?&gt; clazz;
    private Cucumber cucumber;

    public ExtendedCucumberRunner(Class&lt;?&gt; clazzValue) throws Exception {
        clazz = clazzValue;
        cucumber = new Cucumber(clazzValue);
    }

    @Override
    public Description getDescription() {
        return cucumber.getDescription();
    }

    private void runPredefinedMethods(Class&lt;?&gt; annotation) throws Exception {
        if (!annotation.isAnnotation()) {
            return;
        }
        Method[] methodList = this.clazz.getMethods();
        for (Method method : methodList) {
            Annotation[] annotations = method.getAnnotations();
            for (Annotation item : annotations) {
                if (item.annotationType().equals(annotation)) {
                    <span class="mark">method.invoke(null);</span>
                    break;
                }
            }
        }
    }

    @Override
    public void run(RunNotifier notifier) {
        try {
            runPredefinedMethods(BeforeSuite.class);
        } catch (Exception e) {
            e.printStackTrace();
        }
        cucumber.run(notifier);
        try {
            runPredefinedMethods(AfterSuite.class);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
</pre>
Pay additional attention to the highlighted code: it doesn't use any object instance, which means the <b>@BeforeSuite</b> and <b>@AfterSuite</b> annotations must be applied to static methods in order to make things work. OK, we're almost there; here goes the last step.
</p>
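<p>
The reason lies in how reflection treats the target argument of <b>invoke</b>. A one-class demonstration (the class and method names are made up for illustration):
<pre class="code">
import java.lang.reflect.Method;

public class StaticInvokeDemo {

    public static void hello() {
        System.out.println("hello from a static method");
    }

    public static void main(String[] args) throws Exception {
        Method m = StaticInvokeDemo.class.getMethod("hello");
        // A null target works only because hello() is static;
        // for an instance method the call would fail without a real object.
        m.invoke(null);
    }
}
</pre>
This is exactly why the hook methods in the suite class below must be declared static.
</p>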
<h2>Use it</h2>
<p>
Now we can define our Cucumber test suite with the custom runner. In JUnit/Cucumber-JVM each test suite corresponds to a single JUnit class, so we can declare something like:
<pre class="code">
package org.sample.cucumber;

import org.junit.runner.RunWith;
import org.sample.cucumber.annotations.AfterSuite;
import org.sample.cucumber.annotations.BeforeSuite;
import org.sample.cucumber.ExtendedCucumberRunner;

import cucumber.api.CucumberOptions;

@CucumberOptions(
        plugin = {"html:target/cucumber-html-report",
                "json:target/cucumber.json",
                "pretty:target/cucumber-pretty.txt",
                "usage:target/cucumber-usage.json"
        },
        features = {"output/" },
        glue = {"org/sample/cucumber/glue" },
        tags = { }
)
<span class="mark">@RunWith(ExtendedCucumberRunner.class)</span>
public class SampleTestClass {

    <span class="mark">@BeforeSuite</span>
    public static void setUp() {
        // TODO: Add your pre-processing
    }

    <span class="mark">@AfterSuite</span>
    public static void tearDown() {
        // TODO: Add your post-processing
    }
}
</pre>
The highlighted parts are the places where we plug in our custom runner and annotations.
</p>
<p>
Generally, that's it! Now you have your own way to define pre/post-conditions which are performed at the very beginning and at the very end of the entire test suite, and which works for JUnit in combination with Cucumber-JVM.
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-pyXvM1XdMjU/VUqEz4Pzz9I/AAAAAAAAAso/7X1XNsLXk8o/s1600/problem-solved-puzzle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-pyXvM1XdMjU/VUqEz4Pzz9I/AAAAAAAAAso/7X1XNsLXk8o/s1600/problem-solved-puzzle.png" /></a></div>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com51tag:blogger.com,1999:blog-2532302763215844416.post-34910283420263683022015-05-03T18:08:00.001+01:002015-05-03T18:08:49.396+01:00Cucumber JVM: Advanced Reporting<html>
<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.passed {background-color:lightgreen;font-weight:bold;color:darkgreen}
.failed {background-color:tomato;font-weight:bold;color:darkred}
.undefined {background-color:gold;font-weight:bold;color:goldenrod}
</style>
<title>Cucumber JVM: Advanced Reporting</title>
</head>
<body>
<h1>Advanced Cucumber Reporting Introduction</h1>
<p>
<a href="https://github.com/cucumber/cucumber-jvm">Cucumber JVM</a> ships with a set of predefined reports available via the <b>plugin</b> option. By default we get raw reports. Some of them are ready to be shown to end users (e.g. the HTML report) while others, like the JSON reports (both usage and results), still require post-processing. Also, the standard HTML report isn't really good enough: it is (partially) suitable for analysis, but if we want to provide overview information we don't need that much detail, and we may want more visualized statistics rather than one plain table.
</p>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-aXHbB9nPPeI/VUZUwaKi8TI/AAAAAAAAAr0/yiXtd5WALPo/s1600/cucumber.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-aXHbB9nPPeI/VUZUwaKi8TI/AAAAAAAAAr0/yiXtd5WALPo/s200/cucumber.png" /></a></div>
</p>
<p>
Well, there are some existing solutions for such advanced reporting, e.g. <a href="http://www.masterthought.net/section/cucumber-reporting">Cucumber Reports by masterthought.net</a>. That's a really nice results reporting solution for Cucumber JVM. But when I tried to apply it, I ran into several problems with Maven dependency resolution and report processing, and I couldn't smoothly integrate it into my testing solution. That became the reason to look at something custom. Additionally, it covers only results reporting, while I'm also interested in step usage statistics to make sure we use our Cucumber steps effectively, and I didn't find a reporting solution covering that area appropriately for Cucumber JVM.
</p>
<p>
To be honest, I'm not a big fan of writing custom reporting solutions at all: for test automation engineers the existing information is more than enough in most cases. But if you need to provide something more specific, preferably in an e-mailable form to send to other members of the project team, you need something else. That's why I created components which generate Cucumber reports such as results and usage reports (see the sample screenshots below).
<table>
<tr><td><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-U6KQO9zdWTw/VUZU9JPoviI/AAAAAAAAAr4/C6komJ6Op1I/s1600/SampleResultsReport.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-U6KQO9zdWTw/VUZU9JPoviI/AAAAAAAAAr4/C6komJ6Op1I/s400/SampleResultsReport.png" /></a></div></td><td><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-2bzrFE8jcRY/VUZVEHD9wRI/AAAAAAAAAsA/lFS1NUUTWPY/s1600/SampleUsageReport.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-2bzrFE8jcRY/VUZVEHD9wRI/AAAAAAAAAsA/lFS1NUUTWPY/s400/SampleUsageReport.png" /></a></div></td></tr>
</table>
The above samples show e-mailable reports which mainly provide results overview information that can be sent via e-mail, as well as an additional HTML report summarizing usage statistics. In this post I'll describe my reporting solution with examples and detailed explanations.
</p>
<a name='more'></a>
<h1>Cucumber JVM Advanced Reporting Setup</h1>
<p>
The Cucumber reporting solution is provided as a Java library which can be included like any other dependency: via Jar, Maven, Gradle or any other mechanism. Once the library is included, it provides an API which converts basic Cucumber JVM reports (mainly JSON-based) into HTML reports. Now let's go into more detail.
</p>
<h2>Add Java Dependency</h2>
<p>
As mentioned before, my custom reporting solution is provided as a library which can be included via Maven:
<pre class="code">
<dependency>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>cucumber-reports</artifactId>
<version>0.0.2</version>
</dependency>
</pre>
or the same thing for Gradle:
<pre class="code">
'com.github.mkolisnyk:cucumber-reports:0.0.2'
</pre>
or simply download the Jar <a href="http://central.maven.org/maven2/com/github/mkolisnyk/cucumber-reports/0.0.2/cucumber-reports-0.0.2.jar">here</a> (depending on the version the URL can differ). After that the API is accessible within the project and we can write the code which generates the reports.
</p>
<h2>Report Generation Samples</h2>
<p>
Report generation is triggered from Java code. It is assumed that the base JSON reports have already been generated by Cucumber JVM, so we just have to point to the existing files. In real life this API should be called as part of hooks or similar code blocks which are executed after the entire test suite has completed.
</p>
<p>
The following code sample generates the results overview report based on the Cucumber JSON report stored at <b>./src/test/resources/cucumber.json</b>. The output directory is <b>target</b> and the file prefix is <b>cucumber-results</b>. The code is:
<pre class="code">
CucumberResultsOverview results = new CucumberResultsOverview();
results.setOutputDirectory("target");
results.setOutputName("cucumber-results");
results.setSourceFile("./src/test/resources/cucumber.json");
results.executeFeaturesOverviewReport();
</pre>
As a result of this code, the <b>target/cucumber-results-feature-overview.html</b> file is generated.
</p>
<p>
The Cucumber usage report works in the same fashion except that it consumes the report produced by the <b>usage</b> Cucumber JVM plugin. The following example generates the usage report into the <b>target</b> folder based on the JSON usage report generated by Cucumber and located at <b>./src/test/resources/cucumber-usage.json</b>:
<pre class="code">
CucumberUsageReporting report = new CucumberUsageReporting();
report.setOutputDirectory("target");
report.setJsonUsageFile("./src/test/resources/cucumber-usage.json");
report.executeReport();
</pre>
The output is an HTML file located at <b>target/cucumber-usage-report.html</b>.
</p>
<p>
This is how those reports are generated. Now let's take a look at what exactly is generated.
</p>
<h1>Cucumber HTML Results Report</h1>
<p>
The Cucumber results report contains 3 major sections:
<ul>
<li> <b>Overview Chart</b> - contains pie charts showing the ratio of passed/failed features and scenarios. A scenario is considered failed when it has failed steps; otherwise, if a scenario has undefined steps (without any failed ones), the scenario status is <b>undefined</b>; in all other cases the scenario is treated as passed. Feature status uses similar logic, but here the elementary unit is the scenario: if a feature contains at least one failed scenario, it is treated as failed; if no failures occurred but there are some undefined scenarios, the feature is undefined; otherwise it is treated as passed. Eventually, the overview chart looks like this:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-uemQyyVl_fQ/VUZVLDrp8qI/AAAAAAAAAsI/Nbp3mCD7Qd4/s1600/ResultsOverviewChart.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-uemQyyVl_fQ/VUZVLDrp8qI/AAAAAAAAAsI/Nbp3mCD7Qd4/s400/ResultsOverviewChart.png" /></a></div>
<li> <b>Features Status</b> - a table containing the list of features by name together with scenario run statistics. It shows the number of passed, failed and undefined scenarios for each specific feature. Here is a sample feature overview table:
<table><tr><th>Feature Name</th><th>Status</th><th>Passed</th><th>Failed</th><th>Undefined</th></tr><tr class="passed"><td>Live Departure Board</td><td>passed</td><td>6</td><td>0</td><td>0</td></tr><tr class="failed"><td>Payments</td><td>failed</td><td>6</td><td>1</td><td>0</td></tr><tr class="passed"><td>Application start up</td><td>passed</td><td>2</td><td>0</td><td>0</td></tr><tr class="passed"><td>Journey options feature</td><td>passed</td><td>3</td><td>0</td><td>0</td></tr><tr class="failed"><td>RTJP - Seats left</td><td>failed</td><td>1</td><td>1</td><td>0</td></tr><tr class="passed"><td>Rename 2 singles to cheapest tickets</td><td>passed</td><td>2</td><td>0</td><td>0</td></tr><tr class="failed"><td>Journey Search</td><td>failed</td><td>27</td><td>4</td><td>0</td></tr></table>
<li> <b>Scenario Status</b> - a more detailed breakdown where features are also split into scenarios. The table contains the number of passed, failed and undefined steps for each specific scenario. A sample fragment of the table looks like:
<table><tr><th>Feature Name</th><th>Scenario</th><th>Status</th><th>Passed</th><th>Failed</th><th>Undefined</th></tr><tr class="passed"><td>Live Departure Board</td><td>Make a search through Live Departure Board</td><td>passed</td><td>9</td><td>0</td><td>0</td></tr><tr class="passed"><td>Live Departure Board</td><td>Make a search through Live Departure Board</td><td>passed</td><td>9</td><td>0</td><td>0</td></tr><tr class="passed"><td>Live Departure Board</td><td>Make a search through Live Departure Board</td><td>passed</td><td>9</td><td>0</td><td>0</td></tr><tr class="passed"><td>Live Departure Board</td><td>Make a search through Live Departure Board</td><td>passed</td><td>9</td><td>0</td><td>0</td></tr><tr class="passed"><td>Live Departure Board</td><td>No direct route</td><td>passed</td><td>8</td><td>0</td><td>0</td></tr><tr class="passed"><td>Live Departure Board</td><td>Recent searches LDB list</td><td>passed</td><td>7</td><td>0</td><td>0</td></tr>
</table>
</ul>
Since this format is designed to be an e-mailable report, there is no need to add a step-level breakdown: it would make the report too big, and the actual step breakdown can be taken from the standard Cucumber JVM HTML output.
</p>
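The status rules above reduce to a small rollup function. The sketch below is just an illustration of that logic; the class and method names are mine, not part of the library's API.

```java
public class StatusRollup {
    // Status of one scenario (or feature) derived from the counts of its
    // failed/undefined children, per the rules described above:
    // any failure => failed; otherwise any undefined => undefined; else passed.
    public static String status(int failed, int undefined) {
        if (failed > 0) {
            return "failed";
        }
        if (undefined > 0) {
            return "undefined";
        }
        return "passed";
    }

    public static void main(String[] args) {
        System.out.println(status(1, 0)); // failed
        System.out.println(status(0, 2)); // undefined
        System.out.println(status(0, 0)); // passed
    }
}
```

For a scenario the inputs are step counts; for a feature they are scenario counts, which is why a single failed scenario flips the whole feature to failed.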
<h1>Cucumber HTML Usage Report</h1>
<p>
This report is more specific: it is mainly targeted at analyzing how effectively we use our Cucumber keywords, and it provides the major information about their use. This report contains 2 major sections:
<ul>
<li> <b>Usage Overview</b> - contains a graph and a table summarizing general keyword usage statistics
<li> <b>Usage Detailed Information</b> - contains a set of tables showing where and how each specific keyword is used
</ul>
Now let's describe both sections in more detail.
</p>
<h2>Usage Overview</h2>
<p>
Usage overview contains 2 parts:
<ul>
<li> <b>Cucumber Keywords Usage Graph</b> - a graphical representation of overall usage for all the keywords. A sample usage graph looks like:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-YARvNqIYrhM/VUZVX7D73CI/AAAAAAAAAsQ/YJGsHvkG2WI/s1600/UsageOverviewGraph.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-YARvNqIYrhM/VUZVX7D73CI/AAAAAAAAAsQ/YJGsHvkG2WI/s400/UsageOverviewGraph.png" /></a></div>
This is the major item of the entire report: usually viewing it alone is enough to get an idea of how good we are at step re-use. What exactly does it contain? The X-axis represents the re-use count, and the Y-axis shows how many steps are re-used X times. So, if we see a point at X = 28 and Y = 2, it indicates that we have 2 keywords which are re-used 28 times within the test suite.
<p>
Other major items on this graph are the average and median numbers (highlighted with red and yellow lines respectively). The <b>average</b> is the mathematical mean: it indicates the average step re-use count across the entire set of steps. Since it is a mean, it doesn't reflect the statistical distribution of the steps. A high average may be reached even when most steps are used just once but 1 or 2 steps are re-used a large number of times. In that case we mostly don't re-use steps, yet our numbers still look good; the information is distorted and we may draw wrong conclusions during analysis.
</p>
<p>
To give a more precise picture of the re-use count which also takes into account the actual distribution of the data, the graph also shows the <b>median</b> value. It indicates that at least 50% of the steps are re-used no more than the median number of times. Thus, given a huge number of steps used a single time, the median will still be 1 even if a few steps are re-used hundreds of times, while the simple mathematical average shows a better-looking number.
</p>
<p>
When we analyze the above values, we should make sure both of them are greater than 2. That means on average each step is re-used at least once without the need to spend time writing its implementation again, which already makes the Gherkin layer pay off. If the above numbers are bigger, it's even better.
</p>
<p>
Finally, we have the <b>Re-use ratio</b>, displayed as a big number on the graph. A ratio of %N indicates that %N of the steps in the current test suite were written without implementing any new glue code. In other words, %N of all steps in the suite were effectively automated already at the test design stage. So, the bigger this ratio, the more effectively we use our steps.
</p>
<li> <b>Cucumber Keywords Usage Table</b> - shows the same data as the graph above, but grouped and represented in tabular form with precise numbers. It is needed because the graph has restrictions in displaying all the numbers, so in some cases we cannot read the precise re-use count from it; the table provides that information.
</ul>
</p>
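As a sketch of the arithmetic behind these overview numbers, the snippet below computes the average, median and re-use ratio from a hypothetical array of per-step re-use counts (the library derives them from the usage JSON; the class and method names here are illustrative only, and the re-use ratio formula is my reading of the description above). Note how a single heavily re-used step inflates the average while the median stays honest:

```java
import java.util.Arrays;

public class UsageStats {
    // counts[i] = how many times step i is used in the suite
    public static double average(int[] counts) {
        return Arrays.stream(counts).average().orElse(0);
    }

    // Median: at least half of the steps are used no more than this many times.
    public static double median(int[] counts) {
        int[] sorted = counts.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    // Re-use ratio: share of step invocations that did not require writing
    // new glue code, i.e. every use of a step beyond its first one.
    public static double reuseRatio(int[] counts) {
        long total = Arrays.stream(counts).sum();
        return 100.0 * (total - counts.length) / total;
    }

    public static void main(String[] args) {
        // Nine steps used once plus one step used 28 times: the average (3.7)
        // looks healthy while the median (1.0) exposes poor re-use.
        int[] counts = {1, 1, 1, 1, 1, 1, 1, 1, 1, 28};
        System.out.println(average(counts));
        System.out.println(median(counts));
        System.out.println(reuseRatio(counts));
    }
}
```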
<h2>Usage Detailed Information</h2>
<p>
The detailed information part shows each keyword's usage with all the variations of parameters the keyword is used with, e.g.:
<h3>^(?:I |)click on the "([^"]*)" button$</h3>
<table><tr><th>Step Name</th><th>Duration</th><th>Location</th></tr><tr><td>I click on the "OK" button</td><td> - </td><td> - </td></tr><tr><td></td><td>2.858510898</td><td>LiveTrainTimes.feature:48</td></tr><tr><td>I click on the "Skip" button</td><td> - </td><td> - </td></tr><tr><td></td><td>1.862363906</td><td>Startup.feature:7</td></tr><tr><td>I click on the "Login" button</td><td> - </td><td> - </td></tr><tr><td></td><td>2.242663886</td><td>Startup.feature:7</td></tr><tr><td>I click on the "Buy Tickets" button</td><td> - </td><td> - </td></tr><tr><td></td><td>19.839095216</td><td>booking/JourneyOptions.feature:12</td></tr><tr><td></td><td>21.44855246</td><td>booking/JourneyOptions.feature:30</td></tr><tr><td></td><td>20.263704459</td><td>booking/JourneyOptions.feature:87</td></tr><tr><td></td><td>22.396167947</td><td>journeysearch/BON-80_RTJP_SeatsLeft.feature:14</td></tr><tr><td></td><td>20.97751731</td><td>journeysearch/BON-88_SingleOptionsForJourney.feature:15</td></tr><tr><td></td><td>21.321668302</td><td>journeysearch/BON-88_SingleOptionsForJourney.feature:79</td></tr><tr><td></td><td>21.294036775</td><td>journeysearch/BasicJourneySearch.feature:7</td></tr><tr><td></td><td>19.723224101</td><td>journeysearch/BasicJourneySearch.feature:22</td></tr><tr><td></td><td>21.087017738</td><td>journeysearch/BasicJourneySearch.feature:221</td></tr><tr><td>I click on the "Earlier Trains" button</td><td> - </td><td> - </td></tr><tr><td></td><td>28.285814568</td><td>booking/JourneyOptions.feature:18</td></tr><tr><td>I click on the "Later Trains" button</td><td> - </td><td> - </td></tr><tr><td></td><td>18.404438358</td><td>booking/JourneyOptions.feature:36</td></tr><tr><td>I click on the "First Class" button</td><td> - </td><td> - </td></tr><tr><td></td><td>0.815324731</td><td>booking/JourneyOptions.feature:94</td></tr><tr><td>click on the "Buy Tickets" button</td><td> - </td><td> - 
</td></tr><tr><td></td><td>29.302681774</td><td>journeysearch/BasicJourneySearch.feature:231</td></tr></table>
Here we can identify which parts of the keyword actually vary and how many different variations are used. Based on that information we can identify which regular expressions can be optimized or merged in order to make keyword usage more effective.
</p>
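The keyword shown above is an ordinary Java regular expression; the optional (?:I |) group is what lets both the "I click ..." and the bare "click ..." rows in the table bind to the same step definition. A quick check:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class KeywordRegexDemo {
    public static void main(String[] args) {
        // The step definition pattern from the usage report above.
        Pattern step = Pattern.compile("^(?:I |)click on the \"([^\"]*)\" button$");

        for (String line : new String[] {
                "I click on the \"Buy Tickets\" button",
                "click on the \"Buy Tickets\" button"}) {
            Matcher m = step.matcher(line);
            if (m.matches()) {
                // The single capturing group carries the only varying part.
                System.out.println(m.group(1)); // Buy Tickets
            }
        }
    }
}
```

Counting the distinct values the capturing group takes is exactly the "variations" breakdown the report shows per keyword.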
<h1>Summary</h1>
<p>
The above reports aren't the complete set of reports which can be built on top of Cucumber JVM output, but they are the major ones I need at the moment. And again, this project is still in development, so more reports based on the standard output formats may appear.
</p>
</body>
</html>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com215tag:blogger.com,1999:blog-2532302763215844416.post-48988133024197332422015-02-22T18:55:00.000+00:002015-02-22T18:55:31.156+00:00Aerial: Create New Project Using Archetype<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
td{border:1px solid silver}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>Aerial: Create New Project Using Archetype</title>
</head>
<body>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-a4HEXtcV5GY/VLwyKBubRAI/AAAAAAAAAo0/HatDltGYw14/s1600/AerialSide001.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-a4HEXtcV5GY/VLwyKBubRAI/AAAAAAAAAo0/HatDltGYw14/s320/AerialSide001.png" /></a></div>
Since version <a href="http://mkolisnyk.github.io/aerial/releases#v005">0.0.5</a>, <a href="http://mkolisnyk.github.io/aerial">Aerial</a> has the ability to generate Java projects from archetypes, so you can get a working sample project quickly. In this post I'll describe the steps for doing this.
</p>
<a name='more'></a>
<h1>Overview</h1>
<p>
The entire list of Aerial archetypes can be found <a href="http://mkolisnyk.github.io/aerial/features#maven-archetypes">here</a>. In this post I'll use the <b>aerial-cucumber-junit-archetype</b> archetype, but the other archetypes are used in the same manner. Also, I'll use Eclipse as the IDE, but other similar IDEs operate in the same fashion. Project generation goes through the following steps:
<ul>
<li> Creating a new Maven project
<li> Selecting the archetype
<li> Making archetype settings
<li> Triggering generation and viewing the generated project
</ul>
</p>
<h1>Creating New Maven Project</h1>
<p>
Right-click on the left panel and select <b>New > Project</b>:
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-vVSE-GLNqGQ/VOoh38LRCoI/AAAAAAAAAqM/8_m6TXmiu5s/s1600/archetype_step01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-vVSE-GLNqGQ/VOoh38LRCoI/AAAAAAAAAqM/8_m6TXmiu5s/s320/archetype_step01.png" /></a></div>
From the list of available project types select <b>Maven Project</b>:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-LpXKXrrpHwQ/VOoh_OIpDYI/AAAAAAAAAqU/26nBBPAzjrU/s1600/archetype_step02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-LpXKXrrpHwQ/VOoh_OIpDYI/AAAAAAAAAqU/26nBBPAzjrU/s320/archetype_step02.png" /></a></div>
In the next dialog, make sure the <b>Create Simple Project</b> check box is unchecked:
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-9pZ5JSNXEwg/VOoiFbga64I/AAAAAAAAAqc/kXeAe-wM-hc/s1600/archetype_step03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-9pZ5JSNXEwg/VOoiFbga64I/AAAAAAAAAqc/kXeAe-wM-hc/s320/archetype_step03.png" /></a></div>
Then click on <b>Next</b> button.
</p>
<h1>Selecting Archetype</h1>
<p>
Find the archetype in the list of available archetypes, as shown in the picture below:
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-GDhmj-zLTCE/VOoiKqF1wKI/AAAAAAAAAqk/YygfuLdxlcU/s1600/archetype_step04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-GDhmj-zLTCE/VOoiKqF1wKI/AAAAAAAAAqk/YygfuLdxlcU/s320/archetype_step04.png" /></a></div>
Then click on <b>Next</b> button.
</p>
<h1>Make Archetype Settings</h1>
<p>
In the Maven Archetype Parameters form you can customize the parameters. Mainly you should fill in <b>groupId</b>, <b>artifactId</b>, <b>package</b> and some additional parameters similar to the ones shown in the picture below:
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-J_EDi9ZuANU/VOoiQ_zBgLI/AAAAAAAAAqs/edO2hunw7rM/s1600/archetype_step05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-J_EDi9ZuANU/VOoiQ_zBgLI/AAAAAAAAAqs/edO2hunw7rM/s320/archetype_step05.png" /></a></div>
Then click on <b>Finish</b> button.
</p>
<h1>View Generated Project Structure</h1>
<p>
After that the project is generated and you will see a structure like this:
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-XGYCA6TEeM4/VOoiVjQGPbI/AAAAAAAAAq0/wEIXvchcLX8/s1600/archetype_step06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-XGYCA6TEeM4/VOoiVjQGPbI/AAAAAAAAAq0/wEIXvchcLX8/s320/archetype_step06.png" /></a></div>
</p>
<h1>Related Demo Video</h1>
<p>
The above steps with more detailed explanations can be found in the video demo:
<iframe width="420" height="315" src="https://www.youtube.com/embed/YR87Fs3BZeU" frameborder="0" allowfullscreen></iframe>
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-82280473668821448282015-02-18T21:30:00.000+00:002015-02-18T21:30:01.427+00:00Problem Solved: run TestNG test from JUnit using Maven<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
td{border:1px solid silver}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>Problem Solved: Run TestNG Tests from JUnit Using Maven</title>
<meta charset="UTF-8">
<meta name="description" content="Problem Solved: run TestNG test from JUnit using Maven">
<meta name="keywords" content="testng,junit,maven,surefire,java,test automation">
<meta name="author" content="Mykola Kolisnyk">
</head>
<body>
<h1>Background</h1>
<p>
Recently I was developing an application component which was supposed to run with TestNG; actually, it was another extension of a TestNG test. So, in order to test that functionality I had to emulate a TestNG suite run within the test body. Performing a TestNG run programmatically wasn't a problem. It can be done with code like:
<pre class="code">
import org.testng.TestListenerAdapter;
import org.testng.TestNG;
.......
// Collects the results of the programmatic run
TestListenerAdapter tla = new TestListenerAdapter();
TestNG testng = new TestNG();
// Point the embedded TestNG instance at the suite class under test
testng.setTestClasses(new Class[] {SomeTestClass.class});
testng.addListener(tla);
testng.run();
.......
</pre>
where <b>SomeTestClass</b> is an existing TestNG class. This can even be done from JUnit (which was my case, as I mainly used JUnit for the test suite). So, technically TestNG can be executed from JUnit tests and vice versa.
</p>
<h1>Problem</h1>
<p>
The problem appeared when I tried to run, via Maven, a JUnit test that performs a TestNG run. Normally tests are picked up by the <b>surefire</b> plugin, which is included in the Maven pom.xml file with an entry like this:
<pre class="code">
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.18.1</version>
</plugin>
</pre>
If you use only JUnit, it picks up JUnit tests. In my case I also used JUnit for running the test suites, but one test required a TestNG test class (actually a class with the TestNG @Test annotation), so I had to add the TestNG Maven dependency as well. With both frameworks on the classpath, only the TestNG class was executed during the test run. So, how do we let Maven know that we want to run exactly the JUnit tests, not the TestNG ones, while both are present within the same Maven project?
</p>
<a name='more'></a>
<h1>Solution</h1>
<p>
Actually, the answer can be found <a href="http://maven.apache.org/surefire/maven-surefire-plugin/examples/junit.html#Manually_Specifying_a_Provider">here</a>. All you have to do is define explicitly that you are using JUnit. So, the above pom.xml fragment should be updated like this:
<pre class="code">
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.18.1</version>
<span class="mark"><dependencies>
<dependency>
<groupId>org.apache.maven.surefire</groupId>
<artifactId>surefire-junit47</artifactId>
<version>2.18.1</version>
</dependency>
</dependencies></span>
</plugin>
</pre>
After that the surefire plugin will run only the JUnit tests. Problem solved.
</p>
<p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-7H5UJroacKk/VOT-Z0oEXKI/AAAAAAAAAp8/oQvDTnrmgMg/s1600/Problem-Solved-Stamp-300x215.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-7H5UJroacKk/VOT-Z0oEXKI/AAAAAAAAAp8/oQvDTnrmgMg/s1600/Problem-Solved-Stamp-300x215.png" /></a></div>
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-23148417496921902252015-02-09T02:35:00.000+00:002015-02-10T12:10:27.147+00:00NBehave vs SpecFlow Comparison<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
td{border:1px solid silver}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>NBehave vs SpecFlow Comparison</title>
</head>
<body>
<p>
It's always good when you use a technology and have a choice between various tools/engines. But sometimes that choice itself becomes a problem, as it happens with <a href="http://mkolisnyk.blogspot.com/2012/06/gherkin-bdd-engines-comparison.html">BDD engines</a>, especially when we have to choose between widely used engines which at first glance seem to be identical.
Some time ago I made a <a href="http://mkolisnyk.blogspot.com/2013/03/jbehave-vs-cucumber-jvm-comparison.html">JBehave vs Cucumber-JVM comparison</a> to spot the differences and comparative characteristics of the most evolved engines in the Java world. As I can see from the page view statistics, it's quite an interesting topic. At the same time there is the .NET platform, which has another set of BDD engines, and they are quite popular as well. So, in this world we may encounter a question like: <b>what is better, <a href="http://nbehave.org">NBehave</a> or <a href="http://specflow.org">SpecFlow</a>?</b>
</p>
<p>
The answer isn't trivial. When I did the <a href="http://mkolisnyk.blogspot.com/2012/06/gherkin-bdd-engines-comparison.html">cross-platform BDD engines comparison</a> almost 3 years ago, some of the engines weren't mature enough, or at least their documentation was poor. At that time <a href="http://nbehave.org">NBehave</a> didn't look good. But a lot has changed since then, and both <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> have since turned into full-featured Gherkin interpreter engines. So the choice of the better tool among them isn't trivial anymore, which means it's time for a new comparison between <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a>.
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-JNh0g9WgEq0/VNgb33In2BI/AAAAAAAAApg/nP3dUl4SAwI/s1600/scorpionvssub_zero_small.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-JNh0g9WgEq0/VNgb33In2BI/AAAAAAAAApg/nP3dUl4SAwI/s400/scorpionvssub_zero_small.jpg" /></a></div>
So, let's find out who's the winner in this battle!
</p>
<a name='more'></a>
<h1>What will be compared. Calculation methodology</h1>
<p>
Again, we'll use the same comparison scale as before. We'll take a common group of characteristics and set a grade showing how well each engine performs in each area.
</p>
<p>
The following areas/features will be covered:
<ul>
<li>Documentation
<li>Flexibility in passing parameters
<li>Data sharing between steps
<li>Auto-complete
<li>Scoping
<li>Composite steps
<li>Backgrounds and Hooks
<li>Binding to code
<li>Formatting flexibility
<li>Built-in reports
<li>Conformity to Gherkin standards
<li>Input Data Sources
</ul>
We'll keep using a 4-grade scale to evaluate each feature. The grades are classified in the following way:
<table>
<tr><th>Grade</th><th>Criteria</th></tr>
<tr><td bgcolor=silver>0</td><td>Feature is not represented</td></tr>
<tr><td bgcolor=red>1</td><td>Feature is partially available with some restrictions</td></tr>
<tr><td bgcolor=yellow>2</td><td>The feature mainly exists but without any extra options</td></tr>
<tr><td bgcolor=green>3</td><td>Full-featured support</td></tr>
</table>
Grades like these are still quite loose and can be subjective. To compensate, we'll split each area into specific features and check which of them exist or are missing. Each positive case (the feature is present) will be treated as one score item. The overall number of score items will be divided into 3 bands, each corresponding to a grade. Based on this joint mark we'll be able to estimate each feature grade more or less objectively, with specific evidence. Thus, our comparison should stay valid and valuable.
</p>
<h1>What would NOT be compared</h1>
<p>
Additionally, there are some things which I wouldn't use for comparison. They are:
<ul>
<li> Technology support - some engines may run with Mono or something else
<li> IDE integration - although it's still important, it is excluded as it's hard to make a comparative measure here
<li> Test engines support - both BDD tools may run with different test frameworks
<li> Additional languages support - .NET is just a platform to be used by multiple programming languages
</ul>
Why are the above items excluded? Throughout this comparison I'm trying to use metrics and characteristics which can be extended to similar engines for other programming languages. Thus, I'm trying not to be too technology-specific.
</p>
<p>
So, when you refer to or use this comparison, please don't forget that the above items were not taken into account. For some people they can be essential. And most importantly, <b>don't forget to read the system requirements</b>.
</p>
<h1>Documentation</h1>
<p>
The documentation is the first entry point when trying to use any library, tool or other piece of software. So, we'll start with it here as well. While evaluating this feature we'll take a look at the following items:
<ul>
<li> <b>Official site</b> - every more or less formed solution should have an official site as an entry point for new people. Otherwise it would be hard to find it among the myriads of pages in the Internet space
<li> <b>Getting started guide</b> - when you're new to an engine there should be a dedicated page to start from. Basically it contains step-by-step instructions for creating a first sample with the engine or tool. Having such a page is a big plus for the documentation
<li> <b>General Usage guide</b> - indeed, for many different reasons we have to refer to the manual for some specific feature or complex action. So, it's great to have a consolidated set of pages for reference
<li> <b>API documentation</b> - since both engines are used via an API, it would be nice to have them documented on the API level as well
<li> <b>Code examples</b> - another reference point demonstrating specific feature capabilities. In most cases a single live sample is much better than tons of documentation pages
<li> <b>Specialized forums</b> - every mature solution gathers people around it. They form communities and discussion forums. Such resources are very useful for discussing specific cases or non-standard tricks, which come up frequently but are almost always problem-specific
<li> <b>Related blog posts</b> - another helpful type of documentation is blog posts. They indicate that information isn't confined to the official site: examples, tips and tricks can also be found on pages written by people who aren't the developers of the engine itself
<li> <b>Features documentation completeness</b> - in this post we'll make an overview of various features, so another good indicator is how completely those features are documented. Every feature should have a page describing it
</ul>
</p>
<p>
<table cellspacing=0 cellpadding=0 margin=0>
<tr><th>Feature</th><th>Available in <br>NBehave</th><th>Evidence</th><th>Available in <br>SpecFlow</th><th>Evidence</th></tr>
<tr><td><b>Official site</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://nbehave.org">NBehave Official Site</a></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://specflow.org">SpecFlow Official Site</a></td></tr>
<tr><td><b>Getting started guide</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Getting%20Started">Getting Started with NBehave</a></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://www.specflow.org/getting-started/">Getting Started With SpecFlow</a></td></tr>
<tr><td><b>General Usage guide</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Documentation">NBehave Documentation</a></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://www.specflow.org/documentation/">SpecFlow Documentation</a></td></tr>
<tr><td><b>API documentation</b></td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td>No API docs available</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td>No API docs available</td></tr>
<tr><td><b>Code examples</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/tree/master/src/NBehave.Examples">NBehave Examples on GitHub</a></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/techtalk/SpecFlow-Examples">SpecFlow GitHub Examples</a></td></tr>
<tr><td><b>Specialized forums</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><ul><li> <a href="https://nbehave.codeplex.com/discussions">NBehave discussions on CodePlex</a></ul></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><ul><li> <a href="http://groups.google.com/group/specflow">SpecFlow discussions on Google Groups</a> <li> <a href="http://www.linkedin.com/groups/SpecFlow-Pragmatic-BDD-NET-3804529?goback=%2Eanp_3804529_1343821406271_1">SpecFlow LinkedIn group</a>
<li> <a href="https://www.linkedin.com/groups?gid=4899151&trk=vsrp_groups_res_name&trkInfo=VSRPsearchId%3A490885951423404562765%2CVSRPtargetId%3A4899151%2CVSRPcmpt%3Aprimary">Columbus ATDD Developers Group</a> (not 100% relevant but close) </ul></td></tr>
<tr><td><b>Related blog posts</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td>
<td>
<ul>
<li> <a href="http://www.codeproject.com/Articles/32512/Behavior-Driven-Development-with-NBehave">Behavior-Driven Development with NBehave</a> by Dmitri Nesteruk
<li> <a href="http://vadimdev.blogspot.co.uk/2011/04/getting-started-with-bdd-using-nbehave.html">Getting started with BDD using NBehave</a>
<li> <a href="http://lostechies.com/jimmybogard/2007/09/04/authoring-stories-with-nbehave-0-3/">Authoring stories with NBehave 0.3</a> by Jimmy Bogard
<li> <a href="http://lostechies.com/joeocampo/2008/02/27/updates-to-nbehave/">Updates to NBehave</a> by Joe Ocampo
<li> <a href="https://russelleast.wordpress.com/tag/nbehave/">NBehave related posts</a> on <a href="https://russelleast.wordpress.com/">Russel East's blog</a>
<li> <a href="http://geekswithblogs.net/Podwysocki/archive/2007/12/05/117377.aspx">Behavior Driven Development (BDD) and NBehave</a> by Matthew Podwysocki
<li> <a href="http://ravichandranjv.blogspot.co.uk/2012/11/bdd-with-nbehave-starter-example.html">BDD with NBehave starter example</a>
<li> <a href="http://blog.khedan.com/2009/11/putting-it-together-series-part-1.html">Putting it together series – Part 1: Testing Framework (NBehave, Rhino.Mocks)</a> by Naeem Khedarun
<li> <a href="http://frankdecaire.blogspot.co.uk/2013/02/nbehave.html">NBehave</a> by Frank DeCaire
</ul>
</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td>
<td>
<ul>
<li> <a href="http://blog.stevensanderson.com/2010/03/03/behavior-driven-development-bdd-with-specflow-and-aspnet-mvc/">Behavior Driven Development (BDD) with SpecFlow and ASP.NET MVC</a> from <a href="http://blog.stevensanderson.com/">Steven Sanderson's blog</a>
<li> <a href="https://msdn.microsoft.com/en-us/magazine/gg490346.aspx">Behavior-Driven Development with SpecFlow and WatiN</a> by <a href="https://msdn.microsoft.com/magazine/ee532098.aspx?sdmr=BrandonSatrom&sdmi=authors">Brandon Satrom</a> on MSDN Magazine
<li> <a href="http://claysnow.co.uk/using-specflow-on-mono-from-the-command-line/">Using SpecFlow on Mono from the command line</a> by <a href="http://claysnow.co.uk/author/sebrose/">Seb Rose</a> from <a href="http://claysnow.co.uk">Claysnow</a>
<li> <a href="https://rburnham.wordpress.com/category/specflow/">SpecFlow dedicated posts</a> on <a href="https://rburnham.wordpress.com/">Ryan Burnham's Blog</a>
<li> <a href="http://volaresystems.com/blog/post/2013/01/06/SpecFlow-and-WatiN-Worst-Practices-What-NOT-to-do">SpecFlow and WatiN Worst Practices: What *NOT* to do</a> by Joe Wilson
<li> <a href="http://blogs.msdn.com/b/qingsongyao/archive/2013/09/15/acceptance-testing-driven-development-atdd-use-specflow.aspx">Acceptance Testing Driven Development (ATDD) Use SpecFlow</a> by <a href="http://blogs.msdn.com/337941/ProfileUrlRedirect.ashx">Qingsong Yao</a>
<li> <a href="http://geekswithblogs.net/EltonStoneman/archive/2014/09/24/executable-specifications-end-to-end-acceptance-testing-with-specflow.aspx">Executable Specifications: End-to-end Acceptance Testing with SpecFlow</a> by <a href="http://www.pluralsight.com/author/elton-stoneman">Elton Stoneman</a>
<li> <a href="http://dontcodetired.com/blog/post/Advanced-SpecFlow-Restricting-Step-Definition-and-Hook-Execution-with-Scoped-Bindings.aspx">Advanced SpecFlow: Restricting Step Definition and Hook Execution with Scoped Bindings</a> by Jason Roberts
<li> <a href="http://blogs.lessthandot.com/index.php/enterprisedev/application-lifecycle-management/using-specflow-to/">Using SpecFlow to drive Selenium WebDriver Tests</a> by <a href="http://blogs.lessthandot.com/index.php/author/tarwn/">Eli Weinstock-Herman (tarwn)</a>
<li> <a href="http://site.specflow.org/blog-posts">Blog post links</a> on <a href="http://site.specflow.org">http://site.specflow.org</a>
</ul>
</td></tr>
<tr><td><b>Features documentation completeness</b></td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td>At the time of writing, the official documentation has several empty pages, e.g. Tags and Visual Studio Plugin</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td>During this comparison some places were found with no direct reference to the feature itself. Mainly this relates to Gherkin features</td></tr>
</table>
</p>
<p>
Summarizing the above content, we can see that both <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> are equally well represented across the documentation areas. Both lack API docs, mainly because .NET doesn't have a really full-featured API doc generator and publishing API docs isn't widely practiced in the .NET world. And both have gaps in feature documentation completeness. Thus, each gets a slightly lower grade, and we end up with:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=yellow>2</td></tr>
</table>
</p>
<h1>Flexibility in passing parameters</h1>
<p>
In this section we'll compare <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> on their flexibility in building instructions and varying them in different ways to cover more specific cases corresponding to the same action. Here we'll check the following options:
<ul>
<li><b>Parameter variants</b> - in some cases we have a fixed scope of possible values to pass as a parameter. Instead of using a generic regular expression, we should be able to enumerate the acceptable options. That's what parameter variants are for
<li><b>Parameters injection</b> - sometimes the order of a method's input parameters differs from the order that reads naturally in our phrases. In such cases we should be able to define explicitly which part of the text goes to which parameter
<li><b>Tabular parameters</b> - when we pass complex structures or arrays of complex structures, there should be a mechanism to group them in a granular way so they stay readable and at the same time are convenient to parse. Tabular parameters are one way to do that
<li><b>Step arguments conversion</b> - the ability to convert input values into compact data structures
<li><b>Examples</b> - this looks similar to tabular parameters, except it serves a different purpose: running the same scenario against multiple different sets of input parameters
<li><b>Multi-line input</b> - a relatively small feature; however, it's used quite frequently, so it's nice when it is supported
<li><b>Extra features</b> - any other features. They're not as essential as the previous ones, but are nice to have
</ul>
So, the following table compares NBehave and SpecFlow against the above features:
<table cellspacing=0 cellpadding=0 margin=0>
<tr><th>Feature</th><th>Available in <br>NBehave</th><th>Evidence</th><th>Available in <br>SpecFlow</th><th>Evidence</th></tr>
<tr><td>Parameter variants</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td>Simply supported by regular expressions</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td>Simply supported by regular expressions</td></tr>
<tr><td>Parameters injection</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Typed-steps">Typed steps in NBehave</a></td><td align=center><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td bgcolor=silver> </td></tr>
<tr><td>Tabular parameters</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Tables">Tables in NBehave</a></td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><ul><li> <a href="https://github.com/techtalk/SpecFlow-Examples/blob/master/ASP.NET-MVC/BookShop/BookShop.AcceptanceTests.Selenium/US01_BookSearch.feature">Example</a> <li> <a href="http://www.specflow.org/documentation/Step-Definitions/">Step Definitions in SpecFlow</a></ul></td></tr>
<tr><td>Step arguments conversion</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Typed-steps">Typed steps in NBehave</a></td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://www.specflow.org/documentation/Step-Argument-Conversions/">Step arguments conversion in SpecFlow</a></td></tr>
<tr><td>Examples</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Scenario-outline">Scenario outlines in NBehave</a></td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://www.specflow.org/documentation/Using-Gherkin-Language-in-SpecFlow/">Gherkin Language in SpecFlow</a></td></tr>
<tr><td>Multi-line input</td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="https://github.com/nbehave/NBehave/wiki/Docstring">Docstrings in NBehave</a></td><td align=center><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><a href="http://www.specflow.org/documentation/Using-Gherkin-Language-in-SpecFlow/">Gherkin Language in SpecFlow</a></td></tr>
<tr><td>Formatted input</td><td align=center><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td bgcolor=silver> </td><td align=center><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td bgcolor=silver> </td></tr>
</table>
There may be other small features, but in most cases they're rather minor syntactic sugar, or are missing simply because a specific engine has no need for them.
</p>
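To make the tabular parameters feature more concrete, here is a minimal sketch of a SpecFlow-style binding that consumes a Gherkin table. The <code>UserSteps</code> class and <code>User</code> type are invented for illustration; the sketch assumes the TechTalk.SpecFlow package, whose <code>CreateSet</code> helper lives in the <code>TechTalk.SpecFlow.Assist</code> namespace.

```csharp
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

[Binding]
public class UserSteps
{
    // Hypothetical data structure matching the table columns
    public class User
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // Matches a step like:
    //   Given the following users:
    //     | Name | Age |
    //     | John | 30  |
    [Given(@"the following users:")]
    public void GivenTheFollowingUsers(Table table)
    {
        // CreateSet converts the Gherkin table rows into typed objects
        var users = table.CreateSet<User>();
        foreach (var user in users)
        {
            // ... register each user in the system under test
        }
    }
}
```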
<p>
Although SpecFlow supports one feature fewer, both engines have major but not full-featured support here. Thus, we'll grade them both with 2:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=yellow>2</td></tr>
</table>
</p>
<h1>Data sharing between steps</h1>
<p>
During test execution there may be cases when we need to remember a value in one step and re-use it in further steps. It's a very frequent case. By default it can be handled with global objects (which are usually static), but that may cause problems with parallel runs. Some engines have a built-in storage which is supposed to be more thread-safe, and it also serves as a standard way of data sharing. Such a feature is called <b>the context</b>. It represents an internal data storage (usually a map) where we store named values and later retrieve them. Depending on the scope, contexts can be:
<ul>
<li> <b>Binding-specific</b> - localized to a class, file or any other type of single resource
<li> <b>Scenario-specific</b> - the scope is limited to a specific scenario
<li> <b>Feature-specific</b> - the scope is limited to a specific feature
<li> <b>Global</b> - the values in this context are visible and accessible everywhere during the entire run
</ul>
</p>
<p>
<table cellspacing=0 cellpadding=0 margin=0>
<tr><th>Context</th><th>Available in <br>NBehave</th><th>Available in <br>SpecFlow</th><th>Evidences</th></tr>
<tr><td><b>Binding-specific</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td>This is supported by local variables. So, it's all about language features + code binding specifics</td></tr>
<tr><td><b>Scenario-specific</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td>
<td>
<ul>
<li> <a href="https://github.com/nbehave/NBehave/wiki/Context#scenariocontext">Scenario context in NBehave</a>
<li> <a href="http://www.specflow.org/documentation/ScenarioContext/">Scenario context in SpecFlow</a>
</ul>
</td></tr>
<tr><td><b>Feature-specific</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td>
<td>
<ul>
<li> <a href="https://github.com/nbehave/NBehave/wiki/Context#featurecontext">Feature context in NBehave</a>
<li> <a href="http://www.specflow.org/documentation/FeatureContext/">Feature context in SpecFlow</a>
</ul>
</td></tr>
<tr><td><b>Global</b></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td>This can be done using static objects or singletons. So, it's more about language capabilities</td></tr>
</table>
</p>
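As a quick illustration, here is a sketch of scenario-scoped data sharing using SpecFlow's <code>ScenarioContext</code>; the step texts and the <code>orderId</code> value are made up, and the sketch assumes the TechTalk.SpecFlow package:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class SharingSteps
{
    [When(@"I create an order")]
    public void WhenICreateAnOrder()
    {
        // Store a value in the scenario-scoped context for later steps
        ScenarioContext.Current["orderId"] = 12345;
    }

    [Then(@"the order is listed in my history")]
    public void ThenTheOrderIsListed()
    {
        // Retrieve the value saved by the earlier step of the same scenario
        var orderId = (int)ScenarioContext.Current["orderId"];
        // ... assert against the application using orderId
    }
}
```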
<p>
So, both engines support contexts pretty well. Thus, they both earn the highest grade:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=green>3</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
</p>
<h1>Auto-complete</h1>
<p>
The auto-complete feature is really important and gives a real speed-up during test scenario development: all you need to do is use a specific set of words, and the system will pick up all the necessary phrases. It's extremely convenient and may be one of the key decision points when selecting an engine.
</p>
<p>
Both engines under comparison have it. NBehave has the <a href="https://visualstudiogallery.msdn.microsoft.com:443/9ea87b8a-4086-4a02-984c-be4b3663b6cb">NBehave Visual Studio Plugin</a>, which contains an auto-complete feature. SpecFlow has <a href="http://www.specflow.org/documentation/Visual-Studio-2013-Integration/">Visual Studio 2013 Integration</a>, also delivered as a plugin with auto-complete. Currently the NBehave plugin targets only Visual Studio 2010 (and is supported in the 2012 version), so it doesn't really work in 2013. But in this comparison we just check the principal ability to support the feature, and NBehave does. So, both engines get the highest grade here:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=green>3</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
</p>
<h1>Scoping</h1>
<p>
If we load step definitions from different libraries, or apply similar text instructions to different application areas, we may need to use the same key words with different implementations behind them. In most cases that's a bit confusing, but on the other hand it can bring some benefits. E.g. imagine you have an application represented by different clients (web and desktop) doing the same thing but implemented on different platforms, so that one client is great to use on the local desktop while another is targeted at the browser. All actions are the same, so end-user test scenarios are the same. But since we interact with different windows and controls, the actual technical core has to differ. That's why we need to distribute steps based on some specific attribute, applying some steps to the web application only while other steps apply only to the desktop client. This is one example of where we need step scoping.
</p>
<p>
So, how good are our engines at this? Well, SpecFlow has a dedicated <a href="http://www.specflow.org/documentation/Scoped-Bindings/">scoped bindings</a> feature for that. It does exactly what is expected in this functionality area: we may have several step bindings for the same key word which are applied differently depending on attributes. What about NBehave? There's no explicit way to do that. But NBehave itself is started a bit differently: it runs specific assemblies. So, if we want to run different versions of steps representing the same key word, we just need to put them into a separate assembly and run it. I can't even call it a workaround; it is a feature, but a bit weaker than SpecFlow's, since a similar trick can be done with SpecFlow configuration if necessary. Thus, we can mark our engines with the following grades:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
</p>
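The web/desktop example above can be sketched with SpecFlow's <code>Scope</code> attribute. The step text, tags and class name are invented for illustration, and the sketch assumes the TechTalk.SpecFlow package:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    // This binding applies only to scenarios tagged @web
    [When(@"I log in"), Scope(Tag = "web")]
    public void WhenILogInViaBrowser()
    {
        // ... drive the web client
    }

    // Same key word, but applied only to scenarios tagged @desktop
    [When(@"I log in"), Scope(Tag = "desktop")]
    public void WhenILogInViaDesktopClient()
    {
        // ... drive the desktop client
    }
}
```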
<h1>Composite steps</h1>
<p>
Normally, when we design a BDD-based solution, we reserve some key words for low-level actions (like generic clicks and text entries), then move to small page actions, e.g. filling a form, and then we have a separate level where we describe all the above actions with a more general description reflecting business functionality, e.g. creating a trade. It means that lower-level steps may simply be included into higher-level steps. This way we additionally re-use our key words, which is good as it amplifies the advantage of the BDD approach. Generally, the ability to call step definitions from other step definitions is called <b>Composite Steps</b>. So, how are they supported by NBehave and SpecFlow?
</p>
<p>
Well, NBehave doesn't seem to have this feature explicitly. At least it isn't described in the documentation, nor is it visible in the code. But as an alternative it has the <a href="https://github.com/nbehave/NBehave/wiki/Embedded-runner">embedded runner</a> feature. It doesn't cover all potential cases, but at least it gives the ability to invoke features from code. Thus, we can combine reusable steps into a string with the feature text and get what we need.
</p>
<p>
As for SpecFlow, it supports <a href="http://www.specflow.org/documentation/Calling-Steps-from-Step-Definitions/">calling steps from step definitions</a> pretty well. Thus, we may conclude it has full feature support. This is reflected in the following grades:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
Although NBehave doesn't support this feature explicitly, the alternative approach covers most of the options. In other words, the feature isn't supported directly, but a workaround provides the necessary support.
</p>
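For SpecFlow, calling steps from step definitions can be sketched as follows. The high-level step and the lower-level step texts are invented for illustration; the sketch assumes the TechTalk.SpecFlow package, whose <code>Steps</code> base class exposes the <code>Given</code>/<code>When</code>/<code>Then</code> helper methods:

```csharp
using TechTalk.SpecFlow;

// Deriving from Steps gives access to Given/When/Then helpers
[Binding]
public class HighLevelSteps : Steps
{
    [When(@"I submit a valid registration form")]
    public void WhenISubmitAValidRegistrationForm()
    {
        // Re-use lower-level steps already bound elsewhere
        When("I enter \"John\" into the \"Name\" field");
        When("I enter \"john@example.com\" into the \"Email\" field");
        When("I click the \"Submit\" button");
    }
}
```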
<h1>Backgrounds and Hooks</h1>
<p>
Of course, tests aren't just plain sequences of actions. In some cases we need to perform pre-setup activities to make sure our application is in the proper initial state. Additionally, we should be able to handle events which happen before/after each step, scenario, feature, run etc. All that is covered by backgrounds and hooks.
</p>
<h2>Backgrounds</h2>
<p>
Backgrounds are important when we have common pre-conditions which should apply to every test within a feature.
Here is the direct reference showing <a href="https://github.com/nbehave/NBehave/wiki/Background">backgrounds in NBehave</a> support.
SpecFlow also supports this <a href="http://www.specflow.org/documentation/Using-Gherkin-Language-in-SpecFlow/">Gherkin feature</a>. So, this feature is fully supported by both engines, and we have the following grades:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=green>3</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
</p>
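For readers new to Gherkin, a background looks like this in a feature file (the feature and steps are invented for illustration; the same syntax is accepted by both engines):

```gherkin
Feature: Account management

  # The Background steps run before every scenario in this feature
  Background:
    Given I am logged in as an administrator
    And the user list is empty

  Scenario: Create a user
    When I create a user named "John"
    Then the user list contains 1 entry
```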
<h2>Hooks</h2>
<p>
Hooks are needed to run a specific action on an engine-specific event. E.g. in some cases we need to run code before/after the entire suite or even a specific step. It may be needed for many purposes (e.g. reporting). The comparison will cover the following hook types:
<ul>
<li><b>Before/After step</b> - runs before/after each step
<li><b>Before/After scenario block</b> - runs before/after each scenario block, i.e. all Givens, all Whens etc.
<li><b>Before/After scenario</b> - runs before/after each test scenario
<li><b>Before/After feature</b> - runs before/after each feature
<li><b>Before/After run</b> - runs before/after entire run
<li><b>Tag Filtering</b> - this is additional feature which gives an ability to apply hooks only to specific tags. It was decided to add it here as it's the most relevant place.
</ul>
Both NBehave and SpecFlow support hooks. Here is the list of <a href="https://github.com/nbehave/NBehave/wiki/Hooks">NBehave hooks</a>. <a href="http://www.specflow.org/documentation/Hooks/">SpecFlow hooks</a> also have a dedicated description. The common set of hooks is pretty similar and covers the most essential places. But <a href="http://specflow.org">SpecFlow</a> has 2 more options which cannot be found in <a href="http://nbehave.org">NBehave</a>. They are:
<ul>
<li> <b>Before/After scenario block</b> - it's not really critical to have, as this type of hook can be implemented using the existing <a href="https://github.com/nbehave/NBehave/wiki/Hooks">NBehave hooks</a>. But for SpecFlow it's still an additional feature which doesn't exist in NBehave
<li> <b>Tag Filtering</b> - this is related to <a href="http://www.specflow.org/documentation/Scoped-Bindings/">scoped bindings</a>, which are a good but not essential feature (sometimes even harmful); yet it's still something SpecFlow has and NBehave doesn't
</ul>
The above information can be represented with the following table:
<table cellspacing=0 cellpadding=0 margin=0>
<tr><th>Hook</th><th>Available in <br>NBehave</th><th>Available in <br>SpecFlow</th></tr>
<tr><td>Before/After step</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Before/After scenario block</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Before/After scenario</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Before/After feature</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Before/After run</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Tags Filtering</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
</table>
Taking into account the above features, we can easily estimate grades for both engines. They are:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
Although SpecFlow appears to be better in this area, NBehave doesn't lose too much, as such features aren't used that frequently.
</p>
<h1>Binding to code</h1>
<p>
Both <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> use attributes to bind scenario text to the actual implementation, which is the most convenient form of step binding. Thus, both engines get the highest grade in this area:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=green>3</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
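As an illustration, a minimal SpecFlow step binding might look like this (a sketch with made-up step text and class names; NBehave bindings follow the same idea with its own attribute set on a step class):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    // Binds the step "When I add 2 and 3" to this method; the numeric
    // groups in the regular expression are captured as parameters.
    [When(@"I add (\d+) and (\d+)")]
    public void WhenIAddNumbers(int first, int second)
    {
        // ... call the system under test ...
    }
}
```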
</p>
<h1>Formatting flexibility</h1>
<p>
This area is not much of a differentiator either, as both <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> are relatively flexible about feature formatting. Thus, again, both engines get the highest grade:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=green>3</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
</p>
<h1>Built-in reports</h1>
<p>
Reporting is one of the most important parts of any test engine, as reports are the major artifact we analyze after a test run completes. People sometimes overestimate the importance of reports, but very few underestimate it. For this comparison we'll take the set of report types supported by at least one of the engines being compared, and we'll also pay attention to reports generated by similar engines for other programming languages.
So, let's compare NBehave and SpecFlow against the supported report types:
<table cellspacing=0 cellpadding=0 margin=0>
<tr><th>Report type</th><th>Available in <br>NBehave</th><th>Available in <br>SpecFlow</th></tr>
<tr><td>Console output</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Pretty console output</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td></tr>
<tr><td>Structured file (e.g. XML)</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Well-formatted readable file (e.g. HTML)</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td></tr>
<tr><td>Usage report</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td></tr>
<tr><td>Extra report types</td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td><td><img border="0" src="http://2.bp.blogspot.com/-_INL1g7DqxE/UUpC6xElloI/AAAAAAAAAao/EmrjE6RPJBU/s320/no.png" /></td></tr>
</table>
<a href="http://specflow.org">SpecFlow</a> is a bit better than <a href="http://nbehave.org">NBehave</a>, but only due to a single extra report type which SpecFlow supports. So, the difference here is not essential, which results in the following grades:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=red>1</td></tr>
<tr><td>SpecFlow</td><td bgcolor=yellow>2</td></tr>
</table>
</p>
<h1>Conformity to Gherkin standards</h1>
<p>
Since Gherkin is a common standard applied not just to <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> but also to many other languages, it is important to make sure that our engines conform to it. In some cases that may even be a key selection factor, e.g. if you develop features in different programming languages and want to use the same set of tests across all of them. Having common feature files should decrease the effort of migrating from one language to another.
</p>
<p>
So, the table below shows which features are supported by both engines under comparison:
<table cellspacing=0 cellpadding=0 margin=0>
<tr><th>Keyword/attribute</th><th>Available in <br>NBehave</th><th>Available in <br>SpecFlow</th><th>Comment</th></tr>
<tr><td>Feature</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Scenario</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Given\When\Then</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>And\But</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Scenario Outline</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Examples</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Background</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Tag</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
<tr><td>Tables</td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td><img border="0" src="http://1.bp.blogspot.com/-vmkjh0Bnku0/UUpCfDTJsmI/AAAAAAAAAag/4U_mNZh816s/s320/yes.png" /></td><td> </td></tr>
</table>
Both <a href="http://nbehave.org">NBehave</a> and <a href="http://specflow.org">SpecFlow</a> are 100% Gherkin compliant. That's reflected in grades:
<br>
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=green>3</td></tr>
<tr><td>SpecFlow</td><td bgcolor=green>3</td></tr>
</table>
</p>
<h1>Input Data Sources</h1>
<p>
Input data sources are needed when we want to share data or scenario parts across multiple resources. Generally, this should reduce the development and maintenance effort. On the other hand, it may bring additional problems with reference resolution at the feature level. Gherkin feature files were never meant to describe algorithms or other technical details; they are mainly targeted at expressing test instructions which should be interpreted literally. Thus, ideologically, the presence of inclusions is not really appropriate. At the same time, being able to read data from external sources could be useful. But...
</p>
<h2>External Data</h2>
<p>
<a href="http://specflow.org">SpecFlow</a> doesn't contain anything which does that directly. At the same time, <a href="http://nbehave.org">NBehave</a> has an <a href="https://github.com/nbehave/NBehave/wiki/Embedded-runner">embedded runner</a> which can read features passed as a string. Where this string comes from is a separate question, but generally it's possible using built-in functionality. It is rather a workaround than something really targeted at reading external sources, yet NBehave still gains some points here. So, the comparative grades look like:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=silver>0</td></tr>
</table>
</p>
<h2>Inclusions</h2>
<p>
SpecFlow has no support for inclusions; maybe that's even for the better. NBehave has the <a href="https://github.com/nbehave/NBehave/wiki/Embedded-runner">embedded runner</a> feature which can be used for that. It's not 100% of what is required, but it is a serious step towards supporting it. Based on that, we can grade our engines like this:
<table>
<tr><th>Engine</th><th>Grade</th></tr>
<tr><td>NBehave</td><td bgcolor=yellow>2</td></tr>
<tr><td>SpecFlow</td><td bgcolor=silver>0</td></tr>
</table>
</p>
<h1>Summary</h1>
<p>
It's time to summarize all the results and join them into one total score table. Here it is:
<table>
<tr><th rowspan=2>Engine\Feature</th><th rowspan=2>Documentation</th><th rowspan=2>Flexibility in <br>passing parameters</th><th rowspan=2>Data sharing<br> between steps</th><th rowspan=2>Auto-complete</th><th rowspan=2>Scoping</th><th rowspan=2>Composite <br>steps</th><th colspan=2>Backgrounds <br>and Hooks</th><th rowspan=2>Binding <br>to code</th><th rowspan=2>Formatting <br>flexibility</th><th rowspan=2>Built-in <br>reports</th><th rowspan=2>Conformity to <br>standards</th><th colspan=2>Input Data <br>Sources</th><th rowspan=2>Total</th></tr>
<tr><td bgcolor=#DDDDFF>Backgrounds</td><td bgcolor=#DDDDFF>Hooks</td><td bgcolor=#DDDDFF>External <br>Data</td><td bgcolor=#DDDDFF>Inclusions</td></tr>
<tr bgcolor=gold><td>NBehave</td><td bgcolor=yellow>2</td><td bgcolor=yellow>2</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=yellow>2</td><td bgcolor=yellow>2</td><td bgcolor=green>3</td><td bgcolor=yellow>2</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=red>1</td><td bgcolor=green>3</td><td bgcolor=yellow>2</td><td bgcolor=yellow>2</td><td>33</td></tr>
<tr bgcolor=gold style="border: 2px solid gold"><td>SpecFlow</td><td bgcolor=yellow>2</td><td bgcolor=yellow>2</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=green>3</td><td bgcolor=yellow>2</td><td bgcolor=green>3</td><td bgcolor=silver>0</td><td bgcolor=silver>0</td><td>33</td></tr>
</table>
</p>
<p>
Well, we've just got equal scores.
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-RQIXJtxMYiw/VNgcPqEMHMI/AAAAAAAAApo/7o878XqwDB8/s1600/friendship.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-RQIXJtxMYiw/VNgcPqEMHMI/AAAAAAAAApo/7o878XqwDB8/s400/friendship.jpg" /></a></div>
Does that mean both engines are equal? Well, probably, no. What we should take from this comparison is that:
<ul>
<li> NBehave and SpecFlow have a different balance of features, but the overall quantity and impact of those features compensate for each engine's gaps
<li> Some features were deliberately excluded from the comparison (as described in a separate paragraph), so in some cases those excluded features should also be taken into account
<li> SpecFlow looks like a live project, while NBehave hasn't been updated for a while. However, the new branch for the 0.7 release is still there, so who knows - maybe the project was just suspended
</ul>
In general, the comparison mainly shows what you gain and what you lose by selecting either of the compared engines for the same area. But again, don't forget to read the release notes and their specific details.
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com4tag:blogger.com,1999:blog-2532302763215844416.post-87837910824932835752015-01-18T22:29:00.000+00:002015-01-18T22:29:23.581+00:00Aerial: Introduction to Test Scenarios Generation Based on Requirements<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>Aerial: Introduction to Test Scenarios Generation Based on Requirements</title>
</head>
<body>
<p>
In one of the previous articles, dedicated to <a href="http://mkolisnyk.blogspot.co.uk/2014/12/automated-testing-moving-to-executable.html">moving to executable requirements</a>, I described some high-level ideas about how such requirements should look. The main idea is that a document containing requirements/specifications can be the source for generating test cases. In the previous article we only made a brief overview of how it should work in theory. Now it's time to turn theory into practice. In this article I'll give a brief introduction to the new engine I've started working on. So, let me introduce <a href="http://mkolisnyk.github.io/aerial/">Aerial</a>, the magic box which makes our requirements executable.
<div class="separator" style="clear: both; text-align: center;"><img border="0" src="http://4.bp.blogspot.com/-a4HEXtcV5GY/VLwyKBubRAI/AAAAAAAAAow/xKm_71TiSEo/s1600/AerialSide001.png" /></div>
In this article I'll describe the main principles of this engine and provide some basic examples to give an idea of what it is and where it can be used.
</p>
<a name='more'></a>
<h1>Aerial Main Purpose and Benefits</h1>
<p>
Aerial is an engine implementing the Executable Requirements approach. It is targeted at generating test scenarios based on a requirements description; thus, the requirements document is the source for tests.
Mainly, it is designed as an extension of <a href="http://cukes.info">Cucumber</a> and provides the following possibilities:
<ul>
<li><b>More compact and structured representation of requirements and scenarios</b> - with Aerial we don't need to expand all the data and scenarios by hand. We just define rules for expanding a behaviour description into the actual set of scenarios. This form is more compact and more declarative.
<li><b>Built-in mechanism for generating test scenarios</b> - since test scenarios are generated, we don't need to spend time on test design itself. In other words, the Requirements Definition and Test Design stages are collapsed into one.
<li><b>Generalised approach for getting data from external resources</b> - requirements are usually defined in text form and can be stored anywhere, in any form. Aerial provides an extensible mechanism for processing various input sources.
<li><b>Ability to perform static checks on requirements</b> - since requirements are used as the source for test scenario generation, various validations can be applied to them. In other words, Aerial partially automates the static validation of requirements.
<li><b>Tests and their automated implementation react to any requirement change</b> - in any testing project we should keep our tests and automated tests in sync with the requirements. Since tests are generated from the requirements description, any change to the requirements updates the underlying tests automatically.
<li><b>Simplified requirements and test coverage calculation</b> - coverage is always important. Since test scenarios are generated from the initial requirements, requirements coverage is 100% by design. In particular, this is described in the <a href="http://mkolisnyk.blogspot.co.uk/2014/09/blog-post.html#make-requirements-executable">Measure Automated Tests Quality</a> post, where we looked at a unified test coverage metric.
</ul>
</p>
<h1>How Is It Ever Possible?</h1>
<p>
The first question to appear is: how can we generate test scenarios based on requirements at all? After all, requirements are free-form text which can vary and describe anything. So what exactly should we describe, and how?
</p>
<h2>Common Elements For Scenario Description</h2>
<p>
Every test scenario has specific components which are present almost everywhere, regardless of what application we test. First of all, every test performs some <b>Action</b> which drives the system to some specific state where we perform verifications.
</p>
<p>Depending on the action result we can end up in a different state. So, for the generator we should branch further actions depending on the <b>Action</b>'s success or failure.</p>
<p>
In some cases we have to make sure that our test drives the application to the correct initial state before performing any actions. This is normally done by some pre-conditions, or <b>Pre-requisites</b>.
</p>
<p>
And finally, all scenarios are driven by the input data we use. Depending on the input characteristics, we can identify whether a scenario is positive or negative.
</p>
<p>
So, eventually, every functional case can be represented with the following items:
<ul>
<li> <b>Action</b> - some set of instructions explaining what should be done in order to reach some application state where we should do some verifications
<li> <b>Actions on Success/Failure</b> - the set of instructions to be done in case <b>Action</b> is expected to be finished successfully or with error respectively
<li> <b>Pre-requisites</b> - the set of instructions to be done before test scenario starts being executed
<li> <b>Test Data</b> - the set of descriptions showing the data to be used within the scenarios as well as the data format
</ul>
Their dependency and interaction can be represented with the following diagram:
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ftmLYK56kkk/VLwywYqq6HI/AAAAAAAAAo4/ptawsmaEV1Q/s1600/AerialScenarioStructure.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ftmLYK56kkk/VLwywYqq6HI/AAAAAAAAAo4/ptawsmaEV1Q/s400/AerialScenarioStructure.png" /></a></div>
Depending on the level of detail, we can represent any test scenario in this form. This is quite convenient for generation: we can describe each specific part and then build the final scenarios just by combining all those parts.
</p>
<h2>Typical Data Is Tested In Typical Way</h2>
<p>
The main remaining part of the generation algorithm is test data. We should declare the data in some specific way so that the actual input can be generated just from some rules.
</p>
<p>
This can be achieved by defining the test data format. E.g. if some input is very specific, we can use a regular expression to define the format a valid value should have. Based on that, we can also derive potential values which violate the format.
</p>
<p>
In real life this corresponds to typical patterns of entering a value into some specific field. E.g. you've probably heard the joke:
<div class="rule">
QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv
</div>
So, in this example we have:
<ul>
<li> A valid number (1 beer)
<li> The lower boundary value (0 beers)
<li> A big number, most likely bigger than the maximal acceptable one (999999999)
<li> Something which is not a number at all (lizard & sfdeljknesv)
<li> An improper number (-1)
</ul>
However, generally the number of beers could be described as a range from 0 to 1000 (imagine you pay for all the beers during the day). So, the input data in this example can be described as a numeric value in the range of 0 to 1000.
</p>
<p>
Now let's switch from this example to another one using a similar numeric entry. E.g. we have some payment system where we can make payments from $0 to $1000. Let's imagine the test inputs we could use to check this system. Well, they would be pretty similar to the ones from the previous example.
</p>
<p>
So, even abstracted from the context, we can define rules for input data generation: a declarative form of input which some algorithm can expand into actual test data. Indeed, if a value is defined within a range, we definitely try something inside the range (positive testing), outside the range (negative testing), and values on or near the borders (boundary analysis). We shouldn't take all possible data here, just some instances of data belonging to each specific group (equivalence classes).
</p>
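As a generic sketch of this idea (not actual Aerial code; all names are made up for illustration), expanding a range definition like 0 to 1000 into typical positive and negative test values can look as follows:

```csharp
using System;
using System.Collections.Generic;

class RangeRule
{
    // Expands an inclusive range [min; max] into typical equivalence-class
    // and boundary values for positive and negative tests.
    static void Expand(int min, int max,
        out List<int> positive, out List<int> negative)
    {
        positive = new List<int> { min, max, (min + max) / 2 };   // boundaries + a middle value
        negative = new List<int> { min - 1, max + 1 };            // values just outside the range
    }

    static void Main()
    {
        Expand(0, 1000, out var positive, out var negative);
        Console.WriteLine(string.Join(",", positive));  // 0,1000,500
        Console.WriteLine(string.Join(",", negative));  // -1,1001
    }
}
```

A real generator would also handle exclusive bounds, non-numeric formats (e.g. regular expressions), and uniqueness flags, but the core "rules in, sample values out" principle is the same.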
<p>
The same applies to Aerial. The initial document contains an input section where we define just the input name and the acceptable format. The engine itself generates typical data based on some rules.
</p>
<h2>The Set of Scenarios is Typical</h2>
<p>
Mainly when we have some input data and some pre-defined flow we can define the following groups of scenarios:
<ul>
<li> <b>Positive</b> - operate with valid data with expectation of valid output
<li> <b>Negative</b> - operate with invalid data with expectation of invalid output
<li> <b>Unique values</b> - perform the same scenario on a similar set of data more than once. If some value is unique, it will be accepted on the first attempt, but an error is expected on the second one.
</ul>
Some of the test groups may appear implicitly here. E.g. <b>Boundary Analysis</b> can be performed as part of the <b>Positive</b> and <b>Negative</b> tests while checking values based on the defined range. Also, <b>Pair-wise testing</b> may be applied in case the amount of positive test data is big enough.
</p>
<p>
The main point of the above paragraph is that there are specific groups of tests which can be generated in a common way.
</p>
<h2>Sample Scenario Generation Patterns</h2>
<h3>Positive Scenarios</h3>
<p>
Positive scenarios operate with valid values and are targeted at checking the expected behaviour. Mainly, positive scenarios are generated from the document description using the following structure:
<pre class="code">
Scenario: < Case Name > positive test
Given < Pre-requisites steps >
When < Action text >
Then < Actions in case of success >
Examples:
<The positive test data table>
</pre>
</p>
<h3>Negative Scenarios</h3>
<p>
Negative scenarios are built the same way as positive ones, except that they operate with negative test data where at least one item doesn't fit the acceptable format. Also, since such a scenario uses invalid input, it expects the actions on error as expected results. So, mainly, a negative test scenario is built using the following template:
<pre class="code">
Scenario: < Case Name > negative test
Given < Pre-requisites steps >
When < Action text >
Then < Actions in case of error >
Examples:
<The negative test data table>
</pre>
</p>
<h3>Unique Value Scenarios</h3>
<p>
Unique value scenario generation is triggered as soon as at least one field has the <b>Unique</b> column value set to <b>true</b> in the input data table.
In this context the <b>Unique</b> term isn't restricted to the case when we cannot create 2 records with the same value of some field; here, uniqueness means
that we cannot perform some action twice with the same value of some field.
</p>
<p>
Getting to the technical side of the generation, we need a scenario where the action runs successfully the first time, but fails the second time when the same value is used for the unique field.
Thus, the unique value scenario can be described with the following template:
<pre class="code">
Scenario: < Case Name > unique value test
Given < Pre-requisites steps >
When < Action text >
Then < Actions in case of success >
When < Action text >
Then < Actions in case of error >
Examples:
<Unique scenario data>
</pre>
</p>
<h3>Scenario Generation Sample</h3>
<p>
To show all the above in practice, here is an example of such generation. E.g. the initial requirements document looks like:
<pre class="code">
This is a sample document
With multiline description
Feature: Sample Feature
This is a sample feature
With multiline description
Case: Sample Test
This is a sample test case
With multiline description
Action:
Sample action on <Test> value
Input:
| Name | Type | Value | Unique |
| Test | Int | [0;100) | true |
On Success:
This is what we see on success
On Failure:
This is what we see on error
Pre-requisites:
These are our pre-requisites
</pre>
Here we describe some action and the expectations depending on input validity. After transformation we'll get a file containing text like:
<pre class="code">
Feature: Sample Feature
Scenario Outline: Sample Test positive test
Given These are our pre-requisites
When Sample action on <Test> value
Then This is what we see on success
Examples:
| Test | ValidInput |
| 50 | true |
| 51 | true |
| 0 | true |
Scenario Outline: Sample Test negative test
Given These are our pre-requisites
When Sample action on <Test> value
Then This is what we see on error
Examples:
| Test | ValidInput |
| 100 | false |
| -1 | false |
| 101 | false |
Scenario Outline: Sample Test unique value test
Given These are our pre-requisites
When Sample action on <Test> value
Then This is what we see on success
When Sample action on <Test> value
Then This is what we see on error
Examples:
| Test | ValidInput |
| 50 | true |
</pre>
As seen in the example, we get a full-featured Cucumber feature containing multiple scenario outlines. Note that when we use some data, we shouldn't forget to include the data items into the action text; otherwise the generated scenario wouldn't be consistent.
</p>
<h1>High Level Integration Overview</h1>
<p>
Aerial isn't really an independent engine. It just prepares initial data for further processing. At the moment it is targeted at generating Cucumber-like scenarios and sending them to JUnit. So, the entire flow covers 3 major components:
<ol>
<li> Aerial itself - used for test scenario generation
<li> Cucumber - used to run the generated scenarios
<li> JUnit - the main test engine which runs everything
</ol>
Schematically the entire flow can be represented with the following diagram:
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Q6iQGoWUn7g/VLwzLCJX95I/AAAAAAAAApA/haBgladG8To/s1600/AerialFlow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Q6iQGoWUn7g/VLwzLCJX95I/AAAAAAAAApA/haBgladG8To/s640/AerialFlow.png" /></a></div>
Here we have 2 major stages:
<ol>
<li> <b>Stage 1</b> - the requirements document is processed by Aerial. In the end the Cucumber-like feature files are generated
<li> <b>Stage 2</b> - the Cucumber feature files are sent to Cucumber driven by JUnit. As the result the JUnit reports are produced
</ol>
</p>
<h1>Components Overview</h1>
<p>
Aerial itself isn't a monolithic product. It contains several parts which can be released separately and are targeted at specific goals. These components are:
<ul>
<li> Aerial core engine
<li> Aerial Maven plugin
</ul>
Let's describe each of these components in more detail.
</p>
<h2>Aerial Engine</h2>
<p>
Aerial Engine is the core library which performs the entire scenario processing. It can be delivered either as:
<ul>
<li> External dependency:
<ul>
<li> <b>Maven</b>:
<pre class="code">
<dependency>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>aerial</artifactId>
<version>0.0.2</version>
</dependency>
</pre>
<li> <b>Gradle</b>:
<pre class="code">
'com.github.mkolisnyk:aerial:0.0.2'
</pre>
<li> <b>Buildr</b>:
<pre class="code">
'com.github.mkolisnyk:aerial:jar:0.0.2'
</pre>
</ul>
<li> Jar library which can be downloaded from <a href="http://central.maven.org/maven2/com/github/mkolisnyk/aerial/0.0.2/aerial-0.0.2.jar">here</a> (for a different version, just change the version numbers in the file/folder names).
</ul>
</p>
<h2>Aerial Maven Plugin</h2>
<p>
The Aerial Maven plugin is needed when we want to split the processing activities. By default, if we bind Aerial annotations to JUnit tests, the entire flow is performed, but it uses a very specific runner. If you need to use other runners, there's an ability to generate scenarios separately from execution. This is the current purpose of the Aerial Maven plugin. It can be included using the following entry:
<pre class="code">
<plugin>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>aerial-maven-plugin</artifactId>
<version>0.0.2</version>
</plugin>
</pre>
At the moment it supports the <b>aerial:generate</b> goal, which is executed during the <b>generate-sources</b> phase.
</p>
<h1>Tests Examples</h1>
<p>
Examples with detailed description can be found on <a href="http://mkolisnyk.github.io/aerial/#example">main Aerial site</a> page.
</p>
<h1>Summary</h1>
<p>
That was a brief overview of Aerial, its capabilities, and the major concepts behind it. The major things to take from this article are:
<ul>
<li> Specifications\Requirements are free-form text, so for convenience we can have them written in some specific format
<li> Test scenarios are relatively typical and may share some common structure items
<li> Aerial uses the above principles and introduces an input format suitable for test scenario generation
<li> Aerial doesn't perform the test execution itself; it is just an additional automated step before a Cucumber (or any other similar framework) run
</ul>
Of course, Aerial is a pretty new engine and a lot of things are yet to be changed or added, but the major concepts will stay the same.
</p>
<h1>Presentation and Video</h1>
<p>
<iframe src="//www.slideshare.net/slideshow/embed_code/43392430" width="425" height="355" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/kolesnik_nickolay/aerial-the-executable-requirements-engine-introduction" title="Aerial - The Executable Requirements Engine - Introduction" target="_blank">Aerial - The Executable Requirements Engine - Introduction</a> </strong> from <strong><a href="//www.slideshare.net/kolesnik_nickolay" target="_blank">Mykola Kolisnyk</a></strong> </div>
<iframe width="560" height="315" src="//www.youtube.com/embed/3zvupDhjd9g" frameborder="0" allowfullscreen></iframe>
</p>
<h1>References</h1>
<p>
<ul>
<li> <a href="https://github.com/mkolisnyk/aerial">GitHub Project</a>
<li> <a href="http://mkolisnyk.github.io/aerial/">Main Aerial Site</a>
</ul>
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-60294461622634978052014-12-16T00:51:00.000+00:002014-12-16T00:51:39.916+00:00Automated Testing: Moving to Executable Requirements<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
<title>Automated Testing: Moving to Executable Requirements</title>
</head>
<body>
<p>
A lot of test automation tools and frameworks aim to combine the technical and non-technical aspects of test automation, mainly by combining test design with its automated implementation. Thus, we can delegate test automation to people without programming skills, as well as make the solution more efficient through maximal functionality re-use. Most test automation tools are capable enough to create some sort of DSL which reflects the business specifics of the system under test, and there is a lot of test automation software created to provide such capabilities. The main thing they all achieve is blurring the border between manual and automated testing. As a result, we can usually control the correspondence between test scenarios and their automated implementation.
</p>
<p>
But that's not enough: we also have requirements, and whenever they change we have to spend time making sure the requirements are still in line with the tests and their automated implementation. Thus, we need an approach which combines requirements, tests and auto-tests into something uniform. One such approach is called <b>Executable Requirements</b>. In this post I'll try to describe existing solutions for it and some possible directions to move in.
</p>
<a name='more'></a>
<h1>What Should Executable Requirements Look Like?</h1>
<p>
Before describing any existing approaches/solutions and potential ways to go, we should clearly identify what we actually expect in the end. We have to define the attributes or features we need to obtain to make sure we are definitely using the <b>Executable Requirements</b> approach. So, what are those items?
</p>
<h2>There Should Be Requirements</h2>
<p>
This is an insanely obvious thing, but it is really what we should start with. In order to implement the <b>Executable Requirements</b> approach we need requirements. What are requirements?
<div class="rule">
Requirements are a structured, formalized description of the software under development and the way it is supposed to work. Normally they are represented as a set of feature descriptions with their expected behaviour.
</div>
This is an improvised definition, but the main thing to know is that there are other approaches to defining expected system behaviour. E.g. we can define it using a graphical model, or base it on a similar application or previous experience in general. All those approaches and the ways to use them are described in the <b>ISO/IEC/IEEE DIS 29119-4:2013</b> standard.
</p>
<p>
So, we have requirements if:
<ol>
<li> The expected behaviour of the system under development is described
<li> The description is done in textual form (maybe in combination with some graphics, but the main information should be stored as readable text)
<li> All descriptions fit the <a href="http://en.wikipedia.org/wiki/Requirement#Characteristics_of_good_requirements">good requirements criteria</a>, including traceability, which is useful for covering requirements with tests
</ol>
</p>
<h2>Requirements Should Be Executable</h2>
<p>
Well, it's another obvious thing; otherwise, it wouldn't be <b>Executable Requirements</b>. What do we expect here? In the context of the <b>Executable Requirements</b> approach, a requirement can be treated as executable if it is (or contains) a part which can be executed by some software in order to test the system under development. To make a long story short, the requirement description is actually a kind of source code for automated testing, or at least it refers to some resource which is an automated test.
</p>
<p>
A lot of test management systems have a similar feature, where we have some form for requirements with a linked test entity and a linked automated test. Just take any of these:
<ol>
<li> <a href="http://www8.hp.com/uk/en/software-solutions/quality-center-quality-management/">HP Quality Center</a>
<li> <a href="http://msdn.microsoft.com/en-us/library/jj635157.aspx">Microsoft Test Manager</a>
<li> <a href="https://jazz.net/products/rational-quality-manager/">Rational Quality Manager</a>
<li> <a href="http://www.borland.com/Products/Software-Testing/Test-Management/Silk-Central">SilkCentral Test Manager</a>
</ol>
But that is not enough to make requirements executable. All those systems provide an interface for setting up a link to some executable item, but the requirement doesn't contain the resource itself. It means that if we change the requirement, none of the automated tests are affected. Even worse, we may link wrong or empty tests. We'll still be able to run everything linked to the requirement, but it may be completely irrelevant to it.
</p>
<h2>Tests Should Be Sensitive To Requirements Change</h2>
<p>
That's one of the most important things! <b>Executable Requirements</b> are not about linking requirements to some executable tests; they are about making the requirement itself the source of the executable tests. Either the entire description or some part of it should be the source for test generation. This resolves the problem of keeping tests and requirements synchronized: if the requirements change, the tests are updated accordingly.
</p>
<h1>What Benefits Should We Obtain?</h1>
<p>
Well, all the above things should serve some goals we want to reach. They are:
<ol>
<li> Requirements, tests and auto-tests are always kept in sync, so our testing is always up to date with the requirements
<li> Tests are derived from the requirements, so we always have nominal 100% requirements coverage
<li> Testing effort is minimized, as requirements definition, test design and automated test implementation are collapsed into one activity
<li> The human factor is minimized, as some extra activity is done automatically
</ol>
Well, that sounds nice. But let's take a look at what we have in practice.
</p>
<h1>Existing Solutions Overview</h1>
<p>
<b>Executable Requirements</b> is not something new, and there's a number of tools/engines which implement something close to it. I don't want to make an overview of all of them; instead, I can identify several major groups of solutions joined by a common approach, with common advantages and gaps.
</p>
<h2>Fitnesse-like tools</h2>
<p>
This group of tools provides infrastructure in the form of a site, or an add-on to documentation storage software, with the extra ability to execute some specific macro or code in the background based on text from a document. Examples are:
<ul>
<li> <a href="http://www.fitnesse.org/">Fitnesse</a> - the Wiki Server with additional abilities.
<li> <a href="http://www.greenpeppersoftware.com/en/">GreenPepper</a> - the <a href="https://www.atlassian.com/software/confluence">Confluence</a> add-on.
<li> <a href="http://concordion.org/">Concordion</a>
</ul>
All those tools have some things in common. They all provide an interface to deal with documentation, and those documents may contain parts which are linked to executable code. So, eventually, requirements are represented as documents (simple web pages) where some parts may run executable code linked to the page.
</p>
<h3>Advantages</h3>
<p>
<ol>
<li> Requirements are represented as real documentation, with good formatting and in a readable form
<li> Everyone can easily trigger test execution
</ol>
</p>
<h3>Disadvantages</h3>
<p>
<ol>
<li> There's usually no direct integration with CI or source control, so the executable code has to be deployed to the server somehow
<li> Tests aren't easily configurable or portable to different workstations. E.g. we can hardly run the same set of tests on multiple remote machines in parallel as part of a continuous delivery process
<li> They require human interaction to trigger a run
</ol>
</p>
<h2>Cucumber-like engines</h2>
<p>
Another family of engines still operates with a textual representation of executable tests, but they pay more attention to the engine itself than to the infrastructure. The main representatives are:
<ul>
<li> <a href="http://mkolisnyk.blogspot.com/2012/06/gherkin-bdd-engines-comparison.html">Cucumber and all similar solutions</a>
<li> <a href="http://robotframework.org">Robot Framework</a>
</ul>
</p>
<h3>Advantages</h3>
<p>
<ol>
<li> Closely integrated with the technical side of the entire infrastructure
<li> Portable to different workstations
<li> Flexible to configure
<li> Test execution requires minimal human interaction
</ol>
</p>
<h3>Disadvantages</h3>
<p>
<ol>
<li> Although the source of the tests is still textual, it is not represented as a requirement document shared with everyone; the source is usually a separate resource
<li> They require technical knowledge and access to the test solution sources in order to run the tests and make updates
</ol>
</p>
<h1>But Is That Really Executable Requirements?</h1>
<p>
My answer is: <b>No</b>. Mainly, all the above solutions provide a way to execute tests based on text instructions, plus an extra ability to combine tests and requirements into a single document. Well, that works pretty well for acceptance tests, which normally come in small quantity and with quite short scenarios.
</p>
<p>
But that's not applicable to full-featured testing. Also, if we take a look at the way requirements are defined, we'll see that normally they don't contain test scenarios. Maybe some use cases are available, but we won't find the entire test suite defined in the same document. So, most existing solutions just provide an ability to combine requirement definitions with executable test descriptions. In other words, these are not <b>Executable Requirements</b>; they are executable test scenarios added to requirement descriptions. The part which is the source for the automated tests doesn't belong to the requirement itself, which means that the requirement part is not executable.
</p>
<h1>What Should Real Executable Requirements Look Like?</h1>
<p>
Yes, it seems like existing solutions don't provide the true <b>Executable Requirements</b> approach. At the same time, they show the way it should look.
</p>
<p>
The main thing which is missing is an ability to generate tests based on the requirement definition, which is normally text. We know how to bind text to executable code (<a href="http://mkolisnyk.blogspot.com/2012/06/gherkin-bdd-engines-comparison.html">Cucumber and all similar solutions</a>), but we need an additional step which would generate test scenarios based on requirement descriptions.
</p>
<p>
Is that something we can do? Actually, yes. When we design our tests, there are a lot of typical cases. E.g. when we know that some record should have a unique combination of key fields, we definitely need an additional test which checks what happens when we try to create a non-unique record. Another example is validating input into a field which requires a specific format (e.g. an e-mail field). In this case we should check a record which matches the format, then a record which violates it, a very long string, special characters, spaces, case sensitivity. The main thing is that this is a typical check list applicable to many different cases.
</p>
<p>
OK. Let's try to formalize it a bit. For most test scenarios we have such common items as:
<ul>
<li> <b>Action to perform</b> - the set of operations we should perform as part of the test scenario before reaching the state where we compare the expected and actual state using a check list
<li> <b>Input data</b> - the set of values we use during action execution. Here we can identify the format and other constraints of any input value we should use
<li> <b>Expectations on success/failure</b> - the check list of verifications for a positive or negative scenario
</ul>
Let's see it with an example. Imagine we have a form with 2 text fields (e-mail and phone number) and a <b>Submit</b> button. The form looks like this:
<fieldset>
<br>E-mail: <input type="text" value="sample@mail.com" readonly />
<br>Phone: <input type="text" value="+(1) 12323489" readonly />
<br><button>Submit</button>
</fieldset>
If we enter properly formatted information, we'll see the Welcome screen; otherwise we'll see an error message. E-mail and phone number fields usually have specific formats. So, based on the above classification, we can describe the form behaviour in the following way:
<li> <b>Representation 1</b>
<pre class="code">
Action:
I enter "<E-mail>" value as E-mail
And enter "<Phone Number>" value as Phone Number
And click on "Submit" button
Input:
| Field | Type | Format |
| E-mail | String | (\w+)[@](\w+)[.](\w+) |
| Phone Number | String | \+\(\d{1,3}\) \d{8} |
On Success:
I should see the Welcome screen is open
On Failure:
I should see the error message
</pre>
This still looks quite formalized. But look! This is not a dedicated test; it is something like a technical specification. We have the major operation flow, the data format definition, and different behaviour depending on positive or negative input. These aren't tests yet, but they can be a basis for tests to be generated. Imagine we could generate the following tests based on the above description:
<li> <b>Representation 2</b>
<pre class="code">
Feature: Sample Feature
Scenario Outline: positive test
When I enter "<E-mail>" value as E-mail
And enter "<Phone Number>" value as Phone Number
And click on "Submit" button
Then I should see the Welcome screen is open
Examples:
| Phone Number | ValidInput | E-mail |
| +(81) 23560730 | true | test@mail.com |
Scenario Outline: negative test
When I enter "<E-mail>" value as E-mail
And enter "<Phone Number>" value as Phone Number
And click on "Submit" button
Then I should see the error message
Examples:
| Phone Number | ValidInput | E-mail |
| | false | test@mail.com |
| +(306) 48051823+(306) 48051823 | false | test@mail.com |
| \\+\\(\\d{1,3}\\) \\d{8} | false | test@mail.com |
| +(81) 23560730 | false | |
| | false | |
| +(306) 48051823+(306) 48051823 | false | |
| \\+\\(\\d{1,3}\\) \\d{8} | false | |
| +(81) 23560730 | false | wrong@email.comwrong@email.com |
| | false | wrong@email.comwrong@email.com |
| +(306) 48051823+(306) 48051823 | false | wrong@email.comwrong@email.com |
| \\+\\(\\d{1,3}\\) \\d{8} | false | wrong@email.comwrong@email.com |
| +(81) 23560730 | false | (\\w+)[@](\\w+)[.](\\w+) |
| | false | (\\w+)[@](\\w+)[.](\\w+) |
| +(306) 48051823+(306) 48051823 | false | (\\w+)[@](\\w+)[.](\\w+) |
| \\+\\(\\d{1,3}\\) \\d{8} | false | (\\w+)[@](\\w+)[.](\\w+) |
</pre>
This already looks like a test set covering different input options, and it is a runnable Cucumber feature representation. So, imagine you define a technical specification in the format of <b>Representation 1</b>, and there's an engine which generates <b>Representation 2</b> based on the specification. <b>Representation 2</b>, in turn, is a runnable test scenario which can be picked up by Cucumber or any similar engine.
</p>
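As an illustration of how such generation might work, here is a minimal Java sketch. It is not part of Aerial; the class and method names are hypothetical. Given one valid sample value and a field's declared format regex, it derives the typical negative inputs visible in the Examples table above: an empty value, a doubled valid value, and the regex itself taken as a literal string.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of a negative test-data generator: given one valid
// sample and the declared format regex of a field, derive typical invalid inputs.
public class NegativeInputs {
    public static List<String> derive(String validSample, String formatRegex) {
        List<String> result = new ArrayList<>();
        result.add("");                        // empty input
        result.add(validSample + validSample); // doubled valid value breaks the format
        result.add(formatRegex);               // the regex itself, taken as a literal value
        return result;
    }

    public static void main(String[] args) {
        String emailRegex = "(\\w+)[@](\\w+)[.](\\w+)";
        for (String candidate : derive("test@mail.com", emailRegex)) {
            // every derived value violates the declared format, so each line prints false
            System.out.println("'" + candidate + "' matches: "
                    + Pattern.matches(emailRegex, candidate));
        }
    }
}
```

A real engine would also generate boundary and valid values; this sketch only shows that a small set of rules can mechanically expand one field definition into several test rows.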
<p>
If we have this, we don't really need to define most of the routine scenarios, just some rules for scenario generation. We can wrap such a description in some standard form which can be used as a requirement or specification document. This way, requirements are the basis for test scenario generation, and test scenarios are the basis for generating automated tests. If we change something in the requirements, the whole automated solution reacts to the modifications. That would really make them not just requirements but <b>Executable Requirements</b>.
</p>
<h1>Afterword</h1>
<p>
Everything described above is my opinion and vision of future growth in the direction of <b>Executable Requirements</b>. Some of the elements were described just as an idea, but they are something we should definitely be able to reach, or at least the next step to move towards. It doesn't really matter which form it takes; the main thing is that we should be able to generate test scenarios based on requirements represented as structured, readable text.
</p>
</body>
Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com2tag:blogger.com,1999:blog-2532302763215844416.post-44373285869364075132014-09-21T21:26:00.000+01:002015-01-18T17:34:23.937+00:00Measure Automated Tests Quality<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
td{border:1px solid black;}
th {background-color:#CCCCDD;border:1px solid black;}
table{border:1px solid black;border-collapse:collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
</head>
<body>
<h1>Introduction</h1>
<p>
There's one logical recursion I encounter in test automation. Test automation is about developing software targeted at testing some other software, so the output of test automation is yet another piece of software. This is one of the reasons for treating test automation as a development process (which is one of the best practices in test automation). But how are we going to make sure that the software we create for testing is good enough? Indeed, when we develop software we use testing (and test automation) as one of the tools for checking and measuring the quality of the software under test.
</p>
<p>
So, what about software we create for test automation?
</p>
<p>
On the other hand, we use testing to make sure that the software under test is of acceptable quality, and in the case of test automation we use other software for this. In some cases this software becomes complicated as well. So, how can we rely on untested software for drawing any conclusions about the target product we develop? Of course, we could keep test automation simple, but that's not a universal solution. So, we should find some compromise where we use reliable software to check the target software (the system under test). Also, we should find a way to determine how deep testing can go and how we can measure that.
</p>
<p>
So, the main questions which appear here are:
<ul>
<li> How can we identify that the automated tests we have are enough to measure the quality of the end product?
<li> How can we identify that our tests are really good?
<li> How can we keep quality control over our automated tests?
<li> How can we identify whether our tests are of acceptable complexity?
</ul>
In this article I'll try to answer many of these questions.
</p>
<a name='more'></a>
<h1>What are tests applied to?</h1>
<p>
Before starting to describe how we can measure our tests' quality, we should identify what exactly we should measure, or what our metrics should be based on.
The picture below shows the main artifacts tests are bound to:
<center>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-e_mo3aBLGeo/VB8zB7zSiHI/AAAAAAAAAnM/bzjnqvEO3eU/s1600/TestsDiagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-e_mo3aBLGeo/VB8zB7zSiHI/AAAAAAAAAnM/bzjnqvEO3eU/s400/TestsDiagram.png" /></a></div>
</center>
There are 3 main artifacts we operate with:
<ul>
<li> <b>Requirements</b> - any formal definition of how the system under test should work. It can be a dedicated document, a set of descriptions, or even simply information based on previous experience with similar systems. In any case, there should be some kind of description of how the system should behave.
<li> <b>Implementation</b> - the set of source code and corresponding resources which implements all the items defined in the requirements
<li> <b>Tests</b> - any form of instructions targeted at verifying the correspondence between the requirements and the actual behaviour of the system under test.
</ul>
So, requirements are the main source of the expected behaviour definition, the implementation is the actual reflection of the requirements, and tests are the artifacts verifying that the implementation matches the requirements. Tests can be bound both to requirements and to the implementation: requirements are verified by playing different kinds of scenarios at the system level, while the implementation is tested directly at the code level, where tests are bound to specific code components rather than to some part of the functionality.
</p>
<p>
Although the implementation is a reflection of the requirements, tests can be mapped not just to requirements but also to separate parts of the implementation which are not strictly bound to any part of the functionality. This is typically various auxiliary utility code used across the project: it is used by the functional parts representing business logic but is not dedicated to any of them. At the same time, it is necessary to cover such utilities with tests to make sure nothing is broken after any change, as such a change may affect the business logic implementation.
</p>
<p>
So, given all the above, tests cover requirements and should somehow be mapped to them. In addition, tests cover implementation modules and should be mapped to them as well. This is the basis for answering the next question.
</p>
<h1>How can we identify that the automated tests we have are enough to measure the quality of the end product?</h1>
<h2>How do we cover requirements?</h2>
<p>
There's a common practice for requirements coverage: the <a href="http://en.wikipedia.org/wiki/Traceability_matrix">Traceability Matrix</a>. It normally sets the correspondence between requirements and tests; in the case of test automation it also sets the correspondence to automated tests. So, this matrix can be represented with a table like:
<table>
<tr><th>Requirement ID</th><th>Test Case ID</th><th>Auto-test ID</th></tr>
<tr class="defrow"><td>REQ-1</td><td>TC-1</td><td>ATC-1</td></tr>
<tr class="defrow"><td>REQ-1</td><td>TC-1</td><td>ATC-2</td></tr>
<tr class="defrow"><td>REQ-1</td><td>TC-2</td><td>ATC-3</td></tr>
<tr class="defrow"><td>REQ-2</td><td>TC-3</td><td>ATC-4</td></tr>
<tr class="defrow"><td>REQ-3</td><td>TC-4</td><td class="undone">-</td></tr>
<tr class="defrow"><td>REQ-4</td><td class="undone">-</td><td class="undone">-</td></tr>
</table>
Cells in red indicate rows which either aren't covered by test cases or simply don't have an automated test. This is still a gap for test runs, as runs always show what is tested but not what is left uncovered.
</p>
<p>
In the general case, each requirement may have multiple test cases verifying different aspects of the requirement (e.g. positive/negative tests). Each test case may have multiple automated tests assigned, especially when the test case plays several scenarios.
</p>
<p>
With such a scheme we can't get a single simple measure saying how good we are at requirements coverage, especially for automated tests. All we can use are a few separate (loosely related) measures:
<ol>
<li> <b>Test Case coverage</b> - the ratio of requirements covered by test cases to the overall number of requirements. It can be reflected with the following formula:
<center><div class="rule">RCOV<sub>tc</sub> = R<sub>tc</sub>/R</div></center>
where:
<ul>
<li> RCOV<sub>tc</sub> - requirements coverage by test cases
<li> R<sub>tc</sub> - the number of requirements covered by test cases
<li> R - overall number of requirements
</ul>
<li> <b>Automated Tests coverage</b> - the part of the requirements covered by tests which have an automated implementation. It can be reflected with the following formula:
<center><div class="rule">RCOV<sub>atc</sub> = RCOV<sub>tc</sub> * TCCOV<sub>auto</sub> = RCOV<sub>tc</sub> * TC<sub>atc</sub>/TC</div></center>
where:
<ul>
<li> RCOV<sub>atc</sub> - requirements coverage by automated tests
<li> RCOV<sub>tc</sub> - requirements coverage by test cases
<li> TCCOV<sub>auto</sub> - test cases coverage by automated tests
<li> TC<sub>atc</sub> - the number of tests with automated implementation
<li> TC - total number of test cases
</ul>
<li> <b>Overall Requirements Satisfaction Rate</b> - the result we get after an entire test set run, showing which part of the requirements is met at all. The formula combines the previous values and looks like:
<center><div class="rule">ORSR = PassRate * RCOV<sub>tc</sub> * TCCOV<sub>auto</sub></div></center>
Where:
<ul>
<li> <b>ORSR</b> - Overall Requirements Satisfaction Rate value
<li> <b>PassRate</b> - the ratio of passed tests to the total number of tests executed
</ul>
<b>ORSR</b> is the final measure, and it actually indicates how good our system under test is. It reflects the portion of the functionality which is covered by tests and works as expected. E.g. if <b>ORSR</b> equals 0.7, it means that 70% of the entire application functionality is tested and works as expected.
</ol>
</p>
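The measures above can be sketched in a few lines of Java. This is just an illustration of the formulas (the class and method names are mine); the sample numbers follow the matrix above, where 3 of 4 requirements have test cases and 3 of 4 test cases have an automated implementation.

```java
// Minimal sketch of the coverage measures defined above; values are illustrative.
public class CoverageMetrics {
    // RCOVtc = Rtc / R : requirements coverage by test cases
    public static double rcovTc(int requirementsWithTests, int totalRequirements) {
        return (double) requirementsWithTests / totalRequirements;
    }

    // TCCOVauto = TCatc / TC : test case coverage by automated tests
    public static double tccovAuto(int automatedCases, int totalCases) {
        return (double) automatedCases / totalCases;
    }

    // ORSR = PassRate * RCOVtc * TCCOVauto
    public static double orsr(double passRate, double rcovTc, double tccovAuto) {
        return passRate * rcovTc * tccovAuto;
    }

    public static void main(String[] args) {
        double coverage = rcovTc(3, 4);     // REQ-1..REQ-3 covered, REQ-4 is not
        double automation = tccovAuto(3, 4); // TC-4 has no automated test
        // Even with a 100% pass rate, ORSR = 1.0 * 0.75 * 0.75 = 0.5625
        System.out.println("ORSR = " + orsr(1.0, coverage, automation));
    }
}
```

Note how quickly coverage gaps compound: two independent 75% coverage factors already cap ORSR at about 56%, no matter how many tests pass.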
<h2>How to make this measure more precise and simple?</h2>
<p>
The above measures have some distortions and inconsistencies for the following reasons:
<ol>
<li> A requirement is considered covered when at least one test is associated with it. But the requirement may be too general, and the test may cover just some part of it
<li> A test case is considered covered by automation when it has at least one automated test associated with it. If the test case involves several scenarios where only some of them have an automated implementation, it still counts, but the coverage number is not precise
<li> Coverage like this doesn't reflect cases which may arise from technical implementation specifics
</ol>
The first 2 items may be fixed with a proper split between requirements and tests; we can also tightly link the items to each other so that changes in requirements lead to changes in tests and their automated implementation. In the end we bring the proportion between requirements, tests and auto-tests to the value of 1:1:1. This can be reached with several major steps:
<h3>Requirements detailing</h3>
<p>
Each requirement is split into atomic items which require just a single check-point. In order to map requirements to tests better, it's best to perform such a split based on the testing techniques used. Thus, we can identify the range of valid inputs, improper inputs, border conditions etc. Once we have a definition of the expected behaviour for all of those cases, we can already make quite atomic and targeted tests. The above table will be transformed into something like:
<table>
<tr><th>Requirement ID</th><th>Test Case ID</th><th>Auto-test ID</th></tr>
<tr class="defrow"><td>REQ-1-1</td><td>TC-1-1</td><td>ATC-1</td></tr>
<tr class="defrow"><td>REQ-1-2</td><td>TC-1-2</td><td>ATC-2</td></tr>
<tr class="defrow"><td>REQ-1-3</td><td>TC-1-3</td><td>ATC-3</td></tr>
<tr class="defrow"><td>REQ-2-1</td><td>TC-2-1</td><td>ATC-4</td></tr>
<tr class="defrow"><td>REQ-3</td><td>TC-3</td><td class="undone">-</td></tr>
<tr class="defrow"><td>REQ-4</td><td class="undone">-</td><td class="undone">-</td></tr>
</table>
</p>
<h3>Map auto-test to test case</h3>
<p>
Make a 1:1 correspondence between each test scenario and its automated implementation so that it can be tracked easily. Thus, we'll get a matrix like:
<table>
<tr><th>Requirement ID</th><th>Test Case ID</th><th>Auto-test ID</th></tr>
<tr class="defrow"><td>REQ-1-1</td><td>TC-1-1</td><td>ATC-1-1</td></tr>
<tr class="defrow"><td>REQ-1-2</td><td>TC-1-2</td><td>ATC-1-2</td></tr>
<tr class="defrow"><td>REQ-1-3</td><td>TC-1-3</td><td>ATC-1-3</td></tr>
<tr class="defrow"><td>REQ-2-1</td><td>TC-2-1</td><td>ATC-2-1</td></tr>
<tr class="defrow"><td>REQ-3</td><td>TC-3</td><td class="undone">-</td></tr>
<tr class="defrow"><td>REQ-4</td><td class="undone">-</td><td class="undone">-</td></tr>
</table>
After such transformation the formula:
<center><div class="rule">ORSR = PassRate * RCOV<sub>tc</sub> * TCCOV<sub>auto</sub></div></center>
shows more or less reliable results, as the area we cover consists of granular requirement definitions covered with dedicated, equally granular tests.
</p>
<p>
But we still have untracked areas where we don't cover anything. When we run testing, our results won't include information about requirements coverage, so we should always track requirements and their correspondence to tests separately. Generally, this stage is quite OK, and a lot of projects stop here. But that doesn't mean it's really the maximum we can achieve.
</p>
<h3>Make test cases and automated implementation as a single unit</h3>
<p>
The idea is that each test case is created in a specific form which can be read and interpreted automatically by a test engine which runs specific test instructions based on the test case's step descriptions. This leads us to <a href="http://en.wikipedia.org/wiki/Keyword-driven_testing">Keyword-driven testing</a>, where each test case is a set of keywords processed by an automated engine. Thus, we collapse test cases and their automated implementation into a single unit, where the test case itself is just an input resource for the automated tests. After such a transformation our table looks like:
<table>
<tr><th>Requirement ID</th><th>Test ID</th></tr>
<tr class="defrow"><td>REQ-1-1</td><td>KTC-1-1</td></tr>
<tr class="defrow"><td>REQ-1-2</td><td>KTC-1-2</td></tr>
<tr class="defrow"><td>REQ-1-3</td><td>KTC-1-3</td></tr>
<tr class="defrow"><td>REQ-2-1</td><td>KTC-2-1</td></tr>
<tr class="defrow"><td>REQ-3</td><td class="undone">KTC-3</td></tr>
<tr class="defrow"><td>REQ-4</td><td class="undone">-</td></tr>
</table>
After such a transformation our initial formulas change a bit. In particular, the value of test case coverage by automated tests becomes 100% (or 1) by default, or, as a formula:
<center><div class="rule">TCCOV<sub>auto</sub> = TC<sub>atc</sub>/TC = 1</div></center>
or
<center><div class="rule">RCOV<sub>atc</sub> = RCOV<sub>tc</sub></div></center>
and then final ORSR value is now calculated as:
<center><div class="rule">ORSR = PassRate * RCOV<sub>tc</sub></div></center>
As seen from the table, we may still have problems with incomplete or missing coverage items (highlighted in red above). In this example the <b>REQ-4</b> requirement item is still not covered by any test, while the <b>REQ-3</b> item is already shown as covered. However, some of its steps may have no automated implementation yet. And this brings a major difference: a test with an incomplete automated implementation is no longer merely uncovered, it is failed. That requires a different attitude to correcting the situation: a failed test requires a fix. We also gain an advantage in maintenance: when we change the test scenario, the automated implementation picks up the changes immediately. So now there is no split between requirements, test cases and automated tests. We just have requirements and tests.
</p>
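<p>The keyword-driven engine described above can be sketched in a few lines of Python. The keyword names (<b>set</b>, <b>add</b>, <b>check</b>) and the sample test case are invented for illustration; a real engine would dispatch keywords to Selenium calls or similar:</p>

```python
# Minimal keyword-driven engine sketch: the test case is plain data,
# interpreted by a dispatcher. Keywords here are hypothetical.
results = []

KEYWORDS = {
    "set":   lambda state, name, value: state.update({name: int(value)}),
    "add":   lambda state, a, b: state.update({"result": state[a] + state[b]}),
    "check": lambda state, name, expected: results.append(state[name] == int(expected)),
}

def run_test_case(steps):
    """Interpret a test case written as keyword rows."""
    state = {}
    for keyword, *args in steps:
        KEYWORDS[keyword](state, *args)
    return all(results)

# The test case itself is readable by non-programmers:
ktc_1_1 = [
    ("set", "a", "2"),
    ("set", "b", "3"),
    ("add", "a", "b"),
    ("check", "result", "5"),
]
print(run_test_case(ktc_1_1))  # True
```

<p>Updating the test case data immediately updates the automated test, which is exactly the maintenance advantage described above.</p>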
<a id="make-requirements-executable"><h3>Make requirements executable</h3></a>
<p>
Previously we unified tests and automated tests, which collapsed the table to just two columns and two major items: requirements and tests. But what if requirements are written in such a way that the tests covering them are generated automatically in a form accessible for automated execution? This approach is called <a href="http://www.javacodegeeks.com/2013/06/acceptance-criteria-should-be-executable.html">Executable Requirements</a>. Thus, requirements are automatically expanded into test cases and test cases are expanded into automated tests. Eventually, we get a representation like:
<table>
<tr><th>Requirement ID</th></tr>
<tr class="defrow"><td>REQ-1-1</td></tr>
<tr class="defrow"><td>REQ-1-2</td></tr>
<tr class="defrow"><td>REQ-1-3</td></tr>
<tr class="defrow"><td>REQ-2-1</td></tr>
<tr class="defrow"><td class="undone">REQ-3</td></tr>
<tr class="defrow"><td class="undone">REQ-4</td></tr>
</table>
The most remarkable thing about this approach is that every test maps to some specific requirement, which leads to the following:
<center>
<div class="rule">
RCOV<sub>tc</sub> = 1
<br>ORSR = PassRate * RCOV<sub>tc</sub> = PassRate</div></center>
This is also clearly seen in the table above, where red-highlighted cells represent failed requirements, i.e. requirements which weren't met. The percentage of passed tests therefore explicitly indicates the percentage of requirements met. Thus, we have represented the entire requirement satisfaction metric with a single measurable value: the test pass rate.
</p>
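<p>One common way to make requirements executable is the Gherkin/BDD style, where the requirement phrasing itself drives the test. A hand-rolled Python sketch of the idea (the step pattern and matcher are illustrative, not a real Cucumber or behave setup):</p>

```python
import re

STEPS = []

def step(pattern):
    """Register a step implementation for a requirement phrase."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r"adding (\d+) and (\d+) gives (\d+)")
def check_addition(a, b, expected):
    assert int(a) + int(b) == int(expected)

def execute_requirement(text):
    """Match each requirement line to a step and run it; the requirement IS the test."""
    for line in text.strip().splitlines():
        for pattern, func in STEPS:
            match = pattern.search(line)
            if match:
                func(*match.groups())
                break
        else:
            raise AssertionError(f"requirement not covered: {line!r}")

# A requirement written as executable text:
execute_requirement("adding 2 and 3 gives 5")
print("requirement satisfied")
```

<p>A requirement line with no matching step fails immediately, so uncovered requirements surface as failures rather than silent gaps.</p>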
<h2>How do we cover implementation?</h2>
<p>
All the above relates to binding requirements to tests. But we haven't covered the implementation at all. In some cases we may have implementation parts which aren't covered by any requirement, or specifics which are not detailed in the requirements but exist in the code.
</p>
<p>
Why is this important? Suppose we keep only the ORSR metric. In this case we may get <span class="mark">100% coverage</span> even when <span class="mark">all tests are empty</span>, so that they don't do anything. To prevent such a situation we should also take into account <a href="http://en.wikipedia.org/wiki/Code_coverage">code coverage</a> metrics indicating that each specific code item is invoked at least once during the test run.
</p>
<p>
Mainly we can take line and branch coverage values as the most frequently used ones. We could also use class and function/method coverage, but those are effectively another reflection of the line coverage metric. We could involve more sophisticated coverage metrics as well, but that's a matter for a separate chapter. For now we'll just take the most frequently used metrics. So, the <b>Overall Code Coverage</b> may be calculated as the product of all independent coverage metrics. Since all coverage metrics take values from 0 to 1 (or from 0% to 100%), the final value also fits this range. So, the formula is:
<center><div class="rule">OCC = CCOV<sub>line</sub> * CCOV<sub>branch</sub></div></center>
where:
<ul>
<li> <b>OCC</b> - overall code coverage as an integrated measure of code coverage
<li> <b>CCOV<sub>line</sub></b> - code line coverage
<li> <b>CCOV<sub>branch</sub></b> - code branch coverage
</ul>
</p>
<p>
Now we can combine this with the Overall Requirements Satisfaction Rate to cover both requirements and implementation. Let's name this unified metric <b>Overall Product Satisfaction Rate (OPSR)</b>: the combined coverage of requirements and their implementation, which can also be interpreted as <b>Overall Product Readiness</b>. It is calculated as:
<div class="rule">
<ol>
<li>ORSR = PassRate
<li>OCC = CCOV<sub>line</sub> * CCOV<sub>branch</sub>
<li>OPSR = ORSR * OCC = PassRate * CCOV<sub>line</sub> * CCOV<sub>branch</sub>
</ol>
</div>
All the above metrics can be obtained automatically from test run and code coverage reports. The value itself can be interpreted as the percentage of product readiness for use, since we measure how well the product fits the requirements and correlate that with how thoroughly we cover the actual implementation.
</p>
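<p>With line and branch coverage figures taken from a coverage report, the formulas above reduce to a couple of multiplications. A sketch with made-up report values:</p>

```python
def opsr(pass_rate, line_cov, branch_cov):
    """OPSR = ORSR * OCC, where ORSR = PassRate and OCC = line * branch coverage."""
    occ = line_cov * branch_cov
    return pass_rate * occ

# Hypothetical figures: 95% of tests pass, 80% line and 70% branch coverage
print(opsr(0.95, 0.80, 0.70))  # ≈ 0.532
```

<p>Note how multiplication punishes weakness in any single factor: even a 95% pass rate collapses to roughly half once mediocre coverage is factored in.</p>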
<h2>Is that enough?</h2>
<p>
No. Although the coverage we measure is already comprehensive and covers different aspects of the system under test, there are still gaps which may lead to inconsistent and wrong interpretation of results. One thing left uncovered here is the tests themselves. The next paragraphs describe this in more detail.
</p>
<h1>How can we identify that our tests are really good?</h1>
<h2>When tests can be bad?</h2>
<p>
Let's take a look at a small example of a requirement, its implementation and the test covering it, to see why the <b>OPSR</b> metric is not enough to say that the system under test is of good quality. Let's say we have some system with a requirement that states:
<pre class="code">
Subtraction: for the given input A and B the result C is received as C = A - B
</pre>
Let's assume we have already described all necessary details regarding input format and acceptable values, and we already have tests for all those parts. Now let's concentrate on the operation itself. Its implementation may look like:
<pre class="code">
double subtract(double a, double b) {
return a <span class="mark">+</span> b;
}
</pre>
And now let's assume we have test which covers the implementation:
<pre class="code">
void testSubtract() {
subtract(2, 3);
}
</pre>
Firstly, note that the implementation sample uses the <b>+</b> operation, which is the opposite of subtraction. But also notice that the test simply invokes the operation without checking the result. If we measure overall coverage we'll see that the test covers all lines of the implementation, and it also covers the requirement. But as you can see, the functionality is wrong and the test doesn't detect that.
</p>
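<p>For contrast, here is what a fixed implementation and a meaningful test could look like. The original sample is Java-like; this is a Python sketch, and the assertions are the important part, since they are what allow the test to fail:</p>

```python
def subtract(a, b):
    return a - b  # the '+' bug is fixed

def test_subtract():
    # Check points: without these the test passes no matter what subtract does
    assert subtract(2, 3) == -1
    assert subtract(5, 5) == 0

test_subtract()
print("testSubtract passed")
```

<p>With the check points in place, the original <b>+</b> mutation would make the test fail immediately.</p>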
<p>
That's why the quality of our tests must also be estimated.
</p>
<h2>How can we detect that test is good?</h2>
<p>
There are several criteria indicating that each specific test is of good quality:
<ol>
<li> <b>Test does what it's supposed to do</b> - sometimes a test is designed for one thing but actually checks something else. This may happen for a bad reason (a mistake during automation) or a good one (a test case update without corresponding changes to the automated implementation). Either way, we should be able to control such situations;
<li> <b>Test operates with valid data</b> - when we design our tests we should make sure that we use proper input and proper expectations for the output. In some cases we may operate with improper data, or we may set improper results as expected (especially during test automation, when some people aim to make all tests pass assuming the data is correct rather than verifying data consistency).
<li> <b>Test has a sufficient number of check points</b> - it is a very frequent case that our tests have some check points, but not enough to verify all items in the entire output. So, we should make sure that our tests can detect any potential errors in the output results;
<li> <b>Test fails if the functionality under test is inaccessible or changed</b> - obviously, if the system under test doesn't work at all, a test interacting with it should fail. And if we replace a working module with something that doesn't work, there should be at least one test which detects that something has gone wrong;
<li> <b>Test is independent</b> - a test runs the same way both separately and in any combination with other tests. This is important because a lot of test engines (like any of the xUnit family or similar) give no guarantee about the sequence in which tests are performed. Additionally, we may need different sets of tests running in different situations. And finally, if one test depends on the results of another, isn't it more correct to treat those two tests as one?
<li> <b>Test runs the same way multiple times with the same result</b> - each test should be predictable and reliable. At the very least, it is useful to be able to reproduce a situation which happened during a test run.
</ol>
</p>
<p>
So, what methods may ensure the above items? Some of them are:
<ol>
<li> <b>Review</b> - the most universal way of confirming test quality, at least because it can be done anywhere and can be applied to the widest range of potential problems. At the same time it's one of the most time-consuming ways and it doesn't mitigate the human factor.
<li> <b>Cross-checks</b> - some tests may be designed so that they perform actions which produce similar or comparable results. So, additionally, we can reconcile the results by comparing related operations.
<pre class="code">
<b>Example:</b>
Imagine we have some module supporting 2 operations:
<b>Operation 1:</b> add(a, b) = a + b
<b>Operation 2:</b> mult(a, b) = a * b
We may add some tests verifying their functionality separately:
<b>Test 1:</b>
Expression <b>add(a, b) = c</b> is valid for a, b, c
<b>where</b>
<b>| a | b | c |</b>
| 1 | 1 | 2 |
| 2 | 0 | 2 |
...
<b>Test 2:</b>
Expression <b>mult(a, b) = c</b> is valid for a, b, c
<b>where</b>
<b>| a | b | c |</b>
| 1 | 1 | 1 |
| 2 | 0 | 0 |
...
At the same time the above operations are related, and multiplication can be expressed through addition, e.g.: 2 * 3 = 2 + 2 + 2 (add 2 three times).
</pre>
<li> <b>Resource sharing across independent teams</b> - this is rather a process item which means that input data and the automated test implementation are produced by different people independently. When two people work toward the same goal from different sides and their results match, it increases the probability that both did their work properly. At the very least it avoids the risk of adapting the data to the test from the implementation side, while strictly controlling the data definition. There are several examples of resource sharing:
<ul>
<li> <b>Input data for data-driven tests</b> - the test designer may prepare a data sheet with inputs and expected outputs while the test automation engineer works on a common workflow based on some test samples.
<li> <b>Keyword-driven or similar approaches</b> - with this approach the test designer creates test cases independently of the implementation. Test design and test automation here are separate activities. Thus, the test performs predictable actions with known and validated data.
</ul>
<li> <b><a href="http://mkolisnyk.blogspot.co.uk/2014/09/mutation-testing-overview.html">Mutation Testing</a></b> - a testing type based on artificial error injection, used to check how good the tests are at detecting potential problems we know about. This approach is quite time- and resource-consuming, but it can be fully delegated to the machine.
</ol>
</p>
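<p>The cross-check from the example above can itself be automated: for sampled inputs, verify that multiplication agrees with repeated addition. A Python sketch, assuming the <b>add</b> and <b>mult</b> operations of the hypothetical module:</p>

```python
def add(a, b):
    return a + b

def mult(a, b):
    return a * b

def test_mult_reconciles_with_add():
    """Cross-check: a * b must equal a added b times (non-negative integer b)."""
    for a in range(-3, 4):
        for b in range(0, 5):
            total = 0
            for _ in range(b):
                total = add(total, a)
            assert mult(a, b) == total, (a, b)

test_mult_reconciles_with_add()
print("cross-check passed")
```

<p>A bug in either operation breaks the reconciliation, so the two tests certify each other without either needing externally prepared expected values.</p>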
<p>
All those approaches have different ways and areas of influence. Also, some of the above items are techniques while others are process items, so it's hard to put them all in one place to see the entire picture. But the diagram below shows how each of them covers requirements, their implementation and the tests for them:
<center>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-EMIF7qSLMqs/VB8zeYieSHI/AAAAAAAAAnU/DexQFb16N50/s1600/TestsQualityCoverage.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-EMIF7qSLMqs/VB8zeYieSHI/AAAAAAAAAnU/DexQFb16N50/s400/TestsQualityCoverage.png" /></a></div>
</center>
As it's seen from the picture:
<ul>
<li> Review can be applied everywhere, not just to tests, and it can cover almost all aspects of the functionality and the tests for it
<li> Resource sharing and cross-checks touch all items to some degree. We can make various cross-checks to verify consistency between requirements, we can make more detailed tests based on the actual implementation, and we can verify the consistency of our tests. But these are rather technical and process items and they are not applied everywhere
<li> Mutation testing is targeted at covering the tests only
</ul>
</p>
<h2>What can we measure there?</h2>
<p>
Generally, most of the items listed above are about how to do things. Only one of them produces measurable results and says what should be covered, what's already covered, and by how much: <b><a href="http://mkolisnyk.blogspot.co.uk/2014/09/mutation-testing-overview.html">Mutation Testing</a></b> and the metric we can get from this practice. This metric can be called the <b>Mutations Coverage Rate</b>, and it shows how many of the potential mutations we can inject into the system under test are detected by the tests. We'll denote this value as <b>M%</b>.
</p>
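<p>A toy illustration of how the Mutations Coverage Rate could be computed: inject each mutation into the function under test, run the test suite against it, and count how many mutations were killed. This is a hand-rolled sketch of the idea, not how real mutation testing tools work internally:</p>

```python
def make_subtract(op):
    """Build the function under test with a (possibly mutated) operator."""
    return lambda a, b: op(a, b)

original = lambda a, b: a - b
mutations = [
    lambda a, b: a + b,      # operator replaced
    lambda a, b: b - a,      # operands swapped
    lambda a, b: a - b - 1,  # off-by-one
]

def test_suite(subtract):
    """Return True if all checks pass, i.e. the mutation survived."""
    try:
        assert subtract(2, 3) == -1
        assert subtract(5, 5) == 0
        return True
    except AssertionError:
        return False

assert test_suite(make_subtract(original))  # sanity: tests pass on the real code
killed = sum(1 for m in mutations if not test_suite(make_subtract(m)))
m_rate = killed / len(mutations)
print(f"M% = {m_rate:.0%}")  # all 3 mutations detected -> 100%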
<p>
Having this value calculated, we can say how thoroughly we check each line of system under test code we invoke. This value actually complements the results given by the <b>Overall Code Coverage</b> metric we obtained before. Earlier we also combined the <b>Overall Code Coverage</b> characteristic with requirements coverage and got the joint <b>Overall Product Satisfaction Rate</b> value. So now we can derive a new quality characteristic named <b>Overall Satisfaction Rate</b>, indicating our assurance that the requirements and implementation are covered. This value is calculated as:
<center><div class="rule">OSR = OPSR * M% = ORSR * OCC * M%</div></center>
Getting back to our requirements/implementation/tests relationship, all the metrics we calculated before may be expressed by their coverage and involvement with the following diagram:
<center>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-9F-qunQKnh0/VB8znsgwXNI/AAAAAAAAAnc/P4z34KCxXWI/s1600/OverallSatisfactionRate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-9F-qunQKnh0/VB8znsgwXNI/AAAAAAAAAnc/P4z34KCxXWI/s400/OverallSatisfactionRate.png" /></a></div>
</center>
To summarize all the values we have obtained, we need to identify what question each metric answers. They are:
<ul>
<li> <b>ORSR</b> - How many expectations are met at all?
<li> <b>OCC</b> - Which part of the entire application code did we invoke?
<li> <b>OPSR</b> - Did we check all capabilities of our system against the expectations? If not, which part of the actual system under test meets expectations?
<li> <b>M%</b> - How many potential problems do we cover and are ready to detect with our tests?
<li> <b>OSR</b> - Which part of the actual system under test are <b>we sure</b> meets expectations?
</ul>
Thus, we can measure the most essential things related to the coverage and quality of our test verifications:
<ul>
<li> We can detect and measure which requirements are covered well enough and which require more tests
<li> We can detect and measure which functionality wasn't implemented (non-covered requirements)
<li> We can detect and measure which tests require more check points
</ul>
So, with the above metrics we can no longer cheat with empty or incomplete tests, partially covered requirements, or partially covered implementation. We've obtained a metric which involves them all.
</p>
<h2>Is that enough?</h2>
<p>
No.
</p>
<p>
Firstly, the above metric is coverage-based and we used about 5 coverage metrics in it. But, for instance, the <a href="http://shop.snv.ch/Thematic-Fields/Information-technology-communication-technology/Information-technology-Office-machines/Software/Software-and-systems-engineering-Software-testing-Part-4-Test-techniques.html?lang=1">ISO/IEC/IEEE DIS 29119-4:2013</a> standard defines around 20 coverage metrics which can be applied depending on the techniques we use. And even if we integrated all of those metrics, we would still only minimize the probability of leaving something uncovered, as there can always be some coverage item which is a superposition of the items already used.
</p>
<p>
Secondly, it cannot be an absolute quality metric, as it doesn't cover technical aspects like maintainability, testability and many other software characteristics (<a href="http://docs.codehaus.org/display/SONAR/SIG+Maintainability+Model+Plugin">here is an example model for maintainability</a>).
</p>
<p>
So, there is always room for further activity. But we have a restricted budget, and we should always think not just about absolute coverage but about coverage of an acceptable level.
</p>
<h1>How can we keep quality control on our automated tests?</h1>
<h2>What to test in tests?</h2>
<p>
This is another main topic of the chapter. Since automated tests are another form of software, similar practices should be applied to them, and testing shouldn't be an exception. Logically, we should apply a similar approach. But subjectively, testing for testing looks like overhead. Imagine: we do testing for software, then testing for testing, then (if we keep the same logic) testing for testing for testing, and so on. It's insanity! We are not making software for the purpose of testing it. The initial software is the product we make; the tests for it are just intended to simplify our lives, not to complicate them.
</p>
<p>
What should we do here? The simplest way is to forget about such testing: everything works fine, I've checked that. Yes, we can always use an excuse like that. But in this chapter I'm looking for objective criteria stating that our testing solution is of appropriate quality. Earlier we described an entire way to measure system under test quality. So now imagine our testing solution is that system under test, and let's apply the same approach just <s>for lulz</s> to prove how our theory can be applied to a specific case.
</p>
<p>
Overall automated testing solution structure can be represented with the following diagram:
<center>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-m3vJwJM4qrY/VB8zxnOqohI/AAAAAAAAAnk/hBo5Z80C20A/s1600/TestSolutionStructure.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-m3vJwJM4qrY/VB8zxnOqohI/AAAAAAAAAnk/hBo5Z80C20A/s400/TestSolutionStructure.png" /></a></div>
</center>
Where:
<ul>
<li> <b>Engine</b> - the core driver of the system, responsible for test organization, execution, reporting and event handling. In some cases it's a completely external module (e.g. any engine of the xUnit family). In other cases it's something custom-written (possibly based on an existing engine).
<li> <b>Core Library</b> - a set of utility libraries and various wrapper functions which are not related to the application under test but operate at a higher level of abstraction than the engine. Typically these are data conversion functions, UI wrapper libraries, and other functions which are not specific to the application under test but exist to minimize copy/paste
<li> <b>Business Functions</b> - a set of functionality which reflects application-specific behavior and the actions to perform with the system under test
<li> <b>Tests</b> - the final implementation of test scenarios
</ul>
Here we have 2 major groups of items by their relation to application under test:
<ul>
<li> <b>Technology-specific</b> - the group of components which is not really bound to the application under test and can be applied to similar applications or applications using a similar technology stack
<li> <b>Application-specific</b> - the group of components which reflect the application under test functionality and cannot be used anywhere outside the application under test
</ul>
</p>
<h2>How can this all be tested?</h2>
<p>
Each of the test automation solution structure component types can have an individual testing approach. But mainly testing can be applied in the following way:
<table>
<tr><th>Group</th><th>Structure Component</th><th>Testing Approach</th></tr>
<tr><th rowspan="2">Technology-specific</th><th>Engine</th>
<td bgcolor="lightgreen">
There may be 2 major ways for testing this part:
<ul>
<li> <b>The engine is external software</b> - this is the normal case when we use existing engines. Here all we can do is rely on the existing functionality and/or use only the parts we trust
<li> <b>The engine is our internal software written as a part of the project</b> - in this case we should keep the engine as a separate library, stored and maintained in a separate location. It is then treated purely as a software component, and we can easily apply unit, integration, system or any other tests to it.
</ul>
</td></tr>
<tr><th>Core Library</th><td bgcolor="green">Since the core library is also a kind of software that can be used outside a specific project, we can treat it as a separate library and apply the same unit, integration and system tests to it, considering that we're not bound to any specific application</td></tr>
<tr><th rowspan="2">Application-specific</th><th>Business Functions</th><td bgcolor="tomato">Business functions are actually a reflection of the application under test functionality. So, the tests themselves are a kind of unit, integration, system or other tests for all those business functions.</td></tr>
<tr><th>Tests</th><td bgcolor="tomato">Normally each test is a kind of function which returns no value and accepts no parameters (or at least can be expanded to that form in the case of data-driven tests). The test result is either pass or fail, depending on whether we encounter an error during execution. So, if we imagine a hypothetical test which tests this test, it would be a single instruction call without anything else. But that's no different from a normal test run. So, if we want to make tests specifically for tests, we just need to make trial test runs on some test environment</td></tr>
</table>
From this table we can make several conclusions:
<ul>
<li>Lines highlighted in red show test automation solution components which do not require any additional tests to be created. <span class="mark">The testing solution tests itself</span>
<li>Green-highlighted lines reflect components which can be treated as separate software, and we can apply all the same practices we use for testing our application under test. So, <span class="mark">test solution components which are not specific to the application under test can be treated as separate software components which should be tested separately</span>.
</ul>
Given the above, we can conclude that the term <b>tests for tests</b> is not just weird-sounding but also refers to something which doesn't exist, as tests are testing themselves.
</p>
<p>
<table class="notetable">
<tr><th class="notehead">NOTE:</th></tr>
<tr class="notebody"><td>Actually, it's not entirely correct to say that application-specific functionality and resources never require separate testing activities. There may be different cases. E.g. in one of my previous projects we used to run tests verifying that our window definitions were up to date with the current application. That was done for GUI-level testing and served as a kind of unit test for that test type. Normally, though, if we talk about GUI testing there should be a separate test which simply navigates through different screens with minimal business actions and verifies that all controls which are supposed to be there actually exist. So, this doesn't break anything said above; it's more about the proper interpretation of tests</td></tr>
</table>
</p>
<h1>How can we identify if our tests are of acceptable complexity?</h1>
<p>
Good. We now know what to test and how to detect when we have a sufficient reliability level for our tests and all subsidiary components. Thus, we are confident not just about the quality of our system under test but also about the quality of the tools we use. But despite this confidence we shouldn't forget that our main goal is the development of the system under test, not the tests for it. So, if testing activities take more resources than actual development, probably something is wrong. On the technical side this problem may be caused by testing solution complexity. In order to control the situation and prevent such a problem we need to measure this complexity.
</p>
<p>
If we talk about code complexity, we can use a metric named <a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity">Cyclomatic Complexity</a>. For each function it shows the number of independent paths through the function. A common practice states that each method/function should have a <b>Cyclomatic Complexity Number (further CCN)</b> value less than or equal to 10. If the CCN is between 10 and 20 the method is moderately good. If it's higher, the method is treated as non-testable. This is a good metric for keeping our code granular. But we can also use it to compare the complexity of the testing solution with that of the application under test.
</p>
<h2>Complexity of tests</h2>
<p>
In previous paragraphs we defined some criteria of good tests. One of them sounds like:
<ul>
<li> Test runs the same way multiple times with the same result
</ul>
It means that each test has only one flow. This is reflected in the CCN value:
<center><div class="rule">CCN = 1</div></center>
So, a good test should have a single flow, at least at the highest level. Otherwise, we would have to cover it with unit tests, which is something we should avoid through proper test design. If we need to express this as a calculated metric, we can operate with a <b>Tests Simplicity Rate (further TSR)</b> which fits the following properties:
<ul>
<li> Each test has CCN >= 1
<li> In the most ideal way all tests have CCN = 1
<li> The more tests with CCN > 1 we have the less TSR value we have
</ul>
Given the above properties we can calculate TSR value as:
<center><div class="rule">TSR = TC<sub>atc</sub>/∑ CCN(i) , i ∈ 0..TC<sub>atc</sub></div></center>
Where:
<ul>
<li> <b>TSR</b> - tests simplicity rate value
<li> <b>CCN(i)</b> - CCN number of test with <b>i</b> index
<li> <b>TC<sub>atc</sub></b> - the number of automated tests
</ul>
Alternatively, we can use this formula:
<center><div class="rule">TSR = ∏ (1/CCN(i)) , i ∈ 0..TC<sub>atc</sub></div></center>
In this form the TSR value approaches 0 faster as the number of tests with CCN > 1 grows.
</p>
<p>
With the above calculations we can express test complexity with the TSR value, which is 100% when all tests have just one flow and near 0 when tests are too complicated.
</p>
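<p>Both TSR variants can be computed directly from the per-test CCN values, e.g. as reported by a static analysis tool. A Python sketch with made-up CCN lists:</p>

```python
def tsr_average(ccns):
    """TSR as the number of tests over the sum of their CCNs."""
    return len(ccns) / sum(ccns)

def tsr_product(ccns):
    """Stricter variant: product of 1/CCN; drops toward 0 much faster."""
    result = 1.0
    for ccn in ccns:
        result *= 1.0 / ccn
    return result

ideal = [1, 1, 1, 1]  # every test has a single flow
messy = [1, 2, 1, 4]  # two tests with branching logic

print(tsr_average(ideal), tsr_product(ideal))  # 1.0 1.0
print(tsr_average(messy), tsr_product(messy))  # 0.5 0.125
```

<p>The product form penalizes the same two branching tests four times harder, which makes it a better alarm bell when test logic starts accumulating conditions.</p>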
<h2>Complexity of subsidiary testing solution components</h2>
<p>
For subsidiary testing solution components like the <b>Engine</b> or <b>Core Library</b> there's one major criterion of acceptable complexity: the subsidiary module should have less complexity than the application under test. This criterion applies only to modules developed as a part of the project, so that e.g. we don't need to measure the complexity of JUnit if we use it. But as soon as we write a custom extension of any JUnit class, we should take it into account while calculating complexity.
</p>
<p>
For a better comparison we can aggregate the CCN numbers for all the code of the system under test and do the same for the subsidiary module. After that we can get the <b>Test Component Simplicity Rate (TCSR)</b> value using the following formula:
<center><div class="rule">TCSR = 1 - CCN<sub>Agg Test</sub>/CCN<sub>Agg SUT</sub></div></center>
This number can even be negative. In any case, if it reaches 0 or below, the testing solution is too complicated.
</p>
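<p>TCSR follows directly from the aggregated CCN figures. A sketch with hypothetical aggregates:</p>

```python
def tcsr(ccn_test_component, ccn_sut):
    """Test Component Simplicity Rate: 1 - CCN_test / CCN_SUT."""
    return 1 - ccn_test_component / ccn_sut

print(tcsr(300, 1200))   # 0.75: test component far simpler than the SUT
print(tcsr(1500, 1200))  # -0.25: negative, the test solution is too complicated
```

<p>The negative case is the red flag described above: the subsidiary module has outgrown the system it is supposed to test.</p>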
<h2>Is that enough?</h2>
<p>
No. The above characteristic is based on a single factor. We can include many more factors to make the measure more precise and informative. And the main thing of interest should be the value that any testing effort brings; everything revolves around that value.
</p>
<h1>Where to go next?</h1>
<p>
In this chapter we've described several testing solution quality metrics which give us visibility into how good we are at testing. Eventually, we managed to consolidate multiple metrics into one to give a short and compact result. We may involve many other metrics and consolidate them too, but we should always take the following into account:
<ol>
<li> No matter how many metrics we add, there are always areas to grow in. So, if we haven't reached the top we should expand our testing to reach it. If we have reached the top, we need to find some other metrics.
<li> We should always interpret results properly. 100% doesn't always mean a perfect result
<li> Any number we get should be used for a purpose. We should clearly understand what each number shows and what it doesn't
</ol>
Thus, we'll be able to collect many other technical metrics. But what we should also concentrate on is the value we bring with all our efforts. That is the much more visible part of our activity. But that's a separate story.
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-48995981908836852812014-09-07T01:05:00.002+01:002014-09-07T01:08:26.100+01:00Mutation Testing Overview<html>
<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.example {border:1px dashed black;background-color:#EEEEEE;margin:1em;padding:1em}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {border:1px solid black;background-color:#CCCCDD;}
td{border:1px solid black;background-color:white;}
table{border:1px solid black;border-collapse: collapse;}
.defrow:nth-child(even) {background: #CCC}
.defrow:nth-child(odd) {background: #FFF}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
.java {background-color:#b07219;font-weight:bold;color:black;border-radius:25px;border:1px solid black}
.csharp {background-color:#5a25a2;font-weight:bold;color:white;border-radius:25px;border:1px solid black}
.ruby {background-color:#701516;font-weight:bold;color:white;border-radius:25px;border:1px solid black}
.jscript {background-color:#f1e05a;font-weight:bold;color:black;border-radius:25px;border:1px solid black}
.python {background-color:#3581ba;font-weight:bold;color:white;border-radius:25px;border:1px solid black}
</style>
<title>Mutation Testing Overview</title>
</head>
<body>
<h1>Introduction</h1>
<p>
It's always good to have the entire application code covered with tests. It's also nice to track the features we implement and test. All of this gives an overall picture of how well actual behaviour corresponds to expectations, which is one of the main goals of testing. But there are cases when coverage metrics don't work or don't show the real picture. For example, we can have tests which invoke some specific functionality and provide 100% coverage by any accessible measure, yet none of the tests contains any verifications. That means we potentially have problems there, but nothing alerts us about them. Or we may have a lot of empty tests which are always green. We can still use coverage metrics to find out whether the existing number of tests is enough for some specific module, but additionally we should make sure that our tests are good at detecting potential problems. One way to achieve that is to inject a modification which is supposed to lead to an error and make sure our tests are able to detect the problem. The approach of certifying tests against an intentionally modified application is called <b>Mutation Testing</b>.
</p>
<p>
The main feature of this testing type is that we do not discover application issues but rather certify tests for their error-detection abilities. Unlike "traditional testing", we initially know where the bug is expected to appear (as we insert it ourselves), and we have to make sure that our testing system is capable of detecting it. So, mutation testing mainly targets the quality of the tests. The above examples of empty test suites or tests without verifications are corner cases, and they are quite easy to detect. But in real life there's an interim stage when tests have verifications but still leave gaps. To make testing solid and reliable we need to close such gaps, and mutation testing is one of the best ways to detect them.
</p>
<p>
In this article I'll describe the main concepts of mutation testing as well as potential ways to perform this testing type, with all the relevant pros and cons.
</p>
<a name='more'></a>
<h1>Main Definitions</h1>
<p>
Mutation testing has its own specific terminology we should know about. The definitions below cover the core terms of mutation testing, so let's define them for future use.
<table>
<tr><th>Term</th><th>Definition</th></tr>
<tr class="defrow"><td> <b>Mutation</b> </td><td> in the current context, a single intentional modification of the application under test, targeted at making sure that at least some small group of tests can detect the impact caused by such a change</td></tr>
<tr class="defrow"><td> <b>Equivalent Mutations</b> </td><td> changes to the application under test which do not impact actual functionality: the modified code behaves exactly like the original. No test can ever kill such a mutant, so we may get a wrong interpretation of the results</td></tr>
<tr class="defrow"><td> <b>Killed Mutant</b> </td><td> a mutant which was detected by at least one test</td></tr>
<tr class="defrow"><td> <b>Alive Mutant</b> </td><td> a mutant which was left undetected by every test performed. Such a mutant indicates an error situation similar to a bug missed in "traditional" testing</td></tr>
</table>
</p>
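<p>To make the terms above concrete, here is a minimal, purely illustrative sketch (not tied to any particular tool): two hand-written mutants of a tiny function, a weak test which leaves both alive, and a stricter test which kills both.</p>

```python
# Illustrative sketch only: names and tests are invented for this example.

def original(value):
    return value + 2

def mutant_minus(value):   # mutant: '+' replaced by '-'
    return value - 2

def mutant_offset(value):  # mutant: constant 2 replaced by 1
    return value + 1

def weak_test(fn):
    return fn(10) > 0        # only checks the sign: too loose a verification

def strict_test(fn):
    return fn(10) == 12      # checks the exact expected value

mutants = (mutant_minus, mutant_offset)
alive_after_weak = [m.__name__ for m in mutants if weak_test(m)]
killed_by_strict = [m.__name__ for m in mutants if not strict_test(m)]
print(alive_after_weak)    # both mutants survive the weak test
print(killed_by_strict)    # both mutants are killed by the strict test
```

The weak test passes for every mutant, so both mutants stay alive and expose a verification gap; the strict test kills both.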
<h1>Mutation Classifications</h1>
<p>
Mutation testing can be performed in different ways depending on several factors, such as:
<ul>
<li> The type of the mutation itself
<li> The way the mutant is generated and the area it's applied to
<li> The way tests are selected to verify against the mutant
<li> The way the mutant is injected into the application under test
<li> The way processing continues after a mutant is detected
</ul>
Let's describe all the above factors in more detail.
</p>
<h2>By Kinds of Mutation</h2>
<h3>Value Mutations</h3>
<p>
These mutations change the values of constants or parameters (by adding or subtracting values, etc.). Loop bounds are a typical target: being one off at the start or finish is a very common error.
</p>
<h4>Example:</h4>
<div class="example">
<p>
<b>Before Mutation:</b>
<pre class="code">
OFFSET = 2
value = parameter + OFFSET
</pre>
<b>After Mutation:</b>
<pre class="code">
OFFSET = 2
value = parameter <span class="mark">-</span> OFFSET
</pre>
<b>Or:</b>
<pre class="code">
OFFSET = <span class="mark">1</span>
value = parameter + OFFSET
</pre>
</p>
</div>
<h3>Decision Mutations</h3>
<p>
These mutations modify conditions to reflect potential slips and errors in the coding of conditions, e.g. a typical mutation might be replacing a > with a < in a comparison.
</p>
<h4>Example:</h4>
<div class="example">
<p>
<b>Before Mutation:</b>
<pre class="code">
if value < 100 {
value++
}
</pre>
<b>After Mutation:</b>
<pre class="code">
if value <span class="mark">></span> 100 {
value++
}
</pre>
<b>Or:</b>
<pre class="code">
if value <span class="mark"><=</span> 100 {
value++
}
</pre>
</p>
</div>
<h3>Statement Mutations</h3>
<p>These mutations might involve deleting certain lines to reflect omissions in coding, or swapping the order of lines of code. There are other operations too, e.g. changing operations in arithmetic expressions. A typical omission might be to omit the increment of some variable in a while loop.
</p>
<h4>Example:</h4>
<div class="example">
<p>
<b>Before Mutation:</b>
<pre class="code">
x = 100
value = x + 20
x += 50
</pre>
<b>After Mutation:</b>
<pre class="code">
x = 100
<span class="mark">x += 50</span>
<span class="undone">value = x + 20</span>
</pre>
</p>
</div>
<h2>By Mutant generation</h2>
<p>Defines where the classes are analysed and mutants created.</p>
<h3>Source Code</h3>
<p>
The mutation is applied to the source code. After that the application is re-built and restarted.
</p>
<p>
<ul>
<li> <b>Advantages</b>
<ul>
<li> <b>Simplicity</b> - everything starts from the source code. All you have to do is modify some part of it (which is plain text) and run the application after the change. Everything you operate with is accessible in a readable format.
<li> <b>A large range of mutations can be generated this way</b> - this follows from the previous point.
<li> <b>Mutations can closely mimic the types of error a programmer might make</b> - and for a good reason: a lot of problems appear due to an improper operator, an improper condition or some other change to the code. Here the same kind of change is introduced automatically.
<li> <b>The mutations made can be clearly described and understood</b> - since we're operating on a text representation, we can clearly define what we change.
<li> <b>Wide applicability</b> - this approach is applicable to many different languages, including scripting languages which are not compiled into a machine-specific form.
</ul>
<li> <b>Disadvantages</b>
<ul>
<li> <b>Generating mutations this way is relatively slow</b> - most of the time is lost on the necessary re-compilation
<li> <b>Mutants must be written to disk</b> - this limits the methods by which they can be inserted
<li> <b>In theory a mutant class could be accidentally released</b> - this is normally mitigated by preparing a separate workspace for mutation testing, but in general the risk exists.
</ul>
</ul>
</p>
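<p>The source-code approach can be sketched in a few lines. The following is a hypothetical illustration, assuming mutants are produced by plain text substitution and "re-built" by evaluating the modified source:</p>

```python
# Hypothetical sketch: source-level mutant generation by text substitution.
import re

SOURCE = "def calc(parameter):\n    OFFSET = 2\n    return parameter + OFFSET\n"

def generate_mutants(source):
    """Yield one mutated copy of the source per '+' occurrence."""
    for match in re.finditer(r"\+", source):
        yield source[:match.start()] + "-" + source[match.end():]

def run_tests(source):
    """'Build' the code (exec stands in for re-compilation) and run a test."""
    namespace = {}
    exec(source, namespace)
    return namespace["calc"](3) == 5  # the test expects 3 + 2 == 5

# A mutant is killed when the test fails against it.
killed = sum(1 for mutant in generate_mutants(SOURCE) if not run_tests(mutant))
print(killed)  # -> 1: the single '+' -> '-' mutant is detected
```

Real tools parse the source properly instead of using regular expressions, but the flow (modify text, rebuild, re-run tests) is the same.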
<h3>Byte Code</h3>
<p>
The mutation is applied to the byte code or some other compiled representation.
</p>
<p>
<ul>
<li> <b>Advantages</b>
<ul>
<li> <b>Generally much faster</b> - we simply don't lose time on compilation
<li> <b>Can potentially create mutants without access to source files</b> - if we operate on the byte code directly, we don't need the source code at all.
<li> <b>The same mutation operators can in theory work for other languages</b> - e.g. there are many languages based on the Java or .NET runtimes. If we modify the byte-code representation of the application under test, we don't have to account for much language specificity, so similar changes can be applied across multiple languages.
</ul>
<li> <b>Disadvantages</b>
<ul>
<li> <b>Applicability</b> - we cannot apply this to scripting languages, which simply have no compiled representation
<li> <b>Difficult to implement</b> - this approach requires knowledge of the internal structure of compiled modules, which is not really visible from the source code.
<li> <b>Errors representativeness</b> - mutants generated this way correspond less closely to errors that would be introduced in real life. So, there's a risk of producing a mutant which would never happen in practice, and analysing such mutants takes time
</ul>
</ul>
</p>
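<p>As a rough illustration of mutating a compiled representation instead of source text, the sketch below alters a constant directly in a compiled Python code object. The idea is analogous to byte-code mutation, though real tools in this category work on JVM or .NET byte code:</p>

```python
# Sketch: mutate a compiled code object without touching the source text.
# Requires Python 3.8+ for CodeType.replace.
import types

def calc(parameter):
    OFFSET = 2
    return parameter + OFFSET

code = calc.__code__
# Replace the constant 2 with 1 directly in the compiled representation.
mutated_consts = tuple(1 if c == 2 else c for c in code.co_consts)
mutant = types.FunctionType(code.replace(co_consts=mutated_consts), {})

print(calc(3), mutant(3))  # -> 5 4
```

Note how the mutant is created with no access to the source: only the compiled object is inspected and rewritten.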
<h2>By Test selection</h2>
<p>in which tests are selected to run against the mutants</p>
<h3>Manual</h3>
<p>
The selection is done manually. Normally this is done to check only some specific places which are of the greatest interest to end users.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> We're always aware of changes we introduce
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Slow and hardly applicable for regular runs, as it still requires human interaction
<li> Can only be used to determine the coverage of individual classes
</ul>
</ul>
</p>
<h3>Naive</h3>
<p>
The selection is done automatically by passing through all possible locations. The idea is simple: we try to insert a mutation wherever it seems appropriate, exercising various combinations.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Can already be used on a fully automated basis
<li> Provides high coverage
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Slow, as it goes step by step, introducing mutations even at places we're not interested in
<li> Time may be lost chasing mutations which are not covered by tests, while we could get the same information from a simple code coverage analysis
</ul>
</ul>
</p>
<h3>Convention Based</h3>
<p>
The selection is done automatically based on some specific conventions. Normally a similar convention is used in unit testing, where tests are created and named after the class under test, so the structure of the tests replicates the structure of the components under test. If we have such a convention, we can use it to select tests for mutation testing.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Faster than the naive approach, as it's more targeted at the areas where the mutation occurs
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Not all tests which actually cover the mutated code may be involved
<li> There may still be problems with mutants which are not covered by tests
<li> Works badly when conventions are not stable or applicable (e.g. it's hard to set the correspondence between code and integration/system tests, which exercise areas of functionality)
</ul>
</ul>
</p>
<h3>Coverage Based</h3>
<p>
Tests are selected based on code coverage analysis. In other words, for each specific mutation we select only those tests which cover the changed code part.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Faster than any of the above approaches, as it narrows down the number of tests to run: only tests which cover the mutated code are executed
<li> Gives a clear picture of the entire test coverage in combination with code coverage
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Requires code coverage analysis in addition to the main activities
<li> The approach is generally more complicated
</ul>
</ul>
</p>
<h2>By Mutant insertion</h2>
<p>Identifies how mutations are inserted into the target system</p>
<h3>Naive</h3>
<p>
Each mutant is generated, and each time an instance of the application under test is started from scratch. This can be done either by changing the source code and re-compiling, or in memory; the main point is that each time we start a new instance.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> <b>Reliability</b> - this method works everywhere
<li> Mutants will be active during the construction of static state (singletons, static initializers etc)
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> This method is relatively slow, as the application under test must be restarted each time
</ul>
</ul>
</p>
<h3>Mutant schemata</h3>
<p>
The main idea is that a single generated class contains all the mutants, which are then enabled programmatically.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Relatively fast in comparison to the naive approach
<li> Works reliably everywhere
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Mutants will <b>NOT</b> be active during the construction of static state (singletons, static initializers etc)
<li> The approach itself is more complicated than the naive one
</ul>
</ul>
</p>
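<p>The schemata idea can be sketched as follows: all mutants live in one artifact, guarded by an identifier which the test driver switches at run time. The names and mutants here are illustrative:</p>

```python
# Illustrative sketch of mutant schemata: every mutant is compiled in,
# and the driver activates them one at a time via an identifier.
ACTIVE_MUTANT = None  # set by the mutation-testing driver

def calc(parameter):
    if ACTIVE_MUTANT == "M1":    # mutant 1: '+' replaced by '-'
        return parameter - 2
    if ACTIVE_MUTANT == "M2":    # mutant 2: offset 2 replaced by 1
        return parameter + 1
    return parameter + 2         # original code

results = {}
for mutant in ("M1", "M2"):
    ACTIVE_MUTANT = mutant
    results[mutant] = "killed" if calc(3) != 5 else "alive"
ACTIVE_MUTANT = None             # restore original behaviour
print(results)  # -> {'M1': 'killed', 'M2': 'killed'}
```

Since no rebuild happens between mutants, the loop runs much faster than restarting the application for each mutant; the price is that code executed before the driver sets the flag (static state construction) can never be mutated.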
<h3>Debugger hotswap</h3>
<p>
In some cases it's possible to use debug information to insert the changes we need; at the very least this is applicable to variable values. All potential mutations are held in memory and inserted using the debugger API.
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Potentially increased performance
<li> No risk of an accidental release with mutants
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Performance: depending on the implementation, there may be either a gain or a loss
<li> Mutants are not active during static state construction
<li> There may be problems with support for such an API
</ul>
</ul>
</p>
<h3>Instrumentation API</h3>
<p>
Similar to debugger hotswap, but each mutant is applied using an instrumentation API
</p>
<p>
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Fast
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Mutants are not active during static state construction
<li> There may be problems with such API support
</ul>
</ul>
</p>
<h3>Others</h3>
<p>
In addition to the above types there may be others based on technology specifics. E.g. for Java we can override class loaders to control the mutant insertion process. But such approaches are technology-specific and are not a subject of this article.
</p>
<h2>By Mutant detection</h2>
<p>in which the selected tests are run against the loaded mutant</p>
<h3>Naive</h3>
<p>
All planned tests are executed against each mutant.
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Gives a wider picture of code coverage by test checkpoints. You can see how many tests are affected by each specific mutation, i.e. how many times you cover the same code part.
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Slow, as we have to run all the tests each time, while only some of them are really of interest
</ul>
</ul>
</p>
<h3>Early exit (coarse)</h3>
<p>
Test classes are run until any of them detects the injected bug.
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Faster than the naive approach, as it runs no longer than needed to find the first error
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> All tests within a class are run to completion, so it is slower than a more fine-grained approach
</ul>
</ul>
</p>
<h3>Early exit (fine)</h3>
<p>
Individual test cases are run until the first one detects the injected bug.
<ul>
<li> <b>Advantages:</b>
<ul>
<li> Faster than above approaches
</ul>
<li> <b>Disadvantages:</b>
<ul>
<li> Some overhead required to split the test cases
<li> Splitting tests out of classes may cause issues with some JUnit extensions
</ul>
</ul>
</p>
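<p>The difference between naive detection and early exit can be sketched as follows (the tests and mutant below are illustrative): with early exit, execution stops at the first test which detects the mutant, so the remaining tests are never run.</p>

```python
# Illustrative sketch: early-exit mutant detection vs. running everything.

def run_until_killed(tests, mutant):
    """Early-exit detection: return (killed?, number of tests executed)."""
    for executed, test in enumerate(tests, start=1):
        if not test(mutant):           # a failing test means the mutant is detected
            return True, executed
    return False, len(tests)

tests = [
    lambda f: isinstance(f(3), int),   # too weak to notice the change
    lambda f: f(3) == 5,               # detects the mutant
    lambda f: f(10) == 12,             # never reached under early exit
]
killed, executed = run_until_killed(tests, lambda p: p - 2)  # mutant of p + 2
print(killed, executed)  # -> True 2
```

Naive detection would execute all three tests for this mutant; early exit stops after the second, at the cost of losing the "how many tests caught it" statistic.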
<h1>Side Effects of Mutation Testing</h1>
<p>
<ul>
<li> <b>Stability testing</b> - in most cases mutation testing leads to running the entire test suite multiple times in different combinations, so we can verify that our tests are stable and reliable. Also, all this time we're working with the system under test, so we get additional information about how the entire system behaves over a long period of intensive use
<li> <b>Potential bug-fix detection</b> - mutation testing is almost the only testing type which changes the quality of the system under test by itself. In most cases it changes it for the worse (making existing tests fail), but occasionally an applied mutation may turn out to fix a known problem. The probability of this is really small and we shouldn't expect it to happen, but such surprises do occur.
<li> <b>Coverage checks</b> - by running our tests against different mutations we can see how good our code coverage really is. Mutation testing results are another reflection of code coverage; the main difference is that now we also know how well we check what we cover. In effect, we obtain additional coverage metrics, such as:
<ul>
<li><b>"checkpoint coverage"</b> - the percentage of lines/branches which are really checked by tests. Like other coverage metrics, this value varies between 0 and 1 (or within the range from 0 to 100%)
<li><b>"checkpoints per code line"</b> - shows how many times each specific line is checked by tests. It can be calculated as the ratio between the total number of checkpoints affected and the number of lines of effective code. In particular, it estimates how effectively we cover the functionality with tests. Ideally, this measure should be near 1, indicating that each line and each condition has at least one check. Of course, there could also be values reflecting the distribution of checkpoints between different code parts and other statistical measures.
</ul>
</ul>
</p>
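<p>The two metrics above can be computed directly from per-line checkpoint counts; the counts below are hypothetical:</p>

```python
# Sketch: computing "checkpoint coverage" and "checkpoints per code line"
# from a hypothetical map of code line -> number of checkpoints hitting it.
checks_per_line = {10: 2, 11: 0, 12: 1, 13: 3}

covered = sum(1 for hits in checks_per_line.values() if hits > 0)
checkpoint_coverage = covered / len(checks_per_line)
checkpoints_per_code_line = sum(checks_per_line.values()) / len(checks_per_line)

print(checkpoint_coverage)          # -> 0.75 (line 11 is never checked)
print(checkpoints_per_code_line)    # -> 1.5
```

Here the 0.75 coverage pinpoints line 11 as unchecked, while the 1.5 average shows the remaining checkpoints are unevenly distributed.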
<h1>Existing Systems Overview</h1>
<p>
<table class="notetable">
<tr><th class="notehead">NOTE</th></tr>
<tr><td class="notebody">The information below is taken from official documentation or other similar sources. Also, some of the criteria used are not clearly applicable in some specific cases, which may cause incomplete or imprecise information to be provided. So, if you find any mismatches or missing information here, please let the author know</td></tr>
</table>
</p>
<p>
<table>
<tr>
<th rowspan="3">System Name</th><th rowspan="3">Technology</th><th rowspan="3">Active?</th><th colspan="16">Mutations supported</th>
</tr>
<tr><th colspan="3">By Kinds of Mutation</th><th colspan="2">By Mutant generation</th><th colspan="4">By Test selection</th><th colspan="4">By Mutant insertion</th><th colspan="3">By Mutant detection</th></tr>
<tr><th>V<br>a<br>l<br>u<br>e<br> <br>M<br>u<br>t<br>a<br>t<br>i<br>o<br>n<br>s</th><th>D<br>e<br>c<br>i<br>s<br>i<br>o<br>n<br> <br>M<br>u<br>t<br>a<br>t<br>i<br>o<br>n<br>s</th><th>S<br>t<br>a<br>t<br>e<br>m<br>e<br>n<br>t<br> <br>M<br>u<br>t<br>a<br>t<br>i<br>o<br>n<br>s</th><th>S<br>o<br>u<br>r<br>c<br>e<br> <br>C<br>o<br>d<br>e</th><th>B<br>y<br>t<br>e<br> <br>C<br>o<br>d<br>e</th><th>M<br>a<br>n<br>u<br>a<br>l</th><th>N<br>a<br>i<br>v<br>e</th><th>C<br>o<br>n<br>v<br>e<br>n<br>t<br>i<br>o<br>n<br> <br>B<br>a<br>s<br>e<br>d</th><th>C<br>o<br>v<br>e<br>r<br>a<br>g<br>e<br> <br>B<br>a<br>s<br>e<br>d</th>
<th>N<br>a<br>i<br>v<br>e</th><th>M<br>u<br>t<br>a<br>n<br>t<br> <br>s<br>c<br>h<br>e<br>m<br>a<br>t<br>a</th><th>D<br>e<br>b<br>u<br>g<br>g<br>e<br>r<br> <br>h<br>o<br>t<br>s<br>w<br>a<br>p</th><th>I<br>n<br>s<br>t<br>r<br>u<br>m<br>e<br>n<br>t<br>a<br>t<br>i<br>o<br>n<br> <br>a<br>p<br>i</th>
<th>N<br>a<br>i<br>v<br>e</th><th>E<br>a<br>r<br>l<br>y<br> <br>e<br>x<br>i<br>t<br> <br>(<br>c<br>o<br>a<br>r<br>s<br>e<br>)</th><th>E<br>a<br>r<br>l<br>y<br> <br>e<br>x<br>i<br>t<br> <br>(<br>f<br>i<br>n<br>e<br>)</th>
</tr>
<tr><td><a href="http://pitest.org">PIT</a></td><td><span class="java"> Java </span></td>
<td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td></tr>
<tr><td><a href="http://jester.sourceforge.net/">Jester</a></td><td><span class="java"> Java </span></td>
<td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="http://jester.sourceforge.net/">Simple Jester</a></td><td><span class="java"> Java </span></td>
<td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="http://jumble.sourceforge.net/">Jumble</a></td><td><span class="java"> Java </span></td>
<td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td></tr>
<tr><td><a href="http://cs.gmu.edu/~offutt/mujava/">μJava</a></td><td><span class="java"> Java </span></td>
<td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="http://www.st.cs.uni-saarland.de/mutation/">JavaLanche</a></td><td><span class="java"> Java </span></td>
<td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td></tr>
<tr><td><a href="http://galera.ii.pw.edu.pl/~adr/CREAM/ref.php">CREAM</a></td><td><span class="csharp"> C# </span></td>
<td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="http://www.mutation-testing.net/">NinjaTurtles</a></td><td><span class="csharp"> C# </span></td>
<td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="">Nester</a></td><td><span class="csharp"> C# </span></td>
<td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="http://visualmutator.apphb.com/">Visual Mutator</a></td><td><span class="csharp"> C# </span></td>
<td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="https://github.com/saltlab/mutandis/">Mutandis</a></td><td><span class="jscript"> JavaScript </span></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="http://knishiura-lab.github.io/AjaxMutator/">AjaxMutator</a></td><td><span class="jscript"> JavaScript </span></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td></tr>
<tr><td><a href="https://www.npmjs.org/package/grunt-mutation-testing">Grunt</a></td><td><span class="jscript"> JavaScript </span></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="https://github.com/mbj/mutant">Mutant</a></td><td><span class="ruby"> Ruby </span></td>
<td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="http://ruby.sadi.st/Heckle.html">Heckle</a></td><td><span class="ruby"> Ruby </span></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img 
src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="https://pypi.python.org/pypi/MutPy/0.4.0">MutPy</a></td><td><span class="python"> Python </span></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img 
src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="https://pypi.python.org/pypi/pymutester">PyMuTester</a></td><td><span class="python"> Python </span></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img 
src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
<tr><td><a href="https://github.com/sk-/elcap">Nose plugin</a></td><td><span class="python"> Python </span></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td> <td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://3.bp.blogspot.com/-V0U_HZTYz6o/VAugSFo5q9I/AAAAAAAAAms/EL5ea09ofc0/s1600/yes.png"/></td><td><img src="http://1.bp.blogspot.com/-L95NLI-T3bA/VAugVOZemqI/AAAAAAAAAm0/KNIKbJywEck/s1600/no.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td><td><img 
src="http://2.bp.blogspot.com/-JxHdGuVfmLo/VAufzzMOPxI/AAAAAAAAAmk/NRVilJ9TTHI/s1600/unknown2.png"/></td></tr>
</table>
</p>
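<p>
For readers new to the technique, the comparison above may be easier to follow with a toy sketch of what a mutation tool actually does. The code below is purely illustrative (all names are made up and it is not the API of any tool listed); real tools such as PIT or MutPy rewrite bytecode or ASTs and apply many mutation operators:
</p>

```python
# Toy illustration of the mutation-testing idea (illustrative only).

def mutate(expr: str) -> str:
    """A trivial mutation operator: replace '+' with '-'."""
    return expr.replace("+", "-")

def total(a, b):
    """The original code under test."""
    return a + b

# Build the "mutant" version of total() from its mutated source.
mutant = eval("lambda a, b: " + mutate("a + b"))

# A good test suite "kills" the mutant: it passes against the original
# implementation but fails against the mutated one.
assert total(2, 3) == 5    # original behaviour is correct
assert mutant(2, 3) != 5   # the mutant is detected, i.e. killed
print("mutant killed")
```

<p>
A tool's mutation score is then the fraction of generated mutants that the test suite kills; surviving mutants point at untested behaviour.
</p>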
<h1>References</h1>
<p>
<ol>
<li> <a href="http://pitest.org/java_mutation_testing_systems/">Mutation Testing Systems for Java Compared</a> <span class="java"> Java </span>
<li> <a href="http://pitest.org/">Real world mutation testing</a> <span class="java"> Java </span>
<li> <a href="http://tocea.com/blog/2012/11/12/mutation-analysis-of-java-programs-with-pit/">How do you test your tests? - Mutation analysis of Java programs with PIT</a> <span class="java"> Java </span>
<li> <a href="http://java.dzone.com/articles/how-do-you-test-your-tests-–">How do you test your tests? - Mutation analysis of Java programs with PIT</a> <span class="java"> Java </span>
<li> <a href="http://madeyski.e-informatyka.pl/download/Madeyski10b.pdf">Judy - A Mutation Testing Tool for Java</a> <span class="java"> Java </span>
<li> <a href="http://cs.gmu.edu/~offutt/mujava/">μJava Home Page</a> <span class="java"> Java </span>
<li> <a href="http://www.javacodegeeks.com/2012/04/introduction-to-mutation-testing-with.html">Introduction to mutation testing with PIT and TestNG</a> <span class="java"> Java </span>
<li> <a href="http://mutation-testing.org/">The Major mutation framework. Easy and scalable mutation testing for Java!</a> <span class="java"> Java </span>
<li> <a href="http://blog.jdriven.com/2014/03/joy-coding-mutation-testing-java/">Joy of Coding... and mutation testing in Java</a> <span class="java"> Java </span>
<li> <a href="https://www.st.cs.uni-saarland.de/mutation/">Mutation Testing</a> <span class="java"> Java </span>
<li> <a href="http://galera.ii.pw.edu.pl/~adr/CREAM/ref.php">CREAM - CREAtor of Mutants</a> <span class="csharp"> C# </span>
<li> <a href="http://www.mutation-testing.net/">NinjaTurtles - .NET mutation testing</a> <span class="csharp"> C# </span>
<li> <a href="http://msdn.microsoft.com/en-us/magazine/hh148145.aspx">MSDN Magazine - Super-Simple Mutation Testing</a> <span class="csharp"> C# </span>
<li> <a href="http://nester.sourceforge.net/">Nester: What is this?</a> <span class="csharp"> C# </span>
<li> <a href="https://www.simple-talk.com/dotnet/.net-tools/mutation-testing/">Simple Talk - Mutation Testing</a> <span class="csharp"> C# </span>
<li> <a href="http://visualmutator.apphb.com/">Visual Mutator - Visual Studio Mutation Testing Tool</a> <span class="csharp"> C# </span>
<li> <a href="https://github.com/saltlab/mutandis/">Mutandis - GitHub project</a> <span class="jscript"> JavaScript </span>
<li> <a href="http://knishiura-lab.github.io/AjaxMutator/">AjaxMutator</a>. <a href="https://github.com/knishiura-lab/AjaxMutator">AjaxMutator - Related GitHub Project</a> <span class="jscript"> JavaScript </span>
<li> <a href="http://www.slideshare.net/nkazuki/mutation-analysis-for-javascript-web-applicaitons-testing-seke2013">SlideShare - Mutation Analysis for JavaScript</a> <span class="jscript"> JavaScript </span>
<li> <a href="https://www.npmjs.org/package/grunt-mutation-testing">Grunt Mutation Testing</a>. <a href="https://github.com/shybyte/grunt-mutation-testing">Related GitHub project</a> <span class="jscript"> JavaScript </span>
<li> <a href="https://github.com/mbj/mutant">Mutant - GitHub project</a>. <span class="ruby"> Ruby </span> . Related posts:
<ul>
<li> <a href="http://www.sitepoint.com/mutation-testing-mutant/" >Mutation testing with Mutant</a> <span class="ruby"> Ruby </span>
<li> <a href="http://solnic.eu/2013/01/23/mutation-testing-with-mutant.html" >Mutation testing with Mutant</a> <span class="ruby"> Ruby </span>
</ul>
<li> <a href="http://www.isotope11.com/blog/mutation-testing-in-ruby">Mutation Testing in Ruby</a> <span class="ruby"> Ruby </span>
<li> <a href="http://carlopecchia.eu/blog/2009/03/02/mutation-testing-with-heckle/">Mutation Testing with Heckle</a> <span class="ruby"> Ruby </span>
<li> <a href="https://speakerdeck.com/sinjo/mutation-testing-ruby-edition">Mutation Testing - Ruby Edition</a> <span class="ruby"> Ruby </span>
<li> <a href="https://pypi.python.org/pypi/MutPy/0.4.0">MutPy project site</a> <span class="python"> Python </span>
<li> <a href="https://pypi.python.org/pypi/pymutester">PyMuTester project site</a> <span class="python"> Python </span>
<li> <a href="https://miketeo.net/wp/index.php/projects/python-mutant-testing-pymutester">Python Mutant Testing (PyMuTester)</a> <span class="python"> Python </span>
<li> <a href="https://github.com/sk-/elcap">Nose plugin for mutation testing</a> <span class="python"> Python </span>
<li> <a href="http://www-inst.eecs.berkeley.edu/~selfpace/cs9honline/Q2/mutation.html">Mutation in Python</a> <span class="python"> Python </span>
</ol>
</p>
</body>
</html>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com1tag:blogger.com,1999:blog-2532302763215844416.post-62180956089682402013-12-26T20:43:00.000+00:002013-12-26T20:43:48.936+00:00Sirius: 1 year retrospective<a href="http://mkolisnyk.blogspot.co.uk/2012/12/sirius-ice-is-broken.html">1 year ago</a> I started working on the <a href="http://mkolisnyk.blogspot.co.uk/search/label/Sirius">Sirius</a> project. <a href="http://mkolisnyk.blogspot.co.uk/2013/05/sirius-half-year-milestone-retrospective.html">6 months ago</a> there was the first retrospective, where I summarized the achievements and future goals. Generally, the last 6 months were not the most productive; nevertheless, there were still some achievements. So, in this post I'll summarize again where the <a href="http://mkolisnyk.blogspot.co.uk/search/label/Sirius">Sirius</a> project stands now.
<a name='more'></a>
<h1>What was done</h1>
<p>
<ul>
<li> <b>Win32 extended support was added</b> - earlier I mentioned that Win32 support had already been added; however, that was not completely accurate. There were problems with extended controls (such as tab controls and list boxes), which are now resolved
<li> <b>Basis for .NET controls support was added</b> - I had to create a <a href="http://mkolisnyk.blogspot.com/2013/06/sirius-c-adding-ui-automation-library.html">UI Automation wrapper</a> for the Sirius libraries to cover Win32 functionality. This library is already in place, so it can be used to cover WinForms and WPF controls quite smoothly
<li> <b>Test tracking was integrated into GitHub and hooked into the build process</b> - the build process now uses GitHub to get information about tests, so we can update verifications simply by changing the appropriate issues
<li> <b>Documentation was extended</b> - a lot of attention was paid to the documentation, and the GitHub page now hosts a separate <a href="http://mkolisnyk.github.io/Sirius">project site</a> with many examples
</ul>
</p>
<h1>What wasn't done</h1>
<p>
<ul>
<li> <b>Still only 3 programming languages are covered</b> - this remains a problem, as it requires duplicated effort in client API development
<li> <b>Platform coverage expanded very slowly</b> - as mentioned above, the last 6 months were less productive than the previous ones, and platform coverage reflects that
<li> <b>Existing functional coverage is still not good enough</b> - this is mostly due to the need to duplicate development work across clients written in different programming languages.
</ul>
</p>
<h1>What's needed to be done</h1>
<p>
<ul>
<li> Extend functional coverage of existing modules
<li> Extend coverage to the .NET (WinForms, WPF) areas as the basis is already created
<li> Extend programming language coverage
<li> Add ability to call the client API from Excel
</ul>
</p>
<h1>Summary</h1>
<p>
Frankly, I expected better results. However, a lot of fundamental groundwork was required, and much of that groundwork is already done (it took most of the time). Nevertheless, the project keeps evolving and growing, so I expect faster progress in the upcoming 6 months.
</p>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-63696583153802191412013-10-27T17:37:00.000+00:002013-10-27T17:37:12.860+00:00GitHub: integrating project into entire GitHub infrastructure<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
</head>
<body>
<p>
GitHub provides quite a good set of tools that can support the full development cycle and prepare the infrastructure for all necessary output artifacts. Recently, I described how to <a href="http://mkolisnyk.blogspot.com/2013/09/github-organizing-and-automating-test.html">provide integration with the GitHub issue tracker</a> and how to <a href="http://mkolisnyk.blogspot.co.uk/2013/09/github-test-tracking-via-maven-plugin.html">wrap it with a Maven plugin</a>. So now I have a dedicated Maven plugin project which I can use on its own. To make it as self-contained as needed, we should:
<ol>
<li> Set up new project in the GitHub which includes:
<ol>
<li> New repository setup
<li> Build script preparation
<li> Release settings
</ol>
<li> Prepare project documentation generation
<li> Integration with CI
</ol>
In the end, all we have to do is work in our IDE, adding new code or changing existing modules. Everything else is handled by the GitHub infrastructure as soon as we push our changes to the remote repository.
</p>
<p>
Not a bad plan for a start. So, let's get started.
</p>
<a name='more'></a>
<h1>Set up new project</h1>
<h2>New repository setup</h2>
<p>
New GitHub repository creation is clearly described in the help pages, in particular <a href="https://help.github.com/articles/create-a-repo">here</a>.
</p>
<p>
Once it's cloned locally it can be integrated with the development environment. In my case it's Eclipse and all necessary settings for it can be found <a href="http://mkolisnyk.blogspot.co.uk/2012/12/sirius-dev-environment-setup.html#config_eclipse">here</a>.
</p>
<p>
Then we can create a new Maven project in Eclipse to hold all the sources we have. Here is an example of how to <a href="http://www.tech-recipes.com/rx/39279/create-a-new-maven-project-in-eclipse/">Create a New Maven Project in Eclipse</a>.
</p>
<p>
OK. After all the above steps I have a project like <a href="https://github.com/mkolisnyk/sirius-maven-plugins">this</a> where I keep all the sources for the plugin I'm going to create.
</p>
<h2>Build script preparation</h2>
<p>
All build logic is defined in the Maven <b>pom.xml</b> file. Its nodes and attributes are defined to support the following:
<ul>
<li> Build should pass through the following stages:
<ul>
<li> clean
<li> compile
<li> package
<li> install
<li> release:prepare, release:perform
</ul>
<li> All informational sections should fit the <a href="https://docs.sonatype.org/display/Repository/Sonatype+OSS+Maven+Repository+Usage+Guide#SonatypeOSSMavenRepositoryUsageGuide-6.CentralSyncRequirement">Central Sync Requirements</a>
<li> All dependencies are limited to the functionality needed to <a href="http://mkolisnyk.blogspot.com/2013/09/github-organizing-and-automating-test.html">provide integration with the GitHub issue tracker</a> and to <a href="http://mkolisnyk.blogspot.co.uk/2013/09/github-test-tracking-via-maven-plugin.html">wrap it with a Maven plugin</a>
</ul>
So, the base <b>pom.xml</b> file looks like this (mouse over each section to see the explanation):
<pre class="code">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<a title="Main settings identifying Maven grouping information as well as the name of Maven artifacts. So, in this example the plugin may be invoked as: com.github.mkolisnyk:sirius-maven-plugin" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><modelVersion>4.0.0</modelVersion>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>sirius-maven-plugin</artifactId>
<name>sirius-maven-plugin</name></a>
<version>1.1-SNAPSHOT</version>
<a title="This node identifies packaging type. Since current project should be delivered as Maven plugin the value of this node is set to 'maven-plugin' correspondingly." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><packaging>maven-plugin</packaging></a>
<a title="The list of licenses in use. This field is informational and needed mostly for documentation generation unless you have some other specific options" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><licenses>
<license>
<name>The Apache Software License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses></a>
<a title="Informational section containing information about developers and contributors for the project" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><developers>
<developer>
<id>mkolisnyk</id>
<name>mkolisnyk</name>
<email>kolesnik.nickolay@gmail.com</email>
</developer>
</developers></a>
<a title="This is a Sonatype requirement to make projects interact properly with the OSS Sonatype repository" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><parent>
<groupId>org.sonatype.oss</groupId>
<artifactId>oss-parent</artifactId>
<version>7</version>
</parent></a>
<a title="Source Control System information containing SCM URLs as well as tag names which should be generated during release process" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'">
<scm>
<connection>scm:git:https://github.com/mkolisnyk/sirius-maven-plugins.git</connection>
<developerConnection>scm:git:https://github.com/mkolisnyk/sirius-maven-plugins.git</developerConnection>
<url>https://github.com/mkolisnyk/sirius-maven-plugins.git</url>
<tag>v1.0</tag>
</scm></a>
<a title="Informational field containing data about the issue tracking system in use" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'">
<issueManagement>
<system>GitHub</system>
<url>https://github.com/mkolisnyk/sirius-maven-plugins/issues</url>
</issueManagement>
</a>
<a title="Additional informational field restricting the set of Maven versions to use. Now it's restricted to Maven version 2.0" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'">
<prerequisites>
<maven>2.0</maven>
</prerequisites>
</a>
<a title="Informational field which is used during documentation generation as the project start year" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'">
<inceptionYear>2013</inceptionYear>
</a>
<build>
<a title="Locations of main and test sources" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'">
<sourceDirectory>src/main/java</sourceDirectory>
<testSourceDirectory>src/test/java</testSourceDirectory>
</a>
<resources>
<resource>
<directory>src</directory>
<excludes>
<exclude>**/*.java</exclude>
</excludes>
</resource>
<resource>
<directory>target/dependency</directory>
<excludes>
<exclude>**/*.java</exclude>
</excludes>
</resource>
</resources>
<plugins>
<a title="This plugin is used for Maven plugin development. In particular in this example we define various generation options for Maven plugin we develop." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-plugin-plugin</artifactId>
<version>3.2</version>
<configuration>
<!-- see http://jira.codehaus.org/browse/MNG-5346 -->
<skipErrorNoDescriptorsFound>true</skipErrorNoDescriptorsFound>
</configuration>
<executions>
<execution>
<id>mojo-descriptor</id>
<goals>
<goal>descriptor</goal>
</goals>
</execution>
<!-- if you want to generate help goal -->
<execution>
<id>help-goal</id>
<goals>
<goal>helpmojo</goal>
</goals>
</execution>
</executions>
</plugin></a>
<a title="Identifies compilation related options. Currently the Java version is defined." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin></a>
<a title="Responsible for component installation into local repository" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-install-plugin</artifactId>
<version>2.3.1</version>
<configuration>
<file>target/${project.artifactId}-${project.version}.jar</file>
<groupId>${project.groupId}</groupId>
<artifactId>${project.artifactId}</artifactId>
<version>${project.version}</version>
<packaging>maven-plugin</packaging>
</configuration>
</plugin></a>
<a title="Responsible for JAR packaging. In particular we define the list of files to include/exclude and JAR manifest file options." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<excludes>
<exclude>*</exclude>
<exclude>com/thoughtworks/**/*</exclude>
<exclude>freemarker/**/*</exclude>
<exclude>ftl/**/*</exclude>
<exclude>i18n/**/*</exclude>
<exclude>style/**/*</exclude>
<exclude>junit/**/*</exclude>
<exclude>licenses/**/*</exclude>
<!-- <exclude>META-INF/maven/**/*</exclude> -->
<exclude>org/codehaus/**/*</exclude>
<exclude>org/apache/velocity/**/*</exclude>
<exclude>org/hamcrest/**/*</exclude>
<exclude>org/jbehave/**/*</exclude>
<exclude>org/junit/**/*</exclude>
<exclude>org/testng/**/*</exclude>
<exclude>org/xmlpull/**/*</exclude>
<exclude>stories/**/*</exclude>
<exclude>style/**/*</exclude>
<exclude>tests/**/*</exclude>
</excludes>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
<addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
<!-- <addExtensions>true</addExtensions> -->
<classpathLayoutType>repository</classpathLayoutType>
<mainClass>sirius.utils.retriever.Program</mainClass>
</manifest>
</archive>
</configuration>
</plugin></a>
<a title="Configuration for the Maven assembly plugin defining the assembly descriptors to apply during packaging" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>-deps</descriptorRef>
</descriptorRefs>
</configuration>
</plugin></a>
<a title="Activate dependencies retrieval before any compilation." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>unpack-dependencies</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack-dependencies</goal>
</goals>
</execution>
</executions>
</plugin></a>
<a title="Activates release related operations." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.4</version>
</plugin></a>
</plugins>
<a title="Additional section where various plugin related settings are defined. Here it is needed to perform life-cycle mapping. Otherwise Eclipse shows errors." onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><pluginManagement>
<plugins>
<!--This plugin's configuration is used to store Eclipse m2e settings
only. It has no influence on the Maven build itself. -->
<plugin>
<groupId>org.eclipse.m2e</groupId>
<artifactId>lifecycle-mapping</artifactId>
<version>1.0.0</version>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
<pluginExecution>
<pluginExecutionFilter>
<groupId>
org.apache.maven.plugins
</groupId>
<artifactId>
maven-dependency-plugin
</artifactId>
<versionRange>
[2.1,)
</versionRange>
<goals>
<goal>
unpack-dependencies
</goal>
</goals>
</pluginExecutionFilter>
<action>
<ignore />
</action>
</pluginExecution>
</pluginExecutions>
</lifecycleMappingMetadata>
</configuration>
</plugin>
</plugins>
</pluginManagement></a>
</build>
<dependencies>
<a title="Contains libraries for GitHub API usage. It's needed for building GitHub based reports" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><dependency>
<groupId>org.kohsuke</groupId>
<artifactId>github-api</artifactId>
<version>1.41</version>
</dependency></a>
<a title="The set of dependencies which are typical inclusion libraries for Maven plugins development and reporting" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><dependency>
<groupId>org.apache.maven.plugin-tools</groupId>
<artifactId>maven-plugin-annotations</artifactId>
<version>3.2</version>
<!-- annotations are not needed for plugin execution so you can remove
this dependency for execution with using provided scope -->
<scope>provided</scope>
</dependency>
<!-- generated help mojo has a dependency to plexus-utils -->
<dependency>
<groupId>org.codehaus.plexus</groupId>
<artifactId>plexus-utils</artifactId>
<version>3.0.1</version>
</dependency>
<dependency>
<groupId>org.apache.maven</groupId>
<artifactId>maven-plugin-api</artifactId>
<version>2.0</version>
</dependency>
<dependency>
<groupId>org.apache.maven.reporting</groupId>
<artifactId>maven-reporting-api</artifactId>
<version>2.0.8</version>
</dependency>
<dependency>
<groupId>org.apache.maven.reporting</groupId>
<artifactId>maven-reporting-impl</artifactId>
<version>2.0.4.3</version>
</dependency></a>
<a title="Contains libraries used for Cucumber execution reports generation" onmouseover="this.style='background-color:#DDDDDD'" onmouseout="this.style='background-color:silver'"><dependency>
<groupId>net.masterthought</groupId>
<artifactId>cucumber-reporting</artifactId>
<version>0.0.21</version>
</dependency></a>
</dependencies>
</project>
</pre>
</p>
<h2>Release settings</h2>
<p>
In order to prepare the entire project for release we should apply some additional settings. For this purpose we update the <b>maven-release-plugin</b> section as follows:
<pre class="code">
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.4</version>
<span class="mark"><configuration>
<tagNameFormat>v@{project.version}</tagNameFormat>
<preparationGoals>package install:install-file</preparationGoals>
</configuration></span>
</plugin>
</pre>
This configuration defines the goals to be performed during release preparation and the template used for tag names during the release process.
</p>
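Just to illustrate how the <b>tagNameFormat</b> template expands (the version value here is made up), the <b>@{project.version}</b> placeholder is substituted with the version being released:

```shell
# Illustration only: mimic tagNameFormat "v@{project.version}" for a 1.1 release.
project_version="1.1"        # made-up version for the example
tag="v${project_version}"    # the tag name maven-release-plugin would create
echo "$tag"                  # prints: v1.1
```

So releasing version 1.1 would create the tag v1.1 in the repository.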
<h1>Prepare project documentation generation</h1>
<p>
Documentation is an essential part of any software product. Without it, users will have difficulties using the software. Documentation also contains useful reference information.
</p>
<p>
If you take a look at any Maven plugin documentation, you'll see that the documentation structure is pretty similar across plugins and follows a common template. That's because it is generated by the Maven site plugin during execution of the Maven <b>site</b> goal.
</p>
<p>
Additionally, GitHub provides the ability to create a project site where all necessary documentation is stored. This is implemented by <a href="https://help.github.com/categories/20/articles">GitHub Pages</a>. The idea is pretty simple: each GitHub project may have a special branch named <b>gh-pages</b> which is used as the storage for the project site. GitHub Pages takes the <b>gh-pages</b> branch content and serves it as the site content.
</p>
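To make the branch mechanics concrete, here is a hedged sketch of creating such a content branch by hand in a throwaway repository (the <b>site-maven-plugin</b> described later automates this; all names and messages are illustrative):

```shell
# Sketch: gh-pages is an orphan branch, i.e. it shares no history with master.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git checkout -q --orphan gh-pages   # new branch with an empty history
git symbolic-ref --short HEAD       # prints: gh-pages
```

Whatever gets committed on this branch is what GitHub Pages serves as the project site.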
<h2>Adding Maven reports</h2>
<p>
During the build process there are various activities which produce reporting artifacts. These can be static analysis tools like PMD, CPD, Checkstyle, code coverage analysis, etc. There are also additional generators like Javadoc. All those reports may be triggered during site generation. The Maven <b>pom.xml</b> file has a dedicated <b>reporting</b> section where all reports to be generated are defined. For our project it looks like:
<pre class="code">
<reporting>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-pmd-plugin</artifactId>
<version>3.0.1</version>
<configuration>
<excludes>
<exclude>**/plugin/**/*.java</exclude>
</excludes>
<excludeRoots>
<excludeRoot>target/generated-sources</excludeRoot>
</excludeRoots>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jxr-plugin</artifactId>
<version>2.3</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-plugin-plugin</artifactId>
<version>3.2</version>
</plugin>
</plugins>
</reporting>
</pre>
In this example we're adding 3 reporting items:
<ol>
<li> <b>maven-pmd-plugin</b> - generates static code analysis report
<li> <b>maven-jxr-plugin</b> - used for generating XRef resources (actually, source code with links to some specific lines)
<li> <b>maven-plugin-plugin</b> - generates plugin information
</ol>
After adding these entries and running the following command:
<pre class="console">
mvn site
</pre>
we'll get the project site generated under the <b>target/site</b> directory. It will contain the set of base project reports (most of the information is taken from the informational fields) plus the additional reports defined in the <b>reporting</b> section of <b>pom.xml</b>.
</p>
<h2>Prepare project site structure</h2>
<p>
But the default set of project documentation pages is usually not enough. E.g. the <b>About</b> page is mainly built from the <b>description</b> field in the <b>pom.xml</b> file, so you can't put too much information into it. Additionally, we may want some usage samples or other descriptive pages. So, we should configure:
<ul>
<li> The entire set of pages to use
<li> Some custom static pages
</ul>
For this purpose we should create a <b>src/site</b> folder in our project. General information can be found in the <a href="http://maven.apache.org/guides/mini/guide-site.html">Maven Guide for Creating a Site</a>. Mainly, we should build a folder structure like:
<pre>
+- src/
+- site/
+- apt/
| +- index.apt
|
+- fml/
| +- general.fml
| +- faq.fml
|
+- site.xml
</pre>
Where:
<ul>
<li> <b>site.xml</b> - the XML file containing the template for the entire site structure
<li> <b>apt/</b> - the folder containing documentation files in the <a href="http://maven.apache.org/doxia/references/apt-format.html">APT format</a>, a plain-text markup format. During site generation each APT file is converted to an HTML file with the same name.
<li> <b>fml/</b> - the folder containing FAQ files in the FML format, an XML-based format
</ul>
There are some other formats available, but I've described the main ones I'm going to use. Templates and samples for these pages can be found in the <a href="http://maven.apache.org/guides/development/guide-plugin-documentation.html">Guide to the Plugin Documentation Standard</a> document.
</p>
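As a hedged illustration (the page content is invented), a minimal <b>apt/index.apt</b> could look like this; the dashed header block holds the title and author, and section headings start in the first column:

```
 ------
 Introduction
 ------
 Myk Kolisnyk
 ------

Sirius Maven Plugins

  The plugins generate test definitions from GitHub issue content.

* Goals Overview

  * The <<generate>> goal retrieves issue content and produces output files.
```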
<p>
I'm not going to describe all the files, but I'll pay closer attention to the <b>site.xml</b> file, which contains the following structure for the current project:
<pre class="code">
<?xml version="1.0" encoding="UTF-8"?>
<project>
<body>
<span class="mark"><menu name="Overview">
<item name="Introduction" href="index.html"/>
<item name="Goals" href="plugin-info.html"/>
<item name="Usage" href="usage.html"/>
<item name="FAQ" href="faq.html"/>
</menu>
<menu name="Examples">
<item name="description1" href="examples/example1.html"/>
<item name="description2" href="examples/example2.html"/>
</menu></span>
<span style="background-color:tomato"><menu ref="reports"/></span>
</body>
</project>
</pre>
There are 2 sections which require attention here:
<ol>
<li> The section marked in yellow contains custom menu items which link to static pages generated during the site generation process.
<li> The section marked in red references the area where all standard reports are placed. If you remove this section, the Maven site plugin will still produce all reports, but they will not appear in the left navigation menu.
</ol>
</p>
<h2>Additional Custom reports</h2>
<p>
There may be some reports or artifacts produced outside the tasks defined in the <b>reporting</b> section of the <b>pom.xml</b>. E.g. in addition to any other reports I'd like to have a GitHub change history report. It can be provided by the <b>maven-gitlog-plugin</b>. So, I just have to add the following entry to the <b>build</b> section:
<pre class="code">
<plugin>
<groupId>com.github.danielflower.mavenplugins</groupId>
<artifactId>maven-gitlog-plugin</artifactId>
<version>1.5.0</version>
<configuration>
<reportTitle>Changelog for ${project.name} version
${project.version}</reportTitle>
<verbose>true</verbose>
<outputDirectory>target/site/changelog</outputDirectory>
<generatePlainTextChangeLog>true</generatePlainTextChangeLog>
<plainTextChangeLogFilename>changelog-${project.version}.txt</plainTextChangeLogFilename>
<generateSimpleHTMLChangeLog>true</generateSimpleHTMLChangeLog>
<markdownChangeLogFilename>changelog-${project.version}.md</markdownChangeLogFilename>
<generateMarkdownChangeLog>true</generateMarkdownChangeLog>
<simpleHTMLChangeLogFilename>changelog.html</simpleHTMLChangeLogFilename>
<generateHTMLTableOnlyChangeLog>true</generateHTMLTableOnlyChangeLog>
<htmlTableOnlyChangeLogFilename>changelog-tableonly.html</htmlTableOnlyChangeLogFilename>
<issueManagementSystem>GitHub issue tracker</issueManagementSystem>
<issueManagementUrl>https://github.com/mkolisnyk/sirius-maven-plugins/issues</issueManagementUrl>
<fullGitMessage>true</fullGitMessage>
<dateFormat>yyyy-MM-dd HH:mm:ss Z</dateFormat>
</configuration>
<executions>
<execution>
<goals>
<goal>generate</goal>
</goals>
</execution>
</executions>
</plugin>
</pre>
So, it should generate the <b>target/site/changelog/changelog.html</b> file containing the change history. It would be useful to add this report to the Maven site, so I'm just adding the following entry to the <b>site.xml</b> file:
<pre class="code">
<?xml version="1.0" encoding="UTF-8"?>
<project>
<body>
<menu name="Overview">
<item name="Introduction" href="index.html"/>
<item name="Goals" href="plugin-info.html"/>
<item name="Usage" href="usage.html"/>
<item name="FAQ" href="faq.html"/>
<span class="mark"><item name="Change Log" href="changelog/changelog.html"/></span>
</menu>
<menu name="Examples">
<item name="description1" href="examples/example1.html"/>
<item name="description2" href="examples/example2.html"/>
</menu>
<menu ref="reports"/>
</body>
</project>
</pre>
So, this is how we add another report reference to the Maven site.
</p>
<h2>Export to GH-Pages branch</h2>
<p>
After the above steps our site is only deployed locally, under the <b>target/site</b> directory. In order to share it we should commit it into the <b>gh-pages</b> branch. For this purpose we can use the <b>site-maven-plugin</b> from the <a href="https://github.com/github/maven-plugins">GitHub Maven Plugins</a> repository. We should add the following entry under the <b>build</b> section:
<pre class="code">
<plugin>
<groupId>com.github.github</groupId>
<artifactId>site-maven-plugin</artifactId>
<version>0.8</version>
<configuration>
<message>Creating site for ${project.version}</message>
<oauth2Token>MyAUTH2Token</oauth2Token>
<repositoryName>sirius-maven-plugins</repositoryName>
<repositoryOwner>mkolisnyk</repositoryOwner>
</configuration>
<executions>
<execution>
<goals>
<goal>site</goal>
</goals>
<phase>site</phase>
</execution>
</executions>
</plugin>
</pre>
So, after this if we run
<pre class="console">
mvn site
</pre>
we'll have our Maven site generated and committed into the <b>gh-pages</b> branch of the repository we defined.
</p>
<h1>Integration with CI</h1>
<h2>Integrating with Travis</h2>
<p>
<a href="http://travis-ci.org/">Travis</a> is a free hosted continuous integration service which can build our GitHub projects as soon as we push changes to the remote repository. The integration with GitHub is set up quite easily and is described on the <a href="http://about.travis-ci.org/docs/user/getting-started/">Travis Getting Started</a> page.
</p>
<p>
The only thing left after we set up the integration is to define the build file. For this purpose we should create a <b>.travis.yml</b> file with the following content:
<pre class="code">
language: java
install: mvn install site
notifications:
email: false
</pre>
Generally, all we have to define here is the programming language and the command line that initiates the build. From now on, each time we push changes to the repository, a Travis build will be triggered and the following command will be performed:
<pre class="console">
mvn install site
</pre>
</p>
<h2>Filtering out release related commits</h2>
<p>
We're almost done, except for one annoying and at the same time serious thing: when we perform a release, Maven switches the version number and commits the changes. But Travis reacts to those commits as well, which means that during the release process Travis will perform at least 3 spurious runs. In order to prevent that, we should mark those commits somehow to say that we don't need a CI build after these changes.
</p>
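The skip logic itself is simple substring matching on the commit message, which can be sketched in shell (the message text is made up):

```shell
# Illustrative check mirroring the CI skip decision.
msg="[maven-release-plugin] prepare release v1.1 [ci skip]"
case "$msg" in
  *"[ci skip]"*) echo "build skipped"   ;;
  *)             echo "build triggered" ;;
esac
# prints: build skipped
```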
<p>
According to <a href="http://about.travis-ci.org/docs/user/how-to-skip-a-build/">How to skip a build</a>, we should include the following text in the commit message:
<pre>
[ci skip]
</pre>
So, we should make Maven commit the release changes with this text in the commit message. This can be set in the release plugin settings, so we update the <b>pom.xml</b> entry like this:
<pre class="code">
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.4</version>
<configuration>
<tagNameFormat>v@{project.version}</tagNameFormat>
<preparationGoals>package install:install-file</preparationGoals>
<span class="mark"><scmCommentPrefix>#42'[ci skip]'</scmCommentPrefix></span>
</configuration>
</plugin>
</pre>
Now we're safe from false CI builds.
</p>
<h2>Bells and whistles: adding build status badge</h2>
<p>
From time to time we may notice various badges on GitHub pages indicating the build status. It's just information, but it's something nice to have on the page. Once we have an active Travis project we can always request the build status badge. E.g. in order to display it on the GitHub README.md page we should add the following text:
<pre>
[![Build Status](https://travis-ci.org/mkolisnyk/sirius-maven-plugins.png)](https://travis-ci.org/mkolisnyk/sirius-maven-plugins)
</pre>
This will display a badge like this:
<a href="https://travis-ci.org/mkolisnyk/sirius-maven-plugins"><img src="https://travis-ci.org/mkolisnyk/sirius-maven-plugins.png" /></a>
</p>
<h1>Conclusion</h1>
<p>
That was an example showing how to create a GitHub Java project and make all the settings needed to produce all required output artifacts automatically. So, actually, all we have to do is work in our IDE and push changes to the repository. Everything else is prepared and generated by the GitHub infrastructure.
</p>
</body>
Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-61262509679926427952013-09-23T00:05:00.000+01:002013-09-23T00:05:07.074+01:00GitHub: test tracking via Maven plugin
<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
</head>
<body>
<p>
<a href="http://mkolisnyk.blogspot.com/2013/09/github-organizing-and-automating-test.html">Recently</a> we've organized integration with <a href="https://github.com">GitHub</a> by retrieving issue content into files. All of this was done in the form of a command-line utility. In this post I'll extend the functionality to use this solution as a <a href="http://maven.apache.org">Maven</a> plugin.
</p>
<a name='more'></a>
<h1>Building <a href="http://maven.apache.org">Maven</a> plugin skeleton</h1>
<p>
The plugin is developed using <a href="http://maven.apache.org/guides/plugin/guide-java-plugin-development.html">the following guide</a>. So, we'll create a skeleton like:
<pre class="code">
/**
*
*/
package sirius.utils.retriever;
import java.io.IOException;
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.*;
/**
* @author Myk Kolisnyk
*
*/
@Mojo(name = "generate",defaultPhase=LifecyclePhase.GENERATE_SOURCES)
public class IssueGetPlugin extends AbstractMojo {
public void execute() throws MojoExecutionException {
;
}
}
</pre>
But this is just the base. We've created a new <a href="http://maven.apache.org">Maven</a> goal named <b>generate</b> which should run as part of the <b>generate-sources</b> phase.
</p>
<p>
The next step is to add properties:
<pre class="code">
@Parameter(property = "issueget.user", defaultValue = "")
private String userName = "";
@Parameter(property = "issueget.password", defaultValue = "")
private String password = "";
@Parameter(property = "issueget.repository", defaultValue = "Sirius")
private String repository = "";
@Parameter(property = "issueget.type", defaultValue = "trace")
private String outputType = "";
@Parameter(property = "issueget.groups", defaultValue = "Test")
private String groups;
@Parameter(property = "issueget.output", defaultValue = ".")
private String outputLocation = "";
</pre>
These are actually the parameters which were originally passed to the retriever utility in the <a href="http://mkolisnyk.blogspot.com/2013/09/github-organizing-and-automating-test.html">previous example</a>. And to make the picture complete, we'll add setters for the above properties:
<pre class="code">
/**
* @param userName the userName to set
*/
public void setUserName(String userName) {
this.userName = userName;
}
/**
* @param password the password to set
*/
public void setPassword(String password) {
this.password = password;
}
/**
* @param repository the repository to set
*/
public void setRepository(String repository) {
this.repository = repository;
}
/**
* @param outputType the outputType to set
*/
public void setOutputType(String outputType) {
this.outputType = outputType;
}
/**
* @param groups the groups to set
*/
public void setGroups(String groups) {
this.groups = groups;
}
/**
* @param outputLocation the outputLocation to set
*/
public void setOutputLocation(String outputLocation) {
this.outputLocation = outputLocation;
}
</pre>
This is the skeleton.
</p>
<h1>Implementing plugin logic</h1>
<p>
We've created the base for the plugin, so now it's time to implement its actions. For this purpose we'll fill in the content of the <b>execute</b> method:
<pre class="code">
public void execute() throws MojoExecutionException {
String[] args = {
Program.REPO_NAME,
this.repository,
Program.USER_NAME,
this.userName,
Program.USER_PASS,
this.password,
Program.GROUPS,
this.groups,
Program.OUTPUT_TYPE,
this.outputType,
Program.OUTPUT_LOCATION,
this.outputLocation
};
try {
Program.main(args);
} catch (IOException e) {
throw new MojoExecutionException("Unable to retrieve issue data", e);
}
}
</pre>
Well, it appears to be a bit simpler than initially expected.
</p>
<p>
Now the plugin is ready to use. We can compile it and publish it to a local repository. Then it can be included in any other project as:
<pre class="code">
<build>
<plugins>
<plugin>
<groupId>com.github.mkolisnyk</groupId>
<artifactId>issueget-maven-plugin</artifactId>
<version>1.0-SNAPSHOT</version>
<configuration>
<repository>Sirius</repository>
<groups>Test;Win32</groups>
<outputType>cucumber</outputType>
<outputLocation>src/test/java/org/sirius/server/test/features/win32/controls</outputLocation>
</configuration>
<executions>
<execution>
<goals>
<goal>generate</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</pre>
</p>
<h1>Some small tunings</h1>
<p>
The above example still needs some additional settings. Firstly, no username and password properties are specified. Usually this is done for security reasons, to avoid defining the password explicitly, but we still need to supply the credentials somehow.
</p>
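<p>
One conventional way to supply credentials without hard-coding them is to bind the Mojo fields to properties. A hypothetical sketch using the old javadoc-style plugin annotations (the field names are assumed from the setters shown earlier; newer versions of the maven-plugin-plugin replace <b>expression</b> with <b>property</b>, so check the version you build against):

```
/**
 * GitHub user name, resolved from the issueget.user property by default.
 * @parameter expression="${issueget.user}"
 */
private String userName;

/**
 * GitHub password, resolved from the issueget.password property by default.
 * @parameter expression="${issueget.password}"
 */
private String password;
```

With such bindings in place, the credentials can live in a profile or be passed on the command line instead of the POM.
</p>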
<p>
Also, at the moment we can only invoke this plugin with the following command line:
<pre class="console">
mvn com.github.mkolisnyk:issueget-maven-plugin:generate
</pre>
which is a bit inconvenient, so we should make this goal name shorter.
</p>
<p>
All of the above can be configured in the <b>%<a href="http://maven.apache.org">MAVEN</a>_HOME%/conf/settings.xml</b> file. There are two major settings to make there:
<ol>
<li> Update the default list of plugin packages:
<pre class="code">
<pluginGroups>
<pluginGroup>com.github.mkolisnyk</pluginGroup>
</pluginGroups>
</pre>
<li> Update default profile with issueget login/password information:
<pre class="code">
<profile>
<id>default</id>
<properties>
<issueget.user>mkolisnyk</issueget.user>
<issueget.password>****</issueget.password>
</properties>
</profile>
</pre>
</ol>
With the above settings in place, the plugin can be invoked as:
<pre class="console">
mvn issueget:generate
</pre>
and the user credentials will be taken from the profile settings by default. Thus, you can specify different credentials in different profiles.
</p>
<h1>Summary</h1>
<p>
Now we can generate <a href="http://cukes.info">Cucumber</a> definitions from <a href="http://maven.apache.org">Maven</a>. Of course, it's not the final version: it will be improved to load results into multiple locations, to load data from different systems (e.g. JIRA), etc. But the first step is done. The code is available <a href="https://github.com/mkolisnyk/Sirius/tree/master/sirius.utils.retriever">here</a>.
</p>
</body>
Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-88804795612278605672013-09-15T21:04:00.001+01:002013-09-15T21:04:45.593+01:00GitHub: organizing and automating test tracking with Java<head>
<style type="text/css">
.code {border:1px solid black;background-color:silver}
.console {border:1px solid black;background-color:black;color:white}
.mark {background-color:yellow;font-weight:bold}
.wrong_text {color:red;font-weight:bold;text-decoration:line-through}
.wrong_area {background-color:red;font-weight:bold}
.right_text {color:green;font-weight:bold;text-decoration:underline}
.right_area {background-color:green;font-weight:bold}
.rule {border:2px dotted green;background-color:PaleGreen}
.notetable {border:1px dashed goldenrod}
.notehead {background-color:gold;text-align:left}
.notebody {background-color:khaki}
h1 {background-color:#9999CC}
h2 {background-color:#BBBBCC}
h3 {background-color:#DDDDFF}
th {background-color:#CCCCDD;}
table{border:1px solid black}
.done {background-color:lightgreen;font-weight:bold;color:darkgreen}
.undone {background-color:tomato;font-weight:bold;color:darkred}
</style>
</head>
<body>
<p>
<a href="https://github.com">GitHub</a> provides a tracker where we can keep our tasks, and the <a href="http://github-api.kohsuke.org/">GitHub API</a> provides the ability to access those resources programmatically. Thus, with the help of the API I could <a href="http://mkolisnyk.blogspot.co.uk/2013/05/java-jbehave-integration-with-github.html">integrate JBehave and GitHub</a> so that I can run <a href="http://jbehave.org">JBehave</a> tests stored in the <a href="https://github.com">GitHub</a> tracker. But that's not the only use case.
</p>
<p>
With the help of the API I can also set the correspondence between tests and features, which makes it possible to trace tests back to the features they cover. Also, what if we use something other than <a href="http://jbehave.org">JBehave</a>, e.g. Cucumber or SpecFlow, which use flat files for input? In this article I'll describe how to:
<ul>
<li> Organize tests with GitHub tracker
<li> Automatically generate <a href="http://en.wikipedia.org/wiki/Traceability_matrix">traceability matrix</a>
<li> Automatically generate feature files based on the content of the test issues
</ul>
</p>
<a name='more'></a>
<h1>Organize tests with GitHub tracker</h1>
<h2>What do we have initially?</h2>
<p>
The first thing we should realize about the GitHub tracker is that it is a quite simple tracker, mainly designed to track bugs and tasks rather than a variety of complicated resources. That's why there are no different issue types. There is only one entity: the issue.
</p>
<p>
We cannot distribute issues by type, but we can still mark them in a specific way. For this purpose there is the labelling functionality: we can set up different labels and use them to indicate different issue types and/or application areas.
</p>
<p>
Another thing is that all issues are entities at the same hierarchical level. It means that you cannot create a task and then add sub-tasks to it, at least not explicitly. We can add some implicit check-points inside a task (I'll touch on this point later) but it's not the same as what more sophisticated tracking systems offer. Nevertheless, all tasks can be grouped under some general feature which has a pre-defined timeline. This grouping entity is called a <b>milestone</b>, and it can be used for grouping issues.
</p>
<h2>Organize structure</h2>
<p>
Based on the above points we can identify the following rules for issues tracking:
<ul>
<li> Milestone is the highest level object representing some specific feature. This item should contain feature description
<li> Each specific issue which is related to some feature should be linked to appropriate milestone
<li> In order to filter out tests from development tasks or bugs we can use labels
</ul>
OK. Now we're good to go with filling the data in.
</p>
<h2>Fill-in the content</h2>
<p>
For this example I've created the <a href="https://github.com/mkolisnyk/Sirius/issues?milestone=1&state=open">Win32 Tab Control support</a> milestone and added a small description like:
<pre class="code">
As the system user
I want to be able to interact with Win32 Tab Control
</pre>
That would be the place-holder for tests.
</p>
<p>
For better distribution between tests and other task types I'll <a href="https://help.github.com/articles/customizing-issue-labels">add the following labels</a>:
<ul>
<li> <b>Test</b> - used for the issues containing test scenarios
<li> <b>Bug</b> - used for storing bug information
<li> <b>DevTask</b> - used for the issues containing development tasks related to each specific feature
</ul>
Additionally, we can add labels representing application modules and other features which uniquely define the area an issue applies to.
</p>
<p>
Once we're done with the labelling we can <a href="https://github.com/mkolisnyk/Sirius/issues/2">create new issue</a> and fill it in with the test scenario, e.g.:
<pre class="code">
Scenario: List All page names
- [ ] User should be able to get the list of all pages
When I start GUI tests application
Then I should see the following tabs:
| Tab Name |
| Static Text |
| Edit Page |
| Rich Text |
| Buttons |
| List Box |
| Combo Box |
| Scroll Bars |
| Image Lists |
| Progress Bar |
| Sliders |
| Spinners |
| Headers |
</pre>
<table class="notetable">
<tr><td class="notehead">NOTE</td></tr>
<tr><td class="notebody">
There's one additional line here:
<pre class="code">
- [ ] User should be able to get the list of all pages
</pre>
which is a GitHub wiki markup feature rather than something needed for the test scenario. If a GitHub issue contains text like that, it is interpreted as a check-list item, so you can split a task into smaller sub-tasks and mark their completion inside the issue. For a test scenario, however, it has no effect.
</td></tr>
</table>
</p>
<h1>Prepare the code generator</h1>
<p>
Before we start generating various resources we should identify how it can be done and what should be generated.
</p>
<h2>Retrieval interface</h2>
<p>
I'm going to prepare a utility which can generate a traceability matrix and feature files based on the GitHub issues structure. In both cases the output should contain the following parts:
<ul>
<li> Header - contains the most common information, usually an overview of the issues
<li> Milestone information - general information taken from the milestone
<li> Issue information - contains information taken from the issues
<li> Labels information - puts in information based on labels
<li> Footer - contains a disclaimer or some other summarizing information
</ul>
So, actually, I have to create something which retrieves that information. It fits the common structure provided by the following interface:
<pre class="code">
/**
*
*/
package sirius.utils.retriever.interfaces;
import java.util.ArrayList;
import org.kohsuke.github.GHIssue;
import org.kohsuke.github.GHMilestone;
/**
* @author Myk Kolisnyk
*
*/
public interface IStoryFormatter {
public String GetHeader(ArrayList<GHIssue> issues);
public String GetMilestone(GHMilestone milestone);
public String GetIssue(GHIssue issue);
public String GetLabels(GHIssue issue);
public String GetFooter(ArrayList<GHIssue> issues);
}
</pre>
and for now we'll create some dummy formatter implementing this interface:
<pre class="code">
/**
*
*/
package sirius.utils.retriever.formatters;
import java.util.ArrayList;
import org.kohsuke.github.GHIssue;
import org.kohsuke.github.GHMilestone;
import sirius.utils.retriever.interfaces.IStoryFormatter;
/**
* @author Myk Kolisnyk
*
*/
public class DummyFormatter implements IStoryFormatter {
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetHeader(java.util.ArrayList)
*/
public String GetHeader(ArrayList<GHIssue> issues) {
// TODO Auto-generated method stub
return null;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetMilestone(org.kohsuke.github.GHMilestone)
*/
public String GetMilestone(GHMilestone milestone) {
// TODO Auto-generated method stub
return null;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetIssue(org.kohsuke.github.GHIssue)
*/
public String GetIssue(GHIssue issue) {
// TODO Auto-generated method stub
return null;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetFooter(java.util.ArrayList)
*/
public String GetFooter(ArrayList<GHIssue> issues) {
// TODO Auto-generated method stub
return null;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetLabels(org.kohsuke.github.GHIssue)
*/
public String GetLabels(GHIssue issue) {
// TODO Auto-generated method stub
return null;
}
}
</pre>
This code is purely auto-generated boilerplate.
</p>
<h2>Adding command line parameters</h2>
<p>
It is convenient to provide a command line interface. It could be reworked as a <a href="http://maven.apache.org">Maven</a> plugin, but for now a plain command line utility is enough. As input we need the following information:
<ul>
<li> GitHub username
<li> GitHub password
<li> GitHub repository name
<li> Output type - the flag identifying whether we produce traceability matrix or something else
</ul>
So, generally the command line should look like:
<pre class="console">
java -jar sirius.utils.retriever-<version>.jar -r <repository> -u <user> -p <password> -t <report_type>
</pre>
Having this information we can start creating main program part. We'll start with the main program class which gets all those parameters:
<pre class="code">
/**
*
*/
package sirius.utils.retriever;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import org.kohsuke.github.GHIssue;
import org.kohsuke.github.GHIssueState;
import org.kohsuke.github.GHMilestone;
import org.kohsuke.github.GHRepository;
import org.kohsuke.github.GitHub;
import sirius.utils.retriever.formatters.CucumberFormatter;
import sirius.utils.retriever.formatters.DummyFormatter;
import sirius.utils.retriever.formatters.TraceabilityMatrixFormatter;
import sirius.utils.retriever.interfaces.IStoryFormatter;
/**
* @author Myk Kolisnyk
*
*/
public class Program {
public static final String REPO_NAME="-r";
public static final String USER_NAME="-u";
public static final String USER_PASS="-p";
public static final String OUTPUT_TYPE="-t";
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
String userName="";
String password="";
String repository="";
String outputType = "";
IStoryFormatter formatter = new DummyFormatter();
HashMap<String, String> params = new HashMap<String, String>();
for (int i = 0; i < (2 * (args.length / 2)); i += 2) {
if (i < args.length - 1) {
params.put(args[i], args[i + 1]);
}
}
if (params.containsKey(USER_NAME)) {
userName = params.get(USER_NAME);
}
if (params.containsKey(USER_PASS)) {
password = params.get(USER_PASS);
}
if (params.containsKey(REPO_NAME)) {
repository = params.get(REPO_NAME);
}
if (params.containsKey(OUTPUT_TYPE)) {
outputType = params.get(OUTPUT_TYPE);
}
.....
}
}
</pre>
Now we're ready to process the information from the issue tracker.
</p>
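The pair-wise argument parsing in <b>main</b> can be exercised in isolation. A standalone sketch of the same idea (hypothetical class name, plain JDK only):

```java
import java.util.HashMap;
import java.util.Map;

public class ArgParseDemo {
    // Collects "-flag value" pairs into a map; a trailing unpaired token is
    // ignored, mirroring the bounds check in Program.main.
    static Map<String, String> parse(String[] args) {
        Map<String, String> params = new HashMap<String, String>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            params.put(args[i], args[i + 1]);
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse(new String[] {"-r", "Sirius", "-u", "mkolisnyk", "-t"});
        System.out.println(p.get("-r"));         // Sirius
        System.out.println(p.containsKey("-t")); // false: "-t" has no value
    }
}
```

This keeps the order of flags irrelevant, which is why <b>Program.main</b> can look the parameters up by key afterwards.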
<h2>Implementing base work flow</h2>
<p>
Now we should go through all issues and report their information based on the formatter we use. The general flow is:
<ul>
<li> Connect to the repository
<li> Get all issues both of open and closed status
<li> Sort all issues based on Milestone id (thus we group them by milestones as well)
<li> Write header
<li> Loop through all issues and write appropriate information
<li> Write footer
</ul>
Additionally, I'd like to use only issues which contain <b>Test</b> label.
So, let's do it step by step.
</p>
<h2> Connect to the repository</h2>
<p>
We have the login and password provided as command line parameters, so we connect to the GitHub repository in the following way:
<pre class="code">
GitHub client = GitHub.connectUsingPassword(userName, password);
GHRepository repo = client.getMyself().getRepository(repository);
</pre>
After that we have access to all repository items.
</p>
<h2>Get all issues both of open and closed status</h2>
<p>
Unfortunately, the API only provides the ability to get issues of one specific state at a time, so I have to make two calls instead of one. Generally, the issues are retrieved with the following code:
<pre class="code">
ArrayList<GHIssue> issues = new ArrayList<GHIssue>();
issues.addAll(repo.getIssues(GHIssueState.OPEN));
issues.addAll(repo.getIssues(GHIssueState.CLOSED));
</pre>
</p>
<h2>Sort all issues based on Milestone id (thus we group them by milestones as well)</h2>
<p>
This is a matter of sorting structures based on a field value, done in two steps:
<ol>
<li> Create comparison class where the comparison criteria is defined:
<pre class="code">
public class IssuesComparator implements Comparator<GHIssue> {
public IssuesComparator(){;}
/* (non-Javadoc)
* @see java.util.Comparator#compare(java.lang.Object, java.lang.Object)
*/
public int compare(GHIssue o1, GHIssue o2) {
int mileId1,mileId2;
if(o1.getMilestone()==null){
mileId1 = 0;
}
else {
mileId1 = o1.getMilestone().getNumber();
}
if(o2.getMilestone()==null){
mileId2 = 0;
}
else {
mileId2 = o2.getMilestone().getNumber();
}
return mileId1 - mileId2;
}
}
</pre>
<li> Sort array of issues based on the above criteria:
<pre class="code">
Program p = new Program();
IssuesComparator c = p.new IssuesComparator();
Collections.sort(issues, c);
</pre>
</ol>
</p>
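Under Java 8+ the same null-safe ordering can be expressed with <b>Comparator.comparingInt</b>. A standalone sketch with stand-in classes (the real GHIssue/GHMilestone come from the GitHub API; the stand-ins below only model the fields the comparator touches):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // Minimal stand-ins for the GitHub API types used in the article.
    static class Milestone {
        private final int number;
        Milestone(int number) { this.number = number; }
        int getNumber() { return number; }
    }

    static class Issue {
        private final Milestone milestone;
        Issue(Milestone milestone) { this.milestone = milestone; }
        Milestone getMilestone() { return milestone; }
    }

    // Same ordering as IssuesComparator: issues without a milestone sort first (as 0).
    static void sortByMilestone(List<Issue> issues) {
        issues.sort(Comparator.comparingInt(
                (Issue i) -> i.getMilestone() == null ? 0 : i.getMilestone().getNumber()));
    }

    public static void main(String[] args) {
        List<Issue> issues = new ArrayList<Issue>();
        issues.add(new Issue(new Milestone(2)));
        issues.add(new Issue(null));
        issues.add(new Issue(new Milestone(1)));
        sortByMilestone(issues);
        for (Issue i : issues) {
            System.out.println(i.getMilestone() == null ? 0 : i.getMilestone().getNumber());
        }
    }
}
```

The key-extractor form avoids the explicit null branches on both operands that the hand-written comparator needs.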
<h2>Write header</h2>
<p>
<pre class="code">
System.out.println(formatter.GetHeader(issues));
</pre>
Nuff said.
</p>
<h2>Loop through all issues and write appropriate information</h2>
<p>
Milestone processing has its own peculiarities. Firstly, if an issue doesn't have a milestone defined, an empty milestone is created; all its attributes will be null, but at least we won't get a NullPointerException. Each milestone's information is displayed only once for all the issues within it.
</p>
<p>
Generally, the code looks like:
<pre class="code">
int prevMilestoneId = -1;
for(GHIssue issue:issues){
GHMilestone milestone = issue.getMilestone();
if(milestone==null){
milestone = new GHMilestone();
}
if(milestone.getNumber() != prevMilestoneId){
prevMilestoneId = milestone.getNumber();
System.out.println(formatter.GetMilestone(milestone));
}
if(issue.getLabels().contains("Test"))
{
System.out.println(formatter.GetIssue(issue));
}
}
</pre>
</p>
<h2>Write footer</h2>
<p>
<pre class="code">
System.out.println(formatter.GetFooter(issues));
</pre>
Nuff said.
</p>
<h2>Before we go next</h2>
<p>
All of the above code can be found <a href="https://github.com/mkolisnyk/Sirius/blob/master/sirius.utils.retriever/src/main/java/sirius/utils/retriever/Program.java">here</a>. The only thing I haven't described yet is the formatter initialization. If we copy the code exactly as described in this chapter, all output will be processed by the DummyFormatter. In the next chapters we'll plug in more specific formatters.
</p>
<h1>Automatically generate traceability matrix</h1>
<h2>Creating formatter</h2>
<p>
The first thing we should do is create the <b>TraceabilityMatrixFormatter</b> class which implements <b>IStoryFormatter</b>. And then, in the <b>main</b> function, we should add the following statement:
<pre class="code">
if(outputType.equals("trace")){
formatter = new TraceabilityMatrixFormatter();
}
</pre>
It should be placed before any <b>formatter</b> variable usage.
</p>
<p>
Since we already have the interface and the base flow, all that's left is to implement the formatter methods. The formatter class looks like:
<pre class="code">
/**
*
*/
package sirius.utils.retriever.formatters;
import java.util.ArrayList;
import org.kohsuke.github.GHIssue;
import org.kohsuke.github.GHIssueState;
import org.kohsuke.github.GHMilestone;
import sirius.utils.retriever.interfaces.IStoryFormatter;
/**
* @author Myk Kolisnyk
*
*/
public class TraceabilityMatrixFormatter implements IStoryFormatter {
public final String eol = System.getProperty("line.separator");
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetHeader(java.util.ArrayList)
*/
public String GetHeader(ArrayList<GHIssue> issues) {
return "# Tests status" + eol + eol;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetMilestone(org.kohsuke.github.GHMilestone)
*/
public String GetMilestone(GHMilestone milestone) {
return "## Feature: [" + milestone.getTitle() + "](" + milestone.getUrl() + ")" + eol +
"```" + eol + milestone.getDescription() + eol + "```" + eol + eol +
"| Group | Test | Completed |" + eol +
"|-------|------|-----------|";
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetIssue(org.kohsuke.github.GHIssue)
*/
public String GetIssue(GHIssue issue) {
return "|" + GetLabels(issue) + " | [" + issue.getTitle() +
"](" + issue.getUrl() + ") | " +
((issue.getState().equals(GHIssueState.CLOSED))?("Yes"):("No")) +
"|";
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetFooter(java.util.ArrayList)
*/
public String GetFooter(ArrayList<GHIssue> issues) {
return "";
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetLabels(org.kohsuke.github.GHIssue)
*/
public String GetLabels(GHIssue issue) {
String result = "";
String[] labels = new String[issue.getLabels().size()];
issue.getLabels().toArray(labels);
for(int i=4;i<labels.length;i+=8){
result += "[" + labels[i] + "](" + labels[i-2] + ") ";
}
return result;
}
}
</pre>
After that you can run a command like:
<pre class="console">
java -jar sirius.utils.retriever.jar -r Sirius -u %GH_USER% -p %GH_PASS% -t trace
</pre>
and if the credentials are correct you will get output like:
<pre class="console">
# Tests status
## Feature: [Win32 Tab Control support](https://api.github.com/repos/mkolisnyk/Sirius/milestones/1)
```
As the system user
I want to be able to interact with Win32 Tab Control
```
| Group | Test | Completed |
|-------|------|-----------|
|[Test](https://api.github.com/repos/mkolisnyk/Sirius/labels/Test) [Win32](https://api.github.com/repos/mkolisnyk/Sirius/labels/Win32) | [Win32 Tab Control support. Base Operations](https://github.com/mkolisnyk/Sirius/issues/2) | No|
</pre>
</p>
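The markdown row built by <b>GetIssue</b> is easy to verify in isolation. A standalone sketch of the same string assembly (hypothetical class name, plain strings instead of GHIssue):

```java
public class RowDemo {
    // Builds one traceability-matrix row the same way GetIssue concatenates it.
    static String row(String labels, String title, String url, boolean closed) {
        return "|" + labels + " | [" + title + "](" + url + ") | "
                + (closed ? "Yes" : "No") + "|";
    }

    public static void main(String[] args) {
        System.out.println(row("[Test](labels/Test)",
                "Win32 Tab Control support. Base Operations",
                "https://github.com/mkolisnyk/Sirius/issues/2",
                false));
    }
}
```

Keeping the row assembly in one pure function like this makes the markdown layout trivial to unit-test without touching the GitHub API.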
<h2>Integrating formatter output with GitHub wiki pages</h2>
<p>
Unfortunately, the GitHub API doesn't cover wiki pages. But there's another way to write to them: wiki pages are stored in a separate git repository which can be cloned on its own. To obtain the GitHub wiki repository URL you should:
<ul>
<li> Navigate to project Wiki home page
<li> Click on <b>Clone URL</b> button
</ul>
After that the URL will appear in the clipboard and you can clone the wiki repository into any location. Since the wiki page is now just a file, you can simply redirect the generator output into it and commit the changes. Generally it looks like:
<pre class="console">
java -jar sirius.utils.retriever.jar -r Sirius -u %GH_USER% -p %GH_PASS% -t trace > ../wiki/Traceability-Matrix.md
cd ../wiki/
git pull
git commit -a -m "Traceability matrix update."
git push
</pre>
Thus we write changes into the <b>../wiki/Traceability-Matrix.md</b> file and push them to the repository. As a result, you'll get a newly generated page in the wiki, like this <a href="https://github.com/mkolisnyk/Sirius/wiki/Traceability-Matrix">Traceability Matrix</a> page generated exactly this way.
</p>
<h1>Automatically generate feature files based on the content of the test issues</h1>
<p>
Cucumber-like output is produced in the same fashion. We have to add the formatter initialization:
<pre class="code">
if(outputType.equals("cucumber")){
formatter = new CucumberFormatter();
}
</pre>
After that we should implement another formatter, adapted to Cucumber-like constructions:
<pre class="code">
/**
*
*/
package sirius.utils.retriever.formatters;
import java.util.ArrayList;
import org.kohsuke.github.GHIssue;
import org.kohsuke.github.GHIssueState;
import org.kohsuke.github.GHMilestone;
import sirius.utils.retriever.interfaces.IStoryFormatter;
/**
* @author Myk Kolisnyk
*
*/
public class CucumberFormatter implements IStoryFormatter {
public final String eol = System.getProperty("line.separator");
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetHeader(java.util.ArrayList)
*/
public String GetHeader(ArrayList<GHIssue> issues) {
return "";
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetMilestone(org.kohsuke.github.GHMilestone)
*/
public String GetMilestone(GHMilestone milestone) {
return "# " + milestone.getTitle() + eol;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetIssue(org.kohsuke.github.GHIssue)
*/
public String GetIssue(GHIssue issue) {
return GetLabels(issue) + eol + "Feature: " + issue.getTitle() + eol +
issue.getBody() + eol;
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetFooter(java.util.ArrayList)
*/
public String GetFooter(ArrayList<GHIssue> issues) {
return "";
}
/* (non-Javadoc)
* @see sirius.utils.retriever.interfaces.IStoryFormatter#GetLabels(org.kohsuke.github.GHIssue)
*/
public String GetLabels(GHIssue issue) {
String result = "";
String[] labels = new String[issue.getLabels().size()];
issue.getLabels().toArray(labels);
for(int i=4;i<labels.length;i+=8){
result += "@" + labels[i] + " ";
}
return result;
}
}
</pre>
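The <b>GetLabels</b> loop above walks a serialized array representation of the label collection. With the label names available directly, the tag line reduces to a simple join; a standalone sketch (hypothetical class name, plain strings instead of GHLabel):

```java
public class TagDemo {
    // Produces a Cucumber-style tag line such as "@Test @Win32 ".
    static String tags(String[] labels) {
        StringBuilder result = new StringBuilder();
        for (String label : labels) {
            result.append("@").append(label).append(" ");
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(tags(new String[] {"Test", "Win32"})); // @Test @Win32
    }
}
```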
So, if we run a command like:
<pre class="console">
java -jar sirius.utils.retriever.jar -r Sirius -u %GH_USER% -p %GH_PASS% -t cucumber
</pre>
This command will produce output like:
<pre class="console">
# Win32 Tab Control support
@Test @Win32
Feature: Win32 Tab Control support. Base Operations
Scenario: List All page names
- [x] User should be able to get the list of all pages
When I start GUI tests application
Then I should see the following tabs:
| Tab Name |
| Static Text |
| Edit Page |
| Rich Text |
| Buttons |
| List Box |
| Combo Box |
| Scroll Bars |
| Image Lists |
| Progress Bar |
| Sliders |
| Spinners |
| Headers |
</pre>
This output can be redirected to a file and then used by any BDD engine.
</p>
<h1>Summary</h1>
<p>
In this article I've described how to organize test tracking using the GitHub issue tracker. I've also described how to automate the generation of some reports. And finally, I've shown a way to integrate GitHub with tools other than JBehave, as <a href="http://mkolisnyk.blogspot.co.uk/2013/05/java-jbehave-integration-with-github.html">I did before</a>. Thus, we can have a unified way to perform the same set of tests across different platforms and different programming languages.
</p>
</body>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0tag:blogger.com,1999:blog-2532302763215844416.post-79858701057933065222013-06-08T15:35:00.001+01:002013-06-08T15:35:55.524+01:00Sirius C#: adding UI Automation library<head>
</head>
<body>
<p>
Recently we've created code interacting with <a href="http://mkolisnyk.blogspot.co.uk/2013/01/sirius-java-writing-win32-client.html">Win32</a> and <a href="http://mkolisnyk.blogspot.co.uk/2013/03/sirius-adding-web-client-api.html">Web</a>. Now it's time to expand the coverage to the .NET area. For this purpose there is a dedicated library called <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a>. It ships with the .NET framework and contains a basic API for interacting with window objects. This library can also interact with standard Win32 controls, so it can serve as an auxiliary module expanding the coverage of the library I created earlier for Win32 elements. In this article I will create sample control classes and a small demo showing how they work. In particular, I'll take the tab control and create simple tests interacting with it. For now this will be a stand-alone library, but in the future it will be integrated into the entire <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a> project.
</p>
<a name='more'></a>
<p>
The following items will be covered:
<ol>
<li> Adding <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library into the project
<li> Finding window objects
<ol>
<li> Using simple search criteria
<li> Using complex search criteria
</ol>
<li> Interacting with window objects
<li> Creating class for common window object
<li> Creating class for tab control
<li> Sample test
</ol>
</p>
<h1> Adding <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library into the project</h1>
<p>
As mentioned before, the <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library is provided with the .NET framework and can be added as a reference. So, in order to add the <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library to an existing project we should do the following:
<ol>
<li> Create new C# project or open existing one in Visual Studio
<li> Right-click on <b>References</b> node and select <b>Add Reference</b> popup menu item
<li> In the references dialog we should switch to <b>.NET</b> tab and select the following items:
<ul>
<li> UIAutomationClient
<li> UIAutomationTypes
</ul>
<li> Click OK
</ol>
After that our project is ready to use the <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library.
</p>
<h1> Finding window objects</h1>
<p>
The core class which provides interaction with UI elements is <b>AutomationElement</b>. It is the main entry point for accessing a control's internal properties and methods. Initially the only accessible instance is the root element, which corresponds to the desktop window. It can be accessed as:
<pre class="code">
AutomationElement root = AutomationElement.RootElement;
</pre>
All window objects are child windows of this element. There are two main methods responsible for searching:
<ul>
<li> <b>FindFirst</b> - finds first element that matches search criteria
<li> <b>FindAll</b> - gets the list of all elements matching the search criteria
</ul>
Both methods have the following parameters:
<pre class="code">
Find<First|All>(TreeScope scope,Condition condition)
</pre>
Where:
<ul>
<li> <b>scope</b> - identifies the search scope: only direct children, or descendants as well, in various combinations
<li> <b>condition</b> - the search criteria itself, based on the values of specific properties
</ul>
The scope parameter is just a constant. The condition parameter requires more attention as it is a complex structure. Conditions can be divided into simple (containing only one search criterion) and complex (containing a combination of simple criteria).
</p>
<h2> Using simple search criteria</h2>
<p>
As mentioned before, a simple search criterion is based on one property only. To define such a simple condition we should use the <b>PropertyCondition</b> class. That can be done in the following way:
<pre class="code">
Condition nameCondition = new PropertyCondition(AutomationElement.NameProperty, "Common Controls Examples");
</pre>
That will match the window with the <b>"Common Controls Examples"</b> text. So, if we look for a top level window with such a header we should use code like:
<pre class="code">
AutomationElement root = AutomationElement.RootElement;
Condition nameCondition = new PropertyCondition(AutomationElement.NameProperty, "Common Controls Examples");
AutomationElement mainWin = root.FindFirst(TreeScope.Children, nameCondition);
</pre>
After this code runs, the <b>mainWin</b> variable will contain a reference to the automation element exposing the properties and methods of the top level window with the <b>"Common Controls Examples"</b> text.
</p>
<h2> Using complex search criteria</h2>
<p>
In some cases it's not enough to look for a control using a single property. E.g. there may be several elements with some specific text but different classes, or the target control may have several possible property values. For such cases we should use complex search criteria which combine several conditions. The following code shows how to create a complex condition based on a specified class and text:
<pre class="code">
public static Condition ByTypeAndName(ControlType type, String name)
{
Condition[] locators =
{
new PropertyCondition(AutomationElement.NameProperty, name),
new PropertyCondition(AutomationElement.ControlTypeProperty, type)
};
return new AndCondition(locators);
}
</pre>
Using any combination of the above conditions we can define search criteria for any object and find it based on the criteria specified.
</p>
<h1>Writing custom conditions object</h1>
<p>
As seen from the previous examples, defining conditions is quite verbose and takes a lot of code. So, it is reasonable to wrap the most frequently used conditions in helper methods. For this purpose I've created the <b>CustomConditions</b> class with the following content:
<pre class="code">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Automation;
namespace <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib
{
public class CustomConditions
{
public static Condition ByHandle(int hwnd)
{
return new PropertyCondition(AutomationElement.NativeWindowHandleProperty, hwnd);
}
public static Condition ByName(String name)
{
return new PropertyCondition(AutomationElement.NameProperty, name);
}
public static Condition ByTypeAndName(ControlType type, String name)
{
Condition[] locators =
{
new PropertyCondition(AutomationElement.NameProperty, name),
new PropertyCondition(AutomationElement.ControlTypeProperty, type)
};
return new AndCondition(locators);
}
}
}
</pre>
So, in further examples I'll use this class to make instructions more compact.
</p>
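<p>
For instance, looking for a button by its caption collapses into a single call (the <b>"OK"</b> caption is just an illustration here; <b>mainWin</b> is the element located in the earlier example):
<pre class="code">
// Find a button named "OK" anywhere under the main window
AutomationElement okButton = mainWin.FindFirst(
    TreeScope.Subtree,
    CustomConditions.ByTypeAndName(ControlType.Button, "OK"));
</pre>
</p>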
<h1> Interacting with window objects</h1>
<p>
Once we have located the object we should perform some actions on it. Initially the only interface we have is an instance of the <b>AutomationElement</b> class, which contains only a very generic set of properties and methods. But if we want to perform some control-specific operation we should use something else. Each control-specific set of actions is wrapped with specific bundles, or patterns. They contain methods and properties specific to certain control types. E.g. general window manipulation functionality is wrapped with the <b>WindowPattern</b> class. If a specific object supports that pattern, the functionality is accessible. Otherwise we'll get an exception.
</p>
<p>
So, let's try a small sample. Let's say we should find a top-level window with the <b>"Common Controls Examples"</b> text and then close it. It can be done in the following way:
<pre class="code">
AutomationElement root = AutomationElement.RootElement;
Condition nameCondition = new PropertyCondition(AutomationElement.NameProperty, "Common Controls Examples");
AutomationElement mainWin = root.FindFirst(TreeScope.Children, nameCondition);
<span class="mark">WindowPattern winOps = mainWin.GetCurrentPattern(WindowPattern.Pattern) as WindowPattern;
winOps.Close();
</span>
</pre>
The highlighted part shows an example of window object interaction. Actually, an <b>AutomationElement</b> object contains only references, while pattern classes are responsible for performing actions on objects.
</p>
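<p>
Note that if the element didn't support <b>WindowPattern</b>, the <b>GetCurrentPattern</b> call would throw an exception. When we are not sure whether a pattern is supported, we can probe for it first. A minimal sketch reusing the <b>mainWin</b> element from the sample above:
<pre class="code">
object patternObj;
if (mainWin.TryGetCurrentPattern(WindowPattern.Pattern, out patternObj))
{
    // The pattern is supported here, so the cast is safe
    WindowPattern winOps = (WindowPattern)patternObj;
    winOps.Close();
}
</pre>
</p>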
<h1> Creating class for common window object </h1>
<p>
Before, I described how to interact with window objects using the <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library. But the library itself is very generic and contains only basic abstractions. If we want to build a library on top of it, we should wrap that functionality with a more usable interface. Firstly, we'll create a class for common window operations. At the moment we only need to wrap the search functionality. So, we'll create the class skeleton:
<pre class="code">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Automation;
namespace <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib
{
public class Window
{
}
}
</pre>
The next step is to wrap the reference to the root element, because the <b>AutomationElement.RootElement</b> statement is quite long while being frequently used. So, we'll update our class with the following code:
<pre class="code">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Automation;
namespace <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib
{
public class Window
{
<span class="mark"> public AutomationElement Root
{
get
{
return AutomationElement.RootElement;
}
}</span>
}
}
</pre>
Then we should add methods which search for windows. Currently I need the following searches:
<ul>
<li> By specified window handle
<li> By name
<li> By specified name and parent window
</ul>
The code implementing those methods looks like:
<pre class="code">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Automation;
namespace <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib
{
public class Window
{
public AutomationElement Root
{
get
{
return AutomationElement.RootElement;
}
}
<span class="mark">
public AutomationElement Find(int hwnd)
{
return Root.FindFirst(TreeScope.Subtree, CustomConditions.ByHandle(hwnd));
}
public int Find(String className,String name,int index)
{
AutomationElementCollection elements = Root.FindAll(TreeScope.Children, CustomConditions.ByName(name));
if (elements.Count <= index)
{
return 0;
}
AutomationElement element = elements[index];
return element.Current.NativeWindowHandle;
}
public int Find(int parent, String className, String name, int index)
{
AutomationElement baseElement = Find(parent);
if (baseElement == null)
{
return 0;
}
AutomationElementCollection elements = baseElement.FindAll(TreeScope.Subtree, CustomConditions.ByName(name));
if (elements.Count <= index)
{
return 0;
}
AutomationElement element = elements[index];
return element.Current.NativeWindowHandle;
}</span>
}
}
</pre>
That's it. Now we're done with the common window class. Its current implementation can be found <a href="https://github.com/mkolisnyk/Sirius/blob/master/SiriusCSharp.Client/Sirius.Win32.Lib/Window.cs">here</a>.
</p>
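<p>
To illustrate, a typical lookup chain with this class may look like the following (the caption is taken from the sample application used throughout this post; the empty class name is ignored by the current implementation):
<pre class="code">
Window win = new Window();
// Get the handle of the top-level window by its caption
int mainHwnd = win.Find("", "Common Controls Examples", 0);
// Resolve the handle back to an AutomationElement when needed
AutomationElement mainElement = win.Find(mainHwnd);
</pre>
</p>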
<h1> Creating class for tab control</h1>
<p>
For the tab control we should first define a class containing the functionality for interacting with controls of that type. It looks like:
<pre class="code">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib.Controls.Interfaces;
using System.Windows.Automation;
namespace <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib.Controls
{
public class Tab : Window
{
}
}
</pre>
All other functionality to be added is about searching for the element and getting/setting the selection. For that purpose we should use the <b>SelectionItemPattern</b> class.
</p>
<h2>Find child control</h2>
<p>
Since a tab control is always a child element, we should base the search on the parent window. Also, there can be multiple elements of that type, which means the unique identifier here is an index. Based on these assumptions we should add the following method to the <b>Tab</b> class:
<pre class="code">
public int Find(int parent, int index)
{
AutomationElement baseElement = base.Find(parent);
<span class="mark">Condition locator = new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Tab);</span>
AutomationElementCollection elements = baseElement.FindAll(TreeScope.Subtree, locator);
if (elements.Count <= index)
{
return 0;
}
return elements[index].Current.NativeWindowHandle;
}
</pre>
The highlighted part contains the code initializing the search criteria. We search for an element of the type defined by the <b>ControlType.Tab</b> constant.
</p>
<h2>Get tab control values</h2>
<p>
Once we can find the tab control element we can get information from it. Currently what we need is:
<ul>
<li> Total number of tabs
<li> Currently selected tab index
<li> Currently selected tab name
<li> Get all available item names
</ul>
All those methods are done with the following code:
<pre class="code">
public int GetItemsCount(int hwnd)
{
int count = 0;
AutomationElement element = Find(hwnd);
AutomationElement tabElement = TreeWalker.RawViewWalker.GetFirstChild(element);
while (tabElement != null)
{
count++;
tabElement = TreeWalker.RawViewWalker.GetNextSibling(tabElement);
}
return count;
}
public int GetSelectedIndex(int hwnd)
{
int index = -1;
AutomationElement element = Find(hwnd);
AutomationElement tabElement = TreeWalker.RawViewWalker.GetFirstChild(element);
while (tabElement != null)
{
index++;
SelectionItemPattern changeTab_aeTabPage = tabElement.GetCurrentPattern(SelectionItemPattern.Pattern) as SelectionItemPattern;
if (changeTab_aeTabPage.Current.IsSelected)
{
return index;
}
tabElement = TreeWalker.RawViewWalker.GetNextSibling(tabElement);
}
return -1;
}
public String GetSelectedItem(int hwnd)
{
return GetItemNames(hwnd)[GetSelectedIndex(hwnd)];
}
public String[] GetItemNames(int hwnd)
{
List<String> elementNames = new List<String>();
AutomationElement element = Find(hwnd);
AutomationElement tabElement = TreeWalker.RawViewWalker.GetFirstChild(element);
while (tabElement != null)
{
elementNames.Add(tabElement.Current.Name);
tabElement = TreeWalker.RawViewWalker.GetNextSibling(tabElement);
}
return elementNames.ToArray();
}
</pre>
</p>
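<p>
Assuming <b>htab</b> is a handle returned by the <b>Find</b> method described above, these getters combine naturally:
<pre class="code">
int count = tab.GetItemsCount(htab);        // total number of tabs
int selected = tab.GetSelectedIndex(htab);  // zero-based index of the active tab
String[] names = tab.GetItemNames(htab);    // captions of all tabs
String current = tab.GetSelectedItem(htab); // caption of the active tab
</pre>
</p>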
<h2>Perform selection</h2>
<p>
Selection can be done by specifying either the tab name or the tab index. That's reflected in 2 methods performing the same action but with different input parameters. They are:
<pre class="code">
public void Select(int hwnd,int index)
{
int count = 0;
AutomationElement element = Find(hwnd);
AutomationElement tabElement = TreeWalker.RawViewWalker.GetFirstChild(element);
while (tabElement != null)
{
if (count == index)
{
SelectionItemPattern changeTab_aeTabPage =
tabElement.GetCurrentPattern(SelectionItemPattern.Pattern) as SelectionItemPattern;
changeTab_aeTabPage.Select();
return;
}
else
{
count++;
}
tabElement = TreeWalker.RawViewWalker.GetNextSibling(tabElement);
}
}
public void Select(int hwnd,String item)
{
AutomationElement element = Find(hwnd);
AutomationElement tabElement = TreeWalker.RawViewWalker.GetFirstChild(element);
while (tabElement != null)
{
if (tabElement.Current.Name.Equals(item))
{
SelectionItemPattern changeTab_aeTabPage =
tabElement.GetCurrentPattern(SelectionItemPattern.Pattern) as SelectionItemPattern;
changeTab_aeTabPage.Select();
return;
}
tabElement = TreeWalker.RawViewWalker.GetNextSibling(tabElement);
}
}
</pre>
These are all the methods we need at the moment. The current implementation of the Tab control class can be found <a href="https://github.com/mkolisnyk/Sirius/blob/master/SiriusCSharp.Client/Sirius.Win32.Lib/Controls/Tab.cs">here</a>.
</p>
<h1> Sample test</h1>
<p>
Once we have all the necessary code we can create a sample test which shows how to use the API. The scenario is pretty simple:
<ol>
<li> Open <b>"Common Controls Examples"</b> application (can be found <a href="https://github.com/mkolisnyk/Sirius/blob/master/TestApps/win32/Controls.exe">here</a>)
<li> Get list of all tab names
<li> For each tab name perform the following:
<ol>
<li> Select tab by specific name
<li> Verify that the tab with specified name is selected
<li> Verify that the tab with the specified index is selected
</ol>
</ol>
The entire test code looks like:
<pre class="code">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using System.Diagnostics;
using <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib;
using <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>.Win32.Lib.Controls;
namespace <a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>CSharp.Client.Tests.Tests.Win32Lib
{
public class TabControlTests : ControlTestsCommon
{
protected Process controlsApp = null;
protected Window win;
protected Tab tab;
protected int mainHwnd;
[SetUp]
public void Before()
{
controlsApp = Process.Start( @"D:\Work\<a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>Dev\<a href="http://mkolisnyk.blogspot.com/search/label/Sirius">Sirius</a>\TestApps\win32\Controls.exe");
win = new Window();
mainHwnd = win.Find("", "Common Controls Examples", 0);
tab = new Tab();
}
[TearDown]
public void After()
{
controlsApp.Kill();
}
[Test]
[Category("TabControl")]
public void TestTabNames()
{
int htab = tab.Find(mainHwnd, 0);
String[] names = tab.GetItemNames(htab);
int index = 0;
foreach(String name in names)
{
tab.Select(htab,name);
Assert.AreEqual(index, tab.GetSelectedIndex(htab));
Assert.AreEqual(name, tab.GetSelectedItem(htab));
index++;
}
}
}
}
</pre>
Current implementation can be found <a href="https://github.com/mkolisnyk/Sirius/tree/master/SiriusCSharp.Client/SiriusCSharp.Client.Tests/Tests/Win32Lib">here</a>.
</p>
<h1>Summary</h1>
<p>
This sample just shows one way to use the <a href="http://msdn.microsoft.com/en-us/library/ms747327.aspx">UI Automation</a> library. In the future the library can be expanded to cover more control types. Moreover, it can be provided as a standalone package.
</p>
</body>
Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com1tag:blogger.com,1999:blog-2532302763215844416.post-26719026839747339812013-05-16T22:36:00.000+01:002013-05-16T22:36:08.798+01:00Sirius: half year milestone retrospective<a href="http://mkolisnyk.blogspot.co.uk/2012/12/sirius-ice-is-broken.html">6 months ago</a> I started working on the <a href="http://mkolisnyk.blogspot.co.uk/search/label/Sirius">Sirius</a> project. A lot of things have been done since then, a lot has been discovered and a lot has changed. Overall, much work is complete and more work lies ahead. So, the aim of this post is to summarize the work done so far and to define further directions.
<a name='more'></a>
<h1>What was done</h1>
<p>
During all this time the following things were done:
<ul>
<li> Core engine <a href="http://mkolisnyk.blogspot.co.uk/2012/12/sirius-first-steps.html">was developed</a> and the general concept was created and stabilized
<li> Modules for <a href="http://mkolisnyk.blogspot.co.uk/2013/01/sirius-java-writing-win32-client.html">Win32</a> and <a href="http://mkolisnyk.blogspot.co.uk/2013/03/sirius-adding-web-client-api.html">Web</a> controls support were added
<li> The client side was migrated to Java, Ruby and C#
<li> All the components are delivered as independent packages (for <a href="http://mkolisnyk.blogspot.co.uk/2013/03/sirius-now-in-maven.html">Maven</a>, NuGet and Rubygems)
<li> <a href="http://mkolisnyk.blogspot.co.uk/2012/12/sirius-dev-environment-setup.html">The entire build process</a> and a foundation for testing were created
</ul>
</p>
<h1>What wasn't done</h1>
<p>
Despite a lot of progress, there are some things which were planned but are not done yet. They are:
<ul>
<li> Most of the functionality is still a draft rather than something complete, as the first goal was to provide functional coverage
<li> Although a fully automated process was created, it still requires user interaction, as some modules need a specific user account to publish packages to remote locations
<li> Documentation is still weak. This blog describes the development process; however, more real-life examples are needed
<li> Only 3 programming languages are supported so far, although by design there shouldn't be any restrictions
</ul>
</p>
<h1>What's needed to be done</h1>
<p>
<ul>
<li> Expand functional coverage for the existing libraries, and add new functionality to cover such areas as .NET and mobile applications
<li> Expand the range of supported programming languages. The entire library should additionally be ported to Python, Scala and some other languages
<li> Share the entire infrastructure as much as possible so that each build can be done online
<li> Add more detailed testing so that every time the build succeeds it delivers a ready-to-use portion of the product
</ul>
</p>
<h1>Summary</h1>
<p>
Actually, there are a lot of things to do. Generally speaking, that list will never end. But anyway, I started from nothing (only an idea) and now there's some basis. So, I'll keep expanding it with new features. What will they be? Who knows. Let's see in another 6 months.
</p>Mykola Kolisnykhttp://www.blogger.com/profile/08484354844163560278noreply@blogger.com0