Wednesday, 25 October 2017

Video Course: Automated UI Testing in Android

This is the Automated UI Testing in Android video course I created. It continues a series of courses dedicated to step-by-step UI automation framework development. Just click on the image below for more details.

Tuesday, 2 May 2017

Video Course: Automated UI Testing in C#

This is the Automated UI Testing in C# video course I created. It continues a series of courses dedicated to step-by-step UI automation framework development. Just click on the image below for more details.

Thursday, 29 December 2016

Video Course: Automated UI Testing in Java

OK. This is the Automated UI Testing in Java video course I created. Just click on the image below for more details.

Sunday, 18 October 2015

Cucumber JVM: Advanced Reporting 3. Handling FileNotFound or Invalid JSON Errors

Introduction

Since advanced Cucumber JVM reporting was initially introduced and then extended with enhancements, the most frequent question has been about the error which appears when the input file is not found or is improperly formatted. Normally such an error is accompanied by output like:

java.io.IOException: Input is invalid JSON; does not start with '{' or '[', c=-1
 at com.cedarsoftware.util.io.JsonReader.readJsonObject(JsonReader.java:1494)
 at com.cedarsoftware.util.io.JsonReader.readObject(JsonReader.java:707)
 at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.readFileContent(CucumberResultsOverview.java:81)
 ...
or
java.io.FileNotFoundException
 at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.readFileContent(CucumberResultsOverview.java:76)
 at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.executeFeaturesOverviewReport(CucumberResultsOverview.java:189)
 ...
As it is one of the most frequent questions, I decided to create a separate post explaining its causes and the way to fix it.
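
As a quick illustration of what is usually behind such errors (my assumption about the typical setup, not necessarily the exact fix this post goes on to describe): the report builder can only read a JSON file that Cucumber actually produced, so the JSON plugin must be enabled in the runner and its output path must match the path passed to the report generator. A minimal runner sketch, assuming a Cucumber-JVM version that supports the plugin option:
import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// Sketch only: the class name and paths are placeholders.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = {"src/test/resources/features"},
        plugin = {"json:target/cucumber.json"}) // this file is what the report classes read
public class SampleRunnerTest {
}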

Sunday, 13 September 2015

Test Automation Best Practices

Introduction

Test automation is not a brand new thing. Moreover, it is a pretty well-known and quite old part of the overall software development process. Thus, a lot of people have both good and bad experience with it. As a result, there is a common set of best practices which have been formulated in different ways. Different people concentrate on different aspects of test automation: some focus on the technical parts while others pay more attention to higher-level concerns. As always, the truth is somewhere in between.

So, what are the best practices in test automation?

For this post I've collected several posts/articles covering this topic (the list of them can be found in the References section of this post) and I'll try to combine them into one common list of practices. I'll mainly concentrate on the top-most, UI level of automation; however, a lot of the practices are fully applicable to any level of automated testing.

Tuesday, 30 June 2015

WebDriving Test Automation

Introduction

WebDriver has shown growing popularity for many years, and for at least the past 4 years it has been the number one player on the market of web test automation solutions. The growth hasn't stopped yet. WebDriver exposes an open interface, and its server-side API is documented as a W3C standard. Thus, many people can implement their own back-end API and expand technology support. So, WebDriver can become more than just another option in the list of web test automation tools. It may become (if it hasn't already) the most widespread test automation platform, so that the term UI test automation may be replaced with another term like Web-Driving. In this post I'll describe where the WebDriver solution "web-drives" us to and where the areas of further growth are.

Saturday, 13 June 2015

Cucumber JVM: Advanced Reporting 2. Detailed Report (HTML, PDF)

Introduction

Previously I described basic report samples based on Cucumber-JVM. Since I use this reporting solution in practice quite frequently, the number of requirements and enhancements keeps growing, so since the last post I had to add some more features which may be of use.

Some of the new features are:

  • Cucumber extension to support failed tests re-run
  • Detailed results report which supports the following:
    • Detailed results report generation in HTML format
    • Detailed results report generation in PDF format
    • Screenshots are included in both of the above reports
All these new features are included in version 0.0.5, so we can add the Maven dependency:
<dependency>
 <groupId>com.github.mkolisnyk</groupId>
 <artifactId>cucumber-reports</artifactId>
 <version>0.0.5</version>
</dependency>
or the same thing for Gradle:
'com.github.mkolisnyk:cucumber-reports:0.0.5'

Since the Cucumber failed tests re-run functionality was described before in this post, I'll concentrate more on the detailed results report features.
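
For orientation, here is a minimal usage sketch of generating the detailed report from Java. It is only my assumption that the detailed report class follows the same conventions as the overview report class; the class name CucumberDetailedResults, the setters and the execute call below should all be verified against the library documentation:
import com.github.mkolisnyk.cucumber.reporting.CucumberDetailedResults;

public class DetailedReportSample {
    public static void main(String[] args) throws Exception {
        // All names below are assumptions based on the library's conventions.
        CucumberDetailedResults report = new CucumberDetailedResults();
        report.setOutputDirectory("target/");             // where the generated report goes (assumed setter)
        report.setOutputName("cucumber-results");         // base name of the report file (assumed setter)
        report.setSourceFile("./target/cucumber.json");   // Cucumber JSON results used as input (assumed setter)
        report.executeDetailedResultsReport(true, true);  // flags assumed to control PDF output etc.
    }
}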

Tuesday, 26 May 2015

Cucumber JVM + JUnit: Re-run failed tests

Automated tests should run reliably and provide predictable results. At the same time, there are spontaneous temporary errors which distort the overall picture while reviewing test results, and it's really annoying when tests fail on some temporary problem and then mainly pass on the next run. Of course, one thing is when we forgot to add a waiting timeout for an element to appear before interacting with it. But there are cases when the reason for such a temporary problem lies beyond the automated tests' implementation and is mainly related to the environment, which may cause short delays or downtime. So, the normal reaction to that is to re-run the failed tests and confirm the functionality is fine. But this is too routine a task and it doesn't really require any intellectual work to perform. It's simply additional logic which handles the test result state and triggers a repetitive run in case of an error. If a test fails permanently we'll still see the error, but if it passes after a re-run then the problem doesn't require too much attention.

And generally, if you simply stepped out, it's not a reason to fail.

This problem is not new and has already been solved for many particular cases; e.g., here is a JUnit solution example. In this post I'll show you how to perform the same re-run for Cucumber-JVM in combination with JUnit, as I'm actively using this combination of engines and it is quite popular. The solution shown in the previous link doesn't really fit Cucumber, as each specific JUnit test in Cucumber-JVM corresponds to a specific step rather than an entire scenario. Thus, the re-run functionality for this combination of engines looks a bit different. So, let's see how we can re-run our Cucumber tests in JUnit.
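
For context, this is roughly what the plain-JUnit flavour of such retry logic looks like (a generic sketch of the idea the linked solution is based on, not the Cucumber-specific approach this post builds):
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Generic JUnit retry rule: re-runs a failed test up to maxAttempts times.
public class RetryRule implements TestRule {
    private final int maxAttempts;

    public RetryRule(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Throwable lastError = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        base.evaluate();   // run the test body
                        return;            // passed, no need to retry
                    } catch (Throwable t) {
                        lastError = t;     // remember the failure and try again
                    }
                }
                throw lastError;           // still failing after all attempts
            }
        };
    }
}

Attached via @Rule public RetryRule retry = new RetryRule(3);, such a rule retries each JUnit test method. With Cucumber-JVM, however, that would retry a single step rather than a whole scenario, which is exactly why a different mechanism is needed here.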

Monday, 11 May 2015

The Future of Test Automation Frameworks

It's always interesting to know the future in order to be able to react to it properly. It's especially true for the world of technology, where every time we get something new, something which was just a subject of science fiction yesterday becomes observable reality today. Automated testing is not an exception here. We should be able to catch the proper trend and be prepared for it. Actually, our professional growth depends on that. Should we stick to the technologies we use at the moment, or should we dig more into areas which aren't well developed yet but still have big potential? In order to find an answer to that question we need to understand how test automation has been evolving and what the promising areas are. Based on that we can identify what we should expect next.

So, let's observe the evolution of test automation frameworks to see how they evolve and where we can grow.

Wednesday, 6 May 2015

Problem Solved: Cucumber-JVM running actions before and after tests execution with JUnit

Background

It is a frequent case that we need to perform some actions before and/or after the entire test suite execution. Mainly, such actions are needed for global initialization/cleanup, additional reporting, or some other kind of pre/post-processing. There may be many different reasons for that, and some test engines provide such an ability: e.g. TestNG has the BeforeSuite and AfterSuite annotations, and JUnit has test fixtures which may run before/after a test class (it's not really the same, but when we use Cucumber-JVM it's very close to what we need).

Problem

The problem appears when you want to add some actions at the very latest or very earliest stages of test execution and you use Cucumber-JVM with JUnit. In my case I wanted to add some report post-processing to generate an advanced Cucumber report. In this case JUnit fixtures didn't help, as the AfterClass-annotated method runs before Cucumber generates the final reports.

At the same time, the question of adding @BeforeAll and @AfterAll hooks was raised on the Cucumber side as well, and there was even a solution proposed. Unfortunately, the authors decided to revert those changes as there were some cases where it did not work.

So, the problem is that I need something to run after the entire Cucumber-JVM suite is done, but neither Cucumber nor JUnit gives me a built-in capability for doing this.

Solution
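
The general shape of the workaround is to wrap the Cucumber runner itself. Below is a minimal sketch of that idea, assuming Cucumber-JVM 1.x where the JUnit runner class is cucumber.api.junit.Cucumber; the runner class name and the hook methods are mine, not a published API:
import java.io.IOException;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.model.InitializationError;

import cucumber.api.junit.Cucumber;

// Custom runner: executes code before and after the whole Cucumber-JVM suite.
public class ExtendedCucumberRunner extends Cucumber {

    public ExtendedCucumberRunner(Class<?> clazz) throws InitializationError, IOException {
        super(clazz);
    }

    @Override
    public void run(RunNotifier notifier) {
        beforeSuite();           // global initialization
        try {
            super.run(notifier); // Cucumber finalizes its reports inside this call
        } finally {
            afterSuite();        // e.g. advanced report post-processing
        }
    }

    private void beforeSuite() { /* whole-suite setup */ }

    private void afterSuite() { /* whole-suite teardown / report generation */ }
}

The test class would then be annotated with @RunWith(ExtendedCucumberRunner.class) instead of @RunWith(Cucumber.class), so the after-suite code runs once the final reports are already written.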

Sunday, 3 May 2015

Cucumber JVM: Advanced Reporting

Advanced Cucumber Reporting Introduction

Cucumber JVM contains a set of predefined reports available via the plugin option. By default we get some raw reports. Some of them are ready to be shown to end users (e.g. the HTML report) while others still require some post-processing, like the JSON reports (both for usage and results). Also, the available standard HTML report isn't really good enough. It's (partially) good for analysis, but if we want to provide some kind of overview information we don't need so many details, and we may need some more visualized statistics rather than just one simple table.

Well, there are some already existing solutions for such advanced reporting, e.g. Cucumber Reports by masterthought.net. That's a really nice results reporting solution for Cucumber JVM. But when I tried to apply it, I encountered several problems with Maven dependency resolution and report processing. That became a reason for me to look at something custom, as I couldn't smoothly integrate this reporting into my testing solution. Additionally, it covers only results reporting, while I'm also interested in steps usage statistics to make sure that we use our Cucumber steps effectively. And apparently I didn't find a reporting solution covering this area in an appropriate way for Cucumber JVM.

To be honest, I'm not really a big fan of writing custom reporting solutions at all, as for test automation engineers the existing information is more than enough in most cases. But if you need to provide something more specific, and preferably in an e-mailable form to send to other members of the project team, we need something else. That's why I created some components which generate Cucumber reports such as results and usage reports (see the sample screenshots below).

The above samples show e-mailable reports which mainly provide results overview information that can be sent via e-mail, as well as an additional HTML report summarizing usage statistics. In this post I'll describe my reporting solution with some examples and detailed explanations.
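
To give a feeling of how such a report is produced, here is a minimal usage sketch based on the overview report class (the class name and executeFeaturesOverviewReport exist in the library; the setter names and paths are my assumptions, so check the project documentation):
import com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview;

public class OverviewReportSample {
    public static void main(String[] args) throws Exception {
        CucumberResultsOverview report = new CucumberResultsOverview();
        report.setOutputDirectory("target/");            // where the HTML overview is written (assumed setter)
        report.setOutputName("cucumber-results");        // base name of the generated file (assumed setter)
        report.setSourceFile("./target/cucumber.json");  // Cucumber JSON results used as input (assumed setter)
        report.executeFeaturesOverviewReport();          // generates the features overview report
    }
}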

Sunday, 22 February 2015

Aerial: Create New Project Using Archetype

Since version 0.0.5, Aerial has the ability to generate Java projects from archetypes to get a sample working project quickly. In this post I'll describe the steps for doing this.

Wednesday, 18 February 2015

Problem Solved: run TestNG test from JUnit using Maven

Background

Recently I've been developing an application component which was supposed to run with TestNG. Actually, it was another extension of a TestNG test. So, in order to test that functionality I had to emulate a TestNG suite run within the test body. Well, performing a TestNG run programmatically wasn't a problem. It can be done using code like:

import org.testng.TestListenerAdapter;
import org.testng.TestNG;

.......
        TestListenerAdapter tla = new TestListenerAdapter();
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] {SomeTestClass.class});
        testng.addListener(tla);
        testng.run();
.......
where SomeTestClass is an existing TestNG class. This can even be used with JUnit (which was my case, as I mainly used JUnit for the test suite). So, technically, TestNG can be executed from a JUnit test and vice versa.

Problem

The problem appeared when I tried to run a JUnit test performing a TestNG run via Maven. Normally tests are picked up by the Surefire plugin, which can be included in the Maven pom.xml file with an entry like this:

 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
 </plugin>
If you use JUnit only, it picks up JUnit tests. In my case I also used JUnit for running the test suites, but one test required a TestNG test class (actually a class with the TestNG @Test annotation), so I had to add the TestNG Maven dependency as well. In this case only that TestNG class was executed during the test run. So, how do we let Maven know that we want to run exactly the JUnit tests, not the TestNG ones, while both of them are present within the same Maven project?
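
Since the excerpt stops at the question, here is the commonly documented Surefire approach as a hedged sketch (I can't tell from the excerpt whether this is exactly what the post settles on): pin the provider explicitly so that Surefire keeps using the JUnit 4.7+ provider even though TestNG is also on the classpath:
 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <dependencies>
   <!-- Forces the JUnit provider; without it Surefire may switch to TestNG -->
   <dependency>
    <groupId>org.apache.maven.surefire</groupId>
    <artifactId>surefire-junit47</artifactId>
    <version>2.18.1</version>
   </dependency>
  </dependencies>
 </plugin>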

Monday, 9 February 2015

NBehave vs SpecFlow Comparison

It's always good when you use some technology and have a choice between various tools/engines. In some cases, though, it creates a problem, as sometimes happens with BDD engines, especially when we have to choose between similar engines which are widely used and at first glance seem to be identical. Some time ago I made a JBehave vs Cucumber-JVM comparison to spot some differences and comparative characteristics of the most evolved engines in the Java world. And as I can see from the page view statistics, it's quite an interesting topic. At the same time there is the .NET technology stack, which has another set of BDD engines, and they are quite popular as well. So, in this world we may encounter a question like: what is better, NBehave or SpecFlow?

The answer isn't so trivial. When I did a cross-platform BDD engines comparison almost 3 years ago, some of the engines weren't good enough, or at least their documentation was at a low level. At that time NBehave didn't seem to look good. But since then a lot of things have changed, and now both NBehave and SpecFlow have turned into full-featured Gherkin interpreter engines. So, the choice of the better tool among them isn't so trivial anymore. It means we'll start a new comparison between NBehave and SpecFlow.

So, let's find out who's the winner in this battle!

Sunday, 18 January 2015

Aerial: Introduction to Test Scenarios Generation Based on Requirements

In one of the previous articles dedicated to moving to executable requirements I described some high-level ideas about how such requirements should look. The main idea is that a document containing requirements/specifications can be the source for generating test cases. But in the previous article we just made a brief overview of how it should work in theory. Now it's time to turn theory into practice. In this article I'll make a brief introduction to a new engine I've started working on. So, let me introduce Aerial, the magic box which makes our requirements executable.

In this article I'll describe the main principles of this engine and provide some basic examples to give an idea of what it is and where it is used.

Tuesday, 16 December 2014

Automated Testing: Moving to Executable Requirements

A lot of test automation tools and frameworks are targeted at combining the technical and non-technical aspects of test automation. Mainly it's about the combination of test design and automated implementation. Thus, we can delegate test automation to people without programming skills as well as make the solution more efficient due to maximal functionality re-use. Eventually, test automation tools are usually capable of creating some sort of DSL which reflects the business specifics of the system under test. There is a lot of test automation software created to provide such capabilities. The main thing they all achieve is that they mitigate the border between manual and automated testing. As a result, we usually control the correspondence between test scenarios and their automated implementation.

But it's not enough, as we also have requirements, and if we change them we have to spend time making sure that the requirements are in line with the tests and their automated implementation. Thus, we need an approach which combines requirements, tests, and auto-tests into something uniform. One such approach is called Executable Requirements. In this post I'll try to describe existing solutions for that and some possible ways to move forward.

Sunday, 21 September 2014

Measure Automated Tests Quality

Introduction

There's one logical recursion I encounter with test automation. Test automation is about developing software targeted at testing some other software. So, the output of test automation is another piece of software. This is one of the reasons for treating test automation as a development process (which is one of the best practices for test automation). But how are we going to make sure that the software we create for testing is good enough? Indeed, when we develop software we use testing (and test automation) as one of the tools for checking and measuring the quality of the software under test.

So, what about software we create for test automation?

On the other hand, we use testing to make sure that the software under test is of acceptable quality. In the case of test automation we use another piece of software for this. And in some cases this software becomes complicated as well. So, how can we rely on non-tested software for making any conclusions about the target product we develop? Of course, we can make test automation simple, but that's not a universal solution. So, we should find some compromise where we use reliable software to check the target software (the system under test). Also, we should find out how deep the testing can be and how we can measure that.

So, the main questions which appear here are:

  • How can we identify that the automated tests we have are enough to measure quality of end product?
  • How can we identify that our tests are really good?
  • How can we keep quality control on our automated tests?
  • How can we identify if our tests are of acceptable complexity?
In this article I'll try to find out answers to many of those questions.

Sunday, 7 September 2014

Mutation Testing Overview

Introduction

It's always good to have the entire application code covered with tests. Also, it's nice to have some tracking of the features we implement and test. All this provides an overall picture of how actual behaviour corresponds to expectations. That's actually one of the main goals of testing. But there are cases when all the coverage metrics don't work or do not show the actual picture. E.g. we can have a test which invokes some specific functionality and provides 100% coverage by any accessible measure, yet none of the tests contains any verifications. It means that we potentially have some problems there, but nothing alerts us about that. Or we may have a lot of empty tests which are always green. We can still obtain coverage metrics to find out whether the existing number of tests is enough for some specific module. But additionally we should make sure that our tests are good at detecting potential problems. So, one way to achieve that is to inject some modification which is supposed to lead to an error and make sure that our tests are able to detect the problem. The approach of certifying tests against an intentionally modified application is called Mutation Testing.

The main feature of this testing type is that we do not discover application issues but rather certify the tests' ability to detect errors. Unlike "traditional testing", we initially know where the bug is expected to appear (as we insert it ourselves) and we have to make sure that our testing system is capable of detecting it. So, mutation testing is mainly targeted at checking the quality of the tests. The above examples of empty test suites or tests without verifications are corner cases and they are quite easy to detect. But in real life there's an interim stage when tests have verifications but they may still have some gaps. In order to make testing solid and reliable we need to mitigate such gaps, and mutation testing is one of the best ways to detect them.
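
A tiny illustrative example (mine, not from the original article): a single boundary mutation and the test that "kills" it:
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class DiscountTest {

    // Production code under test: customers with more than 10 orders get a discount.
    static class Discount {
        boolean isEligible(int orders) {
            return orders > 10; // a typical mutant changes this to "orders >= 10"
        }
    }

    @Test
    public void exactlyTenOrdersIsNotEligible() {
        // Passes against the original code but fails against the ">=" mutant,
        // so this test "kills" the mutant; without it the mutant would survive,
        // signalling a gap in the verifications.
        assertFalse(new Discount().isEligible(10));
    }
}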

In this article I'll describe the main concepts of mutation testing as well as potential ways to perform this testing type, with all the relevant pros and cons.

Thursday, 26 December 2013

Sirius: 1 year retrospective

1 year ago I started working on the Sirius project. 6 months ago there was the first retrospective where I summarized all achievements and future goals. Generally, the last 6 months were not the most productive. Nevertheless, there were some achievements even then. So, in this topic I'll summarize again where we are with Sirius now.