Showing posts with label Test Automation.

Sunday, 13 September 2015

Test Automation Best Practices


Introduction

Test automation is not a brand new thing. On the contrary, it's a well-known and long-established part of the software development process, so a lot of people have good and bad experience with it. As a result, there is a common set of best practices which has been formulated in different ways. Different people concentrate on different aspects of test automation: some focus on the technical parts, while others pay more attention to higher-level concerns. As always, the truth is somewhere in between.

So, what are the best practices in test automation?

For this post I've collected several posts/articles covering this topic (listed in the References section of this post) and I'll try to combine them into one common list of practices. I'll mainly concentrate on the topmost, UI-level automation; however, a lot of these practices are fully applicable to any level of automated testing.

Saturday, 13 June 2015

Cucumber JVM: Advanced Reporting 2. Detailed Report (HTML, PDF)


Introduction

Previously I described some basic report samples based on Cucumber-JVM. Since I use this reporting solution in practice quite frequently, the number of requirements and enhancements keeps growing, so since last time I've had to add some more features which can be of use.

Some of the new features are:

  • a Cucumber extension to support re-running failed tests
  • a detailed results report which supports the following:
    • detailed results report generation in HTML format
    • detailed results report generation in PDF format
    • screenshots included in both of the above reports

All these new features are included in version 0.0.5, so we can add the Maven dependency:
<dependency>
 <groupId>com.github.mkolisnyk</groupId>
 <artifactId>cucumber-reports</artifactId>
 <version>0.0.5</version>
</dependency>
or the same thing for Gradle:
'com.github.mkolisnyk:cucumber-reports:0.0.5'

Since the Cucumber failed test re-run functionality was described before in this post, I'll concentrate more on the detailed results report features.

Monday, 11 May 2015

The Future of Test Automation Frameworks


It's always interesting to know the future in order to react to it properly. That's especially true in the world of technology, where we constantly get something new, where what was just a subject of science fiction yesterday becomes observable reality today. Automated testing is no exception. We should be able to catch the right trend and be prepared for it; our professional growth actually depends on it. Should we stick to the technologies we use at the moment, or should we dig into areas which aren't well developed yet but still have big potential? To answer that question we need to understand how test automation has been evolving and what the promising areas are. Based on that we can identify what to expect next.

So, let's look at the evolution of test automation frameworks to see how they evolve and where we can grow.

Wednesday, 18 February 2015

Problem Solved: run TestNG test from JUnit using Maven


Background

Recently I was developing an application component which was supposed to run with TestNG; actually, it was another extension of a TestNG test. So, in order to test that functionality I had to emulate a TestNG suite run within the test body. Performing a TestNG run programmatically wasn't a problem. It can be done using code like:

import org.testng.TestListenerAdapter;
import org.testng.TestNG;

// ... inside the test body:
TestListenerAdapter tla = new TestListenerAdapter();
TestNG testng = new TestNG();
testng.setTestClasses(new Class[] {SomeTestClass.class});
testng.addListener(tla);
testng.run();
where SomeTestClass is an existing TestNG class. This can even be used with JUnit (which was my case, as I mainly used JUnit for the test suite). So, technically, TestNG can be executed from a JUnit test and vice versa.

Problem

The problem appeared when I tried to run a JUnit test performing a TestNG run via Maven. Normally, tests are picked up by the Surefire plugin, which can be included in the Maven pom.xml file with an entry like this:

 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
 </plugin>
If you use JUnit only, it picks up the JUnit tests. In my case I also used JUnit for running test suites, but one test required a TestNG test class (actually, a class with the TestNG @Test annotation), so I had to add the TestNG Maven dependency as well. In this case only the TestNG class was executed during the test run. So, how do we let Maven know that we want to run exactly the JUnit tests and not the TestNG ones while both are present within the same Maven project?
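One commonly used way to resolve this kind of conflict (a sketch based on general Surefire behaviour, not necessarily the exact fix described later in the post) is to pin the Surefire provider explicitly, so that Surefire uses its JUnit runner even when TestNG is on the classpath:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
  <dependencies>
    <!-- Forces the JUnit 4.7+ provider; without this, Surefire may
         auto-detect TestNG and run only TestNG-annotated classes -->
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.18.1</version>
    </dependency>
  </dependencies>
</plugin>
```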

Sunday, 18 January 2015

Aerial: Introduction to Test Scenarios Generation Based on Requirements


In one of the previous articles dedicated to moving to executable requirements, I described some high-level ideas about how such requirements should look. The main idea is that a document containing requirements/specifications can be the source for generating test cases. But in the previous article we only made a brief overview of how it should work in theory. Now it's time to turn theory into practice. In this article I'll give a brief introduction to a new engine I've started working on. So, let me introduce Aerial, the magic box which makes our requirements executable.

In this article I'll describe the main principles of this engine and provide some basic examples to give an idea of what it is and where it can be used.

Tuesday, 16 December 2014

Automated Testing: Moving to Executable Requirements


A lot of test automation tools and frameworks aim to combine the technical and non-technical aspects of test automation, mainly the combination of test design and automated implementation. Thus, we can delegate test automation to people without programming skills, as well as make the solution more efficient through maximal functionality reuse. Eventually, test automation tools are usually capable of supporting some sort of DSL which reflects the business specifics of the system under test. A lot of test automation software has been created to provide such capabilities. The main thing they all achieve is blurring the border between manual and automated testing. As a result, we usually control the correspondence between test scenarios and their automated implementation.

But that's not enough, as we also have requirements, and if we change them we should spend time making sure the requirements stay in line with the tests and their automated implementation. Thus, we need an approach which combines requirements, tests and auto-tests into something uniform. One such approach is called Executable Requirements. In this post I'll try to describe existing solutions for that and some possible directions to move in.

Sunday, 7 September 2014

Mutation Testing Overview


Introduction

It's always good to have the entire application code covered with tests. It's also nice to have some tracking of the features we implement and test. All of this provides an overall picture of how actual behaviour corresponds to expectations, which is actually one of the main goals of testing. But there are cases when coverage metrics don't work or don't show the real picture. E.g. we can have a test which invokes some specific functionality and provides 100% coverage by any accessible measure, while none of the tests contains any verifications. That means we potentially have problems there, but nothing alerts us about them. Or we may have a lot of empty tests which are always green. We can still obtain coverage metrics to find out whether the existing number of tests is enough for some specific module, but additionally we should make sure that our tests are good at detecting potential problems. One way to achieve that is to inject a modification which is supposed to lead to an error and make sure our tests are able to detect the problem. This approach of certifying tests against an intentionally modified application is called Mutation Testing.

The main feature of this testing type is that we do not discover application issues but rather certify the tests' ability to detect errors. Unlike "traditional" testing, we initially know where the bug is expected to appear (as we insert it ourselves), and we have to make sure that our testing system is capable of detecting it. So, mutation testing is mainly targeted at checking the quality of the tests. The above examples of empty test suites or tests without verifications are corner cases, and they are quite easy to detect. But in real life there is an interim stage when tests have verifications but still have gaps. In order to make testing solid and reliable we need to close such gaps, and mutation testing is one of the best ways to detect them.

In this article I'll describe the main concepts of mutation testing as well as potential ways to perform this testing type, with all the relevant pros and cons.
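To make the idea concrete, here is a minimal, self-contained sketch of mutation testing (the method names and the mutant are invented for illustration only): we "mutate" a max() implementation by flipping an operator and check which test detects, i.e. kills, the mutant.

```java
// A minimal, self-contained sketch of the mutation testing idea.
// The method names and the mutant are invented for illustration only.
public class MutationDemo {

    // Original implementation under test.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    // Mutant: the comparison operator is flipped (> became <).
    static int maxMutant(int a, int b) {
        return a < b ? a : b;
    }

    // A weak test: on equal inputs the original and the mutant agree,
    // so this check cannot kill the mutant.
    static boolean weakTestKillsMutant() {
        return max(3, 3) != maxMutant(3, 3); // false: mutant survives
    }

    // A stronger test: distinct inputs expose the flipped operator.
    static boolean strongTestKillsMutant() {
        return max(2, 3) != maxMutant(2, 3); // true: mutant is killed
    }

    public static void main(String[] args) {
        System.out.println("weak test kills mutant: " + weakTestKillsMutant());
        System.out.println("strong test kills mutant: " + strongTestKillsMutant());
    }
}
```

A surviving mutant, as in the weak test above, signals a gap in the test's verifications even though the coverage numbers look perfect.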

Thursday, 26 December 2013

Sirius: 1 year retrospective

A year ago I started working on the Sirius project. Six months ago there was the first retrospective, where I summarized all the achievements and future goals. Generally, the last 6 months were not the most productive. Nevertheless, there were some achievements even then. So, in this post I'll summarize again where we are with Sirius now.

Sunday, 15 September 2013

GitHub: organizing and automating test tracking with Java

GitHub provides an issue tracker where we can keep our tasks, and the GitHub API provides the ability to access those resources programmatically. Thus, with the help of the API I could integrate JBehave and GitHub so that I can run JBehave tests stored in the GitHub tracker. But that's not the only use case.

With the help of the API I can set up the correspondence between tests and features. This makes it possible to trace tests to the features they apply to. Also, what if we use something other than JBehave, e.g. Cucumber or SpecFlow, which use flat files for input? In this article I'll describe how to:

  • Organize tests with GitHub tracker
  • Automatically generate traceability matrix
  • Automatically generate feature files based on the content of the test issues

Wednesday, 20 March 2013

JBehave vs Cucumber JVM comparison

The interest in the BDD approach is growing, and more and more different engines appear in that space. However, there are some well-known players in the area. We can compare their features, but the final choice is still based on the technology used. E.g. if we compare Behat and SpecFlow we can definitely find some gaps and advantages, but if we're going to use, say, C#, then Behat is of no interest here, and vice versa.

The more interesting case is when two or more engines appear in the same technology area, and it's twice as interesting when they are among the most popular engines for one of the most popular technologies. In this post we'll compare JBehave and Cucumber for their applicability to Java test automation, using the same scale as in the BDD Engines Comparison post with some additional detail on features. Additionally, since we should compare apples to apples, there's no need to dig into the Ruby implementation of Cucumber; it's Cucumber's Java capabilities that are worth checking against JBehave to make the comparison valuable.

All right, we have JBehave, we have Cucumber, so ...

LET THE MORTAL COMBAT COMPARISON BEGIN!!!

Monday, 11 March 2013

Web test tools list

Introduction

In this post I'll collect information about existing web test automation tools (both from vendors and open source). It is presented as a catalogue with basic information rather than as a comparison.

NOTE
The content of this post will be appended/updated from time to time to reflect the most recent changes. Also, some information may not be very precise, as it is not widely accessible. So, any comments with corrections are welcome.

Thursday, 10 January 2013

Sirius: core Win32 functionality support

Once we're done with the infrastructure, it's time to work on development itself. Now I'll start working on the functionality for interacting with Win32 objects. In this article I'll describe the basic preparations for Win32 interactions as well as the core functionality to capture a window on screen and perform some manipulations with it. And of course, this activity is done in the scope of Sirius development, so it is bound to the entire architecture of the platform.

Wednesday, 26 December 2012

BDD instructions writing standards

Introduction

This document describes rules for writing BDD instructions. It is used as a guideline for properly stated and formatted instructions. Natural-language instructions used for BDD are not absolutely natural language; they are actually a "naturalized" form of some specific command with the ability to vary some part. So, on the technical side of BDD, strict rules are still necessary to avoid different forms of essentially the same phrase with small differences. It's not essential for understanding, but it is critical for the technical implementation.

This document contains examples of correct and incorrect phrases. If some phrase is described as incorrect, it only means that it violates the standards, though it may be properly built from the language point of view. These standards do not teach English; they just describe which subset of possible phrases is accepted.

And again, the standards are designed for uniformity only; there is no strict dogma in them. If some phrases seem more appropriate but violate the rules, it's OK to adjust the rules to current needs rather than restrict the needs with the existing rules. The main thing is that there should be some uniform rule set which is kept to within the project. This document can be a basis for such standards.
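As a hypothetical illustration (the step wording below is invented, not taken from any real standard), consider two Gherkin steps that mean exactly the same thing; allowing both would force a duplicate step implementation, which is precisely what a uniform rule set prevents:

```gherkin
# Acceptable: matches the agreed canonical form
When I click the "Login" button

# Violates the standard: grammatically fine, but a second wording
# for the same action would require a second step definition
When the "Login" button is clicked
```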

Saturday, 22 December 2012

Sirius: First steps

In the previous article we created a basic skeleton for the entire solution. Now it's time to add the content. In this article I'll take a sample method, write its server part and create clients for all the programming languages we have already prepared infrastructure for. The example is simple, but it's enough to find out how best to design the entire solution and what the base flow for adding new functionality is. It should also clearly show how such a solution is best used. So, in this article I'll describe the following:

  1. Sample server side method creation (actually that would be the sample of SOAP web-service creation on Java)
  2. Java client creation
  3. Ruby client creation
  4. C# client creation

OK. We're good to go.

Sunday, 9 September 2012

Test Automation ROI calculation Part 1: Simple iterative development model

Introduction

Test automation isn't done just for fun. It has a purpose. Some of the main purposes are:
  • Increase testing team capacity
  • Minimize testing cycle without changing testing team capacity
  • Increase testing coverage without extra resources
  • Minimize human factor
But all the above and many other benefits can be expressed in a single phrase: test automation should bring profit. In other words, there are many ways to improve the testing effort, and each of them requires some investment. I can hire more people, or I can invest in software/hardware which helps me do things faster. But if the investment into some improvement doesn't bring profit, that investment is useless. This applies to test automation as well. Yes, you can buy some expensive tools and build a complex infrastructure, but if the same can be done in some other way which requires less investment, the second option is preferable. So, before proposing test automation efforts we should realize what advantage we expect to gain from them. One widespread measure is the return on investment (ROI). So, how should we calculate it for a test automation project? Let's build a process model to see what we can calculate.
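As a starting point, the classic ROI formula can be sketched as follows; the hour figures are hypothetical and only illustrate the shape of the calculation:

```java
// A sketch of the basic ROI formula; the hour figures are hypothetical
// and only illustrate the shape of the calculation.
public class RoiSketch {

    // ROI = (benefit - investment) / investment
    static double roi(double benefit, double investment) {
        return (benefit - investment) / investment;
    }

    public static void main(String[] args) {
        // Suppose automation saves 150 hours of manual testing per release
        // cycle, while building and maintaining it costs 100 hours.
        double investment = 100.0;
        double benefit = 150.0;
        System.out.printf("ROI = %.2f%n", roi(benefit, investment)); // prints "ROI = 0.50"
    }
}
```

A positive ROI means the effort pays for itself; anything at or below zero suggests the cheaper alternative should win.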

Sunday, 19 August 2012

Test Automation tools: trends

When deciding which tool is more suitable for test automation, we should always take into account the popularity of each specific tool. Why? Because, firstly, it's about people: what's the point in picking a tool which has no users? We would have to invest money into training, which is an extra expense. Secondly, the more popular a tool is, the more materials are available, on the internet or elsewhere. So, we should look at some trends to identify which tool is growing in popularity and which one is becoming history.

Friday, 8 June 2012

BDD engines comparison (Cucumber, Freshen, JBehave, NBehave, SpecFlow, Behat)

Introduction

Cucumber is not the only engine supporting natural-language instructions; it's just one implementation of a natural-language instruction interpreter. The actual language used to write the tests is called Gherkin, and it has different implementations adapted to different programming languages. Thus we have Cucumber (Ruby), Freshen (Python), JBehave (Java), NBehave (.NET), SpecFlow (.NET) and Behat (PHP).

This list isn't complete, as there are many other similar engines which are simply less popular. All of them share a common set of supported features, but there are some restrictions and abilities specific to each engine. So, the aim of this post is to collect the useful features of each engine listed above and present them in a comparable form.

Tuesday, 29 May 2012

Cucumber: How to avoid major mistakes

Recently I came across several posts criticizing Cucumber and the approach itself, and there are many others like them. Generally speaking, the idea is that we should take off our rose-tinted glasses and look at the real world. Cucumber (and all its analogs designed for other programming languages) is quite an attractive solution, providing the ability to write descriptive automated tests. But this attractiveness hides a myriad of ways to shoot yourself in the foot. And the more people use the approach of executable specifications, the more people end up with bad experience. Why does this happen?

Tuesday, 1 March 2011

Intro: Our own test automation tool

All the time I have to deal with some test automation tool, and all the time I have to live with each tool's difficulties. If it's a big vendor tool, it's definitely something VBScript/JScript-based, with all the related restrictions, or at least the restrictions of the programming language. That's critical, because wizards alone are not enough to make robust and reliable tests. Sometimes I have to do quite precise tuning to make my solution work the way I need. Moreover, such wizards also consume memory and make overall work slower.

Monday, 10 May 2010

Test Automation infrastructure in Java

In general, it's good when you set up your automated testing using the same programming language the application under test is written in. It provides integration with the unit and integration tests, which typically interact at the code level.

Another good thing is that all the infrastructure and approaches generally used for development are applicable for test automation.