Tuesday 26 May 2015

Cucumber JVM + JUnit: Re-run failed tests

Automated tests should run reliably and produce predictable results. In practice, though, spontaneous temporary errors distort the picture when reviewing test results, and it is really annoying when a test fails on some transient problem and then passes on the next run. Sometimes, of course, we simply forgot to add a timeout to wait for an element to appear before interacting with it. But in other cases the cause of such a temporary problem lies outside the test implementation and is related to the environment, which may suffer short delays or downtime. The natural reaction is to re-run the failed tests and confirm the functionality is fine. But this is a routine task that requires no real intellectual work: it is simply additional logic which inspects the test result and triggers a repeated run on error. If the test fails permanently we will still see the error; if the test passes after a re-run, the problem does not deserve much attention.

And generally, a brief stumble is not a reason to fail.

This problem is not new, and it has already been solved for many particular cases; for example, here is the JUnit solution example. In this post I'll show how to implement the same re-run behaviour for Cucumber-JVM in combination with JUnit, since I actively use this combination of engines and it is quite popular. The solution in the link above doesn't really fit Cucumber, because in Cucumber-JVM each individual JUnit test corresponds to a specific step rather than to an entire scenario. The re-run functionality for this combination of engines therefore looks a bit different. So, let's see how we can re-run our Cucumber tests in JUnit.

Major approach for re-run and areas of impact

The general idea is to update the existing engine classes with additional processing which does the following:

  1. Handles test execution and status monitoring
  2. If the current test fails, initiates a re-run
  3. If the test still fails after some maximum number of re-runs, escalates the error
  4. If the test passes on re-run, marks the corresponding scenario as passed

Since this is about handling test runs themselves, the hook mechanism doesn't seem to be the best fit. Also, there is an extra difficulty in handling errors for scenarios and scenario outline examples, as they have slightly different hierarchical structures; these scenario types therefore have to be handled separately.
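The steps above amount to a generic retry loop, which can be sketched in plain Java before wiring it into any Cucumber classes. This is only an illustration: RetrySketch, runWithRetries, and MAX_RETRIES are invented names, not part of the runner code shown later.

```java
// Minimal sketch of the re-run policy described above, assuming a fixed
// maximum number of retries. All names here are illustrative.
public class RetrySketch {
    static final int MAX_RETRIES = 3;

    // Runs the given test body; on failure, re-runs it up to MAX_RETRIES
    // more times. Returns true if any attempt passed, false otherwise.
    static boolean runWithRetries(Runnable test) {
        Throwable lastError = null;
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            try {
                test.run();
                return true;            // passed, possibly after re-runs
            } catch (Throwable t) {
                lastError = t;          // remember the failure and retry
            }
        }
        // Still failing after MAX_RETRIES re-runs: escalate the last error.
        System.err.println("Permanent failure: " + lastError);
        return false;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // A "flaky" test which fails twice and then passes.
        boolean passed = runWithRetries(() -> {
            if (++calls[0] < 3) {
                throw new AssertionError("temporary failure");
            }
        });
        System.out.println(passed + " after " + calls[0] + " runs"); // true after 3 runs
    }
}
```

The actual implementation described in the rest of this post follows the same shape, with the runner's child.run(notifier) call plus the runtime exit status check playing the role of test.run().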

So, the major area of impact is the JUnit runner class for Cucumber, plus some additional classes handling feature and individual scenario runs. Let's describe all these items in more detail.

Overriding existing Cucumber-JVM classes

There are four major classes which should be replaced with versions supporting re-run functionality. They are:

  • cucumber.api.junit.Cucumber - the main JUnit runner class
  • cucumber.runtime.junit.FeatureRunner - runs all scenarios of a single feature
  • cucumber.runtime.junit.ScenarioOutlineRunner - the container runner for scenario outlines
  • cucumber.runtime.junit.ExamplesRunner - runs the scenarios generated from scenario outline examples

All of the above classes should be either extended or simply replaced with enhanced classes doing mainly the same work, except for the additional handling of re-runs on failure. In effect, we build additional classes which mirror the structure of the classes above. The general class hierarchy can be represented with the following UML diagram:
Classes with a red background are the custom classes we should add, while the others are existing Cucumber and JUnit classes. As you can see, the custom classes mirror the structure and relationships of the original ones. This way we keep behaviour consistent between the new and original runner classes, so that Cucumber tests normally run exactly the same way as before; all customization is limited to the re-run functionality and the associated data storage.

In this example only the cucumber.runtime.junit.FeatureRunner class could simply be extended. The other classes were implemented as replacements of the originals, because in some cases we need access to internal data structures which cannot be reached through simple inheritance. So, let's take a look at each custom class separately.

Overriding FeatureRunner

All the sources below are pasted in full, with the key parts numbered and explained in detail after each listing. This way the code is explained and, at the same time, can simply be copied and used as-is.

The first class to describe is ExtendedFeatureRunner, an extension of the FeatureRunner class. This is the first place where the retry functionality is applied; it mainly handles re-runs for plain scenarios. For scenario outlines it does nothing special, merely transferring control to other classes. The difference exists because each scenario is a list of objects corresponding to test steps, whereas scenario outlines are containers for multiple scenarios, and scenario outline examples have a slightly different internal structure representing the scenario data. The extended feature runner class looks like this:

package com.github.mkolisnyk.cucumber.runner;

import java.util.ArrayList;
import java.util.List;

import org.junit.Assert;
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;

import cucumber.runtime.CucumberException;
import cucumber.runtime.Runtime;
import cucumber.runtime.junit.ExecutionUnitRunner;
import cucumber.runtime.junit.FeatureRunner;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberFeature;
import cucumber.runtime.model.CucumberScenario;
import cucumber.runtime.model.CucumberScenarioOutline;
import cucumber.runtime.model.CucumberTagStatement;

public class ExtendedFeatureRunner extends FeatureRunner {
    private final List<ParentRunner> children = new ArrayList<ParentRunner>();

    private final int retryCount = 3;
    private int failedAttempts = 0;
    private int scenarioCount = 0;
    private Runtime runtime;
    private CucumberFeature cucumberFeature;
    private JUnitReporter jUnitReporter;
    
    public ExtendedFeatureRunner(CucumberFeature cucumberFeature,
            Runtime runtime, JUnitReporter jUnitReporter)
            throws InitializationError {
        super(cucumberFeature, runtime, jUnitReporter);
        this.cucumberFeature = cucumberFeature;
        this.runtime = runtime;
        this.jUnitReporter = jUnitReporter;
        buildFeatureElementRunners();
    }

    private void buildFeatureElementRunners() {
        for (CucumberTagStatement cucumberTagStatement : cucumberFeature.getFeatureElements()) {
            try {
                ParentRunner featureElementRunner;
                if (cucumberTagStatement instanceof CucumberScenario) {
                    featureElementRunner = new ExecutionUnitRunner(runtime, (CucumberScenario) cucumberTagStatement, jUnitReporter);
                } else {
                    featureElementRunner = new ExtendedScenarioOutlineRunner(runtime, (CucumberScenarioOutline) cucumberTagStatement, jUnitReporter);
                }
                children.add(featureElementRunner);
            } catch (InitializationError e) {
                throw new CucumberException("Failed to create scenario runner", e);
            }
        }
    }
    
    public final Runtime getRuntime() {
        return runtime;
    }

    @Override
    protected void runChild(ParentRunner child, RunNotifier notifier) {
        notifier.fireTestStarted(child.getDescription());
        try {
            child.run(notifier);
            Assert.assertEquals(0, this.getRuntime().exitStatus());
        } catch (AssumptionViolatedException e) {
            notifier.fireTestAssumptionFailed(new Failure(child.getDescription(), e));
        } catch (Throwable e) {
            retry(notifier, child, e);
        } finally {
            notifier.fireTestFinished(child.getDescription());
        }
        scenarioCount++;
        failedAttempts = 0;
    }

    public void retry(RunNotifier notifier, ParentRunner child, Throwable currentThrowable) {
        Throwable caughtThrowable = currentThrowable;
        ParentRunner featureElementRunner = null;
        boolean failed = true;

        Class<? extends ParentRunner> clazz = child.getClass();
        CucumberTagStatement cucumberTagStatement = this.cucumberFeature.getFeatureElements().get(scenarioCount);

        if (cucumberTagStatement instanceof CucumberScenarioOutline) {
            return;
        }
        while (retryCount > failedAttempts) {
            try {
                featureElementRunner = new ExecutionUnitRunner(runtime, (CucumberScenario) cucumberTagStatement , jUnitReporter);
                featureElementRunner.run(notifier);
                Assert.assertEquals(0, this.getRuntime().exitStatus());
                failed = false;
                break;
            } catch (Throwable t) {
                failedAttempts++;
                caughtThrowable = t;
                this.getRuntime().getErrors().clear();
            }
        }
        if (failed) {
            notifier.fireTestFailure(new Failure(featureElementRunner.getDescription(), caughtThrowable));
        }
    }

    @Override
    protected List getChildren() {
        return children;
    }

    @Override
    protected Description describeChild(ParentRunner child) {
        return child.getDescription();
    }
}

The key parts of this class are the following:
  1. We extend the FeatureRunner class, since we only need to override a few specific methods, while the internal data structures are either directly accessible or can be overridden in a descendant class
  2. Additional fields which track the internal state of the re-run process:
    • retryCount - the number of retries to be applied when a test fails. If a test doesn't pass after retryCount tries, it is treated as failed
    • failedAttempts - the number of failed attempts so far. This counter is reset for each new test and tracks how many times the same test has failed
    • scenarioCount - the index of the current scenario within the feature. It is needed for the re-run functionality, as we have to re-create the same data structure from the initial feature description
  3. The buildFeatureElementRunners method is copied in full from the parent class, except for the part where we initialize the runner for scenario outlines. As mentioned before, scenario outlines have a more complex structure than plain scenarios, so the re-run process is handled differently for them; this is another reason why the scenario outline runners are customized. That will be described later
  4. The this.getRuntime().exitStatus() statement returns the exit status of the latest test run: 0 on successful completion, any other value indicating that the test failed
  5. This is the place where we call the retry method. If an exception occurs (in our case an AssertionError on the test status check) we initiate the retry process
  6. Once re-run processing is complete, we reset the failedAttempts counter to 0 and increment the index of the current scenario, as we are done with the previous one
  7. Here is the definition of the retry method itself
  8. This part of the code filters out feature elements which are actually scenario outlines. Since we initialize scenario outline instances with the ExtendedScenarioOutlineRunner runner type, re-run handling for scenario outlines is customized there rather than in the current class
  9. These methods simply override the existing ones so that they pick up references to objects of the current class rather than the parent
So, generally, this class handles re-runs for plain scenarios only. But that is already a big step (almost half of the entire work is done). Let's integrate it into the main entry point.

Custom main runner class

And the main entry point for our re-run functionality is the runner class which is supposed to be used as the parameter of the @RunWith annotation. It is mainly a copy of the original Cucumber runner, with small deviations: instead of the original FeatureRunner class we now use the ExtendedFeatureRunner class. The code looks like this:

package com.github.mkolisnyk.cucumber.runner;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;

import cucumber.runtime.ClassFinder;
import cucumber.runtime.Runtime;
import cucumber.runtime.RuntimeOptions;
import cucumber.runtime.RuntimeOptionsFactory;
import cucumber.runtime.io.MultiLoader;
import cucumber.runtime.io.ResourceLoader;
import cucumber.runtime.io.ResourceLoaderClassFinder;
import cucumber.runtime.junit.Assertions;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberFeature;

public class ExtendedCucumber extends ParentRunner<ExtendedFeatureRunner> {
    private final JUnitReporter jUnitReporter;
    private final List<ExtendedFeatureRunner> children = new ArrayList<ExtendedFeatureRunner>();
    private final Runtime runtime;

    public ExtendedCucumber(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);

        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();

        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);

        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader);
        jUnitReporter = new JUnitReporter(runtimeOptions.reporter(classLoader), runtimeOptions.formatter(classLoader), runtimeOptions.isStrict());
        addChildren(cucumberFeatures);
    }

    protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
                                    RuntimeOptions runtimeOptions) throws InitializationError, IOException {
        ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
        return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
    }

    @Override
    public List<ExtendedFeatureRunner> getChildren() {
        return children;
    }

    @Override
    protected Description describeChild(ExtendedFeatureRunner child) {
        return child.getDescription();
    }

    @Override
    protected void runChild(ExtendedFeatureRunner child, RunNotifier notifier) {
        child.run(notifier);
    }

    @Override
    public void run(RunNotifier notifier) {
        super.run(notifier);
        jUnitReporter.done();
        jUnitReporter.close();
        runtime.printSummary();
    }

    private void addChildren(List<CucumberFeature> cucumberFeatures) throws InitializationError {
        for (CucumberFeature cucumberFeature : cucumberFeatures) {
            children.add(new ExtendedFeatureRunner(cucumberFeature, runtime, jUnitReporter));
        }
    }
}

The key parts here are the following:
  1. This runner extends ParentRunner<ExtendedFeatureRunner> instead of ParentRunner<FeatureRunner>, as the original Cucumber runner did
  2. The list of child items now uses the ExtendedFeatureRunner class
  3. These methods are simply overridden to match the interface (the parent is an abstract generic class, so we have to use the proper types and implement its abstract methods)
  4. Child elements are now initialized as instances of the ExtendedFeatureRunner class
After the above change we can already use our new runner for Cucumber tests. If we use only plain scenarios this is more than enough. Unfortunately, if there are scenario outlines this solution does not apply, as we simply skip any custom processing for them. So, let's implement re-run functionality which takes scenario outline specifics into account as well.

Re-running scenario outlines

Scenario outline processing is handled by two major classes:

  • ScenarioOutlineRunner - stores and processes a scenario outline as a set of scenarios. It works similarly to FeatureRunner, but scenario outlines are parsed and processed differently: under the hood, Cucumber-JVM generates each scenario from the outline with the parameters inserted.
  • ExamplesRunner - actually represents the runner for the generated scenarios, so in terms of re-run functionality this is the main place where the retry logic should be applied.
Since the ExamplesRunner class contains most of the re-run logic for scenario outlines, let's start with it. The code is:

package com.github.mkolisnyk.cucumber.runner;

import java.util.ArrayList;
import java.util.List;

import org.junit.Assert;
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;

import cucumber.runtime.Runtime;
import cucumber.runtime.junit.ExecutionUnitRunner;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberExamples;
import cucumber.runtime.model.CucumberScenario;

public class ExtendedExamplesRunner extends Suite {
    private int retryCount = 3;
    private Runtime runtime;

    private final CucumberExamples cucumberExamples;
    private Description description;
    private JUnitReporter jUnitReporter;
    private int exampleCount = 0;
    private static List<Runner> runners;
    private static List<CucumberScenario> exampleScenarios;

    protected ExtendedExamplesRunner(Runtime runtime, CucumberExamples cucumberExamples, JUnitReporter jUnitReporter) throws InitializationError {
        super(ExtendedExamplesRunner.class, buildRunners(runtime, cucumberExamples, jUnitReporter));
        this.cucumberExamples = cucumberExamples;
        this.jUnitReporter = jUnitReporter;
        this.runtime = runtime;
    }

    private static List<Runner> buildRunners(Runtime runtime, CucumberExamples cucumberExamples, JUnitReporter jUnitReporter) {
        runners = new ArrayList<Runner>();
        exampleScenarios = cucumberExamples.createExampleScenarios();
        for (CucumberScenario scenario : exampleScenarios) {
            try {
                ExecutionUnitRunner exampleScenarioRunner = new ExecutionUnitRunner(runtime, scenario, jUnitReporter);
                runners.add(exampleScenarioRunner);
            } catch (InitializationError initializationError) {
                initializationError.printStackTrace();
            }
        }
        return runners;
    }

    
    public final Runtime getRuntime() {
        return runtime;
    }

    @Override
    protected String getName() {
        return cucumberExamples.getExamples().getKeyword() + ": " + cucumberExamples.getExamples().getName();
    }

    @Override
    public Description getDescription() {
        if (description == null) {
            description = Description.createSuiteDescription(getName(), cucumberExamples.getExamples());
            for (Runner child : getChildren()) {
                description.addChild(describeChild(child));
            }
        }
        return description;
    }

    @Override
    public void run(final RunNotifier notifier) {
        jUnitReporter.examples(cucumberExamples.getExamples());
        super.run(notifier);
    }


    @Override
    protected void runChild(Runner runner, RunNotifier notifier) {
        ParentRunner featureElementRunner = (ExecutionUnitRunner) runner;

        try {
            featureElementRunner.run(notifier);
            Assert.assertEquals(0, this.getRuntime().exitStatus());
        } catch (AssumptionViolatedException e) {
            notifier.fireTestAssumptionFailed(new Failure(runner.getDescription(), e));
        } catch (Throwable e) {
            retry(notifier, featureElementRunner, e);
        } finally {
            notifier.fireTestFinished(runner.getDescription());
        }
        exampleCount++;
    }

    public void retry(RunNotifier notifier, ParentRunner child, Throwable currentThrowable) {
        Throwable caughtThrowable = currentThrowable;
        CucumberScenario scenario = exampleScenarios.get(exampleCount);
        ParentRunner featureElementRunner = null;
        boolean failed = true;
        
        int failedAttempts = 0;
        while (retryCount > failedAttempts ) {
            try {
                featureElementRunner = new ExecutionUnitRunner(runtime, scenario, jUnitReporter);
                featureElementRunner.run(notifier);
                Assert.assertEquals(0, this.getRuntime().exitStatus());
                failed = false;
                break;
            } catch (Throwable t) {
                failedAttempts++;
                caughtThrowable = t;
                this.getRuntime().getErrors().clear();
            }
        }
        if (failed) {
            notifier.fireTestFailure(new Failure(featureElementRunner.getDescription(), caughtThrowable));
        }
    }
}

The key parts here are the following:
  1. Here is another place where the retryCount is applied
  2. Custom fields storing internal data. Some of these fields exist in the original class as well, but they are hidden in private fields; that's why I had to copy the original class instead of simply extending it
  3. In a scenario outline each example row is actually a single scenario, so in this block we initialize all child items as single scenarios
  4. Parent class methods are overridden to expose the current class's fields and to customize some scenario outline specifics, such as the scenario outline name, which includes the parameters row
  5. The overridden runChild method definition
  6. Similar to the FeatureRunner extension, we identify the current test status with the following statement:
    Assert.assertEquals(0, this.getRuntime().exitStatus());
  7. If the above assertion throws an error, we invoke the retry method, which contains the main re-run logic
  8. After a scenario is processed we advance the current scenario counter by one position. We store all generated scenarios in the exampleScenarios field, and the index of the currently running scenario in the exampleCount field; both are needed inside the retry method
  9. And here is the retry method itself. Before re-running a scenario we should get a reference to the scenario that has just run. The key point is that we should not re-create the same scenario but obtain an exact reference to it; this keeps our results more or less consistent. That's why we keep the exampleScenarios and exampleCount fields: the statement
    CucumberScenario scenario = exampleScenarios.get(exampleCount);
    returns exactly the currently running scenario instance, which already has one failed result at this point
  10. The re-run handling logic. We initialize a new runner based on the existing scenario instance and try to run it again, checking the result code. If the re-run passes we break the loop and mark the test as successful; otherwise we report the final error
So, now we can handle re-runs at the scenario outline level, and we have to integrate these changes into the main runner. Neither the original ExamplesRunner nor the newly written ExtendedExamplesRunner class is meant to be invoked from the main runner class directly. For this purpose there is the container class ScenarioOutlineRunner, or its modification ExtendedScenarioOutlineRunner, which was already referenced in our code above. The ExtendedScenarioOutlineRunner class doesn't require many changes: all we need is a reference to the customized ExtendedExamplesRunner class. The code of this class is thus almost identical to ScenarioOutlineRunner and looks like this:

package com.github.mkolisnyk.cucumber.runner;

import java.util.ArrayList;
import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.Suite;
import org.junit.runners.model.InitializationError;

import cucumber.runtime.Runtime;
import cucumber.runtime.junit.JUnitReporter;
import cucumber.runtime.model.CucumberExamples;
import cucumber.runtime.model.CucumberScenarioOutline;

public class ExtendedScenarioOutlineRunner extends Suite {
    private final CucumberScenarioOutline cucumberScenarioOutline;
    private final JUnitReporter jUnitReporter;
    private Description description;

    public ExtendedScenarioOutlineRunner(Runtime runtime, CucumberScenarioOutline cucumberScenarioOutline, JUnitReporter jUnitReporter) throws InitializationError {
        super(null, buildRunners(runtime, cucumberScenarioOutline, jUnitReporter));
        this.cucumberScenarioOutline = cucumberScenarioOutline;
        this.jUnitReporter = jUnitReporter;
    }

    private static List<Runner> buildRunners(Runtime runtime, CucumberScenarioOutline cucumberScenarioOutline, JUnitReporter jUnitReporter) throws InitializationError {
        List<Runner> runners = new ArrayList<Runner>();
        for (CucumberExamples cucumberExamples : cucumberScenarioOutline.getCucumberExamplesList()) {
            runners.add(new ExtendedExamplesRunner(runtime, cucumberExamples, jUnitReporter));
        }
        return runners;
    }

    @Override
    public String getName() {
        return cucumberScenarioOutline.getVisualName();
    }

    @Override
    public Description getDescription() {
        if (description == null) {
            description = Description.createSuiteDescription(getName(), cucumberScenarioOutline.getGherkinModel());
            for (Runner child : getChildren()) {
                description.addChild(describeChild(child));
            }
        }
        return description;
    }

    @Override
    public void run(final RunNotifier notifier) {
        cucumberScenarioOutline.formatOutlineScenario(jUnitReporter);
        super.run(notifier);
    }

    @Override
    protected void runChild(Runner runner, final RunNotifier notifier) {
        super.runChild(runner, notifier);
    }
}

The key element here is the following:
  1. As mentioned before, the major change is that this class now contains a collection of ExtendedExamplesRunner elements. Everything else is taken from the original class. Of course, extending it would have been more elegant, but in this case we would have needed to change a private static method, so I had to make an exact copy of the original scenario outline container class.
These are the major changes needed to make our Cucumber tests re-runnable in case of error.

Running tests

Now it's time to use our newly extended Cucumber runner. Usage is exactly the same as with the original Cucumber class, and a sample test may look like this:

package com.github.mkolisnyk.cucumber.reporting;

import org.junit.runner.RunWith;

import com.github.mkolisnyk.cucumber.runner.ExtendedCucumber;

import cucumber.api.CucumberOptions;

@RunWith(ExtendedCucumber.class)
@CucumberOptions(
        plugin = {"html:target/cucumber-html-report",
                  "json:target/cucumber.json",
                  "pretty:target/cucumber-pretty.txt",
                  "usage:target/cucumber-usage.json",
                  "junit:target/cucumber-results.xml"
                 },
        features = {"./src/test/java/com/github/mkolisnyk/cucumber/features" },
        glue = {"com/github/mkolisnyk/cucumber/steps" },
        tags = { }
)
public class SampleCucumberTest {
}

The only change is passing our custom runner class to the @RunWith annotation. Now we can run our tests as ordinary JUnit tests and get our results.

Since we operate only with top-level Cucumber engine structures, we do not touch anything specific to JUnit or to the standard reporting, which records all errors even when a subsequent re-run succeeds. Nevertheless, we can still trace whether our tests fail permanently or the error we spot is just temporary. The fragment of an HTML report below shows one test which passed after some re-runs:

Scenario: Flaky test
  1. Given I am in the system
  2. When I do something
    java.lang.AssertionError
    	at org.junit.Assert.fail(Assert.java:86)
    	at org.junit.Assert.assertTrue(Assert.java:41)
    	at org.junit.Assert.assertTrue(Assert.java:52)
    	at com.github.mkolisnyk.cucumber.steps.TestSteps.i_do_something(TestSteps.java:35)
    	at ✽.When I do something(Test.feature:15)
    
  3. Then I should see nothing
Scenario: Flaky test
  1. Given I am in the system
  2. When I do something
    java.lang.AssertionError
    	at org.junit.Assert.fail(Assert.java:86)
    	at org.junit.Assert.assertTrue(Assert.java:41)
    	at org.junit.Assert.assertTrue(Assert.java:52)
    	at com.github.mkolisnyk.cucumber.steps.TestSteps.i_do_something(TestSteps.java:35)
    	at ✽.When I do something(Test.feature:15)
    
  3. Then I should see nothing
Scenario: Flaky test
  1. Given I am in the system
  2. When I do something
  3. Then I should see nothing
So, with this report we should look at the latest run to see whether the test fails permanently. It is also useful to see the interim errors: some temporary problems may be related not only to environment issues but can also reflect a real software problem which appears only under a combination of many different factors.

Further improvement

The solution described above is not final and leaves room for improvement. For example, as you may have noticed in the code samples, the number of retries is hard-coded, while it would be much more convenient to provide configuration, in the form of an annotation or anything else we find useful, stating how many times we want to re-run our tests.
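One way such configuration could look is a runtime annotation which the runner reads from the class passed to @RunWith. This is only a sketch under assumed names: @RetryAcceptance and its retryCount attribute are invented here for illustration and are not part of the code above.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch: a hypothetical annotation carrying the retry count, plus the
// lookup a runner constructor could perform. All names are invented.
public class RetryConfigSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface RetryAcceptance {
        int retryCount() default 0;
    }

    // Example test class configured for 3 re-runs.
    @RetryAcceptance(retryCount = 3)
    static class SampleTest { }

    // What a runner could call instead of using a hard-coded constant.
    static int retryCountFor(Class<?> clazz) {
        RetryAcceptance config = clazz.getAnnotation(RetryAcceptance.class);
        return config == null ? 0 : config.retryCount();
    }

    public static void main(String[] args) {
        System.out.println(retryCountFor(SampleTest.class)); // prints 3
        System.out.println(retryCountFor(Object.class));     // prints 0
    }
}
```

With this approach the runner constructor would read the count once from the class under test, keeping the test classes themselves as the single place where retry behaviour is declared.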

Also, even though we re-run our tests, we still see all the previous errors. So we need additional reporting (maybe another extension, as part of Advanced Reporting) which cleans up the results to show only permanent errors. However, we shouldn't get rid of the current reports either, as they may reveal temporary problems which are also useful to know about.

And we shouldn't forget that this solution targets the combination of Cucumber-JVM and JUnit, while in some cases people use a different combination of engines, even within the Java stack.

There may be other improvements to make. But the aim of this post is to show one of the ways we can make our life simpler.

20 comments:

  1. Hi,

Thanks for the blog. If we run the JUnit test and some test fails, and on the re-run it turns green, then the overall result is still a failure. But we want the test to succeed if it turns green the second time. Is this possible, or is this just the way JUnit works?

    Kind regards in advance,

    JT

    Replies
    1. I'm afraid for now it's impossible, at least the way it is done now. The thing is that re-run functionality was added to the Cucumber JUnit runner, not for JUnit reporter which actually stores execution status. For JUnit reporter all tests are different even if it is the same test run several times. So, as soon as any JUnit test is completed with failed status it is saved as failed and there's no visible way to remove previous results. Also, my solution was designed the way it still keeps information about previous errors.

      What I suggest you do with the current solution is to disable the build failure on test errors, generate the aggregated report, and fail the build only if the aggregated report shows a pass rate different from 100%.

    2. Hi Kolesnik Nickolay, is there any way to get a report with only the last retry result?

    3. In the case of pure JUnit it's impossible at the moment; see my previous comment. You need to post-process the test results report to retrieve only the latest results. The library which implements the ExtendedCucumber runner actually contains several reports which do that.

      Example 1 - Overview report:


      CucumberResultsOverview results = new CucumberResultsOverview();
      results.setOutputDirectory("target");
      results.setOutputName("cucumber-results");
      results.setSourceFile("./src/test/resources/cucumber.json");
      results.executeFeaturesOverviewReport();


      Example 2 - Detailed Report:


      CucumberDetailedResults results = new CucumberDetailedResults();
      results.setOutputDirectory("target/");
      results.setOutputName("cucumber-results");
      results.setSourceFile("./src/test/resources/cucumber.json");
      results.setScreenShotLocation("../src/test/resources/");
      results.executeDetailedResultsReport(true, true);

    4. Sorry to bring up an old topic, but if I were to fail the build manually as opposed to relying on JUnit, would the suggested approach be to parse the HTML of the aggregated report then? I can't find anything more useful that would let me ascertain if the build was a success or not, and parsing HTML file doesn't seem appealing either, especially considering that the element I'm after has neither a CSS class nor a unique ID to find it by...

    5. The aggregated report still contains the overview table, which is the first table. The last column always shows the pass rate. Usually, if it is 100%, all worked well.
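
      Extracting that last-column pass rate programmatically could look like this rough sketch (regex-based; it assumes the pass rate is the last percentage value in the table markup, which is an assumption about the report layout, and the class name is invented):

```java
// Sketch only: pull the pass rate out of the aggregated HTML report by
// taking the last percentage value found in the markup. This layout
// assumption may not hold for every report version.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PassRateCheck {

    // Returns the last percentage found in the HTML fragment, or null.
    static String lastPassRate(String html) {
        Matcher m = Pattern.compile("(\\d+(?:\\.\\d+)?)%").matcher(html);
        String rate = null;
        while (m.find()) {
            rate = m.group(1);
        }
        return rate;
    }

    public static void main(String[] args) {
        String row = "<tr><td>Smoke suite</td><td>12</td><td>2</td><td>85.71%</td></tr>";
        System.out.println(lastPassRate(row)); // 85.71
        // A build script could then fail when the rate is not 100:
        // if (!"100".equals(lastPassRate(html))) { System.exit(1); }
    }
}
```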

    6. Yeah, that's essentially what I was referring to - so the last column in the overview table is the suggested approach then, thanks! If you're still maintaining this project, I think it'd be useful to add an id or a CSS class to this table cell so that it's easier to extract this data.

    7. OK, I've created the related issue #100. This is a small enhancement and it will be available with the 1.0.8 version. But for now you can still parse the last column. This is not really good, but it's definitely the best I can propose for now.

  2. This comment has been removed by the author.

  3. Hi Kolesnik Nickolay,

    Thanks for your wonderful work; you are truly a blessing to the test automation world.

    However, I have a little problem. I have implemented the Cucumber custom reports: overview, detailed and usage. In the post of yours that I followed, one of the classes is ExtendedCucumberRunner (extends Runner), which is used in the Cucumber runner class (@RunWith(ExtendedCucumberRunner.class)); this allows the reports to be generated.

    However, in the rerun-failed-tests post, you have created ExtendedCucumber, which extends ParentRunner.

    Now I am having an issue generating reports and re-running failed tests at the same time, because the classes for the @RunWith annotation are different. I have to use one or the other: @RunWith(ExtendedCucumber.class), which reruns failed tests, or @RunWith(ExtendedCucumberRunner.class), which generates the custom reports.

    How can I generate the custom reports and rerun failed tests at the same time, bearing in mind that I have the two classes (ExtendedCucumberRunner and ExtendedCucumber)?
    Is there a way I can combine these two classes, so that I can pass only one class to the @RunWith annotation?

    Replies
    1. If you're using the cucumber-reports library you should use @RunWith(ExtendedCucumber.class). ExtendedCucumber is the runner from the library.

    2. Thanks for the prompt response. When I use @RunWith(ExtendedCucumber.class) the reports are not generated. My ExtendedCucumber class looks like the one you have in this post, and my ExtendedCucumberRunner looks like below:


      import java.lang.annotation.Annotation;
      import java.lang.reflect.Method;

      import org.junit.runner.Description;
      import org.junit.runner.Runner;
      import org.junit.runner.notification.RunNotifier;

      import cucumber.api.junit.Cucumber;

      public class ExtendedCucumberRunner extends Runner {

          private Class<?> clazz;
          private Cucumber cucumber;

          public ExtendedCucumberRunner(Class<?> clazzValue) throws Exception {
              clazz = clazzValue;
              cucumber = new Cucumber(clazzValue);
          }

          @Override
          public Description getDescription() {
              return cucumber.getDescription();
          }

          private void runPredefinedMethods(Class<?> annotation) throws Exception {
              if (!annotation.isAnnotation()) {
                  return;
              }
              Method[] methodList = this.clazz.getMethods();
              for (Method method : methodList) {
                  Annotation[] annotations = method.getAnnotations();
                  for (Annotation item : annotations) {
                      if (item.annotationType().equals(annotation)) {
                          method.invoke(null);
                          break;
                      }
                  }
              }
          }

          @Override
          public void run(RunNotifier notifier) {
              try {
                  runPredefinedMethods(BeforeSuite.class);
              } catch (Exception e) {
                  e.printStackTrace();
              }
              cucumber.run(notifier);
              try {
                  runPredefinedMethods(AfterSuite.class);
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      }

      Is it possible to use both classes with @RunWith?

      Thanks
      Fola

    3. Reports are not generated because your runner class doesn't contain any instructions for this. You can switch purely to the cucumber-reports library and use it like in this sample.
      Notice that the report settings are defined in the @CucumberOptions and @ExtendedCucumberOptions annotations.

  4. Awesome. You saved me so much time, I thought I was going to have to figure out and rewrite the entire cucumber-junit jar myself! :)

  5. Hi Nickolay,

    With this current setup, what could I modify to ignore the first failure from reporting?
    If (For example) I test one Scenario:

    Scenario: Flaky Test
    Given: It always works the second time

    Runs....
    Failure
    Retry....
    Success

    Then the JUnit report reads something like:
    1 Failure, 1 Success

    What if I want it to just read:
    1 Success

    (Essentially, not output anything on the first failure).
    Any thoughts on what I should change?

    Replies
    1. If you mean the JUnit reporting, then the answer is: no, you can't. It is because JUnit logs results during execution and doesn't overwrite previous results.

  6. I'm using the following method to run the tests, and every time they run 3 times for the same feature file and the same scenario. I can't figure out how to run the tests just once... any help?

    @ExtendedCucumberOptions(jsonReport = "results/cucumber.json",
        retryCount = 0,
        detailedReport = true,
        toPDF = true,
        outputFolder = "results")
    @CucumberOptions(plugin = { "html:results/cucumber-html-report",
            "json:results/cucumber.json", "pretty:results/cucumber-pretty.txt",
            "usage:results/cucumber-usage.json", "junit:results/cucumber-results.xml" },
        features = "src/main/resources/features/RegressionMonacoSmokeTest.feature",
        glue = "steps.android")
    public class RegressionMonacoSmokeTestRunner extends ExtendedTestNGRunner /* implements IHookable */ {
    }

    If I run the tests with IHookable it runs the tests once and the retryCount attribute certainly isn't taking effect.

    Replies
    1. 1) If you use IHookable, no extended options are supposed to work at all, as the ExtendedTestNGRunner is the class which actually handles those options.

      2) The problem looks strange. I've created an issue to investigate.

  7. I must say, it *does* work; it does the retries, as advertised.
    But why does Chrome now wait forever after announcing:
    Starting ChromeDriver 81.0.4044.69 (xxxxxx-refs/branch-heads/4044@{#776}) on port 34119
    Only local connections are allowed.
    Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.

    It stays in some invisible wait mode for a minute before Chrome becomes visible.

    Replies
    1. This problem isn't relevant to the article; it is related to Chrome security policies. I'd look at the test driver settings to make sure it runs on localhost, not the 0.0.0.0 IP, for instance.
