Automated End-to-End testing React Native apps with Detox

Everyone knows the importance of testing in software development projects, so let's jump directly into using Detox for end-to-end testing React Native applications. It's similar to end-to-end testing React applications with Cypress, which I wrote about previously. Testing React Native applications needs a bit more setup, especially when running on CI/CD, but the benefits of having comprehensive tests are great. This article focuses on using Detox and leaves the setup details to the official documentation.

Detox for end-to-end testing

End-to-end testing is widely performed in web applications using frameworks like Cypress, but with mobile applications it's not as common. There are essentially a couple of frameworks for end-to-end testing mobile applications: Detox and Appium.

This article covers how to use Detox, which uses grey-box testing, monitoring the state of the app to tackle the problem of flaky tests. The main difference between Appium and Detox is that Appium uses black-box testing, meaning that it doesn't monitor the internal state of the app. There's a good article comparing Detox and Appium with example projects.

I'll use my personal Hailakka project as an example of setting up Detox for end-to-end testing and for visual regression testing. The app is based on my native Highkara news reader but done in React Native and is still missing features like localization.

Hailakka on Android and iOS

Setup Detox for React Native project

Starting with Detox e2e and React Native gets you through setting up Detox in your project one step at a time, both for Android and iOS.

Detox provides an example project for React Native which gives a starting point for your own e2e tests. It's good to note that support for Mocha is dropped in the upcoming Detox 20.

Now you just need to write simple e2e tests for starters, build your app, and run the Detox tests. Detox has good documentation to get you through the steps and troubleshooting tips for issues you might come across, especially with Android.
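A first smoke test can be as small as checking that the root view renders. A sketch; the 'welcome-view' testID is a made-up example, so use one from your own app:

```javascript
// e2e/firstTest.e2e.js — a minimal Detox smoke test
const { device, element, by, expect } = require('detox');

describe('App', () => {
  beforeEach(async () => {
    await device.reloadReactNative(); // start from a predictable state
  });

  it('shows the root view', async () => {
    // 'welcome-view' is a hypothetical testID; add one to your root component
    await expect(element(by.id('welcome-view'))).toBeVisible();
  });
});
```

Running this requires a built app and a simulator or emulator, as described above.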

Detox with Expo

Using Detox with Expo projects should work quite nicely without any helper libraries (although some older blog posts suggest otherwise). You can even run e2e tests on EAS builds. For practice, I added an example of using Detox to my personal Hailakka project with the managed workflow.

For the changes compared to a non-Expo project, I followed the Detox on EAS build documentation and added @config-plugins/detox for a managed project. To get the native projects for Detox I ran the npx expo prebuild script, which practically configured both the iOS and especially the Android project to work with Detox. You can see my detox.config.js on my GitHub.

Running Detox end-to-end tests

With iOS-related tests on the simulator I got everything working pretty smoothly, but on Android I had some problems to troubleshoot with the help of the Detox documentation. Using the FORCE_BUNDLING=true environment variable, which bundles the JavaScript into the app at build time, helped to get rid of the Metro bundler when running Detox, as the Android emulator had problems connecting to it. Before that I got an "unable to load script from assets index.android.bundle" error and had to use adb reverse tcp:8081 tcp:8081 to proxy the connection.

For running the Detox tests I created the following npm scripts.

"e2e:ios": "npm run e2e:ios:build && npm run e2e:ios:run",
"e2e:ios:build": "detox build --configuration ios.development",
"e2e:ios:run": "detox test --configuration ios.development",
"e2e:android": "npm run e2e:android:build && npm run e2e:android:run",
"e2e:android:build": "detox build --configuration android.development",
"e2e:android:run": "detox test --configuration android.development",

Note on running on Android: use system images without Google services (AOSP emulators). The Detox documentation "strongly recommend[s] to strictly use this flavor of emulators for running automation/Detox tests".

Also remember to change the Java SDK to version 11. You can do it with e.g. asdf:

brew reinstall asdf
asdf plugin add java
asdf list-all java
asdf install java zulu-11.60.19
asdf global java zulu-11.60.19

If you need to debug your Detox tests on Android and run the Metro bundler on the side, you might need to start Metro (react-native start) manually. Also, when using concurrently, the arguments need to be passed through to the desired script, e.g. "e2e:android:run": "concurrently --passthrough-arguments 'npx react-native start' 'npm run e2e:android:test -- {@}' --".

Running the same tests on iOS and Android has some differences, and it helps to use the "--loglevel=verbose" parameter to debug the component hierarchy. For example, opening the sidebar navigation from an icon button failed on Android but worked on iOS. The issue was that I was using an Android system image with Google's services and not the AOSP version.

Setting locale for Detox tests

Detox provides the launchApp command, which allows you to accept permissions and set the language and locale.

For example, in beforeAll you can set the locale and accept the permission for notifications, as we can't accept it while the app runs and the modal is shown.

beforeAll(async () => {
    await device.launchApp({
        languageAndLocale: { language: 'en', locale: 'en-US' },
        newInstance: true,
        permissions: { notifications: 'YES' }, // This will cause the app to terminate before permissions are applied.
    });
});

But unfortunately, setting languageAndLocale works only on iOS. For Android you need quite a few more steps to achieve the same. Fortunately I'm not alone, and "Changing Android 10 or higher device Locale programmatically" covers just what I needed. You can use the Appium Settings application and have it automatically installed to the emulator by using "utilBinaryPaths" as shown in the Detox documentation.

You can run the needed adb shell commands with execSync when setting up the tests:

import { execSync } from 'child_process';

// Set permissions for Settings app for changing locale
execSync('adb shell pm grant io.appium.settings android.permission.CHANGE_CONFIGURATION');
// Set lang and country
execSync(`adb shell am broadcast -a io.appium.settings.locale -n io.appium.settings/.receivers.LocaleSettingReceiver --es lang fi --es country FI`);

Tips for writing Detox tests

People at Siili have written about Detox testing native mobile apps which provides good practices for testing.

  • Start each scenario from a predictable application state. Using beforeAll, beforeEach and await device.reloadReactNative() can help with that.
  • Add testIDs to the elements you are going to use in the tests.
  • Keep all locators in one place (changing them in the future will not cost too much time).
  • Create helper methods to make your code more legible and easily maintainable.
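In practice, the "keep all locators in one place" tip could look like the sketch below; navigation-container-view is the testID used later in this article, while the other values are made-up examples.

```javascript
// e2e/locators.js — a single module owning every testID used by the e2e
// suite, so renaming an element is a one-line change
const Locators = {
  navigationContainer: 'navigation-container-view',
  menuButton: 'menu-button', // hypothetical example
  sectionHeader: 'section-header', // hypothetical example
};

module.exports = Locators;
```

In a test you would then write e.g. element(by.id(Locators.menuButton)).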

Visual regression testing React Native apps with Detox

Now we have achieved end-to-end tests with Detox but we can do more. By harnessing Detox for visual regression testing we can "verify the proper visual structure and layout of elements appearing on the device's screen". Detox supports taking screenshots, which can be used for visual regression testing purposes as they describe in their documentation.

  • Taking a screenshot, once, and manually verifying it, visually.
  • Storing it as an e2e-test asset (i.e. the snapshot).
  • Using it as the point-of-reference for comparison against screenshots taken in consequent tests, from that point on.

They write that a "more practical way of doing this, is by utilizing more advanced 3rd-party image snapshotting & comparison tools such as Applitools." And here is where the good article on how to do visual regression testing with Detox comes in. In short, we use Jest Image Snapshot with the recommended SSIM comparison method.

jest-image-snapshot generates an image which shows the baseline and highlights in red how the current state differs from it.

jest-image-snapshot baseline diff with Detox

To aid writing tests, you can add some convenience methods by extending Jest's expect, automatically taking a screenshot when the matcher is invoked, and taking the platform and device name into account when doing the comparisons.

Helper methods in setup.ts extending Jest expect, adapted to TypeScript from Visual regression testing React Native apps with Detox and Jest:

import { device } from 'detox';
import fs from 'fs';
import { configureToMatchImageSnapshot } from 'jest-image-snapshot';
import path from 'path';
const jestExpect = (global as any).expect;

const kebabCase = (str: string) =>
    str.match(/[A-Z]{2,}(?=[A-Z][a-z]+[0-9]*|\b)|[A-Z]?[a-z]+[0-9]*|[A-Z]|[0-9]+/g)!.join('-').toLowerCase();

const toMatchImage = configureToMatchImageSnapshot({
    comparisonMethod: 'ssim',
    failureThreshold: 0.01, // fail if there is more than a 1% difference
    failureThresholdType: 'percent',
});

jestExpect.extend({ toMatchImage });

jestExpect.extend({
    async toMatchImageSnapshot(
        // a matcher is a method, it has access to Jest context on `this`
        this: jest.MatcherContext,
        screenName: string
    ) {
        const { name } = device;
        const deviceName = name.split(' ').slice(1).join('').replace('(','').replace(')', '');
        const language = languageAndLocale(); // helper (defined elsewhere) returning the current test locale
        const SNAPSHOTS_DIR = `__image_snapshots__/${language.code}/${deviceName}`;
        const { testPath } = this;
        const customSnapshotsDir = path.join(path.dirname(testPath || ''), SNAPSHOTS_DIR);
        const customSnapshotIdentifier = kebabCase(`${screenName}`);
        const tempPath = await device.takeScreenshot(screenName);
        const image = fs.readFileSync(tempPath);
        jestExpect(image).toMatchImage({ customSnapshotIdentifier, customSnapshotsDir });
        return { message: () => 'screenshot matches', pass: true };
    },
});

Writing an expectation for an image comparison then becomes as simple as (HomeScreen.e2e.ts):

import { by, element, expect } from 'detox';

const jestExpect = (global as any).expect;

describe('Home Screen', () => {
  it('should show section header', async () => {
    await expect(element(by.id('navigation-container-view'))).toBeVisible();

    await jestExpect('Home Screen').toMatchImageSnapshot();
  });
});

You should also put the device into demo mode by freezing the irrelevant, volatile elements (like time, network information etc.). The helper.ts might look like:

import { execSync } from 'child_process';
import { device } from 'detox';

export const setDemoMode = async () => {
    if (device.getPlatform() === 'ios') {
        await (device as any).setStatusBar({
            batteryLevel: '100',
            batteryState: 'charged',
            cellularBars: '4',
            cellularMode: 'active',
            dataNetwork: 'wifi',
            time: '9:42',
            wifiBars: '3',
        });
    } else {
        // enter demo mode
        execSync('adb shell settings put global sysui_demo_allowed 1');
        // display time 12:00
        execSync('adb shell am broadcast -a com.android.systemui.demo -e command clock -e hhmm 1200');
        // Display full mobile data with 4g type and wifi shown
        execSync(
            'adb shell am broadcast -a com.android.systemui.demo -e command network -e mobile show -e level 4 -e datatype 4g -e wifi true'
        );
        // Hide notifications
        execSync('adb shell am broadcast -a com.android.systemui.demo -e command notifications -e visible false');
        // Show full battery but not in charging state
        execSync(
            'adb shell am broadcast -a com.android.systemui.demo -e command battery -e plugged false -e level 100'
        );
    }
};

And as your application changes, you may want to add scripts to update the snapshots to your package.json:

"e2e:ios:refresh": "npm run e2e:ios:build && npm run e2e:ios:run -- --updateSnapshot",
"e2e:android:refresh": "npm run e2e:android:build && npm run e2e:android:run -- --updateSnapshot",

Now you can just enjoy your end-to-end tests taking care that the application looks and works like you want it to after changes. And wait for React Native Owl to mature to get out-of-the-box visual regression testing.

For my case, the visual regression testing part of using Detox was more of a theoretical practice than a real use case. Visual regression testing compares the current state to baseline images, but as my personal app is a news reader with constantly changing news, it isn't feasible in its current state.

Notes of Best Practices for writing Cypress tests

Cypress is a nice tool for end-to-end tests, and it has good documentation, including Best Practices and the "Cypress Best Practices" talk by Brian Mann at Assert(JS) 2018. Here are my notes from the talk combined with the Cypress documentation. This article assumes you know and have Cypress running.

In short:

  • Set state programmatically, don't use the UI to build up state.
  • Write specs in isolation, avoid coupling.
  • Don't limit yourself trying to act like a user.
  • Tests should always be able to be run independently and still pass.
  • Only test what you control.
  • Use data-* attributes to provide context to your selectors.
  • Clean up state before tests run (not after).

I've also made slides for this and they can be found on SlideShare.

Update 20.1.2022: Cypress's own documentation for Best Practices contains more detailed explanations, and for a practical approach the Cypress team maintains the Real World App (RWA), a full stack example application that demonstrates best practices and scalable strategies with Cypress in practical and realistic scenarios.

Organizing tests

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

The "Writing and Organizing tests" documentation tells you the basics of how you should organize your tests. You should organize tests by pages and by components, as you should test components individually if possible. So the folder structure for tests might look like this:

├ articles
├── article_details_spec.js
├── article_new_spec.js
├── article_list_spec.js
├ author
├── author_details_spec.js
├ shared
├── header_spec.js
├ user
├── login_spec.js
├── register_spec.js
└── settings_spec.js

Selecting Elements

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors and insulate them from CSS or JS changes.

Add data-* attributes to make it easier to target elements.

For example:

<button id="main" class="btn btn-large" name="submit"
  role="button" data-cy="submit">Submit</button>
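A tiny helper keeps these selectors consistent across specs (a sketch built around the data-cy attribute from the example above):

```javascript
// Returns a CSS attribute selector for the given data-cy value, so specs
// never hard-code brittle class or id selectors
const dataCy = (id) => `[data-cy=${id}]`;

module.exports = { dataCy };
```

In a spec you would then write e.g. cy.get(dataCy('submit')).click().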

Writing Tests

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

A best practice when writing tests in Cypress is to iterate on a single one at a time, e.g.

describe('/login', () => {

  beforeEach(() => {
    // Wipe out state from the previous tests
    cy.visit('/#/login')
  })

  it('requires email', () => {
    cy.get('form').contains('Sign in').click()
    cy.get('.error-messages')
    .should('contain', 'email can\'t be blank')
  })

  it('requires password', () => {
    cy.get('[data-test=email]').type('joe@example.com{enter}')
    cy.get('.error-messages')
    .should('contain', 'password can\'t be blank')
  })

  it('navigates to #/ on successful login', () => {
    cy.get('[data-test=email]').type('joe@example.com')
    cy.get('[data-test=password]').type('joe{enter}')
    cy.hash().should('eq', '#/')
  })

})

Note that we don't add assertions about the home page because we're in the login spec, and that's not our responsibility. We'll leave that for the home page, which is covered by the articles spec.

Controlling State

"abstraction, reusability and decoupling"

- Don't use the UI to build up state
+ Set state directly / programmatically

Now you have the login spec done, and it's the cornerstone for every single test you will do. So how do you use it in e.g. the settings spec? To avoid copy-pasting the login steps into each of your tests and duplicating code, you could use a custom command: cy.login(). But a custom command that logs in through the UI fails at testing in isolation, adds 0% more confidence and accounts for 75% of the test duration. You need to log in without using the UI, and how to do that depends on how your app works. For example, if the app checks for a JWT token, you can make a silent (HTTP) login request in Cypress and store the token yourself.

So your custom login command becomes:

Cypress.Commands.add('login', () => {
  cy.request({
    method: 'POST',
    url: 'http://localhost:3000/api/users/login',
    body: {
      user: {
        email: 'joe@example.com',
        password: 'joe',
      }
    }
  })
  .then((resp) => {
    window.localStorage.setItem('jwt', resp.body.user.token)
  })
})

Setting state programmatically isn't always as easy as making requests to endpoint. You might need to manually dispatch e.g. Vue actions to set desired values for the application state in the store. Cypress documentation has good example of how you can test Vue web applications with Vuex data store & REST backend.

Visiting external sites

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

Try to avoid requiring a 3rd-party server. When necessary, always use cy.request() to talk to 3rd-party servers via their APIs, like when testing log in and your app uses another provider via OAuth. Or you could try stubbing out the OAuth provider. Cypress has recipes for different approaches.

Add multiple assertions

- Don't create "tiny" tests with a single assertion and acting like you’re writing unit tests.
+ Add multiple assertions and don’t worry about it

Cypress runs a series of async lifecycle events that reset state between tests. Resetting tests is much slower than adding more assertions.

it('validates and formats first name', function () {
    cy.get('#first')
      .type('johnny')
      .should('have.attr', 'data-validation', 'required')
      .and('have.class', 'active')
      .and('have.value', 'Johnny')
  })

Clean up state before tests run

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

When your tests end, you are left with your working application at the exact point where your test finished. If you remove your application's state after each test, then you lose the ability to use your application in this mode, debug it, or write partial tests.

Unnecessary Waiting

- Don't wait for arbitrary time periods using cy.wait(Number).
+ Use route aliases or assertions to guard Cypress from proceeding until an explicit condition is met.

For example waiting explicitly for an aliased route:

cy.server()
cy.route('GET', /users/, [{ 'name': 'Maggy' }, { 'name': 'Joan' }]).as('getUsers')
cy.get('#fetch').click()
cy.wait('@getUsers')     // <--- wait explicitly for this route to finish
cy.get('table tr').should('have.length', 2)
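Note that cy.server() and cy.route() are the older API; from Cypress 6 onwards the same pattern is written with cy.intercept() (and the old commands were eventually removed in Cypress 12). A rough equivalent:

```javascript
// same explicit wait, using the newer cy.intercept() API
cy.intercept('GET', '/users*', [{ name: 'Maggy' }, { name: 'Joan' }]).as('getUsers')
cy.get('#fetch').click()
cy.wait('@getUsers') // wait explicitly for the stubbed request to finish
cy.get('table tr').should('have.length', 2)
```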

No constraints

You've native access to everything so don't limit yourself trying to act like a user. You can e.g.

  • Control Time: cy.clock(), e.g. control how your app responds to system time, force set timeouts and set intervals to fire when you want them to.
  • Stub Objects: cy.stub(), force callbacks to fire, assert things are called with right arguments.
  • Modify Stores: cy.window(), e.g. dispatch events, like logout.

Set global baseUrl

+ Set a baseUrl in your configuration file.

Adding a baseUrl in your configuration allows you to omit passing the baseUrl to commands like cy.visit() and cy.request().

Without baseUrl set, Cypress loads the main window on localhost plus a random port. As soon as it encounters a cy.visit(), it switches the url of the main window to the url specified in your visit. This can result in a 'flash' or 'reload' when your tests first start. By setting the baseUrl, you can avoid this reload altogether.
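For example, in cypress.json (in Cypress 10+ this moved into the e2e block of cypress.config.js; the port here is an assumption):

```json
{
  "baseUrl": "http://localhost:3000"
}
```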

Assertions should be obvious

"A good practice is to force an assertion to fail and see if the error message and the output is enough to know why. It is easiest to put a .only on the it block you're evaluating. This way the application will stop where a screenshot is normally taken and you're left to debug as if you were debugging a real failure. Thinking about the failure case will help the person who has to work on a failing test." (Best practices for maintainable tests)

<code>
it.only('check for tab descendants', () => {
  cy
    .get('body')
    .should('have.descendants', '[data-testid=Tab]') // expected '' to have descendants '[data-testid=Tab]'
    .find('[data-testid=Tab]')
    .should('have.length', 2) // expected '[ <div[data-testid=tab]>, 4 more... ]' to have a length of 2 but got 5
});
</code>

Explore the environment

You can pause the test execution by using debugger keyword. Make sure the DevTools are open.

it('bar', function () {
   debugger
   // explore "this" context
 })

Running in CI

If you're running Cypress in CI and need to start and stop your web server, there are recipes showing you how.

Try the start-server-and-test module. It's good to note that when using e2e-cypress plugin for vue-cli it starts the app automatically for Cypress.

If your videos taken during cypress run freeze when running on CI then increase the CPU resources, see: #4722

Adjust the compression level in cypress.json to minimal with "videoCompression": 0, or disable compression with "videoCompression": false. You can also disable recording altogether with "video": false.

Record success and failure videos

Cypress captures videos from test runs, and whenever a test fails you can watch the failure video side by side with the video from the last successful test run. The differences in the subject under test are quickly obvious, as Bahmutov's tips suggest.

If you're using e.g. GitLab CI, you can configure it to keep artifacts from failed test runs for 1 week, while keeping videos from successful test runs for only 3 days.

# artifacts kept from failed runs:
artifacts:
    when: on_failure
    expire_in: '1 week'
    untracked: true
    paths:
      - cypress/videos
      - cypress/screenshots

# artifacts kept from successful runs (in a separate job):
artifacts:
    when: on_success
    expire_in: '3 days'
    untracked: true
    paths:
      - cypress/screenshots

Helpful practices

Disable ServiceWorker

ServiceWorkers are great, but they can really affect your end-to-end tests by introducing caching and coupling between tests. If you want to disable the service worker caching, you need to delete navigator.serviceWorker when visiting the page with cy.visit.

it('disable serviceWorker', function () {
  cy.visit('index.html', {
    onBeforeLoad (win) {
      delete win.navigator.__proto__.serviceWorker
    }
  })
})

Note: once deleted, the SW stays deleted in the window, even if the application navigates to another URL.

Get command log on failure

In the headless CI mode, you can get a JSON file for each failed test with the log of all commands. All you need is cypress-failed-log project and include it from your cypress/support/index.js file.

Conditional logic

Sometimes you might need to interact with a page element that does not always exist. For example, there might be a modal dialog the first time you use the website, and you want to close it. But the modal is not shown the second time around, and a test that unconditionally closes it will fail.

In order to check if an element exists without asserting it, use the proxied jQuery function Cypress.$:

const $el = Cypress.$('.greeting')
if ($el.length) {
  cy.log('Closing greeting')
  cy.get('.greeting')
    .contains('Close')
    .click()
}
cy.get('.greeting')
  .should('not.be.visible')

Summary

- Don't use the UI to build up state
+ Set state directly / programmatically

- Don't use page objects to share UI knowledge
+ Write specs in isolation, avoid coupling

- Don't limit yourself trying to act like a user
+ You have native access to everything

- Don't couple multiple tests together.
+ Tests should always be able to be run independently and still pass.

- Don't try to visit or interact with sites or servers you do not control.
+ Only test what you control.

- Don't use highly brittle selectors that are subject to change.
+ Use data-* attributes to provide context to your selectors

- Don't create tests with a single assertion
+ Add multiple assertions and don’t worry about it

- Don't use after or afterEach hooks to clean up state.
+ Clean up state before tests run.

+ Set a baseUrl in your configuration file.

More to read

Use cypress-testing-library, which encourages good testing practices through simple and complete custom Cypress commands and utilities.

Set up intelligent code completion for Cypress commands and assertions by adding a triple-slash directive to the head of your JavaScript or TypeScript testing spec file. This will turn on IntelliSense on a per-file basis.

/// <reference types="Cypress" />

Read What I’ve Learned Using Cypress.io for the Past Three Weeks if you need a temporary workaround for iframes and testing file uploads as for now Cypress does not natively support those.

And of course Gleb Bahmutov's blog is useful resource for practical things like Tips and tricks post.

Web application test automation with Robot Framework

Software quality has always been important, but it seems that lately it has become a more generally acknowledged fact that quality assurance and testing aren't things to be left behind. With Java EE web applications you have different ways to achieve test coverage and to test that your application works, with tools like JUnit, Mockito and DBUnit. But what about testing your web application with different browsers? One great way is to use Robot Framework, which is a generic test automation framework; combined with Selenium 2 it makes both writing your tests and running them quite intuitive.

Introduction

Robot Framework is a generic test automation framework for acceptance testing, and its tabular test data syntax is almost plain English and easy to understand. Its testing capabilities can be extended by test libraries implemented either with Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases. Robot Framework itself is open source and released under Apache License 2.0, and most of the libraries and tools in the ecosystem are also open source. The development of the core framework is supported by Nokia Siemens Networks.

Robot Framework doesn't do any specific testing activity itself but instead acts as a front end for libraries like Selenium2Library. Selenium2Library is a web testing library for Robot Framework that leverages the Selenium 2 (WebDriver) libraries from the Selenium project. In practice it starts the browser (e.g. IE, Firefox, Chrome) and runs the tests against it natively as a user would. There's no need to manually click through the user interface.

Robot Framework has good documentation, and by going through the "Web testing with Robot Framework and Selenium2Library" demo you see how it's used in web testing, get an introduction to the test data syntax, and see how tests are executed and what the logs and reports look like. For a more detailed view of Robot Framework's features you can read the User Guide.
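To give a taste of the tabular syntax, a login test in the demo's style looks roughly like this (a sketch adapted from the WebDemo; the element locators and credentials are assumptions based on the demo application):

```robotframework
*** Settings ***
Library           Selenium2Library

*** Test Cases ***
Valid Login
    Open Browser           http://localhost:7272    firefox
    Input Text             username_field           demo
    Input Text             password_field           mode
    Click Button           login_button
    Page Should Contain    Welcome Page
    [Teardown]    Close Browser
```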

Installing test tools

The "Web testing with Robot Framework and Selenium2Library" demo is a good starting point for getting to know Robot Framework, but it more or less skips the details of setting up the system, and the installation instructions are a bit too verbose. So here is an example of how to install and use Robot Framework and Selenium 2 on 64-bit Windows 7.

Python installation

First we need Python as a precondition to run Robot Framework and we install Python version 2.7.x as Robot Framework is currently not compatible with Python 3.x. From the Python download page select Python 2.7.9 Windows X86-64 Installer.

For using the RIDE editor we also need wxPython. From the download page select wxPython2.8-win64-unicode-py27 for 64-bit Python 2.7.

Next we need to set up the PATH environment variable in Windows, if you didn't set it up when you installed Python.

Open Start > Settings > Control Panel > System > Advanced > Environment Variables
Select System variables > PATH > Edit and add e.g. ;C:\Python27;C:\Python27\Scripts at the end of the value.
Exit the dialog with OK to save the changes.

Starting from Python 2.7.9, the standard Windows installer by default installs and activates pip.

Robot Framework and Selenium2Library installation

In practice it is easiest to install Robot Framework and Selenium2Library along with its dependencies using pip package manager. Once you have pip installed, all you need to do is running these commands in your Command Prompt:

1. pip install robotframework
2. pip install robotframework-selenium2library

It's good to notice that pip has a "feature": unless a specific version is given, it installs the latest possible version even if that is an alpha or beta release. A workaround is giving the version explicitly, e.g. pip install robotframework==2.7.7.

RIDE installation

RIDE is a light-weight and intuitive editor for Robot Framework test case files. It can be installed by using Windows installer (select robotframework-ride-1.1.win-amd64.exe) or with pip using:

pip install robotframework-ride

The Windows installer adds a shortcut to the desktop, and you can also start RIDE from the Command Prompt with the command ride.py.

Now you have everything you need to create and execute Robot Framework tests.

Executing Robot Framework tests

As described in WebDemo, running the tests requires the demo application, located under the demoapp directory, to be running. It can be started by executing it from the command line:

python demoapp/server.py

After the demo application is started, it is available at http://localhost:7272, and it needs to be running while executing the automated tests. It can be shut down by using Ctrl-C.

In Robot Framework each file contains one or more tests and is treated as a test suite. Every directory that contains a test suite file or directory is also a test suite. When Robot Framework is executed on a directory it will go through all files and directories of the correct kind except those that start with an underscore character.

WebDemo's test cases are located in login_tests directory and to execute them all type in your Command Prompt:

pybot login_tests

Running the tests opens a browser window which Selenium 2 is driving natively as a user would, and you can see the interactions.
When the test run is finished, three files will have been generated: report.html, log.html and output.xml. On failed tests Selenium takes screenshots, which are named like selenium-screenshot-1.png. The browser can also be run on a remote machine using the Selenium Server.

You can also run an individual test case file and use various command line options (see pybot --help) supported by Robot Framework:

pybot login_tests/valid_login.txt
pybot --test InvalidUserName --loglevel DEBUG login_tests

If you selected Firefox as your browser and get an error like "Type Error: environment can only contain strings" that's a bug in Selenium's Firefox profile. You can fix it with a "monkey patch" to C:\Python27\Lib\site-packages\selenium\webdriver\firefox\firefox_profile.py.

Using different browsers

The browser that is used is controlled by ${BROWSER} variable defined in resource.txt resource file. Firefox browser is used by default, but that can be easily overridden from the command line.

pybot --variable BROWSER:Chrome login_tests
pybot --variable BROWSER:IE login_tests

Browsers like Chrome and Internet Explorer require separate Internet Explorer Driver and Chrome Driver to be installed before they can be used. InternetExplorerDriver can be downloaded from Selenium project and ChromeDriver from Chromium project. Just place them both somewhere in your PATH.

With Internet Explorer Driver you can get an error like "'Unexpected error launching Internet Explorer. Protected Mode settings are not the same for all zones. Enable Protected Mode must be set to the same value (enabled or disabled) for all zones.'". As it reads in the driver's configuration you must set the Protected Mode settings for each zone to be the same value. To set the Protected Mode settings in Internet Explorer, choose "Internet Options..." from the Tools menu, and click on the Security tab. For each zone, there will be a check box at the bottom of the tab labeled "Enable Protected Mode".

Reading the results

After the tests have run there are a couple of result files to read: report.html and log.html.

The report.html shows the results of your tests and its background is green when all tests have passed and red if any have failed. It also shows "Test Statistics" for how many tests have passed and failed. "Test Details" shows how long the test took to run and, if it failed, what the fail message was.

The log.html gives you more detailed information about why some test fails if the fail message doesn't make it obvious. It also gives a detailed view of the execution of each of the tests.

Summary

From the short experience I have had playing with Robot Framework, it seems to be a powerful tool for designing and executing tests and a good way to improve your application's overall quality.

Next it's time to get to know the Robot Framework syntax better, write some tests and run the Selenium Server. Also, the Maven plugin and the RobotFramework-EclipseIDE plugin look interesting.

References

Robot Framework documentation
Robot Framework User Guide
Web testing with Robot Framework and Selenium2Library demo
RIDE: light-weight and intuitive editor for Robot Framework test case files