Build a Docker image for your Protractor automation

Hey guys, today I will share an example Dockerfile that lets you run your Protractor automation in a Docker container on Firefox and Chrome.

You will need a known_hosts file with GitHub's host key and an SSH key (id_rsa) registered with GitHub to be able to clone the repository and run the automation. You also need to install Java to run the Selenium server in your Docker container.

Create a file called Dockerfile in your project and paste these instructions:

Dockerfile

FROM node


#Install required applications

RUN echo 'deb http://ftp.debian.org/debian jessie-backports main' >> /etc/apt/sources.list.d/jessie-backports.list

RUN apt-get update && \
apt-get install -y -t jessie-backports openjdk-8-jre-headless ca-certificates-java && \
apt-get install -y xvfb wget openjdk-8-jre && \
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \
dpkg --unpack google-chrome-stable_current_amd64.deb && \
apt-get install -f -y && \
apt-get clean && \
rm google-chrome-stable_current_amd64.deb

ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/

#Install Firefox on Linux
RUN touch /etc/apt/sources.list.d/debian-mozilla.list
RUN echo "deb http://mozilla.debian.net/ jessie-backports firefox-release" >> /etc/apt/sources.list.d/debian-mozilla.list
RUN wget http://mozilla.debian.net/pkg-mozilla-archive-keyring_1.1_all.deb
RUN dpkg -i pkg-mozilla-archive-keyring_1.1_all.deb
RUN apt-get update && apt-get install -y -t jessie-backports firefox

#Creating user
RUN useradd -ms /bin/bash admin

# Copy SSH key and known hosts for user admin
ADD id_rsa /home/ci_box/.ssh/id_rsa
ADD known_hosts /home/ci_box/.ssh/known_hosts
RUN chmod 700 /home/ci_box/.ssh/id_rsa; chown -R admin:staff /home/ci_box/.ssh

#Changing user
USER admin

#To avoid conflicts with host machine dbus

#Copying script which will clone repository
WORKDIR /home/admin/workspace
ADD . ./

#Open the bash
CMD ["/bin/bash"]


Script to clone the branch


#!/bin/bash

#Branch and repository (set these for your project)

#Clone the repository
git clone -b "${BRANCH}" "${REPOSITORY}.git"

#Call the script that runs your automation (the name here is just an example)
./run_tests.sh "${BRANCH}"


Script to run the tests

#!/bin/bash
set -e

#Starting dbus daemon and exporting related environmental variables
export $(dbus-launch)

#Starting X server to be able to run firefox
Xvfb :1 -screen 0 1200x800x24 &

# Clean the target with reports
rm -rf target

# Install all dependencies
npm install

# Run tests
DISPLAY=:1.0 npm run regression


Build the image in the folder that contains the Dockerfile:

docker build -t rafazzevedo/test .


Push the image to your account

docker login
docker push rafazzevedo/test


Run the automation in the container:

docker run --name=test rafazzevedo/test:latest /home/admin/ master regression


See you guys soon !


TestCast – QA market around the world

Hello everyone,

I participated in a podcast called TestCast 😀

We talked about our different experiences in the QA market around the world. Sadly, the podcast is 100% in Portuguese, with no transcription.

Cucumber report with protractor tests

Hi guys, today I will show what you can do with the cucumber-html-reporter module that I am currently using in my Protractor automated tests.

So, it is a Node library that reads the JSON reports and converts them into a nice HTML report. It shows the coverage of your scenarios, and if you want you can have pictures attached as well.
I am pasting here some snippets that you can use and customise as you like.
To install it, you need to add cucumber-html-reporter to the dependencies in your package.json, like:

{
    "name": "Rafazzevedo",
    "version": "1.0.0",
    "description": "Rafazzevedo",
    "private": true,
    "dependencies": {
        "cucumber-html-reporter": "^0.4.0",
        "protractor": "^5.1.1"
    },
    "devDependencies": {
        "chai": "^3.5.0",
        "chai-as-promised": "^6.0.0",
        "chai-string": "^1.3.0",
        "cucumber": "^1.3.2",
        "protractor-cucumber-framework": "^1.0.2"
    }
}
Then, in your Hooks class, you register an AfterFeatures handler that creates the report:
import { browser } from 'protractor';
import reporter from 'cucumber-html-reporter';
import _ from 'lodash';
import Navigation from './helpers/navigation';

//'automation' comes from a project helper module (its import is omitted here)
const API = automation.API;

const Hooks = function () {
    const api = new API(browser.params);
    const navigation = new Navigation(browser);
    const credentials = browser.params.credentials;

    this.registerHandler('BeforeFeatures', () => {
        browser.api = api;
        browser.navigation = navigation;
    });

    this.registerHandler('AfterFeatures', () => {
        browser.getProcessedConfig().then((config) => {
            const jsonFile = config.capabilities.cucumberReportPath;
            browser.getCapabilities().then((capabilities) => {
                const reportName = createReportName(config.capabilities, capabilities);
                const htmlFile = jsonFile.replace('.json', '.html');
                const options = {
                    name: `Automation (${reportName})`,
                    theme: 'bootstrap',
                    jsonFile,
                    output: htmlFile,
                    reportSuiteAsScenarios: true,
                };
                reporter.generate(options);
            });
        });
    });

    function createReportName(configCapabilities, browserCapabilities) {
        const browserName = configCapabilities.browserName;
        let deviceName = 'desktop';
        let browserVersion = '';
        if (!_.isUndefined(configCapabilities.chromeOptions)) {
            browserVersion = ` ${browserCapabilities.get('version')}`;
            if (!_.isUndefined(configCapabilities.chromeOptions.mobileEmulation)) {
                deviceName = configCapabilities.chromeOptions.mobileEmulation.deviceName;
            }
        }
        return `${browserName} ${deviceName}${browserVersion}`;
    }
};

module.exports = Hooks;
After adding these two functions, which create the report for multiple browsers and include the version and the device name (desktop or mobile), you can attach pictures when a scenario fails:
this.After((scenario) => {
    if (scenario.isFailed()) {
        return browser.takeScreenshot().then(screenshot =>
            scenario.attach(new Buffer(screenshot, 'base64'), 'image/png'));
    }
});
You can always customise and improve these functions: if you want a picture after every scenario, even the passing ones, just remove the condition in the After hook. There are also different themes and other options available in the cucumber-html-reporter documentation.
See you guys !

Avoiding false positives with Protractor and Javascript

Do you have false positives when automating with Protractor?

Asynchronous code can be really annoying sometimes: callback pyramids, error handling on every second line, promises swallowing errors by default…

When using ES6 promises, any exception thrown within a then or a catch handler will be silently disposed of unless it is manually handled.

If you are not seeing the errors, you are probably writing the promise something like this:

Promise.resolve('promised value').then(function () {
    throw new Error('error'); //silently swallowed
});

Then the more obvious way to print the error is adding a catch after the promise:

Promise.resolve('promised value').then(function () {
    throw new Error('error');
}).catch(function (error) {
    console.error(error); //now the error is printed
});

Remember that you need to add this after every promise whose error you want printed. Do you really think this is reliable? Are you going to repeat yourself, adding a catch after every single promise?

So, how can you print the error without repeating yourself for every promise? You can use .done instead of .then (it is not part of the ES6 spec, but promise libraries such as Q provide it). It runs the chain and then rethrows any unhandled error.

Here is how you can avoid them:

Promise.resolve('promised value').done(function () {
    throw new Error('error'); //rethrown instead of swallowed
});
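Native ES6 promises do not actually ship a .done method; if yours lacks it, here is a minimal sketch of how the technique works, patching Promise.prototype purely for illustration:

```javascript
// Minimal .done() sketch: run the handlers, then re-throw any unhandled
// rejection OUTSIDE the promise chain so it surfaces as a real error
// instead of being silently swallowed.
if (typeof Promise.prototype.done !== 'function') {
  Promise.prototype.done = function (onFulfilled, onRejected) {
    this.then(onFulfilled, onRejected).catch(function (err) {
      setTimeout(function () { throw err; }, 0); // escapes the chain
    });
  };
}

Promise.resolve('promised value').done(function () {
  throw new Error('error'); // now fails loudly instead of disappearing
});
```

The setTimeout trick matters: throwing inside a catch handler would just create yet another rejected promise, while throwing from a timer callback reaches the global error handler.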

But what do you do when you chain multiple thens?

There is a library called Bluebird, which fixes this and integrates with your existing code by extending the ES6 Promises API. It makes sure you know about unhandled rejections, so no more false positives. If you catch an error to handle part of it, rethrow it so it is still reported:

.catch(function (error) {
    //handle what you can here, then
    throw error;
});
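If you prefer to stay dependency-free, plain Node.js gives you a similar safety net through the process-level unhandledRejection event. This is a sketch of that built-in mechanism, not Bluebird's API:

```javascript
// Surface every rejection that no .catch() ever handles.
process.on('unhandledRejection', function (reason) {
  console.error('Unhandled rejection:', reason);
  // In a test runner you could also set process.exitCode = 1 here,
  // so a swallowed error fails the build instead of passing silently.
});

Promise.resolve('promised value').then(function () {
  throw new Error('error'); // no catch anywhere, so the handler fires
});
```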

Then, if you want to discard an exception, not because it has been swallowed but because you don't need to print it, you can add an explicit empty catch:

Promise.reject('error value').catch(function () {
    //rejection intentionally ignored
});

Bad practices when writing your BDD scenarios

I usually write about the best practices to follow when writing your BDD scenarios. This time I will do things differently and show some examples I have found of how not to write your BDD scenarios.

 Example 1 – Too many steps:

  Scenario: Valid data entered
    Given a user exists
    When I visit the details access page
    And I fill in email with ""
    And I select "read only" from "Permissions"
    And I fill in "Valid until" with "2010-03-01"
    And I press "Grant access"
    Then I should see "Access granted till 2010-03-01."
    And I should be on the details page


 Example 2 – UI elements dependency:

   Scenario: Adding a picture 
     Given I go to the Home page 
     When I click the Add picture button 
     And I click on the drop down "Gallery"
     And I click on the first image
     Then I should see the image added on the home page


 Example 3 – Excessive use of tables:

   Scenario: Adding a new data user 
     Given I am on  user details page
     When I select an existent user
     And I send some new user data
     |name |age|country|language|address |telephone|
     |James|20 |england|english |street 1|123456789|
     |Chris|30 |spain  |spanish |street 2|987654321|
     Then I should see the correct previous registered data
     |gender  |married|smoke|
     |male    |true   |true |
     |male    |false  |false|


 Example 4 – Code and data structure:

   Scenario: Include attachment
     Given I go to the Attachments page 
     When I click the Add attachment button with css "#attachment-button"
     And I click on the first csv file with class ".file .csv"
     Then I should see the file attached with the id ".attachment"


  • Write declarative scenarios.
  • Write at a higher level of abstraction, instead of being concerned with clicking widgets on a page. Surfacing UI concerns in a feature can make it fragile to ordinary design changes.
  • Try to stick to no more than 5 steps per scenario. It's easier to read.
  • Avoid code and data structures like XPath selectors, CSS classes and regexes in your scenarios.


Test Reporting in Agile

When working in an Agile team, the feedback loop should be as quick as possible. If you don't send feedback at the right time, future bugs could be costly, as the feature will already have a large amount of code behind it.

What is this feedback loop?

If you have implemented continuous integration and have automated tests running after every new commit to your dev environment, you need to report the results of these tests back as soon as possible. This is the feedback loop, and you need to know the correct time to report issues to the dev team.

If your automation takes too long to run the tests after a new commit, that is a sign you need to improve your smoke tests: maybe the scope is too big, or maybe the automation is slow for other reasons (sleeps, not scalable, etc.).

The feedback loop influences how your agile process works and if you are saving or wasting time when developing. Tight feedback loops will improve performance of the team in general, give confidence, save time and avoid costly bug fixes.

Feedback loops are not only about the continuous integration, it is about pair programming and unit tests as well, but this time we will focus on Continuous integration tests.

When you are implementing a new scenario in your automated tests, you want to know ASAP if something you implemented is breaking some other scenario, or the same one. It is the same situation when you are developing something related to that feature and want to know if the new implementation is breaking the tests. It is easier to fix while it is fresh in your mind; you don't want to wait 30 minutes to learn there is a bug because you changed the name of a variable…

In my personal opinion, if you don't have parallel tests to check multiple browsers or mobiles at the same time, it is better to focus on the most used browser/mobile, since this is the first priority in all cases.

Use case: 90% of users are on Chrome on desktop, 5% on Firefox mobile, and 5% on Safari mobile. What is the best strategy?

After commit:

  1. Run smoke tests on all browsers and take 15 minutes to receive feedback?
  2. Run regression tests on all browsers and take 40 minutes to receive feedback?
  3. Run smoke tests on only the most used browser, get feedback in 5 minutes, and leave a run on all browsers every hour?
  4. Run regression tests on only the most used browser, get feedback in 10 minutes, and leave a run on all browsers every hour?

There is no rule to follow. Since in this case you don't have parallel tests, I would go for the third option: focus on the most used browser and leave the other browsers running in a dedicated job each hour. Why not the fourth option? Because you need to keep the business value in mind.

Of course we need to deliver the feature on all used browsers, but when time is tight (very often) and you need to deliver as fast as you can, you go for the option with the most business value and cover the other browsers afterwards. Don't forget: when you automate, you are not only helping development, you are also helping the end users.

If you are wondering how long each type of test should take to give feedback, you can build your own process based on this graph:

[Graph: suggested feedback times per test type]

For how long should the team keep the test reports?

It depends on how many times you run the tests during the day. There is no rule for that, so you need to find the best option for you and your team. In my team, we keep the test reports until the 15th run on Jenkins, and after that we discard the report and the logs. In most cases, I've found that if something goes back more than 3 major versions, looking for more resolution is a waste of time.

If regressions are reported as soon as they're observed, the report should include the first known failing build and the last known good build. Ideally these are sequential, but that isn't necessarily the case. Some people like to archive old reports outside Jenkins; I haven't felt the need for this so far, but it is up to you whether to keep these reports outside Jenkins.



Webinar: Using Espresso for Fast and Reliable Feedback
