
Archive for the ‘QA’ Category

Filing a ticket – Best practices

December 3, 2017

Search for duplicates

Before reporting a bug, make sure that it has not already been reported.

Pick a good summary

When we’re looking through lists of lots of issues, the summaries are essential in helping us to understand what an issue is about. This means that you should err in favour of putting too much information in the summary, rather than too little.

Some examples:

Bad: Space export doesn’t work
Good: Space export fails with NullPointerException when performed by the anonymous user

Bad: Administrator attachment control
Good: Allow configuration of maximum attachment sizes per space or per page

Set the Priority, Severity and Impact

You need to select the priority, severity and impact of this bug.

Set the “Fix version” (release version)

Set the fix version once you know testing has passed and the change is going into the next release, unless the release is scoped by what needs to ship rather than by what is ready.

Choose appropriate components

You need to select all the components that are related to the ticket; if you need to make changes to different components, this is the field where you specify them. It is useful for the QA engineer to know which components are affected and therefore which components need to be tested.

Attach images/logs/videos/files

Often, especially when describing bugs, an image/log/video/file helps tremendously to explain the issue. If you are testing on a mobile platform, try to attach device logs, videos and images. For web, an image or a video should be fine, and for the server platform you can attach the response and, of course, the steps to reproduce.

If you need to include a large piece of information in an issue, such as a stack trace, please do not put it in the Description field. Instead, add it as an attachment or a comment. (This makes it a lot easier to view and parse issues in most browsers.) I suggest using Jing to capture the images and video.

Link issues

If you know that a feature request or a bug is similar (but not identical) to an existing one, please link it using a link type of “relates to”. This will help to give a broader view of the problem, and may even make it possible to solve two issues instead of one.

Write a detailed description

For all issues, please provide a description that can be understood by everyone in the community, not just yourself or other developers. The description should also allow the QA engineer to understand the implementation details of the story, down to the code level; this informs their decisions on risks, what testing needs to be added, and what testing is unnecessary. If you need to include some techno-babble, in addition to the plain language, so other developers understand the full details, that’s fine.

First and foremost, put yourself in the place of someone else trying to solve the issue based on information in the Jira ticket alone. That means put as much information as you can into the ticket so that the next person can work the issue without having to follow up. Even if the ticket is for an issue you plan to work yourself, the more information provided, the better.

Every ticket must include:

  • Detailed steps to test
  • Given/When/Then following BDD
  • Impacts and any integration with other components/features/functions, so the tester can think about edge cases
  • The environment this needs to be tested in (QA is the default if not specified). For the mobile platform, we test directly from each component’s master branch unless the ticket says the app was deployed to the qa/dev environment.
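As a hypothetical illustration of the Given/When/Then part of a ticket (the feature and every step below are invented, not taken from a real ticket), the steps to test might read:

```gherkin
Feature: Login
  Scenario: Registered user logs in with valid credentials
    Given I am on the login page
    When I enter a valid username and password
    And I click the "Log in" button
    Then I should be redirected to my dashboard
```

Writing the steps in this shape makes it unambiguous what the precondition, action and expected outcome are, for the developer and the QA engineer alike.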

Every bug must include:

  • Detailed steps to reproduce
  • Impacts and any integration with other components/features/functions, so the tester can think about edge cases
  • What you expected to happen
  • What happened instead
  • The environment you are testing in, plus any other information that might be relevant to the bug (e.g. your screen resolution, browser, OS)
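Putting the checklist together, a bug report might look like this hypothetical example, which reuses the good summary from earlier in this post (every other detail is invented for illustration):

```text
Summary: Space export fails with NullPointerException when performed by the anonymous user

Environment: QA, Chrome 62, 1920x1080

Steps to reproduce:
1. Log out so you are browsing as the anonymous user
2. Open any space and choose "Export space"

Expected: The export completes and a download link is shown
Actual: The page shows a 500 error; the log contains a NullPointerException

Impacts: Export feature, anonymous-user permissions
Attachments: screenshot.png, server.log
```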

 

Don’t reopen issues that have been resolved and shipped

Don’t reopen resolved/closed issues once they have been shipped. If an issue similar to yours is marked as resolved, but you’re still experiencing a problem with it, create a new issue and provide a link to the original one. For resolved bug fixes, this will help to evaluate whether it is in fact the same problem or a new issue with some of the same symptoms.

Testing a new story/feature

If you find a bug while testing a new feature:

  • Create a Story bug and link it to the main ticket; remember to prioritize this bug and move the story ticket back to In Progress
  • Once the related Story bugs are fixed, the developer should move the story to Ready for QA, so any QA can check that all the bugs have been fixed and close the main story if no new bugs were introduced.

Resources:

https://confluence.sakaiproject.org/display/MGT/Sakai+Jira+Guidelines#SakaiJiraGuidelines-Community
https://wiki.collectionspace.org/display/collectionspace/Guidelines+for+Filing+a+JIRA+Bug+Report
https://confluence.atlassian.com/jirakb/using-jira-for-test-case-management-136872198.html
http://www.3pillarglobal.com/insights/manage-test-cases-bugs-using-jira
http://blogs.atlassian.com/2013/12/jira-qa-process/
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30746287
https://confluence.atlassian.com/display/DEV/JIRA+usage+guidelines


Usability Testing

October 12, 2017

Hello guys,

Today I am going to write about some usability testing techniques that you can use in your day-to-day tests. Usability testing is how you figure out whether your design is working or not, what can be improved, and what is not straightforward.

Jakob Nielsen created the 10 usability heuristics:

  • Visibility of system status
  • Match between system and the real world
  • User control and freedom
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Flexibility and efficiency of use
  • Aesthetic and minimalist design
  • Help users recognize, diagnose, and recover from errors
  • Help and documentation

 

Know your user

When you know your users, you can focus on their goals, the characteristics they have and the attitudes they display. You should also examine what the user expects from the product.

User personas are created from other forms of user research and thus offer a real-life portrait that is easy for the team to keep in mind when designing products. User personas have a name and a story.

 

Simplify

When doing UX testing, start from your worst-case user: someone who knows nothing about your product, is distracted while using the system, has bad cell reception, etc. By watching that person use and fumble through your product, you can quickly identify areas where the app is not simple, clear or fast enough.

 

Trust your intuition

It’s important to remember what specific problem you’re setting out to solve. Trust your gut: this is especially important early on, when larger decisions are still fragile. Remember that you can’t build something that pleases everyone; trying to do so normally results in a weak release. Stay focused on the use case you want to nail and avoid trying to solve all the use cases at once. The smallest details can be the difference between a product that gives the user a good experience and one that does not.

 

Efficiency

Efficiency lies in the time it takes the user to complete a task. For instance, if you run an e-commerce site, its efficiency depends on the time and the number of steps users need to complete basic tasks like buying a product.

 

Recall

This examines the person’s memory of the browsing process and the interface they used a while ago, and it is one of the most important aspects to test. If your design is simple and straightforward, it will be easier to remember how to complete a task.

 

Emotional response

This helps to analyze the user’s feelings after they have used the product. Some might feel happy while others might feel down, and there are others who are so deeply impressed that they recommend the product to their friends.

 

Resources:

https://uxmastery.com/beginners-guide-to-usability-testing/

https://www.nngroup.com/articles/how-many-test-users/

https://thenextweb.com/dd/2013/08/10/13-ways-to-master-ux-testing-for-your-startup/#.tnw_3ngxAqEM

https://www.invisionapp.com/blog/ux-usability-research-testing/

https://www.interaction-design.org/literature/article/7-great-tried-and-tested-ux-research-techniques

https://uxdesign.cc/ux-tools-for-user-research-and-user-testing-a720131552e1


Hiring QAs, Headless vs Real Browsers, Automated Tests, Consumer Contract Tests

September 30, 2017

Hi guys, I went to the #18 Agile Roundabout meetup here in London and found it interesting enough to share. The first video is a talk about some of the challenges we face when hiring QAs, and about automation on headless browsers vs real browsers. The second is about the challenges of introducing automated tests in a company, and the third is about how to do consumer contract testing with Pact.

I do recommend watching the videos; it was really interesting to hear these experiences, and you might be facing one of these situations yourself.

Common mistakes when you hire QAs

 

Automated Tests

 

Consumer Contract Testing

 

 

Thank you guys for this great meetup!

How to deal with common situations in QA area

August 23, 2017

I know you are probably thinking: why do we need to talk about this? Unfortunately, I could spend hours here talking about the situations that a big part of QAs go through every day. Maybe you are not a QA, but you may have to deal with some of these problems as well.

So, I will talk about the most common problems and how you can deal with them:

 

When you have been telling management about a problem for weeks and now you’re just like

 


This is a classic problem: QAs should raise their concerns about issues they see coming. As a QA you know how the end user uses the product, and you can anticipate failures, scalability issues, etc.

Your job is to point out the issue and raise the concern. It is then up to your manager to act on the problem or not.

 

 

 

 

When you hear about “small last-minute changes”

 

This is completely common in agile (personally, I don’t think it is a major issue); we just need to be aware of the risks. If we keep this in mind, it is okay. In the end we are not machines, and most people don’t perform as well under pressure as they do on a normal day.

My advice is to keep a cool head when these last-minute demands arrive: go to a place where you can focus, ignore any outside distractions, and put some music on if it helps you concentrate.

 

When a new feature comes to QA on the last day of the sprint

 

Be realistic: as a QA you probably already know roughly what percentage of bugs you will find in the first round of tests for each developer’s ticket. This instinct improves the longer you stay in the company and get to know the quality of each developer’s work.

When such a feature arrives in QA, be realistic and raise the point that there is a good chance it will go back to development and not be finished by the end of the sprint.

 

When a bug slips through to the production environment

 

Don’t cry! haha This is completely common, and it is not always QA’s fault. QA is just the tip of the iceberg, and for this reason you need to know the feature not only from the technical but also from the business point of view; that is why the kickoff meetings and all the details are important.
When a bug slips into production it is usually the result of a series of mistakes: maybe the development team was not aware of some scenario, or the PO was not aware of what the users really wanted, and consequently QA didn’t know about some edge cases because they were not involved in the technical and business discussions. You know when you missed something, and you know when you didn’t test something because you were not aware of the implications/impact.

 

 

When no one recognizes QA

 

This is really sad, but after some years you get used to it. It is not ideal; most of the recognition comes from the developers themselves or from the QAs who work with you.
Learn to motivate yourself without waiting for recognition from your managers. If you love what you do, you don’t need anybody to tell you how good you are at it. When you get feedback about your work, take the positive and constructive criticism to improve yourself, and ditch the negative. Do what you like; as long as you are learning and you are happy, it is okay.

SQL Injection Automation Tool

Hello guys,

Today I will share this tool that will help you to perform some SQL Injection tests on your website.

What are SQL injection tests? They are a type of security test that you can perform on your web application. You need to be sure that your website prevents users and hackers from accessing your database through SQL injection.

To test whether your web page has a SQL injection vulnerability, check whether it accepts dynamic user-provided values via GET, POST or Cookie parameters, or via the HTTP User-Agent request header. You then try to exploit those inputs to retrieve as much information as possible from the back-end database management system, or even gain access to the underlying file system and operating system.
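To see why such parameters are dangerous, here is a minimal sketch (all names and values are invented for illustration) of how an unsanitized GET parameter turns into a malicious query, and how a parameterized query keeps the input out of the SQL text:

```javascript
// Sketch: how an unsanitized GET parameter becomes a malicious query.
function buildQueryUnsafe(id) {
  // Vulnerable: the user-supplied value is concatenated straight into the SQL text.
  return `SELECT * FROM users WHERE id = ${id}`;
}

// Safer pattern: keep the input out of the SQL text and bind it separately,
// as parameterized queries do.
function buildQuerySafe(id) {
  return { sql: 'SELECT * FROM users WHERE id = ?', params: [id] };
}

const attack = '1 OR 1=1';
console.log(buildQueryUnsafe(attack));
// SELECT * FROM users WHERE id = 1 OR 1=1   <- the WHERE clause now matches every row
console.log(buildQuerySafe(attack).sql);
// SELECT * FROM users WHERE id = ?
```

This is exactly the class of flaw that the tool below probes for automatically.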

This tool, sqlmap, can automate the process of identifying and exploiting this type of vulnerability. I will give you some tips here:

  • First you need to download the file or git clone:
git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git sqlmap-dev
  • Run the command below to check the available options:
    python sqlmap.py -hh 
  • To run a test, you need to pass the URL:
python sqlmap.py -u "http://localhost:8000/test?id=1" --batch
  • To increase the scope of the tests you can use the --level and --risk options, specifying the level (1-5) or the risk (1-3) of the tests:
python sqlmap.py -u "http://localhost:8000/test?id=1" --level=5 

The options above customize the detection phase; the default for both is 1.

If the application requires authentication, you can send the --cookie of an already logged-in session, or use --auth-type and --auth-cred to authenticate before the tests.

I suggest testing on your localhost with a copy of your database so you don’t mess up your data.

Resources:

http://sqlmap.org/

https://github.com/sqlmapproject/sqlmap

Testcast – QA Market around the world

Hello everyone,

I participated in a Podcast called TestCast 😀

We talked about our different experiences in the QA market around the world. Sadly, the podcast is 100% in Portuguese, with no transcription.

Cucumber report with protractor tests

Hi guys, today I will show what you can do with the cucumber-html-reporter module that I am currently using in my Protractor automated tests.

It is a node library that reads the JSON reports and converts them into a nice HTML report. It shows the coverage of your scenarios and, if you want, you can have pictures attached as well.
I am pasting here some snippets that you can use and customise as you want.
To install, you need to add cucumber-html-reporter to your dependencies, like:
{
    "name": "Rafazzevedo",
    "version": "1.0.0",
    "description": "Rafazzevedo",
    "private": true,
    "dependencies": {
        "cucumber-html-reporter": "^0.4.0",
        "protractor": "^5.1.1"
    },

    "devDependencies": {
        "chai": "^3.5.0",
        "chai-as-promised": "^6.0.0",
        "chai-string": "^1.3.0",
        "cucumber": "^1.3.2",
        "protractor-cucumber-framework": "^1.0.2"
    }
}
Then you create an After hook in your Hooks class and add the creation of the report:
import { browser } from 'protractor';
import reporter from 'cucumber-html-reporter';
import _ from 'lodash';
import Navigation from './helpers/navigation';

// `automation` here is the project's own helper module; its import is not shown in this snippet.
const API = automation.API;

const Hooks = function () {
    const api = new API(browser.params);
    const navigation = new Navigation(browser);
    const credentials = browser.params.credentials;

    this.registerHandler('BeforeFeatures', () => {
        api.authenticate(credentials);
        browser.api = api;
        browser.navigation = navigation;
        browser.get(browser.baseUrl);
    });
    this.registerHandler('AfterFeatures', () => {
        browser.getProcessedConfig().then((config) => {
            const jsonFile = config.capabilities.cucumberReportPath;
            browser.getCapabilities().then((capabilities) => {
                const reportName = createReportName(config.capabilities, capabilities);
                const htmlFile = jsonFile.replace('.json', '.html');
                const options = {
                    name: `Automation (${reportName})`,
                    theme: 'bootstrap',
                    jsonFile,
                    output: htmlFile,
                    reportSuiteAsScenarios: true,
                };
                reporter.generate(options);
            });
        });
    });
    function createReportName(configCapabilities, browserCapabilities) {
        const browserName = configCapabilities.browserName;
        let deviceName = 'desktop';
        let browserVersion = '';
        if (!_.isUndefined(configCapabilities.chromeOptions)) {
            browserVersion = ` ${browserCapabilities.get('version')}`;
            if (!_.isUndefined(configCapabilities.chromeOptions.mobileEmulation)) {
                deviceName = configCapabilities.chromeOptions.mobileEmulation.deviceName;
            }
        }
        return `${browserName} ${deviceName}${browserVersion}`;
    }
};

module.exports = Hooks;
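To see what createReportName produces without spinning up a browser, here is a standalone sketch with stubbed inputs (the `get('version')` accessor mimics Selenium's Capabilities API; the browser version and device name are invented values):

```javascript
// Minimal lodash stand-in so the sketch is self-contained.
const _ = { isUndefined: v => v === undefined };

// Same logic as the createReportName function in the Hooks class above.
function createReportName(configCapabilities, browserCapabilities) {
  const browserName = configCapabilities.browserName;
  let deviceName = 'desktop';
  let browserVersion = '';
  if (!_.isUndefined(configCapabilities.chromeOptions)) {
    browserVersion = ` ${browserCapabilities.get('version')}`;
    if (!_.isUndefined(configCapabilities.chromeOptions.mobileEmulation)) {
      deviceName = configCapabilities.chromeOptions.mobileEmulation.deviceName;
    }
  }
  return `${browserName} ${deviceName}${browserVersion}`;
}

// Stubbed Selenium-style capabilities object with a get() accessor.
const caps = { get: key => ({ version: '62.0' }[key]) };

console.log(createReportName({ browserName: 'chrome', chromeOptions: {} }, caps));
// chrome desktop 62.0
console.log(createReportName(
  { browserName: 'chrome', chromeOptions: { mobileEmulation: { deviceName: 'Nexus 5' } } },
  caps));
// chrome Nexus 5 62.0
```

So a desktop Chrome run and a mobile-emulated run of the same suite end up with clearly distinguishable report names.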
With these functions in place, creating the report for multiple browsers and adding the version and the device name (desktop or mobile) to the report name, you can attach pictures when a scenario fails:
this.After((scenario) => {
    if (scenario.isFailed()) {
        return browser.takeScreenshot()
            .then(screenshot => scenario.attach(Buffer.from(screenshot, 'base64'), 'image/png'));
    }
});
 
You can always customise and improve these functions: if you want a picture after every scenario, even when they pass, just remove the condition in the After hook. There are also different themes and other options available at https://www.npmjs.com/package/cucumber-html-reporter
See you guys!