How to make automation tests more effective

As a QA engineer you often need to run regression tests, and you know this is a real waste of time, since it is the same test every time. So you keep in mind that you need to automate the regression tests, in order to save time checking the old features that could be broken by the introduction of new ones.

The automation is usually developed by developers in test, people who have both QA and programming knowledge. One skill complements the other, so you know how to automate the right scenarios with the right prioritisation. Don’t spend time automating scenarios with little or no value, or with the wrong prioritisation, for the regression pack.

You need to include the repetitive scenarios, the critical ones and the happy path, but try to avoid the edge cases. Why? Because edge cases are often scenarios with minimal or no value: they cover a specific flow that should be tested once, when the feature is developed, not every time you run the regression. The regression pack ensures the happy path is working and that the end user will be able to do what they need to do. When you spend time automating edge cases, you are wasting valuable time implementing scenarios without real business value.

Although the product owners may be able to immediately suggest points to automate, it also depends on the developers working on the detailed code. For this reason you need a good analysis beforehand of which scenarios should be implemented and whether they will change very often.

Here are some tips on how to create your automation tests more effectively:

 

Developing Automation

Automation code is quite similar to development code, but it should be simpler and more readable. Automation is not meant to be complex. The business rules should be expressed in the BDD scenarios, so the code stays clean and doesn’t accumulate pockets of complexity.

ROI (Return on Investment): you need to guarantee the value of the scenarios you automate. For example, a scenario that tests the function of a feature is far more important and valuable than a scenario that tests whether the buttons have the expected colour. Layouts are important, but you will spend more time implementing a scenario asserting the colour than it would take to open the page and check it manually in seconds; besides, it is not a critical issue, since the user can finish the scenario regardless of the colour of the button. Measure the time before and after the automation is implemented, so you can have an idea of the time and effort saved.
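As a rough sketch of that before/after measurement (the helper and the numbers below are invented for illustration, not taken from any real project):

```javascript
// Hypothetical ROI sketch: minutes saved per sprint by automating a check.
function minutesSaved(manualMinutes, automatedMinutes, runsPerSprint) {
  return (manualMinutes - automatedMinutes) * runsPerSprint;
}

// e.g. a 30-minute manual check that the suite now covers in 2 minutes,
// run 10 times per sprint:
console.log(minutesSaved(30, 2, 10)); // 280
```

Even a back-of-the-envelope number like this makes the value of a scenario (or the lack of it) visible when you decide what to automate.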

Optimizing time

We have a common problem in agile environments: we rush to finish the sprint and forget quality. We close the tickets, but we create a backlog of bugs every time, and it keeps growing sprint after sprint. These bug backlogs make it difficult to devote time to the development, debugging and testing of each iteration.

For this reason it is really important to reserve a good amount of time for testing. To help save testing time, you can run the automation in parallel with the development, so any regression bug is caught as soon as the code is merged into the master branch. This gives QA engineers more scope to develop efficient tests through exploratory testing.

Client Test Coverage

The ideas that come just after brainstorming help testers identify different scenarios and create better test coverage for the feature. So you need this time to mature the idea of the feature and think through the flow and its possibilities.

It is important to think more broadly when talking about test automation, and not only about the test cases. Planning and brainstorming can lead to breakthroughs that change the testing pattern altogether.

Regression Pack

When you implement automated regression tests, you need to keep them well maintained alongside the development of the features. If not, your regression pack will be out of date and you will no longer be able to trust your automation. Make sure your regression pack guarantees the functionality of the system, and monitor the performance of the tests so that, as soon as something fails, you can identify whether it was a bug in the system or something you need to update to match the current development code.

Regression tests should run smoothly and without human intervention. The only thing you need to worry about is adding new features to the pack.

Visibility

As I have described before, you need to keep it simple. This is the key to smooth automation. You need to be sure the stakeholders understand what is being tested. Show the statistics: how long the regression pack takes to run, how much time you are saving, the test coverage percentage and the time before and after automation, and the overall quality improvement.

Sharing this data promotes a positive attitude towards automation and shows how much you have improved by automating the tests. It also makes it simpler to update test scripts frequently, and encourages collaboration across the team.

Stay well aligned with Developers

It is essential to be aligned with the development work. Understand the whole flow, and how something the developers have changed could impact a completely different area, for example. This will help you to anticipate changes and stay one step ahead when maintaining the scenarios. It is also good for all teams to work in the same environment, using the most similar tools whenever possible.

Understand the functionality of the current environment, so you can perform root-cause analysis that yields constructive solutions. This will help you find bugs more efficiently and build your automation around your actual environment. Remember to align your needs with your project and the current development cycle. Companies, projects and teams are not equal and there is no formula, only tips on how to take the best from your situation.

 

Webinar – Quality Metrics (Sealights)

Hey guys, today I am posting this quick webinar with some QA metrics to use in your project.

A really simple and good presentation about integration/code-coverage percentages and other metrics.

 


Webinar Jmeter Pipeline – Improving the performance

Hi guys, just sharing a cool webinar I watched this week about performance tests; what I found really interesting is the demo with Taurus and JMeter.

The performance tests with Taurus look much more readable and simpler than when created directly in JMeter.
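For comparison, this is roughly what a small load test looks like in Taurus (a minimal sketch; the URL and the numbers are placeholders, not taken from the webinar):

```yaml
execution:
- concurrency: 50     # virtual users
  ramp-up: 1m         # time to reach full concurrency
  hold-for: 5m        # time to sustain the load
  scenario: quick-test

scenarios:
  quick-test:
    requests:
    - http://example.com/
```

Taurus runs this YAML on top of JMeter by default, generating the JMX test plan for you, which is a big part of why it reads so much better.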

Common questions about Performance Tests

February 25, 2017

 

When do I need to create a Performance Test?

Performance testing is done to validate the behavior of the system under various load conditions. It lets you reproduce the operations of many concurrent users, so that customers, testers, developers, DBAs and the network management team can check the behavior of the system. It requires a test environment close to production, and enough hardware to generate the load.

What is involved in the Performance Testing process?

    • Set up the right testing environment: figure out the physical test environment before carrying out performance testing, including hardware, software and network configuration
    • Identify the performance acceptance criteria: these include constraints and goals for throughput, response times and resource allocation
    • Plan and design the performance tests: define how usage is likely to vary among end users, and find key scenarios to test for all possible use cases
    • Configure the test environment: before the execution, prepare the testing environment and arrange tools, other resources, etc.
    • Implement the test design: create the performance tests according to your test design
    • Run the tests: execute and monitor them
    • Analyze, tune and retest: analyze, consolidate and share the test results. After that, fine-tune and test again to see whether there is any improvement in performance. Stop when the CPU becomes the bottleneck.

What parameters should I consider for performance testing?

    • Memory usage
    • Processor usage
    • Bandwidth
    • Memory pages
    • Network output queue length
    • Response time
    • CPU interruption per second
    • Committed memory
    • Thread counts
    • Top waits, etc.

What are the different types of performance testing?

    • Load testing
    • Stress testing
    • Endurance testing
    • Spike testing
    • Volume testing
    • Scalability testing

Endurance vs Spike

    • Endurance Testing: a type of performance testing conducted to evaluate the behavior of the system when a significant workload is applied continuously over a long period
    • Spike Testing: a type of performance testing performed to analyze the behavior of the system when the load is increased substantially in a very short time.

How can you execute spike testing in JMeter?

In JMeter, spike testing can be done using the Synchronizing Timer. Threads are held by the timer until a specific number of threads have been blocked, and are then released all at once, creating a large instantaneous load.
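The idea can be illustrated outside JMeter too. This plain-JavaScript sketch (function names invented for illustration) holds all the "users" back and releases them in a single burst:

```javascript
// Hypothetical sketch of the Synchronizing Timer idea: gather N users,
// then start all of their actions in the same tick, mimicking the
// simultaneous release that creates an instantaneous load.
function spike(userCount, action) {
  const users = Array.from({ length: userCount }, (_, i) => i);
  return Promise.all(users.map(function (i) { return action(i); }));
}

spike(100, function (i) { return Promise.resolve(i); })
  .then(function (results) { console.log(results.length); }); // 100
```

In a real spike test the `action` would be an HTTP request against the system under test rather than a resolved promise.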

What are concurrent user hits in load testing?

In load testing, when multiple users hit the same event of the application under test at the same moment, without any time difference, this is called a concurrent user hit.

What are the common mistakes done in Performance Testing?

    • Direct jump to multi-user tests
    • Test results not validated
    • Unknown workload details
    • Too small run durations
    • Lacking long duration sustainability test
    • Confusion on definition of concurrent users
    • Data not populated sufficiently
    • Significant difference between test and production environment
    • Network bandwidth not simulated
    • Underestimating performance testing schedules
    • Incorrect extrapolation of pilots
    • Inappropriate base-lining of configurations

What is the throughput in Performance Testing?

In performance testing, throughput refers to the amount of data transported to the server in response to client requests in a given period of time. It is measured in requests per second, calls per day, reports per year, hits per second, etc. Application performance depends on the throughput value: the higher the throughput, the higher the performance of the application.
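As a worked example (the numbers are invented), throughput in requests per second is simply completed requests divided by elapsed time:

```javascript
// Throughput = work completed per unit of time (here: requests per second).
function throughputRps(totalRequests, durationSeconds) {
  return totalRequests / durationSeconds;
}

// 6000 requests served over a 2-minute test window:
console.log(throughputRps(6000, 120)); // 50
```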

What are the common performance bottlenecks?

    • CPU Utilization
    • Memory Utilization
    • Networking Utilization
    • OS limitation
    • Disk Usage

What are the common performance problems users face?

    • Longer loading time
    • Poor response time
    • Poor Scalability
    • Bottlenecking (coding errors or hardware issues)

Passing a function as parameter [Protractor + Javascript]

February 18, 2017

If you look for how to pass a function as a parameter in JavaScript, you will find solutions like this:

bar(function(){ foo("Hello World!") });

This week I learned how to use bind, which I find more readable than the method above.

The bind structure is:

function.bind(thisArg,arg1,arg2,...)

thisArg – sets the value of “this” to a specific object. This is very helpful, as sometimes this is not what is intended.

arg1, arg2 – a list of values that will be prepended to the arguments of any call to the wrapped function.
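Outside of Protractor, plain JavaScript shows the same idea; passing null as thisArg leaves the context untouched while pre-filling the first argument:

```javascript
function greet(greeting, name) {
  return greeting + ', ' + name + '!';
}

// Pre-fill "greeting"; "name" is still supplied by the caller.
var greetHello = greet.bind(null, 'Hello');

console.log(greetHello('World')); // Hello, World!
```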

So, in assertFirst I call a function, passing another function as a parameter.

assertFirst: function() {
    return this.assertion(consumer.assertThatConsumerIsValid);
}

And in assertSecond I call a function and pass a function with bound parameters, ignoring the context.

assertSecond: function(element) {
    return this.assertion(consumer.assertThatConsumerIsDisplayed.bind(null, element));
}

Then I receive the function as a parameter and call it inside assertion.

assertion: function(assert) {
    var consumers = browser.model.getConsumers();
    var promises = [];
    for (var i = 0; i < consumers.length; i++) {
        browser.navigation.goToDetailsConsumer(i);
        promises.push(assert());
    }
    return Q.all(promises);
}

 

Basically, I am calling this assertion function to go to each consumer page and assert that the consumer is valid and has the expected element.

How to test angular and non angular pages with protractor

February 13, 2017

As you know, Protractor is known as the most compatible automation framework for Angular sites, since it waits for Angular to finish its work and you don’t need to use explicit waits. But what if you have an Angular site that has some non-Angular pages? How do you proceed?

 

Protractor provides the means to test AngularJS and non-AngularJS pages out of the box. However, the DSL is not the same for Angular and non-Angular sites.

AngularJS:

element(by.model('details'))

The element keyword is exposed globally, so you can use it in any JS file without needing to require it. Check in your runner.js that you are exporting all the necessary keywords.

// Export protractor to the global namespace to be used in tests.
    global.protractor = protractor;
    global.browser = browser;
    global.$ = browser.$;
    global.$$ = browser.$$;
    global.element = browser.element;

 

Non-AngularJS: you may access the wrapped WebDriver instance directly by using browser.driver.

browser.driver.findElement(by.css('.details'))

You can also create an alias for browser.driver. This will allow you to use elem.findElement(by.css('.details')) instead of typing browser.driver every time. For example:

onPrepare: function() {
    global.elem = browser.driver;
}

So, how can you use the same DSL for non-Angular and Angular pages? You will need to ignore synchronization; keep in mind that once you set this value, it applies to the entire suite. This will allow you to use the same DSL everywhere.

onPrepare:function(){
   global.isAngularSite = function(flag) {
     browser.ignoreSynchronization = !flag;
   };
}

You can add a Before hook for each Angular/non-Angular scenario; you just need to tag each scenario indicating which ones run on an Angular page, for example:

this.Before({tags: ['~@angular']}, function(features, callback) {
    isAngularSite(false);
    callback();
});

this.Before({tags: ['@angular']}, function(features, callback) {
    isAngularSite(true);
    callback();
});

 

Hope this helps you guys!

Mobile Automation Strategy

January 29, 2017

Critical scenarios

First of all, you need to build a set of the most critical/important scenarios. Create a smoke test suite with the critical basic features and divide it into phases. Also remember to add the most frequent scenarios, those that are used on a daily basis.

 

Continuous integration

Implement your continuous integration from the beginning, so you can see when a scenario breaks and whether you have false positives. You need to trust your automation; for this reason, in the beginning you will need to pair the manual tests with the automation until you have confidence in your tests.

 

Devices

It is impossible to make your tests run on every device in existence. What you need to do is gather information about the most used devices for your app. Exactly: this needs to follow your app and your users. If you don’t have this data and there is no possibility of getting it, then you can follow the most used devices in general. Focus on your app and your client in the first place.

In this category we can include the different OS’s, screen resolutions, etc.

 

Network

Mobiles are tricky because you need to test the network, so you will need specific scenarios to simulate 3G, 4G and WiFi. Remember to verify the expected behaviour with a poor connection, and when the connection drops and comes back again.

 

Language (Localisation Testings)

If you have a multi-language app, you also need to worry about translation.

  1. You can add the language checks after all the smoke tests are done, since this is easier and faster to test manually.
  2. You can add a specific scenario to go through all the pages and check the translations against the database.
  3. You can configure your automation to run each time with a different language, adding the checks along the scenarios.

My suggestion is to go for a specific scenario that visits all the main pages and checks the translations (option 2). If you go with option 3, remember that your automation will take longer, since it performs all the scenarios again in each language, when a simple assertion on the page, without any functionality check, would be enough.
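A minimal sketch of the check behind option 2 (the helper name and sample data below are invented for illustration): collect the strings a page displays and flag any that are missing from the translation table.

```javascript
// Hypothetical helper: return the page strings that are not present
// in the translation table for the language under test.
function missingTranslations(pageStrings, translations) {
  var known = Object.keys(translations).map(function (k) { return translations[k]; });
  return pageStrings.filter(function (s) { return known.indexOf(s) === -1; });
}

var french = { welcome: 'Bienvenue', about: 'À propos' }; // invented sample data
console.log(missingTranslations(['Bienvenue', 'Sign up'], french)); // [ 'Sign up' ]
```

In a real scenario the page strings would come from the automation reading each main page, and the table from the database mentioned above.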

 

Screen Orientation

On mobile you can have portrait or landscape, so remember to add scenarios related to orientation. You can run the tests in both orientations, setting the orientation at the beginning of the automation, or you can have a specific scenario to test the orientation on the main screens.

 

Emulators vs Real Devices

Another aspect for which “balance” is a good mantra is testing on real devices vs. emulators. Emulators can’t give you the pixel-perfect resolution you might need for some testing or allow you to see how your app functions in conjunction with the quirks of real-life phone hardware. But they do allow you to do cost-efficient testing at scale and are a powerful tool to have in your mobile testing arsenal. A healthy mix of real device and simulator/emulator testing can give you the test coverage you need at a reasonable price.

 

Be sure you are leaving room for growth, both of the marketplace and of your own needs. Always choose the best tools and practices that fit your needs, but at the same time think about what is coming in the future. Expand your automation with what could come next in mind, and minimize the threat of having to spend more time and resources redoing or revising systems that are out of date. Always choose flexibility: cross-platform testing tools and scalable third-party infrastructure are good examples of how to keep it.
