
Archive for the ‘Test and Automation Stuffs’ Category

TestCast – QA market around the world

Hello everyone,

I participated in a podcast called TestCast 😀

We talked about our different experiences in the QA market around the world. Sadly, the podcast is 100% in Portuguese, with no transcription.

Avoiding false positives with Protractor and JavaScript

Do you get false positives when automating with Protractor?

Asynchronous code can be really annoying sometimes: callback pyramids, error handling on every second line, promises swallowing errors by default…

When using ES6 promises, any exception thrown within a then or a catch handler will be silently swallowed unless it is handled manually.

If you are not seeing the errors, you are probably writing the promise something like this:

Promise.resolve('promised value').then(function() {
 throw new Error('error'); 
});

The most obvious way to surface the error is to add a catch after the promise:

Promise.resolve('promised value').then(function() {
 throw new Error('error'); 
}).catch(function(error) {
 console.error(error.stack); 
});

Remember that you need to add this after every promise whose errors you want to see. Do you really think this is reliable? Are you going to repeat yourself, adding a catch after every single promise?

So, how can you surface the error without repeating yourself every time you create a promise? You can use .done instead of .then (note that .done is not part of the native ES6 Promise API; it is provided by promise libraries such as Q and Bluebird). It runs the handler and rethrows any unhandled error instead of swallowing it.

Here is how you can avoid them:

Promise.resolve('promised value').done(function() {
 throw new Error('error'); 
});

But what do you do when you chain multiple thens?

There is a library called Bluebird which fixes this by extending the ES6 Promise API, so it integrates with your existing code. With its global rejection handler you can make sure you know about every unhandled rejection, so no more false positives:

Promise.onPossiblyUnhandledRejection(function(error){
 throw error; 
});
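For example, here is a minimal sketch (assuming Bluebird is installed, e.g. via npm install bluebird) of a chain with multiple thens, where the error thrown in the middle now reaches the handler even though no catch was ever written:

var Promise = require('bluebird');

// rethrow every rejection that no catch ever handles,
// so the test run fails loudly instead of passing silently
Promise.onPossiblyUnhandledRejection(function(error) {
 throw error;
});

Promise.resolve('promised value').then(function() {
 throw new Error('error'); // thrown in the middle of the chain
}).then(function() {
 // never reached, and no catch follows
});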

Then, if you want to intentionally discard an exception, not because it has been swallowed but because you don't need to print it, you can do something like this:

Promise.reject('error value').catch(function() {
});

 

Bad practices when writing your BDD scenarios

I usually write about best practices to follow when writing your BDD scenarios. This time I will do something different and share some examples I have found of how not to write your BDD scenarios.

Example 1 – Too many steps:

  Scenario: Valid data entered
    Given a user exists
    When I visit the details access page
    And I fill in email with "test@email.com"
    And I select "read only" from "Permissions"
    And I fill in "Valid until" with "2010-03-01"
    And I press "Grant access"
    Then I should see "Access granted till 2010-03-01."
    And I should be on the details page

 

Example 2 – Dependency on UI elements:

   Scenario: Adding a picture 
     Given I go to the Home page 
     When I click the Add picture button 
     And I click on the drop down "Gallery"
     And I click on the first image
     Then I should see the image added on the home page

 

Example 3 – Excessive use of tables:

   Scenario: Adding a new data user 
     Given I am on  user details page
     When I select an existent user
     And I send some new user data
     |name |age|country|language|address |telephone|
     |James|20 |england|english |street 1|123456789|
     |Chris|30 |spain  |spanish |street 2|987654321|
     Then I should see the correct previous registered data
     |gender  |married|smoke|
     |male    |true   |true |
     |male    |false  |false|

 

Example 4 – Code and data structure:

   Scenario: Include attachment
     Given I go to the Attachments page 
     When I click the Add attachment button with css "#attachment-button"
     And I click on the first csv file with class ".file .csv"
     Then I should see the file attached with the id ".attachment"

 

To avoid these pitfalls:

  • Write declarative scenarios (see the rewritten scenario after this list).
  • Write at a higher level of abstraction instead of being concerned with clicking widgets on a page. Surfacing UI concerns in a feature makes it fragile to ordinary design changes.
  • Try to stick to no more than 5 steps per scenario. It is easier to read.
  • Avoid code and data structures such as XPath selectors, CSS classes and regexes in your scenarios.
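As an illustration, here is a minimal sketch of how Example 1 could be rewritten declaratively (the step wording is only illustrative; use whatever fits your domain language):

  Scenario: Valid data entered
    Given a user exists
    When I grant the user read-only access until "2010-03-01"
    Then I should see that access is granted until "2010-03-01"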

 

Test Reporting in Agile

When working in an Agile team, the feedback loop should be as quick as possible. If you don't send feedback at the right time, bugs become costly to fix, because by then the feature already has a large amount of code behind it.

What is this feedback loop?

If you have implemented continuous integration and have automated tests running after every new commit on your dev environment, you need to report the result of these tests back as soon as possible. This is the feedback loop, and you need to know the right time to report an issue to the dev team.

If your automation takes too long to run after a new commit, that is a sign that you need to improve your smoke tests: maybe their scope is too broad, or maybe the automation is slow for other reasons (hard-coded sleeps, tests that don't scale, etc.).

The feedback loop influences how your agile process works and whether you are saving or wasting time during development. Tight feedback loops improve the performance of the team in general, give confidence, save time and avoid costly bug fixes.

Feedback loops are not only about continuous integration; they are about pair programming and unit tests as well, but this time we will focus on continuous integration tests.

When you are implementing a new scenario in your automated tests, you want to know as soon as possible if something you implemented is breaking that scenario or another one. It is the same situation when you are developing something related to that feature and want to know whether the new implementation breaks the tests. It is easier to fix while it is fresh in your mind; you don't want to wait 30 minutes to find out there is a bug just because you changed the name of a variable…

In my personal opinion, if you can't run tests in parallel to check multiple browsers or mobile devices at the same time, it is better to focus on the most used browser/device, since that is the first priority in every case.

Use case: 90% of users are on desktop Chrome, 5% are on mobile Firefox and 5% are on mobile Safari. What is the best strategy?

After commit:

  1. Run smoke tests on all the browsers and take 15 minutes to receive feedback?
  2. Run regression tests on all the browsers and take 40 minutes to receive feedback?
  3. Run smoke tests only on the most used browser, take 5 minutes to receive feedback, and leave a job running them on all the browsers every hour?
  4. Run regression tests only on the most used browser, take 10 minutes to receive feedback, and leave a job running them on all the browsers every hour?

There is no fixed rule to follow. Since in this case you don't have parallel tests, I would go for the third option: you can focus on the most used browser and leave the other browsers running in a dedicated job each hour. Why not the fourth option? Because you need to keep the business value in mind.

Of course we need to deliver the feature on all the browsers in use, but when time is tight (which is very often) and you need to deliver as fast as you can, you go for the option with the most business value and cover the other browsers afterwards. Don't forget that when you automate, you are not only helping development; you are also helping the end users.

If you are wondering how long each type of test should take to give feedback, you can build your own process based on this graph:

[Graph: how long each type of test should take to give feedback]

For how long should the team keep the test reports?

It depends on how many times you run the tests during the day. There is no rule for that, so you need to find the best option for you and your team. In my team we keep the last 15 runs on Jenkins, and after that we discard the report and the logs. In most cases, I've found that if something goes back more than 3 major versions, digging for more history is a waste of time.

If regressions are reported as soon as they're observed, the report should include the first known failing build and the last known good build. Ideally these are consecutive, but that isn't necessarily the case. Some people like to archive the old reports outside Jenkins; I haven't felt the need for this so far, but it's up to you whether to keep these reports outside Jenkins.

 

Resources:

http://istqbexamcertification.com/why-is-early-and-frequent-feedback-in-agile-methodology-important/

https://www.infoq.com/news/2011/03/agile-feedback-loops

How to make automation tests more effective

As a QA engineer you often need to run regression tests, and you know this is a real waste of time since it is the same test every time. So you keep in mind that you need to automate the regression tests in order to save time checking the old functionality that could be broken by the introduction of new features.

The development of the automation is usually done by developers in test, people who have both QA and programming knowledge. One skill complements the other, so you know how to automate the right scenarios with the right prioritisation. Don't spend time automating scenarios with little or no value, or with the wrong priority for the regression pack.

You need to include the repetitive scenarios, the critical ones and the happy path, but try to avoid the edge cases. Why? Because edge cases are often scenarios with minimal or no value: a specific flow that should be tested once when developing the feature, not every time you run the regression. The regression pack ensures the happy path is working and that the end user will be able to do what they need to do. When you spend time implementing automated edge cases, you are wasting your valuable time on scenarios without real business value.

Although the product owners may be able to suggest points to automate straight away, it also depends on the developers working on the detailed code. For this reason you need a good analysis beforehand of which scenarios should be implemented and whether they will change very often.

Here are some tips on how to make your automation tests more effective:

 

Developing Automation

Automation code is quite similar to development code, but it should be simpler and more readable. Automation is not meant to be complex. The business rules should be expressed as part of the BDD scenarios, so the code stays clean and doesn't accumulate pockets of complexity.

ROI (return on investment): you need to guarantee the value of the scenarios you automate. For example, a scenario that tests the function of a feature is far more important and valuable than a scenario that checks whether the buttons have the expected colour. Layout is important, but you will spend more time implementing a scenario that asserts the colour than opening the page and checking it manually in a moment; it is also not a critical issue, because the user will still be able to finish the flow regardless of the colour of the button. Measure the time spent before and after the automation is implemented, so you have an idea of the time and effort saved.

Optimizing time

We have a common problem in agile environments: we rush to finish the sprint and forget about quality. We close the tickets, but we create a backlog of bugs each time, and it keeps growing every sprint. This growing backlog makes it difficult to devote enough time to the development, debugging and testing of each iteration.

For this reason it is really important to reserve a good amount of time for testing. To help save time, you can run the automation in parallel with the development, so any regression bug is caught as soon as the code is merged into the master branch. This gives the QA engineers more room to develop efficient tests through exploratory testing.

Client Test Coverage

The ideas that come out of brainstorming help testers identify different scenarios and create better test coverage for the feature. So you need this time to let the idea of the feature mature and to think about the flow and the possibilities.

It is important to think more broadly about test automation and not only about the test cases. Planning and brainstorming can lead to breakthroughs that change the testing approach altogether.

Regression Pack

When you implement automated regression tests, you need to keep them well maintained alongside the development of the features. If not, your regression pack will be out of date and you will no longer be able to trust your automation. Make sure your regression pack guarantees the functionality of the system, and monitor the tests so that as soon as you have a failure you can identify whether it was a bug in the system or something you need to update to match the current development code.

Regression tests should run smoothly and without human intervention. The only thing you need to worry about is adding new features to the pack.

Visibility

As I have described before, you need to keep it simple. This is the key to smooth automation. You need to be sure the stakeholders understand what is being tested. Show statistics: how long the regression pack takes to run, how much time you are saving, the test coverage versus time before and after automation, and the overall quality improvement.

Sharing this data builds a positive view of automation and shows how much the automated tests have improved things. It also makes it easier to keep the test scripts frequently updated and encourages collaboration across the teams.

Stay well aligned with Developers

It is essential to be aligned with the development work. Understand the whole flow and how a change in one area could impact something completely different. This will help you anticipate and stay one step ahead when maintaining the scenarios. It also helps when all the teams work in the same environment, using tools as similar as possible.

Understand the functionality of the current environment so you can perform root-cause analysis that yields constructive solutions. This will help you find bugs more efficiently and build your automation around your actual environment. Remember that you need to align these practices with your project and the current development cycle. Companies, projects and teams are not equal and there is no formula; these are just tips on how you can take the best from them for your situation.

 

Webinar: JMeter Pipeline – Improving performance

Hi guys, just sharing a cool webinar that I watched this week about performance tests; what I found really interesting is the demo with Taurus and JMeter.

The performance tests written with Taurus look much more readable and simpler than the ones created in JMeter.
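Just to give an idea of the difference, here is a minimal sketch of a Taurus YAML scenario (the URL and load numbers are only illustrative); the equivalent test in JMeter would be a JMX file built through the GUI:

execution:
- concurrency: 10      # virtual users
  ramp-up: 1m
  hold-for: 5m
  scenario: quick-test

scenarios:
  quick-test:
    requests:
    - http://example.com/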

Common questions about Performance Tests

February 25, 2017

 

When do I need to create a performance test?

Performance testing is done to validate the behavior of the system under various load conditions. It lets you reproduce many users performing the desired operations, so that customers, testers, developers, DBAs and the network management team can check the behavior of the system. It requires a test environment that is close to production and enough hardware to generate the load.

What does the performance testing process involve?

    • Right testing environment: Figure out the physical test environment before carrying out performance testing, including the hardware, software and network configuration
    • Identify the performance acceptance criteria: These contain the constraints and goals for throughput, response times and resource allocation
    • Plan and design performance tests: Define how usage is likely to vary among end users, and find key scenarios to test for all possible use cases
    • Configure the test environment: Before the execution, prepare the testing environment and arrange tools, other resources, etc.
    • Implement the test design: Create the performance tests according to your test design
    • Run the tests: Execute and monitor the tests
    • Analyze, tune and retest: Analyze, consolidate and share test results. After that, fine-tune and test again to see if there is any improvement in performance. Stop the test if the CPU is causing a bottleneck.

What parameters should I consider for performance testing?

    • Memory usage
    • Processor usage
    • Bandwidth
    • Memory pages
    • Network output queue length
    • Response time
    • CPU interrupts per second
    • Committed memory
    • Thread counts
    • Top waits, etc.

What are the different types of performance testing?

    • Load testing
    • Stress testing
    • Endurance testing
    • Spike testing
    • Volume testing
    • Scalability testing

Endurance vs Spike

    • Endurance testing: a type of performance testing conducted to evaluate the behavior of the system when a significant workload is applied continuously over a long period
    • Spike testing: a type of performance testing performed to analyze the behavior of the system when the load is increased suddenly and substantially

How can you execute spike testing in JMeter?

In JMeter, spike testing can be done using the Synchronizing Timer. Threads are held by the timer until a specific number of them have been blocked, and then they are released at once, creating a large instantaneous load.

What are concurrent user hits in load testing?

In load testing, when multiple users hit the same event of the application under test at the same moment, with no time difference between them, it is called a concurrent user hit.

What are the common mistakes made in performance testing?

    • Jumping directly to multi-user tests
    • Not validating test results
    • Unknown workload details
    • Run durations that are too short
    • No long-duration sustainability test
    • Confusion about the definition of concurrent users
    • Data not populated sufficiently
    • Significant differences between the test and production environments
    • Network bandwidth not simulated
    • Underestimating performance testing schedules
    • Incorrect extrapolation of pilot results
    • Inappropriate baselining of configurations

What is throughput in performance testing?

In performance testing, throughput refers to the amount of work the server handles in response to client requests in a given period of time. It is measured in terms of requests per second, calls per day, reports per year, hits per second, etc. The performance of an application depends on the throughput value: the higher the throughput, the higher the performance of the application. For example, a server that answers 6,000 requests in 5 minutes has a throughput of 20 requests per second.

What are the common performance bottlenecks?

    • CPU utilization
    • Memory utilization
    • Network utilization
    • Operating system (OS) limitations
    • Disk usage

What are the common performance problems users face?

    • Longer loading time
    • Poor response time
    • Poor Scalability
    • Bottlenecking (coding errors or hardware issues)