5 Things to Avoid in Test Automation

No matter how hard we try to improve our test framework and scripts, a few recurring problems keep holding them back. Our quest for a perfect, reliable, fast and maintainable automation suite often ends in hours of frustration. Here we discuss a few things to avoid in test automation, to make our scripts more reliable and, most importantly, to strengthen the team’s trust in the automation scripts.

Not Testing your Test Cases

We test the developers’ code, but who tests our code?

All programs are prone to bugs, but the one program where a bug is not acceptable is the test script itself.

Making a mistake in your assertions or other checks defeats the purpose of your test. To avoid this, after creating each test method, I modify the expected value or some other setting (in code or manually) to make the test fail and confirm that the test actually works. In some cases, an existing bug may already fail the test case you are automating; in that case, verify that the test will pass once the bug is fixed.
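For illustration, here is the kind of sanity check I mean, written with TestNG assertions (the page title and helper method are made-up stand-ins):

```java
import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginPageTest {

    @Test
    public void pageTitleIsCorrect() {
        String actualTitle = fetchPageTitle(); // stand-in for driver.getTitle()
        // Sanity check: temporarily change "Dashboard" to a wrong value and
        // rerun. If the test still passes, the assertion is not doing its job.
        Assert.assertEquals(actualTitle, "Dashboard", "Page title mismatch");
    }

    // Hypothetical helper; in a real test this would come from WebDriver.
    private String fetchPageTitle() {
        return "Dashboard";
    }
}
```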

Extra Info: For tests that fail due to existing bugs, I always add the bug number to the assertion message. This saves time during result analysis.
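For example (the bug ID and values are placeholders):

```java
// The bug ID appears directly in the failure report, so nobody has to
// re-investigate a known issue during result analysis.
Assert.assertEquals(actualTotal, expectedTotal,
        "BUG-1234: cart total ignores discount - fails until the fix is deployed");
```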

Flaky Tests

I’m sure the majority of automation testers have wasted hours of their lives on flaky tests. The fact that they pass or fail seemingly at random makes it almost impossible to debug them and find the root cause.

We have to rely on good logging to find and resolve the issue. In Selenium, flakiness is usually due to a delay in loading or a state change of a web element; use WebDriverWait in such situations (check this post on the WebDriver API to find out more). If it’s an environment-related issue, try to detect it beforehand and report a clearer failure reason.
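A minimal sketch of an explicit wait (Selenium 4 style constructor; older versions take the timeout as a plain number of seconds, and the element ID here is hypothetical):

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {

    // Waits up to 10 seconds for the button to become clickable instead of
    // clicking blindly and failing while the page is still loading.
    static void clickWhenReady(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        WebElement submit = wait.until(
                ExpectedConditions.elementToBeClickable(By.id("submit")));
        submit.click();
    }
}
```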

Avoiding unreliable results is of primary importance in any automation suite, as flaky results destroy your team’s trust in your scripts and in automation in general.

Considerable Result Analysis Time

You have a fast and reliable automation framework, but once the results are in, you spend hours or even days analyzing them to separate real bugs from failures caused by script issues. Sound familiar?

The reporting section of an automation framework is as important as (or even more important than) its other components. In Agile projects with short sprint cycles, it’s really important to complete the analysis as soon as possible.

One method I have tried is automatically sorting errors into categories. This is comparatively easy in web service testing, since the error messages come from a known set of expected messages. Implement a means to automatically classify each failure into an expected category, such as authentication issue, data issue or environment issue.
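Here is a minimal sketch of such a categorizer; the keywords and category names are assumptions you would replace with the messages your own services actually return:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FailureCategorizer {

    // Keyword-to-category map. These keywords are placeholders; fill the map
    // with the error messages your own services actually return.
    private static final Map<String, String> CATEGORIES = new LinkedHashMap<>();
    static {
        CATEGORIES.put("401", "Authentication issue");
        CATEGORIES.put("invalid credentials", "Authentication issue");
        CATEGORIES.put("no matching record", "Data issue");
        CATEGORIES.put("connection refused", "Environment issue");
        CATEGORIES.put("timed out", "Environment issue");
    }

    // Returns the first matching category, or a marker for manual analysis.
    public static String categorize(String errorMessage) {
        String message = errorMessage == null ? "" : errorMessage.toLowerCase();
        for (Map.Entry<String, String> entry : CATEGORIES.entrySet()) {
            if (message.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return "Uncategorized - manual analysis needed";
    }
}
```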

In WebDriver, it’s a real mess. A test can fail on an assertion (normally due to a bug, so no issues there), but it can also fail elsewhere (which may or may not be a bug, and that is the problem). We can apply a method similar to the one above by differentiating between assertion failures and other failure reasons. For the latter, we have to rely on logs or reported steps to find the issue. In our framework, we use a custom reporting module which logs each operation in the report, helping to identify the point of failure. Combined with a screenshot, this can greatly reduce analysis time.
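Our reporting module itself isn’t shown here, but the sketch below illustrates the general idea: separate assertion failures from other failures and capture a screenshot at the point of failure (the names and console logging are simplified stand-ins):

```java
import java.io.File;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class StepRunner {

    // Runs one test step, tags the failure type in the report and grabs a
    // screenshot at the point of failure so analysis can start from there.
    static void runStep(WebDriver driver, String stepName, Runnable step) {
        try {
            step.run();
            log("PASS", stepName, null);
        } catch (AssertionError e) {
            // Assertion failures normally point to a product bug.
            log("ASSERTION FAILURE (possible bug)", stepName, e.getMessage());
            saveScreenshot(driver, stepName);
            throw e;
        } catch (RuntimeException e) {
            // Everything else (element not found, timeout...) needs log analysis.
            log("SCRIPT/ENVIRONMENT FAILURE", stepName, e.getMessage());
            saveScreenshot(driver, stepName);
            throw e;
        }
    }

    static void saveScreenshot(WebDriver driver, String stepName) {
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        // Persist 'shot' alongside the report; file handling omitted for brevity.
    }

    static void log(String status, String step, String detail) {
        System.out.println(status + " | " + step + (detail == null ? "" : " | " + detail));
    }
}
```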

In addition to these, here are a couple of more obvious things to avoid in test automation.

Don’t automate 100% of your test cases

Unless you are completely sure that each and every inch of the application is covered, don’t go for 100% automation, no matter how strongly your boss argues for it. As testers, we don’t believe anything is ever 100%, so we shouldn’t leave it to chance.

Dependent test cases

In our automation framework (WebDriver) we have a means to automatically rerun failed cases when they failed due to script issues (remember the categorization we talked about in the section above). This goes for a toss when there are dependencies between test cases. Dependencies are something you should avoid if you want a stable and reliable framework.
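As a rough sketch of what such a rerun mechanism can look like with TestNG’s IRetryAnalyzer (our actual implementation differs; the classification rule here is deliberately simplified):

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Reruns a failed test up to two times, but only when the failure does not
// look like a product bug (assertion failures are treated as likely bugs).
public class ScriptIssueRetry implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 2;
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        boolean likelyScriptIssue = !(result.getThrowable() instanceof AssertionError);
        if (likelyScriptIssue && attempts < MAX_RETRIES) {
            attempts++;
            return true; // TestNG re-executes the test method
        }
        return false;
    }
}
```

It is attached per test with @Test(retryAnalyzer = ScriptIssueRetry.class). Note that a rerun only makes sense when each test can run on its own, which is exactly why dependent test cases break it.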

Are there any other things you avoid in your test automation scripts? Let us know in the comments section.

Responses

  1. Pradeep says:

    There can be a few cases where it might be unavoidable to have dependencies between test cases. I have had test cases where multiple test cases shared a similar, time-consuming flow, so I had to put that flow in the first test case and make the second dependent on the first.

  2. Barry Preppernau says:

    Logging results to a DB can help reduce the analysis time considerably. You can query for similar results, etc.

    • axatrikx says:

      Yes, we also follow that approach. Apart from being able to query the results, we can also create various reports and graphs from the historical data.

