Automated Tests Should Be Living Documentation

Wednesday, September 17, 2014

To start off let me clarify that when I talk about an automated test in the title, I mean an automated check.  I’m aware of the difference, but most people I interact with on a daily basis still talk about automated tests, not automated checks.

I often see people talking about adding automated checks to “check that something still works”.  I think that if you are doing this, you are looking at your automated checks in the wrong way.

The Ideal

TDD is a methodology that is prevalent in the modern development landscape. The basic idea is that you write some failing checks in advance (writing these checks first helps to highlight bad design patterns) and then you write the code required to make them pass.  These checks were never really designed to “check” that something still works; they describe a series of desired behaviours that you want your program to exhibit, and the code is then written to make the program work in that way.

Now to me this sounds like documentation: we have a series of desired behaviours, expressed in code, that tell the developer (or the tester, or anybody who cares to look at them) how the program works.
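
As a purely illustrative sketch (the Basket and Item classes and their behaviour below are hypothetical, not taken from any real project), a check written this way reads like a plain statement of how the program should behave:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class BasketTest {

        // This check documents a desired behaviour: adding an item increases the basket total by its price.
        @Test
        public void addingAnItemIncreasesTheBasketTotalByItsPrice() {
            Basket basket = new Basket();
            basket.add(new Item("tea", 250));
            assertEquals(250, basket.totalInPence());
        }
    }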

This documentation may not be perfect: it may be hard to read (think of a cheap electrical product you have bought that had a manual in Chinese instead of English), it may be severely limited (are there more checks you should add?) and in some cases it may even be wrong (we have all seen automated checks that never fail).  These are all very real issues, but they are not insurmountable obstacles.

The important thing to note is that as the program changes, these automated checks change as well; they evolve with the product and continue to explain how it works in its current state, just like any halfway decent set of documentation.

How do we achieve this?  Well, we make sure that the development team (and I do mean team, not just the developers in the team, or the testers in the team) is responsible for the automated checks, that checks are added where appropriate, and that the whole suite is run and passing before code is pushed to the central repository.

This gives us a high level of confidence that things work in the way we want them to work, and a fast feedback loop if they don’t.  It also means that our documentation (the automated checks) is always up to date, because it changes as the code changes.

 

The Sad Reality…

So what is the point of this post?  Well I regularly see people who are not part of the development team writing automated checks after the fact, your classic “Test Automation Team”.  The focus of these teams is generally to write a series of checks that ensure the program still works in an expected way, in other words checks that detect change (you may be used to the term “regression tests”).

The problem with this is that while your developers are working on a product, it’s constantly changing, so the checks that the automation team have written will constantly be detecting this change (or to put it another way, checks will constantly be failing).  In this scenario automated checks are not seen as documentation, they are seen as a validation layer and this (in my mind) is a bad thing!

When there is a failure in the validation layer, defects are raised, time is taken away from the development team to triage non-issues, and the test automation team slowly falls further and further behind as it struggles to keep up with all the changes being made while also adding new checks for new functionality.

While all of this is happening, the checks being run by the test automation team are rarely ever green, because new changes keep coming in that break the test automation build, and we all know what this means: nobody trusts them…

“Oh look the test automation team’s board is red again”

“Don’t worry about it, they probably haven’t updated their checks to deal with our latest change…”

People start to distrust the automated checks written by the test automation team, and this normally results in the test automation team announcing that they need more resources to keep up with the workload, which just exacerbates the problem.

You’re now in a downward spiral of test automation hell, where we are writing more and more tests, things keep failing for no good reason and problems start slipping through the cracks.  People get stressed, the feedback loop is getting longer and longer and things keep breaking.

 

Sound like any projects you have worked on?

Testing doesn’t add value?

Friday, May 2, 2014

I was at the London Tester Gathering last night, where the mystery speaker was Michael Bolton. During his talk he said something that caused me to pause:

Testing is a cost of doing business, testing does not add value.

(I’m paraphrasing as I can’t remember the exact quote)

When it was said I had an alarm bell go off in my head (the sort that you get when you are testing a bit of software and suddenly something doesn’t feel right, so you stop and look around and try and work out what triggered that feeling because you know it’s probably a bug).

So I thought about it on the train on the way home and came to the conclusion that he is wrong, testing does add value.

Testing adds value because testing is not just a reactive discipline, it’s also a proactive discipline!

As a tester I look at code written by developers and find problems with it on a daily basis. I then report these problems so that they can be fixed and when the code goes live I like to think that I help those developers look even more awesome than they already are (Outside of the development community I regularly hear people say “Who wrote this shit” when something doesn’t work. I never hear people say “Who tested this shit”).

I agree that I’m not adding value at this point; if anything I’m adding cost. A worthwhile cost, but a cost nevertheless.

However as a tester I don’t just look at code produced by developers, I also get involved in planning and pre-planning and in my mind this is one of the most important testing activities I can perform.

I have been testing code for over 10 years now and during that time I have seen code that is awesome, code that looks like it was written by 100 drunken monkeys and a multitude of things in the middle. I have been involved in testing from many levels, from code reviews to running UAT workshops, from usability to security and all sorts of other things as well, even performance and load. I’ve been lucky, I’ve got to see a lot and because of this I have a broad range of experience. When I’m in planning/pre-planning sessions I can use all of this experience to add value.

How do I do this? I can help the PO shape the product. I can highlight things that I have seen implemented before that turned out to be a disaster, and get them removed very early in the process. I can suggest additional value-added features that I’ve seen before that the PO didn’t think of or didn’t know were possible. I can highlight things that may end up being extremely costly to test so that we can think of different ways to look at the problem that are more cost-effective to implement.

But what is the number one thing I can do? I can turn a “Can we do this?” into a “Should we do this?” before it gets near a developer.

If that’s not adding value, I don’t know what is.

Waiting for Angular

Friday, February 7, 2014

Recently I’ve spent a fair bit of time working with AngularJS applications, and as great as Angular is, it can be a pain when it comes to automating it.

The main problem you will probably see stems from the fact that Angular does everything asynchronously, so you’re never quite sure when the page has finished loading. If only there was a way to know Angular had finished before you started doing stuff on a page…

Here’s an ExpectedCondition that will wait for Angular to finish processing stuff on the page:

public static ExpectedCondition<Boolean> angularHasFinishedProcessing() {
    return new ExpectedCondition<Boolean>() {
        @Override
        public Boolean apply(WebDriver driver) {
            // Angular has finished when it is present on the page, its injector is
            // available and the $http service has no pending requests.
            String script = "return (window.angular != null) && "
                    + "(angular.element(document).injector() != null) && "
                    + "(angular.element(document).injector().get('$http').pendingRequests.length === 0)";
            return Boolean.valueOf(((JavascriptExecutor) driver).executeScript(script).toString());
        }
    };
}
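
As a minimal usage sketch (assuming a standard WebDriverWait with a timeout that suits your application), you would wait on this condition before interacting with the page:

    WebDriverWait wait = new WebDriverWait(driver, 15);
    wait.until(angularHasFinishedProcessing());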

JMeter Maven Plugin Version 1.9.0 Released

Wednesday, January 15, 2014

It has been a while since the last release and there are quite a few fixes and improvements.

One notable exclusion is JMeter 2.11 support; this is due to some dependency problems with the 2.11 JMeter artifacts.  We plan to make another release to support 2.11 as soon as everything is in place.

So on to the release notes:

Version 1.9.0 Release Notes

  • JMeter version 2.10 support added.
  • Issue #56 – Now using a ProcessBuilder to isolate the JVM JMeter runs in.
  • Merge pull request #70 from Erik G. H. Meade – Add requiresDirectInvocation true to JMeterMojo.
  • Issue #71 – Fixed documentation errors.
  • Issue #63 – Fixed remote configuration documentation errors.
  • Merge pull request #73 from Zmicier Zaleznicenka – Added a missing dependency that caused a file not found / NonGUIDriver error.
  • Issue #72 – Remove the maven site from the plugin.
  • Issue #73 – Add missing dependency for ApacheJMeter-native.
  • Issue #84 – Correctly place explicit dependencies in the /lib directory.
  • Issue #66 – Jmeter lib directory contains additional jars.
  • Issue #75 – Allow empty propertiesUser properties.
  • Issue #80 – Integration Tests Failing With Maven 2.
  • Issue #77 – JMeter plugins artifacts now placed in lib/ext directory. You can specify which artifacts are JMeter plugins using the new jmeterPlugins configuration setting:
    <configuration>
        <jmeterPlugins>
            <plugin>
                <groupId>my.group</groupId>
                <artifactId>my.artifact</artifactId>
            </plugin>
        </jmeterPlugins>
    </configuration>
  • Added the ability to configure the JMeter JVM:
    <configuration>
        <jMeterProcessJVMSettings>
            <xms>1024</xms>
            <xmx>1024</xmx>
            <arguments>
                <argument>-Xprof</argument>
                <argument>-Xfuture</argument>
            </arguments>
        </jMeterProcessJVMSettings>
    </configuration>
  • Issue #82 – Allow users to specify the resultsDir:
    <configuration>
        <resultsDirectory>/tmp/jmeter</resultsDirectory>
    </configuration>
  • Issue #64 – Remote execution seems to be stopping before agent stops running the tests.
  • Merge pull request #78 from Mike Patel – Changes to allow system / global jmeter properties to be sent to remote clients.
  • Issue #89 – Add support for advanced log config. If you add a “logkit.xml” into the <testFilesDirectory> it will now be copied into the /bin folder. If one does not exist the default one supplied with JMeter will be used instead. If you don’t want to call your advanced log config file “logkit.xml”, you can specify the filename using:
    <configuration>
        <logConfigFilename>myFile.xml</logConfigFilename>
    </configuration>
  • Issue #88 – ApacheJMeter_mongodb dependency is not in POM

The Driver Binary Downloader Maven Plugin for Selenium 1.0.0 Released

Wednesday, January 15, 2014

The initial stable release of the driver-binary-downloader-maven-plugin is now available. It brings in the following changes:

  1. Improved the performance of the unzip code (things are much quicker now).
  2. Only download binaries for the current OS (no more pulling down Windows binaries on your Linux box).
  3. PhantomJS support so that you can get GhostDriver (PhantomJSDriver) up and running with minimal effort.

Using it is very simple; just add the following to your POM:

    <plugins>
        <plugin>
            <groupId>com.lazerycode.selenium</groupId>
            <artifactId>driver-binary-downloader-maven-plugin</artifactId>
            <version>1.0.0</version>
            <configuration>
                <!-- root directory that downloaded driver binaries will be stored in -->
                <rootStandaloneServerDirectory>/my/location/binaries</rootStandaloneServerDirectory>
                <!-- Where you want to store downloaded zip files -->
                <downloadedZipFileDirectory>/my/location/zips</downloadedZipFileDirectory>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>selenium</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>

For more information see the project on GitHub:  https://github.com/Ardesco/selenium-standalone-server-plugin

If you want to see it in action have a look at https://github.com/Ardesco/Selenium-Maven-Template

Waiting with jQuery

Monday, May 6, 2013

Waiting can be hard, so here are a couple of useful tricks to use with jQuery:

First of all, have you ever tried to interact with something on the screen, only for some background AJAX call to change what is on the screen at the last possible moment, as if it was purposely trying to break your test? Well, let’s get rid of that problem by waiting until all AJAX calls have finished processing:

    public static ExpectedCondition<Boolean> jQueryAJAXCallsHaveCompleted() {
        return new ExpectedCondition<Boolean>() {
 
            @Override
            public Boolean apply(WebDriver driver) {
                return (Boolean) ((JavascriptExecutor) driver).executeScript("return (window.jQuery != null) && (jQuery.active === 0);");
            }
        };
    }

Bear in mind that this will only wait until there are no outstanding AJAX calls; once the condition has been met, something sneaky could fire off another AJAX call just to be awkward. It’s not totally foolproof, but it should help increase reliability.
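
As a minimal sketch of where this fits (the element IDs here are hypothetical), you might wait for outstanding AJAX calls to finish before reading the result of an action that triggers one:

    WebDriverWait wait = new WebDriverWait(driver, 10);
    // Clicking "save" fires an AJAX call; wait for it to finish before reading the result.
    driver.findElement(By.id("save")).click();
    wait.until(jQueryAJAXCallsHaveCompleted());
    String status = driver.findElement(By.id("status")).getText();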

Secondly, have you ever tried to click on an element that is supposed to do something special (e.g. save a form, sort a table, make something magical happen on screen, make a drop down box appear on mouseover, etc) and once you have clicked on it found that nothing happened? The usual thing to do is blame Selenium because it didn’t click on your element, but have you ever thought that Selenium is so fast that it managed to click on the element before the JavaScript that is rendering the page managed to register a listener on the element that you are about to interact with? You may want to try this:

    public static ExpectedCondition<Boolean> listenerIsRegisteredOnElement(final String listenerType, final WebElement element) {
        return new ExpectedCondition<Boolean>() {
            @Override
            public Boolean apply(WebDriver driver) {
                // jQuery._data exposes the event listeners that jQuery has bound to an element.
                Map<String, Object> registeredListeners = (Map<String, Object>) ((JavascriptExecutor) driver).executeScript("return jQuery._data(jQuery(arguments[0]).get(0), 'events')", element);
                if (registeredListeners == null) {
                    // Nothing has been registered on this element yet.
                    return false;
                }
                for (Map.Entry<String, Object> listener : registeredListeners.entrySet()) {
                    if (listener.getKey().equals(listenerType)) {
                        return true;
                    }
                }
                return false;
            }
        };
    }

You would use it like this:

    WebElement myDropDownMenu = driver.findElement(By.id("menu"));
    wait.until(listenerIsRegisteredOnElement("mouseover", myDropDownMenu));

This would make selenium wait until a mouseover listener has been applied to a dropdown menu element (obviously this example assumes that the menu dropdown is being performed using jQuery).

The above will only work if your site is using jQuery and jQuery is triggering the relevant actions, so it is limited; hopefully it is useful all the same. Enjoy.

JMeter Maven Plugin 1.8.1 Released

Saturday, April 13, 2013

Version 1.8.1 of the JMeter Maven plugin has been released.

This is a minor update that fixes Issue #62 – testResultsTimestamp not working.

JMeter Maven Plugin 1.8.0 Released

Wednesday, March 13, 2013

I’m a bit late adding this here (I’ve been distracted updating the wiki for the plugin and doing a bit of running around closing off issues), but I thought it would be a good idea to start posting stuff about the JMeter Maven plugin here as well. Version 1.8.0 of the JMeter Maven Plugin is now available in Maven Central.

The source code is available on GitHub and there is now also an up-to-date wiki.

Release Notes

  • Added support for JMeter version 2.9.
  • Fixed issue #61 – Added skipTests ability. You can now add a configuration option to skip tests, use it like this:

    <properties>
        <skipTests>false</skipTests>
    </properties>
     
    <plugin>
        <groupId>com.lazerycode.jmeter</groupId>
        <artifactId>jmeter-maven-plugin</artifactId>
        <version>1.8.0</version>
        <executions>
            <execution>
                <id>jmeter-tests</id>
                <phase>verify</phase>
                <goals>
                    <goal>jmeter</goal>
                </goals>
                <configuration>
                    <skipTests>${skipTests}</skipTests>
                </configuration>
            </execution>
        </executions>
    </plugin>

    If you now run:

    mvn verify -DskipTests=true

    The performance tests will be skipped.

  • #58,#59 – Add dependencies with custom function to /lib/ext folder (pull request made by dpishchukhin that has been merged in).
  • Removed jmx file sorting code as it was not sorting files into a deterministic order. Tests are run in the order the plugin discovers them on disk now.
  • Removed checks for <error>true</error> and <failure>true</failure> in .jtl files; these elements do not occur in JMeter 2.9.
  • Added the ability to choose whether to append or prepend the date to the results filename using the new “appendResultsTimestamp” configuration option (valid values are: TRUE, FALSE):

    <appendResultsTimestamp>false</appendResultsTimestamp>

  • Set the default timestamp to an ISO_8601 timestamp. The formatter now used in the configuration option “resultsFileNameDateFormat” is a JodaTime DateTimeFormatter (see http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html):

    <resultsFileNameDateFormat>MMMM, yyyy</resultsFileNameDateFormat>

  • Added the ability to override the root log level using the new “overrideRootLogLevel” configuration option (Valid log levels are FATAL_ERROR, ERROR, WARN, INFO and DEBUG):

    <overrideRootLogLevel>DEBUG</overrideRootLogLevel>

  • Failure scanner refactored to use a Boyer-Moore algorithm to increase performance on large results files; you should hopefully see some improvements in speed when the plugin is checking your results files for the presence of failures.
  • Added the ability to set the result file format using a new “resultsFileFormat” configuration option (Valid options are XML and CSV, it will default to XML):

    <resultsFileFormat>CSV</resultsFileFormat>

  • Modified remote configuration settings, configuration options are now:

    <remoteConfiguration>
    	<startAndStopServersForEachTest>false</startAndStopServersForEachTest>
    	<startServersBeforeTests>true</startServersBeforeTests>
    	<stopServersAfterTests>true</stopServersAfterTests>
    	<serverList>server1,server2</serverList>
    </remoteConfiguration>

    If you use “startAndStopServersForEachTest” it will override “startServersBeforeTests” and “stopServersAfterTests” if they have been configured as well.

Stop Moving So I Can Click You Dammit!

Sunday, February 24, 2013

This is a little trick that some may find useful.

I re-factored some tests that were checking an accordion control on Friday to speed things up; unfortunately, when I was done I started getting some intermittent failures.  It seemed that I was now sometimes unable to open up one of the accordion elements.  After a bit of head scratching and some time in the debugger with nothing obvious jumping out at me, I finally realised what it was.  I was sometimes clicking on an element to open up the next accordion section whilst it was still moving (all because I had made things run faster).

The solution? Wait until the element has finished moving.

Here is an ExpectedCondition that you can use with WebDriverWait:

public static ExpectedCondition<Boolean> elementHasStoppedMoving(final WebElement element) {
    return new ExpectedCondition<Boolean>() {
        @Override
        public Boolean apply(WebDriver driver) {
            // Sample the element's position, pause briefly, then sample it again.
            // If the two positions match, the element has stopped moving.
            Point initialLocation = ((Locatable) element).getCoordinates().inViewPort();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            Point finalLocation = ((Locatable) element).getCoordinates().inViewPort();
            return initialLocation.equals(finalLocation);
        }
    };
}
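
As a minimal usage sketch (the locator and timeout here are just illustrative), you would wait for the accordion section to settle before clicking it:

    WebDriverWait wait = new WebDriverWait(driver, 10);
    WebElement accordionSection = driver.findElement(By.id("section-two"));
    wait.until(elementHasStoppedMoving(accordionSection));
    accordionSection.click();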

Please Let Manual Testers Be Manual Testers

Sunday, February 24, 2013

The testing world seems to have entered a state of flux in the last couple of years, where “Automated Testing” is the new nirvana. I suspect this is in part due to more and more companies following Google’s lead and starting to hire developers in test. Now, in my opinion, having people performing these roles is not a bad thing. When you have somebody writing your test framework you want somebody with experience of writing code and making architectural decisions, and a developer in test is a very useful and powerful resource in this situation. The problem is that people have seen how useful a developer in test is, and they have decided that every tester should now become a developer in test, even though the majority of testers are probably not going to be performing automated testing or writing test frameworks.

I regularly frequent the Selenium Users mailing list, and day by day I see more and more people coming to the list who just don’t seem to have a basic clue about programming. These people invariably want you to ‘urgently’ help them because they have to write an automated test/test framework and they have no idea how to do it. Now, people wanting to learn Selenium and become automated testers is not a bad thing, but most of these requests seem to come from people who have testing jobs and have suddenly had the role of automated tester thrust upon them, and this is a bad thing!

The other thing I see more and more regularly now is test frameworks that have been written so that manual testers can easily use them without learning how to program. These products seem to come in two forms:

  1. Something that scans a page finding all the elements of interest, to abstract away the logic of finding them.
  2. An Excel spreadsheet driven framework where you have to manually populate the spreadsheet with locators/expected text to run the tests.

Now I can see some value in option 1; that could be useful for automated testers who don’t want to spend lots of time locating things that may be interesting on a page and just want to spend time interacting with elements. Personally, however, I would not want to use something like this; I prefer to locate elements of interest myself to keep my tests lean and mean.

Option 2 is something that should be killed right now in my opinion. It has been written exclusively to take manual testers and trick them with the promise that they will now become automated testers and eventually developers in test. It will not do this; they will spend all of their day filling in Excel spreadsheets (why do we still have such an obsession with Excel spreadsheets anyway?) and then the rest of their time updating them as things go wrong.

This process really turns manual testers into data entry clerks. These frameworks are invariably brittle as hell and require a lot of manual effort to keep them up to date. The spreadsheets that are used end up being very complex, because automated testing is a complex thing to do, and as you add more functionality they get worse. They are useless, soulless and, worst of all, they take up all the time that manual testers have, so that they stop doing the thing manual testers do best: manual testing!

In our eagerness to move testing forward we are actually forgetting what the point of automated testing was in the first place. Automated testing was designed to make boring and monotonous regression tests a thing of the past. If the machine can rerun a series of known tests by itself and check that nothing has broken in the current build, that frees the manual testers up to do the thing they do best: manual exploratory testing, which is where you find all the bugs.

Automated testing checks that the functionality you have written works as you expect it to. Manual testing starts to push the envelope and use the program in ways it wasn’t intended to be used. Manual testing has no barriers, it isn’t constrained, and it is what will find holes in your application.

Some of the best manual testers I have worked with knew nothing about programming; they could not write automated tests and quite frankly had no interest in it. If I was building a new test team and could pick either two of them, or a team of forty automated testers, I would pick the manual testers any day of the week. They will find bugs, they will exercise the system properly and they will be of much more benefit to the project. I would still want automated testers as well to write the automated test framework and write regression tests, the point is that they would be doing this to free up time for the manual testers to go in and break the system in new and inventive ways.

To finish off I would like to make a plea:

Please don’t try to turn manual testers into data entry clerks, and don’t try and force them to become programmers; all you are doing is destroying the testing profession.

Manual testers rock and are the heart and soul of testing, make sure you appreciate them!