By Patrick Lightbody
Additional Tuning for Nexus + Selenium
There are a few other miscellaneous tricks we used in this project that are worth sharing, most of which are visible in the
SeleniumTest source code and the corresponding
SeleniumJUnitRunner, which is used by any test case extending SeleniumTest. In there we do a few things:
- Check for any mock assertion failures whenever a Selenium call is made.
- Capture an automatic screenshot of the browser upon any test failure (great for debugging).
- Capture the log of all network calls made during the test, including HTTP headers (also great for debugging).
- Utilize the grid.sonatype.org build farm for launching browsers remotely.
Checking Mock Assertions
Recall that in ChangePasswordTest we set mock expectations with the following code:
MockHelper.expect("/users_changepw", new MockResponse(Status.SUCCESS_NO_CONTENT, null) {
    @Override
    public void setPayload(Object payload) throws AssertionFailedError {
        UserChangePasswordRequest r = (UserChangePasswordRequest) payload;
        assertEquals("password", r.getData().getOldPassword());
        assertEquals("newPassword", r.getData().getNewPassword());
    }
});
What we didn't explain is that the MockResponse's setPayload() method was not being called by the test case itself, but rather via a callback managed by an entirely separate thread within the mock framework. This thread is the one that picks up the HTTP request from the browser and maps it to the mock response.
If an assertEquals() call fails on that thread, it throws an AssertionFailedError, which is standard JUnit behavior. But because the error is thrown on a thread outside JUnit's control, JUnit never sees it and therefore cannot mark the test as a failure.
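To see why this is a problem, here is a small standalone demonstration (hypothetical, not from the Nexus codebase): an error thrown on a background thread never propagates to the thread that started it, not even through join().

```java
// Standalone demonstration (not from the Nexus test framework): an
// AssertionError thrown on a background thread does not propagate to the
// thread that started it.
public class ThreadErrorDemo {
    public static boolean errorReachesCaller() throws InterruptedException {
        Thread worker = new Thread(() -> {
            throw new AssertionError("failed on worker thread");
        });
        // Silence the default handler that would otherwise print a stack trace.
        worker.setUncaughtExceptionHandler((t, e) -> { });
        try {
            worker.start();
            worker.join();
        } catch (AssertionError e) {
            return true; // never happens: join() does not re-throw worker errors
        }
        return false;
    }
}
```

errorReachesCaller() always returns false, which is exactly the situation the mock framework's HTTP thread puts us in.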
To compensate for this, we wrote the framework to capture assertion failures that happen in the mock environment, check regularly whether one has been reported, and, if so, report it to JUnit as a test failure. We did this by creating a dynamic proxy around the Selenium object that checks for captured assertions, re-throwing any that were found, before every Selenium call:
final Selenium original = new DefaultSelenium(...);
selenium = (Selenium) Proxy.newProxyInstance(..., new InvocationHandler() {
    @Override
    public Object invoke(Object p, Method m, Object[] args) throws Throwable {
        // check assertions on every remote call we do!
        MockHelper.checkAssertions();
        return m.invoke(original, args);
    }
});
This means that when we use the Page Object Pattern to interact with Selenium indirectly, each interaction quietly checks whether there were any assertion failures. If there were, the error is re-thrown and reported in a way that JUnit can catch and report on.
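The checkAssertions() side of this can be sketched with a thread-safe holder. This is a hypothetical reconstruction of what MockHelper might do internally (the real code is in the SeleniumTest sources); to keep the sketch dependency-free it uses java.lang.AssertionError in place of JUnit's AssertionFailedError.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a capture-and-rethrow mechanism like MockHelper's.
public class MockAssertions {
    // Holds the first assertion failure reported from the mock framework's thread.
    private static final AtomicReference<AssertionError> pending = new AtomicReference<>();

    // Called on the mock framework's HTTP thread when setPayload() fails.
    public static void report(AssertionError e) {
        pending.compareAndSet(null, e); // keep only the first failure
    }

    // Called on the test thread before every Selenium call (via the proxy).
    public static void checkAssertions() {
        AssertionError e = pending.getAndSet(null);
        if (e != null) {
            throw e; // now JUnit sees the failure on its own thread
        }
    }
}
```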
Capturing Screenshots and Network Logs Automatically
Another thing we did was make it easier for developers to debug a failed test: we capture a screenshot of the browser upon failure, and we always capture the network traffic between the browser and the mock Nexus web server.
This is where the SeleniumJUnitRunner class comes into play. In the world of JUnit, a "runner" is the thing responsible for executing test cases. If your class doesn't specify a runner (and most do not), the default BlockJUnit4ClassRunner is used, which looks for methods with the @Test annotation that you're likely already familiar with. Our custom runner extends this class:
public class SeleniumJUnitRunner extends BlockJUnit4ClassRunner {
    public SeleniumJUnitRunner(Class<?> c) throws InitializationError {
        super(c);
    }

    @Override
    protected Statement methodInvoker(FrameworkMethod m, Object test) {
        if (!(test instanceof SeleniumTest)) {
            throw new RuntimeException("Only works with SeleniumTest");
        }
        final SeleniumTest stc = ((SeleniumTest) test);
        stc.setDescription(describeChild(m));
        return new InvokeMethod(m, test) {
            @Override
            public void evaluate() throws Throwable {
                try {
                    super.evaluate();
                } catch (Throwable throwable) {
                    stc.takeScreenshot("FAILURE");
                    throw throwable;
                } finally {
                    stc.captureNetworkTraffic();
                }
            }
        };
    }
}
Here we override the way JUnit invokes each method in our test case. We still let the invocation go through, but we catch any exception, capture a screenshot, and re-throw it. In all cases we also capture the network traffic generated by the browser.
Both of these methods are part of the SeleniumTest that our test cases extend, and both use standard commands built into Selenium. Screenshots are taken with the selenium.captureScreenshotToString() command, which returns the screenshot as a Base64-encoded string. The network log is retrieved with the selenium.captureNetworkTraffic() command, which returns the network traffic in XML, JSON, or plain text format. Note: in order for the captureNetworkTraffic() command to work, you must start Selenium like so:
selenium.start("captureNetworkTraffic=true");
This tells Selenium to route all browser traffic through its local proxy, which allows it to see the network traffic and capture it for retrieval later.
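The screenshot side boils down to decoding that Base64 string and writing the bytes to disk. Here is a minimal sketch; the class and method names are hypothetical (not the actual SeleniumTest code), and it uses java.util.Base64 for brevity where the original, written in the Java 6 era, might have used a library such as Commons Codec.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Base64;

// Hypothetical sketch: persist the Base64 PNG string returned by
// selenium.captureScreenshotToString() to a file on disk.
public class ScreenshotWriter {
    public static void writeScreenshot(String base64Png, String fileName) throws IOException {
        byte[] bytes = Base64.getDecoder().decode(base64Png);
        try (FileOutputStream out = new FileOutputStream(fileName)) {
            out.write(bytes);
        }
    }
}
```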
Launching Browsers from the Sonatype Grid
The last thing we did was some work to make it easier to get started with the test framework. Normally, Selenium Remote Control requires that you run a "Selenium Server" either locally or on another computer:
java -jar selenium-server.jar
While this overhead is relatively small, it'd be nice if we could avoid it entirely. Another problem the Nexus developers have is that they are like most web developers today: they work on a Mac and test/develop locally with Firefox. As such, they don't have an easy way to launch tests that automate IE.
Fortunately, Sonatype has a large grid of machines running various operating systems with many different browsers installed. At the time it was being used to run Hudson build jobs, but it was clearly also perfectly capable of serving as a remote farm of browsers. As such, we decided to have test runs, whether launched from developer laptops or from Hudson build jobs, all use the same set of browsers on the Sonatype grid.
We did this by dynamically opening up an SSH tunnel and port forwarding the necessary ports to talk to the Selenium Server that was already running on the remote machine, as well as to let the browser on the remote machine talk back to the hosted mock Nexus web server.
One big gotcha with this approach: if two separate developers run tests at the same time, they can't both open a remote port forward (pointing back to the mock Nexus web server) on the same port. To solve that issue, we used the port-allocator-maven-plugin developed by Sonatype, which finds a random, unused port to use with the SSH connection.
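Conceptually, finding a free port is simple: bind a ServerSocket to port 0 and the operating system hands back an unused ephemeral port. This is a sketch of the idea only, not the plugin's actual implementation:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Conceptual sketch of free-port allocation (not the plugin's actual code):
// binding to port 0 asks the OS for any unused ephemeral port.
public class PortAllocator {
    public static int allocatePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }
}
```

Note the inherent race: the port is released as soon as the socket closes, so in principle another process could grab it before the SSH tunnel binds it. In practice the window is small enough that this approach works well.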
One requirement of this model is that you must be connected to the internet in order to run tests. While we provide some command line switches that let you run your tests on a local browser, the default assumes you're always online. Also, because we're making an SSH connection, some form of credentials is required: we went with a shared (private) SSH key, plus the option to use your own personal SSH key and a supplied password for that key. Check out the openTunnel() and seleniumSetup() methods in the
SeleniumTest source code to see how it all works.
The result is that we now have a farm of machines that can easily be used by developer builds and Hudson builds, all without needing to set up local browsers or Selenium Servers (except for those in the Sonatype Grid itself). Developers can now write a test in their IDE on OS X, right click on it and select "Run Test", and have it drive an IE browser on a remote machine.
Conclusion
When I started this project and the Sonatype team suggested we mock out the entire UI, I was skeptical. I felt there were already more than enough challenges with building a Selenium framework for an application built on top of ExtJS and that trying to mock out all these RESTful calls would only make the project more complex. Fortunately, I was wrong!
Because the Nexus team had put so much effort into their headless integration tests, there was little need to test the backend again through a user interface test. As such, we were given a rare opportunity to focus purely on the UI. This project is a testament to the value of writing unit tests, integration tests, and functional UI tests without necessarily embracing the overhead and complexity of trying to test them all at once.
I hope that this article gives you some ideas for testing your project with Selenium, whether it's the use of the Page Object Pattern or coming up with creative ways to focus your testing efforts on the things that Selenium does well (UI testing) while using other testing techniques for the things it isn't necessarily best at (functional and unit testing).