
Re: A dozen tips for testing free software

Posted by robogeek on March 20, 2007 at 4:23 AM PDT

A dozen tips for testing free software is an interesting article about OSS quality. I want to compare it with some ideas we in the OpenJDK Quality Team have about quality for the OpenJDK and the commercial JDK releases. I talked a little about these ideas at FOSDEM last month.

"One of the best ways you can participate in the free and open source software (FOSS) revolution is by helping to test software and reporting bugs and issues to project developers to help them improve their code": I completely agree. There are many types of contribution one can make to open source projects, and not everybody has to be a hotshot coder to make valuable contributions. Over the weekend I listened to FLOSS Weekly's interview with Jeff Waugh, where he said he's not much of a programmer and makes his contributions in other ways, such as release engineering.

When we launch the quality team as part of the OpenJDK project, we plan to offer several ways to collaborate on quality.

Why? There are several reasons to collaborate with us on quality. You might be more suited to testing, either test execution or test development; it takes a different mindset to do testing, and some of the most intriguing software development I've done was the test suites I wrote while working in this team. By collaborating on quality you can also expect future JDK releases to be higher quality. Finally, your collaboration can help us at Sun do a better job of testing the commercial JDK: if you provide test cases that reflect what's written in your applications, you give us valuable clues about programming style and usage that we can apply to our own testing.

Testing in the FOSS world is different. I know better than to make broad generalizations about the OSS world, but after a little study it seems that quality processes in the OSS world do not match commercial quality processes, and that the quality expectations for OSS software are not the same. I know this is about to generate some heated comments about "many eyes make all bugs shallow" or some such, so here goes.

Consider the quality expectation for the commercial JDK. We have customers with stringent five-nines (or better) reliability requirements. Some of these customers are required by their governments to keep downtime under a maximum on the order of 5 minutes per year, and if they exceed that amount they are fined huge sums per minute of downtime. I have yet to see an OSS project with that kind of quality bar.

To test the commercial JDK we have an array of several hundred test systems to which we can automatically dispatch test jobs, letting our distributed team, spread across three continents, run test jobs around the clock. We run a variety of tests on each developer's putback, on weekly builds, and so on. By contrast, most OSS projects seem to rely on the ad-hoc testing done by users (a.k.a. the "many eyes") as they download and run the latest snapshots. More recently there is the idea of test-first development, with developers writing their own unit tests.

I'm sure it's a valuable approach, given the number of developers embracing it. It's an approach I've never experienced, so maybe I'm naive, but as a developer I know I would not trust myself to write tests for my own code. When I'm wearing a developer hat I think my code is perfect, so obviously I'm going to skimp on writing tests.

Someone wearing a test developer hat is freer to explore, to try oddball scenarios, to engage in a competition with the person writing the code to see how many bugs they wrote, and so on. It's a very interesting puzzle to work out how software might break. I remember a conversation at FOSDEM about some of the Classpath/Mauve developers who get into competing with each other over bug writing versus bug finding versus bug fixing.
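To make the test developer hat a little more concrete, here's a minimal sketch in plain Java of the kind of oddball-input probing I mean. The class name is mine, and the checks are just the documented boundary behavior of Integer.parseInt, not a real bug hunt:

```java
// Sketch of the test-developer mindset: probe boundary and oddball
// inputs that the code's own author is unlikely to try.
public class OddballProbe {

    // Returns true if parsing the string throws NumberFormatException.
    static boolean throwsNfe(String s) {
        try {
            Integer.parseInt(s);
            return false;
        } catch (NumberFormatException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Boundary values must round-trip exactly.
        if (Integer.parseInt("2147483647") != Integer.MAX_VALUE)
            throw new AssertionError("MAX_VALUE round-trip failed");
        if (Integer.parseInt("-2147483648") != Integer.MIN_VALUE)
            throw new AssertionError("MIN_VALUE round-trip failed");

        // One past the boundary must throw, not silently wrap around.
        if (!throwsNfe("2147483648"))
            throw new AssertionError("overflow should throw");

        // Degenerate inputs -- the cases a developer hat tends to skip.
        if (!throwsNfe("") || !throwsNfe("-"))
            throw new AssertionError("malformed input should throw");

        System.out.println("all probes passed");
    }
}
```

Nothing here is exotic; the point is the attitude of hunting for the inputs the implementer never thought about.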

"Run the latest version of the software you're testing": Yeah, bugs are being fixed all the time. On the other hand, the later build may have other bugs; it depends a lot on where in the release cycle that build sits. For example, the builds leading up to the final release ought to be higher quality than the builds prior to entering Beta. If they're not, what are you doing heading towards the final release?

"Check for duplicates before filing a bug report": It wastes everybody's time for a bug to be reported that's already known. Unfortunately, the state of most bug tracking systems makes it hard to determine whether a bug is already known.

"Include enough information in your report so the issue can be reproduced": Writing a good bug report is essential. It's so frustrating as a developer to be looking at a report, know that the user at the other end is frustrated, but have no clue what the problem is because they couldn't give a decent explanation of it.
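As a sketch of what "enough information" can look like, here's a hypothetical self-contained reproduction; the class name and the scenario are mine for illustration, not a real JDK bug. The ingredients that matter are the environment details, the smallest code path that shows the behavior, and an explicit expected-versus-actual statement:

```java
// Template for a minimal, self-contained bug reproduction.
// (Everything here is illustrative -- this code behaves correctly.)
import java.util.ArrayList;
import java.util.List;

public class Repro {
    public static void main(String[] args) {
        // 1. State the environment in the report: JDK version, OS.
        System.out.println("java.version = "
                + System.getProperty("java.version"));
        System.out.println("os.name = "
                + System.getProperty("os.name"));

        // 2. The smallest code path that demonstrates the behavior.
        List<String> list = new ArrayList<String>();
        list.add("a");
        list.add("b");

        // 3. Say what you expected and what you actually saw.
        System.out.println("expected size 2, actual size " + list.size());
    }
}
```

A report built like this can usually be reproduced in minutes instead of going back and forth for days.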

"If possible, write automated unit tests": Yup. I've spent more years than I care to remember working on GUI automation tools. It may not be obvious to an external test developer, but in the environment I live in, any manual test execution means an under-utilized test. We run so many test cycles that it's not feasible to run manual tests every time, so as a practical matter the manual tests are run less frequently.
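For context, the JDK's own regression tests are mostly ordinary classes with a main method, plus a comment header that lets the jtreg harness find and run them. A minimal sketch (the test subject here, String.trim, is just an illustration):

```java
/*
 * @test
 * @summary String.trim should strip leading and trailing whitespace
 * @run main TrimTest
 */
public class TrimTest {

    public static void main(String[] args) {
        check(" hello ".trim(), "hello");  // both ends trimmed
        check("hello".trim(), "hello");    // nothing to trim
        check("   ".trim(), "");           // all-whitespace collapses
    }

    // Throwing an exception is how a jtreg main-style test fails.
    static void check(String actual, String expected) {
        if (!actual.equals(expected))
            throw new RuntimeException(
                "expected \"" + expected + "\" but got \"" + actual + "\"");
    }
}
```

The same class also runs standalone with a plain `java TrimTest`, which is what makes this style so easy to automate.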

"If possible, use the code you're testing under real-life conditions": I'm more interested in whether the JDK is of use in real-life conditions. At the end of the day that's the measure of usefulness of the Java platform: whether it enables all of us to run the applications we want, on the systems we want to run them on. A test involving some random mishmash of features thrown together might demonstrate some bugs, but if it's not likely to be used that way by a developer in an application, is it worth testing and fixing?

"If possible, maintain a separate environment for testing": Yeah, I'd hate for someone's production environment to be damaged by test code that went awry.
