Jini: 300,000 test-executions strong

Posted by nidaley on April 13, 2006 at 1:32 PM PDT

Testing the Jini Technology Starter Kit is a lot of fun. It's also a lot of work.

Since version 2.0 of the starter kit (when a new pluggable implementation of the Java RMI programming model -- named JERI -- was added), the test matrix has really exploded! For instance, here is a sampling of the different options that we test:

  • Operating Systems: 1 Mac, 3 Linux, 2 Windows, and 4 Solaris
  • JDKs: Java 1.4.2 and Java 5.0
  • HotSpot VM: client and server
  • RMI Transport: default, JRMP, JERI, HTTP, HTTP through a proxy, HTTPS, HTTPS through a proxy, JSSE, and Kerberos
  • Persistence: activatable, persistent, and transient modes of all the starter kit services
  • Assertions: enabled and disabled
  • Logging: default level and finest level
  • NIO Usage in JERI: enabled and disabled
  • Mode: local host execution and distributed execution across many hosts

Taking the cross product of the above options together with the 1,600 or so unique tests that we have, we end up running almost 300,000 tests over a full two-week test cycle (roughly 25,000 tests per day). Since the entire test process is automated, we simply repeat this cycle every two weeks. For all the gory details, check out the test matrix for Jini Technology Starter Kit v2.1 (released 10/2005).
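To put the combinatorics in perspective, here is a rough back-of-the-envelope sketch (in Java, naturally). The option counts are taken from the list above; note that a literal full cross product of every option against every test would be far larger than 300,000 runs, so in practice only a subset of combinations applies to any given test:

    public class MatrixSize {
        public static void main(String[] args) {
            // Option counts from the list above.
            int operatingSystems = 10; // 1 Mac + 3 Linux + 2 Windows + 4 Solaris
            int jdks             = 2;  // Java 1.4.2 and Java 5.0
            int vms              = 2;  // HotSpot client and server
            int transports       = 9;  // default, JRMP, JERI, HTTP(S) with/without proxy, JSSE, Kerberos
            int persistence      = 3;  // activatable, persistent, transient
            int assertions       = 2;  // enabled, disabled
            int logging          = 2;  // default, finest
            int nio              = 2;  // enabled, disabled
            int modes            = 2;  // local host vs. distributed

            long fullCrossProduct = (long) operatingSystems * jdks * vms * transports
                    * persistence * assertions * logging * nio * modes;
            System.out.println("Full cross product of configurations: " + fullCrossProduct); // 17280

            // At ~300,000 runs per cycle over ~1,600 unique tests, each test sees
            // about 187 configurations -- a pruned subset of the full matrix.
            System.out.println("Configurations per test: " + (300000 / 1600));
        }
    }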

Our test lab comprises roughly 100 machines with 2 or 3 different operating systems installed on each machine. With a distributed system of this magnitude, the test scheduling software is (of course) based on Jini itself...but that's another blog entry.

Finally, there are a couple of challenges presented by such a large volume of regularly repeated tests. First, the size of the test results from a single day can be many gigabytes. Storing and archiving this amount of data is burdensome. We've been able to decrease this to megabytes by throwing away the logs for tests that pass.

Second, many of the tests wait a specific amount of time for some event to happen or not happen (we're testing a distributed, event-driven system after all). These timing dependencies in the tests can be (too) sensitive to network load, test host load, a specific RMI transport, and many other factors. The test failures caused by these timing issues are a real nuisance. A lot of time can be wasted chasing down these false negatives.
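To make the timing problem concrete, here is a minimal sketch of the pattern (the class and names are hypothetical, not taken from the starter kit's actual test harness): the test waits a fixed interval for an event, and anything that delays delivery past the deadline -- a loaded host, a congested network, a slower transport -- turns into a spurious failure.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    // Hypothetical illustration of a timing-sensitive test: register for an
    // event, then wait a fixed interval for it to arrive (or not arrive).
    public class TimingSensitiveTest {
        private static final long MAX_WAIT_SECONDS = 30; // deadline tuned by hand

        public static void main(String[] args) throws InterruptedException {
            final CountDownLatch eventArrived = new CountDownLatch(1);

            // Stand-in for a remote event source; in the real tests this
            // would be a Jini service delivering a remote event.
            Thread eventSource = new Thread(new Runnable() {
                public void run() {
                    // Simulate a delivery delay that varies with load.
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                    eventArrived.countDown();
                }
            });
            eventSource.start();

            // If load pushes delivery past MAX_WAIT_SECONDS, the test fails
            // even though nothing is actually broken -- a false negative.
            if (eventArrived.await(MAX_WAIT_SECONDS, TimeUnit.SECONDS)) {
                System.out.println("PASS: event arrived in time");
            } else {
                System.out.println("FAIL: timed out waiting for event");
            }
        }
    }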

The Final Word...

Why write something in five days that you can spend five years automating?

- Terence Parr