
Summary of Mojarra Hudson Jobs and the Automated Tests They Run

Posted by edburns on March 15, 2011 at 12:55 PM PDT

I'm trying to provide transparency into Mojarra development practices and increase Mojarra code quality. To that end, this blog entry summarizes the current state of the Hudson jobs for Mojarra.

Continuous integration is the layer of the software safety net that ties together the other two layers: version control mastery and a comprehensive automated test library. Mojarra has been developed with rigorous dedication to test-driven development, and so has an automated test library with over 2,000 tests. The Hudson jobs described in this article run those tests on a variety of server configurations.

Internal to the GlassFish group at Oracle, we have a high-performance Hudson server with numerous slaves. The Mojarra jobs are spread across two slaves, one running Solaris x86 and the other running Oracle Enterprise GNU/Linux. All of the Mojarra-related jobs are published to the external Hudson view <http://hudson.glassfish.org/view/JSF%20Mojarra/> using the Hudson publisher plugin. This blog entry examines five of those jobs in some detail.

Before diving into the jobs, I want to outline the structure of the automated test suite itself. This test suite has evolved since the first commit was made to Mojarra (then just known as the JSF RI) back in Fall 2001. The tests use a mix of in-container, out-of-container, white box, black box, and acrylic box techniques, sometimes using mock objects. The mock objects come from a home-grown test library, <http://java.net/projects/jsf-extensions/>, which pre-dates the many nicer mock alternatives such as EasyMock, Mockito, or even JSFUnit. It has never been economical to dump jsf-extensions, so we just keep updating it as we go. The tests also use a combination of JUnit, HtmlUnit, and even Cactus. The following table summarizes the layout of the Mojarra test suite.

Source directory, relative to the Mojarra root, and the kinds of tests that live there:

jsf-api/src/test/java
    JUnit-based, out-of-container, white box unit tests; uses jsf-extensions for mocks.

jsf-ri/test
    Cactus-based, in-container, white box unit tests, plus some simple JUnit-based out-of-container tests.

jsf-ri/systest
    HtmlUnit-based, in-container, acrylic box system tests. This entire suite is run twice, once with partial state saving enabled and again with partial state saving disabled. The whole suite is bundled into a single war, so tests that need to exercise per-webapp configuration cannot live here.

jsf-ri/systest-per-webapp
    HtmlUnit-based, in-container, black box integration tests. For the non-clustered deployment scenario, some of these tests are run twice, once with no virtual servers and again with two virtual servers.

jsf-test
    HtmlUnit-based, in-container, black box regression tests, each specific to a bug report. Each test is responsible for its own deployment configuration and is packaged in its own self-contained web app.

Because the primary target of Mojarra is GlassFish, four of the five jobs are devoted to that container. The fifth job runs a subset of the automated tests on Tomcat 7. I plan to add another job that runs a subset of the tests against WebLogic Server. Of the four GlassFish-focused jobs, two run against the Mojarra trunk and two run against the MOJARRA_2_1X_ROLLING branch; version 2.1.0 of that branch is what shipped in GlassFish 3.1. For both the trunk and the branch, the complete test suite is run in two configurations: 1) a two-node cluster, and 2) a single node with two virtual servers. The last job runs just the systest suite on Tomcat 7, with certain exclusions. The following table summarizes the Hudson jobs for Mojarra.

Job name, description, and expected number of successful tests:

JSF_TRUNK-GF3.1 (expects 2077 passing tests)
    With the Mojarra trunk source base, run the entire automated test suite on GlassFish 3.1 in a two-node cluster scenario.

JSF_TRUNK_NO_CLUSTER-GF3.1 (expects 2109 passing tests)
    With the Mojarra trunk source base, run the entire automated test suite on GlassFish 3.1 in a single instance, running some of the tests with a two virtual server scenario.

MOJARRA_2_1X_ROLLING-GF3.1 (expects 2075 passing tests)
    With the MOJARRA_2_1X_ROLLING branch source base, run the entire automated test suite on GlassFish 3.1 in a two-node cluster scenario.

MOJARRA_2_1X_ROLLING_NO_CLUSTER-GF3.1 (expects 2107 passing tests)
    With the MOJARRA_2_1X_ROLLING branch source base, run the entire automated test suite on GlassFish 3.1 in a single instance, running some of the tests with a two virtual server scenario.

JSF_TRUNK-TOMCAT7 (expects 368 passing tests)
    With the Mojarra trunk source base, run the systest suite with partial state saving set to true.

Note that I'd really like the Tomcat job to run more of the test suite, but it is not economical for Oracle to spend any more time on it right now. I would welcome help with this work; please see <http://java.net/jira/browse/JAVASERVERFACES-1993>. In fact, having the tests run on Tomcat at all is the result of an open source contribution from Manuel Gay, integrated by Oracle Mojarra team member Sheetal Vartak.

Additional Notes about These Jobs

There is a persistent and annoying bug in the version of Hudson we use that causes Hudson's parsing of the JUnit test results to fail: <http://issues.hudson-ci.org/browse/HUDSON-8716>. The cause of the failure is an errant ">" character on a line by itself. I'm not sure why that line is there, but it breaks the parsing and causes the job to fail. I added the following bit of script to each job to remove the offending character before the test results are processed by Hudson.

find ${WORKSPACE} -name "TEST*.xml" -exec perl -pi.bak -e "s/^>/ /g" {} \;

I'm sure you could do the same thing with sed(1), but I never bothered to learn sed(1). Shame on me.
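For the record, here is a hedged sketch of what the sed(1) equivalent might look like (the scratch directory stands in for ${WORKSPACE}; the -i.bak form of in-place editing should be accepted by both GNU and BSD sed, though flavors vary):

```shell
# Scratch directory standing in for ${WORKSPACE}.
WORKSPACE=$(mktemp -d)
printf '<testsuite>\n>\n</testsuite>\n' > "$WORKSPACE/TEST-sample.xml"

# sed equivalent of the perl one-liner above: replace a leading ">"
# with a space, editing in place and keeping a .bak backup.
find "$WORKSPACE" -name "TEST*.xml" -exec sed -i.bak 's/^>/ /' {} \;

cat "$WORKSPACE/TEST-sample.xml"
```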

The last little note is that I wanted to make the job fail if fewer than the expected number of tests pass. I achieved this by adding a build parameter to each job with a default value that must be updated as more tests are added. Then I call the following Ant target, which parses the test results, extracts the actual number of tests passed, and compares it with the expected number, passed in as a parameter. Note that there is no built-in comparison task in Ant, so I just used the math task from ant-contrib, which we already import.

<target name="assert.expected.passed.test.count">
  <if>
    <isset property="expected.passed.test.count" />
    <then>
      <echo>perform the assertion</echo>
      <sequential>
        <property name="test.report.dir"
                  value="${impl.dir}/build/test-reports" />
        <loadfile property="report.summary"
                  srcFile="${test.report.dir}/html/overview-summary.html" />
        <propertyregex property="actual.passed.test.count"
                       input="${report.summary}"
                       regexp="(?s)(.*)(href=.all-tests.html.>)([0-9]{1,6})(.*)"
                       select="\3" />
        <math result="passed.test.count.difference" datatype="int"
              operation="subtract"
              operand1="${actual.passed.test.count}"
              operand2="${expected.passed.test.count}" />
        <propertyregex property="actual.lessthan.expected"
                       input="${passed.test.count.difference}"
                       regexp="^-.*"
                       replace="actual.lessthan.expected" />
        <fail if="actual.lessthan.expected" status="-1"
              message="--JOB FAILED!-- Fewer than expected tests passed.  Expected: ${expected.passed.test.count} Actual: ${actual.passed.test.count}" />
      </sequential>
    </then>
  </if>
</target>

I'm sure there's a more elegant way to do this, but this gets the job done.
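For what it's worth, the same check could also be sketched as a plain shell step. Everything below is hypothetical and modeled on the Ant target above: the function name, the sample summary snippet, and the regexp are assumptions, and the real overview-summary.html layout may differ.

```shell
# Hypothetical shell version of the assertion in the Ant target above.
# The sed expression mirrors the propertyregex: grab the integer that
# follows the all-tests.html link in the JUnit HTML summary.
assert_passed_count() {
    expected=$1
    summary_file=$2
    actual=$(sed -n 's/.*href="all-tests.html">\([0-9]\{1,6\}\).*/\1/p' "$summary_file")
    actual=${actual:-0}   # treat "no match" as zero tests passed
    if [ "$actual" -lt "$expected" ]; then
        echo "--JOB FAILED!-- Fewer than expected tests passed.  Expected: $expected Actual: $actual"
        return 1
    fi
    echo "OK: $actual tests passed, expected at least $expected"
}

# Stand-in for ${test.report.dir}/html/overview-summary.html.
summary=$(mktemp)
printf '<td><a href="all-tests.html">2077</a></td>\n' > "$summary"
assert_passed_count 2077 "$summary"
```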
