
Open quality metrics and processes

Posted by cos on September 18, 2006 at 4:44 PM PDT

Despite the differences in business models, development processes,
functional areas, and the languages these applications are written
in, their quality approaches are quite similar and the final goal is
the same: the engineering team has to deliver a quality application
to the end users.

Let's look back at how different teams address this: what techniques
and tools are used by a few well-adopted systems in both the open and
closed source communities. (To put this information together I used
various sources, mostly available on the Web: the applications' web
sites, podcasts, etc.) Also, I have to apologize in advance: the
information below isn't fully normalized, and I would have had to
spend more time on pretty formatting, but I think it illustrates my
point. Finally, the choice of applications might seem a bit odd to
you, but I may have done that intentionally.

openSUSE Linux:

  • stress testing
  • reliability testing (similar to the above + some boundary testing)
  • scalability testing (behavior on NUMA, etc.)
  • Tools: home-grown automation tools, Bugzilla for defect tracking
  • source code is available through participation in the openSUSE
    community
  • Number of currently reported bugs: 1,238, or so it appears

Mozilla:

  • has some smoke tests: http://www.mozilla.org/quality/smoketests/
  • overall, Mozilla has a number of QA teams (one per functional
    area, actually), and they usually publish some tests and testing
    specs so that contributors can participate. More information:
    http://www.mozilla.org/quality/

  • Tools: mostly manual testing; obviously, Bugzilla for defect
    tracking

  • Number of currently reported bugs: 9,237 (I've looked into
    Firefox only)

LTP: a community quality project for POSIX UNIXes

  • a few commercial vendors are contributing (SuSE, IBM, etc.)
  • participants from all over the place (23+ by now)
  • Tools: lcov, gcov
  • Number of currently reported bugs: the web site doesn't have much
    information on it. Or perhaps that's just my inability to find the
    relevant information.

  • you might also want to check this article with more details about
    Linux reliability testing:
    http://www-128.ibm.com/developerworks/linux/library/l-rel/

JetBrains (IntelliJ IDEA):

Despite being a commercial tool, IDEA seems to rely on quite a smart
model for ensuring the quality of the product. It is manifold:

  • developers use new features immediately: as soon as a new feature
    is developed, it goes into daily use. The programmers are eating
    their own dog food and polishing their product as they go. This
    also helps polish the workflow, which is quite noticeable, as
    IDEA's user interface is very intuitive and flawless

  • IntelliJ IDEA has an EAP (Early Access Program), which allows
    IDEA enthusiasts to get a new build every week, start using the
    fresh builds, and find new bugs without delay. All developers
    communicate with the EAP participants directly, which eliminates
    any communication hurdles

  • as for pure QA, they simply didn't have any designated QA forces
    until a couple of weeks ago, and all quality activities were
    covered by development

  • Tools: JUnit, although most of the tests aren't classic unit
    tests but rather functional tests (see the sketch below); a
    home-grown continuous integration system
    (http://www.jetbrains.com/teamcity); defect tracking is done
    through JIRA
    (this information was gathered from an insider; thanks to Max
    Shafirov for his help)
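
For illustration, here's a minimal sketch of what such a functional
test on top of JUnit might look like. The scenario (settings that
survive a save/reload round trip) is entirely my own invention, not
JetBrains' actual code; it just shows a test driving a whole workflow
rather than a single method:

    import java.io.*;
    import java.util.Properties;
    import junit.framework.TestCase;

    // A "functional" test in JUnit clothing: instead of checking one
    // method in isolation, it drives a workflow end to end (here:
    // storing and reloading configuration). The scenario is purely
    // illustrative, not an actual JetBrains test.
    public class SettingsRoundTripTest extends TestCase {

        private File file;

        protected void setUp() throws Exception {
            file = File.createTempFile("settings", ".properties");
        }

        public void testSettingsSurviveSaveAndReload() throws Exception {
            Properties saved = new Properties();
            saved.setProperty("editor.font.size", "12");

            OutputStream out = new FileOutputStream(file);
            saved.store(out, "test settings");
            out.close();

            Properties loaded = new Properties();
            InputStream in = new FileInputStream(file);
            loaded.load(in);
            in.close();

            assertEquals("12", loaded.getProperty("editor.font.size"));
        }

        protected void tearDown() throws Exception {
            file.delete();
        }
    }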

Sun JDK:

  • has a well-developed set of test suites for both the JDK and the
    virtual machine
  • has a designated QA team of 100+ people
  • Tools: home-grown tools for test harnesses, task dispatching, and
    results analysis

  • internal defect tracking with an external gateway and the Webbugs
    system, which accepts bug submissions, along with some test cases
    sent in from outside

  • Source code is available through the dev.java.net portal. Bugs
    can be submitted externally; however, the code is under Sun's own
    source license

  • overall number of bugs submitted so far:

Kaffe:

  • a free project with free-time contributions from around the
    world. There are no designated QA forces; most of the quality
    activities are done by the developers themselves. However, they
    have a few smart ways of proving their product stable and
    sound. Here are the main approaches:

    1. continuous integration: Tinderbox on a regular basis
    2. sneaking into others' builds to see if there are problems:
      http://buildd.debian.org/build.php?pkg=kaffe

    3. some in-house configuration scripts around HP's testdrive site
      (http://www.testdrive.hp.com/current.shtml). One can FTP their
      stuff in, log in and run something, then FTP the results
      back.

    4. Kaffe is cross-compiled to the CPUs supported by Debian, which
      helps weed out the odd breakage every now and then, and in
      general is a good test for intrusive patches, as they tend to
      trip up a lot of compiler warnings on non-mainstream platforms

    5. There is also an archive of weekly snapshots created by Kiyo
      Inaba at ftp://ricohgwy.ricoh.co.jp/pub/Lang/Java/Kaffe/ , so
      he regularly notices and catches any breakage that occurs on
      NetBSD.
  • on the industrial side of testing, Kaffe runs regression testing
    and the Mauve test suite. Also, EMMA-based coverage is enforced
    for Mauve (http://builder.classpath.org/~cpdev/coverage/); a
    sketch of a Mauve-style testlet follows this list

  • the Kaffe folks mentioned that they would love to use some of
    Sun's test suites, and I didn't have any comment at the time of
    our email conversation

  • Dalibor Topic made a quite valuable observation in his email
    (cited below): "Besides changes to the core VM, it involves
    picking the right projects to cooperate with, and driving public
    attention towards them, so that all boats are lifted, rather than
    everyone having to develop their own set of insular
    functionality. So, we are in touch with distributions, and in
    close touch with various projects we use code from, and
    participate in their (QA) efforts, as well, and encourage people
    to do the same. A lot of the QA work comes from the integrators
    in distribution using Kaffe to deliver packaged applications and
    libraries to their users."

  • the information above has been provided by robilad@kaffe.org and
    jim@kaffe.org. Thanks a bunch, folks!
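
To give a flavor of what a Mauve test looks like, here is a minimal
sketch of a testlet as I understand the convention: a class
implementing gnu.testlet.Testlet that reports results through the
supplied TestHarness. The checked behavior, the class name, and the
package name are a trivial example of mine, not an actual Mauve test:

    // A minimal Mauve-style testlet; the package name and the checks
    // are illustrative only.
    package gnu.testlet.example;

    import gnu.testlet.TestHarness;
    import gnu.testlet.Testlet;

    public class SubstringTest implements Testlet {
        public void test(TestHarness harness) {
            harness.checkPoint("basic String.substring behavior");
            String s = "kaffe";
            harness.check(s.substring(0, 3).equals("kaf"));
            harness.check(s.substring(5).equals(""));
        }
    }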

Eclipse:

  • I'm expecting to add more information about Eclipse as soon as I
    get it from the team.

  • For now, I was able to find 22,000+ open bugs filed against all
    of the components and sub-projects

As we can see from the above, all these teams mostly use quite
traditional approaches to ensure the quality of their software
systems and applications. I would call these traditional ways of
quality delivery extensive, or brute force, as they involve writing
and executing more tests and/or more developers' eye-rolling.

However, technologies such as BSP and some other concepts of impact
analysis allow you to sharply increase the ROI of a typical quality
organization. Implementations of impact analysis might differ, but
the general idea is well expressed by a couple of rules of thumb (a
toy sketch follows the list):

  1. find pieces of source code/modules/subsystems critical for your
    application

  2. focus your quality efforts on those
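
To make the second rule concrete, here is a toy Java sketch, entirely
my own and not any particular tool's algorithm, of the selection step
of such an impact analysis: given coverage data mapping modules to
the tests that exercise them, a change set picks out the tests worth
re-running first:

    import java.util.*;

    // Toy sketch of impact-analysis-driven test selection.
    public class ImpactSelector {

        private final Map<String, Set<String>> testsByModule =
                new HashMap<String, Set<String>>();

        // Record that a test exercises a module (e.g. from coverage data).
        public void record(String module, String test) {
            Set<String> tests = testsByModule.get(module);
            if (tests == null) {
                tests = new HashSet<String>();
                testsByModule.put(module, tests);
            }
            tests.add(test);
        }

        // Return every test touching any of the changed modules.
        public Set<String> testsFor(Collection<String> changedModules) {
            Set<String> selected = new TreeSet<String>();
            for (String module : changedModules) {
                Set<String> tests = testsByModule.get(module);
                if (tests != null) {
                    selected.addAll(tests);
                }
            }
            return selected;
        }

        public static void main(String[] args) {
            ImpactSelector selector = new ImpactSelector();
            selector.record("io", "IoStressTest");
            selector.record("io", "FileApiTest");
            selector.record("net", "SocketTest");
            // Only the io tests are selected when only io changed:
            System.out.println(selector.testsFor(Arrays.asList("io")));
        }
    }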

There is yet another way to somewhat guarantee the quality of an
application: constant static analysis of your code, which will give
you some sense of security too. I'll be writing about different
static analyzers next time; stay tuned for info about Coverity,
Klocwork, FindBugs, and some others. There was an article recently on
one of the online technology news sites comparing Klocwork
vs. Coverity. Frankly speaking, I read it twice and still didn't get
a clue about the difference between the two :-)
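
To make this concrete, here is a contrived Java snippet of my own
with the kinds of defects such analyzers aim to catch; it is not
output from, or a test of, any of the tools mentioned:

    // Two classic defects static analyzers hunt for; both compile
    // cleanly, which is exactly why tool support helps.
    import java.util.Map;

    public class Suspicious {

        // Reference comparison: == checks identity, not content, so
        // this can be false even when the text matches.
        boolean isAdmin(String role) {
            return role == "admin";  // should be "admin".equals(role)
        }

        // Possible null dereference: Map.get may return null, but
        // the result is used without a check.
        int nameLength(Map<String, String> names, String key) {
            String name = names.get(key);
            return name.length();    // NPE if key is absent
        }
    }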

Another lesson I learned from the observations above is that,
regardless of a platform's popularity, its quality processes might be
carried out equally well or less successfully, especially in the
world of open source software.

Anyway, this has become rather a long post, and I'd like to draw a
conclusion now.

The Sun JDK is going open source quite soon. Considering all the
examples above, I could say that the Sun JDK's quality process might
follow one of the already-known models. However, "already known"
doesn't mean "perfect" or "sufficient".

Thus, I would love to hear your ideas, suggestions, comments,
outbursts, etc. Please post them here, or just send me an email at
kboudnik@gmail.com.
