
Java. Quality. Metrics (part 2)

Posted by cos on September 30, 2005 at 9:05 PM PDT

Hi there!



Surprisingly, my last post was ranked #1 on Google for the 'java quality'
search and stayed in that position for a few days. My friends were
wondering how much I had paid to gain this honorable
position. Honestly: I didn't pay a penny for it, and I only have to thank
those of you who spent time reading it. So, thank you! I also hope not to
disappoint you this time.



Moving closer to the promises given in this blog
(http://weblogs.java.net/blog/cos/archive/2005/09/java_quality_me_1.html),
I will discuss some not totally innovative, but still interesting,
techniques for improving the effectiveness of quality development.



Firstly, I'd like to talk about the code coverage methodology (sometimes
referred to as test coverage). It is a very effective quality indicator,
yet relatively simple to collect and analyze. I will not spend much
time describing this technique here, considering the variety of good
information sources. If you're willing to educate yourself about the
topic, you might want to check a paper like this one:
http://www.bullseye.com/coverage.html
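To make the distinction concrete, here is a tiny, self-contained Java sketch (the class and method names are mine, not from any real project) of why a statement-coverage figure alone can mislead: a single test executes every statement of classify(), yet one branch outcome is never exercised.

```java
// CoverageDemo.java -- illustrative only; names are made up.
public class CoverageDemo {

    // classify(500) executes every statement below (100% statement
    // coverage), but the "false" outcome of the if is never taken:
    // branch coverage would still report a gap here.
    static String classify(int n) {
        String label = "small";
        if (n > 100) {
            label = "large";
        }
        return label;
    }

    public static void main(String[] args) {
        System.out.println(classify(500)); // covers every statement
        // classify(5) would be needed to exercise the untaken branch
    }
}
```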



One of the common misunderstandings of this interesting technique is
as follows: you shouldn't consider coverage numbers the final point of
your quality development, but rather the starting one.



I mentioned it once and I would like to emphasize it again: code
coverage merely shows the amount of tested code. It doesn't
address the quality of the test coverage, nor its effectiveness; it doesn't
guarantee that the most important pieces of the source code are covered.
Nevertheless, code coverage is a valuable measure, and I'm not in a
position to 'misunderestimate (C)' it.
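As a contrived illustration of that effectiveness point (the bug and the tests below are invented), a test can drive a method's coverage to 100% and still verify nothing:

```java
// WeakTestDemo.java -- high coverage, low effectiveness, on purpose.
public class WeakTestDemo {

    static int add(int a, int b) {
        return a - b; // deliberate bug: subtraction instead of addition
    }

    // Executes add(), so every line of it shows up as covered,
    // but no result is ever checked: the bug goes unnoticed.
    static void weakTest() {
        add(2, 3);
    }

    // Exactly the same coverage, but this one detects the defect.
    static boolean strongTest() {
        return add(2, 3) == 5;
    }

    public static void main(String[] args) {
        weakTest();                       // coverage tool reports green
        System.out.println(strongTest()); // prints false: bug detected
    }
}
```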



Gathering coverage data in a pure Java environment is quite a
straightforward task, and there's a variety of tools for the job (sometime
I'll talk about mixed environments, i.e. where Java code coexists with
native code).
For a start, any of the widely available coverage tools should do.
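For the curious, here is a hand-written sketch of what build instrumentation boils down to. Real tools inject equivalent counters at the bytecode level; the explicit hits[] array is my own stand-in, purely for illustration. Each basic block bumps a counter, which is where the per-block execution counts in coverage reports come from.

```java
// InstrumentedExample.java -- manual imitation of coverage instrumentation.
public class InstrumentedExample {

    static final int[] hits = new int[3]; // one counter per basic block

    static int abs(int n) {
        hits[0]++;         // block 0: method entry
        if (n < 0) {
            hits[1]++;     // block 1: negative branch
            return -n;
        }
        hits[2]++;         // block 2: fall-through branch
        return n;
    }

    public static void main(String[] args) {
        abs(-4);
        abs(7);
        // Entry ran twice, each branch once:
        System.out.println(hits[0] + " " + hits[1] + " " + hits[2]); // 2 1 1
    }
}
```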



However, you still have to take care of your build's
instrumentation and of storage for all the produced information, which
can amount to a very significant chunk of data at times. Ideally,
you might want to use a database server, which seems to do the job
right.
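To sketch the storage side (the class name, the record shape, and the best-run merge policy below are my assumptions, and the in-memory map merely stands in for the database server a real setup would reach through JDBC):

```java
import java.util.HashMap;
import java.util.Map;

// CoverageStore.java -- toy stand-in for a coverage-results database.
public class CoverageStore {

    // className -> { coveredBlocks, totalBlocks }
    private final Map<String, int[]> rows = new HashMap<>();

    public void record(String className, int covered, int total) {
        // Merge repeated runs: keep the best coverage seen so far.
        rows.merge(className, new int[] { covered, total },
                (old, cur) -> old[0] >= cur[0] ? old : cur);
    }

    public double percent(String className) {
        int[] r = rows.get(className);
        return r == null ? 0.0 : 100.0 * r[0] / r[1];
    }

    public static void main(String[] args) {
        CoverageStore store = new CoverageStore();
        store.record("org/openide/explorer/ExplorerPanel.java", 7, 10);
        store.record("org/openide/explorer/ExplorerPanel.java", 9, 10);
        System.out.println(
            store.percent("org/openide/explorer/ExplorerPanel.java")); // 90.0
    }
}
```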



Processing the collected data and visualizing the results might not be
that easy at times, and may require a surrounding framework and/or
infrastructure.

    Here's the approach we've been using in-house to achieve the above goals.

  • once a month, release engineering prepares an instrumented JDK
    build for coverage testing (this might not be required in your
    particular case, but, of course, I don't know your situation)

  • component quality engineering teams run their test sets and
    gather results, which can be collected in a central storage during
    the execution stage or right after it

  • collected results might later be visualized on demand through a
    Web interface or another device. It is really helpful, even for
    manual code inspection, to present a product's source code in
    different colors, e.g. red for non-covered areas of code and green
    otherwise. Having an execution counter, associated with the test(s)
    that produced it, for each basic block of code can be very handy as
    well. You might also consider showing version control data, i.e.
    check-in information pointing to the latest code update or
    something. I hope you've got the idea already; still, let me
    illustrate my point with the tiny snapshot at the end of this post.
    (A friend of mine, Roman Shaposhnick, created this tool a while ago
    for Sun's compiler work; after some minor modifications it was
    applied to Java quality as well.)

  • coverage data gets used for managerial reports, exit criteria
    preparation, et cetera

  • coverage statistics are used in the process of quality metrics
    preparation. Metric subsystems work with the coverage storage
    facility through some kind of API, e.g. JDBC

  • And please keep in mind: most of the quantitative metrics of
    quality development are about trends in the first place. So,
    instead of doing code coverage once per release, you might find it
    useful to collect data on a regular basis and build up a tendency
    of the coverage's improvement, or otherwise :-)
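The trend idea can be sketched in a few lines (the per-build percentages below are invented): compare coverage figures across regular builds rather than looking at a single end-of-release number.

```java
// CoverageTrend.java -- toy trend check over per-build coverage numbers.
public class CoverageTrend {

    // Classifies the overall direction between the first and the
    // latest measurement; a real report would plot every point.
    static String trend(double[] perBuildPercent) {
        double delta = perBuildPercent[perBuildPercent.length - 1]
                     - perBuildPercent[0];
        if (delta > 0) return "improving";
        if (delta < 0) return "regressing";
        return "flat";
    }

    public static void main(String[] args) {
        double[] weekly = { 61.2, 63.5, 64.1, 67.0 }; // invented numbers
        System.out.println(trend(weekly)); // prints "improving"
    }
}
```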



    Sorta disclaimer, though: my opinion may well be arguable. Well,
    leave me a comment if you disagree with it and I'll try to address
    your concerns in my future posts. See you soon!


    ExplorerPanel.java

    org/openide/explorer/ExplorerPanel.java code coverage

    1.2 psk   1:            private final class PropL extends Object implements PropertyChangeListener {
              2: 12728 ->      PropL() {}
    1.1 ma    3:
    1.1 ma    4:            public void propertyChange(PropertyChangeEvent evt) {
              5: 12728 ->       if (evt.getSource() != manager) {
              6:                    return;
              7:                }
    1.3 rvs   8:
              9: 12728 ->       if (ExplorerManager.PROP_EXPLORED_CONTEXT.equals(evt.getPropertyName())) {
             10: 12728 ->           updateTitle ();
    1.1 ma   11:                    return;
    1.1 ma   12:                }
    1.1 ma   13:            }
    1.1 ma   14:        }