Switching tracks

Posted by sdo on July 9, 2007 at 2:06 PM PDT

One of the lesser-known features of Java is that it has two different
bytecodes for switch statements: a generic one (lookupswitch) and an
(allegedly more optimal) table-driven one (tableswitch). The compiler
automatically generates one or the other depending on the values in the
switch statement: the table-driven form is used when the switch values
are close to sequential (possibly with a few gaps), while the generic
form is used in all other cases. It's the sort of thing that intrigues
performance-oriented developers: is the table-driven statement really
more optimal? Is it worth coercing the variable involved in a switch
statement so that the compiler can generate a table-driven statement?
Is there ever a case in a real-world program where this would even
matter? Interesting questions, but since I assumed the answer to the
last one was "no", I never really thought about the first few.
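To see the two forms for yourself, here's a minimal sketch (class and method names are my own) that you can compile and then disassemble with javap -c:

```java
public class SwitchDemo {
    // Dense, near-sequential cases: javac emits a tableswitch
    // (a small gap is fine; the compiler just pads the table).
    static String dense(int n) {
        switch (n) {
            case 3: return "three";
            case 4: return "four";
            case 5: return "five";
            case 7: return "seven";
            default: return "other";
        }
    }

    // Widely scattered cases: javac falls back to a lookupswitch,
    // since a table spanning 300..50000 would be absurdly large.
    static String sparse(int n) {
        switch (n) {
            case 300: return "three hundred";
            case 4000: return "four thousand";
            case 50000: return "fifty thousand";
            default: return "other";
        }
    }

    public static void main(String[] args) {
        // After compiling, run "javap -c SwitchDemo" to see
        // tableswitch in dense() and lookupswitch in sparse().
        System.out.println(dense(4) + " / " + sparse(300));
    }
}
```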

Now, however, I'm looking at some profiles of Glassfish V2, and I find
that when running a particular application, we're spending a full 1% of
our time in this method:

protected java.util.logging.Level convertLevel(int level) {
    int index = level / 100;
    switch (index) {
        case 3: return Level.FINEST;
        case 4: return Level.FINER;
        case 5: return Level.FINE;
        case 7: return Level.CONFIG;
        case 8: return Level.INFO;
        case 9: return Level.WARNING;
        case 10: return Level.SEVERE;
        default: return Level.FINER;
    }
}
Seems like a pretty simple method to be spending so much time in (and
let's face it, sampling profilers may overstate their time for a method
like this). So I dug in a little further. The level value passed to
this method is always exactly divisible by 100: it's not the case that
level can be 300, 305, and 310. So there is a one-to-one correspondence
between the integers passed to the method and the Level object
returned. So I was rather impressed that the original author of this
code had known enough arcane Java trivia to know that he could coerce
the argument to get the table-driven switch statement.

Alas, if only he'd taken the next step to see if the performance
difference was worthwhile. It turns out that it wasn't: removing the
division from this method and recasting the switch statement to values
of 300, 400, and so on eliminated all the time the profiler attributed
to this method and resulted in a .5% improvement in the way the
application ran. I also did some quick micro-benchmarking of the method
and discovered that if I didn't need to coerce the argument into the
switch statement (that is, if I passed in values of 3, 4, 5, etc. to
begin with), the performance of the method was essentially the same, but
adding the division statement to coerce the argument slowed down
execution of the method quite significantly.
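For illustration, here's roughly what the division-free version looks like (a sketch based on the description above; the actual Glassfish source may differ slightly):

```java
import java.util.logging.Level;

public class LevelConverter {
    // Switch directly on the raw level value. The compiler now emits the
    // generic (lookupswitch) form instead of a tableswitch, but skipping
    // the division on every call more than makes up for it.
    protected Level convertLevel(int level) {
        switch (level) {
            case 300: return Level.FINEST;
            case 400: return Level.FINER;
            case 500: return Level.FINE;
            case 700: return Level.CONFIG;
            case 800: return Level.INFO;
            case 900: return Level.WARNING;
            case 1000: return Level.SEVERE;
            default: return Level.FINER;
        }
    }
}
```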

At .5% of performance, I'm not sure that this is the real-world example
of where this would ever matter -- though when you provide a platform
for other people's applications, you worry about your operations being
as streamlined as possible. But it is another example of why you should
test your code before making assumptions about how it will perform, and
particularly before writing code to work around a potential performance
problem.
