Oh, go ahead -- prematurely optimize
Recently, I've been reading an article entitled
The Fallacy of Premature Optimization by Randall Hyde. I urge everyone to go read the full article, but I can't help
summarizing some of it here -- it meshes so well with some of my conversations with developers
over the past few years.
Most people can quote the line "Premature optimization is the root of all evil" (which was
popularized by Donald Knuth, but originally comes from Tony Hoare). Unfortunately, I (and
apparently Mr. Hyde) come across too many developers who have taken this to mean that they
don't have to care about the performance of their code at all, or at least not until the code
is completed. This is just wrong.
To begin, the complete quote is actually
We should forget about small efficiencies, say about 97% of the time: premature optimization
is the root of all evil.
I agree with the basic premise of what this says, and also with everything it does not say.
In particular, this quote is abused in three ways.
First, it is only talking about small efficiencies. If you're designing a multi-tier app
that uses the network a lot, you want to pay attention to the number of network calls you
make and the data involved in them. Network calls are a large inefficiency. And
not to pick on network calls -- experienced developers know what things are inefficient,
and know to program them carefully from the start.
Second, Hoare is saying (and Hyde and I agree) that you can safely ignore the small
inefficiencies 97% of the time. That means that you should still pay attention to small
inefficiencies in the remaining 3% -- roughly 1 out of every 33 lines of code you write.
Third, and only somewhat relatedly, this quote feeds the perception that 80% of an
application's time will be spent in 20% of its code, so we don't have to worry about
our code's performance until we find out our code is in that hot 20%.
I'll present one example from GlassFish to highlight those last two points. One day, we
discovered that a particular test case for GlassFish was bottlenecked on calls to Vector.size --
in particular, because of loops like this:
for (int i = 0; i < v.size(); i++)
This is a suboptimal way to process a vector, and one of the 3% of cases you need to pay
attention to. The key reason is the synchronization built into Vector, which
turns out to be quite expensive when this loop is the hot loop in your program. I know,
you've been told that uncontended access to a synchronized block is almost free, but that's
also not quite true -- crossing a synchronization boundary means that the JVM must flush all
instance variables presently held in registers to main memory. The synchronization boundary
also prevents the JVM from performing certain optimizations, because it limits how the JVM
can re-order the code. So we got a big performance boost by re-writing this as
for (int i = 0, j = v.size(); i < j; i++)
Perhaps you're thinking that we needed to use a vector because of threading issues, but
look at that first loop again: it is not threadsafe. If this code is accessed by multiple
threads, then it's buggy in both cases.
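To make both points concrete, here is a minimal sketch (the class and method names are mine, not from the GlassFish code) contrasting the hoisted-size loop with a traversal that is actually safe under concurrent modification -- which requires holding the Vector's lock for the whole loop, not relying on the per-call synchronization inside size() and get():

```java
import java.util.Vector;

public class VectorLoops {
    // Single-threaded (or externally synchronized) case: hoisting size()
    // crosses the Vector's synchronization boundary once, instead of on
    // every iteration of the loop.
    static int sumHoisted(Vector<Integer> v) {
        int sum = 0;
        for (int i = 0, j = v.size(); i < j; i++) {
            sum += v.get(i);
        }
        return sum;
    }

    // Multi-threaded case: neither loop form is safe on its own, because
    // another thread can shrink the vector between the size() check and
    // the get(). Holding the Vector's own lock for the whole traversal
    // makes it safe.
    static int sumSynchronized(Vector<Integer> v) {
        int sum = 0;
        synchronized (v) {
            for (int i = 0, j = v.size(); i < j; i++) {
                sum += v.get(i);
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        Vector<Integer> v = new Vector<>();
        for (int i = 1; i <= 5; i++) {
            v.add(i);
        }
        System.out.println(sumHoisted(v));      // prints 15
        System.out.println(sumSynchronized(v)); // prints 15
    }
}
```

Note that synchronized(v) works here because Vector's own methods synchronize on the instance itself; that is a documented property of Vector, not of collections in general.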
What about that 80/20 rule? It's true that we found this case because it was consuming a lot
(not 80%, but still a lot) of time in our program. [Which also means that fixing this case
is tardy optimization, but there it is.]
But the problem is that there wasn't just
one loop written like this in the code; there were (and still are...sigh) hundreds. We
fixed the few that were the worst offenders, but there are still many, many places in the
code where this construct lives on. It's considered "too hard" to go change all the places
where this occurs (though NetBeans could refactor it all pretty quickly, there's a
risk that subtle differences in the loops would mean some would need to be refactored
by hand).
When we addressed performance in GlassFish V2 in order to get our excellent SPECjAppServer results,
we fixed a lot of little things like this, because we were spending 80% of our time in about 50% of
our code. It's what I call performance death by a thousand cuts: it's great when you can
find a simple CPU-intensive set of code to optimize. But it's even better if developers
pay some attention to writing good, performant code at the outset and you don't have to
track down hundreds of small things to fix.
Hyde's full article
has some excellent references for further reading, as well as other important points about
why, in fact, paying attention to performance as you're developing is a necessary part of
the development process.