SJSAS 9.1 (Glassfish V2) posts new SPECjAppServer 2004 result

Posted by sdo on July 10, 2007 at 7:27 AM PDT

Today, Sun officially announced SPECjAppServer 2004 scores on our Sun Java
System Application Server 9.1, which (as you no doubt know) is the productized
version of the open-source Glassfish V2 project. We've previously submitted
results for SJSAS 9.0 (aka Glassfish V1), which at the time we were quite
proud of: they were the only SPECjAppServer scores based on an open-source
application server, and that gave us quite a good price/performance story.
Considering where we started, I was happy to conclude that those scores were
"good enough."

"Good enough" is no longer good enough. Today, we posted the highest ever
score for SPECjAppServer 2004 on a single Sun Fire T2000 application server:
883.66 JOPS@Standard [1]. The Sun Fire T2000 in this case has a 1.4ghz CPU; the
application also uses a Sun Fire T2000 running at 1.0ghz for its database tier.
This result is 10% higher
than WebLogic's score of 801.70 JOPS@Standard [2] on the same appserver machine.
In addition, this result is almost 70% higher than our previous score of 521.42
JOPS@Standard on a Sun Fire T2000 [3], although that Sun Fire T2000 was running at only 1.2ghz. So
that doesn't mean that we are 70% faster than we were, but we are quite
substantially faster and are quite pleased to have the highest ever score
on the Sun Fire T2000.
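
(For the curious, those percentages are just the ratios of the published scores:

    883.66 / 801.70 ≈ 1.10   (10% higher)
    883.66 / 521.42 ≈ 1.69   (almost 70% higher)

though, as noted, the second comparison is against a slower 1.2 GHz machine.)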

This result is personally gratifying to me in many ways, and I am proud
of it (and proud of the work by the appserver engineers that it represents)
on many, many levels. But it is just a benchmark, so let me touch on two
points about what that means.

First, vendors and their marketing departments love to play leap-frog games
with benchmarks.
My favorite example of this:
some time ago, BEA posted a score of 615.64 JOPS@Standard [4] on the 1.2 GHz T2000,
only to be outdone a few months later by IBM WebSphere's score of 616.22
JOPS@Standard [5] on the same system, a difference of less than 0.1%. It's good
marketing press, but at some point that sort of difference becomes slightly
ridiculous to end users.

So yes, at some point it's conceivable that someone will post a higher
score on this machine than we have; it's conceivable that I'll be back touting
some improvements on our score (because my protestations about benchmarks
aside, I'm not above playing the game either). But don't let any of that keep
you from the point: this is a result that fundamentally changes the nature
of that game.
We used to be content with having a good result in terms of price/performance
and watching IBM, Oracle, and
BEA leap-frog among themselves in terms of raw performance. Now, we're
the raw performance leader. There will be jockeying for position in the
future, but we've changed forever the set of contenders. [We're also still
quite interested in being price/performance leaders, by the way, which is
why we also published a score this week using the free, open-source Postgres
database.]

Second, remember that this is just a benchmark. Will you see similar results
on your application? It depends. SPECjAppServer 2004 doesn't use EJB 3.0, JPA,
WebServices, JSF, or any of a host of Java EE technologies (and frankly,
I'm pretty happy with our performance in most of those areas; see, for example,
this article or this one on our WebServices performance). On the other
hand, its performance is significantly affected by improvements we made to
read-only EJBs, remote
EJB invocation, and co-located JMS consumers and producers. So some of the
improvements we've made may be in areas your application doesn't even use.
[That's another reason I was happy with our previous scores: they established
us as a viable appserver vendor, and I knew that customers who benchmarked
their own applications would likely see better relative performance than
that displayed by SPECjAppServer.]
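
To make that concrete: SPECjAppServer 2004 is a J2EE 1.4 workload, so one of
its hot paths is the old-style EJB 2.1 remote invocation. Here's a minimal
sketch of what that call path looks like; the OrderService names and the
"ejb/OrderService" JNDI binding are hypothetical illustrations I made up,
not the benchmark's actual components.

    // Sketch of the EJB 2.1 remote-invocation path (J2EE 1.4 style).
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.EJBHome;
    import javax.ejb.EJBObject;
    import javax.naming.InitialContext;
    import javax.rmi.PortableRemoteObject;

    // Remote component interface: every business method can throw RemoteException.
    interface OrderService extends EJBObject {
        int submitOrder(String item, int quantity) throws RemoteException;
    }

    // Remote home interface, used to create references to the bean.
    interface OrderServiceHome extends EJBHome {
        OrderService create() throws CreateException, RemoteException;
    }

    public class RemoteEjbClient {
        public static void main(String[] args) throws Exception {
            // Assumes JNDI is configured to point at a running appserver.
            InitialContext ctx = new InitialContext();
            Object ref = ctx.lookup("ejb/OrderService");

            // RMI-IIOP references require narrow() rather than a plain cast.
            OrderServiceHome home = (OrderServiceHome)
                    PortableRemoteObject.narrow(ref, OrderServiceHome.class);

            // Every call here is marshalled through the ORB; that per-call
            // overhead is the kind of thing the 9.1 work reduced.
            OrderService svc = home.create();
            System.out.println("order id: " + svc.submitOrder("widget", 3));
        }
    }

If your application uses only local interfaces, improvements to this path
won't show up for you at all, which is exactly my point.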

Don't get me wrong: we have also made substantial performance improvements
across the board, in the servlet connector and container, in JSP processing,
in the local EJB container, in connection pooling, in CMP 2.1, and so on. This
is really an important performance release for us. But as I've always said: the
only realistic benchmark for your
environment is your application. So go grab a recent build of Glassfish V2,
and see for yourself.
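
If you want a quick, crude way to put a number on "see for yourself," something
like the sketch below will do for a first pass. The URL, thread count, and run
length are assumptions you should tune for your environment; for serious
testing, use a real load generator (Faban, JMeter, etc.).

    // A deliberately crude driver: N threads fetching one URL for 30 seconds,
    // reporting requests/second. Illustrative only, not a real harness.
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.atomic.AtomicLong;

    public class MiniLoadDriver {
        static final AtomicLong requests = new AtomicLong();

        public static void main(String[] args) throws Exception {
            final String target = args.length > 0 ? args[0]
                    : "http://localhost:8080/";    // Glassfish's default HTTP port
            final int threads = 8;                 // tune for your client machine
            final long runMillis = 30000L;         // 30-second run
            final long deadline = System.currentTimeMillis() + runMillis;

            Thread[] workers = new Thread[threads];
            for (int i = 0; i < threads; i++) {
                workers[i] = new Thread(new Runnable() {
                    public void run() {
                        byte[] buf = new byte[8192];
                        while (System.currentTimeMillis() < deadline) {
                            try {
                                HttpURLConnection conn = (HttpURLConnection)
                                        new URL(target).openConnection();
                                InputStream in = conn.getInputStream();
                                while (in.read(buf) != -1) { /* drain the response */ }
                                in.close();
                                requests.incrementAndGet();
                            } catch (Exception e) {
                                // failures aren't counted; a real harness tracks them
                            }
                        }
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) {
                t.join();
            }
            System.out.println(requests.get() * 1000 / runMillis + " requests/second");
        }
    }

Point it at your own application's pages, not mine; that's the whole idea.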

Now, as always, some disclosures:
SPEC and the benchmark name SPECjAppServer 2004 are registered trademarks of the Standard Performance Evaluation Corporation.
Competitive benchmark results stated above reflect results published
on www.spec.org as of 07/10/07. The comparison presented is based on
application servers run on the Sun Fire T2000 1.2 GHz and 1.4 GHz servers.
For the latest SPECjAppServer 2004 benchmark results, visit http://www.spec.org/. Referenced scores:

[1] One Sun Fire T2000 (1 chip, 8 cores) appserver and one Sun Fire T2000 (1 chip, 6 cores) database; 883.66 SPECjAppServer2004 JOPS@Standard

[2] One Sun Fire T2000 (1 chip, 8 cores) appserver and one Sun Fire T2000 (1 chip, 6 cores) database; 801.70 SPECjAppServer2004 JOPS@Standard

[3] One Sun Fire T2000 (1 chip, 8 cores) appserver and one Sun Fire T2000 (1 chip, 6 cores) database; 521.42 SPECjAppServer2004 JOPS@Standard

[4] One Sun Fire T2000 (1 chip, 8 cores) appserver and one Sun Fire V490 (4 chips, 8 cores, 2 cores/chip) database; 615.64 SPECjAppServer2004 JOPS@Standard

[5] One Sun Fire T2000 (1 chip, 8 cores) appserver and one Sun Fire X4200 (2 chips, 4 cores, 2 cores/chip) database; 616.22 SPECjAppServer2004 JOPS@Standard
