
What's new in Glassfish V3.1 Performance

Posted by sdo on March 1, 2011 at 9:55 AM PST

By now, you are hopefully well aware that Glassfish 3.1 has been released. Because the performance group has been a little quiet lately, maybe you're thinking there aren't a lot of interesting performance features in this release. In fact, there are two key performance improvements: one benefits developers, and the other is important for anyone using Glassfish's new clustering and high-availability features.

Let's start with developers. One of our primary goals has always been to make the development experience fast and lightweight. This was of course a key factor driving the modularization of Glassfish V3: in V3, we finally had an architecture that allowed the server to load only those Java EE features that a developer was actually using. And the results were quite satisfactory. Given all our previous progress, what -- if anything -- could we actually do in Glassfish V3.1?

Developer Metrics Improve by 29%

With a lot of hard work and a laser-like focus by our development team, we managed to improve our core metric of a "developer scenario" by 29%. This scenario includes starting the appserver, deploying a JSP-based application and accessing its URL, and then a cycle of three changes to the application, each followed by a test of the application URL. We aggregate the entire time for that scenario as our primary metric, but the table below shows the improvements in each of these areas (times in seconds):

BUILD            STARTUP   DEPLOY   REDEPLOY (AVERAGE)
Glassfish V3.1     2.96      3.14        0.9
Glassfish V3.0     3.28      4.35        1.59

As you can see, our improvement here is across the board, covering all the activities that make up the development cycle. Well, most of them: we haven't figured out how to automatically find bugs in your program, so your testing will take the same amount of time. But at least it will be a little more pleasant, since your redeployment cycle will be that much faster in Glassfish V3.1.
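For reference, the deploy/redeploy cycle measured above corresponds to the standard asadmin workflow; the domain, application name, and archive below are placeholders, and these commands of course require a local Glassfish installation:

```shell
# Start the domain, deploy an application, then redeploy after a change.
# "domain1", "myapp", and "myapp.war" are placeholder names.
asadmin start-domain domain1
asadmin deploy myapp.war                   # initial deployment
# ... edit the application, rebuild the archive, then:
asadmin redeploy --name myapp myapp.war    # pick up the changes
```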

Let me also mention why we test the entire cycle of development and not just startup -- particularly because I've seen some recent blogs touting only the startup performance of other now-modularized Java EE servers. In the past, we've made the mistake of focusing solely on startup performance: in Sun Java System Application Server 9.1, we introduced a quick-start feature that improved startup quite significantly. The problem is that it simply deferred necessary work until the first time the server was accessed, and the total time to start the server and then load its admin console actually got worse (in fact, the performance of anything socket-related suffered, because the architecture of that old server didn't easily support deferring activities). In the end, pure startup time isn't what matters -- what matters is how quickly you can get all of your work done. Otherwise, we'd all do what Tomcat-based servers do for startup: return to the command prompt immediately, before the server is up, to make it look like startup is instantaneous. Of course, if you immediately access the server at that point, you'll get an error because it hasn't finished initializing -- but hey, at least it started fast.

HA Performance Improves by 33%

On the other end of the spectrum, Glassfish V3.1 contains some quite impressive improvements in its high-availability architecture. This is somewhat old news: when we did project Sailfin for our SIP server, we re-architected the entire failover stack to make it faster and more scalable. But although Sailfin is based on Glassfish, Glassfish V3 didn't yet support clustering at all. V3.1 is the first time we've been able to bring that architectural work forward into the main Glassfish release.

In Glassfish V3.1, we support in-memory replication: one server is a primary server and holds the session object. Whenever the session object is modified, the data is sent to a secondary server elsewhere in the cluster so that if the primary server fails, the secondary can supply the session information. This is actually a fairly common implementation of high availability, though of course it does not address the situation of multiple failures. Still, the speed benefits you get from replicating to another server (vs. replicating to something like HADB) are quite significant. We introduced in-memory replication in SJSAS 9.1, and at the time had a nice performance gain compared to traditional database replication.
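As a sketch of how this is enabled: the application is marked distributable in web.xml, and the replicated persistence type is selected in glassfish-web.xml. The element and property names below are as I recall them from the GlassFish 3.1 documentation -- verify them against your release:

```xml
<!-- glassfish-web.xml fragment: request in-memory (replicated) session
     persistence. The web.xml must also declare <distributable/>.
     Names here should be checked against the GlassFish 3.1 docs. -->
<glassfish-web-app>
  <session-config>
    <session-manager persistence-type="replicated">
      <manager-properties>
        <!-- "session" replicates the full session on each request;
             "modified-attribute" ships only attributes passed to setAttribute() -->
        <property name="persistenceScope" value="modified-attribute"/>
      </manager-properties>
    </session-manager>
  </session-config>
</glassfish-web-app>
```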

In Glassfish V3.1, we've taken that architecture and optimized it significantly; it is now based on the scalable Grizzly adapter, which uses Java NIO for its underpinnings. We've also optimized session serialization and the general implementation to get a 33% improvement in performance for in-memory replication in Glassfish V3.1 compared to in-memory replication in SJSAS 9.1.1. And again, we've tried to pay attention to all the ways it might be used. We support full-session scope, where the entire HTTP session and the stateful session beans (SFSBs) are replicated on each request. We also support modified-attribute scope, where, for HTTP sessions, only those attributes that have been marked as changed are replicated. Clearly the modified-attribute scope will perform better, but it does rely on the application calling setAttribute() to mark an attribute as modified (which, while not mandated by the Java EE specification, is the common technique adopted by virtually all Java EE servers).
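To make the modified-attribute idea concrete, here is a minimal, self-contained sketch -- not GlassFish's actual implementation, and the class names are invented -- of a session that tracks which attributes were marked dirty via setAttribute(), so that a replication pass need only ship those to the secondary:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: a session that remembers which attributes
// were marked via setAttribute(), mimicking modified-attribute scope.
class TrackingSession {
    private final Map<String, Object> attributes = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();

    void setAttribute(String name, Object value) {
        attributes.put(name, value);
        dirty.add(name); // mark this attribute for the next replication pass
    }

    Object getAttribute(String name) {
        return attributes.get(name);
    }

    // Returns only the modified attributes and clears the dirty set --
    // the delta a replication pass would send to the secondary server.
    Map<String, Object> drainModified() {
        Map<String, Object> delta = new HashMap<>();
        for (String name : dirty) {
            delta.put(name, attributes.get(name));
        }
        dirty.clear();
        return delta;
    }
}

public class ModifiedAttributeDemo {
    public static void main(String[] args) {
        TrackingSession session = new TrackingSession();
        session.setAttribute("cart", "3 items");
        session.setAttribute("user", "alice");
        session.drainModified();                 // first pass ships both attributes

        session.setAttribute("cart", "4 items"); // only the cart changes
        Map<String, Object> delta = session.drainModified();
        System.out.println(delta);               // second pass ships just the cart entry
    }
}
```

Full-session scope, by contrast, would serialize and ship the whole attribute map on every request, which is why the modified-attribute variant performs better for sessions where only a small part changes per request.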

In practical terms, the improvement we see holds for both kinds of tests. Like all our performance measurements, we take some basic workloads that mimic a typical web or EE application and see how many users we can scale up to while keeping the response time for requests within some limit (typically 0.8 seconds), often with a CPU limit on the machine as well (e.g., we don't want to use more than 60% of the CPU, because in the event of a failure the remaining instances must take on more load). HTTP-only or HTTP-plus-EJB, modified-attribute scope or full-session scope: all see about a 33% increase in the number of users we can support while still meeting that 0.8-second response time.

General Performance

Of course, we haven't neglected general performance either; we've run our usual battery of tests to ensure that Glassfish V3.1 hasn't regressed in performance in any area. The Performance Tuner is back in Glassfish V3.1 to help optimize your production environment so that you get the very best performance Glassfish has to offer. And of course Glassfish V3 remains the only open-source application server to submit performance results on SPECjAppServer 2004.