A Glassfish Tuning Primer

Posted by sdo on December 3, 2007 at 11:25 AM PST

When I reported our recent excellent SPECjAppServer 2004 scores, one glassfish user responded:

I sure wish you guys were able to come up with a thorough write up
about the SPEC Benchmark architecture, and the techniques you guys
used to get the numbers you get and, more importantly, how those
techniques might apply to every day applications we run in the wild.

While we do have a full performance-tuning chapter in the glassfish/SJSAS docset, I can understand the appeal of a quick cheat-sheet for getting the most out of glassfish in production. Most of this information has appeared in various blogs, particularly those by Jeanfrancois, who is expertly focused on making sure that grizzly and our http path are as fast as possible. Still, I hope that gathering it all into this quick list makes for a good single-source summary.



One thing to note about these guidelines: a lot of glassfish configurations (particularly when you start with a developer profile) are optimized for developers. In development, the performance trade-offs are different: you'll give up a little runtime speed here and there to make the appserver start faster or deployments finish faster. In production, you'll make the opposite trade-offs. So if you wonder why some of the things in this list aren't the default settings, that's probably why.

Tune your JVM

The first step is to tune the JVM, which is of course different for every deployment. These are the options set via the jvm-options element in your domain.xml (or the JVM Options page in the admin console). As a general rule, I like to use the throughput collector with a large heap and a moderate-sized young generation: that makes young GCs quite fast. It will lead to a periodic full GC, but the impact of that on total throughput is usually quite minimal. If you absolutely cannot tolerate a pause of a few seconds, you can look at the concurrent collector, but be aware that it will reduce your total throughput. A good set of JVM arguments to start with is:

-server -Xmx3500m -Xms3500m -Xmn1500m -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:+AggressiveOpts

On a CMT machine like the SunFire T5220 server, you'll want to use large pages of 256m, and a heap that is a multiple of that:
-server -XX:LargePageSizeInBytes=256m -Xmx2560m -Xms2560m -Xmn1024m -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=16 -XX:+AggressiveOpts

More details on the impact of CMT machines are available at Sun's CoolThreads website.



Make sure to remove the -client option from your jvm options, to include the -Dcom.sun.enterprise.server.ss.ASQuickStartup=false flag, and -- if you are using CMP 2.1 entity beans -- to include -DAllowMediatedWriteInDefaultFetchGroup=true.
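
For reference, here is roughly how those options end up in the java-config section of domain.xml. This is only a sketch: merge it into the java-config your installation already has (whose other attributes and options I've omitted here) rather than replacing it.

<!-- inside <config name="server-config">; existing java-config attributes
     omitted. Remove any -client option that is already present. -->
<java-config>
  <jvm-options>-server</jvm-options>
  <jvm-options>-Xmx3500m</jvm-options>
  <jvm-options>-Xms3500m</jvm-options>
  <jvm-options>-Xmn1500m</jvm-options>
  <jvm-options>-XX:+UseParallelGC</jvm-options>
  <jvm-options>-XX:+UseParallelOldGC</jvm-options>
  <jvm-options>-XX:+AggressiveOpts</jvm-options>
  <!-- production flag recommended above -->
  <jvm-options>-Dcom.sun.enterprise.server.ss.ASQuickStartup=false</jvm-options>
</java-config>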

Tune the default-web.xml

Settings in the default-web.xml file are overridden by an application's web.xml, but I find it easier to set production-ready values in the default-web.xml file so that all applications will get them. In particular, under the JspServlet definition, add these two parameters:

<init-param>
  <param-name>development</param-name>
  <param-value>false</param-value>
</init-param>
<init-param>
  <param-name>genStrAsCharArray</param-name>
  <param-value>true</param-value>
</init-param>

That will mean you cannot change JSP pages on your production server without redeploying the application, but that's generally what you want anyway.
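
For context, here is roughly how those parameters sit inside the JspServlet definition in default-web.xml. The servlet-name and class shown are what a typical glassfish default-web.xml uses; edit the definition already in your file rather than pasting this in wholesale.

<servlet>
  <servlet-name>jsp</servlet-name>
  <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
  <!-- existing init-params stay as they are; add these two -->
  <init-param>
    <param-name>development</param-name>
    <param-value>false</param-value>
  </init-param>
  <init-param>
    <param-name>genStrAsCharArray</param-name>
    <param-value>true</param-value>
  </init-param>
</servlet>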



One note about this: the default-web.xml file is only consulted when an application is deployed. So make sure you change the file and then deploy (or redeploy) your application, or you won't see any benefit from this change.

Tune the HTTP threads

As you know, there are two parameters here: the HTTP acceptor threads and the request-processing threads. These values have unfortunately had different meanings in a few of our releases, and some confusion about them remains. The acceptor threads are used both to accept new connections to the server and to schedule existing connections when a new request comes over them. In general, you need one of these for every 1-4 cores on your machine; no more than that (unlike, say, SJSAS 8.1, where this setting had a completely different meaning). The request-processing threads run the HTTP requests. You want "just enough" of those: enough to keep the machine busy, but not so many that they compete for CPU resources -- if they compete for CPU, your throughput will suffer greatly. Having too many request-processing threads is one of the biggest performance problems I see on many machines.



How many is "just enough"? It depends, of course -- if the HTTP requests don't use any external resource and are hence CPU-bound, you want only as many request-processing threads as you have CPUs on the machine. But if a request makes a database call (even indirectly, say through a JPA entity), it will block while waiting for the database, and you could profitably run another thread. So this takes some trial and error: start with the same number of threads as you have CPUs and increase it until you no longer see an improvement in throughput.
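
If you'd rather edit domain.xml directly than use the admin console, these two settings live under the http-service element. Here is a sketch in glassfish v2 syntax; the surrounding attributes and elements are omitted, and the counts shown (8 request threads, 1 acceptor) are just illustrative starting points for an 8-core box.

<http-service>
  <!-- request-processing threads: start near the CPU count and tune up -->
  <request-processing thread-count="8"/>
  <!-- one acceptor thread per 1-4 cores is plenty; other listener attributes unchanged -->
  <http-listener id="http-listener-1" port="8080" acceptor-threads="1"/>
</http-service>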

Tune your JDBC drivers

Speaking of databases, it's quite important with glassfish to use JDBC drivers that perform statement caching: this lets the appserver reuse prepared statements, which is a huge performance win. The JDBC drivers bundled with the Sun Java System Application Server provide such caching; Oracle's standard JDBC drivers do as well, as do recent drivers for PostgreSQL and MySQL. Whichever driver you use, make sure to configure its statement-caching properties when you set up the JDBC connection pool -- e.g., for Oracle's JDBC drivers, include the properties

ImplicitCachingEnabled=true
MaxStatements=200
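
In domain.xml, those properties go on the jdbc-connection-pool element. Here is a sketch for an Oracle pool; the pool name and datasource class are just placeholders for whatever your pool already uses.

<jdbc-connection-pool name="OraclePool"
                      datasource-classname="oracle.jdbc.pool.OracleDataSource"
                      res-type="javax.sql.DataSource">
  <!-- statement caching: lets the appserver reuse prepared statements -->
  <property name="ImplicitCachingEnabled" value="true"/>
  <property name="MaxStatements" value="200"/>
  <!-- plus the usual user, password, and url properties -->
</jdbc-connection-pool>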

Use the HTTP file cache

If you serve a lot of static content, make sure to enable the HTTP file cache.
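
You can do that from the admin console, or directly in domain.xml via the http-file-cache element under http-service. A rough sketch follows -- the attribute names and values here are from memory and vary somewhat between releases, so double-check them against the reference for your build.

<http-service>
  <!-- cache static files in memory instead of re-reading them on every request -->
  <http-file-cache globally-enabled="true"
                   file-caching-enabled="on"
                   max-age-in-seconds="3600"
                   max-files-count="1024"/>
</http-service>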

Have I piqued your interest? As I mentioned, there are hundreds of pages of tuning guidelines in our docset. But here at least you have some important first steps.
