Debugging JMS (Open MQ) issues in GlassFish
In my earlier posts, I wrote about the features and enhancements that were introduced in GlassFish V2 in the JMS area. In this post, let us see how we can debug JMS-related problems (if any) in GlassFish.
First, a little background on JMS integration in GlassFish V2. Open MQ is the default JMS provider bundled with GlassFish V2, and it is the only JMS provider whose life cycle (start/stop) can be controlled by GlassFish. This life cycle control is made possible by the JMS resource adapter (jmsra) provided by Open MQ. Though other JMS providers (like WebSphere MQ, ActiveMQ, JBoss Messaging, TIBCO MQ...) can be integrated with GlassFish V2, that integration is limited to runtime integration only (through a Java Connector Architecture compliant resource adapter like the generic jmsra).
1. It is not possible to control the life cycle of any other JMS provider (other than Open MQ) in GlassFish V2.
2. And Java EE spec demands that a Java EE application server should always be started with a JMS service.
Requirements 1 and 2 together mean that an Open MQ instance should be available when GlassFish V2 starts up, and this is only possible if
a. GlassFish V2 starts a jms broker when it starts up OR
b. An administrator starts a JMS broker and configures GlassFish V2 to use it.
(a) can be achieved by configuring the jms-service in GlassFish as EMBEDDED or LOCAL. In both modes, the life cycle of the bundled Open MQ broker is managed by GlassFish (using jmsra): EMBEDDED means the broker is started in the same VM as the application server, while LOCAL means it is started in a separate VM.
(b) can be achieved by configuring jms-service as REMOTE.
EMBEDDED is the default mode for a DAS instance, and LOCAL is the default for a cluster instance. These modes have been present in earlier versions of GlassFish (and Sun Java System Application Server) as well, but there are subtle differences in the underlying behavior across versions.
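As a sketch of how to inspect and change the mode (assuming the default domain name and the standard `server.jms-service.type` dotted name; verify both against your installation), you can use asadmin:

```sh
# Show the current JMS service type (EMBEDDED, LOCAL, or REMOTE)
asadmin get server.jms-service.type

# Switch the DAS to LOCAL mode, so the broker runs in its own VM
asadmin set server.jms-service.type=LOCAL

# Restart the domain for the change to take effect
asadmin stop-domain domain1
asadmin start-domain domain1
```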
With the context gained from above, let us categorize the issues that one might face when using JMS applications in GlassFish V2.
To get started: for any JMS-related issue, it is not sufficient to look only at the application server log file (server.log); you should also check the Open MQ log file, which is located at
GFHOME/domains/domain1/imq/instance/imqbroker/log/log.txt for DAS
GFHOME/nodeagents/NODEAGENT/INSTANCENAME/imq/instance/BROKERINSTANCE/log/log.txt for cluster instances.
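For example, on a DAS you can follow both logs side by side (the broker log path is as given above; substitute your own GFHOME, and the equivalent node agent paths for a cluster instance):

```sh
# Watch the app server log and the Open MQ broker log together
tail -f GFHOME/domains/domain1/logs/server.log \
        GFHOME/domains/domain1/imq/instance/imqbroker/log/log.txt
```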
1. Startup issues:
jmsra tries to start the broker, and problems can occur at this stage; when jmsra fails to start the broker, the startup of GlassFish fails as well. Some reasons why the broker can fail to start are:
i. Port is not available: The broker uses certain ports when it starts up, and these ports have to be free. The default port is 7676, the main port where the broker listens for incoming connections. If the broker is started in LOCAL mode, it also requires an RMI port, which is 100 plus the application server's RMI registry port. You have to ensure that these ports are free. If a problem occurs, the MQ log file will clearly print a message showing a bind exception.
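A quick way to check whether the broker port is already taken before starting the server (a sketch for Unix-like systems; exact netstat flags vary by platform):

```sh
# Nothing should be listening on 7676 before startup;
# an empty result means the port is free.
netstat -an | grep 7676
```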
If you want to configure the RMI port yourself (and not let the application server choose one for you), you can set it using the system property "com.sun.enterprise.connectors.system.mq.rmiport".
Note: This property is not supported (not tested) by GlassFish V2, which means if you use it, you are on your own. It is provided only as a developer aid.
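If you do choose to set it, one way (a sketch, with 9999 as an arbitrary example port) is to add it as a JVM option on the application server:

```sh
# Set the unsupported developer property, then restart the domain
asadmin create-jvm-options "-Dcom.sun.enterprise.connectors.system.mq.rmiport=9999"
asadmin stop-domain domain1
asadmin start-domain domain1
```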
ii. Loop back address (127.0.0.1) is used in a clustered instance.
This has been documented in GlassFish documentation (http://docs.sun.com/app/docs/doc/819-3666/gawmb?a=view).
In GlassFish V2, auto-clustering was introduced as a new feature: an MQ cluster is created behind the scenes when a GlassFish cluster is created. An Open MQ clustered broker cannot be started if the host resolves to a loopback IP; this is by design of Open MQ.
There are a couple of workarounds for this.
a. Modify the /etc/hosts file to ensure that the host name (localhost, or an actual host name) points to a valid IP address; this could be a DHCP address or a static IP.
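For example, an /etc/hosts entry mapping the machine's host name to a routable (non-loopback) address might look like the following, where 192.168.1.10 and myhost are placeholders for your own address and host name:

```
# /etc/hosts: point the host name at a real address, not 127.0.0.1
192.168.1.10   myhost.example.com   myhost
```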
b. (a) should address most situations, but for special ones there is a property that can be used to disable the auto-clustering feature in GlassFish V2. Again, this property is not supported (not tested) in GlassFish V2; it is just a developer property that may not be production ready.
When this property is set, a broker cluster will not be created along with a GlassFish cluster, and each broker in the clustered instances will function as an independent standalone broker.
2. Runtime issues: In GlassFish V2 there were a few enhancements to the EMBEDDED mode that was introduced in GlassFish V1. The V2 EMBEDDED mode uses in-memory objects as the means of communication between the application server and the MQ broker; this is possible because they run in the same VM. The V1 EMBEDDED mode, by contrast, still used socket-based communication between the applications and the Open MQ broker. If you find any issues with EMBEDDED mode, as a quick check you can switch to LOCAL mode and retry the use case. If it works with LOCAL mode but not with EMBEDDED mode, it is clearly an issue with these enhancements, and you should create a GlassFish issue in the issue tracker.
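To verify that the application server can reach the broker at all (in any mode), GlassFish ships a JMS ping command; a sketch, assuming the default server target:

```sh
# Pings the default JMS host configured for the server;
# a successful ping means the broker is up and reachable.
asadmin jms-ping
```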
Also, keep in mind that LOCAL mode makes the broker run in a separate VM, so communication between the application server and the broker happens over sockets.
If you still have questions or issues using Open MQ with GlassFish, please post them in the GlassFish forums.