
OpenMQ, the open source message queuing system, for beginners and professionals (OpenMQ from A to Z)

Posted by kalali on March 3, 2010 at 6:45 AM PST

Talking about messaging implies two basic capabilities around which all other features are built: support for topics and queues. A topic lets a message be consumed by as many interested consumers as have subscribed to it, while a queue delivers each message to just one of the interested consumers.

Messaging middleware was present in the industry before Java came along, and each product had its own interfaces for providing messaging services. There was no standard set of interfaces that vendors could comply with to increase compatibility, interoperability, ease of use, and portability. The contribution of the Java community was to define the Java Message Service (JMS), a standard set of interfaces for interacting with this type of middleware from Java based applications.

Asynchronous communication of data and events is one of the ever-present communication models in an enterprise application, used to address requirements like long running process delegation, batch processing, loose coupling of different systems, updating a dynamically growing or shrinking set of clients, and so on. Messaging can be considered one of the important building blocks required by every advanced middleware stack; it provides the essentials for defining a SOA because it supplies the basics required for loose coupling.

Open MQ is an open source project, mostly sponsored by Sun Microsystems, that provides the Java community with a high performance, cross platform message queuing system under commercial-usage-friendly licenses, including the CDDL and GPL. Open MQ, which is hosted at mq.dev.java.net, shares the same code base as Sun Java System Message Queue. This shared code base means that any feature available in the commercial product is also available in the open source project. The only difference is that for the commercial product you pay a license fee and in return receive professional support from Sun engineers, while with the open source alternative you rely on community provided support instead of commercial level support.

Open MQ provides a broad range of functionality in security, high availability, performance management and tuning, monitoring, multiple language support, and so on, which makes it easier to use its base capabilities within the overall software architecture.

1 Introducing Open MQ

 

Open MQ is the umbrella project for the components that form both Open MQ and Sun Java System Message Queue, the productized sibling of Open MQ. These components include:

§     Message broker or messaging server

The message broker is the heart of the messaging system; it maintains the message queues and topics, manages client connections and their requests, controls client access to queues and topics for reading or writing, and so on.

§     Client libraries

Client libraries provide APIs which let developers interact with the message broker from different programming languages. Open MQ provides Java and C libraries in addition to a platform and programming language agnostic interaction mechanism.

§     Administrative tools

Open MQ provides a set of intuitive and easy to use administration tools, including tools for broker life cycle management, broker monitoring, destination life cycle management, and so on.

1.1 Installing Open MQ

From version 4.3, Open MQ provides an easy to use installer, which is platform dependent because it is bundled with a compiled copy of the client libraries and administration tools. Installer downloads are available at https://mq.dev.java.net/downloads.html; download the binary compatible with your operating system. Make sure that you have JDK 1.5 or above installed before you try to install Open MQ. Open MQ works on virtually any operating system capable of running JDK 1.5 or above.

Beauty of open source

You might be using an operating system for development or deployment for which no pre-built installer is provided; in this case you can simply download the source code archive from the same page mentioned above and build it yourself. The build instructions, available at http://download.java.net/mq/open-mq/4.3/b05/Compiling and Running OpenMQ 4.3 in NetBeans.txt, are straightforward. Even if you cannot manage to build the binary tools, you can still use Open MQ and JMS on that operating system and run your management tools from a supported operating system.

 

To install Open MQ, extract the downloaded archive and run the installer script or executable file, depending on your operating system. As you can see, the installer is straightforward and does not require any special user interaction beyond accepting some defaults.

After we complete the installation process we will have a directory structure similar to figure 1 in the installation path we selected during the installation.

Figure 1 Open MQ installation directory structure

 

As you can see from the figure, there are some metadata directories related to the installer application and a directory named mq that contains the Open MQ files. Table 1 lists the Open MQ directories along with their descriptions.

Table 1 Important items inside Open MQ installation directory

Directory name | Description

bin | All administration utilities reside here.

include | Header files required for using the OpenMQ C APIs.

lib | All JAR files and some native dependencies (NSS) reside here.

lib/*.war | WAR files providing HTTP and HTTPS tunneling for firewall-restricted configurations, and the Universal Message Service for language-agnostic use of Open MQ.

var/instances | Configuration files for the different broker instances created on this Open MQ installation are stored here by default.

 

2 Open MQ administration

Every server side application requires some level of administration and management to keep it up and running, and Open MQ is no exception. As we discussed in the article introduction, each messaging system must provide some basic capabilities, including point-to-point and publish/subscribe mechanisms for distributing messages, so basic administration revolves around these same concepts.

2.1 Queue and Topic administration and management

When a broker receives a message it either places the message in a queue or a topic, or discards it. Queues and topics are called physical destinations because they are the last place within the broker where a message is placed.

In Open MQ we can use two administration utilities for creating queues and topics: imqadmin, a simple Swing application, and imqcmd, a command line utility. In order to create a queue we can execute the following command in the bin directory of the Open MQ installation.

 

imqcmd create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"

 

 

This command creates a destination of type queue named smsQueue. The queue will not store more than 1000 messages, and if producers try to put more messages into it, the oldest messages will be removed to open up space for new ones. Removed messages are placed in a special queue named the Dead Message Queue (DMQ). The command creates the queue on the local Open MQ instance, which is running on localhost port 7676. We can use -b to ask imqcmd to communicate with a remote server. The following command creates a topic named smsTopic on a server running on 192.168.100.1:7676; remember that the default user name and password for the default broker are both admin.

 

imqcmd -b 192.168.100.1:7676 create dst -n smsTopic -t t

 

imqcmd is a very complete and feature rich command; we can also use it to monitor the physical destinations. A sample command to monitor a destination looks like:

 

imqcmd metrics dst -m rts -n smsQueue -t q

Sample output for this command is similar to figure 2, which shows the consuming and producing message rates of smsQueue. The notable part of the command is the -m argument, which determines which type of metrics we want to see; the different types of observable metrics for physical destinations include:

§              con, which shows destination consumer information

§              dsk, which shows disk usage by this destination

§              rts, which, as already mentioned, shows message rates

§              ttl, which shows message totals and is the default metric type if none is specified
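For example, to watch consumer-related information for smsQueue refreshed every five seconds, a command along the following lines should work (a sketch built from the options already shown; adjust the destination name and interval to your setup):

imqcmd metrics dst -m con -n smsQueue -t q -int 5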

 

Figure 2 sample output for OpenMQ monitoring command

 

There are some other operations which we may perform on destinations:

§              Listing destinations: imqcmd list dst

§              Emptying a destination: imqcmd purge dst -n ase -t q

§              Pausing, resuming a destination: imqcmd pause dst -n ase -t q

§              Updating a destination: imqcmd update dst -t q -n smsQueue -o "maxNumProducers=5"

§              Query destination information: imqcmd query dst -t q -n smsQueue

§              Destroy a destination: imqcmd destroy dst -n smsTopic -t t

2.2 Broker administration and management

We discussed that when we install Open MQ it creates a default broker, which starts when we run imqbrokerd without any additional parameters. An Open MQ installation can have multiple brokers running on different ports with different configurations. We can create a new broker by executing the following command, which creates the broker and starts it so it can accept further commands and incoming messages.

imqbrokerd -port 7677 -name broker02

 

In table 1 we talked about the var/instances directory, inside which all configuration files related to brokers reside. If you look at this folder after executing the above command you will see two directories named imqbroker and broker02, which correspond to the default broker and the newly created one. Table 2 shows the important artifacts residing inside the broker02 directory.

 

Table 2 Brokers important files and directories

File or directory name | Description

etc/accesscontrol.properties | Configuration related to access control, roles, and the authentication source and mechanism is stored here.

etc/passfile | Users and passwords for accessing Open MQ are stored here.

fs370 (directory) | The broker's file based message store: physical destinations, messages, and so on.

log (directory) | Open MQ log files.

props/config.properties | All configuration related to this broker is stored in this file: services, security, clustering, and so on.

lock (file) | Exists while the broker is running and records the broker's name, host, and port.

 

Open MQ has a very intuitive configuration mechanism. First of all, there are default configuration values that apply to the whole Open MQ installation; an installation configuration file can override those defaults; then each broker has its own configuration file which can override the installation level parameters; and finally, we can override broker (instance) level configuration by passing values to imqbrokerd when starting a broker. When we create a broker we can change its configuration by editing the config.properties file mentioned in table 2. Table 3 lists the other configuration files mentioned here and their default locations on disk.

Table 3 Open MQ configuration files path and descriptions

Configuration file location | Configuration file description

default.properties | Located in the lib/props/broker* directory; holds the default configuration parameters.

install.properties | Located in the lib/props/broker* directory; holds the installation configuration parameters.

config.properties | Located inside the props folder of each instance directory**.

* The lib directory is part of the Open MQ installation directory listed in table 1.

** The instance directory was discussed in table 2.
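As a quick illustration of the highest-precedence level in this hierarchy, a property can also be overridden directly on the imqbrokerd command line with the -D option; the property shown below is only an example of an overridable setting, so treat this as a sketch and check imqbrokerd -help for the exact syntax on your version:

imqbrokerd -name broker02 -port 7677 -Dimq.log.level=DEBUG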

Now that we understand how we can create a broker, let's see how we can administer and manage one. To manage a broker we use the imqcmd utility, which was also discussed in the previous section for destination administration.

There are several levels of monitoring data associated with a broker, including incoming and outgoing messages and packets (-m ttl), message and packet rates (-m rts), and operational statistics like connections, threads, and heap size (-m cxn). The following command returns the message and packet rates for a broker running on localhost and listening on port 7677; the statistics update every 10 seconds.

imqcmd metrics bkr  -b 127.0.0.1:7677 -m rts -int 10 -u admin

As you can see, we passed the user name with the -u parameter so the server will not ask for it again. We cannot pass the associated password directly on the command line; instead we can use a plain text file which includes the password. The password file should contain:

imq.imqcmd.password=admin
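For instance, the earlier metrics command can then be run non-interactively by pointing imqcmd at that file; the path below is only a placeholder:

imqcmd metrics bkr -b 127.0.0.1:7677 -m rts -int 10 -u admin -passfile /path/to/passfile.txt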

The default password for the Open MQ administrator user is admin, and as shown above the password file is a simple properties file. Table 4 lists some other common tasks related to brokers along with a description and a sample command.

Table 4 Broker administration and management

Task | Description and sample command

pause and resume | Make the broker stop or resume accepting connections on all services.

quiesce, unquiesce | Quiescing a broker causes it to stop accepting new connections while it continues to serve the connections that are already established; unquiesce returns it to normal mode: imqcmd quiesce bkr -b 127.0.0.1:7677 -u admin -passfile passfile.txt

update a broker | Update broker attributes, for example: imqcmd update bkr -o "imq.log.level=ERROR"

shutdown broker | Shuts down the broker; the sample command gives it 90 seconds, during which it only serves already established connections: imqcmd shutdown bkr -b 127.0.0.1:7677 -time 90 -u admin

query broker | Shows broker information: imqcmd query bkr -b 127.0.0.1:7677 -u admin

 

 

3 Open MQ security

After getting a sense of the Open MQ basics, we can look at its security model so we can set up a secure system based on Open MQ's messaging capabilities. We will discuss connection authentication, which ensures that a user is allowed to create a connection either as a producer or a consumer, and access control (authorization), which checks whether the connected user may send a message to a specific queue or not. Securing the transport layer is a responsibility of the messaging server administrators; Open MQ supports SSL and TLS to prevent message tampering and to provide the required level of encryption, but we will not discuss it here as it is out of this article's scope.

Open MQ provides support for two types of user repository out of the box, along with facilities for using JAAS modules for authentication. The user information repositories which can be used out of the box are a flat file user repository and a directory server accessed through an LDAP interface.

3.1 Authentication

Authentication happens before a new connection is established. To configure Open MQ to perform authentication we should provide an authentication source and edit the configuration files to enable authentication and point it at our prepared user information repository.

Flat file authentication

Flat file authentication relies on a flat file which contains a user name, encoded password, and role name; each line of this file holds these properties for one user. A sample line looks like:

admin:-2d5455c8583c24eec82c7a1e273ea02e:admin:1

The default location of each broker's password file is etc/passwd inside the broker's home directory. Later on, when we use the imqusermgr utility, we edit either this file or the file that we specified using the related properties mentioned in table 5.

To configure a broker instance to perform authentication before establishing any connection, we can either use imqcmd to update the broker's properties or directly edit its configuration file. To configure broker02, which we created in section 2.2, to authenticate before establishing a connection, we need to add the properties listed in table 5 to the config.properties file located inside the props directory of the broker's home directory; the broker directory structure and configuration files were listed in tables 2 and 3.

We can ignore the two latter properties since we want to use the default file in its default path. So, shut down the broker, add the listed properties to its configuration file, and start it again. Now we can simply use imqusermgr to perform CRUD operations on the user repository associated with our target broker.

 

Table 5 required changes to enable connection authentication for broker02

Required information by broker | Corresponding property in configuration file

What kind of user information repository we want to use | imq.authentication.basic.user_repository=file

What password encoding is used | imq.authentication.type=digest

Path to the directory containing the password file | imq.passfile.dirpath=path/to/the/folder

Name of the password file which contains the passwords | imq.passfile.name=password/file/name

 

Now that we have configured the broker to perform authentication, we need a mechanism to add, remove, and update the content of the user information repository. Table 6 shows how we can use the imqusermgr utility to perform CRUD operations on broker02.

Table 6 Flat file user information repository management using imqusermgr

CRUD operation | Sample command

Create user | Add a user to the user group (the other groups are admin and anonymous): imqusermgr add -u user01 -p password01 -i broker02 -g user

List users | imqusermgr list -i broker02

Update a user | Change the user status to inactive: imqusermgr update -u user01 -a false -i broker02

Remove a user | Remove user01 from broker02: imqusermgr delete -u user01 -i broker02

Now create another user as follow:

imqusermgr add -u user02 -p password02 -i broker02 -g user

We will use these two users when defining authorization rules, and later on in section 6 we will see how to connect to Open MQ from Java code when authentication is enabled.

 

All the other utilities that we discussed need Open MQ to be running and are able to perform their actions on remote Open MQ instances, but imqusermgr only works on local instances and does not require the instance to be running.

3.2 Authorization

After we enable authentication, which results in the broker checking user credentials before establishing a connection, we may also need to check the connected user's permissions before letting it perform a specific operation like connecting to a service, putting a message into a queue (acting as a producer), subscribing to a topic (acting as a consumer), browsing a physical destination, or automatically creating physical destinations. All these permissions can be defined using a very simple syntax, which is shown below:

 

resourceType.resourceVariant.operation.access.principalType=principals

 

In this syntax we have 6 variable elements which are described in table 7.

Table 7 different elements in each Open MQ authorization rule

Element | Description

resourceType | Type of resource to which the rule applies: connection (connections), queue (queue destinations), or topic (topic destinations).

resourceVariant | Specific resource (connection service type or destination) to which the rule applies. An asterisk (*) may be used as a wild-card character to denote all resources of a given type: for example, a rule beginning with queue.* applies to all queue destinations.

operation | Operation to which the rule applies. This syntax element is not used for resourceType=connection.

access | Level of access authorized: allow (authorize the user to perform the operation) or deny (prohibit the user from performing the operation).

principalType | Type of principal (user or group) to which the rule applies: user (individual user) or group (user group).

principals | List of principals (users or groups) to whom the rule applies, separated by commas. An asterisk (*) may be used as a wild-card character to denote all users or all groups: for example, a rule ending with user=* applies to all users.

 

Enabling authorization

To enable authorization we have to do a bit more than we did for authentication, because in addition to enabling authorization we need to define the privileges of roles. Open MQ uses a text based file to describe the access control rules. The default path for this file is inside the etc directory of the broker home, and the default name is accesscontrol.properties. If we do not provide a path for this file when configuring Open MQ for authorization, Open MQ picks up the default file.

In order to enable authorization we need to add the following properties to one of the configuration files, depending on how widely we want the authorization to be applied. The second property is not necessary if we want to use the default file path.

 

§              To enable authorization: imq.accesscontrol.enabled=true

§              To determine the access control description file path: imq.accesscontrol.file.filename=path/to/access/control/file

Defining access control rules inside the access control file is an easy task; we just need to use the rule syntax and a limited set of variable values to define simple or complex access control rules. Listing 1 shows the rules which we need to add to our access control file, which can either be the default file or another file at the specific path we set in the configuration file. If you are using the default access control configuration file, make sure that you comment out all the rules already defined by adding a # sign at the beginning of each line.

Listing 1 Open MQ access control rules definition

#connection authorization rules
connection.ADMIN.allow.group=admin                    #1
connection.ADMIN.allow.user=user01                    #1
connection.NORMAL.allow.group=*                       #2

#queue authorization rules
queue.smsQueue.produce.allow.user=user01,user02       #3
queue.smsQueue.consume.allow.user=user01,user02       #4
queue.*.browse.allow.user=*                           #5

#topic authorization rules
topic.smsTopic.*.allow.group=*                        #6
topic.smsTopic.produce.deny.user=user01               #7
topic.*.produce.deny.group=anonymous                  #8
topic.*.consume.allow.user=*                          #9

#auto-created destinations authorization rules
queue.create.allow.user=*                             #10
topic.create.allow.group=user                         #11
topic.create.deny.user=user01                         #12

 

By adding the content of listing 1 to the access control description file, we apply the following rules to Open MQ authorization. First of all, we let the admin group connect to the system as administrators (#1), and we also give user01 the privilege to connect as an administrator (#1). We let any user from any group connect to the broker as a normal user (#2). At #3 we let the two users we created before act as producers for the smsQueue destination, and at #4 we let the same two users act as its consumers. Because of the rules defined at #3 and #4, no other user can act as a producer or consumer of smsQueue, and since no rules are defined for other queues, no user can produce to or consume from any other queue in the system. At #5 we let any user browse any queue in the system. At #6 we allow any operation on smsTopic by any group of users, and later, at #7, we define a rule that denies user01 the right to act as a producer for smsTopic. As you can see, generalized rules can be overridden by more specific rules, as we did here to deny user01 from acting as a producer. At #8 we deny the anonymous group the right to produce to any topic. At #9 we let any user from any group act as a consumer of any topic present in the system. At #10 we allow any user to use the auto destination creation facility for queues. At #11 we let only one specific group automatically create topics, while at #12 one single user is denied the privilege to automatically create a topic.

WARNING

If we use an empty access control definition file, no user can connect to the system, so no operation will be possible by any user. Open MQ's default permission for any operation is denial, so if we need to change the default permission to allow, we should add allow rules to the access control definition file.

Now we can connect to Open MQ using user01 to perform administrative tasks. For example, executing the following commands will create our sample queue and topic in the broker that we created in section 2.2. When imqcmd asks for a user name and password, use user01 and password01.

 

imqcmd -b 127.0.0.1:7677 create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"

 

imqcmd -b 127.0.0.1:7677 create dst -n smsTopic -t t

 

As you can see we can simply define very complex authorization rules using a simple text file and its rules definition syntax.

So far we have used a simple text file for user management and authentication purposes, but in addition to a text file, Open MQ can use a more robust and enterprise friendly user repository such as OpenDS. When we use a directory service as the user information repository we no longer manage it using the Open MQ command line utilities; instead, the enterprise wide administrators perform user management from one central location, and as Open MQ administrators or developers we just define our authorization rules.

In order to configure Open MQ to use a directory server (like Open DS) as the user repository we need to add the properties listed in table 8 to our broker or installation configuration file.

 

Table 8 Configuring Open MQ to use OpenDS as the user information repository

Required information by broker | Corresponding property in configuration file

The user information repository type | imq.authentication.basic.user_repository=ldap

The password exchange encoding; in this case it is base-64 encoding, matching the directory server's method | imq.authentication.type=basic

Directory server host and port* | imq.user_repository.ldap.server=127.0.0.1:7677

A DN for binding and searching | imq.user_repository.ldap.principal=cn=gf cn=admin

The provided DN's password | imq.user_repository.ldap.password=admin

The user attribute name against which the user name is compared | imq.user_repository.ldap.uidattr=uid

The user's group name attribute** | imq.user_repository.ldap.gidattr=groupid

* Multiple directory server addresses can be specified for high availability reasons, for example ldap://127.0.0.1:7677 ldap://192.168.1.1:7677; each address should be separated from the next by a space.

** The user's group name attribute is vendor dependent and varies from one directory service to another.
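Putting the table together, the relevant fragment of a broker's config.properties might look roughly like the following; the host names, DN, password, and group attribute are purely illustrative and depend entirely on your directory server and its schema:

imq.authentication.basic.user_repository=ldap
imq.authentication.type=basic
imq.user_repository.ldap.server=ldap1.example.com:389 ldap2.example.com:389
imq.user_repository.ldap.principal=cn=admin,dc=example,dc=com
imq.user_repository.ldap.password=secret
imq.user_repository.ldap.uidattr=uid
imq.user_repository.ldap.gidattr=groupid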

There are several other properties which can be used to tweak the LDAP authentication in terms of security and performance, but the essential properties are listed in table 8. The other available properties are described in the Sun Java System Message Queue Administration Guide, available at http://docs.sun.com/app/docs/coll/1307.5.

The key to using a directory service is to know the schema

The key behind successful use of a directory service in any system is knowing the directory service itself and the schema that we are dealing with. Different schemas use different attribute names for each property of a directory service object, so before diving into using a directory service implementation, make sure that you know both the directory server and its schema.

4 Clustering and high availability

We know that redundancy is one of the enablers of high availability. We have discussed data and service availability along with horizontal and vertical scalability of software solutions. OpenMQ, like any other component of a software solution which drives an enterprise, should be highly available, sometimes in both the data and service layers and sometimes just in the service layer.

The engine behind the JMS interface is the message broker, whose tasks include managing destinations, routing and delivering messages, enforcing security, and so on. Therefore, if we can keep the brokers and related services available to clients, we have service availability in place. If, in addition, we manage to keep the state of queues, topics, durable subscribers, transactions, and so on preserved in the event of a broker failure, then we can provide our clients with high availability in both the service and data layers.

Open MQ provides two types of clustering, conventional clusters and high availability clusters, which can be used depending on the degree of high availability required for the messaging component in the overall software solution.

Conventional clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster but may be unable to access some data while they are connected to the alternate broker. Figure 3 shows how conventional cluster members and potential clients work together.

Figure 3 Open MQ conventional clustering; one MQ broker acts as the master broker to keep track of changes and propagate them to the other cluster members

As you can see in figure 3, in conventional clusters we have different brokers running with no shared data store. Instead, configuration is shared through a master broker that propagates changes between cluster members to keep them up to date about destinations and durable subscribers. The master broker keeps newly joined members, and members that come back online after being offline for a while, synchronized with the new configuration information and durable subscribers. Each client is connected to one broker and uses that broker until it fails (goes offline, a disaster happens, and so on), at which point the client connects to another broker in the same cluster.
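As a rough sketch, starting two brokers of a conventional cluster from the command line could look like the following; the host names and ports are illustrative, and the exact options should be checked against imqbrokerd -help for your version:

on host1: imqbrokerd -name broker01 -port 7676 -cluster host1:7676,host2:7677 -Dimq.cluster.masterbroker=host1:7676

on host2: imqbrokerd -name broker02 -port 7677 -cluster host1:7676,host2:7677 -Dimq.cluster.masterbroker=host1:7676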

High availability clusters provide both service availability and data availability. If one broker in a cluster fails, clients connected to that broker are automatically connected to another broker in the cluster, which takes over the failed broker's store. Clients continue to operate with all persistent data available to the new broker at all times. Figure 4 shows how high availability cluster members and potential clients work together.

Figure 4 Open MQ high availability cluster; there is no single point of failure, since all information is stored in a highly available database and all cluster members interact with that database

As you can see in figure 4, we have one central highly available database which we rely upon for storing any kind of dynamic data like transactions, destinations, messages, durable subscribers, and so on. This database itself should be highly available, usually a cluster of database instances running on different machines in different geographical locations. Each client connects to one broker and continues using the same broker until that broker fails; in the event of a broker failure, another broker takes over all of the failed broker's responsibilities, such as open transactions and durable subscribers. The client then reconnects to the broker that took over those responsibilities.

All in all we can summarize the differences between conventional clusters and high availability clusters as listed in table 9.

Table 9 comparison between high availability clusters and conventional clusters

Functionality | Conventional cluster | High availability cluster

Performance | Faster | Slower

Service availability | Yes, but partial when the master broker is not available | Yes

Data availability | No, a failed broker can cause data loss | Yes

Transparent failover recovery | May not be possible if failover occurs during a commit | May not be possible if failover occurs during a commit and the client cannot reconnect to any other broker in the cluster

Configuration | Done by setting the appropriate cluster configuration broker properties | Done by setting the appropriate cluster configuration broker properties

3rd party software requirement | None | Highly available database

 

Usually when we are talking about data availability we need to accept a slight performance overhead; you may remember that we had a similar situation with GlassFish HADB backed clusters.

We said that high availability clusters use a database to store dynamic information; some databases tested with OpenMQ include Oracle RAC, MySQL, Derby, PostgreSQL, and HADB. We usually use a highly available database to maximize data availability and reduce the risk of losing data.

We can configure a highly available messaging infrastructure by following a set of simple steps. First, create as many brokers as you need in your messaging infrastructure, as discussed in section 2.2. Second, configure the brokers to use a shared database for dynamic data storage and to act as cluster members. To perform this step we need to apply some changes to each broker's configuration file: add the content of listing 2 to the broker's configuration file.

Listing 2 Required properties for making a broker highly available

#data store configuration
imq.persist.store=jdbc                                              #1
imq.persist.jdbc.dbVendor=mysql                                     #2
imq.persist.jdbc.mysql.driver=com.mysql.jdbc.Driver                 #2
imq.persist.jdbc.mysql.property.url=jdbc:mysql://localhost:3306     #2
imq.persist.jdbc.mysql.property.databasename=jmsStore               #2
imq.persist.jdbc.mysql.user=privilegedUser                          #2
imq.persist.jdbc.mysql.needpassword=true                            #2
imq.persist.jdbc.mysql.password=dbpassword                          #2

#cluster related configuration
imq.cluster.ha=true                                                 #3
imq.cluster.clusterid=e.cluster01                                   #3
imq.brokerid=e.broker02                                             #4

 

 


At #1 we specify that the data store is of type JDBC rather than Open MQ managed binary files; for high availability clusters it must be jdbc instead of file. At #2 we provide the configuration parameters Open MQ needs to connect to the database; we can replace mysql with oracle, derby, hadb, or postgresql if we want to. At #3 we configure this broker as a member of a high availability cluster named e.cluster01, and finally at #4 we set the unique name of this broker. Make sure that each broker has a unique name in the entire cluster and that different clusters using the same database are uniquely identified.
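In other words, a second broker joining the same cluster would typically reuse the same store and cluster properties and change only its own identifier; the broker name below is hypothetical:

imq.cluster.ha=true
imq.cluster.clusterid=e.cluster01
imq.brokerid=e.broker03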

Now that we have a database server available for our messaging infrastructure and have configured all of our brokers to use it, the last step is creating the database schema that Open MQ will use to store the dynamic data. To create the initial schema we use the following command:

./imqdbmgr create all -b broker_name

This command creates the database and required tables and initializes the database with topology related information. As you can see, we passed a broker name to the command; this means that we want imqdbmgr to use that specific broker's configuration file to create the database. As you remember, we can have multiple brokers in the same Open MQ installation, and each broker can operate completely independently from the others.

The imqdbmgr command has many other uses, including upgrading the store from older Open MQ versions, creating backups, restoring backups, and so on. To see a complete list of its subcommands and parameters, execute the command with the -h parameter.

Now you should have a good understanding of Open MQ clustering and high availability. In the next section we will discuss how we can use Open MQ from a Java SE application, and later on we will see how Open MQ relates to GlassFish and the GlassFish clustering architecture.


 

 

5 Open MQ management using JMX

Open MQ provides a very rich set of JMX MBeans that expose its administration and management interfaces. These MBeans can also be used to monitor Open MQ over JMX. Generally, any task which we can do using imqcmd can also be done using JMX code or a JMX console like the JDK's Java Monitoring and Management Console (jconsole).

Administrative tasks that we can perform using JMX include managing brokers, services, destinations, consumers, producers, and so on. Monitoring and dynamic reconfiguration are also possible through JMX; for example, we can monitor destinations from our own code or change the broker configuration dynamically using our own code or a JMX console. Some use cases for the JMX connection channel include:

  • We can include JMX code in our JMS client application to monitor application performance and, based on the results, reconfigure the JMS objects we use to improve performance.
  • We can write JMX clients that monitor the broker to identify usage patterns and performance problems, and use the JMX API to reconfigure the broker to optimize performance.
  • We can write a JMX client to automate regular maintenance tasks, rolling upgrades, and so on.
  • We can write a JMX application that constitutes our own version of imqcmd and use it instead of imqcmd.

To connect to an Open MQ broker using a JMX console we can use service:jmx:rmi:///jndi/rmi://host:port/server as the connection URL pattern. Listing 3 shows how we can use Java code to pause the jms service temporarily.


 

Listing 3 pausing the jms service using Java code, JMX and admin connection factory

AdminConnectionFactory acf = new AdminConnectionFactory();                     #1
acf.setProperty(AdminConnectionConfiguration.imqAddress, "192.168.1.1:7677");  #2
JMXConnector jmxc = acf.createConnection("admin", "admin");                    #3
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();                  #4
ObjectName serviceConfigName = MQObjectName.createServiceConfig("jms");        #5
mbsc.invoke(serviceConfigName, ServiceOperations.PAUSE, null, null);           #6
jmxc.close();                                                                  #7

 

This sample code uses the admin connection factory, which introduces dependencies on Open MQ classes and means you must add imqjmx.jar to your classpath; this file is located in the lib directory of the Open MQ installation. At #1 we create an administration connection factory, an Open MQ helper class which is what makes this sample depend on the Open MQ libraries. At #2 we configure the factory to give us a connection to a non-default host (the default is 127.0.0.1:7676). At #3 we set the credentials that we want to use to make the connection, and at #4 we get a connection to the MBean server. At #5 we get the jms service MBean, at #6 we invoke one of its operations, and finally at #7 we close the connection.

Listing 4 shows pure JMX sample code for reading the MaxNumProducers attribute of smsQueue.

Listing 4 Reading smsQueue attributes using pure JMX code

HashMap environment = new HashMap();                                           #1
String[] credentials = new String[] {"admin", "admin"};                        #1
environment.put(JMXConnector.CREDENTIALS, credentials);                        #1
JMXServiceURL url;
url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://192.168.1.1:7677/server"); #2
JMXConnector jmxc = JMXConnectorFactory.connect(url, environment);             #3
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();                  #4
ObjectName destConfigName = MQObjectName.createDestinationConfig(DestinationType.QUEUE, "smsQueue"); #5
Integer attrValue = (Integer) mbsc.getAttribute(destConfigName, DestinationAttributes.MAX_NUM_PRODUCERS); #6
System.out.println("Maximum number of producers: " + attrValue);
jmxc.close();                                                                  #7

 

At #1 and #2 we prepare the parameters that JMX needs to create a connection, at #3 we create the JMX connection, at #4 we get a connection to the Open MQ MBean server, at #5 we get the MBean representing our smsQueue, at #6 we read the value of the attribute that represents the maximum number of permitted producers, and finally we close the connection at #7.

To learn more about JMX interfaces and programming model for Open MQ you can check Sun Java System Message Queue 4.2 Developer's Guide for JMX Clients which is located at http://docs.sun.com/app/docs/doc/820-5207.

6 Using Open MQ

The Open MQ messaging infrastructure can be accessed from a variety of channels and programming languages; we can perform messaging tasks using Java, C, and virtually any programming language capable of performing HTTP communication. When it comes to Java, we can access the broker functionality using either JMS or JMX.

Open MQ directly provides APIs for accessing the broker using C and Java, but for other languages it takes a language-agnostic approach by providing an HTTP gateway for communicating from any language, such as Python, JavaScript (AJAX), C#, and so on. This approach is called the Universal Message Service (UMS), which was introduced in Open MQ 4.3. In this section we only discuss Java and OpenMQ; the other supported mechanisms and languages are out of scope. You can find more information about them at http://mq.dev.java.net

6.1 Using Open MQ from Java

We may use Open MQ either directly or through application server managed resources; here we just discuss how to use Open MQ from Java SE, as using the JMS service from Java EE is straightforward.

Open MQ provides a broad range of functionality wrapped in a set of JMS compliant APIs: you can perform the usual JMS operations like producing and consuming messages, and you can also gather metrics related to different resources using the JMS API. Listing 5 shows a sample application which communicates with a cluster of Open MQ brokers. Running the application results in sending a message and then consuming the same message. In order to execute listings 5 and 6 you will need to add imq.jar, imqjmx.jar, and jms.jar to your classpath. These files are inside the lib directory of the Open MQ installation.

Listing 5 Producing and consuming messages from a Java SE application

public class QueueClient implements MessageListener {                          #1
   public void startClientConsumer() throws JMSException {
      com.sun.messaging.ConnectionFactory connectionFactory = new com.sun.messaging.ConnectionFactory();
      connectionFactory.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList, "mq://127.0.0.1:7676,mq://127.0.0.1:7677");  #2
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");      #3
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectAttempts, "5");        #3
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectInterval, "500");      #3
      connectionFactory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM"); #3
      javax.jms.QueueConnection queueConnection = connectionFactory.createQueueConnection("user01", "password01");  #4
      javax.jms.Session session = queueConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);         #5
      javax.jms.Queue smsQueue = session.createQueue("smsQueue");
      javax.jms.MessageProducer producer = session.createProducer(smsQueue);
      Message msg = session.createTextMessage("A sample sms message");
      producer.send(msg);                                                       #6
      javax.jms.MessageConsumer consumer = session.createConsumer(smsQueue);
      consumer.setMessageListener(this);                                        #7
      queueConnection.start();                                                  #8
   }

   public void onMessage(Message sms) {
      try {
         String smsContent = ((javax.jms.TextMessage) sms).getText();           #9
         System.out.println(smsContent);
      } catch (JMSException ex) {
         ex.printStackTrace();
      }
   }
}

As you can see in listing 5, we used the JMS programming model to communicate with Open MQ for producing and consuming messages. At #1 the QueueClient class implements the MessageListener interface. At #2 we create a connection factory which connects to one of the cluster members; at #3 we define the behavior of the connection factory in relation to the cluster members. At #4 we provide a privileged user to connect to the brokers. At #5 we create a session which automatically acknowledges received messages. At #6 we send a text message to the queue. At #7 we set QueueClient as the message listener for our consumer. At #8 we start receiving messages from the server, and finally at #9 we consume the message in the onMessage method, which is the MessageListener interface's method for processing incoming messages.
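To try listing 5 outside a container, a minimal driver class along the following lines can be used; the class name is illustrative, and the classpath must contain imq.jar, imqjmx.jar, and jms.jar as mentioned above.

public class QueueClientRunner {
    public static void main(String[] args) throws Exception {
        QueueClient client = new QueueClient();
        // opens the connection, sends one message, and registers the asynchronous listener
        client.startClientConsumer();
        // keep the JVM alive long enough for the listener to receive and print the message
        Thread.sleep(5000);
    }
}

Compile and run both classes with the three JAR files on the classpath, for example javac -cp lib/imq.jar:lib/jms.jar:lib/imqjmx.jar QueueClient.java QueueClientRunner.java (the exact paths depend on your installation).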

We said that Open MQ provides functionality that lets us retrieve monitoring information using the JMS APIs. This means that we can receive JMS messages containing Open MQ metrics. But before we can receive such messages, we should know where these messages are published and how to configure Open MQ to publish them.

First we need to add some configuration to enable metrics gathering and to set the interval at which these metrics are gathered; to do this we add the following properties to one of the configuration files discussed in table 3.

 

imq.metrics.topic.enabled=true
imq.metrics.topic.interval=30

 

Now that we have enabled metrics gathering, Open MQ will gather broker, destination-list, JVM, queue, and topic metrics and send each type of metric to a pre-configured topic. The broker gathers and sends the metrics messages based on the provided interval. Table 10 shows the topic names for each type of metric information.

 

Table 10 Topic names for each type of metric information.

 

Topic Name | Description

mq.metrics.broker | Broker metrics: information on connections, message flow, and volume of messages in the broker.

mq.metrics.jvm | Java Virtual Machine metrics: information on memory usage in the JVM.

mq.metrics.destination_list | A list of all destinations on the broker, and their types.

mq.metrics.destination.queue.queueName | Destination metrics for a queue of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the queueName variable.

mq.metrics.destination.topic.topicName | Destination metrics for a topic of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the topicName variable.

Now that we know which topics to subscribe to for our metrics, we should think about the security of these topics and which privileged users can subscribe to them. To configure authorization for these topics we can simply define some rules in the access control file discussed in section 3.2. The syntax and template for defining authorization for these resources looks like:

topic.mq.metrics.broker.consume.deny.user=*
topic.mq.metrics.broker.consume.allow.user=user01,user02
topic.mq.metrics.destination.topic.t1.consume.deny.user=*
topic.mq.metrics.destination.topic.t1.consume.allow.user=user01

Now that we have the security in place, we can write sample code to retrieve smsQueue metrics; listing 6 shows how.

Listing 6 Retrieving smsQueue metrics using broker metrics messages

public void onMessage(Message metricsMessage) {
    try {
        String metricTopicName = "mq.metrics.destination.queue.smsQueue";
        MapMessage mapMsg = (MapMessage) metricsMessage;
        String metrics[] = new String[11];
        int i = 0;
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgsIn"));
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgsOut"));
        metrics[i++] = Long.toString(mapMsg.getLong("msgBytesIn"));
        metrics[i++] = Long.toString(mapMsg.getLong("msgBytesOut"));
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("peakNumMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("avgNumMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("totalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("peakTotalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("avgTotalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("peakMsgBytes") / 1024);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

Listing 6 only shows the onMessage method of the message listener class; the rest of the code is trivial and very similar to listing 5, with slight changes like subscribing to a topic named mq.metrics.destination.queue.smsQueue instead of to smsQueue.
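For completeness, a minimal sketch of that subscription setup might look like the following; it reuses the connection settings of listing 5, assumes the connected user is authorized to consume the metrics topic, and MetricsListener is an assumed helper class implementing javax.jms.MessageListener with the onMessage body of listing 6.

public class MetricsSubscriber {
    public static void main(String[] args) throws Exception {
        com.sun.messaging.ConnectionFactory connectionFactory = new com.sun.messaging.ConnectionFactory();
        connectionFactory.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList, "mq://127.0.0.1:7676");
        javax.jms.TopicConnection connection = connectionFactory.createTopicConnection("user01", "password01");
        javax.jms.Session session = connection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
        // the broker publishes queue metrics to a topic whose name encodes the monitored destination
        javax.jms.Topic metricTopic = session.createTopic("mq.metrics.destination.queue.smsQueue");
        javax.jms.MessageConsumer consumer = session.createConsumer(metricTopic);
        consumer.setMessageListener(new MetricsListener());   // assumed listener class, see listing 6
        connection.start();
        Thread.sleep(60000);   // wait long enough for at least one metrics message (interval set to 30 seconds above)
    }
}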

A list of all possible metrics for each type of resource, which arrive as properties of the received map message, can be found in chapter 20 of the Sun Java System Message Queue 4.2 Developer's Guide for Java Clients, located at docs.sun.com/app/docs/doc/820-5205/aeqej?a=view

7 GlassFish and Open MQ

 

GlassFish, like other Java EE application servers, needs a message broker behind its JMS implementation, and it uses Open MQ for this purpose. GlassFish supports the Java Connector Architecture 1.5 and uses this specification to integrate with Open MQ. By default, different GlassFish installation profiles use different ways of communicating with the JMS broker, ranging from an embedded instance, to a local instance, to an Open MQ server running on a remote host. Embedded mode is the only mode in which the Open MQ broker runs in the same process as GlassFish itself. In local mode, starting and stopping the Open MQ server is done by the application server: GlassFish starts the associated Open MQ broker when GlassFish starts and stops it when GlassFish stops. Finally, remote mode means that starting and stopping the Open MQ server must be done independently, and an administrator or an automated script should take care of those tasks.

We know how to set up a GlassFish cluster, both with in-memory replication and as a highly available cluster with an HADB backend for persisting transient information like sessions, JMS messages, and so on. In this section we discuss how GlassFish in-memory and HADB backed clusters relate to Open MQ in more detail, to give you a better understanding of the overall architecture of a highly available solution.

First let's see what happens when we set up an in-memory cluster: after we configure the cluster, we have several instances in the cluster, each of which uses a local Open MQ broker instance for messaging tasks. When we start the cluster, all GlassFish instances start, and upon each instance's startup the corresponding Open MQ broker starts.

You may remember that we discussed conventional clusters, which need no backend database for storing the transient data and instead use one broker as the master broker to propagate changes and keep the other broker instances up to date. In GlassFish in-memory replicated clusters, the Open MQ broker associated with the first GlassFish instance that we create acts as the master broker. A sample deployment diagram for an in-memory replicated cluster and its relation to Open MQ is shown in figure 5.

Figure 5 Open MQ and GlassFish in an in-memory replicated cluster of GlassFish instances. Open MQ works as a conventional cluster, which leads to one broker acting as the master broker.

 

When it comes to enterprise profiles, we usually use an independent, highly available cluster of Open MQ brokers with a database backend together with a highly available cluster of GlassFish instances backed by a highly available HADB cluster. We discussed that each GlassFish instance can have multiple Open MQ brokers to choose from when it creates a connection to the broker; to configure GlassFish to use multiple brokers of a cluster, we can add all available brokers to the GlassFish instance and then let GlassFish manage how to distribute the load between broker instances. To add multiple brokers to a GlassFish instance we can follow these steps:

  • Open the GlassFish administration console.
  • Navigate to the instance configuration node or the default configuration node of the application server, depending on whether you want to change the configuration of the cluster's instances or of a single instance.
  • Navigate to Configuration > Java Message Service > JMS Hosts in the navigation tree.
  • Remove the default_JMS_host.
  • Add as many hosts as you would like this instance or cluster to use; each host represents a broker in your cluster (a command line alternative with asadmin is sketched after this list).
  • Navigate to Configuration > Java Message Service and change the field values as shown in table 11.
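If you prefer the command line over the administration console, a roughly equivalent sequence with asadmin might look like the sketch below; the target name and broker addresses are illustrative, and the option names should be verified against your GlassFish version:

asadmin delete-jms-host --target cluster1 default_JMS_host
asadmin create-jms-host --target cluster1 --mqhost 192.168.100.1 --mqport 7676 --mquser admin --mqpassword admin broker01
asadmin create-jms-host --target cluster1 --mqhost 192.168.100.2 --mqport 7676 --mquser admin --mqpassword admin broker02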

 

Table 11 Glassfish configuration for using a standalone JMS cluster

 

Field | Value and description

Type | We discussed the different types of integration earlier in this section; here we use the remote type, a standalone cluster of brokers.

Startup Timeout | The time GlassFish waits during its own startup before it gives up on the JMS service startup; if it gives up, no JMS service will be available.

Start Arguments | Any arguments that can be passed to the imqbrokerd command can be passed using this field.

Reconnect, Reconnect Interval, Reconnect Attempts | We should enable reconnection and set a meaningful number of retries for connecting to the brokers and a meaningful wait time between reconnection attempts.

Default JMS Host | Determines which broker is used to execute administrative commands on the cluster, such as creating a destination.

Address List Behavior | Determines how GlassFish selects a broker when it creates a connection; the selection can be random or based on priority, where the first available broker in the host list has the highest priority.

Address List Iterations | How many times GlassFish should iterate over the available host list in order to establish a connection if no broker is available on the first pass.

MQ Scheme | Please refer to table 12.

MQ Service | Please refer to table 12.

 

As you can see, we can configure GlassFish to use a cluster of Open MQ brokers very easily. You may want to know how this configuration affects your MDBs or a JMS connection that you acquire from the application server. When you create a connection using a connection factory, GlassFish checks the JMS service configuration and returns a connection whose host address is the address of a healthy broker; the selection of the broker is based on the Address List Behavior value, so it either selects a healthy broker at random or picks the first healthy host starting from the top of the host list.

Open MQ is one of the most feature complete message broker implementations available in the open source and commercial markets. Among its many features is the possibility of using multiple connection protocols to accommodate the wide variety of clients that may need to connect to it. Two concepts are introduced to cover this connection protocol flexibility:

  • Open MQ service: different connection handling services implemented to support a variety of connection protocols. All available connection services are listed in table 12.
  • Open MQ scheme: determines the connection scheme between an Open MQ client, a connection factory, and the brokers. The supported schemes are listed in table 12.

The full syntax for a message service address is scheme://address-syntax, and before using any of the schemes we should ensure that its corresponding service is enabled. By setting a broker's imq.service.activelist property, you can configure it to run any or all of these connection services. The value of this property is a list of connection services to be activated when the broker is started; if the property is not specified explicitly, the jms and admin services are activated by default.
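As a concrete illustration, a broker's config.properties could enable an additional service, and a client could then address that service directly; the host name and port below are placeholders:

imq.service.activelist=jms,ssljms,admin
mqtcp://broker1.example.com:7677/jms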

Table 12 available services and schemas in Open MQ 4.3

Scheme | Service | Description

mq | jms and ssljms | Uses the broker's port mapper to assign a port dynamically for either the jms or ssljms connection service.

mqtcp | jms and admin | Bypasses the port mapper and connects directly to a specified port, using the jms connection service.

mqssl | ssljms and ssladmin | Makes an SSL connection to a specified port, using the ssljms connection service.

http | httpjms | Makes an HTTP connection to the Open MQ tunnel servlet at a specified URL, using the httpjms connection service.

https | httpsjms | Makes an HTTPS connection to the Open MQ tunnel servlet at a specified URL, using the httpsjms connection service.

 

We usually enable only the services that we need and leave the other services disabled. Enabling each service requires its own measures for firewall configuration and authorization.

 

Performance consideration

Performance is always a hurdle in any large scale system because of the relatively large number of components involved in the system architecture. Open MQ is a high performance messaging system; however, you should know that its performance differs greatly depending on many factors, including:

  • The network topology
  • The transport protocols used
  • Quality of service
  • Topics versus queues
  • Durable versus reliable messaging (some buffering takes place, but if a consumer is down long enough the messages are discarded)
  • Message timeout
  • Hardware, network, JVM, and operating system
  • Number of producers and number of consumers
  • Distribution of messages across destinations, along with message size

 

Summary

Messaging is one of the most basic and fundamental requirements in every large scale application. In this article you learned what OpenMQ is, how it works, what its main components are, and how it can be administered either using Java code and JMX or using its command line utilities. You learned how a Java SE application can connect to Open MQ using the JMS interfaces and act as a consumer or producer. We discussed how to configure Open MQ to perform access control management. And you learned how OpenMQ broker instances can be created and configured to work in a mission critical system.


Comments


This article is great and pretty useful for beginning to understand OpenMQ, HADB and the JMS part of GlassFish, but there is something that I miss. When I create the first cluster (within a domain) with the cluster profile in GlassFish, using the default config and then the default_JMS_host, is the broker the same as the DAS's? In the article it is written that when I create an instance in a cluster each instance has its own MQ broker, but when I add an instance to a cluster the broker is always the same. I don't understand the following:
1 Is the broker by default unique for each GlassFish domain?
2 Every time that I create an instance in a cluster, must I create its own broker, or is it created by default and I just don't know where to find it? I ask this because it seems to me that each instance by default speaks with the same broker (the DAS broker).
3 If I want to achieve what is shown in figure 5 (one broker for each instance, without HADB), should I use the embedded profile, or is the local one also good? If the answer to question 2 is that I need to create the broker manually, is the communication between them achieved in an automatic fashion, or do I need to set that up?
4 Are the brokers in figure 5 actually used for enabling (it's written for administrative use) the communication between the DAS and the node agent? Is it right to use them for application purposes, or is it advisable to create other brokers? In that case, do I need to configure in some way that the brokers are in the same cluster?
Sorry for the relatively dummy questions, but I'm a beginner when it comes to clustering and, as everyone can see, my English is a little bad.