
GlassFish 3.1 m2 supports creating and starting instances on remote hosts.

Posted by carlavmott on June 24, 2010 at 9:46 AM PDT

One of the main features in GlassFish 3.1 is clustering, and for m2 we have added support for creating and starting instances on remote hosts.  The underlying GlassFish 3.1 code uses SSH to connect to the remote hosts and introduces the concept of a node, which the system uses to determine where instances will be created or started. At this time the only connection type supported is SSH.  Users now have a few new commands to manage nodes.

  • create-node-ssh  creates a node that describes the hostname where the instance will run and the location of the GlassFish installation.
  • delete-node-ssh and list-nodes delete and list nodes, respectively (see the sketch below).
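
For example, a node that is no longer needed can be removed by passing its name as the operand (a sketch using the nodebar node created later in this post):

$asadmin delete-node-ssh nodebar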

Below is a simple example of creating a cluster, creating an instance, and starting the instance, all from the administration host, i.e. the DAS (Domain Administration Server).

First, some assumptions about the setup for GlassFish.  For m2, users have to install and start GlassFish on all hosts that are part of the cluster. We do not currently support installing or starting GlassFish on a remote host; that is planned for a future release.  Second, SSH needs to be set up on both hosts, as it is the underlying mechanism used to run commands on the remote hosts.  Currently we have only tested on UNIX (Mac OS, Ubuntu, and OpenSolaris), but for m3 we will be including Windows as a tested platform.

There are many blogs that talk about setting up SSH, so I won't go into all the details here. To summarize how I set up the authentication key: I used ssh-keygen -t dsa to create the key file in my .ssh directory.  Note: a limitation for m2 is that we don't support encrypted key files, so you must not set a passphrase when creating keys.  I then used scp to copy the key file id_dsa.pub to the host I want to log in to, putting it in the .ssh directory there and calling it authorized_keys2.  I also had the same username on both systems, which further simplified things.  At that point I could ssh into the remote host without supplying a password.  This is a good test to see if you are set up correctly before you try the commands below.
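
Putting those steps together, the key setup looks roughly like this (a sketch: bar is the remote host used in the example below, and it assumes the same username on both machines):

$ssh-keygen -t dsa
$scp ~/.ssh/id_dsa.pub bar:.ssh/authorized_keys2
$ssh bar

If the last command logs you in without prompting for a password, SSH is set up correctly for the commands that follow.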

In this example, we will create a cluster with two machines, foo and bar. foo will be the DAS, which has all the information about the servers running in the cluster.  Recall that in this release we have introduced a new CLI command, create-node-ssh, to create a node that is used to locate the host for a particular instance.

create-node-ssh has three required parameters:

  1. --nodehost:  the name of the host where the instance lives
  2. --nodehome:  the GlassFish installation directory on that host
  3. name:  the name of the node being created, passed as the command's operand (nodebar in the example below)

All other parameters default to reasonable values.  The SSH port defaults to 22; if no username is provided, we default to the user running the command, and we look for the key file in the home directory of that user.  All instances are now required to reference a node element, which GlassFish uses to determine where the instance will be created or started.  This means that we have added a --node option to the create-instance command. As a convenience, there is a default node for localhost, so if the node option is not specified when the instance is created, a reference to the localhost node is added automatically.  The localhost node contains only a node name of localhost; we can get the GlassFish installation directory from the server.
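
When the defaults don't fit, the SSH settings can be given explicitly. Below is a sketch assuming the option names --sshport, --sshuser, and --sshkeyfile; check asadmin create-node-ssh --help on your build, since the m2 option names may differ:

$asadmin create-node-ssh --nodehost=bar --nodehome=/home/cmott/glassfishv3/glassfish --sshport=22 --sshuser=cmott --sshkeyfile=/home/cmott/.ssh/id_dsa nodebar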


Let's see how this works.  All commands are run on the DAS machine, and as long as there is SSH access to the other host we will be able to create and start instances.


Install and start GlassFish 3.1 m2 on foo and bar.  

On host foo (the DAS) we run all the commands. 

$asadmin create-cluster c1

Command create-cluster executed successfully.

$asadmin create-node-ssh --nodehost=bar --nodehome=/home/cmott/glassfishv3/glassfish nodebar

Command create-node-ssh executed successfully.

$asadmin list-nodes 
localhost
nodebar

Command list-nodes executed successfully.

$asadmin create-instance --cluster=c1 --node=nodebar instance1

Command create-instance executed successfully.

$asadmin create-instance --cluster=c1 instance2

Command create-instance executed successfully.

$asadmin list-instances
instance2 not running
instance1 not running

Command list-instances executed successfully.

$asadmin start-cluster c1

Command start-cluster executed successfully.

$asadmin list-instances

instance2 running
instance1 running

Command list-instances executed successfully.


Notice that when creating instance2 I did not specify a node, so the default localhost node is used.  In a future release of GlassFish, create-node-ssh will test whether a connection can be made to the remote host when the node is created.  If the host is not reachable, the user will still be able to create the node by setting the --force option to true.
