Testing Java ME Implementations - AMS
API testing vs AMS testing
My previous article described the primary test execution mechanisms used in the world of Java ME implementation test suites:
- the 'Server/Agent' approach, used for CDC implementations, where test code is downloaded by an Agent application from the Server
- the 'autotest' approach, where tests are packaged into a sequence of applications that are repeatedly downloaded/executed/removed.
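For a concrete mental model, the 'autotest' cycle can be sketched as the following harness-side loop. This is a simplified illustration with made-up names (a fixed in-memory queue stands in for the test server); it is not an actual harness API:

```java
import java.util.Arrays;
import java.util.Iterator;

// Sketch of the 'autotest' cycle: the harness repeatedly downloads the
// next pre-packaged test application, installs it, runs it, and removes
// it, until the server has no more applications to serve.
public class AutotestCycle {
    // Stand-in for the test server: a fixed queue of application names.
    private final Iterator<String> server =
            Arrays.asList("TestApp1", "TestApp2", "TestApp3").iterator();

    int executed = 0;

    void run() {
        while (server.hasNext()) {          // server still has tests
            String app = server.next();     // 1. download the next bundle
            install(app);                   // 2. install it
            execute(app);                   // 3. run the packaged tests
            remove(app);                    // 4. remove it, then repeat
        }
    }

    void install(String app) { /* AMS install operation would go here */ }
    void execute(String app) { executed++; }
    void remove(String app)  { /* AMS remove operation would go here */ }

    public static void main(String[] args) {
        AutotestCycle cycle = new AutotestCycle();
        cycle.run();
        System.out.println(cycle.executed + " applications executed"); // 3 applications executed
    }
}
```

Note that every pass through the loop is the same fixed install/run/remove scenario; that rigidity is exactly what the limitations discussed below come from.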
Please make sure to familiarize yourself with these concepts; I will be referring to them later.
Today I would like to focus on the limitations of these approaches, describe what types of functionality cannot be tested this way, and present several solutions. Some of these ideas are implemented in the ME Framework; there are variations implemented in other Java ME testing products.
Talking about limitations, the main point is that with these mechanisms one can only verify what is going on inside a single application while it is running. These mechanisms do not allow application restarts or crashes, assume a fixed scenario, and permit only a single application running at any given time.
Note that this is usually all that is needed to test Java API behavior. But the API is not the only focus of standards and testing in the Java ME world.
Examples of Java ME standards that cannot be tested with the above-mentioned approaches are as follows:
The OTA specification, among other things, describes criteria that result in an application installation failure. Tests for these requirements need to meet these criteria and verify that the installation fails. API testing mechanisms work only if the install & run steps pass successfully.
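As an illustration, one classic way for such a test to provoke a guaranteed installation failure is an application descriptor that contradicts its JAR. All names and values below are made up for the sketch; the point is that MIDlet-Jar-Size does not match the actual JAR, so a compliant AMS must reject the suite:

```
MIDlet-Name: InstallFailTest
MIDlet-Vendor: Example Vendor
MIDlet-Version: 1.0
MIDlet-1: InstallFailTest, , example.ota.InstallFailMIDlet
MIDlet-Jar-URL: http://testserver.example.com/InstallFailTest.jar
MIDlet-Jar-Size: 999999
MicroEdition-Profile: MIDP-2.0
MicroEdition-Configuration: CLDC-1.1
```

The test then passes when installation fails, which is a verdict the standard API testing mechanisms simply cannot express.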
The PushRegistry specification talks about static registration of Push event handlers. If an application is provided with special attributes, it gets registered in the PushRegistry to handle, for example, incoming WMA messages. Testing this functionality assumes the following steps are present in the test procedure:
- an application with a Push registration is installed
- the push event is initiated
- the application is launched to handle the event
This scenario does not fit into the Server/Agent scheme, because it requires an application installation to happen as part of the test, and it does not fit into the autotest scheme, because it does not have the automatic 'run/remove' steps that are mandatory parts of the 'autotest' cycle.
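For reference, the static registration mentioned above is done with the MIDlet-Push-&lt;n&gt; descriptor attribute from MIDP 2.0, whose value is a connection URL, the MIDlet class name, and an allowed-sender filter. The port number and class name here are made up:

```
MIDlet-Push-1: sms://:16123, example.push.PushTestMIDlet, *
```

With this attribute in place, the AMS itself launches the named MIDlet when a message arrives on the given port, even if the application is not running, which is exactly the behavior the test procedure above has to exercise.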
The CHAPI specification talks about communication between two applications, the ContentHandler and the Invoker.
Overall, there is a considerable number of JCP and non-JCP standards that focus not only on APIs but also on applications. Testing these standards requires non-standard techniques.
The cases above are usually associated with the platform component named JAM (Java Application Manager) or AMS (Application Management Software); therefore, the testing techniques that involve AMS operations (install/update/run/remove) or the application life cycle (start/pause/stop/destroy) will be referred to as 'AMS testing'.
OTA Testing Framework
'OTA Testing Framework' is the established term for the technique that was first used to test the OTA specification. Below is a description of this specific case first, followed by some analysis and a discussion of potential variations.
Typically, the OTA Testing Framework consists of the following components:
- OTA test - the test scenario, including standard and custom AMS operations. It operates with the applications associated with this test. It is executed on the server side, serves as the main communication point, and is responsible for reporting the whole test result.
[Figure: AMS test components]
- Test application(s) - zero or more pre-packaged applications that may contain part of the test code that needs to be executed on the device. They may communicate with the main test.
In this framework the OTA test is executed on the test harness side and uses the OTA server to publish applications:
[Figure: OTA test and test applications]
There is also an important component called the OTA Server. In this case it is an HTTP server controlled by the OTA test and responsible for the provisioning of test applications. A nice picture of the OTA Testing Framework is provided in the ME Framework Developer's Guide (PDF version: http://java.sun.com/javame/meframework/docs/meframework_devguide.pdf), check Figure 3-7.
The approach described above is universal and powerful. It can be used to test the most exotic cases. At least, so far it remains the absolute weapon used when no other technique works.
Last Thing to Use
The only disadvantage of the OTA Testing Framework is that it is terribly interactive. During a certification run TCKs do not allow any automation, so every AMS operation must be performed manually. Even with a simple install/run/remove cycle, where one needs to read the instructions, enter a URL, wait for the download, launch, and execution to complete, and then remove the application, a test case takes at least one minute. Now imagine yourself a QA engineer who has to run a TCK that contains ~40 OTA tests as a regression test suite on a regular basis, during the whole development cycle, without any automation.
Let's consider a variation of the above approach that can be used when it is OK to narrow the scope of covered possibilities and simplify the usage. This is the approach used initially in the PBP & TV TCKs to verify the parts of the specifications related to the application life cycle. In that case it was also used for testing Ixc - Inter-Xlet Communication.
The idea is to specify a Java interface to the AMS that allows one application to perform life cycle operations on another application. It was created for PBP/TV, where testing is based on the Server/Agent model. The scheme works in the following way:
- An implementation developer provides an implementation of the XletManager interface that allows a test application executed on the device to invoke AMS operations through a Java API. In the general case, if a Java interface to the AMS is not available, the XletManager implementation may be interactive.
- The test suite user manually downloads and installs on the device the Agent application and a set of Xlet applications that will be used in tests.
- Tests executed within the Agent instantiate the XletManager and operate with the additional pre-installed test applications.
[Figure: XletManager and test applications]
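To make the scheme concrete, here is a minimal sketch of what an XletManager-like interface and a life cycle test on top of it could look like. Only the interface name comes from the text; all method names, state strings, and the in-memory mock are my assumptions, not the actual PBP/TV TCK API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical XletManager-style interface: lets one application drive
// the life cycle of another, pre-installed application via the AMS.
interface XletManager {
    void start(String xletName);
    void pause(String xletName);
    void destroy(String xletName);
    String getState(String xletName);
}

// In-memory mock so the scenario below is runnable on its own; a real
// implementation would delegate these calls to the platform AMS.
class MockXletManager implements XletManager {
    private final Map<String, String> states = new HashMap<String, String>();
    public void start(String name)   { states.put(name, "ACTIVE"); }
    public void pause(String name)   { states.put(name, "PAUSED"); }
    public void destroy(String name) { states.put(name, "DESTROYED"); }
    public String getState(String name) {
        String s = states.get(name);
        return s == null ? "NOT_RUNNING" : s;
    }
}

public class LifeCycleTest {
    public static void main(String[] args) {
        XletManager ams = new MockXletManager();
        // A test running inside the Agent drives another application's
        // life cycle and verifies each observed state.
        ams.start("TestXlet1");
        System.out.println(ams.getState("TestXlet1")); // ACTIVE
        ams.pause("TestXlet1");
        System.out.println(ams.getState("TestXlet1")); // PAUSED
        ams.destroy("TestXlet1");
        System.out.println(ams.getState("TestXlet1")); // DESTROYED
    }
}
```

When no Java interface to the AMS exists, an implementation of the same interface could instead prompt the operator to perform each operation manually, which is the interactive variant mentioned above.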
While in the PBP TCK this approach is used to initiate only life cycle operations from a test executed on the device, it can also be extended to any AMS operations like install/update/run/remove.
Comparing with the OTA Testing Framework
The main benefit of the XletManager-like approach is that it allows a single Server/Agent test execution model to be used for the whole test suite. With this specific approach there is no dynamic application installation/removal and no application provisioning component; test applications are preinstalled before test execution begins.
There may be restrictions that complicate the use of this model or simply make it impossible, such as:
- an interactive implementation of XletManager may be complicated for devices with a small screen
- the platform may not allow more than one application to run at a time
Since there are several active components involved in the execution of a test, there are several places where the main scenario responsible for the execution of AMS operations may live. It may be the server side (OTA Testing Framework) or the client side (XletManager-like) of the distributed test. It can even be the AMS itself: the 'autotest' mode that an AMS is supposed to provide for automated execution of test suites on CLDC/MIDP is also a scenario of AMS operations that can be used for testing.
The interface to the AMS may be implemented through plugins provided by implementation developers. These plugins may be interactive in the general case, or automated. Automated testing may be used for QA/regression testing, or even for certification in some cases.
There are interesting possibilities for using standard Java ME APIs to communicate with the AMS, for example CHAPI. As a minimum, it can be used to have the test code distributed across multiple MIDlets or MIDlet suites that are registered as ContentHandlers and invoked from the main test scenario.
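For instance, a helper MIDlet can be registered statically as a content handler through the JSR 211 descriptor attribute. To the best of my recollection the attribute value lists the handler class followed by content types, suffixes, actions, and locales, but treat the exact syntax, and all names below, as assumptions to be checked against the CHAPI specification:

```
MicroEdition-Handler-1: example.chapi.TestHandlerMIDlet, text/x-test-invocation, , run
```

The main test scenario, running in another MIDlet suite, can then use Registry.invoke() to launch this handler and pass test parameters to it, giving the test a standard-API way to start a second application.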
Over time, the number and complexity of Java ME implementation
tests dealing with AMS have increased. I believe this is a natural
consequence of consistent focus on standards in the area of application
management for consumer devices.
An interesting and important problem that comes along with the number of appearing and evolving AMS-testing frameworks, which are often interactive, is automation; it deserves a separate post.