
Java ME Testing - Debugging Test Failures

Posted by alexeyp on February 22, 2007 at 11:44 PM PST

What to do when tests of a Java ME implementation test suite fail?

This article offers some suggestions for debugging test
failures - with
a special focus on the JT
harness
and ME
Framework
features that support debugging. My initial
motivation for writing this article was to announce improvements in the
Test
Export
feature; however, the topic is just so entertaining that
I couldn't stop there. :-)

Debugging in Java Micro Edition (ME) has a number of challenges. Issues
related to debugging problems in a Java ME implementation can include
any of the following items:

  • The problem can reside at the Java layer or in the native
    code.
  • A mobile device is usually a component of a larger system rather
    than an isolated system.

    For example, this may involve connectivity of different kinds or
    integration of a mobile application with server-side components.
  • Multiple technologies and standards are involved.

    For example, the CLDC/MIDP implementation may communicate with
    Java Card using SATSA and with Java SE/EE using the RPC API of
    JSR 172 (http://www.jcp.org/en/jsr/detail?id=172).
  • The device might not provide access to the Java console or
    support debugging protocols.

In addition, automated test suites do not automatically simplify the
problem of debugging, but
instead add their own set of unique complexities. The fact that some
tests fail might not
provide enough information to help you identify the problem. Often it
can take time and require a certain amount of effort to determine what
is wrong.

Problems specific to automated Java ME test suites have the following
roots:

  • Automated test execution on CLDC/MIDP requires a special
    AMS mode, autotest.

    CDC/PBP execution mechanisms may be complex as well. Basic
    information on this topic can be found in the first article:
    Introduction to Java ME Testing Tools
    (http://weblogs.java.net/blog/alexeyp/archive/2006/11/testing_java_me_1.html).
  • The distributed test execution mechanism.

    This includes execution of a test with the test harness (JT
    harness) running on the desktop and the test agent running on the
    device. In addition, the distributed tests themselves consist of
    multiple components (such as SE and ME components).
  • A complex configuration mechanism
    (http://weblogs.java.net/blog/alexeyp/#Configurable).

    Before starting to run tests, you need to configure the test
    harness according to the environment and the implementation under
    test. This complicates debugging by making the whole system more
    integrated: tighter integration makes it difficult to isolate a
    piece of test code from the test harness in order to identify the
    problem.

Debugging Tips

The goal of this section is
to provide you with a list of practical steps that could be taken by
the ME
Framework and JT harness user in the process of analyzing test
failures. These debugging tips range from specialized know-how to the
quite natural and obvious.

Tip 1 - Browse the code, test specification, and test output

Suppose the tests are written well: they are atomic, documented, and
provided with good trace information, and the type of failure allows
this trace to be viewed at the console or in the test result stored by
the test harness. If all these 'ifs' hold, it is enough to look at the
test spec, sources, and output to identify the problem. This is not a
rare case, by the way. An example JT harness test result file contains
all the information that the test harness can provide to help with
debugging; it has sections for:

  • recognized parameters of Test Description
  • values of environment variables
  • version
  • time stamp
  • exact command used for test execution
  • test output itself
  • lots of other stuff, useful and not

As far as I can see, other than the standard, old-style .jtr files and
reports, most teams working actively with JavaTest harness-based test
suites use reports customized for their needs and the type of work they
are doing.

Tip 2 - Turn on logging

In the ME Framework, you can configure the logging level in the
following places:

  • Logger API used by the ME Framework implementation classes.
  • 'Debug output' options for ME Framework components

Turn on Logger

This type of tracing is available for the server side of the ME
Framework. It is more useful when identifying problems with
the ME Framework than
when debugging problems with tests. To identify problems with the
framework, you need to understand what is going on
with the internals, what test filters are used, etc. It can also be
useful when debugging problems with types of tests that use some
service run
by the test harness.

To turn this type of logging on, you need to specify the config file
through a Java system property when starting the JavaTest harness, like
this:

java ^
    -Djava.util.logging.config.file=D:/exec/0-configs/j2me-fw-log.properties ^
    -jar javatest.jar

With a log config file like the one sketched below, you will see this
type of logging content on your console.
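
As a rough illustration, here is what such a java.util.logging
configuration file might contain. This is a hedged sketch: the
ME-Framework-specific logger namespace at the end is an assumption, not
taken from the framework documentation, so replace it with the actual
logger names used by your version.

handlers=java.util.logging.ConsoleHandler
# Default level for all loggers.
.level=INFO
# Let the console handler pass detailed records through.
java.util.logging.ConsoleHandler.level=FINEST
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
# Hypothetical ME Framework logger namespace; adjust to the real one.
com.sun.tck.j2me.level=FINE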

ME Framework Tracing Capabilities

As previously mentioned, one of the problems here is that the entire
system is distributed. Each major piece of functionality is provided by
several components working on both the harness and the device sides. To
be able to track process steps, it is possible to turn on logging of
debug information to the console for every subsystem. These subsystems
are listed below. Each of them has a nice illustration in the ME
Framework Developer's Guide (1.5 MB,
http://java.sun.com/javame/meframework/docs/meframework_devguide.pdf),
and in the following text I refer to the corresponding pages in that
document.

  • Test execution subsystem (CLDC/MIDP), Figure 3-4 on page 27.

    The trace lists control messages between the harness and the device
    that correspond to the Autotest execution flow
    (http://weblogs.java.net/blog/alexeyp/archive/2006/11/testing_java_me_1.html#Autotest).
    This may be useful if test execution is very slow, for example when
    multiple tests are packaged per test bundle: to make sure that
    there is progress, to find exactly which test crashed the VM, or to
    see the exact test parameters and test result outside of the
    harness. The real-time trace can be observed on the device console,
    if the console is available. See a snapshot from the Configuration
    Editor that shows which interview questions are responsible for
    this option. Check the server- and device-side examples of trace
    output.

  • Distributed test framework, Figure 3-5 on page 29.

    Here you can see all messages exchanged by this framework: who
    sends what. Needless to say, it is one of the important features
    used to investigate failures of distributed tests. A simple
    interactive test with 3 static images sends that many messages
    (http://weblogs.java.net/blog/alexeyp/archive/distributed.log) to
    pass.
  • Agent (CDC/PBP).

    Passing the '-trace' option to the agent displays everything going
    on inside it on the console (if available). This is a standard
    feature of the JT harness Agent; see the command sketch below. The
    example of the Agent trace can serve as a good illustration of the
    CDC-specific approaches to test suite scalability
    (http://weblogs.java.net/blog/alexeyp/archive/2006/11/testing_java_me_1.html#CDC).
    Note the data exchanges and test class instantiations for every
    test.
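
    As a rough sketch (hedged: the agent class name and options below
    are assumptions based on the standard JT harness agent, not taken
    from this article; check the JT harness documentation for the exact
    invocation), a command-line agent with tracing enabled might be
    started like this on a CDC-based device or emulator:

    java -cp javatest.jar com.sun.javatest.agent.AgentMain -passive -trace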

When the JT harness or a ME Framework-based test suite is used as part
of an automated regression testing subsystem for nightly/weekly test
execution, we turn all debugging options on by default, to have more
data ready for offline failure analysis if required.

Tip 3 - Execute the test standalone

The natural step in isolating the problem is to isolate the failing
test from the test harness environment by executing it standalone. In
some situations it may also be possible to modify the original test
source, recompile it, and execute it in a debugger.

Each individual test usually has an application entry point that allows
you to execute it outside of the test harness. This is usually a static
main() method taking an array of parameters (the same parameters as
those passed to the test by the test harness). You can put the test
classes on the classpath and call the test by its class name.
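
As a minimal sketch of what such an entry point typically looks like
(the class name and the parameter handling here are hypothetical, not
taken from an actual test suite), consider:

// Hypothetical test class with a standalone entry point. The static
// main() accepts the same arguments the test harness would pass.
public class SampleStandaloneTest {

    public static void main(String[] args) {
        // A real test would parse all harness-style arguments; this
        // sketch only looks for the single '-envValue' parameter.
        String envValue = null;
        for (int i = 0; i < args.length - 1; i++) {
            if ("-envValue".equals(args[i])) {
                envValue = args[i + 1];
            }
        }
        System.out.println("Running test with envValue=" + envValue);
        // ... perform the actual checks and print PASS or FAIL ...
    }
}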

To get the values of test parameters, you may need to dig into the test
environment. To execute a standalone test corresponding to the
following sample test description:

Javatest Test Description:

    Item          Value
    ------------  -----------------------------------------------------
    title         java.security.KeyFactory generatePublic() Tests
    source        generatePublicTests.java
    executeClass  com.sun.tck.satsa.tests.api.java.security.KeyFactory.generatePublicTests
    keywords      runtime positive distributed
    executeArgs   -envValue $sampleValue

you must execute the following commands:

export sampleValue=<find 'sampleValue' variable in the test environment>

java -classpath $TEST_HOME/classes \
    com.sun.tck.satsa.tests.api.java.security.KeyFactory.generatePublicTests \
    -envValue $sampleValue

Or even simpler: the JT harness result file contains the exact command
and the values of the parameters that must be executed. See the section
messages:(1/693) in the example.

Unfortunately this works well only in CDC and Java SE
execution
modes. This approach will not work for CLDC/MIDP, where you
need the application to be packaged and deployed before you can execute
it.

Tip 4 - Store for offline analysis

In CLDC/MIDP execution mode, instead of using the autotest feature, one
can download individual test applications manually for offline
analysis, using a web browser, for example. This can help identify
problems related to interpreting the application descriptor and
manifest.
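
For example (a hedged sketch: the URL and port are borrowed from the
emulator autotest example later in this article, and the exact path for
fetching a single test depends on how the execution server is
configured), downloading a test descriptor with a command-line HTTP
client might look like:

wget http://localhost:8080/test/getNextApp.jad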

Tip N - Use common sense

This list is not, and cannot be, complete; here are a few more tips off
the top of the head:

  • Vary configuration parameters
  • Add trace printing into the test itself or into the tested code
  • Execute the test on another implementation to verify its
    correctness
  • ...

We try many different debugging techniques all the time, and very often
they work. But the most effective solution, the one that is supposed to
help with digging out the most complex problems, is ... Test Export.

Test Export

The ME Framework is quite a complex set of interacting components
involved in test execution. When a test is reported as failed by the JT
harness, it's not always evident where the bug is and how to track it
down. Most often, when users are trying to track down a bug, they want
to run the failed test standalone on the device under test. This is
where the Test Export feature can help. Test Export converts an
individual test into a small standalone Java ME application containing
the test, which can be executed, debugged, or used in any way
convenient for the user.

Getting Test Export as a side-effect of Autotest execution

A very simple, partial implementation of the test export was
available
in the ME Framework from the beginning. It was a hack of the standard
CLDC/MIDP execution mechanism, described by the following scheme.

[Figure: texec.PNG]

FIGURE X. Autotest mode.

In order to turn a test into a CLDC/MIDP application that executes
separately from the harness, one must replace all these arrows, which
represent message exchanges, with static data.

For the initial ME Framework implementation of the test export feature,
test packages were supplied with all information necessary for test
execution (which replaced links 3 and 4). The execution server packaged
test bundles without waiting for any requests from the AMS (links 1 and
2). For the test harness, the test status indicated success of
packaging, not actual test execution (link 5). When executed, tests did
not send results to the execution server but just printed them on the
console (link 5). Tests were packaged with all test data, but they were
not automatically downloaded to the device, and test packages were not
removed after the execution cycle completed.

Even this simple and limited approach made it possible to solve some
troubleshooting-related tasks. For example, a QA engineer responsible
for TCK execution on a development build of the implementation could
use these exported test packages as attachments to bug reports. A
development engineer not familiar with TCK execution could then use
them to reproduce the problem and verify the fix.

Test Export in
ME Framework 1.2

The previous implementation of Test Export was not complete. Evident
limitations were that only simple, non-distributed tests were exported,
and test sources were not exported at all. For the user, this meant
that the created application could not be modified and rebuilt. The
previous implementation also exported only .jar files, not .jad files.

Implementing the full Test Export feature for all types of
tests is a challenging task requiring very accurate tracking of all
test dependencies. The following are examples of such dependencies:

  • Custom JAD/manifest entries
  • Test-specific security conditions
  • Code and resource dependencies at the test and test suite level

Identifying all test sources associated with a distributed
network test is not a simple task. To make this feature really
user-friendly, one should provide parts of the test execution
infrastructure, such as the provisioning server for MIDP.

Many of these problems are addressed in the new Test Export
implementation that was recently integrated into the ME Framework by
Dmitry Trounine.

To initiate Test Export for an ME Framework 1.2-based test suite, the
user should first open the Configuration Editor and, in the interview,
turn test export mode on. Then the user should specify the export
directory (the location where exported tests will be saved) and maybe
some additional parameters, such as the 'Prefix of URLs to JAR files'
parameter specific to the MIDP platform. See the following screenshot
of the JT harness Configuration Editor: Configuring Test Export.

Finally, the user should select the test or tests to export
and run them just as in a regular test run. In test export mode,
instead of being executed on a device, the selected test or tests are
saved in the specified test export directory.

What do you get with test export?

When the test export is
done, the following
content is created in the export directory:

  • Java ME applications in JAR files named test1.jar,
    test2.jar, etc. containing the exported tests.

Just as with any other Java application, these applications can be
uploaded to any suitable device and launched. The exported tests are
executed in the same way that they are executed by the ME Framework,
but in standalone mode, printing the diagnostics and test result on the
device screen.

  • In MIDP mode, Java application descriptors (JAD files) are
    generated: test1.jad, test2.jad, etc.

These descriptors are properly configured:

  • They contain the same attributes as in regular test runs.
  • They are signed and include certificates if the test
    suite was configured for trusted MIDP security mode.
  • They contain proper links to the exported JAR files.

You can use the generated JAD files for OTA provisioning of
exported tests by putting them on any suitable HTTP server or by using
the special provisioning server included with the
ME Framework.

  • Java sources of exported tests in the src subdirectory.

These source files can be used for debugging or rebuilding
the exported tests.

  • Class files in the classes subdirectory.

  • An Ant build script in the build.xml file.

Yeah! Users can change the exported tests by modifying their sources
and rebuilding them with this build script. It contains all the
necessary targets for compiling sources, updating JAR files from
classes, and signing JAD files. In addition, a build.properties file is
also generated in the export directory. It defines properties used by
the build script and can be used to configure the build. For example,
the trusted property, if defined, triggers signing of JAD files when
updating exported tests.
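
As a tiny, hedged illustration (only the trusted property is described
in this article; the other keys and the file layout depend on the
generated script), a tweaked build.properties fragment might look like:

# Defining 'trusted' makes the build script sign the JAD files
# when the exported tests are rebuilt.
trusted=true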

  • Additional libraries and tools in lib subdirectories.

These are not part of the exported tests and are not required for using
them, but they can also be helpful. For example, exportSigner.jar is a
tool for updating the signatures and certificates of MIDlet suites
containing tests, in cases where the tests have been modified and the
JAR file has changed.

Let's look at this new feature from the point of view of a Sun
Java Wireless Toolkit (Wireless Toolkit) user. Before the test export
feature was introduced, users could execute tests on the emulator by
using its autotest mode:



$WTK_HOME/bin/emulator -Xautotest
http://localhost:8080/test/getNextApp.jad

This command launches the emulator and specifies a URL at which the
emulator should look for tests to download. At this URL resides the
execution server of the ME Framework. The execution server responds to
'getNextApp.jad' requests by sending new test after new test from a
large test suite. The emulator falls into a loop repeating three steps
(install, execute, remove) for each downloaded test until the execution
server has no more tests to send. There was no way to get just one test
(the corresponding JAD and JAR files) for standalone execution. In
addition, users of the Wireless Toolkit know that autotest mode doesn't
allow debugging!

Now, with the test export feature, a user can select the test to debug
and export it to any appropriate location. JAR and JAD files are
generated and can be executed on the emulator. The following is a
typical command for launching the exported test on the Wireless Toolkit
emulator:



$WTK_HOME/bin/emulator -Xdescriptor $EXPORT_DIR/test3.jad

During its execution, the test prints the output to stdout.
Click here to view an
example.

This time, debugging the test during its execution is easy. Just add
the -Xdebug argument to the last command, with all the necessary
parameters. You can then use your preferred debugger with the IDE of
your choice to debug the test with its sources, which are also
exported.
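
For instance (a hedged sketch: the KDWP debug options and port number
below are assumptions, and their exact form depends on the Wireless
Toolkit version, so check the emulator documentation), the debug launch
might look like:

$WTK_HOME/bin/emulator -Xdebug \
    -Xrunjdwp:transport=dsocket,server=y,suspend=y,address=5000 \
    -Xdescriptor $EXPORT_DIR/test3.jad

You can then attach the remote debugger of your IDE to the chosen port.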

Future Plans

The next planned improvement for debugging support will come
from the integration of Skavas's GUI Agent. This will allow
you to use GUI and RMS in addition to console output for execution of
exported tests. Watch for more information about this in the future -
it's a topic for another post.

The Test Export feature is under active development right now, and you
can help improve it by posting on the ME Framework forums
(http://forums.java.net/jive/category.jspa?categoryID=57). Propose your
improvement or new feature and you may see it in one of the next
releases.
