JavaOne report: Concurrency

Posted by emcmanus on May 27, 2008 at 8:47 AM PDT

This is the third installment in my summary of the sessions I
attended at JavaOne this year.

The previous installments covered Java and Java programming
practice. This one covers concurrency.

Once again, the titles here are not always the ones that appear
in the conference programme. I tried to undo the mangling that
the Trademark Police typically impose on talk titles, so let me
just state up front that Java, JMX, and JVM are trademarks of
Sun Microsystems, Inc., and any other abbreviation beginning
with J has a sporting chance of being one too.

Here are the talks summarized in this installment:

Let's Resync: What's New for Concurrency in Java SE

Experiences with Debugging Data Races

Transactional Memory in Java Systems

In the remaining installments, I'll cover JMX and miscellaneous
other stuff.

TS-5515, Let's Resync: What's New for Concurrency in Java SE,
Brian Goetz. It's well known by now that processors are not getting
much faster, but they are getting much more parallel,
so applications need to be parallel to exploit them. Brian
suggests that the existing tools in java.util.concurrent are
fine when you're dealing with a small number of CPU cores.
But what about the future, when there may be hundreds or even
thousands? "All the cores you can eat."

The answer might be a massive migration to functional
programming languages, but assuming people stick with Java, they
will need the fork/join framework being designed for Java 7.
The basic idea is to facilitate a divide-and-conquer approach,
where you divide your problem into subproblems, and those into
subsubproblems, and so on until you have enough work for your
cores. A nifty technique called work stealing makes this fill up
available cores without requiring work-queue bottlenecks.
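The divide-and-conquer pattern can be sketched with the fork/join API as it eventually shipped in Java 7 (java.util.concurrent.ForkJoinPool and RecursiveTask); the SumTask class and threshold here are illustrative choices, not from the talk:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum: split the range until it is small enough to
// compute directly, and let work stealing keep all the cores busy.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    public SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {          // small enough: just do it
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;           // otherwise split in two
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                         // run the left half asynchronously
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);             // 4999950000
    }
}
```

Forking one half and computing the other in the current thread avoids creating a task per leaf; idle workers steal the forked halves.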

A class called ParallelArray
will allow the expression of parallel tasks without having to
code an explicit divide-and-conquer algorithm. The idea is that
you can write something like this...

double highestGpa = students.withFilter(graduatesThisYear)
                            .withMapping(selectGpa)
                            .max();

...where graduatesThisYear is a predicate object
and selectGpa is a transforming object. (This is
one of the main use cases cited
for closures,
by the way.) The calls to withFilter
and withMapping just return new objects, but the
call to max triggers the whole computation of
filtering and mapping in parallel. Very nice!
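The same lazy filter/map/reduce shape later landed in Java 8 streams, which makes the idea easy to demonstrate today; the Student class and the sample data below are hypothetical, and this is a sketch of the pipeline style rather than the ParallelArray API from the talk:

```java
import java.util.List;
import java.util.OptionalDouble;

public class HighestGpa {
    // Hypothetical student record for the example.
    static class Student {
        final String name; final boolean graduatesThisYear; final double gpa;
        Student(String name, boolean graduatesThisYear, double gpa) {
            this.name = name; this.graduatesThisYear = graduatesThisYear; this.gpa = gpa;
        }
    }

    static double highestGpa(List<Student> students) {
        // filter and mapToDouble only build up the pipeline; the terminal
        // max() call triggers the whole (potentially parallel) computation.
        OptionalDouble max = students.parallelStream()
            .filter(s -> s.graduatesThisYear)
            .mapToDouble(s -> s.gpa)
            .max();
        return max.orElse(Double.NaN);
    }

    public static void main(String[] args) {
        List<Student> students = List.of(
            new Student("Ada", true, 3.9),
            new Student("Ben", false, 4.0),   // not graduating: filtered out
            new Student("Cat", true, 3.5));
        System.out.println(highestGpa(students)); // 3.9
    }
}
```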

Experiences with Debugging Data Races, Siva Annamalai,
Cliff Click. I was already familiar with most of the material
in this talk, but I thought the most important message was this.
You might be debugging your data races with println, or maybe
with a hand-crafted ring buffer because println perturbs the
timing and makes the race go away. You might be thinking that
there must surely be a better way. But one of the world's
leading experts on concurrency confirms that, although tools can
help in some cases, in general you do still need to be able to
do the println stuff.
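A hand-crafted ring buffer of the kind mentioned might look like this minimal sketch (my own illustration, not code from the talk): each record() is just an array store plus one atomic increment, so it perturbs timing far less than println.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal in-memory trace buffer for chasing data races.
public class RaceLog {
    private static final int SIZE = 1 << 10;           // power of two
    private static final String[] ring = new String[SIZE];
    private static final AtomicInteger next = new AtomicInteger();

    public static void record(String msg) {
        int i = next.getAndIncrement() & (SIZE - 1);   // wrap around
        ring[i] = Thread.currentThread().getName() + ": " + msg;
    }

    public static int count() {
        return next.get();
    }

    // Dump the most recent entries once the race has been reproduced.
    public static void dump() {
        int end = next.get();
        for (int i = Math.max(0, end - SIZE); i < end; i++) {
            System.out.println(ring[i & (SIZE - 1)]);
        }
    }

    public static void main(String[] args) {
        record("before CAS");
        record("after CAS");
        dump();
    }
}
```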

Transactional Memory in Java Systems, Vyacheslav Shakin;
Suresh Srinivas. The idea of Transactional Memory is that your
programming language allows you to write things like this...

atomic {
    return map.get(k);
}
...and the system gives you the familiar
ACID properties
(well, maybe not Durability) in the face of other concurrent
accesses. This can be implemented entirely in software or with
hardware assistance. Hardware systems build on logic already
needed for cache coherence. There are "weak" atomic systems
where transactions are only ACID relative to other transactions,
and "strong" ones where they are ACID relative to all memory
accesses. The slides are a good summary of the domain and I'd
recommend them to anyone who wants to get an idea of what it's
about. Unfortunately they're not yet present on the site but I'll update this entry when they
are. (They'll probably
be at this address.)
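Java has no atomic keyword, but the semantics that block promises can be approximated, without any of transactional memory's scalability, by funnelling every access through one global lock. This is my own baseline sketch for comparison, not the speakers' implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Coarse-grained stand-in for an atomic block: every "transaction"
// takes the same global lock, so transactions are trivially isolated
// from each other (and from all other accesses that use these methods).
public class AtomicBlock {
    private static final Object GLOBAL = new Object();
    private static final Map<String, Integer> map = new HashMap<>();

    static Integer atomicGet(String k) {
        synchronized (GLOBAL) {          // atomic {
            return map.get(k);           //     return map.get(k);
        }                                // }
    }

    static void atomicPut(String k, Integer v) {
        synchronized (GLOBAL) {
            map.put(k, v);
        }
    }

    public static void main(String[] args) {
        atomicPut("x", 42);
        System.out.println(atomicGet("x")); // 42
    }
}
```

Transactional memory aims to give the same isolation while letting non-conflicting transactions, such as gets of different keys, run in parallel instead of serializing on one lock.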

In the next installment I'll cover the sessions on
JMX technology.


