
Don't break the optimistic locking

Posted by felipegaucho on August 3, 2009 at 12:41 PM PDT

During Jazoon 2009 I got a few minutes of private attention from
Mike Keith about my last article on domain models. That short time was worth
the whole conference for me, since Mike pointed out the gaps in my text as
well as gave some valuable hints on how to better translate domain models into
JPA annotations. From that short conversation, one sentence in particular
remains alive in my memory: don't break the optimistic locking.
After reviewing my original code I agreed with Mike that I was
ignoring the optimistic locking in my service layer - a common mistake
seen all over the Internet and also in conversations with other friends.
Neither the problem nor the solution is new, but I decided to
blog it briefly for my personal reference and eventually for your help.

The problem: breaking the optimistic locking.

When exposing domain models through web services you
serialize your entities between the client and the service, and
every time you do that you end up with a detached JPA entity. In order to
persist the detached objects in the database you need to re-attach them
to a new persistence context - and that's where the problem begins.
Concurrent threads can access the same write method, reading the same
entity, modifying it and then writing the detached entity back to the
database. In my original code I was reading the latest version of the
entity and then copying the field values from the external entity onto the
latest one. In this way I guaranteed that the write could never fail, but I
committed a basic JPA mistake: I broke the consistency of the
entities. From Mike's book: it is just an accident waiting to
happen. Below you find the trap example from my original code:

    public FpUser update(FpUser entity) throws Exception {
        FpUser attached =
            manager.find(FpUser.class, entity.getId());
        // Here I am modifying the latest entity and not the detached one:
        // the field values are copied from 'entity' onto 'attached', so the
        // version carried by the detached entity is never compared.
        return manager.merge(attached);
    }

From the code above, we can enumerate the steps required to
bypass the optimistic locking:

  1. Client A reads entity.v1
  2. Client B reads entity.v1
  3. Client A modifies entity.v1 and starts update.transaction#1
  4. Client B modifies entity.v1 and starts update.transaction#2
  5. update.transaction#1 updates the fields received from Client
    A and merges the entity - which receives version v2 - but gets suspended
    before finishing.
  6. update.transaction#2 updates the fields received from
    Client B, updates the version to v3 and finishes, returning
    entity.v3 to Client B.
  7. update.transaction#1 finishes, returning entity.v2 to
    Client A.

At the end of the above execution, we have the following state:

  • Client A has an instance of the entity at version 2
  • Client B has an instance of the entity at version 3 (it actually
    jumped directly from version 1 to 3, without even noticing the changes
    of version 2)
  • The database has the data from version 3
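The whole sequence can be simulated without any JPA machinery. The sketch below is purely illustrative - the `Row` class, the in-memory map standing in for the database, and the version bookkeeping are my simplifications, not the real footprint code - but it reproduces the lost update step by step:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java simulation of the lost-update trap: the broken service always
// re-reads the latest row, so the stale version carried by the client's
// detached copy is never compared against the database.
public class LostUpdateDemo {

    // A toy "row": one field plus a version counter.
    static class Row {
        String name;
        int version;
        Row(String name, int version) { this.name = name; this.version = version; }
        Row copy() { return new Row(name, version); }
    }

    // A toy "database" keyed by id.
    static final Map<Long, Row> db = new HashMap<>();

    // The broken update: ignores the version carried by the detached copy.
    static Row brokenUpdate(long id, Row detached) {
        Row latest = db.get(id).copy(); // re-read latest, discard detached.version
        latest.name = detached.name;    // copy the fields from the client
        latest.version++;               // the provider bumps the version on merge
        db.put(id, latest.copy());
        return latest;
    }

    public static void main(String[] args) {
        db.put(1L, new Row("v1-data", 1));

        Row clientA = db.get(1L).copy(); // both clients read version 1
        Row clientB = db.get(1L).copy();

        clientA.name = "A's change";
        clientB.name = "B's change";

        Row resultA = brokenUpdate(1L, clientA); // -> version 2
        Row resultB = brokenUpdate(1L, clientB); // -> version 3, silently wins

        System.out.println(resultA.version);     // 2
        System.out.println(resultB.version);     // 3
        System.out.println(db.get(1L).name);     // B's change: A's update is lost
    }
}
```

No exception anywhere - Client A's update simply vanishes, which is exactly the silent failure described above.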

The worst side effect of this trap is that Client A believes the
data currently persisted in the database is that of version 2, when in
fact version 3 is stored in the database. The inconsistency could easily
be detected by JPA's optimistic locking, but since I am reading the
latest version on every update operation the code never throws the proper
exception and the clients silently become inconsistent with the server side.

Solution: keep it simple

The default mechanism specified by JPA to avoid such inconsistencies
is a version field, applied to the entities through the @Version
annotation. Once you include the version field in your entities, you
can just invoke the merge operation to re-attach detached objects and
the container will handle the versioning for you - simple and easy.
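For reference, here is a minimal sketch of the mapping side, assuming a simplified FpUser (the real entity has more fields; getters and setters are mostly omitted):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class FpUser {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // The provider increments this column on every successful commit and
    // compares it in the UPDATE statement's WHERE clause; on a mismatch it
    // throws an OptimisticLockException instead of silently losing data.
    @Version
    private Integer version;

    public Long getId() { return id; }
    // remaining getters and setters omitted for brevity
}
```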

The above code can be rewritten in a sound manner:


    public FpUser update(FpUser entity) throws Exception {
        return manager.merge(entity);
    }
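
To see why this version of merge catches the conflict, here is a toy simulation of the guarded UPDATE the provider issues on flush (conceptually `UPDATE ... WHERE id = ? AND version = ?`). The classes and the exception type are made up for illustration - no JPA involved:

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of an optimistic-locking merge: the update only succeeds when
// the version carried by the detached copy still matches the database row.
public class GuardedMergeDemo {

    static class Row {
        String name; int version;
        Row(String n, int v) { name = n; version = v; }
        Row copy() { return new Row(name, version); }
    }

    static final Map<Long, Row> db = new HashMap<>();

    // Guarded update: fails instead of silently overwriting newer data.
    static Row guardedMerge(long id, Row detached) {
        Row current = db.get(id);
        if (current.version != detached.version) {
            // the stand-in for JPA's OptimisticLockException
            throw new IllegalStateException("stale version " + detached.version
                    + ", database is at " + current.version);
        }
        Row updated = new Row(detached.name, current.version + 1);
        db.put(id, updated);
        return updated.copy();
    }

    public static void main(String[] args) {
        db.put(1L, new Row("v1-data", 1));
        Row clientA = db.get(1L).copy(); // both clients read version 1
        Row clientB = db.get(1L).copy();

        clientA.name = "A's change";
        clientB.name = "B's change";

        guardedMerge(1L, clientA);       // succeeds, version 1 -> 2
        try {
            guardedMerge(1L, clientB);   // stale: still carries version 1
        } catch (IllegalStateException e) {
            System.out.println("conflict detected: " + e.getMessage());
        }
    }
}
```

Client B now gets an exception instead of silently destroying Client A's work, which is the whole point of the version column.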

And that's it: fewer lines and sounder, more
robust code. I will fix the code in the footprint repository, so the
article's readers will find better code there - and perhaps
the staff will help me include an addendum to the article warning
the readers about the trap. At least we both know about it from now on :)

Other interesting blogs about similar problems:

  • Some J2EE Performance Tips - from Carol McDonald
  • Retrying transactions in Java - from Panagiotis Astithas
  • Introduction to Concurrency Control - from Scott W. Ambler
  • TopLink - from Oracle
  • EclipseLink - it replaces TopLink in Java EE 6

Before you release your eyes to the next blogger, let me ask you an
intriguing question:

What if I care only partially about my Entity locking?

It is a subject for another blog entry, but during my talk with
Mike he mentioned that the upcoming JPA 2.0 includes this feature: the
ability to lock an entity only partially. That way, I don't need to throw
an exception in a transaction that only affects low-priority fields (the
idea behind the common trap I demonstrated above). People who implement
code to avoid exceptions during updates are actually preventing the
client from receiving exceptions, resolving the locking problems manually.
This suicidal trick seems to make sense where some fields tolerate data
overwriting - usually optional or very low-priority data.
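The idea can at least be sketched in plain Java. This is only my illustration of the concept - the priority split between fields and the merge rule are invented for the example, not the actual JPA 2.0 API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "partial" optimistic locking: a stale update is rejected only
// when it would overwrite a high-priority (locked) field; low-priority
// fields may silently lose the race.
public class PartialLockDemo {

    static class Row {
        String email;    // high priority: conflicts must be reported
        String nickname; // low priority: last write may silently win
        int version;
        Row(String e, String n, int v) { email = e; nickname = n; version = v; }
        Row copy() { return new Row(email, nickname, version); }
    }

    static final Map<Long, Row> db = new HashMap<>();

    // Complain about stale data only when a locked field would change.
    static Row partialMerge(long id, Row detached) {
        Row current = db.get(id);
        boolean stale = current.version != detached.version;
        boolean touchesLocked = !current.email.equals(detached.email);
        if (stale && touchesLocked) {
            throw new IllegalStateException("conflict on locked field 'email'");
        }
        Row updated = new Row(detached.email, detached.nickname, current.version + 1);
        db.put(id, updated);
        return updated.copy();
    }

    public static void main(String[] args) {
        db.put(1L, new Row("me@example.com", "felipe", 1));

        Row clientA = db.get(1L).copy();
        Row clientB = db.get(1L).copy(); // both read version 1

        clientA.nickname = "gaucho";     // low-priority change
        partialMerge(1L, clientA);       // version 1 -> 2

        clientB.nickname = "chefe";      // low priority, stale version: allowed
        partialMerge(1L, clientB);       // version 2 -> 3, no exception
        System.out.println(db.get(1L).nickname);

        Row clientC = db.get(1L).copy();
        clientC.version = 1;             // pretend a stale read
        clientC.email = "new@example.com"; // high-priority change
        try {
            partialMerge(1L, clientC);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```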

As soon as I get a good example I will return to this point; for now
you are free to give me your feedback or to find something else to have
fun with on the web.

My vacations are over :) time to update my working environment, and
nothing is better than a short blog to warm up my brain for the third
quarter of Java in 2009. Next step: to conclude the second part of the article, reviewing the gaps and offering good-quality material on Java EE 5 - the last step before starting my complete migration to Java EE 6.

PS: this thread is quite interesting and adds good insights about the above proposal.
