
Persistence Caching Part II

Posted by jhook on January 5, 2007 at 10:07 PM PST

Often when you are looking at caching 'requirements', you have a couple of things to think about:

  1. Allowable duration (only guarantee the object/data for 20 or 30 minutes at most)
  2. Optimized memory use (don't cache too much)

Nowhere do requirements base themselves on 'do not store more than 200 widgets at once in memory'. So why do these caching implementations work that way? Granted, there's opportunity to roll over to the file system in these cache frameworks, but really-- if I wanted to guarantee persistence like that, I'd just store it to the disk in the first place, or in the DB. Seriously. Anything I'd throw into a 'cache' would be just extra. Hibernate has a strongly referenced first-level cache, which makes complete sense for transactions; second-level caches are just there to help with performance.

So what do we do? Let's take OSCache as an example. What I really want to do is specify a time to live (requirement 1) and either softly or weakly reference the data to allow the JVM to naturally optimize memory use (requirement 2). This means I can store an unbounded set of widgets in memory (none of this LRU/FIFO/LFU stuff), but still enforce that if a widget sticks around long enough, it eventually goes dirty and forces a re-fetch.
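As a rough sketch of what I'm after (my own names, not OSCache's API), a softly referenced, TTL-aware map could look something like this-- and the same shape covers the 'softly reference everything' idea further down:

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: an unbounded cache whose values are softly
// referenced (so the JVM reclaims them under memory pressure) and
// carry a time to live (so survivors eventually go dirty).
public class TtlSoftCache<K, V> {

    private static class Entry<V> {
        final SoftReference<V> ref;
        final long expiresAt; // absolute time in ms

        Entry(V value, long ttlMillis) {
            this.ref = new SoftReference<V>(value);
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map =
            new ConcurrentHashMap<K, Entry<V>>();
    private final long ttlMillis;

    public TtlSoftCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<V>(value, ttlMillis));
    }

    // Returns null if the entry was never stored, was reclaimed by the
    // GC, or outlived its TTL-- in every case the caller just re-fetches.
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        V value = e.ref.get();
        if (value == null || System.currentTimeMillis() > e.expiresAt) {
            map.remove(key, e); // drop the dead/stale entry
            return null;
        }
        return value;
    }
}

A miss here means 'go back to the DB'; there is no capacity number anywhere, which is exactly the point.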

If you are dealing with bounded sets-- such as a couple of known articles or highlighted data-- then fine, use a cache capacity/LRU. But for unbounded sets, where caching is purely there to enhance the performance of the overall system, just weakly or softly reference the data. It is a cache, after all.
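For that bounded case, a plain LinkedHashMap in access order is about all you need-- a throwaway sketch, not tied to any particular library:

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch for the bounded case: a fixed-capacity LRU cache built on
// LinkedHashMap's access-order mode. Fine for a known, small set of
// articles; not what I'd use for unbounded data.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = access order (LRU)
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}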

This weak/soft referencing becomes especially important in web applications, where traditionally we store data per session to optimize the user experience. But-- what if 80% of that data is required by another user? Could we push the cache to a global scope, out of the session, and weakly reference it, such that all users share the same memory? I guess you could still store a reference in your session, but globally weakly referencing the data would be an 'automatic' way of sharing common session data that's purely there for cache optimization.
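As a sketch of that (the names are made up), the global registry hands out whatever instance some other session already loaded, holding only a weak reference itself:

import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the global idea: the shared map only holds weak references,
// so an object stays cached exactly as long as at least one session
// still holds a strong reference to it.
public class SharedDataRegistry<K, V> {

    private final ConcurrentHashMap<K, WeakReference<V>> shared =
            new ConcurrentHashMap<K, WeakReference<V>>();

    // Returns the instance some other session already loaded, or caches
    // and returns ours. The caller stores the result in its session,
    // which is what keeps the weak reference alive. (A lost race between
    // two sessions just means one duplicate load-- harmless.)
    public V share(K key, V freshlyLoaded) {
        WeakReference<V> ref = shared.get(key);
        V existing = (ref == null) ? null : ref.get();
        if (existing != null) {
            return existing;
        }
        shared.put(key, new WeakReference<V>(freshlyLoaded));
        return freshlyLoaded;
    }
}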

The other thing would be to softly reference everything. In this case, you would just be able to store whatever you can globally, knowing that it will get cleaned up as the VM needs to allocate memory elsewhere. Still retaining the time to live policy here is important to prevent objects from sticking around for days at a time when you want to 'update' the cached instance after 20 or 30 minutes.

Another performance tweak with soft references: if the overhead of one reference per object becomes too much, you can 'chunk' your caches into segments, where a whole series of cache entries is softly referenced as a group instead of per object.

net.java.project.Widget.cache.duration=1800
net.java.project.Widget.cache.ref=soft
net.java.project.Widget.cache.chunk=100

Lookups would then be Key to Reference, Reference to Cache, Key to Instance.
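To make that concrete, here's a rough sketch of a chunked, softly referenced cache. Whether the chunk setting means objects per chunk or the number of chunks is my assumption-- here it's treated as a fixed pool of segments:

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the chunking idea: one SoftReference guards a whole
// segment of entries, so the GC reclaims (and we pay reference
// overhead for) a group of objects at a time instead of one.
public class ChunkedSoftCache<V> {

    private final int chunks; // number of segments (assumption)
    private final ConcurrentHashMap<Integer, SoftReference<ConcurrentHashMap<String, V>>> segments =
            new ConcurrentHashMap<Integer, SoftReference<ConcurrentHashMap<String, V>>>();

    public ChunkedSoftCache(int chunks) {
        this.chunks = chunks;
    }

    // Key to Reference, Reference to Cache, Key to Instance,
    // as described above.
    public V get(String key) {
        SoftReference<ConcurrentHashMap<String, V>> ref = segments.get(chunkFor(key));
        ConcurrentHashMap<String, V> chunk = (ref == null) ? null : ref.get();
        return (chunk == null) ? null : chunk.get(key);
    }

    public void put(String key, V value) {
        Integer id = chunkFor(key);
        SoftReference<ConcurrentHashMap<String, V>> ref = segments.get(id);
        ConcurrentHashMap<String, V> chunk = (ref == null) ? null : ref.get();
        if (chunk == null) {
            // a lost race here just costs a few re-fetches
            chunk = new ConcurrentHashMap<String, V>();
            segments.put(id, new SoftReference<ConcurrentHashMap<String, V>>(chunk));
        }
        chunk.put(key, value);
    }

    private Integer chunkFor(String key) {
        return Integer.valueOf(Math.abs(key.hashCode() % chunks));
    }
}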

Anyways, again, let me know your thoughts-- I've committed to using an existing cache implementation, but all of the settings of these popular libraries just seem disjoint from actual application use cases.

Update from Bob's Comments

I implemented a straight ConcurrentHashMap<String,SoftReference> as a cache, then experimented with also queuing a TimerTask to force cleanup of older objects. As a control, I also used a straight hard reference cache, ConcurrentHashMap<String,Object>.
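For the curious, here's a minimal sketch of the ThreadCache variant-- soft values plus a periodic TimerTask sweep. The names and details are illustrative, not the exact test code:

import java.lang.ref.SoftReference;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;

// Rough reconstruction: soft-valued cache with a TimerTask that evicts
// entries older than a cutoff, trying to beat the GC to the stale ones.
public class ThreadCache {

    private static class Timestamped {
        final SoftReference<Object> ref;
        final long addedAt = System.currentTimeMillis();

        Timestamped(Object value) {
            this.ref = new SoftReference<Object>(value);
        }
    }

    private final ConcurrentHashMap<String, Timestamped> map =
            new ConcurrentHashMap<String, Timestamped>();
    private final Timer timer = new Timer(true); // daemon thread

    public ThreadCache(final long maxAgeMillis, long sweepIntervalMillis) {
        timer.schedule(new TimerTask() {
            public void run() {
                long cutoff = System.currentTimeMillis() - maxAgeMillis;
                for (String key : map.keySet()) {
                    Timestamped t = map.get(key);
                    if (t != null && t.addedAt < cutoff) {
                        map.remove(key); // evicted by us, not the GC
                    }
                }
            }
        }, sweepIntervalMillis, sweepIntervalMillis);
    }

    public void put(String key, Object value) {
        map.put(key, new Timestamped(value));
    }

    public Object get(String key) {
        Timestamped t = map.get(key);
        return (t == null) ? null : t.ref.get();
    }
}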

What I wanted to see is how accurate SoftReferences are at keeping the largest chunk of recently added objects. It'd be bad if the GC kept the references you never cared about while tossing the recent ones you did. Adding the TimerTask would hopefully help the GC recover the right memory by cleaning up old references ourselves. So let's see what happens with varying sized objects:

  • Ran until 50,000 objects were added, or the JVM ran out of memory.
  • Size is the size of the cache at the end of the run (how many retained).
  • Time is the time in ms to populate the cache until success/fail.
  • Failed is true if the run ran out of memory before reaching 50,000.
  • Oldest walks back from the last object in (the 50,000th) to find where the oldest continuous block of still-available objects begins (0 means nothing was lost).
  • Hits means different things per cache: SoftValueCache counts how many entries were gc'ed out of the cache before the test ended; ThreadCache counts how many of those TimerTask runs actually removed the object before the GC did (the higher, the better).

Small Objects (100 bytes):
HardCache[total=14057,size=14057,time=735,failed=true,oldest=0,hits=14057]
SoftValueCache[total=50000,size=3759,time=1547,failed=false,oldest=46241,hits=46241]
ThreadCache[total=50000,size=9031,time=1984,failed=false,oldest=40969,hits=24124]

Medium Objects (500 bytes):
HardCache[total=3437,size=3437,time=781,failed=true,oldest=0,hits=3437]
SoftValueCache[total=50000,size=427,time=7297,failed=false,oldest=49573,hits=49573]
ThreadCache[total=50000,size=2085,time=8844,failed=false,oldest=47915,hits=43633]

Large Objects (1,000 bytes):
HardCache[total=1849,size=1849,time=547,failed=true,oldest=0,hits=1849]
SoftValueCache[total=50000,size=430,time=10078,failed=false,oldest=49570,hits=49570]
ThreadCache[total=50000,size=393,time=11391,failed=false,oldest=49607,hits=45769]

So what do the numbers mean? ThreadCaching adds some overhead in time and memory, but when the timer is able to clean up references before the GC does, we get a better set of recent objects sticking around in the cache (good). For large objects, entries get gc'ed before the timer can reach them, rendering the timer sweeps useless.
