From Claus Ibsen <>
Subject Re: [HEADS UP] - Camel uses soft reference cache now
Date Tue, 03 May 2011 11:16:34 GMT

I have attached a patch to CAMEL-3922.
It's for a SoftReference queue which can be used as the memory cache
for ServicePools.

For example camel-mina, camel-ftp etc. use a service pool to cache
their producers (they are not thread safe for concurrent usage).
So the patch helps shrink those pools in case the JVM needs more
memory.
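To illustrate the idea, here is a minimal sketch of a pool whose idle entries are held via SoftReference, so the JVM may reclaim them under memory pressure; the class and method names are illustrative, not Camel's actual ServicePool API:

```java
import java.lang.ref.SoftReference;
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch: idle pooled entries are held only via SoftReference,
// so the GC may clear them when the JVM runs low on memory.
public class SoftReferencePool<T> {
    private final Queue<SoftReference<T>> idle = new ArrayDeque<>();

    public synchronized void release(T entry) {
        idle.add(new SoftReference<>(entry));
    }

    // Returns a pooled entry, skipping any the GC has already cleared.
    public synchronized T acquire() {
        SoftReference<T> ref;
        while ((ref = idle.poll()) != null) {
            T entry = ref.get();
            if (entry != null) {
                return entry; // still alive in the pool
            }
            // cleared by the GC under memory pressure: try the next one
        }
        return null; // caller creates a fresh producer on a miss
    }
}
```

A caller treats a `null` from `acquire()` as a pool miss and simply creates a new producer, which matches the "auto-shrink" behavior described above.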

That said, it used to be JMX that was the big cause of eating memory.
But that's fixed now that Camel only enlists resources in JMX during
startup (by default). So when you use dynamic endpoints etc., those
won't just keep growing in JMX.

This patch only helps if any of the service pools start to eat up much
memory as well. But I doubt any of the mina or ftp producers do
that. I tried simulating a test on my laptop creating 10000 ftp
producers and they didn't run out of memory (128mb JVM). The profiler
showed that I didn't use more than 20-30mb at most. The producers were
also active, as I had them upload a file to an FTP server.

The patch allows us to get the number of elements the JVM has
unreferenced (when it GCs to free memory). We may consider adding
similar information to the first patch. Then we could see that stat in
JMX etc.
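Counting GC-cleared entries can be done with a ReferenceQueue; the sketch below is illustrative (the names are not Camel's API) and shows the mechanism such a stat could be built on:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: each cached value's SoftReference is registered
// with a ReferenceQueue. References the GC clears are enqueued there,
// so draining the queue yields a running "values GC'd" counter.
public class EvictionCounter<T> {
    private final ReferenceQueue<T> queue = new ReferenceQueue<>();
    private final AtomicLong evicted = new AtomicLong();

    // Wrap a value so its eventual clearing is observable via the queue.
    public SoftReference<T> track(T value) {
        return new SoftReference<>(value, queue);
    }

    // Drain any cleared references and return the total so far; this is
    // the kind of number that could be exposed as a JMX attribute.
    public long evictedCount() {
        while (queue.poll() != null) {
            evicted.incrementAndGet();
        }
        return evicted.get();
    }
}
```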

On Mon, May 2, 2011 at 4:00 PM, Claus Ibsen <> wrote:
> Hi
> Just a heads up on
> I have committed that to trunk in SVN rev: 1098574.
> The change will use a SoftReference cache for the following internal
> caches in camel-core
> - endpoint cache
> - producer cache
> - consumer cache
> - bean info cache
> - property editor type converter (miss cache)
> So what we will have in org.apache.camel.util is two kinds of LRU caches:
> - LRUCache = uses strong references and ensures data in the cache is
> kept until explicitly removed
> - LRUSoftCache = uses soft references, which allows the JVM to GC
> values from the cache in case it runs low on memory
> There is an LRUSoftCacheTest unit test which demonstrates the
> situation with the JVM running out of memory.
> If you change the test to use the LRUCache then you will quickly run
> out of memory.
> So what the LRUSoftCache offers over the LRUCache is that in case the
> JVM is running low on memory, it allows the JVM to reclaim
> the memory for the values in the cache. It's kind of like an auto-shrink
> when we run low on memory. Since it's a cache, we will just re-create
> the value in case there was a cache miss. Also the max cache size is
> of course still in play. So if the cache has a limit of 1000, then
> at most 1000 values are stored in the cache. And it's LRU based, so we
> prefer to keep the most used entries.
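The behavior described above can be sketched roughly as follows; this is an illustrative stand-in, not the actual LRUSoftCache source, using LinkedHashMap's access-order mode for the LRU bound and SoftReference for the values:

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch of the LRUSoftCache idea (not Camel's actual code):
// a size-bounded LRU map whose values are held via SoftReference, so the
// JVM may clear them under memory pressure; a cleared entry reads as a miss.
public class SoftLruCache<K, V> {
    private final LinkedHashMap<K, SoftReference<V>> map;

    public SoftLruCache(final int maxSize) {
        // accessOrder=true gives LRU iteration order;
        // removeEldestEntry caps the cache at maxSize entries
        this.map = new LinkedHashMap<K, SoftReference<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, SoftReference<V>> eldest) {
                return size() > maxSize;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    // null means either never cached or cleared by the GC; both count as
    // a miss, and the caller simply re-creates the value.
    public synchronized V get(K key) {
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }
}
```

With a limit of 2, inserting a third entry evicts the least-recently-used one, and a GC-cleared value simply looks like a miss on the next lookup.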
> I gave the full project a test and didn't encounter any issues due to
> this change. But in case you discover some odd behavior after this
> change, then let us know.
> Also if you are keen on cache code, then take a look and review the
> LRUSoftCache source code.
> I will add some JMX stats for the cache, so at runtime we can see
> stats such as cache hits/misses etc.
> You can read about soft references here (notice it indicates they are
> for memory-sensitive caches)
> If you wonder why we are not using weak references instead,
> that's actually a poor choice for caches. See for example what Google
> says:
> --
> Claus Ibsen
> -----------------
> FuseSource
> Email:
> Web:
> CamelOne 2011:
> Twitter: davsclaus
> Blog:
> Author of Camel in Action:

