Saturday, 10 March 2012

Reducing Java Memory Usage and Garbage Collections with the UseCompressedOops VM Option


A live trial of the UseCompressedOops JVM option (-XX:+UseCompressedOops) showed a 14% reduction in the application's memory usage and a 24% reduction in the number of garbage collections. This means reduced latency due to fewer pauses when performing a full garbage collection, reduced load on the CPU due to fewer minor garbage collections, and a reduced memory requirement on the machine.
The results of this trial clearly show that this VM option should be used, though since the release of 6u23 it is actually enabled by default.


The application currently runs on the 64-bit version of the 6u21 JVM, and due to the larger 64-bit memory addresses the Java heap is significantly larger than when running on a 32-bit JVM. Fortunately, since version 6u14 there has been a JVM option, UseCompressedOops, intended to win back much of the memory lost due to the larger address space. This is achieved by representing addresses on the heap in 32 bits; when they are loaded, they are scaled by a factor of 8 and added to a base address to decode them back to the full 64 bits. You can read a more detailed write-up here.
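The decode step described above can be sketched in plain Java. This is an illustration of the arithmetic only, not the JVM's internal implementation; the heap base value and class name here are assumptions made for the example.

```java
public class CompressedOopDemo {
    // Hypothetical 64-bit heap base address, chosen for illustration only.
    static final long HEAP_BASE = 0x700000000L;

    // Decode a 32-bit compressed oop into a full 64-bit address:
    // scale by 8 (shift left by 3 bits) and add the heap base.
    static long decode(int compressedOop) {
        // Mask with 0xFFFFFFFFL so the 32-bit value is treated as unsigned.
        return HEAP_BASE + ((compressedOop & 0xFFFFFFFFL) << 3);
    }

    public static void main(String[] args) {
        int oop = 0x1000;                // a compressed 32-bit reference
        long address = decode(oop);
        System.out.println(Long.toHexString(address)); // prints 700008000
    }
}
```

Because every decoded address is the base plus a multiple of 8, the scheme relies on objects being 8-byte aligned, which HotSpot objects are by default.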

One thing to be aware of is that with compressed oops the maximum heap size of the JVM is limited to 32 GB.
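The 32 GB figure follows directly from the encoding: a 32-bit oop can hold 2^32 distinct values, and each is scaled by 8 bytes, so the addressable heap is 2^32 × 8 bytes. A quick check of the arithmetic:

```java
public class CompressedOopLimit {
    public static void main(String[] args) {
        // 2^32 possible compressed oops, each scaled by a factor of 8
        // (8-byte object alignment), gives the maximum addressable heap.
        long maxHeapBytes = (1L << 32) * 8;
        // Divide by 2^30 to express the limit in gigabytes.
        System.out.println(maxHeapBytes / (1L << 30) + " GB"); // prints 32 GB
    }
}
```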


We set a single application server to use compressed oops and compared its memory and GC characteristics over the following hours with a placebo server running without compressed oops.
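For reference, enabling the option is a single flag on the java command line. This is a minimal sketch; the jar name and heap size are placeholders, not values from the trial.

```shell
# Enable compressed oops for the trial server
# (app.jar and the 4g heap are illustrative assumptions)
java -XX:+UseCompressedOops -Xmx4g -jar app.jar

# Confirm the effective value of the flag
java -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```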


Garbage Collection
Over a 1-hour period the following numbers of garbage collections occurred.
                            Reduction   Minor GC   Full GC   Avg. Sec. Between GC
Without UseCompressedOops      0%         379         4             9.3
With UseCompressedOops        24%         292         3            12.2
Memory Usage
This table shows the average memory retained in the old generation after a full GC over the hour, which is a good guide to the overall reduction in memory used by the application.

                            Reduction   Old Gen After Full GC
Without UseCompressedOops      0%              685M
With UseCompressedOops        14%              593M


  1. Worth including in my list of 10 useful JVM options every Java programmer should know. Thanks for your analysis, pretty useful.

  2. What's the side-effect of your compressed oops? Will there be any performance impact?

    1. There should be no negative impact from using this, except for the limitation mentioned above restricting the Java heap to 32 GB.

      There is a hit on CPU performance from moving from a 64-bit to a 32-bit VM, as there are increased pipeline stalls coming from more cache misses; this is due to less data fitting in the caches, as the larger addresses take up more space.

    2. You mean moving from 32-bit to 64-bit takes the hit, right? Your comment mentions it the other way around.

    3. There should be a small performance impact as compressed 32-bit pointers need to be expanded back into 64-bit pointers (via a shift) and vice versa.

      Whether or not this ends up being faster than the extra GCs triggered by the increased memory usage of 64-bit pointers very much depends on your application.

    4. The cost of the shift really isn't very relevant for modern x86 CPUs. In most cases it can be folded into the normal load -- x86 allows memory accesses of the form base+{1,2,4,8}*index at the same cost as a direct access, so direct pointer accesses are not slowed down at all when doing this.

  3. That has reduced my JVM heap by a huge amount - thanks for the tip.