
Problem with memory footprint


Overview

Currently there is a problem in the indexing structure: the heap space needed to keep the indexing tree is too high. The indexing tree usually occupies 10%-25% of the total space reserved for the buffer, depending on the structure of the data received from the agent. Two main factors influence the size of the indexing tree:

  1. The average number of children in the invocations - if the buffer is filled with smaller invocations (up to 100 children), the indexing size will tend to be closer to the 10% occupancy line, because the different data objects are then equally distributed in the indexing tree (and thus across several maps). When we have invocations with many children, the leafs for SQLs and especially timers tend to be much bigger and thus occupy more space due to the map expansion policies (storing 600K elements in a map requires an array of 1M entries).
  2. The time interval in which the buffer data is created - when we have a high load and the agent sends a lot of data in a short interval, we get a bigger indexing structure, because our time-stamp indexer indexes in intervals of 15 minutes. If all data in the buffer is created in the same 15-minute interval, it all falls into the same branch, which effectively means that the data is concentrated in a smaller number of (larger) maps.
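
The expansion-policy effect mentioned in point 1 can be reproduced with a short calculation. The sketch below (class and method names are mine, not from the code base) replicates the power-of-two table sizing that Java hash maps use, assuming the default 0.75 load factor:

```java
public class MapCapacity {

    // Java hash maps size their backing array to the next power of two
    // >= elements / loadFactor, so element counts just above a threshold
    // force a table almost twice as large as strictly needed.
    static int tableSizeFor(int elements, float loadFactor) {
        int needed = (int) Math.ceil(elements / loadFactor);
        // Round up to the next power of two.
        return (-1 >>> Integer.numberOfLeadingZeros(needed - 1)) + 1;
    }

    public static void main(String[] args) {
        // 600K elements with the default 0.75 load factor need a table
        // of 1,048,576 slots -- the "array of 1M entries" noted above.
        System.out.println(tableSizeFor(600_000, 0.75f)); // prints 1048576
    }
}
```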

Thus, the conclusion is that our main problem is the amount of memory occupied by the maps in which we index data. The problem is bigger if the number of maps is small, or if one or more maps hold many elements (500K+).

Java maps are known to be storage-unfriendly: it is assumed that often 80%+ of a map's size goes to maintenance structures, with only 20% being the real data stored in the map. Note that we use ConcurrentHashMap.
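
The 80/20 claim can be made plausible with a back-of-the-envelope accounting. The per-entry byte costs below are assumptions for a 64-bit JVM with compressed oops (not measured values), and the class is a hypothetical sketch:

```java
public class MapOverheadEstimate {

    // Next power of two >= n, matching the map's table sizing.
    static int nextPow2(int n) {
        return (-1 >>> Integer.numberOfLeadingZeros(n - 1)) + 1;
    }

    // Rough cost of a ConcurrentHashMap<Long, Object> entry, assuming a
    // 64-bit JVM with compressed oops: ~32 B per Node (object header,
    // int hash, key/value/next references), 4 B per table slot, and
    // ~24 B per boxed Long key. Only the 4 B value reference points at
    // payload data -- the rest is maintenance, which is how 80%+ of a
    // map's size can be overhead.
    static long estimateBytes(int entries) {
        int tableSlots = nextPow2((int) Math.ceil(entries / 0.75));
        return 32L * entries + 4L * tableSlots + 24L * entries;
    }

    public static void main(String[] args) {
        for (int n : new int[] {256, 1024, 4096}) {
            System.out.println(n + " entries -> ~" + estimateBytes(n) + " bytes");
        }
    }
}
```

For a precise measurement one would walk the real object graph (e.g. with an instrumentation agent) rather than rely on these estimates.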

Current size

A test was performed to check the number of indexed elements in the maps, as well as the size of the complete indexing tree. The test was executed at a normal load pace with invocations having ~50-100 children. The max size of the buffer is 335MB. The results are:

Tree size
2014-11-14 15:44:18,125: 1239302 [indexing-thread] DEBUG it.cmr.cache.impl.AtomicBuffer - Indexing tree new size: 32934038
2014-11-14 15:44:18,125: 1239302 [indexing-thread] DEBUG it.cmr.cache.impl.AtomicBuffer - Indexing occupancy percentage: 10.049%
Map sizes
2014-11-14 15:49:13,842: 1535019 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 17245
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 300778
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 4554
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 32965
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 644
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 10604
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 598
2014-11-14 15:49:13,843: 1535020 [pool-2-thread-1] INFO  ctit.indexing.buffer.impl.Leaf - Leaf map size prior to clean = 10604

As seen, the indexing tree in this situation occupies 10% of the buffer, totaling over 30MB. In my opinion this is the ideal situation, and we cannot go below 10%.

We have a total of 8 maps of different sizes, the biggest one holding 300,778 indexed elements.

ConcurrentHashMap compared with other thread-safe map

We furthermore compared ConcurrentHashMap with two other map implementations: NonBlockingHashMapLong from Cliff Click's high-scale-lib and a synchronized Long2ObjectMap from the FastUtil library. The results show that NonBlockingHashMapLong clearly beats the other maps in both the size and the performance measurements:

Size

 

Number of elements | ConcurrentHashMap  | NonBlockingHashMapLong | Long2ObjectMap
                   | Size     Ch. (%)   | Size     Ch. (%)       | Size     Ch. (%)
Empty              | 224      -         | 712      +317%         | 568      +253%
8                  | 800      -         | 824      +3%           | 696      -13%
32                 | 2720     -         | 1400     -48%          | 1496     -45%
256                | 20640    -         | 7672     -63%          | 10904    -47%
1024               | 82080    -         | 29176    -64%          | 43160    -47%
4096               | 327840   -         | 115192   -65%          | 172184   -47%

Sizes are in bytes; Ch. (%) is the change relative to ConcurrentHashMap.

As seen in the table, there are three stages depending on the number of elements:

  1. empty - the Java implementation is the smallest
  2. 8 elements - all maps have a similar size
  3. from 256 elements on - NonBlockingHashMapLong and Long2ObjectMap reach their maximum savings

Performance

Performance
Java
_______________________
0 - 2 producers 2 consumers: 4,782,614 ops/sec - MapsPerfTest
1 - 2 producers 2 consumers: 4,194,545 ops/sec - MapsPerfTest
2 - 2 producers 2 consumers: 4,726,670 ops/sec - MapsPerfTest
3 - 2 producers 2 consumers: 5,339,190 ops/sec - MapsPerfTest
4 - 2 producers 2 consumers: 4,952,414 ops/sec - MapsPerfTest

Cliff's
______________________
0 - 2 producers 2 consumers: 23,030,573 ops/sec - MapsPerfTest
1 - 2 producers 2 consumers: 31,244,208 ops/sec - MapsPerfTest
2 - 2 producers 2 consumers: 30,630,539 ops/sec - MapsPerfTest
3 - 2 producers 2 consumers: 28,825,699 ops/sec - MapsPerfTest
4 - 2 producers 2 consumers: 26,588,300 ops/sec - MapsPerfTest

Long2ObjectMap (FastUtil)
______________________
0 - 2 producers 2 consumers: 5,569,578 ops/sec - MapsPerfTest
1 - 2 producers 2 consumers: 5,940,526 ops/sec - MapsPerfTest
2 - 2 producers 2 consumers: 5,883,532 ops/sec - MapsPerfTest
3 - 2 producers 2 consumers: 5,798,396 ops/sec - MapsPerfTest
4 - 2 producers 2 consumers: 5,537,185 ops/sec - MapsPerfTest
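
The measurements above follow a simple producer/consumer pattern. The actual MapsPerfTest harness is not shown on this page, so the sketch below is an assumed reconstruction (the class name MapsPerfSketch and the runBenchmark signature are mine); swapping the map argument for a NonBlockingHashMapLong or a synchronized Long2ObjectMap wrapper yields the other two result sets:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class MapsPerfSketch {

    // Runs producer threads that put entries and consumer threads that
    // read them concurrently, and returns the combined throughput.
    static long runBenchmark(Map<Long, Object> map, int producers,
                             int consumers, long opsPerThread) throws InterruptedException {
        AtomicLong ops = new AtomicLong();
        Runnable producer = () -> {
            for (long i = 0; i < opsPerThread; i++) {
                map.put(i, Boolean.TRUE);
                ops.incrementAndGet();
            }
        };
        Runnable consumer = () -> {
            for (long i = 0; i < opsPerThread; i++) {
                map.get(i);
                ops.incrementAndGet();
            }
        };
        Thread[] threads = new Thread[producers + consumers];
        for (int t = 0; t < producers; t++) threads[t] = new Thread(producer);
        for (int t = producers; t < threads.length; t++) threads[t] = new Thread(consumer);
        long start = System.nanoTime();
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        long elapsedNs = System.nanoTime() - start;
        return ops.get() * 1_000_000_000L / Math.max(elapsedNs, 1);
    }

    public static void main(String[] args) throws InterruptedException {
        long opsPerSec = runBenchmark(new ConcurrentHashMap<>(), 2, 2, 100_000);
        System.out.println("2 producers 2 consumers: " + opsPerSec + " ops/sec");
    }
}
```

Absolute numbers from such a micro-benchmark depend heavily on warm-up and hardware; only the relative ordering of the implementations should be read from the results above.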

 

 

 
