Concurrency: Java Map

8
What is the best way to push 20 million entities into a Java map object?
  1. Without multithreading, it takes about 40 seconds.
  2. Using a ForkJoinPool, it takes about 25 seconds, where I created 2 tasks and each task pushes 10 million entities.

I believe the two tasks run on two different cores.

Question: when I create a single task that pushes 10 million entries, it takes about 9 seconds, so why does it take about 26 seconds when I run two tasks that each push 10 million entries? Am I doing something wrong?

Is there a different solution for inserting 20M entries in less than 10 seconds?
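For reference, a minimal sketch of the setup described above, two ForkJoinPool tasks each filling half of a shared map (the class name, keys, and values are mine, not the original code, which was not posted):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;

public class LoadMapDemo {

    // Split the inserts across `tasks` ForkJoinPool tasks, each filling
    // its own disjoint key range of a shared ConcurrentHashMap.
    static Map<String, String> load(int total, int tasks) throws Exception {
        Map<String, String> map = new ConcurrentHashMap<>(total);
        ForkJoinPool pool = ForkJoinPool.commonPool();
        int perTask = total / tasks; // assumes total is divisible by tasks
        List<Future<?>> futures = new ArrayList<>();
        for (int t = 0; t < tasks; t++) {
            final int offset = t * perTask;
            futures.add(pool.submit(() -> {
                for (int i = offset; i < offset + perTask; i++) {
                    map.put("key:" + i, "value"); // hypothetical payload
                }
            }));
        }
        for (Future<?> f : futures) {
            f.get(); // wait for all tasks to complete
        }
        return map;
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        Map<String, String> map = load(20_000_000, 2);
        System.out.printf("inserted %,d entries in %,d ms%n",
                map.size(), (System.nanoTime() - start) / 1_000_000);
    }
}
```

Note that running this with the full 20M entries needs a heap of a few GB, which, as the accepted answer below shows, is exactly where the performance problem tends to hide.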


7
Which Map implementation are you using? HashMap or ConcurrentHashMap? - kapex
2
Why are you mapping 20 million strings to the same hard-coded value? - Michael
Is the map's key of type String? - Vineet Kasat
3
@PhiberOptixz, providing some code would help. We can't tell what is causing the slowness: it could be reading the data from the file system, object creation taking time, duplicate keys causing map collisions, and so on. Please provide more details so the problem can be pinpointed. - iavanish
2
@PhiberOptixz, you first say it is a ConcurrentHashMap, but your last sentence says HashMap. So which one is it? - Eugene
3 Answers

1

Without seeing your code, the most likely cause of the poor performance is garbage collection activity. To demonstrate this, I wrote the following program:

import java.lang.management.ManagementFactory;
import java.util.*;
import java.util.concurrent.*;

public class TestMap {
  // we assume NB_ENTITIES is divisible by NB_TASKS
  static final int NB_ENTITIES = 20_000_000, NB_TASKS = 2;
  static Map<String, String> map = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    try {
      System.out.printf("running with nb entities = %,d, nb tasks = %,d, VM args = %s%n", NB_ENTITIES, NB_TASKS, ManagementFactory.getRuntimeMXBean().getInputArguments());
      ExecutorService executor = Executors.newFixedThreadPool(NB_TASKS);
      int entitiesPerTask = NB_ENTITIES / NB_TASKS;
      List<Future<?>> futures = new ArrayList<>(NB_TASKS);
      long startTime = System.nanoTime();
      for (int i=0; i<NB_TASKS; i++) {
        MyTask task = new MyTask(i * entitiesPerTask, (i + 1) * entitiesPerTask - 1);
        futures.add(executor.submit(task));
      }
      for (Future<?> f: futures) {
        f.get();
      }
      long elapsed = System.nanoTime() - startTime;
      executor.shutdownNow();
      System.gc();
      Runtime rt = Runtime.getRuntime();
      long usedMemory = rt.maxMemory() - rt.freeMemory();
      System.out.printf("processing completed in %,d ms, usedMemory after GC = %,d bytes%n", elapsed/1_000_000L, usedMemory);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  static class MyTask implements Runnable {
    private final int startIdx, endIdx;

    public MyTask(final int startIdx, final int endIdx) {
      this.startIdx = startIdx;
      this.endIdx = endIdx;
    }

    @Override
    public void run() {
      long startTime = System.nanoTime();
      for (int i=startIdx; i<=endIdx; i++) {
        map.put("sambit:rout:" + i, "C:\\Images\\Provision_Images");
      }
      long elapsed = System.nanoTime() - startTime;
      System.out.printf("task[%,d - %,d], completed in %,d ms%n", startIdx, endIdx, elapsed/1_000_000L);
    }
  }
}

At the end of the processing, this code computes an approximation of the memory used, by performing a System.gc() immediately followed by Runtime.maxMemory() - Runtime.freeMemory(). This shows that a map with 20 million entries takes just under 2.2 GB of memory, which is considerable. I ran it with 1 and 2 threads, for various values of the -Xmx and -Xms JVM flags; the resulting output follows (just to be clear: 2560m = 2.5g):
running with nb entities = 20,000,000, nb tasks = 1, VM args = [-Xms2560m, -Xmx2560m]
task[0 - 19,999,999], completed in 11,781 ms
processing completed in 11,782 ms, usedMemory after GC = 2,379,068,760 bytes

running with nb entities = 20,000,000, nb tasks = 2, VM args = [-Xms2560m, -Xmx2560m]
task[0 - 9,999,999], completed in 8,269 ms
task[10,000,000 - 19,999,999], completed in 12,385 ms
processing completed in 12,386 ms, usedMemory after GC = 2,379,069,480 bytes

running with nb entities = 20,000,000, nb tasks = 1, VM args = [-Xms3g, -Xmx3g]
task[0 - 19,999,999], completed in 12,525 ms
processing completed in 12,527 ms, usedMemory after GC = 2,398,339,944 bytes

running with nb entities = 20,000,000, nb tasks = 2, VM args = [-Xms3g, -Xmx3g]
task[0 - 9,999,999], completed in 12,220 ms
task[10,000,000 - 19,999,999], completed in 12,264 ms
processing completed in 12,265 ms, usedMemory after GC = 2,382,777,776 bytes

running with nb entities = 20,000,000, nb tasks = 1, VM args = [-Xms4g, -Xmx4g]
task[0 - 19,999,999], completed in 7,363 ms
processing completed in 7,364 ms, usedMemory after GC = 2,402,467,040 bytes

running with nb entities = 20,000,000, nb tasks = 2, VM args = [-Xms4g, -Xmx4g]
task[0 - 9,999,999], completed in 5,466 ms
task[10,000,000 - 19,999,999], completed in 5,511 ms
processing completed in 5,512 ms, usedMemory after GC = 2,381,821,576 bytes

running with nb entities = 20,000,000, nb tasks = 1, VM args = [-Xms8g, -Xmx8g]
task[0 - 19,999,999], completed in 7,778 ms
processing completed in 7,779 ms, usedMemory after GC = 2,438,159,312 bytes

running with nb entities = 20,000,000, nb tasks = 2, VM args = [-Xms8g, -Xmx8g]
task[0 - 9,999,999], completed in 5,739 ms
task[10,000,000 - 19,999,999], completed in 5,784 ms
processing completed in 5,785 ms, usedMemory after GC = 2,396,478,680 bytes

These results can be summarized in the following table:
--------------------------------
heap      | exec time (ms) for: 
size (gb) | 1 thread | 2 threads
--------------------------------
2.5       |    11782 |     12386
3.0       |    12527 |     12265
4.0       |     7364 |      5512
8.0       |     7779 |      5785
--------------------------------

I also observed that, for the 2.5g and 3g heap sizes, there was high CPU activity with spikes at 100% during the whole processing time due to GC activity, whereas for 4g and 8g it was only observed at the end, due to the System.gc() call.

To conclude:

1. If your heap is sized inappropriately, garbage collection will kill any performance gain you would hope for. You should make it large enough to avoid the side effects of long GC pauses.

2. You must also be aware that using a concurrent collection such as ConcurrentHashMap carries a significant performance overhead. To illustrate this, I slightly modified the code so that each task uses its own HashMap, then at the end all the maps are aggregated (using Map.putAll()) into the map of the first task. The processing time fell to around 3,200 ms.
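A minimal sketch of that per-task-HashMap variant, since the modified code itself was not posted (class and method names are mine; the key/value strings match the program above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerTaskMapDemo {

    // Each task fills a private HashMap (no contention, no concurrent
    // bookkeeping), then the partial maps are merged into the first one
    // with Map.putAll().
    static Map<String, String> loadPartitioned(int total, int tasks) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(tasks);
        int perTask = total / tasks; // assumes total is divisible by tasks
        List<Future<Map<String, String>>> futures = new ArrayList<>();
        for (int t = 0; t < tasks; t++) {
            final int offset = t * perTask;
            futures.add(executor.submit(() -> {
                Map<String, String> local = new HashMap<>(2 * perTask);
                for (int i = offset; i < offset + perTask; i++) {
                    local.put("sambit:rout:" + i, "C:\\Images\\Provision_Images");
                }
                return local;
            }));
        }
        // Aggregate all partial maps into the first task's map.
        Map<String, String> result = futures.get(0).get();
        for (int t = 1; t < tasks; t++) {
            result.putAll(futures.get(t).get());
        }
        executor.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        Map<String, String> map = loadPartitioned(20_000_000, 2);
        System.out.printf("merged %,d entries in %,d ms%n",
                map.size(), (System.nanoTime() - start) / 1_000_000);
    }
}
```

The trade-off is a temporary doubling of the entry storage during the merge, in exchange for uncontended single-threaded inserts in each task.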

1
If your CPU runs at 3 GHz, an addition may take a single CPU cycle, roughly 0.3 nanoseconds. Repeated 20M times, that becomes 6,000,000 nanoseconds, or 6 milliseconds. So your measurement is affected more by the overhead of starting threads, thread switching, JIT compilation etc. than by the operation you are trying to measure.

Garbage collection may also slow things down.

I suggest you use a specialized micro-benchmarking library such as jmh.
Thanks to assylias's post, which helped me write my answer.
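To see why naive timing misleads, one can time the same insert loop several times inside one JVM; the first, cold run typically reports a much larger number than the later, warmed-up runs, which is exactly the effect a harness like JMH controls for. A quick hand-rolled illustration (class name mine):

```java
import java.util.HashMap;
import java.util.Map;

public class NaiveTimingDemo {

    // Time n puts into the given map, in nanoseconds.
    static long timeLoop(Map<String, String> map, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            map.put("key:" + i, "value");
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        // Run 1 pays for class loading and JIT compilation of the loop;
        // later runs measure the compiled code. JMH automates this
        // warm-up (and much more) instead of leaving it to chance.
        for (int run = 1; run <= 3; run++) {
            long elapsed = timeLoop(new HashMap<>(), n);
            System.out.printf("run %d: %,d ms%n", run, elapsed / 1_000_000);
        }
    }
}
```

This is only an illustration of the pitfall, not a substitute for JMH: it still ignores GC pauses, dead-code elimination, and on-stack replacement, which a real harness handles.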

1
Putting a value into a HashMap definitely takes more than a single CPU cycle: 1. compute the hash code of the object to be added, 2. find the corresponding bucket, 3. check the capacity and grow the map if necessary, 4. insert into the bucket (which can be a list or a tree), 5. return the previously associated value if there was one. The steps are not performed in exactly this order, but my point is that this is not a trivial operation. - Hulk

0

While I did not try multi-threading, I did try all 7 of the appropriate Map types bundled with Java 11.

My results were all much faster than your reported 25 to 40 seconds. For 20,000,000 entries of <String, UUID>, my results were 3 to 9 seconds for any of the 7 map classes.

I am using Java 13 on:

Model Name: Mac mini
Model Identifier:   Macmini8,1
Processor Name: Intel Core i5
Processor Speed:    3 GHz
Number of Processors:   1
Total Number of Cores:  6
L2 Cache (per Core):    256 KB
L3 Cache:   9 MB
Memory: 32 GB

Preparing.
size of instants: 20000000
size of uuids: 20000000
Running test.
java.util.HashMap took: PT3.645250368S
java.util.WeakHashMap took: PT3.199812894S
java.util.TreeMap took: PT8.97788412S
java.util.concurrent.ConcurrentSkipListMap took: PT7.347253106S
java.util.concurrent.ConcurrentHashMap took: PT4.494560252S
java.util.LinkedHashMap took: PT2.78054883S
java.util.IdentityHashMap took: PT5.608737472S
System.out.println( "Preparing." );

int limit = 20_000_000; // 20_000_000
Set < String > instantsSet = new TreeSet <>();  // Use `Set` to forbid duplicates.
List < UUID > uuids = new ArrayList <>( limit );

while ( instantsSet.size() < limit )
{
    instantsSet.add( Instant.now().toString() );
}
List < String > instants = new ArrayList <>( instantsSet );
for ( int i = 0 ; i < limit ; i++ )
{
    uuids.add( UUID.randomUUID() );
}
System.out.println( "size of instants: " + instants.size() );
System.out.println( "size of uuids: " + uuids.size() );

System.out.println( "Running test." );
// Using 7 of the 10 `Map` implementations bundled with Java 11.
// Omitting `EnumMap`, as it requires enums for the key.
// Omitting `Map.of` because it is for literals.
// Omitting `Hashtable` because it is outmoded, replaced by `ConcurrentHashMap`.
List < Map < String, UUID > > maps = List.of(
        new HashMap <>( limit ) ,
        new WeakHashMap <>( limit ) ,
        new TreeMap <>() ,
        new ConcurrentSkipListMap <>() ,
        new ConcurrentHashMap <>( limit ) ,
        new LinkedHashMap <>( limit ) ,
        new IdentityHashMap <>( limit )
);
for ( Map < String, UUID > map : maps )
{
    long start = System.nanoTime();
    for ( int i = 0 ; i < instants.size() ; i++ )
    {
        map.put( instants.get( i ) , uuids.get( i ) );
    }
    long stop = System.nanoTime();
    Duration d = Duration.of( stop - start , ChronoUnit.NANOS );
    System.out.println( map.getClass().getName() + " took: " + d );

    // Free up memory.
    map = null;
    System.gc(); // Request garbage collector do its thing. No guarantee!
    try
    {
        Thread.sleep( TimeUnit.SECONDS.toMillis( 4 ) );  // Wait for garbage collector to hopefully finish. No guarantee!
    }
    catch ( InterruptedException e )
    {
        e.printStackTrace();
    }
}
System.out.println("Done running test.");

Here is a table I wrote comparing the various Map implementations.

[Image: Table of map implementations in Java 11, comparing their features]

