Spark - Container is running beyond physical memory limits

17

I have a cluster with two worker nodes:
Worker_Node_1 - 64 GB RAM
Worker_Node_2 - 32 GB RAM

Background overview: I am trying to run spark-submit on yarn-cluster, applying Pregel to a graph to compute the shortest-path distance from one source vertex to every other vertex and print the values on the console. Experiments:

  1. For a small graph with 15 vertices, execution completes and the application's final status is SUCCEEDED.
  2. My code correctly computes the shortest distances from the source vertex for a graph with 241 vertices, but there is a problem.

Problem: When I look at the log file, the task finishes successfully in 4 minutes 26 seconds, but the terminal keeps showing the application status as Running, and after about 12 minutes the execution is terminated with -

Application application_1447669815913_0002 failed 2 times due to AM Container for appattempt_1447669815913_0002_000002 exited with exitCode: -104 For more detailed output, check application tracking page:http://myserver.com:8088/proxy/application_1447669815913_0002/
Then, click on links to logs of each attempt. 
Diagnostics: Container [pid=47384,containerID=container_1447669815913_0002_02_000001] is running beyond physical memory limits. Current usage: 17.9 GB of 17.5 GB physical memory used; 18.7 GB of 36.8 GB virtual memory used. Killing container.

Dump of the process-tree for container_1447669815913_0002_02_000001 : 
 |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 47387 47384 47384 47384 (java) 100525 13746 20105633792 4682973 /usr/lib/jvm/java-7-oracle-cloudera/bin/java -server -Xmx16384m -Djava.io.tmpdir=/yarn/nm/usercache/cloudera/appcache/application_1447669815913_0002/container_1447669815913_0002_02_000001/tmp -Dspark.eventLog.enabled=true -Dspark.eventLog.dir=hdfs://myserver.com:8020/user/spark/applicationHistory -Dspark.executor.memory=14g -Dspark.shuffle.service.enabled=false -Dspark.yarn.executor.memoryOverhead=2048 -Dspark.yarn.historyServer.address=http://myserver.com:18088 -Dspark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native -Dspark.shuffle.service.port=7337 -Dspark.yarn.jar=local:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/lib/spark-assembly.jar -Dspark.serializer=org.apache.spark.serializer.KryoSerializer -Dspark.authenticate=false -Dspark.app.name=com.path.PathFinder -Dspark.master=yarn-cluster -Dspark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native -Dspark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class com.path.PathFinder --jar file:/home/cloudera/Documents/Longest_Path_Data_1/Jars/ShortestPath_Loop-1.0.jar --arg /home/cloudera/workspace/Spark-Integration/LongestWorstPath/configFile --executor-memory 14336m --executor-cores 32 --num-executors 2
|- 47384 47382 47384 47384 (bash) 2 0 17379328 853 /bin/bash -c LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native::/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native /usr/lib/jvm/java-7-oracle-cloudera/bin/java -server -Xmx16384m -Djava.io.tmpdir=/yarn/nm/usercache/cloudera/appcache/application_1447669815913_0002/container_1447669815913_0002_02_000001/tmp '-Dspark.eventLog.enabled=true' '-Dspark.eventLog.dir=hdfs://myserver.com:8020/user/spark/applicationHistory' '-Dspark.executor.memory=14g' '-Dspark.shuffle.service.enabled=false' '-Dspark.yarn.executor.memoryOverhead=2048' '-Dspark.yarn.historyServer.address=http://myserver.com:18088' '-Dspark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native' '-Dspark.shuffle.service.port=7337' '-Dspark.yarn.jar=local:/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/lib/spark-assembly.jar' '-Dspark.serializer=org.apache.spark.serializer.KryoSerializer' '-Dspark.authenticate=false' '-Dspark.app.name=com.path.PathFinder' '-Dspark.master=yarn-cluster' '-Dspark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native' '-Dspark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hadoop/lib/native' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'com.path.PathFinder' --jar file:/home/cloudera/Documents/Longest_Path_Data_1/Jars/ShortestPath_Loop-1.0.jar --arg '/home/cloudera/workspace/Spark-Integration/LongestWorstPath/configFile' --executor-memory 14336m --executor-cores 32 --num-executors 2 1> /var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001/stdout 2> /var/log/hadoop-yarn/container/application_1447669815913_0002/container_1447669815913_0002_02_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.

Things I have tried:

  1. yarn.scheduler.maximum-allocation-mb - 32 GB
  2. mapreduce.map.memory.mb = 2048 (previously 1024)
  3. Tried varying --driver-memory up to 24g

How should I configure the Resource Manager so that it can also handle large graphs (> 300K vertices)? Thanks.
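
For reference, a sketch of the submit command behind the container dump above, reconstructed from the logged arguments (treat it as an approximation, not the exact command that was run):

    # Approximate reconstruction from the AM container's command line above
    spark-submit \
      --master yarn-cluster \
      --class com.path.PathFinder \
      --num-executors 2 \
      --executor-cores 32 \
      --executor-memory 14g \
      --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
      /home/cloudera/Documents/Longest_Path_Data_1/Jars/ShortestPath_Loop-1.0.jar \
      /home/cloudera/workspace/Spark-Integration/LongestWorstPath/configFile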


1
There is a similar earlier question: https://dev59.com/E2Ei5IYBdhLWcg3wueV0 - Carlos AG
@aditya Did you find anything? The others were not able to help me. - Gaurav Shah
You need to fine-tune the application according to the capacity of your cluster. When doing spark-submit with yarn-cluster, the parameters --driver-memory, --executor-memory, --executor-cores and --num-executors are very important. - mn0102
Please read this link: http://spark.apache.org/docs/latest/tuning.html - mn0102
I have the same problem. Does anyone know how to work out which operation causes the OutOfMemory? What if it is some join or cached data? Thanks! - Erica
Is there a solution to this? I am facing the same issue. https://stackoverflow.com/questions/49209905/spark-application-throws-container-physical-memory-error - Surender Raja
5 Answers

4

Simply increasing the default spark.driver.memory configuration from 512m to 2g solved this problem in my case.

If you still get the same error, you can set the memory even higher. Then you can keep reducing it until you hit the same error again, so that you can determine the optimal driver memory for the job.
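
A minimal sketch of passing this at submit time (the 2g value is just the starting point from this answer; the class and jar are taken from the question):

    # Start at 2g and adjust up/down until you find the boundary where the error appears
    spark-submit \
      --master yarn-cluster \
      --driver-memory 2g \
      --class com.path.PathFinder \
      ShortestPath_Loop-1.0.jar

Note that in yarn-cluster mode the driver runs inside the AM container, so its memory has to be set on the command line or in spark-defaults.conf; setting spark.driver.memory in the application's SparkConf has no effect, because the driver JVM has already started by the time that code runs.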


3

The more data you process, the more memory each Spark task needs. If your executor is running too many tasks, it can run out of memory. When I had problems processing large amounts of data, it was usually because the number of cores per executor was not balanced properly. Try either reducing the number of cores or increasing the executor memory.

One easy way to check for memory issues is the Executors tab in the Spark UI. If you see a lot of red bars indicating high garbage-collection time, you are probably running out of memory in your executors.
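
As a rough illustration (the choice of 8 cores below is arbitrary, not a tuned value), keeping the executor memory from the question but running fewer concurrent tasks per executor gives each task a larger share of the heap:

    # Fewer cores per executor => fewer concurrent tasks sharing the same 14g heap
    spark-submit \
      --master yarn-cluster \
      --num-executors 2 \
      --executor-cores 8 \
      --executor-memory 14g \
      --class com.path.PathFinder \
      ShortestPath_Loop-1.0.jar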


He says the __container__ is running out of memory, not the executor. - CpILL

2

One thing that fixed this problem for me was increasing spark.yarn.executor.memoryOverhead, which accounts for off-heap memory. Don't forget this setting when you increase driver-memory and executor-memory.
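
For example (the sizes below are placeholders), the overhead settings sit next to the heap sizes at submit time. YARN kills a container as soon as heap plus off-heap usage exceeds heap + overhead; in the question's log, the 17.5 GB container limit is only about 1.5 GB above the 16 GB AM heap (-Xmx16384m), so there is very little headroom for off-heap usage.

    # Placeholder sizes; memoryOverhead is the off-heap headroom (in MB) on top of the heap
    spark-submit \
      --master yarn-cluster \
      --driver-memory 8g \
      --conf spark.yarn.driver.memoryOverhead=2048 \
      --executor-memory 14g \
      --conf spark.yarn.executor.memoryOverhead=2048 \
      --class com.path.PathFinder \
      ShortestPath_Loop-1.0.jar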


1
I had a similar problem:
The key error information:
- exitCode: -104 - 'PHYSICAL' memory limit
Application application_1577148289818_10686 failed 2 times due to AM Container for appattempt_1577148289818_10686_000002 exited with **exitCode: -104**

Failing this attempt.Diagnostics: [2019-12-26 09:13:54.392]Container [pid=18968,containerID=container_e96_1577148289818_10686_02_000001] is running 132722688B beyond the **'PHYSICAL' memory limit**. Current usage: 1.6 GB of 1.5 GB physical memory used; 4.6 GB of 3.1 GB virtual memory used. Killing container.

Increasing spark.executor.memory and spark.executor.memoryOverhead did not help.

Then I increased spark.driver.memory and the problem was solved.
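
One observation on this diagnostic (not something stated in the answer itself): the killed container ID ends in _000001, which is conventionally the ApplicationMaster, i.e. the driver in cluster mode, which would explain why only the driver-side setting helped. A minimal resubmission sketch, with a placeholder class/jar and an assumed 4g value:

    # Hypothetical resubmission; only the driver/AM container size is raised
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --driver-memory 4g \
      --class com.example.MyJob \
      my-job.jar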

