Using PyCUDA with PySpark - nvcc not found


My environment: I'm running a small AWS EC2 cluster of four g2.2xlarge instances with Hortonworks HDP 2.4 and Spark 1.6.1; the instances run Ubuntu 14.04. Each instance has CUDA 7.5, Anaconda Python 3.5, and PyCUDA 2016.1.1 installed.

In /etc/bash.bashrc I have set:

CUDA_HOME=/usr/local/cuda
CUDA_ROOT=/usr/local/cuda
PATH=$PATH:/usr/local/cuda/bin

On all four machines I can run nvcc from the command line as the ubuntu user, the root user, and the yarn user.
My problem: I have a Python/PyCUDA project that I have adapted to run on Spark. It runs fine on the local Spark installation on my Mac, but when I run it on AWS I get errors like FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'. Since it works in local mode on my Mac, my guess is that this is a CUDA/PyCUDA configuration problem in the worker processes, but I really don't know what it might be. Any ideas? Below is the stack trace from one of the failed jobs:
16/11/10 22:34:54 INFO ExecutorAllocationManager: Requesting 13 new executors because tasks are backlogged (new desired total will be 17)
16/11/10 22:34:57 INFO TaskSetManager: Starting task 16.0 in stage 2.0 (TID 34, ip-172-31-26-35.ec2.internal, partition 16,RACK_LOCAL, 2148 bytes)
16/11/10 22:34:57 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-172-31-26-35.ec2.internal:54657 (size: 32.2 KB, free: 511.1 MB)
16/11/10 22:35:03 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 18, ip-172-31-26-35.ec2.internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 46, in call_capture_output
    popen = Popen(cmdline, cwd=cwd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
  File "/home/ubuntu/anaconda3/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/home/ubuntu/anaconda3/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/hadoop/yarn/local/usercache/ubuntu/appcache/application_1478814770538_0004/container_e40_1478814770538_0004_01_000009/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/hadoop/yarn/local/usercache/ubuntu/appcache/application_1478814770538_0004/container_e40_1478814770538_0004_01_000009/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
  File "/home/ubuntu/pycuda-euler/src/cli_spark_gpu.py", line 36, in <lambda>
    hail_mary = data.mapPartitions(lambda x: ec.assemble2(k, buffer=x, readLength = dataLength,readCount=dataCount)).saveAsTextFile('hdfs://172.31.26.32/genome/sra_output')
  File "./eulercuda.zip/eulercuda/eulercuda.py", line 499, in assemble2
    lmerLength, evList, eeList, levEdgeList, entEdgeList, readCount)
  File "./eulercuda.zip/eulercuda/eulercuda.py", line 238, in constructDebruijnGraph
    lmerCount, h_kmerKeys, h_kmerValues, kmerCount, numReads)
  File "./eulercuda.zip/eulercuda/eulercuda.py", line 121, in readLmersKmersCuda
    d_lmers = enc.encode_lmer_device(buffer, partitionReadCount, d_lmers, readLength, lmerLength)
  File "./eulercuda.zip/eulercuda/pyencode.py", line 78, in encode_lmer_device
    """)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 265, in __init__
    arch, code, cache_dir, include_dirs)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 255, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 78, in compile_plain
    checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 50, in preprocess_source
    result, stdout, stderr = call_capture_output(cmdline, error_on_nonzero=False)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 197, in call_capture_output
    return forker[0].call_capture_output(cmdline, cwd, error_on_nonzero)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 54, in call_capture_output
    % ( " ".join(cmdline), e))
pytools.prefork.ExecError: error invoking 'nvcc --preprocess -arch sm_30 -I/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/cuda /tmp/tmpkpqwoaxf.cu --compiler-options -P': [Errno 2] No such file or directory: 'nvcc'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
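
Since the failure happens inside the executors rather than on the driver, one quick way to confirm that the workers' PATH is missing nvcc is to probe the environment from a Spark task itself. A minimal sketch, assuming a live SparkContext named sc (not part of the original post):

import os
import shutil

def probe(_):
    # Report what each Python worker actually sees: the resolved
    # location of nvcc (None if absent) and the inherited PATH.
    yield (shutil.which('nvcc'), os.environ.get('PATH'))

# One small partition per node is enough for a spot check.
print(sc.parallelize(range(4), 4).mapPartitions(probe).collect())

If shutil.which('nvcc') comes back None on the cluster but not in local mode, the problem is environment propagation to the YARN containers rather than the CUDA install itself.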

Have you tried setting CUDA_ROOT to the bin/ directory, as suggested in this answer: https://dev59.com/s13Va4cB1Zd3GeqPE-hG? - Mariusz
I did, but I don't remember whether I did it in bash.bashrc. I'll test it and find out. - zenlc2000
Same result, unfortunately. - zenlc2000
Do you have any more of the stack trace, or just this "No such file or directory"? - Mariusz
2 Answers

I eventually found a workaround for this.
Note: I know this is not a good or permanent answer for most people, but in my case I'm running POC code for my dissertation, and as soon as I have some final results I'm abandoning these servers. I doubt this answer is suitable or appropriate for most users.
What I ended up doing was hard-coding the full path to nvcc into the compile_plain() function in PyCUDA's compiler.py.
Partial listing:
def compile_plain(source, options, keep, nvcc, cache_dir, target="cubin"):
    from os.path import join

    assert target in ["cubin", "ptx", "fatbin"]
    nvcc = '/usr/local/cuda/bin/'+nvcc  # workaround: hard-coded path to nvcc
    if cache_dir:
        checksum = _new_md5()

Hopefully this points someone else in the right direction.
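
A lighter-touch variant of the same idea, for readers who would rather not patch the library: SourceModule accepts an nvcc keyword argument (it is visible being passed through compile() in the traceback above), so the full path can be supplied at the call site instead of inside compiler.py. A minimal sketch with a trivial kernel, just for illustration:

import pycuda.autoinit  # noqa: F401 - initializes a CUDA context
from pycuda.compiler import SourceModule

kernel_source = """
__global__ void noop() { }
"""

# Point PyCUDA at nvcc explicitly; the path matches the CUDA 7.5
# install location described in the question.
mod = SourceModule(kernel_source, nvcc='/usr/local/cuda/bin/nvcc')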


(The second answer, posted by ivan_pozdeev, is not preserved in this mirror; the comments below reply to it.)
Yes, I know all of that, and I spent nearly two weeks trying to find the "right" way to get that path to the Spark workers. - zenlc2000
@zenlc2000 What was the problem with it, then? Since the environment is inherited, you only need to set it for a few "root" processes, which is exactly the case the link covers. Unless you're running the workers in some odd way and/or pyspark resets the environment for some odd reason - the latter seems unlikely. - ivan_pozdeev
The processes are run by the yarn user on the Hadoop cluster. As mentioned above, once I su to yarn I can run nvcc from the command line, but when the framework runs the job the workers don't have that PATH. In the end I found a "good enough" workaround. If I stay on this line of research I'll find the "right" way, but by then my degree and graduation will no longer depend on it. - zenlc2000
By the way - I appreciate your attention and your attempt to answer the question. Thank you. - zenlc2000
@zenlc2000 So now it's not AWS EC2 but Hadoop? Well, basically the same thing (updated). - ivan_pozdeev
Hadoop is a big-data processing framework; I'm running it on AWS EC2. - zenlc2000
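
For anyone who later needs the "right" way the comments circle around: Spark documents spark.executorEnv.[Name] for setting environment variables in the executors, which should let the Python workers find nvcc without patching PyCUDA. A hedged sketch for Spark 1.6 on YARN (whether YARN preserves a PATH override can depend on cluster configuration, so treat this as a starting point, not a confirmed fix):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName('pycuda-on-yarn')
        # spark.executorEnv.<NAME> injects <NAME> into each executor's
        # environment; prepend the CUDA bin directory so the Python
        # workers spawned by the executors can resolve nvcc.
        .set('spark.executorEnv.PATH',
             '/usr/local/cuda/bin:/usr/bin:/bin'))
sc = SparkContext(conf=conf)

An even blunter alternative is to prepend '/usr/local/cuda/bin' to os.environ['PATH'] at the top of the worker-side code, before anything imports pycuda.compiler.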
