Calling mpirun from Python to run an embedded MPI program

I am trying to run some parallel optimization using PyOpt. The tricky part is that, within my objective function, I want to run a C++ code that uses MPI.
My Python script is the following:
#!/usr/bin/env python    
# Standard Python modules
import os, sys, time, math
import subprocess


# External Python modules
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    myrank = comm.Get_rank()
except ImportError:
    raise ImportError('mpi4py is required for parallelization')

# Extension modules
from pyOpt import Optimization
from pyOpt import ALPSO

# Predefine the BashCommand
RunCprogram = "mpirun -np 2 CProgram" # Parallel C++ program


######################### 
def objfunc(x):

    f = -(((math.sin(2*math.pi*x[0])**3)*math.sin(2*math.pi*x[1]))/((x[0]**3)*(x[0]+x[1])))

    # Run CProgram 
    os.system(RunCprogram) #where the mpirun call occurs

    g = [0.0]*2
    g[0] = x[0]**2 - x[1] + 1
    g[1] = 1 - x[0] + (x[1]-4)**2

    time.sleep(0.01)
    fail = 0
    return f,g, fail

# Instantiate Optimization Problem 
opt_prob = Optimization('Thermal Conductivity Optimization',objfunc)
opt_prob.addVar('x1','c',lower=1e-6,upper=5.0,value=10.0)
opt_prob.addVar('x2','c',lower=1e-6,upper=5.0,value=10.0)
opt_prob.addObj('f')
opt_prob.addCon('g1','i')
opt_prob.addCon('g2','i')

# Solve Problem (DPM-Parallelization)
alpso_dpm = ALPSO(pll_type='DPM')
alpso_dpm.setOption('fileout',0)
alpso_dpm(opt_prob)
print(opt_prob.solution(0))

I run that code using:

mpirun -np 20 python Script.py

However, I get the following error:
[user:28323] *** Process received signal ***
[user:28323] Signal: Segmentation fault (11)
[user:28323] Signal code: Address not mapped (1)
[user:28323] Failing at address: (nil)
[user:28323] [ 0] /lib64/libpthread.so.0() [0x3ccfc0f500]
[user:28323] *** End of error message ***

I believe that the two distinct mpirun calls (the one launching the Python script and the one inside the script) conflict with each other. Is there any workaround?
Thanks!

Are you using MPI communications to exchange data between the Python processes, or just using mpi4py to run multiple isolated instances? If the latter, you could consider spawning multiple threads in Python with the subprocess module, each of which calls its own mpirun instance (using subprocess.Popen). I do this quite a lot without any problems. If you need to run Script.py across multiple machines, it may not be possible though... - Ed Smith
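The comment's suggestion can be sketched as below. Each thread drives one independent external process; the placeholder commands stand in for separate `mpirun -np 2 CProgram` invocations (the real command is an assumption here):

```python
import subprocess
import sys
import threading

# Each thread launches and waits on one external process; in the real setup
# the command list would be ["mpirun", "-np", "2", "CProgram"].
def launch(cmd, results, idx):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    out, _ = p.communicate()
    results[idx] = out.strip()

# Two placeholder child commands that just print which run they are.
cmds = [[sys.executable, "-c", "print('run %d')" % i] for i in range(2)]
results = [None] * len(cmds)
threads = [threading.Thread(target=launch, args=(c, results, i))
           for i, c in enumerate(cmds)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['run 0', 'run 1']
```

Since each child is its own mpirun job, the Python threads only block on I/O and the launches proceed concurrently.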
1 Answer

To call an MPI binary from a serial subprocess of an MPI application, the safest way is to use MPI_Comm_spawn(). See for instance this manager-worker example.
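A minimal mpi4py sketch of the MPI_Comm_spawn() route; the worker executable name `./CProgram` is an assumption, and the call only works when the function is invoked from inside a running MPI job:

```python
def spawn_workers(nworkers=2, exe="./CProgram"):
    # Lazy import: mpi4py and a running MPI environment are needed only
    # when the function is actually called.
    from mpi4py import MPI

    # Spawn the workers from the current process instead of nesting a
    # second mpirun; they come back attached via an intercommunicator.
    child = MPI.COMM_SELF.Spawn(exe, maxprocs=nworkers)
    # A real manager would exchange data with the workers here.
    child.Disconnect()
```

Because the workers are children of the existing MPI job, there is no second, conflicting mpirun and no inherited launcher state to clean up.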
A quick workaround is to use subprocess.Popen, as mentioned by @EdSmith. However, notice that subprocess.Popen by default uses the environment of the parent process, and my guess is that the same holds for os.system(). Unfortunately, mpirun adds some environment variables, depending on the MPI implementation, such as OMPI_COMM_WORLD_RANK and OMPI_MCA_orte_ess_num_procs. To see these environment variables, type import os; print(os.environ) both in an mpi4py code and in a basic Python shell. These environment variables can make the subprocess fail, so I had to add a line to get rid of them... which is rather dirty... It boils down to:
    args = shlex.split(RunCprogram)
    # remove all environment variables with "MPI" in the name... rather dirty...
    new_env = {k: v for k, v in os.environ.items() if "MPI" not in k}

    # pass the argument list directly rather than shell=True,
    # which avoids shell-injection issues
    p = subprocess.Popen(args, env=new_env,
                         stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    p.wait()
    result = "process myrank " + str(myrank) + " got " + p.stdout.read().decode()
    print(result)
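On Python 3 the same filtering idea can also be written with `subprocess.run`; the child command below is just a placeholder that counts how many MPI-related variables it inherited:

```python
import os
import subprocess
import sys

# Filter out everything with "MPI" in the name, as in the quick fix above.
new_env = {k: v for k, v in os.environ.items() if "MPI" not in k}

# The child reports how many of its environment variables contain "MPI";
# with the filtered env it should inherit none.
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(sum('MPI' in k for k in os.environ))"],
    env=new_env, capture_output=True, text=True, check=True,
).stdout.strip()
print(out)
```

`env=` replaces the child's environment entirely, so nothing set by mpirun leaks through.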

The full test code, run with mpirun -np 2 python opti.py:

#!/usr/bin/env python    
# Standard Python modules
import os, sys, time, math
import subprocess
import shlex


# External Python modules
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    myrank = comm.Get_rank()
except ImportError:
    raise ImportError('mpi4py is required for parallelization')

# Predefine the BashCommand
RunCprogram = "mpirun -np 2 main" # Parallel C++ program


######################### 
def objfunc(x):

    f = -(((math.sin(2*math.pi*x[0])**3)*math.sin(2*math.pi*x[1]))/((x[0]**3)*(x[0]+x[1])))

    # Run CProgram 
    #os.system(RunCprogram) # where the mpirun call occurs
    args = shlex.split(RunCprogram)
    # strip mpirun's environment variables so the nested call does not fail
    new_env = {k: v for k, v in os.environ.items() if "MPI" not in k}

    p = subprocess.Popen(args, env=new_env,
                         stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    p.wait()
    result = "process myrank " + str(myrank) + " got " + p.stdout.read().decode()
    print(result)



    g = [0.0]*2
    g[0] = x[0]**2 - x[1] + 1
    g[1] = 1 - x[0] + (x[1]-4)**2

    time.sleep(0.01)
    fail = 0
    return f,g, fail

print(objfunc([1.0,0.0]))
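As a sanity check, the objective and constraints can also be evaluated with no MPI or pyOpt involved at all (same formulas as in objfunc above):

```python
import math

def evaluate(x):
    # Same objective and constraint expressions as in objfunc.
    f = -((math.sin(2 * math.pi * x[0]) ** 3 * math.sin(2 * math.pi * x[1]))
          / (x[0] ** 3 * (x[0] + x[1])))
    g = [x[0] ** 2 - x[1] + 1,
         1 - x[0] + (x[1] - 4) ** 2]
    return f, g

f, g = evaluate([1.0, 0.0])
print(f, g)  # f is numerically 0 here because sin(2*pi*0.0) == 0.0
```

This helps separate any optimizer-side problems from the mpirun-in-mpirun issue the question is about.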

The basic worker program, compiled with mpiCC main.cpp -o main:

#include "mpi.h"
#include <iostream>

int main(int argc, char* argv[]) { 
    int rank, size;

    MPI_Init (&argc, &argv);    
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);  
    MPI_Comm_size (MPI_COMM_WORLD, &size);  

    if(rank==0){
        std::cout<<" size "<<size<<std::endl;
    }
    MPI_Finalize();

    return 0;

}
