Number of processes with Python multiprocessing Pool


I am creating a process pool with Python's multiprocessing Pool module and assigning tasks to it.

I created 4 processes and submitted 2 tasks, but when I try to display their process ids I only see one process id, "6952"... Shouldn't it print two different process ids?

from multiprocessing import Pool
from time import sleep
import os

def f(x):
    print("process id = ", os.getpid())
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes

    result  = pool.map_async(f, (11,))    # Start job 1
    result1 = pool.map_async(f, (10,))    # Start job 2
    print("result = ", result.get(timeout=1))
    print("result1 = ", result1.get(timeout=1))

Result:

result = process id =  6952
process id =  6952
 [121]
result1 =  [100]

Are you using Windows? - dano
2 Answers


It's just a matter of timing. Windows needs to spawn 4 processes in the Pool, which then need to start up, initialize, and get ready to consume from the Queue. On Windows, this requires each child process to re-import the __main__ module, and to unpickle the Queue instances used internally by the Pool. That takes a non-trivial amount of time. Long enough, in fact, that both of your map_async() calls are done before all of the processes in the Pool are even up and running. You can see this if you add some tracing to the function each worker in the Pool runs:

while maxtasks is None or (maxtasks and completed < maxtasks):
    try:
        print("getting {}".format(current_process()))
        task = get()  # This is getting the task from the parent process
        print("got {}".format(current_process()))

Output:

getting <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
got <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
process id =  5145
getting <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
got <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
process id =  5145
getting <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
result =  [121]
result1 =  [100]
getting <ForkServerProcess(ForkServerPoolWorker-2, started daemon)>
getting <ForkServerProcess(ForkServerPoolWorker-3, started daemon)>
getting <ForkServerProcess(ForkServerPoolWorker-4, started daemon)>
got <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>

As you can see, Worker-1 starts up and consumes both tasks before workers 2-4 ever attempt to consume from the Queue. If you instantiate the Pool in the main process, but add a sleep call before calling map_async, you'll see a different process handle each request:
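That modified driver might look like the following sketch (the one-second sleep is an arbitrary value I'm assuming is long enough for all four workers to finish starting):

```python
import os
from multiprocessing import Pool
from time import sleep

def f(x):
    print("process id = ", os.getpid())
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)
    sleep(1)  # give all 4 workers time to start and block on the task Queue
    result  = pool.map_async(f, (11,))   # Start job 1
    result1 = pool.map_async(f, (10,))   # Start job 2
    print("result = ", result.get(timeout=2))
    print("result1 = ", result1.get(timeout=2))
    pool.close()
    pool.join()
```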

getting <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
getting <ForkServerProcess(ForkServerPoolWorker-2, started daemon)>
getting <ForkServerProcess(ForkServerPoolWorker-3, started daemon)>
getting <ForkServerProcess(ForkServerPoolWorker-4, started daemon)>
# <sleeping here>
got <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
process id =  5183
got <ForkServerProcess(ForkServerPoolWorker-2, started daemon)>
process id =  5184
getting <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
getting <ForkServerProcess(ForkServerPoolWorker-2, started daemon)>
result =  [121]
result1 =  [100]
got <ForkServerProcess(ForkServerPoolWorker-3, started daemon)>
got <ForkServerProcess(ForkServerPoolWorker-4, started daemon)>
got <ForkServerProcess(ForkServerPoolWorker-1, started daemon)>
got <ForkServerProcess(ForkServerPoolWorker-2, started daemon)>

(Note that the extra "getting/got" statements you see are sentinels being sent to each process to gracefully shut them down.)

Using Python 3.x on Linux, I was able to reproduce this behavior with the 'spawn' and 'forkserver' contexts, but not with 'fork'. Presumably that's because forking the child processes is much faster than spawning them and re-importing __main__.
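To experiment with this yourself, the start method can be selected explicitly with multiprocessing.get_context (available since Python 3.4); a minimal sketch, with "fork" chosen here purely for illustration:

```python
import os
from multiprocessing import get_context

def f(x):
    print("process id = ", os.getpid())
    return x * x

if __name__ == '__main__':
    # With "fork" the workers are ready almost immediately, so the two
    # jobs tend to land on different workers; with "spawn" or
    # "forkserver" the first worker to come up often grabs both.
    ctx = get_context("fork")            # also try "spawn" / "forkserver"
    with ctx.Pool(processes=4) as pool:
        result  = pool.map_async(f, (11,))
        result1 = pool.map_async(f, (10,))
        print("result = ", result.get(timeout=5))
        print("result1 = ", result1.get(timeout=5))
```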


It does print 2 process ids.

result = process id =  6952  <=== process id = 6952
process id =  6952  <=== process id = 6952
 [121]
result1 =  [100]

That's because your worker process finished the first task quickly and was ready to handle another request.
result  =  pool.map_async(f, (11,))   #Start job 1 
result1 =  pool.map_async(f, (10,))   #Start job 2

In the code above, your worker finished the work and went back to the pool, ready to take on the second job. This can happen for a number of reasons; the most common are that the other workers are busy, or are not set up yet.
Here is an example where we have 4 workers, but only one of them will be ready right away. That way we know which one is going to do the work.
# https://gist.github.com/dnozay/b2462798ca89fbbf0bf4

from multiprocessing import Pool, Queue
from time import sleep
import os

def f(x):
    print("process id = ", os.getpid())
    return x*x

# Queue that holds the amount of time to sleep
# for each worker during initialization.
sleeptimes = Queue()
for times in [2, 3, 0, 2]:
    sleeptimes.put(times)

# Each worker runs the following init
# before it is handed any task.
# In our case the 3rd worker won't sleep
# and will get all the work.
def slowstart(q):
    num = q.get()
    print("slowstart: process id = {0} (sleep({1}))".format(os.getpid(), num))
    sleep(num)

if __name__ == '__main__':
    pool = Pool(processes=4, initializer=slowstart, initargs=(sleeptimes,))  # start 4 worker processes
    result  = pool.map_async(f, (11,))   # Start job 1
    result1 = pool.map_async(f, (10,))   # Start job 2
    print("result = ", result.get(timeout=3))
    print("result1 = ", result1.get(timeout=3))

Example:

$ python main.py 
slowstart: process id = 97687 (sleep(2))
slowstart: process id = 97688 (sleep(3))
slowstart: process id = 97689 (sleep(0))
slowstart: process id = 97690 (sleep(2))
process id =  97689
process id =  97689
result =  [121]
result1 =  [100]
