I'm experimenting with the new concurrent.futures module introduced in Python 3.2, and I've noticed that almost identical code runs significantly slower using the Pool from concurrent.futures than using multiprocessing.Pool. Here is the version using multiprocessing:
def hard_work(n):
    # Real hard work here
    pass

if __name__ == '__main__':
    from multiprocessing import Pool, cpu_count
    try:
        workers = cpu_count()
    except NotImplementedError:
        workers = 1
    pool = Pool(processes=workers)
    result = pool.map(hard_work, range(100, 1000000))
And this is the version using concurrent.futures:
def hard_work(n):
    # Real hard work here
    pass

if __name__ == '__main__':
    from concurrent.futures import ProcessPoolExecutor, wait
    from multiprocessing import cpu_count
    try:
        workers = cpu_count()
    except NotImplementedError:
        workers = 1
    pool = ProcessPoolExecutor(max_workers=workers)
    result = pool.map(hard_work, range(100, 1000000))
Using a naive factorization function taken from Eli Bendersky's article, these are the results on my computer (i7, 64-bit, Arch Linux):
[juanlu@nebulae]─[~/Development/Python/test]
└[10:31:10] $ time python pool_multiprocessing.py
real 0m10.330s
user 1m13.430s
sys 0m0.260s
[juanlu@nebulae]─[~/Development/Python/test]
└[10:31:29] $ time python pool_futures.py
real 4m3.939s
user 6m33.297s
sys 0m54.853s
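For reference, the hard_work placeholder above stands for a naive trial-division factorization along these lines (my rough reconstruction of the idea, not Bendersky's exact code):

```python
def factorize_naive(n):
    """Naive trial-division factorization: repeatedly divide out each
    candidate factor p, then whatever remains above 1 is prime."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize_naive(360))  # [2, 2, 2, 3, 3, 5]
```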
I can't profile this with the Python profiler because I get pickle errors. Any ideas?
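One difference I suspect matters here: Pool.map batches the iterable into chunks before sending work to the children, while in Python 3.2 ProcessPoolExecutor.map submits one task per item (a chunksize parameter was only added to Executor.map in Python 3.5). A minimal sketch of manual chunking to test that theory; the chunk size of 100 and the helper names are arbitrary choices of mine:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import chain

def hard_work(n):
    # placeholder for the real workload
    return n

def hard_work_chunk(chunk):
    # one task per batch instead of one task per item,
    # amortizing the pickle/IPC round-trip over the whole chunk
    return [hard_work(n) for n in chunk]

def chunked(seq, size):
    # split a sequence into slices of at most `size` items
    return [seq[i:i + size] for i in range(0, len(seq), size)]

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as pool:
        chunks = chunked(range(100, 1100), 100)
        result = list(chain.from_iterable(pool.map(hard_work_chunk, chunks)))
        print(len(result))  # 1000
```

If the futures version with batching closes most of the gap, the overhead is per-task bookkeeping rather than the workload itself.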