I'm trying to use concurrent.futures. At the moment my future calls time.sleep(secs).

It seems that Future.cancel() does less than I thought: if the future is already executing, time.sleep() is not cancelled by it. The timeout parameter of wait() likewise does not cancel time.sleep().

How do I cancel a time.sleep() that is running concurrently? I am using a ThreadPoolExecutor for testing.
import concurrent.futures as f
import time

T = f.ThreadPoolExecutor(1)  # Run at most one function concurrently

def block5():
    time.sleep(5)
    return 1

q = T.submit(block5)
m = T.submit(block5)

print(q.cancel())  # Will fail, because q is already running
print(m.cancel())  # Will work, because q is blocking the only thread, so m is still queued
…execution continues after n seconds. With multiprocessing, the sleep can be interrupted, because there you can of course "kill" the other process. – Phillip
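To illustrate that comment, here is a minimal sketch (not from the thread, reusing block5 from the question): terminating the worker process also ends the time.sleep() inside it.

import multiprocessing
import time

def block5():
    time.sleep(5)
    return 1

if __name__ == '__main__':
    p = multiprocessing.Process(target=block5)
    p.start()
    time.sleep(1)
    p.terminate()      # kills the child process even though it is inside time.sleep()
    p.join()
    print(p.exitcode)  # negative on POSIX: the child was killed by a signal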
I don't know much about concurrent.futures, but you can use this logic to control the timing: use a loop instead of time.sleep() or wait().

for i in range(sec):
    sleep(1)

You can use an interrupt or a break statement to get out of the loop.
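A minimal runnable sketch of that loop-and-break idea, using a threading.Event as the cancellation flag (cancel_event and the one-second slices are illustrative choices, not from the answer):

import concurrent.futures
import threading
import time

cancel_event = threading.Event()   # calling set() on this flag requests cancellation

def block5():
    # Sleep in one-second slices so the flag is checked between slices.
    for _ in range(5):
        if cancel_event.is_set():
            return None            # give up early instead of finishing the full sleep
        time.sleep(1)
    return 1

with concurrent.futures.ThreadPoolExecutor(1) as executor:
    fut = executor.submit(block5)
    time.sleep(2)
    cancel_event.set()             # "cancels" the running sleep within about a second
    print(fut.result())            # prints None

threading.Event.wait(1) could replace time.sleep(1) here; it returns as soon as the event is set, which removes even the residual one-second delay.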
from concurrent.futures import ThreadPoolExecutor
import queue
import time

class Runner:
    def __init__(self):
        self.q = queue.Queue()
        self.exec = ThreadPoolExecutor(max_workers=2)

    def task(self):
        while True:
            try:
                # Blocks for at most one second; this is the interruptible "sleep".
                self.q.get(block=True, timeout=1)
                break              # something was put into the queue -> stop
            except queue.Empty:
                pass               # timeout expired -> keep running
            print('running')

    def run(self):
        self.exec.submit(self.task)

    def stop(self):
        self.q.put(None)           # wakes the blocking get() immediately
        self.exec.shutdown(wait=False, cancel_futures=True)

r = Runner()
r.run()
time.sleep(5)
r.stop()
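The blocking q.get(timeout=1) plays the role of time.sleep(): it pauses for at most one second at a time, but returns immediately as soon as stop() puts a sentinel into the queue, so the worker can be stopped mid-"sleep". Note that the cancel_futures argument of Executor.shutdown() requires Python 3.9 or later.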
I ran into this problem recently as well. I had two tasks that needed to run concurrently, and one of them had to sleep from time to time. In the code below, assume task2 is the task that needs to sleep.
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=2)
executor.submit(task1)
executor.submit(task2)
executor.shutdown(wait=True)
An alternative is to submit only task1 to the executor and run task2 directly in the main thread:

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
executor.submit(task1)
task2()
executor.shutdown(wait=True)
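Presumably the point of the second form is that the sleeping task then runs in the main thread, where (in CPython) a time.sleep() can be interrupted, for example by KeyboardInterrupt, whereas a sleep inside a worker thread cannot be interrupted that way.

For reference, the code below is the ThreadPoolExecutor example from the concurrent.futures documentation; it shows the standard submit / as_completed pattern.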
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))