I don't really like the approach of any of the existing answers.

Timo's code: it may sleep/select() during CURLM_CALL_MULTI_PERFORM, which is wrong, and it may also fail to sleep when ($still_running > 0 && $exec != CURLM_CALL_MULTI_PERFORM), which may make the code spin at 100% CPU usage (of one core) for no reason.

Sudhir's code: it will not sleep while $still_running > 0, and instead spam-calls the async function curl_multi_exec() until everything has been downloaded, which makes PHP use 100% CPU (of one core) until the downloads have completed; in other words, it fails to sleep while downloading.

Here is an approach with neither of those issues:
$websites = array(
    "http://google.com",
    "http://example.org"
    // $url2,
    // $url3,
    // ...
    // $url15
);
$mh = curl_multi_init();
foreach ($websites as $website) {
    $worker = curl_init($website);
    curl_setopt_array($worker, [
        CURLOPT_RETURNTRANSFER => 1
    ]);
    curl_multi_add_handle($mh, $worker);
}
for (;;) {
    $still_running = null;
    do {
        $err = curl_multi_exec($mh, $still_running);
    } while ($err === CURLM_CALL_MULTI_PERFORM);
    if ($err !== CURLM_OK) {
        // handle curl multi error?
    }
    if ($still_running < 1) {
        // all downloads completed
        break;
    }
    // some haven't finished downloading, sleep until more data arrives:
    curl_multi_select($mh, 1);
}
$results = [];
while (false !== ($info = curl_multi_info_read($mh))) {
    if ($info["result"] !== CURLE_OK) {
        // handle download error?
    }
    $results[curl_getinfo($info["handle"], CURLINFO_EFFECTIVE_URL)] = curl_multi_getcontent($info["handle"]);
    curl_multi_remove_handle($mh, $info["handle"]);
    curl_close($info["handle"]);
}
curl_multi_close($mh);
var_export($results);
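The two `// handle ... error?` placeholders above are intentionally left to the caller. A minimal sketch of filling them in, using the real curl_multi_strerror()/curl_strerror() helpers (available since PHP 5.5) to turn status codes into readable log lines — the function names here are made up for illustration, not part of any library:

```php
<?php
// Sketch: log curl multi/easy failures instead of silently ignoring them.
// report_multi_error() / report_download_error() are illustrative names.
function report_multi_error(int $err): void
{
    if ($err !== CURLM_OK) {
        // curl_multi_strerror() maps a CURLM_* code to readable text
        error_log("curl_multi_exec() failed: " . curl_multi_strerror($err));
    }
}

function report_download_error(array $info): void
{
    if ($info["result"] !== CURLE_OK) {
        $url = curl_getinfo($info["handle"], CURLINFO_EFFECTIVE_URL);
        // curl_strerror() maps a CURLE_* code to readable text
        error_log("downloading " . $url . " failed: " . curl_strerror($info["result"]));
    }
}
```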
Note that one problem shared by all three approaches (my answer, Sudhir's answer, and Timo's answer) is that they open all connections simultaneously. If you need to fetch 1,000,000 websites, these scripts will try to open 1,000,000 connections at the same time. If you want to download only, say, 50 websites at a time, try:
$websites = array(
    "http://google.com",
    "http://example.org"
);
var_dump(fetch_urls($websites, 50));
function fetch_urls(array $urls, int $max_connections, int $timeout_ms = 10000, bool $return_fault_reason = true): array
{
    if ($max_connections < 1) {
        throw new InvalidArgumentException("max_connections MUST be >=1");
    }
    foreach ($urls as $key => $foo) {
        if (! is_string($foo)) {
            throw new \InvalidArgumentException("all urls must be strings!");
        }
        if (empty($foo)) {
            unset($urls[$key]);
        }
    }
    unset($foo);
    $ret = array();
    $mh = curl_multi_init();
    $workers = array();
    $work = function () use (&$ret, &$workers, &$mh, $return_fault_reason) {
        // run transfers until at least one worker has finished
        while (1) {
            $still_running = null;
            do {
                $err = curl_multi_exec($mh, $still_running);
            } while ($err === CURLM_CALL_MULTI_PERFORM);
            if ($still_running < count($workers)) {
                // at least one worker finished
                break;
            }
            // none finished yet, sleep until more data arrives:
            curl_multi_select($mh, 1);
        }
        while (false !== ($info = curl_multi_info_read($mh))) {
            if ($info['msg'] !== CURLMSG_DONE) {
                continue;
            }
            if ($info['result'] !== CURLE_OK) {
                if ($return_fault_reason) {
                    $ret[$workers[(int) $info['handle']]] = print_r(array(
                        false,
                        $info['result'],
                        "curl_exec error " . $info['result'] . ": " . curl_strerror($info['result'])
                    ), true);
                }
            } elseif (CURLE_OK !== ($err = curl_errno($info['handle']))) {
                if ($return_fault_reason) {
                    $ret[$workers[(int) $info['handle']]] = print_r(array(
                        false,
                        $err,
                        "curl error " . $err . ": " . curl_strerror($err)
                    ), true);
                }
            } else {
                $ret[$workers[(int) $info['handle']]] = curl_multi_getcontent($info['handle']);
            }
            curl_multi_remove_handle($mh, $info['handle']);
            assert(isset($workers[(int) $info['handle']]));
            unset($workers[(int) $info['handle']]);
            curl_close($info['handle']);
        }
    };
    foreach ($urls as $url) {
        while (count($workers) >= $max_connections) {
            $work();
        }
        $neww = curl_init($url);
        if (! $neww) {
            trigger_error("curl_init() failed! probably means that max_connections is too high and you ran out of system resources", E_USER_WARNING);
            if ($return_fault_reason) {
                $ret[$url] = array(
                    false,
                    -1,
                    "curl_init() failed"
                );
            }
            continue;
        }
        $workers[(int) $neww] = $url;
        curl_setopt_array($neww, array(
            CURLOPT_RETURNTRANSFER => 1,
            CURLOPT_SSL_VERIFYHOST => 0,
            CURLOPT_SSL_VERIFYPEER => 0,
            CURLOPT_TIMEOUT_MS => $timeout_ms
        ));
        curl_multi_add_handle($mh, $neww);
    }
    while (count($workers) > 0) {
        $work();
    }
    curl_multi_close($mh);
    return $ret;
}
This downloads the entire list without ever running more than 50 downloads at the same time. (But even with this approach, all results are stored in RAM, so it can still run out of memory; if you want to store the results in a database instead of RAM, modify the curl_multi_getcontent part to write to the database rather than into the persistent RAM variable.)
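As a sketch of that modification, assuming PDO with the SQLite driver is available (the `downloads` table name and schema below are made up for illustration), the curl_multi_getcontent assignment could be replaced with a prepared insert so each body leaves PHP memory as soon as it arrives:

```php
<?php
// Sketch: stream each finished download into SQLite instead of keeping
// every response body in a PHP array. Table/schema are illustrative only.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE downloads (url TEXT PRIMARY KEY, body BLOB)');
$stmt = $db->prepare('INSERT OR REPLACE INTO downloads (url, body) VALUES (?, ?)');

// Inside the curl_multi_info_read() loop, instead of
//   $ret[$workers[(int) $info['handle']]] = curl_multi_getcontent($info['handle']);
// one would do something like:
$url  = 'http://example.org';   // stands in for $workers[(int) $info['handle']]
$body = 'response body';        // stands in for curl_multi_getcontent($info['handle'])
$stmt->execute([$url, $body]);
```

With INSERT OR REPLACE, re-fetching the same URL simply overwrites the stored body rather than failing on the primary-key constraint.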