I have large Excel worksheets that I want to read into MySQL using PHPExcel. I am using the recent patch that lets you read worksheets without opening the whole file, so I can read one worksheet at a time.
However, one Excel file is 27 MB in size. I can read the first worksheet successfully because it is small, but the second worksheet is so large that the cron job that starts the process at 22:00 still had not finished by 08:00 the next morning. Is there a way to read a worksheet row by row, something like this:
$inputFileType = 'Excel2007';
$inputFileName = 'big_file.xlsx';
$objReader = PHPExcel_IOFactory::createReader($inputFileType);
$worksheetNames = $objReader->listWorksheetNames($inputFileName);
foreach ($worksheetNames as $sheetName) {
    // BELOW IS "WISH CODE":
    for ($row = 1; $row <= $max_rows; $row += 100) {
        $dataset = $objReader->getWorksheetWithRows($row, $row + 100);
        save_dataset_to_database($dataset);
    }
}
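(For the one-sheet-at-a-time part, the reader can at least be restricted to loading a single named sheet; a minimal sketch using setLoadSheetsOnly() from the PHPExcel reader API, reusing the sheet names from listWorksheetNames() above:)

$objReader = PHPExcel_IOFactory::createReader('Excel2007');
foreach ($objReader->listWorksheetNames($inputFileName) as $sheetName) {
    // Load only the current sheet instead of the whole workbook
    $objReader->setLoadSheetsOnly($sheetName);
    $objPHPExcel = $objReader->load($inputFileName);
    // ...process $objPHPExcel->getSheetByName($sheetName) here...
}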
Addendum
@mark, I used the code you posted to create the following example:
function readRowsFromWorksheet() {
    $file_name = htmlentities($_POST['file_name']);
    $file_type = htmlentities($_POST['file_type']);
    echo 'Read rows from worksheet:<br />';
    debug_log('----------start');
    $objReader = PHPExcel_IOFactory::createReader($file_type);
    $chunkSize = 20;
    $chunkFilter = new ChunkReadFilter();
    $objReader->setReadFilter($chunkFilter);
    for ($startRow = 2; $startRow <= 240; $startRow += $chunkSize) {
        $chunkFilter->setRows($startRow, $chunkSize);
        $objPHPExcel = $objReader->load('data/' . $file_name);
        debug_log('reading chunk starting at row ' . $startRow);
        $sheetData = $objPHPExcel->getActiveSheet()->toArray(null, true, true, true);
        var_dump($sheetData);
        echo '<hr />';
    }
    debug_log('end');
}
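(For reference, the ChunkReadFilter used here follows the standard chunked-read filter from the PHPExcel documentation:)

/** Read filter that only admits the heading row and the current chunk */
class ChunkReadFilter implements PHPExcel_Reader_IReadFilter
{
    private $_startRow = 0;
    private $_endRow   = 0;

    // Set the start row and the number of rows to read for the current chunk
    public function setRows($startRow, $chunkSize) {
        $this->_startRow = $startRow;
        $this->_endRow   = $startRow + $chunkSize;
    }

    // Called by the reader for every cell; return true to keep the cell
    public function readCell($column, $row, $worksheetName = '') {
        return ($row == 1) || ($row >= $this->_startRow && $row < $this->_endRow);
    }
}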
As the log below shows, it runs fine on a small 8 KB Excel file, but when I run it on a 3 MB Excel file it never gets past the first chunk. Is there any way to optimize this code for performance? Otherwise it does not look usable for reading chunks out of large Excel files:
2011-01-12 11:07:15: ----------start
2011-01-12 11:07:15: reading chunk starting at row 2
2011-01-12 11:07:15: reading chunk starting at row 22
2011-01-12 11:07:15: reading chunk starting at row 42
2011-01-12 11:07:15: reading chunk starting at row 62
2011-01-12 11:07:15: reading chunk starting at row 82
2011-01-12 11:07:15: reading chunk starting at row 102
2011-01-12 11:07:15: reading chunk starting at row 122
2011-01-12 11:07:15: reading chunk starting at row 142
2011-01-12 11:07:15: reading chunk starting at row 162
2011-01-12 11:07:15: reading chunk starting at row 182
2011-01-12 11:07:15: reading chunk starting at row 202
2011-01-12 11:07:15: reading chunk starting at row 222
2011-01-12 11:07:15: end
2011-01-12 11:07:52: ----------start
2011-01-12 11:08:01: reading chunk starting at row 2
(...at 11:18, CPU usage at 93% still running...)
Addendum 2
When I comment out the following:
//$sheetData = $objPHPExcel->getActiveSheet()->toArray(null, true, true, true);
//var_dump($sheetData);
then it parses at an acceptable speed (about 2 rows per second). Is there any way to speed up toArray()?
2011-01-12 11:40:51: ----------start
2011-01-12 11:40:59: reading chunk starting at row 2
2011-01-12 11:41:07: reading chunk starting at row 22
2011-01-12 11:41:14: reading chunk starting at row 42
2011-01-12 11:41:22: reading chunk starting at row 62
2011-01-12 11:41:29: reading chunk starting at row 82
2011-01-12 11:41:37: reading chunk starting at row 102
2011-01-12 11:41:45: reading chunk starting at row 122
2011-01-12 11:41:52: reading chunk starting at row 142
2011-01-12 11:42:00: reading chunk starting at row 162
2011-01-12 11:42:07: reading chunk starting at row 182
2011-01-12 11:42:15: reading chunk starting at row 202
2011-01-12 11:42:22: reading chunk starting at row 222
2011-01-12 11:42:22: end
Addendum 3
This, for example, seems to work adequately, at least on the 3 MB file:
for ($startRow = 2; $startRow <= 240; $startRow += $chunkSize) {
    echo 'Loading WorkSheet using configurable filter for headings row 1 and for rows ', $startRow, ' to ', ($startRow + $chunkSize - 1), '<br />';
    $chunkFilter->setRows($startRow, $chunkSize);
    $objPHPExcel = $objReader->load('data/' . $file_name);
    debug_log('reading chunk starting at row ' . $startRow);
    foreach ($objPHPExcel->getActiveSheet()->getRowIterator() as $row) {
        $cellIterator = $row->getCellIterator();
        $cellIterator->setIterateOnlyExistingCells(false);
        echo '<tr>';
        foreach ($cellIterator as $cell) {
            if (!is_null($cell)) {
                //$value = $cell->getCalculatedValue();
                $rawValue = $cell->getValue();
                debug_log($rawValue);
            }
        }
    }
}
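(One further tweak I am considering, based on the memory-saving advice in the PHPExcel documentation: releasing the loaded workbook between chunks so each load() starts from a clean slate:)

for ($startRow = 2; $startRow <= 240; $startRow += $chunkSize) {
    $chunkFilter->setRows($startRow, $chunkSize);
    $objPHPExcel = $objReader->load('data/' . $file_name);
    // ...process the chunk as above...

    // Break the cell/worksheet cyclic references and free the workbook
    // before the next chunk is loaded
    $objPHPExcel->disconnectWorksheets();
    unset($objPHPExcel);
}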