I want to extract text from a PDF file hosted on a website. The site contains a link to the PDF document, but when I click the link it automatically downloads the file. Is it possible to extract the text from it without downloading the file?
import fitz  # PyMuPDF, used for text extraction
from bs4 import BeautifulSoup
import requests
from io import StringIO
url = "https://www.blv.admin.ch/blv/de/home/lebensmittel-und-ernaehrung/publikationen-und-forschung/statistik-und-berichte-lebensmittelsicherheit.html"
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'html.parser')
all_news = soup.select("div.mod.mod-download a")[0]
pdf = "https://www.blv.admin.ch"+all_news["href"]
#https://www.blv.admin.ch/dam/blv/de/dokumente/lebensmittel-und-ernaehrung/publikationen-forschung/jahresbericht-2017-2019-oew-rr-rasff.pdf.download.pdf/Jahresbericht_2017-2019_DE.pdf
Here is my code for extracting text from the PDF. It works fine once the file has been downloaded:
my_pdf_doc = fitz.open(pdf)
text = ""
for page in my_pdf_doc:
    text += page.get_text()
print(text)
The same question applies to links that do not automatically download the PDF file, for example this one:
"https://amsoldingen.ch/images/files/Bekanntgabe-Stimmausschuss-13.12.2020.pdf"
How can I extract the text from that file?
I have also tried this approach:
pdf_content = requests.get(pdf)
print(type(pdf_content.content))
file = StringIO()
print(file.write(pdf_content.content.decode("utf-32")))
But I get this error:
Traceback (most recent call last):
File "/Users/aleksandardevedzic/Desktop/pdf extraction scrapping.py", line 25, in <module>
print(file.write(pdf_content.content.decode("utf-32")))
UnicodeDecodeError: 'utf-32-le' codec can't decode bytes in position 0-3: code point not in range(0x110000)
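My current understanding of the error (which may be incomplete) is that a PDF is a binary format, not UTF-32 text, so `.decode("utf-32")` fails on the very first bytes; and `StringIO` can only hold text, whereas `BytesIO` holds raw bytes. A small sketch of the difference, using a hypothetical byte string standing in for real PDF content:

```python
from io import BytesIO, StringIO

# Stand-in for downloaded content: every PDF starts with the magic "%PDF".
pdf_bytes = b"%PDF-1.4\n..."

# StringIO stores text, so the bytes must be decoded first -- but a PDF
# is binary data, not encoded text, so the decode raises an error.
try:
    StringIO().write(pdf_bytes.decode("utf-32"))
except UnicodeDecodeError as e:
    print("decode failed:", e.reason)

# BytesIO stores raw bytes, which is what a PDF actually is.
buffer = BytesIO(pdf_bytes)
print(buffer.read(5))  # b'%PDF-'
```

If that reading is right, replacing `StringIO` with `BytesIO` and dropping the `.decode(...)` call should avoid the traceback, though it still leaves the question of extracting the text itself.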