Disabling SSL certificate verification in Scrapy

I'm currently running into an issue while using Scrapy. Whenever I use Scrapy to crawl an HTTPS site whose certificate CN matches the server's domain name, Scrapy works fine! However, when I try to crawl a site where the certificate's CN does not match the server's domain name, I get the following:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/protocols/tls.py", line 415, in dataReceived
    self._write(bytes)
  File "/usr/local/lib/python2.7/dist-packages/twisted/protocols/tls.py", line 554, in _write
    sent = self._tlsConnection.send(toSend)
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 1270, in send
    result = _lib.SSL_write(self._ssl, buf, len(buf))
  File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 926, in wrapper
    callback(Connection._reverse_mapping[ssl], where, return_code)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_sslverify.py", line 1055, in infoCallback
    return wrapped(connection, where, ret)
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_sslverify.py", line 1154, in _identityVerifyingInfoCallback
    verifyHostname(connection, self._hostnameASCII)
  File "/usr/local/lib/python2.7/dist-packages/service_identity/pyopenssl.py", line 30, in verify_hostname
    obligatory_ids=[DNS_ID(hostname)],
  File "/usr/local/lib/python2.7/dist-packages/service_identity/_common.py", line 235, in __init__
    raise ValueError("Invalid DNS-ID.")
exceptions.ValueError: Invalid DNS-ID.

I've dug through as much documentation as I could find, and as far as I can tell Scrapy has no way to disable SSL certificate verification. Even the documentation for the Scrapy Request object (which is where I'd expect this functionality to live) makes no mention of it:

http://doc.scrapy.org/en/1.0/topics/request-response.html#scrapy.http.Request
https://github.com/scrapy/scrapy/blob/master/scrapy/http/request/__init__.py

There is also no Scrapy setting that addresses this:

http://doc.scrapy.org/en/1.0/topics/settings.html

Short of forking the Scrapy source and modifying it as needed, does anyone have any ideas on how to disable SSL certificate verification?

Thanks!


From the docs it looks like you can modify the DOWNLOAD_HANDLERS and DOWNLOAD_HANDLERS_BASE settings to change how Scrapy handles https. From there, you'd probably need to create your own modified HttpDownloadHandler that can work around the error you're seeing. - Kyle Pittman
Smacks head on desk. This looks very promising. Could you write this up as an answer so I can accept it, and then add the code I used for future reference? - MoarCodePlz
1 Answer

Based on the documentation link you provided (settings), it looks like you should be able to modify the DOWNLOAD_HANDLERS setting. From the docs:

"""
    A dict containing the request download handlers enabled by default in
    Scrapy. You should never modify this setting in your project, modify
    DOWNLOAD_HANDLERS instead.
"""

DOWNLOAD_HANDLERS_BASE = {
    'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
    'http': 'scrapy.core.downloader.handlers.http.HttpDownloadHandler',
    'https': 'scrapy.core.downloader.handlers.http.HttpDownloadHandler',
    's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
}

Then in your settings, something like:
""" 
    Configure your download handlers with something custom to override
    the default https handler
"""
DOWNLOAD_HANDLERS = {
    'https': 'my.custom.downloader.handler.https.HttpsDownloaderIgnoreCNError',
}

So, by defining a custom handler for the https protocol, you should be able to handle the error you're running into and allow Scrapy to continue about its business.


Awesome, that looks like exactly what's causing my problem. I'll try modifying the code to see if I can get it working and will post my solution here. Thanks! - MoarCodePlz
@MoarCodePlz did you ever find a solution? Any interest in posting some links? - Dawson
