Scrapy: Connection refused


While trying to test my Scrapy installation, I got this error:

$ scrapy shell http://www.google.es
2011-02-16 10:54:46+0100 [scrapy] INFO: Scrapy 0.12.0.2536 started (bot: scrapybot)
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled extensions: TelnetConsole, SpiderContext, WebService, CoreStats, MemoryUsage, CloseSpider
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled scheduler middlewares: DuplicatesFilterMiddleware
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, HttpCompressionMiddleware, DownloaderStats
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Enabled item pipelines: 
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2011-02-16 10:54:46+0100 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2011-02-16 10:54:46+0100 [default] INFO: Spider opened
2011-02-16 10:54:47+0100 [default] DEBUG: Retrying <GET http://www.google.es> (failed 1 times): Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] DEBUG: Retrying <GET http://www.google.es> (failed 2 times): Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] DEBUG: Discarding <GET http://www.google.es> (failed 3 times): Connection was refused by other side: 111: Connection refused.
2011-02-16 10:54:47+0100 [default] ERROR: Error downloading <http://www.google.es>: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionRefusedError'>: Connection was refused by other side: 111: Connection refused.
    ]
2011-02-16 10:54:47+0100 [scrapy] ERROR: Shell error
    Traceback (most recent call last):
    Failure: scrapy.exceptions.IgnoreRequest: Connection was refused by other side: 111: Connection refused.

2011-02-16 10:54:47+0100 [default] INFO: Closing spider (shutdown)
2011-02-16 10:54:47+0100 [default] INFO: Spider closed (shutdown)

Version information:

  • Scrapy 0.12.0.2536
  • Python 2.6.6
  • OS: Ubuntu 10.10

EDIT: I can reach the site with a browser, with wget, and with telnet google.es 80, and the problem happens with every website.


Is there any solution for this? I also ran into this problem when trying to use scrapy with a privoxy proxy... - Vajk Hermecz
3 Answers


Tip 1: Scrapy sends its requests with a user agent that contains the word "bot", and some sites block requests based on the user agent.

Try overriding USER_AGENT in settings.py (a combined sketch follows Tip 2 below).

For example: USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'

Tip 2: Try adding a delay between requests so the traffic looks more like a human browsing:

DOWNLOAD_DELAY = 0.25 
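
Taken together, Tips 1 and 2 are just two lines in the project's settings.py. This is only a sketch with example values:

# settings.py -- sketch combining Tips 1 and 2 (example values only)
# Any common browser UA string will do; this one is just the example from Tip 1.
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'
# Wait between consecutive requests (seconds); 0.25 is an illustrative value.
DOWNLOAD_DELAY = 0.25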

Tip 3: If none of that works, install Wireshark and compare the request headers and data that Scrapy sends with what your browser sends.
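
If the capture shows your browser sending headers that Scrapy omits, you can mirror them with the DEFAULT_REQUEST_HEADERS setting. A hedged sketch; the values below are only examples, copy whatever your own browser actually sends:

# settings.py -- mirror browser-like headers seen in the Wireshark capture.
# Example values only; replace them with the headers your browser really sends.
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'es',
}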



I ran into this error too. It turned out that the port I was trying to reach was blocked by a firewall: my server blocks ports by default unless they are whitelisted.
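
A quick, Scrapy-independent way to check for this is a raw TCP connection test run on the same machine as the spider. A minimal sketch; the host and port are placeholders:

# port_check.py -- try a plain TCP connection to see whether the port is
# reachable from this machine (i.e. not blocked by a local/provider firewall).
import socket

host, port = 'www.google.es', 80   # placeholders: use the host/port your spider needs
try:
    sock = socket.create_connection((host, port), timeout=5)
    print('TCP connection to %s:%d succeeded' % (host, port))
    sock.close()
except socket.error as exc:
    print('TCP connection to %s:%d failed: %s' % (host, port, exc))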


How did you find out that your server was blocking connections on non-whitelisted ports? - Han
