Simplest way to get an http.response object in Scrapy


I'm new to Scrapy and am trying to get the content of a web page into a response object (if I'm understanding things correctly).

I'm following http://doc.scrapy.org/en/latest/topics/selectors.html, but that approach only works in the scrapy shell. I'd like to use it directly in Python code.

I wrote this code to crawl http://doc.scrapy.org/en/latest/_static/selectors-sample1.html:

import scrapy
from scrapy.http import HtmlResponse

URL = 'http://doc.scrapy.org/en/latest/_static/selectors-sample1.html'
response = HtmlResponse(url=URL)
print(response.selector.xpath('//title/text()'))

and the output is:
>> []

Why can't I get the correct title value? It seems that HtmlResponse() isn't downloading the data from the web... why not? And how can I fix it?

Thank you very much!

Cap


To make the most out of Scrapy, you should follow the tutorial; response objects are built automatically from request to request. - Rafael Almeida
1 Answer


Your statement

response = HtmlResponse(url=URL)

only builds a "local scope" HtmlResponse object with an empty body. It does not download anything, and in particular it does not fetch the resource at http://doc.scrapy.org/en/latest/_static/selectors-sample1.html.

In Scrapy, you usually do not build HtmlResponse objects yourself: you let the Scrapy framework construct them for you, once it has finished processing a Request instance you gave it, e.g. Request(url='http://doc.scrapy.org/en/latest/_static/selectors-sample1.html').
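
That said, if your goal is to use Scrapy's selectors from a plain Python script, one option (my sketch, not part of the original answer; it assumes the third-party requests library is installed) is to do the download yourself and hand the body to HtmlResponse:

import requests
from scrapy.http import HtmlResponse

URL = 'http://doc.scrapy.org/en/latest/_static/selectors-sample1.html'

r = requests.get(URL)                              # download the page ourselves
response = HtmlResponse(url=URL, body=r.content)   # a non-empty body this time
print(response.xpath('//title/text()').extract())  # ['Example website']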

If you want to experiment with Scrapy, I suggest using the scrapy shell: inside the interactive shell, you can trigger downloads with fetch('http://someurl') (and get a "real" Response object to work with):

$ scrapy shell
2016-06-14 10:59:31 [scrapy] INFO: Scrapy 1.1.0 started (bot: scrapybot)
(...)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at 0x7f1a6591d588>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x7f1a6ce290f0>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>> fetch('http://doc.scrapy.org/en/latest/_static/selectors-sample1.html')
2016-06-14 10:59:51 [scrapy] INFO: Spider opened
2016-06-14 10:59:51 [scrapy] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/_static/selectors-sample1.html> (referer: None)
>>> response.xpath('//title/text()').extract()
['Example website']
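
The equivalent CSS selector works here too (a small aside of mine, not from the original answer; .css() is standard on Scrapy responses):

>>> response.css('title::text').extract()
['Example website']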

Outside the shell, to actually download data, you need to:

  • subclass scrapy.Spider,
  • define the URLs to start downloading from, and
  • write callback methods to process the downloaded data, which is passed to them wrapped in Response objects.

Here is a very simple example (in a file called test.py):

import scrapy


class TestSpider(scrapy.Spider):

    name = 'testspider'

    # start_urls is special and internally it builds Request objects for each of the URLs listed
    start_urls = ['http://doc.scrapy.org/en/latest/_static/selectors-sample1.html']

    def parse(self, response):
        yield {
            'title': response.xpath('//h1/text()').extract_first()
        }

Next, you need to run the spider. Scrapy has a command for running single-file spiders:

$ scrapy runspider test.py 

and you would see something like this in your console:

2016-06-14 10:48:05 [scrapy] INFO: Scrapy 1.1.0 started (bot: scrapybot)
2016-06-14 10:48:05 [scrapy] INFO: Overridden settings: {}
2016-06-14 10:48:06 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats', 'scrapy.extensions.corestats.CoreStats']
2016-06-14 10:48:06 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-14 10:48:06 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-14 10:48:06 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-14 10:48:06 [scrapy] INFO: Spider opened
2016-06-14 10:48:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-14 10:48:06 [scrapy] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/_static/selectors-sample1.html> (referer: None)
2016-06-14 10:48:06 [scrapy] DEBUG: Scraped from <200 http://doc.scrapy.org/en/latest/_static/selectors-sample1.html>
{'title': 'Example website'}
2016-06-14 10:48:06 [scrapy] INFO: Closing spider (finished)
2016-06-14 10:48:06 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 252,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 501,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 14, 8, 48, 6, 564591),
 'item_scraped_count': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 6, 14, 8, 48, 6, 85693)}
2016-06-14 10:48:06 [scrapy] INFO: Spider closed (finished)
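
As a side note (the flag is standard Scrapy, but this example is mine, not the original answer's): scrapy runspider also accepts the usual feed-export option -o, so you can write the scraped items to a file instead of only reading them off the log:

$ scrapy runspider test.py -o items.json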

If you really want to play with Selectors without downloading any web data, assuming you already have the data locally (e.g. copied from view-source: in your browser), you can do that, but you need to supply the body:

>>> response = HtmlResponse(url=URL, body='''
... <!DOCTYPE html>
... <html>
...   <head>
...   </head>
...   <body>
...       <h1>Herman Melville - Moby-Dick</h1>
... 
...       <div>
...         <p>
...           Availing himself of the mild, summer-cool weather that now reigned in these latitudes, ... them a care-killing competency.
...         </p>
...       </div>
...   </body>
... </html>''', encoding='utf8')
>>> response.xpath('//h1')
[<Selector xpath='//h1' data='<h1>Herman Melville - Moby-Dick</h1>'>]
>>> response.xpath('//h1').extract()
['<h1>Herman Melville - Moby-Dick</h1>']
>>> 
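
And if all you need is XPath/CSS extraction over a local HTML string, you can also build a Selector directly from the text and skip the Response object entirely (again my sketch, not from the original answer; Selector(text=...) is part of Scrapy's public selector API):

>>> from scrapy.selector import Selector
>>> sel = Selector(text='<html><body><h1>Herman Melville - Moby-Dick</h1></body></html>')
>>> sel.xpath('//h1/text()').extract_first()
'Herman Melville - Moby-Dick'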
