Scrapy crawler won't follow links

I am writing a scrapy crawler to grab articles off today's New York Times homepage, but for some reason it won't follow any links. When I instantiate the link extractor in scrapy shell http://www.nytimes.com, it successfully extracts a list of article urls with le.extract_links(response), but I can't get my crawl command (scrapy crawl nyt -o out.json) to scrape anything but the homepage. I'm at my wit's end. Is it because the homepage does not yield an article from the parse function? Any help is much appreciated.
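The shell session that does work looks roughly like this (reconstructed from memory, not a verbatim transcript):

$ scrapy shell http://www.nytimes.com
>>> from datetime import date
>>> from scrapy.contrib.linkextractors import LinkExtractor
>>> today = date.today().strftime('%Y/%m/%d')
>>> le = LinkExtractor(allow=(r'/%s/[a-z]+/.*\.html' % today, ))
>>> le.extract_links(response)   # returns a non-empty list of Link objects
[Link(url='http://www.nytimes.com/...', ...), ...]

And here is my spider: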
from datetime import date                                                       

import scrapy                                                                   
from scrapy.contrib.spiders import Rule                                         
from scrapy.contrib.linkextractors import LinkExtractor                         


from ..items import NewsArticle                                                 

with open('urls/debug/nyt.txt') as debug_urls:                                  
    debug_urls = debug_urls.readlines()                                         

with open('urls/release/nyt.txt') as release_urls:                              
    release_urls = release_urls.readlines() # ["http://www.nytimes.com"]                                 

today = date.today().strftime('%Y/%m/%d')                                       
print today                                                                     


class NytSpider(scrapy.Spider):                                                 
    name = "nyt"                                                                
    allowed_domains = ["nytimes.com"]                                           
    start_urls = release_urls                                                      
    rules = (                                                                      
            Rule(LinkExtractor(allow=(r'/%s/[a-z]+/.*\.html' % today, )),          
                 callback='parse', follow=True),                                   
    )                                                                              

    def parse(self, response):                                                     
        article = NewsArticle()                                                                         
        for story in response.xpath('//article[@id="story"]'):                     
            article['url'] = response.url                                          
            article['title'] = story.xpath(                                        
                    '//h1[@id="story-heading"]/text()').extract()                  
            article['author'] = story.xpath(                                       
                    '//span[@class="byline-author"]/@data-byline-name'             
            ).extract()                                                         
            article['published'] = story.xpath(                                 
                    '//time[@class="dateline"]/@datetime').extract()            
            article['content'] = story.xpath(                                   
                    '//div[@id="story-body"]/p//text()').extract()              
            yield article  
1 Answer

I have figured out the solution to my problem. I was doing two things wrong:
  1. I needed to subclass CrawlSpider rather than Spider if I wanted it to automatically crawl sublinks.
  2. When using CrawlSpider, I needed to use a named callback function rather than overriding parse. As per the docs, overriding parse breaks CrawlSpider functionality. The fixed spider is sketched below.
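Here is roughly what the corrected spider looks like (same item class, rule and XPaths as in the question; parse_article is just the callback name I picked, and start_urls is inlined for brevity):

from datetime import date

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

from ..items import NewsArticle

today = date.today().strftime('%Y/%m/%d')


class NytSpider(CrawlSpider):    # CrawlSpider, not Spider
    name = "nyt"
    allowed_domains = ["nytimes.com"]
    start_urls = ["http://www.nytimes.com"]
    rules = (
        Rule(LinkExtractor(allow=(r'/%s/[a-z]+/.*\.html' % today, )),
             callback='parse_article', follow=True),
    )

    # A named callback: CrawlSpider's own parse() is left alone, so it
    # can keep applying the rules to every downloaded page.
    def parse_article(self, response):
        for story in response.xpath('//article[@id="story"]'):
            article = NewsArticle()
            article['url'] = response.url
            article['title'] = story.xpath(
                '//h1[@id="story-heading"]/text()').extract()
            article['author'] = story.xpath(
                '//span[@class="byline-author"]/@data-byline-name'
            ).extract()
            article['published'] = story.xpath(
                '//time[@class="dateline"]/@datetime').extract()
            article['content'] = story.xpath(
                '//div[@id="story-body"]/p//text()').extract()
            yield article

Because the callback is not named parse, CrawlSpider's built-in parse keeps running the rules against every response, so the article links extracted from the homepage actually get scheduled and followed.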
