Scrapy stops crawling after a few pages


I'm learning the basics of Scrapy and web crawling, so any help is much appreciated. Following a tutorial, I built a simple Scrapy spider.

It works, but it doesn't crawl all the pages as expected.

My spider code:

from scrapy.spider       import BaseSpider
from scrapy.selector     import HtmlXPathSelector
from scrapy.http.request import Request
from fraist.items        import FraistItem
import re

class fraistspider(BaseSpider):
    name = "fraistspider"
    allowed_domain = ["99designs.com"]
    start_urls = ["http://99designs.com/designer-blog/"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//div[@class='pagination']/a/@href").extract()

        # We store already-crawled links in this list
        crawledLinks    = []

        # Pattern to check for a proper link
        linkPattern     = re.compile("^(?:ftp|http|https):\/\/(?:[\w\.\-\+]+:{0,1}[\w\.\-\+]*@)?(?:[a-z0-9\-\.]+)(?::[0-9]+)?(?:\/|\/(?:[\w#!:\.\?\+=&%@!\-\/\(\)]+)|\?(?:[\w#!:\.\?\+=&%@!\-\/\(\)]+))?$")

        for link in links:
            # If it is a proper link and is not checked yet, yield it to the Spider
            if linkPattern.match(link) and link not in crawledLinks:
                crawledLinks.append(link)
                yield Request(link, self.parse)

        posts = hxs.select("//article[@class='content-summary']")
        items = []
        for post in posts:
            item = FraistItem()
            item["title"] = post.select("div[@class='summary']/h3[@class='entry-title']/a/text()").extract()
            item["link"] = post.select("div[@class='summary']/h3[@class='entry-title']/a/@href").extract()
            item["content"] = post.select("div[@class='summary']/p/text()").extract()
            items.append(item)
        for item in items:
            yield item

The output ends with:

         'title': [u'Design a poster in the style of Saul Bass']}
2015-05-20 16:22:41+0100 [fraistspider] DEBUG: Scraped from <200 http://nnbdesigner.wpengine.com/designer-blog/>
        {'content': [u'Helping a company come up with a branding strategy can be exciting\xa0and intimidating, all at once. It gives a designer the opportunity to make a great visual impact with a brand, but requires skills in logo, print and digital design. If you\u2019ve been hesitating to join a 99designs Brand Identity Pack contest, here are a... '],
         'link': [u'http://99designs.com/designer-blog/2015/05/07/tips-brand-identity-pack-design-success/'],
         'title': [u'99designs\u2019 tips for a successful Brand Identity Pack design']}
2015-05-20 16:22:41+0100 [fraistspider] DEBUG: Redirecting (301) to <GET http://nnbdesigner.wpengine.com/> from <GET http://99designs.com/designer-blog/page/10/>
2015-05-20 16:22:41+0100 [fraistspider] DEBUG: Redirecting (301) to <GET http://nnbdesigner.wpengine.com/> from <GET http://99designs.com/designer-blog/page/11/>
2015-05-20 16:22:41+0100 [fraistspider] INFO: Closing spider (finished)
2015-05-20 16:22:41+0100 [fraistspider] INFO: Stored csv feed (100 items) in: data.csv
2015-05-20 16:22:41+0100 [fraistspider] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 4425,
         'downloader/request_count': 16,
         'downloader/request_method_count/GET': 16,
         'downloader/response_bytes': 126915,
         'downloader/response_count': 16,
         'downloader/response_status_count/200': 11,
         'downloader/response_status_count/301': 5,
         'dupefilter/filtered': 41,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 5, 20, 15, 22, 41, 738000),
         'item_scraped_count': 100,
         'log_count/DEBUG': 119,
         'log_count/INFO': 8,
         'request_depth_max': 5,
         'response_received_count': 11,
         'scheduler/dequeued': 16,
         'scheduler/dequeued/memory': 16,
         'scheduler/enqueued': 16,
         'scheduler/enqueued/memory': 16,
         'start_time': datetime.datetime(2015, 5, 20, 15, 22, 40, 718000)}
2015-05-20 16:22:41+0100 [fraistspider] INFO: Spider closed (finished)

As you can see, 'item_scraped_count' is 100, even though there are 122 pages in total with 10 posts per page, so it should be far higher.

From the output I can see the 301 redirects, but I don't understand why they would cause a problem. I tried rewriting my spider a different way, but after a few entries it broke again, at around the same point.

Any help is greatly appreciated. Thank you!

1 Answer

It seems you are hitting the default 100-item limit of CONCURRENT_ITEMS (http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-items).
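
If you want to rule that out quickly, you can override the setting in your project's settings.py (a minimal sketch; CONCURRENT_ITEMS is a standard Scrapy setting, and the value below is arbitrary):

# settings.py
# Raise the default cap of 100 concurrently processed items to see
# whether item_scraped_count moves past 100 (500 is an arbitrary value).
CONCURRENT_ITEMS = 500
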
For a case like this I would use a CrawlSpider to crawl several pages, so you need to define a rule that matches the pages on 99designs.com and modify your parse function slightly to handle the items.
Example code copied from the Scrapy documentation:
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(LinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(LinkExtractor(allow=('item\.php', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        item = scrapy.Item()
        item['id'] = response.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = response.xpath('//td[@id="item_name"]/text()').extract()
        item['description'] = response.xpath('//td[@id="item_description"]/text()').extract()
        return item
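
Adapted to this question, a minimal sketch could look like the following. It reuses FraistItem and the XPaths from the question; the pagination regex is an assumption based on the page/10/, page/11/ URLs in the log, and it is untested against the live site:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from fraist.items import FraistItem

class FraistCrawlSpider(CrawlSpider):
    name = "fraistcrawlspider"
    allowed_domains = ["99designs.com"]
    start_urls = ["http://99designs.com/designer-blog/"]

    rules = (
        # Follow only the numbered pagination links and parse each page.
        # The regex is an assumption based on the URLs in the log output.
        Rule(LinkExtractor(allow=(r'designer-blog/page/\d+/', )),
             callback='parse_page', follow=True),
    )

    # The callback must not be named 'parse': CrawlSpider uses parse()
    # internally to drive the rules.
    def parse_page(self, response):
        for post in response.xpath("//article[@class='content-summary']"):
            item = FraistItem()
            item["title"] = post.xpath("div[@class='summary']/h3[@class='entry-title']/a/text()").extract()
            item["link"] = post.xpath("div[@class='summary']/h3[@class='entry-title']/a/@href").extract()
            item["content"] = post.xpath("div[@class='summary']/p/text()").extract()
            yield item

Note that the start URL itself is not matched by the rule; if its posts are needed too, CrawlSpider's parse_start_url can be overridden to call parse_page.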

EDIT: I just found this blog post, which contains a useful example.

Thanks for your reply, Fasouto. It was very helpful. I've managed to rewrite my spider, but unfortunately it also crawls other links (e.g. http://support.99designs.com/access/unauthenticated[...]), so it doesn't respect my rules. Here is my new code: https://jsfiddle.net/umav9axf/ I'm not sure whether I've defined the links variable in the right place. EDIT: In short, it follows every internal link (site-wide) instead of just the links in the page navigation. - Adrian
Hi Adrian. Yes, that's because you haven't defined the rules correctly: in the example I pasted, the rule matches pages containing category.php, but you put allow(""), which basically allows the crawler to visit every page on the site. Put a regular expression there that matches "page/<numbers>/" or something similar. - fasouto
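
For example, a sketch of the rule fasouto describes (the parse_item callback name is a placeholder for whatever callback the rewritten spider actually uses):

# Instead of allow('') -- which matches every URL on the site --
# restrict the extractor to the numbered pagination pages:
Rule(LinkExtractor(allow=(r'designer-blog/page/\d+/', )),
     callback='parse_item', follow=True)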
