Scraping with Scrapy and Selenium

I have a Scrapy spider that scrapes a site whose content is reloaded via JavaScript on the page. To move to the next page to scrape, I have been using Selenium to click the month links at the top of the page.
The problem is that, although my code steps through each link as expected, the spider only scrapes data for the first month (September) and returns that same data over and over.
How can I fix this?
import time

from scrapy.contrib.spiders.init import InitSpider
from scrapy.selector import HtmlXPathSelector
from selenium import webdriver

# GigsInScotlandMainItem is the project's Item class (its definition is not shown here).

class GigsInScotlandMain(InitSpider):
    name = 'gigsinscotlandmain'
    allowed_domains = ["gigsinscotland.com"]
    start_urls = ["http://www.gigsinscotland.com"]

    def __init__(self):
        InitSpider.__init__(self)
        self.br = webdriver.Firefox()

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        self.br.get(response.url)
        time.sleep(2.5)
        # Get the string for each month on the page.
        months = hxs.select("//ul[@id='gigsMonths']/li/a/text()").extract()

        for month in months:
            link = self.br.find_element_by_link_text(month)
            link.click()
            time.sleep(5)

            # Get all the divs containing info to be scraped.
            listitems = hxs.select("//div[@class='listItem']")
            for listitem in listitems:
                item = GigsInScotlandMainItem()
                item['artist'] = listitem.select("div[contains(@class, 'artistBlock')]/div[@class='artistdiv']/span[@class='artistname']/a/text()").extract()
                #
                # Get other data ...
                #
                yield item
1 Answer

The problem is that you are reusing the HtmlXPathSelector that was defined for the initial response. Redefine it from the Selenium browser's page_source inside the loop:
...
for month in months:
    link = self.br.find_element_by_link_text(month)
    link.click()
    time.sleep(5)

    # Rebuild the selector from the HTML currently rendered in the browser.
    hxs = HtmlXPathSelector(text=self.br.page_source)

    # Get all the divs containing info to be scraped.
    listitems = hxs.select("//div[@class='listItem']")
...
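For completeness, here is a minimal sketch of the whole parse() method with that change applied. It keeps the spider attributes, XPaths, and fixed sleeps from the question (self.br, GigsInScotlandMainItem, and the gigsMonths/listItem selectors are assumed to be defined as above):

def parse(self, response):
    self.br.get(response.url)
    time.sleep(2.5)

    # Selector over the initial response, used only to collect the month link texts.
    hxs = HtmlXPathSelector(response)
    months = hxs.select("//ul[@id='gigsMonths']/li/a/text()").extract()

    for month in months:
        link = self.br.find_element_by_link_text(month)
        link.click()
        time.sleep(5)

        # Rebuild the selector from the browser's current HTML so each month's
        # freshly loaded content is parsed, not the first response again.
        hxs = HtmlXPathSelector(text=self.br.page_source)

        for listitem in hxs.select("//div[@class='listItem']"):
            item = GigsInScotlandMainItem()
            item['artist'] = listitem.select("div[contains(@class, 'artistBlock')]/div[@class='artistdiv']/span[@class='artistname']/a/text()").extract()
            #
            # Get other data ...
            #
            yield item

If the fixed time.sleep(5) turns out to be unreliable, an explicit wait (for example, WebDriverWait until the previous month's elements go stale) would be sturdier, but the sleep mirrors the original code.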
