I am using Scrapy to crawl my website, http://www.cseblog.com.
Here is my spider code:
from scrapy.spider import BaseSpider
from bs4 import BeautifulSoup  # this is BeautifulSoup4
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from blogscraper.items import BlogArticle  # item class for the scraped data; probably insignificant here

class BlogArticleSpider(BaseSpider):
    name = "blogscraper"
    allowed_domains = ["cseblog.com"]
    start_urls = [
        "http://www.cseblog.com/",
    ]
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'\d+/\d+/.*', ), deny=())),
    )

    def parse(self, response):
        site = BeautifulSoup(response.body_as_unicode())
        items = []
        item = BlogArticle()
        item['title'] = site.find("h3", {"class": "post-title"}).text.strip()
        item['link'] = site.find("h3", {"class": "post-title"}).a.attrs['href']
        item['text'] = site.find("div", {"class": "post-body"})
        items.append(item)
        return items
Where should I specify that the spider needs to recursively crawl links of the following two types:
- http://www.cseblog.com/{d+}/{d+}/{*}.html
- http://www.cseblog.com/search/{*}

but only save data from links of the first type:

http://www.cseblog.com/{d+}/{d+}/{*}.html
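For reference, here is a minimal sketch of what I understand this could look like, assuming the same old scrapy.contrib API as the imports above. As far as I know, rules are only honored by CrawlSpider (BaseSpider ignores them), and a CrawlSpider must not override parse(), so the callback needs a different name (parse_article is my own choice, not an existing method). One rule follows /search/ pages without a callback, the other parses article pages and keeps following links:

from bs4 import BeautifulSoup
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from blogscraper.items import BlogArticle

class BlogArticleSpider(CrawlSpider):
    name = "blogscraper"
    allowed_domains = ["cseblog.com"]
    start_urls = ["http://www.cseblog.com/"]

    rules = (
        # Follow search/archive pages to discover more links, but extract no data from them.
        Rule(SgmlLinkExtractor(allow=(r'/search/', )), follow=True),
        # Article pages: extract data via the callback and keep following links from them.
        Rule(SgmlLinkExtractor(allow=(r'/\d+/\d+/.*\.html', )),
             callback='parse_article', follow=True),
    )

    def parse_article(self, response):
        soup = BeautifulSoup(response.body_as_unicode())
        item = BlogArticle()
        title = soup.find("h3", {"class": "post-title"})
        item['title'] = title.text.strip()
        item['link'] = title.a.attrs['href']
        item['text'] = soup.find("div", {"class": "post-body"})
        return item

If that understanding is right, items would only ever be produced by the article rule's callback; the /search/ rule merely feeds the crawler more pages to visit.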