I need to do some short-lived, real-time scraping and return the resulting data from my Django REST controller.
Trying it with Scrapy:
import scrapy
from scrapy.selector import Selector

from .models import Product


class MysiteSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'https://www.something.com/browse?q=dfd',
    ]
    allowed_domains = ['something.com']

    def parse(self, response):
        items_list = Selector(response).xpath('//li[@itemprop="itemListElement"]')
        for value in items_list:
            item = Product()
            item['picture_url'] = value.xpath('img/@src').extract_first()
            # Selectors have no .text(); select the text node and extract it.
            # 'name' matches the field declared on Product ('title' would raise KeyError).
            item['name'] = value.xpath('h2/text()').extract_first()
            item['price'] = value.xpath('p[contains(@class, "ad-price")]/text()').extract_first()
            yield item
Item model:
import scrapy


class Product(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field()
    picture_url = scrapy.Field()
    published_date = scrapy.Field(serializer=str)
According to the Scrapy architecture, items are handed off to the Item Pipeline (https://doc.scrapy.org/en/1.2/topics/item-pipeline.html), which is meant for storing data in a DB, saving it to a file, and so on. However, I ran into a problem: how do I return the list of scraped items through a Django REST APIView?
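For context, a pipeline that merely keeps the scraped items in memory is only a few lines. This is a minimal sketch under my own assumptions: the class name `ItemCollectorPipeline` is illustrative, not a Scrapy API, and it would still need to be registered under `ITEM_PIPELINES` in the project settings.

```python
# Minimal sketch of an item pipeline that collects scraped items in memory
# instead of persisting them. The class name is an illustrative assumption;
# Scrapy only requires the process_item hook.
class ItemCollectorPipeline:
    items = []  # shared across the crawl

    def open_spider(self, spider):
        # Called by Scrapy when the spider opens; start with a clean list.
        ItemCollectorPipeline.items = []

    def process_item(self, item, spider):
        # Called for every yielded item; keep a copy and pass the item along.
        ItemCollectorPipeline.items.append(dict(item))
        return item
```

After a crawl finishes, `ItemCollectorPipeline.items` holds everything the spider yielded, but that still leaves the question of driving the crawl from inside a request/response cycle.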
Expected usage example:
from rest_framework.views import APIView
from rest_framework.response import Response

from .service.mysite_spider import MysiteSpider


class AggregatorView(APIView):
    mysite_spider = MysiteSpider()

    def get(self, request, *args, **kwargs):
        self.mysite_spider.parse()
        return Response('good')