How do I fix "TypeError: Cannot mix str and non-str arguments"?

7
I am writing some web-scraping code and ran into the error above. Here is my code.
# -*- coding: utf-8 -*-
import scrapy
from myproject.items import Headline


class NewsSpider(scrapy.Spider):
    name = 'IC'
    allowed_domains = ['kosoku.jp']
    start_urls = ['http://kosoku.jp/ic.php']

    def parse(self, response):
        """
        extract target urls and combine them with the main domain
        """
        for url in response.css('table a::attr("href")'):
            yield(scrapy.Request(response.urljoin(url), self.parse_topics))

    def parse_topics(self, response):
        """
        pick up necessary information
        """
        item=Headline()
        item["name"]=response.css("h2#page-name ::text").re(r'.*(インターチェンジ)')
        item["road"]=response.css("div.ic-basic-info-left div:last-of-type ::text").re(r'.*道$')
        yield item

When I run these selectors individually in the Scrapy shell I get the correct responses, but once they are part of the program and it runs, it no longer works.

    2017-11-27 18:26:17 [scrapy.core.scraper] ERROR: Spider error processing <GET http://kosoku.jp/ic.php> (referer: None)
Traceback (most recent call last):
  File "/Users/sonogi/envs/scrapy/lib/python3.5/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/Users/sonogi/envs/scrapy/lib/python3.5/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/Users/sonogi/envs/scrapy/lib/python3.5/site-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/Users/sonogi/envs/scrapy/lib/python3.5/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Users/sonogi/envs/scrapy/lib/python3.5/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Users/sonogi/scraping/myproject/myproject/spiders/IC.py", line 16, in parse
    yield(scrapy.Request(response.urljoin(url), self.parse_topics))
  File "/Users/sonogi/envs/scrapy/lib/python3.5/site-packages/scrapy/http/response/text.py", line 82, in urljoin
    return urljoin(get_base_url(self), url)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/parse.py", line 424, in urljoin
    base, url, _coerce_result = _coerce_args(base, url)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/parse.py", line 120, in _coerce_args
    raise TypeError("Cannot mix str and non-str arguments")
TypeError: Cannot mix str and non-str arguments
2017-11-27 18:26:17 [scrapy.core.engine] INFO: Closing spider (finished)

I am quite confused, and I would really appreciate anyone's help!

2 Answers

6
According to the Scrapy documentation, the .css(selector) method you are using returns a SelectorList instance. If you want the actual (unicode) string version of each URL, call the extract() method:
def parse(self, response):
    for url in response.css('table a::attr("href")').extract():
        yield(scrapy.Request(response.urljoin(url), self.parse_topics))
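This also explains the traceback: urllib.parse.urljoin (which response.urljoin delegates to) refuses to combine a str base URL with a non-str argument such as a Selector. A minimal standalone illustration, with no Scrapy needed (the FakeSelector class below is a hypothetical stand-in for a Scrapy Selector):

```python
from urllib.parse import urljoin

base = "http://kosoku.jp/ic.php"

# A plain string joins fine.
print(urljoin(base, "ic/123.html"))  # http://kosoku.jp/ic/123.html


class FakeSelector:
    """Hypothetical stand-in for a non-str object like scrapy.Selector."""


# Passing any non-str object reproduces the reported error.
try:
    urljoin(base, FakeSelector())
except TypeError as e:
    print(e)  # Cannot mix str and non-str arguments
```

Calling .extract() first means the loop yields plain strings, so urljoin receives str arguments on both sides and the error disappears.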

Thank you so much for your help! - Sonogi Yang
I was missing the for ... in part... Thank you very much, you saved me! - Soufiane Sabiri

1
This error is caused by line 15 of your code. Since response.css('table a::attr("href")') returns a list-type object, you need to convert url from a list to a str before you can pass it on to the other function. Also, the attr syntax may cause an error, because the correct attr selector has no quotes, so you should use a::attr(href) rather than a::attr("href"). After fixing both of these issues, the code looks like this:

def parse(self, response):
    """
    extract target urls and combine them with the main domain
    """
    url = response.css('table a::attr(href)')
    url_str = ''.join(map(str, url))  # converts the list to a str
    yield response.follow(url_str, self.parse_topics)
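A caveat on the join approach (an editorial note, not part of the original answer): str() on a Scrapy Selector returns its repr rather than the href value, and joining multiple matches collapses them into one concatenated string instead of several URLs. A sketch with plain strings standing in for extracted href values (the values are hypothetical):

```python
hrefs = ["ic/1.html", "ic/2.html"]  # hypothetical extracted href values

# Joining collapses every match into a single malformed "URL".
joined = "".join(hrefs)
print(joined)  # ic/1.htmlic/2.html

# Iterating yields one URL per link, as in the extract()-based answer above.
for href in hrefs:
    print(href)
```

So the join only behaves sensibly when the selector matches exactly one element; for multiple links, iterating over the extracted strings is the reliable path.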

Content provided by Stack Overflow; the original English post is available via the source link.