Web scraping tutorial using Python 3?


I am trying to learn Python 3.x so that I can scrape websites. People have suggested I use Beautiful Soup 4 or lxml.html. Can someone point me to a BeautifulSoup tutorial or example that works with Python 3.x?

Thanks for your help.


If you want to do web scraping, use Python 2. Scrapy is by far the best Python web-scraping framework, and there is no equivalent for 3.x. - Blender
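
(For reference, a minimal Scrapy spider is just a class with a name, a list of start URLs, and a parse() callback. The sketch below is only an illustration of that shape; the demo site quotes.toscrape.com and the file name quotes_spider.py are assumptions, not something from the original thread.)

import scrapy


class QuotesSpider(scrapy.Spider):
    # every spider needs a unique name and at least one URL to start crawling from
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # parse() receives each downloaded page; CSS selectors pull the data out
        for quote in response.css("div.quote"):
            # .extract() returns the list of matching text nodes
            yield {"text": quote.css("span.text::text").extract()}

It can be run with scrapy runspider quotes_spider.py -o quotes.json, which writes the scraped items to a JSON file.
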
1 Answer


I actually just finished writing a complete guide to web scraping that includes some example code in Python. I wrote and tested it on Python 2.7, but both of the packages I used (requests and BeautifulSoup) are fully Python 3 compatible according to the Wall of Shame.

Here is some code to get you started with web scraping in Python:

import sys
import requests
from bs4 import BeautifulSoup  # Beautiful Soup 4 import; works on both Python 2 and 3


def scrape_google(keyword):

    # dynamically build the URL that we'll be making a request to
    url = "http://www.google.com/search?q={term}".format(
        term=keyword.strip().replace(" ", "+"),
    )

    # spoof some headers so the request appears to be coming from a browser, not a bot
    headers = {
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)",
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
        "accept-encoding": "gzip,deflate,sdch",
        "accept-language": "en-US,en;q=0.8",
    }

    # make the request to the search url, passing in the spoofed headers.
    r = requests.get(url, headers=headers)  # assign the response to a variable r

    # check the status code of the response to make sure the request went well
    if r.status_code != 200:
        print("request denied")
        return
    else:
        print("scraping " + url)

    # convert the plaintext HTML markup into a DOM-like structure that we can search
    soup = BeautifulSoup(r.text, "html.parser")

    # each result is an <li> element with class="g"; this is our wrapper
    results = soup.find_all("li", "g")

    # iterate over each of the result wrapper elements
    for result in results:

        # the main link is an <h3> element with class="r"
        result_anchor = result.find("h3", "r").find("a")

        # print out each link in the results
        print(result_anchor.contents)


if __name__ == "__main__":

    # you can pass in a keyword to search for when you run the script
    # by default, we'll search for the "web scraping" keyword
    try:
        keyword = sys.argv[1]
    except IndexError:
        keyword = "web scraping"

    scrape_google(keyword)
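
To try the script out, save it (say as scrape_google.py, a name chosen here just for illustration) and pass a search term on the command line:

python scrape_google.py "python 3 tutorial"

If no argument is given, it falls back to searching for "web scraping".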

If you are just looking to learn more about Python 3 and are already familiar with Python 2.x, you may find this article on making the transition from Python 2 to Python 3 helpful.
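
As a rough illustration of the kind of changes that transition involves, here are a couple of differences you are likely to run into (a minimal sketch, not an exhaustive list):

# Python 2: print is a statement
print "scraping " + url

# Python 3: print is a function
print("scraping " + url)

# Python 2: standard-library HTTP requests come from urllib2
import urllib2
html = urllib2.urlopen(url).read()

# Python 3: the same functionality lives in urllib.request
import urllib.request
html = urllib.request.urlopen(url).read()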

