Scrapy login to vBulletin guide needed

I have read a lot of posts on this topic (including the Scrapy documentation), but for some reason I cannot log in to a vBulletin site. Let me be clear: I am not a developer and my programming/scraping knowledge is very basic, so if anyone decides to help, please be as specific as possible so I can follow along.
Now let me explain the details:
I am trying to log in to our company forum, scrape information from it and organize it into an Excel spreadsheet. The login page is https://forums.chaosgroup.com/auth/login-form. Besides the username (scrapy) and password (12345) fields, the page source contains several hidden values/fields:
<input type="hidden" name="url" value="aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v" />
<input type="hidden" id="vb_loginmd5" name="vb_login_md5password" value="">
<input type="hidden" id="vb_loginmd5_utf8" name="vb_login_md5password_utf" value="">

When I submit the form from the website, Chrome's inspector shows the following POST request data:
url:aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v
username:scrapy
password:
vb_login_md5password:827ccb0eea8a706c4c34a16891f84e7b
vb_login_md5password_utf:827ccb0eea8a706c4c34a16891f84e7b

For the most part this information is static. Occasionally I have seen the hidden url value change its last character, but overall everything stays the same.
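Incidentally, both hidden values can be reproduced locally, which explains why they hardly ever change: the url field is just the Base64 encoding of the forum address, and the two vb_login_md5password fields match the MD5 hash of the plain-text password (12345). A quick sanity check in Python (a minimal sketch using the values shown above):

import base64
import hashlib

# the hidden "url" field is the Base64-encoded forum address
print(base64.b64decode('aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v'))
# b'https://forums.chaosgroup.com/'

# vb_login_md5password / vb_login_md5password_utf are the MD5 of the password
print(hashlib.md5('12345'.encode()).hexdigest())
# 827ccb0eea8a706c4c34a16891f84e7b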
Now I am trying to submit that data from a Scrapy spider (code below) to log in, but the spider ends up back on the login page instead of opening the actual forum.
# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser
from scrapy.shell import inspect_response


class ForumsSpider(scrapy.Spider):
    name = 'forums'
    start_urls = ['https://forums.chaosgroup.com/auth/login-form/']


    def parse(self, response):
        # Fill in and submit the login <form> on the page returned for
        # start_urls, using the same values Chrome shows in the POST request
        return FormRequest.from_response(response,
                                         formdata={'url':'aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v',
                                                   'username':'scrapy',
                                                   'password':'',
                                                   'vb_login_md5password':'827ccb0eea8a706c4c34a16891f84e7b',
                                                   'vb_login_md5password_utf':'827ccb0eea8a706c4c34a16891f84e7b'},
                                         callback=self.scrape_home_page)

    def scrape_home_page(self, response):
        # Open whatever the login POST returned in a browser window,
        # then print/yield the first <h1> of that page
        open_in_browser(response)
        a = response.css('h1::text').extract_first()
        print(a)
        yield a

The log output I get from Scrapy is here: https://pastebin.com/XtPHnBcF (for easier reading).
D:\Scrapy\forum>scrapy crawl forums
2018-02-24 11:42:10 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: forum)
2018-02-24 11:42:10 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.19.0, Twisted 17.9.0, Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g  2 Nov 2017), cryptography 2.1.4, Platform Windows-8.1-6.3.9600-SP0
2018-02-24 11:42:10 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'forum', 'COOKIES_DEBUG': True, 'DOWNLOAD_DELAY': 3, 'NEWSPIDER_MODULE': 'forum.spiders', 'SPIDER_MODULES': ['forum.spiders']}
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-02-24 11:42:10 [scrapy.core.engine] INFO: Spider opened
2018-02-24 11:42:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-02-24 11:42:10 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-02-24 11:42:11 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://forums.chaosgroup.com/auth/login-form/>
Set-Cookie: bbsessionhash=97ed47f40f0376dd5c33276eefe2cb53; path=/; secure; HttpOnly
Set-Cookie: bblastvisit=1519465318; path=/; secure; HttpOnly
Set-Cookie: bblastactivity=1519465318; path=/; secure; HttpOnly
2018-02-24 11:42:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://forums.chaosgroup.com/auth/login-form/> (referer: None)
2018-02-24 11:42:11 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <POST https://forums.chaosgroup.com/auth/login>
Cookie: bbsessionhash=97ed47f40f0376dd5c33276eefe2cb53; bblastvisit=1519465318; bblastactivity=1519465318
2018-02-24 11:42:13 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://forums.chaosgroup.com/auth/login>
Set-Cookie: bblastactivity=1519465321; path=/; secure; HttpOnly
Set-Cookie: bbsessionhash=58e04286cf781704ef718c38d4dbb0a2; path=/; secure; HttpOnly
2018-02-24 11:42:13 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://forums.chaosgroup.com/auth/login> (referer: https://forums.chaosgroup.com/auth/login-form/)
None
2018-02-24 11:42:13 [scrapy.core.engine] INFO: Closing spider (finished)
2018-02-24 11:42:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 862,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 1,
 'downloader/request_method_count/POST': 1,
 'downloader/response_bytes': 3538,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 2, 24, 9, 42, 13, 954670),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2018, 2, 24, 9, 42, 10, 928535)}
2018-02-24 11:42:13 [scrapy.core.engine] INFO: Spider closed (finished)

I have been trying to figure out what I am doing wrong, comparing my code with other similar examples and trying (successfully) to log in to other websites, but I cannot get it to work on our vBulletin site.

What am I doing wrong, what am I missing? If anyone can point me in the right direction I will be very grateful and will try to return the favor somehow.

Thanks in advance, everyone.


hi there - dear Svellozar, have a nice day: thank you very much for opening this topic; it is awesome: many thanks dear Svellozar - for sharing your insight into these procedures. Have a nice day. - zero
1 Answer


Your login data is getting posted to https://forums.chaosgroup.com/auth/login.

If you look at the source of that page (use response.text in scrape_home_page()), you will see something like this:

<div class="redirectMessage-wrapper">
        <div id="redirectMessage">Logging in...</div>
</div>


<script type="text/javascript">
(function()
{
        var url = "https://forums.chaosgroup.com" || "/";

        //remove hash from the url of the top most window (if any)
        var a = document.createElement('a');
        a.setAttribute('href', url);
        if (a.hash) {
                url = url.replace(a.hash, '');
        }
        else if (url.lastIndexOf('#') != -1) { //a.hash with just # returns empty
                url = url.replace('#', '');
        }



        window.open(url, '_top');
})();
</script>

This indicates that the login succeeded and that you are being redirected to the home page via JavaScript.
So you are already logged in; you just need to go to the home page and continue scraping from there.
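Scrapy does not execute JavaScript, so that redirect never happens on its own and you have to request the home page yourself. For example, a minimal sketch of how scrape_home_page() in your spider could follow up (the parse_forum callback name is just an illustration):

    def scrape_home_page(self, response):
        # The login endpoint only returns the small "Logging in..." page;
        # the real content lives on the forum home page, so request it
        # explicitly. Scrapy keeps the session cookies, so this request
        # goes out already authenticated.
        yield scrapy.Request('https://forums.chaosgroup.com/',
                             callback=self.parse_forum)

    def parse_forum(self, response):
        # Now we are on the actual forum home page and can start scraping
        yield {'title': response.css('h1::text').extract_first()}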

It never occurred to me that the login itself was working fine and that the problem was with the page that comes after it. Now I need to read up on how to handle that redirect properly with Scrapy and see whether I can build the spider from there. Thanks a lot, I will work in this direction now and update the thread afterwards. - Svetlozar Draganov
I finally managed to open the main forum page. Thank you very much; I have been working on this for a month and I could not have done it without your valuable input. - Svetlozar Draganov
hi there all - dear Stranac and Svellozar, have a nice day: thank you very much for the information - in other words, can the example above log in successfully to a "generic" vBulletin? many thanks dear Stranac and Svellozar - for sharing your insight into these procedures. Have a nice day. - zero
