Let me preface this by saying that I'm not a novice programmer, but I'm very new to Python. I've written a program using urllib2 that requests a web page and saves it to a file. The page is about 300KB, which doesn't strike me as particularly large, but it seems to be enough to give me trouble, so that's why I'm calling it "large". I use a simple call to copy the object returned by urlopen directly into the file:

    file.write(webpage.read())
But it sits for minutes trying to write the contents to the file, and eventually I get the following error:

    Traceback (most recent call last):
      File "program.py", line 51, in <module>
        main()
      File "program.py", line 43, in main
        f.write(webpage.read())
      File "/usr/lib/python2.7/socket.py", line 351, in read
        data = self._sock.recv(rbufsize)
      File "/usr/lib/python2.7/httplib.py", line 541, in read
        return self._read_chunked(amt)
      File "/usr/lib/python2.7/httplib.py", line 592, in _read_chunked
        value.append(self._safe_read(amt))
      File "/usr/lib/python2.7/httplib.py", line 649, in _safe_read
        raise IncompleteRead(''.join(s), amt)
    httplib.IncompleteRead: IncompleteRead(6384 bytes read, 1808 more expected)
I don't know why this is giving the program so much trouble.
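For what it's worth, one thing I've tried sketching out is copying the response in fixed-size chunks instead of buffering the whole body with a single read(). This is only a sketch (here io.BytesIO stands in for the object urlopen returns; the names save_response and chunk_size are mine), and it may not fix the underlying IncompleteRead, but it at least avoids one giant read call:

```python
import io

def save_response(response, out_file, chunk_size=8192):
    """Copy a file-like HTTP response to out_file in fixed-size chunks,
    instead of buffering the whole body with a single read()."""
    while True:
        chunk = response.read(chunk_size)
        if not chunk:
            break
        out_file.write(chunk)

# Stand-in objects for demonstration; a real urlopen response is read
# the same way, since it exposes the same read(size) interface.
fake_response = io.BytesIO(b"x" * 300 * 1024)  # ~300KB body
out = io.BytesIO()
save_response(fake_response, out)
```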
EDIT:
Here is how I fetch the page:
    import cookielib
    import urllib
    import urllib2

    jar = cookielib.CookieJar()
    cookie_processor = urllib2.HTTPCookieProcessor(jar)
    opener = urllib2.build_opener(cookie_processor)
    urllib2.install_opener(opener)

    requ_login = urllib2.Request(LOGIN_PAGE,
            data=urllib.urlencode({'destination': "", 'username': USERNAME, 'password': PASSWORD}))
    requ_page = urllib2.Request(WEBPAGE)

    try:
        # log in (the cookie jar keeps the session cookie)
        urllib2.urlopen(requ_login)
        # get the desired page
        portfolio = urllib2.urlopen(requ_page)
    except urllib2.HTTPError as e:
        print e.code, ": ", e.reason
    except urllib2.URLError as e:
        # URLError has no .code attribute, only .reason
        print e.reason
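One pattern I've also been experimenting with is tolerating the truncated chunked response: httplib.IncompleteRead carries the bytes that did arrive in its .partial attribute, so the partial body can be kept instead of crashing. A minimal sketch (the read_tolerant helper and BrokenResponse stand-in are mine; whether silently accepting a truncated body is acceptable depends on the page):

```python
try:
    import httplib  # Python 2 module name
except ImportError:
    import http.client as httplib  # same class lives here on Python 3

def read_tolerant(response):
    """Read a response body, keeping whatever arrived if the server
    breaks off mid-chunk and httplib raises IncompleteRead."""
    try:
        return response.read()
    except httplib.IncompleteRead as e:
        # e.partial holds the bytes received before the stream broke off
        return e.partial

# Stand-in response whose read() fails partway, for demonstration only.
class BrokenResponse(object):
    def read(self):
        raise httplib.IncompleteRead(b"partial body", expected=1808)

body = read_tolerant(BrokenResponse())
```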