I'm using Spyder on a Mac, with Python 2.7 inside Spyder. A few months ago I used the code below to scrape tweets, but I've found it no longer works. First, I can no longer use:
    from urllib.request import urlopen

and now use:

    from urllib2 import urlopen
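Rather than swapping the import by hand each time, a version-agnostic import is a common pattern; a minimal sketch:

```python
try:
    # Python 3
    from urllib.request import urlopen
except ImportError:
    # Python 2
    from urllib2 import urlopen
```

This tries the Python 3 location first and falls back to the Python 2 module, so the same script runs under either interpreter.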
However, I can't run the code below; it fails with the following error:

    with open('%s_tweets.csv' % screen_name, 'w', newline='', encoding='utf-8-sig') as f:
    TypeError: file() takes at most 3 arguments (4 given)
    import sys
    from urllib2 import urlopen
    default_encoding = 'utf-8'
    import tweepy  # https://github.com/tweepy/tweepy
    import csv

    # Twitter API credentials
    consumer_key = ""
    consumer_secret = ""
    access_key = ""
    access_secret = ""

    screenNamesList = []

    def redirect(url):
        page = urlopen(url)
        return page.geturl()

    def get_all_tweets(screen_name):
        # Twitter only allows access to a user's most recent 3240 tweets with this method
        # authorize twitter, initialize tweepy
        auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_key, access_secret)
        api = tweepy.API(auth, wait_on_rate_limit=True)

        # initialize a list to hold all the tweepy Tweets
        alltweets = []

        # make initial request for most recent tweets (200 is the maximum allowed count)
        new_tweets = api.user_timeline(screen_name=screen_name, count=200)

        # save most recent tweets
        alltweets.extend(new_tweets)

        # save the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        # keep grabbing tweets until there are no tweets left to grab
        while len(new_tweets) > 0:
            # print "getting tweets before %s" % (oldest)

            # all subsequent requests use the max_id param to prevent duplicates
            new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

            # save most recent tweets
            alltweets.extend(new_tweets)

            # update the id of the oldest tweet less one
            oldest = alltweets[-1].id - 1
            # print "...%s tweets downloaded so far" % (len(alltweets))

        # transform the tweepy tweets into a 2D array that will populate the csv
        outtweets = [[tweet.id_str, tweet.created_at, tweet.text, tweet.retweet_count,
                      tweet.coordinates, tweet.favorite_count, tweet.author.followers_count,
                      tweet.author.description, tweet.author.location, tweet.author.name]
                     for tweet in alltweets]

        # write the csv
        with open('%s_tweets.csv' % screen_name, 'w', newline='', encoding='utf-8-sig') as f:
            writer = csv.writer(f)
            writer.writerow(["id", "created_at", "text", "retweet_count", "coordinates",
                             "favorite_count", "followers_count", "description", "location", "name"])
            writer.writerows(outtweets)

    if __name__ == '__main__':
        # pass in the username of each account you want to download
        for user in screenNamesList:
            get_all_tweets(user)
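The while loop in the code above implements Twitter's max_id pagination: each request asks for tweets older than the oldest one already collected, until a request comes back empty. The same pattern can be sketched self-contained, with a hypothetical fetch_page function standing in for api.user_timeline so it runs without network access or credentials:

```python
def fetch_page(all_ids, count, max_id=None):
    # Hypothetical stand-in for api.user_timeline: returns up to `count`
    # ids no greater than max_id, newest first.
    eligible = [i for i in all_ids if max_id is None or i <= max_id]
    return eligible[:count]

timeline = list(range(10, 0, -1))  # fake tweet ids 10..1, newest first

# initial request for the most recent page
collected = fetch_page(timeline, 3)
oldest = collected[-1] - 1

# keep paging until a request returns nothing
while True:
    page = fetch_page(timeline, 3, max_id=oldest)
    if not page:
        break
    collected.extend(page)
    oldest = collected[-1] - 1

# collected now holds every id exactly once, newest to oldest
```

Subtracting one from the oldest id before the next request is what prevents the oldest tweet of one page from reappearing as the newest tweet of the next.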
You can use the io module to handle this. - Jean-François Fabre
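The TypeError arises because Python 2's built-in open() (the old file() type) takes at most three positional arguments and accepts neither newline= nor encoding=. io.open() does accept both keywords, and in Python 3 it is the same function as the built-in open(). A minimal sketch of the corrected call, using a hypothetical filename and sample rows (note: under Python 2 the csv module emits byte strings, so writing through io.open's text mode additionally requires encoding the rows yourself or using a package such as unicodecsv; under Python 3 the sketch works as-is):

```python
import csv
import io

# io.open accepts the newline= and encoding= keywords that Python 2's
# built-in open() rejects; in Python 3 it is an alias of the built-in open().
rows = [[u"id", u"text"], [u"1", u"hello"]]  # hypothetical sample rows
with io.open('example_tweets.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerows(rows)
```

The 'utf-8-sig' encoding writes a BOM at the start of the file, which helps Excel recognize the CSV as UTF-8.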