I am collecting tweets with the tweepy API and want to get the full tweet text. Following the examples in https://github.com/tweepy/tweepy/issues/974, "tweepy Streaming API : full text", and "Tweepy Truncated Status", I tried using tweet_mode='extended'. However, I ran into the following error: AttributeError: 'Status' object has no attribute 'full_text'.
From those examples I understand that if a tweet is under 140 characters, the text can be retrieved as usual. However, those examples are all for a StreamListener, and I am not using a StreamListener. How can I use a try/except block, as in "tweepy Streaming API : full text", to handle this error and get the tweet's full_text? How should I modify my code below?
getData.py
import tweepy
import csv
# Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""
def get_all_tweets(screen_name):
    # Twitter only allows access to a user's most recent 3240 tweets with this method
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)
    # initialize a list to hold all the tweepy Tweets
    alltweets = []
    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)
    # save most recent tweets
    alltweets.extend(new_tweets)
    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1
    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        print("getting tweets before %s" % (oldest))
        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest,
                                       include_entities=True, tweet_mode='extended')
        # save most recent tweets
        alltweets.extend(new_tweets)
        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1
        print("...%s tweets downloaded so far" % (len(alltweets)))
    user = api.get_user(screen_name)
    followers_count = user.followers_count
    # transform the tweepy tweets into a 2D array that will populate the csv
    outtweets = [[tweet.id_str, tweet.created_at, tweet.full_text.encode("utf-8"),
                  1 if 'media' in tweet.entities else 0,
                  1 if tweet.entities.get('hashtags') else 0,
                  followers_count, tweet.retweet_count, tweet.favorite_count]
                 for tweet in alltweets]
    # write the csv
    with open('tweets.csv', mode='a', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "text", "hasMedia", "hasHashtag",
                         "followers_count", "retweet_count", "favourite_count"])
        writer.writerows(outtweets)

def main():
    get_all_tweets("@MACcosmetics")

if __name__ == '__main__':
    main()
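For reference, the try/except fallback suggested in the linked threads can be sketched like this (a minimal sketch, independent of my script above; the `get_text` helper name and the `SimpleNamespace` stand-ins for tweepy Status objects are my own, for illustration only):

```python
from types import SimpleNamespace

def get_text(tweet):
    """Return full_text if the Status carries it, otherwise fall back to text.

    Statuses fetched with tweet_mode='extended' have a full_text attribute;
    statuses fetched without it only have text, so accessing full_text
    raises AttributeError and we fall back.
    """
    try:
        return tweet.full_text
    except AttributeError:
        return tweet.text

# Hypothetical stand-ins for tweepy Status objects:
extended = SimpleNamespace(full_text="a long tweet that would otherwise be truncated")
truncated = SimpleNamespace(text="a short tweet")

print(get_text(extended))   # -> a long tweet that would otherwise be truncated
print(get_text(truncated))  # -> a short tweet
```

If this pattern applies outside a StreamListener too, I would presumably call `get_text(tweet)` in my `outtweets` list comprehension instead of `tweet.full_text` directly.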