Getting user location information from Twitter

I am trying to scrape the latitude and longitude of Twitter users, keyed by their user names. The user names come from a CSV file; a single input file has 50k+ names. Below are the two attempts I have made so far; neither program seems to work. Corrections to either one, or an entirely new approach, are welcome.
I have a list of user_names and am trying to look up each user's profile and extract the geolocation from the profile or timeline. I could not find many examples of this on the internet.
I am looking for a better way to get the geolocation of Twitter users. I could not even find a single example showing how to collect a user's location by user_name or user_id. Is this possible in the first place?
Input: the input file has 50k+ rows.
AfsarTamannaah,6.80E+17,12/24/2015,#chennaifloods
DEEPU_S_GIRI,6.80E+17,12/24/2015,#chennaifloods
DEEPU_S_GIRI,6.80E+17,12/24/2015,#weneverletyoudownstr
ndtv,6.80E+17,12/24/2015,#chennaifloods
1andonlyharsha,6.79E+17,12/21/2015,#chennaifloods
Shashkya,6.79E+17,12/21/2015,#moneyonmobile
Shashkya,6.79E+17,12/21/2015,#chennaifloods
timesofindia,6.79E+17,12/20/2015,#chennaifloods
ANI_news,6.78E+17,12/20/2015,#chennaifloods
DrAnbumaniPMK,6.78E+17,12/19/2015,#chennaifloods
timesofindia,6.78E+17,12/18/2015,#chennaifloods
SRKCHENNAIFC,6.78E+17,12/18/2015,#dilwalefdfs
SRKCHENNAIFC,6.78E+17,12/18/2015,#chennaifloods
AmeriCares,6.77E+17,12/16/2015,#india
AmeriCares,6.77E+17,12/16/2015,#chennaifloods
ChennaiRainsH,6.77E+17,12/15/2015,#chennairainshelp
ChennaiRainsH,6.77E+17,12/15/2015,#chennaifloods
AkkiPritam,6.77E+17,12/15/2015,#chennaifloods

Code:

import tweepy
from tweepy import Stream
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
import pandas as pd
import json
import csv
import sys
import time

CONSUMER_KEY = 'XYZ'
CONSUMER_SECRET = 'XYZ'
ACCESS_KEY = 'XYZ'
ACCESS_SECRET = 'XYZ'

auth = OAuthHandler(CONSUMER_KEY,CONSUMER_SECRET)
api = tweepy.API(auth)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)

data = pd.read_csv('user_keyword.csv')

df = ['user_name', 'user_id', 'date', 'keyword']

test = api.lookup_users(user_ids=['user_name'])

for user in test:
    print user.user_name
    print user.user_id
    print user.date
    print user.keyword
    print user.geolocation

Error:

Traceback (most recent call last):
  File "user_profile_location.py", line 24, in <module>
    test = api.lookup_users(user_ids=['user_name'])
  File "/usr/lib/python2.7/dist-packages/tweepy/api.py", line 150, in lookup_users
    return self._lookup_users(list_to_csv(user_ids), list_to_csv(screen_names))
  File "/usr/lib/python2.7/dist-packages/tweepy/binder.py", line 197, in _call
    return method.execute()
  File "/usr/lib/python2.7/dist-packages/tweepy/binder.py", line 173, in execute
    raise TweepError(error_msg, resp)
tweepy.error.TweepError: [{'message': 'No user matches for specified terms.', 'code': 17}]
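
As the traceback says, the call fails because the literal string 'user_name' is being passed as a user ID, so no user matches. Below is a minimal sketch of how the lookup could be wired up instead (my assumption, not the poster's code; it presumes tweepy 3.x, Python 2 as in the traceback, and the four-column layout shown above):

import tweepy
import pandas as pd

# Placeholder credentials, as in the question
CONSUMER_KEY = 'XYZ'
CONSUMER_SECRET = 'XYZ'
ACCESS_KEY = 'XYZ'
ACCESS_SECRET = 'XYZ'

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

# The input file has no header row, so name the columns explicitly
data = pd.read_csv('user_keyword.csv',
                   names=['user_name', 'user_id', 'date', 'keyword'])

# lookup_users wants real screen names (or numeric IDs), not the literal
# string 'user_name', and accepts at most 100 of them per call
screen_names = data['user_name'].drop_duplicates().tolist()

for user in api.lookup_users(screen_names=screen_names[:100]):
    print user.screen_name, user.location

The User objects returned here expose the free-text location field from the profile; exact coordinates only exist on individual tweets that were geotagged.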

I understand that not every user shares their geolocation, but it would be good to get it for those users whose profiles are public.
What I need is the user's location name and/or latitude and longitude.
If this approach is not the right one, I am also open to other solutions.
Update 1: After digging further I found this site, which gives a solution that comes very close, but I get an error when I try to read the userName column from the input file.
It also means that only 100 users' information can be fetched per call; is there a better way around that limit? (A sketch addressing both issues follows the traceback below.)
Code:
import sys
import string
import simplejson
from twython import Twython
import csv
import pandas as pd

#WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME
import datetime
now = datetime.datetime.now()
day=int(now.day)
month=int(now.month)
year=int(now.year)


#FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API
t = Twython(app_key='ABC', 
    app_secret='ABC',
    oauth_token='ABC',
    oauth_token_secret='ABC')

#INPUT HAS NO HEADER NO INDEX
ids = pd.read_csv('user_keyword.csv', header=['userName', 'userID', 'Date', 'Keyword'], usecols=['userName'])

#ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL

users = t.lookup_user(user_id = ids)

#NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR
outfn = "twitter_user_data_%i.%i.%i.csv" % (now.month, now.day, now.year)

#NAMES FOR HEADER ROW IN OUTPUT FILE
fields = "id, screen_name, name, created_at, url, followers_count, friends_count, statuses_count, \
    favourites_count, listed_count, \
    contributors_enabled, description, protected, location, lang, expanded_url".split()

#INITIALIZE OUTPUT FILE AND WRITE HEADER ROW   
outfp = open(outfn, "w")
outfp.write(string.join(fields, "\t") + "\n")  # header

#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE
#THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE
for entry in users:
    #CREATE EMPTY DICTIONARY
    r = {}
    for f in fields:
        r[f] = ""
    #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY
    r['id'] = entry['id']
    #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES
    r['screen_name'] = entry['screen_name']
    r['name'] = entry['name']
    r['created_at'] = entry['created_at']
    r['url'] = entry['url']
    r['followers_count'] = entry['followers_count']
    r['friends_count'] = entry['friends_count']
    r['statuses_count'] = entry['statuses_count']
    r['favourites_count'] = entry['favourites_count']
    r['listed_count'] = entry['listed_count']
    r['contributors_enabled'] = entry['contributors_enabled']
    r['description'] = entry['description']
    r['protected'] = entry['protected']
    r['location'] = entry['location']
    r['lang'] = entry['lang']
    #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE
    if 'url' in entry['entities']:
        r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url']
    else:
        r['expanded_url'] = ''
    print r
    #CREATE EMPTY LIST
    lst = []
    #ADD DATA FOR EACH VARIABLE
    for f in fields:
        lst.append(unicode(r[f]).replace("\/", "/"))
    #WRITE ROW WITH DATA IN LIST
    outfp.write(string.join(lst, "\t").encode("utf-8") + "\n")

outfp.close()    

Error:

File "user_profile_location.py", line 35, in <module>
    ids = pd.read_csv('user_keyword.csv', header=['userName', 'userID', 'Date', 'Keyword'], usecols=['userName'])
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 315, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 645, in __init__
    self._make_engine(self.engine)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 799, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1202, in __init__
    ParserBase.__init__(self, kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 918, in __init__
    raise ValueError("cannot specify usecols when "
ValueError: cannot specify usecols when specifying a multi-index header
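
The ValueError comes from passing a list to header=, which pandas treats as a multi-index header and then refuses to combine with usecols; names= is the argument for labelling header-less columns. A minimal sketch of reading the file and batching the lookups, assuming Twython's lookup_user accepts a comma-separated screen_name parameter (100 users per call at most, as the comment in the code above notes):

import pandas as pd
from twython import Twython

t = Twython(app_key='ABC', app_secret='ABC',
            oauth_token='ABC', oauth_token_secret='ABC')

# names= labels the header-less columns; header=[...] is parsed as a
# multi-index header, which is what triggers the ValueError with usecols
ids = pd.read_csv('user_keyword.csv',
                  names=['userName', 'userID', 'Date', 'Keyword'],
                  usecols=['userName'])

screen_names = ids['userName'].drop_duplicates().tolist()

# users/lookup takes at most 100 users per request, so walk the list in
# chunks of 100
for start in range(0, len(screen_names), 100):
    chunk = screen_names[start:start + 100]
    for entry in t.lookup_user(screen_name=','.join(chunk)):
        print entry['screen_name'], entry['location']

Note that users/lookup silently drops names it cannot resolve, so the output can contain fewer rows than the input.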

What are you asking? Do you not understand the error you are getting from Tweepy? Do you not know how to handle the error? - jonrsharpe
Please check your code: you are requesting user_ids=['user_name'], which is very likely to fail because there is no user on Twitter actually called user_name. - oystein
@oystein It would be much appreciated if you could help with the code to get the locations from the user names. - Sitz Blogz
Your question is completely unclear. Please edit your code and state clearly what you need, so that people will try to help you. - kmario23
@kmario23 Thanks for the comment. Please note that I did include the direct questions at the start of the post; as for the code, if I knew what was wrong with it I would not be posting here. - Sitz Blogz
So, you have a list of screen_names and want to know the geolocation they tweeted from? - kmario23
1 Answer

Assuming you only want the location a user provides on their own profile page, you can use API.get_user from Tweepy. Working code is below.
#!/usr/bin/env python
from __future__ import print_function

#Import the necessary methods from tweepy library
import tweepy
from tweepy import OAuthHandler


#user credentials to access Twitter API 
access_token = "your access token here"
access_token_secret = "your access token secret key here"
consumer_key = "your consumer key here"
consumer_secret = "your consumer secret key here"


def get_user_details(username):
    userobj = api.get_user(username)
    return userobj


if __name__ == '__main__':
    #authenticating the app (https://apps.twitter.com/)
    auth = tweepy.auth.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)

    #for list of usernames, put them in iterable and call the function
    username = 'thinkgeek'
    userOBJ = get_user_details(username)
    print(userOBJ.location)

Note: This is a bare-bones implementation. Write a proper sleep routine to respect the Twitter API rate limits.
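
To run this over the ~50K screen names in the input file from the question, one possible extension is sketched below (Python 2 to match the question; user_locations.tsv is a hypothetical output path), writing one row per user and throttling the calls:

import time
import pandas as pd
import tweepy

# Placeholder credentials, as above
auth = tweepy.OAuthHandler('consumer key', 'consumer secret')
auth.set_access_token('access token', 'access token secret')
api = tweepy.API(auth)

# names= labels the header-less columns of the input file from the question
users_df = pd.read_csv('user_keyword.csv',
                       names=['user_name', 'user_id', 'date', 'keyword'])

with open('user_locations.tsv', 'w') as out:
    for screen_name in users_df['user_name'].drop_duplicates():
        try:
            location = api.get_user(screen_name).location or ''
        except tweepy.TweepError:
            # suspended or deleted accounts, protected profiles, rate-limit hits
            location = ''
        out.write('%s\t%s\n' % (screen_name, location.encode('utf-8')))
        time.sleep(1)  # crude throttle for the users/show rate limit

One user per call is slow over 50K names; when only the profile location is needed, batching with lookup_users (100 names per request, as sketched in the question update above) is far faster.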


Exactly the output I wanted, along with all the input columns... Let me check against the complete data. The user name list is over 50k for a single input; I hope it works for that. I shall get back soon. - Sitz Blogz
The problem with both of the programs I wrote in the question is reading the users from a file that holds about 50K user names. Could you please help me with that part? - Sitz Blogz
