Python text processing: AttributeError: 'list' object has no attribute 'lower'


I'm new to Python and Stack Overflow (please be gentle) and am trying to learn how to do sentiment analysis. I'm using a combination of code I found in a tutorial and here: Python - AttributeError: 'list' object has no attribute. However, I keep running into the following problem:

Traceback (most recent call last):
    File "C:/Python27/training", line 111, in <module>
    processedTestTweet = processTweet(row)
  File "C:/Python27/training", line 19, in processTweet
    tweet = tweet.lower()
AttributeError: 'list' object has no attribute 'lower'

Here is my code:

import csv
#import regex
import re
import pprint
import nltk.classify


#start replaceTwoOrMore
def replaceTwoOrMore(s):
    #look for 2 or more repetitions of character
    pattern = re.compile(r"(.)\1{1,}", re.DOTALL)
    return pattern.sub(r"\1\1", s)

# process the tweets
def processTweet(tweet):
    #Convert to lower case
    tweet = tweet.lower()
    #Convert www.* or https?://* to URL
    tweet = re.sub('((www\.[\s]+)|(https?://[^\s]+))','URL',tweet)
    #Convert @username to AT_USER
    tweet = re.sub('@[^\s]+','AT_USER',tweet)
    #Remove additional white spaces
    tweet = re.sub('[\s]+', ' ', tweet)
    #Replace #word with word
    tweet = re.sub(r'#([^\s]+)', r'\1', tweet)
    #trim
    tweet = tweet.strip('\'"')
    return tweet

#start getStopWordList
def getStopWordList(stopWordListFileName):
    #read the stopwords file and build a list
    stopWords = []
    stopWords.append('AT_USER')
    stopWords.append('URL')

    fp = open(stopWordListFileName, 'r')
    line = fp.readline()
    while line:
        word = line.strip()
        stopWords.append(word)
        line = fp.readline()
    fp.close()
    return stopWords

def getFeatureVector(tweet, stopWords):
    featureVector = []
    words = tweet.split()
    for w in words:
        #replace two or more with two occurrences
        w = replaceTwoOrMore(w)
        #strip punctuation
        w = w.strip('\'"?,.')
        #check if it consists of only words
        val = re.search(r"^[a-zA-Z][a-zA-Z0-9]*[a-zA-Z]+[a-zA-Z0-9]*$", w)
        #ignore if it is a stopWord
        if(w in stopWords or val is None):
            continue
        else:
            featureVector.append(w.lower())
    return featureVector

def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in featureList:
        features['contains(%s)' % word] = (word in tweet_words)
    return features


#Read the tweets one by one and process it
inpTweets = csv.reader(open('C:/GsTraining.csv', 'rb'),
                       delimiter=',',
                       quotechar='|')
stopWords = getStopWordList('C:/stop.txt')
count = 0;
featureList = []
tweets = []

for row in inpTweets:
    sentiment = row[0]
    tweet = row[1]
    processedTweet = processTweet(tweet)
    featureVector = getFeatureVector(processedTweet, stopWords)
    featureList.extend(featureVector)
    tweets.append((featureVector, sentiment))

# Remove featureList duplicates
featureList = list(set(featureList))

# Generate the training set
training_set = nltk.classify.util.apply_features(extract_features, tweets)

# Train the Naive Bayes classifier
NBClassifier = nltk.NaiveBayesClassifier.train(training_set)

# Test the classifier
with open('C:/CleanedNewGSMain.txt', 'r') as csvinput:
    with open('GSnewmain.csv', 'w') as csvoutput:
        writer = csv.writer(csvoutput, lineterminator='\n')
        reader = csv.reader(csvinput)

        all=[]
        row = next(reader)

        for row in reader:
            processedTestTweet = processTweet(row)
            sentiment = NBClassifier.classify(
                extract_features(getFeatureVector(processedTestTweet, stopWords)))
            row.append(sentiment)
            processTweet(row[1])

        writer.writerows(all)

Any help is greatly appreciated.

1 Answer

What you get back from the csv reader is a list, and lower only works on strings. Presumably the list contains strings, which gives you two options: either call lower on each element, or turn the list into a single string and then call lower.
# the first approach
[item.lower() for item in tweet]

# the second approach
' '.join(tweet).lower()
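
To make the difference concrete, this is what each call returns for a made-up row of the kind csv.reader produces (the field values here are invented):

row = ['positive', 'Loving The New Phone']

[item.lower() for item in row]   # -> ['positive', 'loving the new phone']
' '.join(row).lower()            # -> 'positive loving the new phone'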

More likely, though (it's hard to be sure without more information), you really just want a single item out of that list. Something along the lines of:

for row in reader:
    processedTestTweet = processTweet(row[0]) # Again, can't know if this is actually correct without seeing the file

Also, my guess is that you aren't really using the csv reader the way you think you are, because right now you're training a Naive Bayes classifier on a single example each time and then having it predict the one example it was trained on. Maybe you could explain what you're trying to achieve?
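
If the intent is to train the classifier once on the labelled set and then classify every row of the test file, a minimal sketch of what the test loop could look like is below, assuming the test file keeps the tweet text in the second column like the training file (that column index, and the header skip, are guesses):

with open('C:/CleanedNewGSMain.txt', 'r') as csvinput:
    with open('GSnewmain.csv', 'w') as csvoutput:
        writer = csv.writer(csvoutput, lineterminator='\n')
        reader = csv.reader(csvinput)

        next(reader, None)  # skip a header row, if the file has one
        results = []
        for row in reader:
            # pass the tweet string itself, not the whole row list
            processedTestTweet = processTweet(row[1])
            features = extract_features(getFeatureVector(processedTestTweet, stopWords))
            row.append(NBClassifier.classify(features))
            results.append(row)

        writer.writerows(results)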

Thanks for the quick reply. What I'm trying to do is this: I have a small labelled .csv training set containing 1000 positive and 1000 negative statements. It seems to work when I test it, because I tried it with hard-coded test statements (e.g. "Fantastic!"). However, I have a file of roughly 10,000 tweets and Facebook posts that I want to open in this program and run through the Naive Bayes classifier to test their sentiment. I suspect I'm also not using the csv reader correctly, but I haven't been able to pin down the problem yet. - user3670554
Wouldn't it be easier to just convert it directly, as in the second approach? tweet = str(tweet).lower - LKT
@LKT That won't work. str(tweet).lower would give you the brackets, commas and other punctuation along with it, instead of just lower-casing the items in the list. - Slater Victoroff
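
A quick illustration of that point, on a made-up list:

tweet = ['Loving', 'The', 'New', 'Phone']

str(tweet).lower()       # -> "['loving', 'the', 'new', 'phone']"  (brackets and quotes included)
' '.join(tweet).lower()  # -> 'loving the new phone'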
