How to remove stopwords efficiently from a list of ngram tokens in R

This is a call for improvements on an operation that I can already do inefficiently: filter a list of n-gram tokens using a vector of "stopwords", so that the presence of any stopword term in an n-gram triggers its removal.

I would very much like to have one solution that works on both unigrams and n-grams, although having a "fixed" version and a "regex" version would also be fine. I am putting the two aspects of the question together, since someone may try a different approach that addresses both fixed and regular-expression stopword patterns.

Formats:

  • tokens is a list of character vectors, which may be unigrams, or n-grams concatenated by a _ (underscore) character.

  • stopwords is a character vector. Right now I am content for these to be fixed strings, but being able to use stopwords formatted as regular expressions would be a nice bonus.

Desired output: a list of character vectors matching the input tokens, but with any token whose component terms match a stopword removed. (This means a unigram match, or a match to any of the terms the n-gram comprises.)

Examples, test data, plus working code and benchmarks to build on:

tokens1 <- list(text1 = c("this", "is", "a", "test", "text", "with", "a", "few", "words"), 
                text2 = c("some", "more", "words", "in", "this", "test", "text"))
tokens2 <- list(text1 = c("this_is", "is_a", "a_test", "test_text", "text_with", "with_a", "a_few", "few_words"), 
                text2 = c("some_more", "more_words", "words_in", "in_this", "this_text", "text_text"))
tokens3 <- list(text1 = c("this_is_a", "is_a_test", "a_test_text", "test_text_with", "text_with_a", "with_a_few", "a_few_words"),
                text2 = c("some_more_words", "more_words_in", "words_in_this", "in_this_text", "this_text_text"))
stopwords <- c("is", "a", "in", "this")

# remove any single token that matches a stopword
removeTokensOP1 <- function(w, stopwords) {
    lapply(w, function(x) x[-which(x %in% stopwords)])
}

# remove any ngram in which any component term matches a stopword
removeTokensOP2 <- function(w, stopwords) {
    matchPattern <- paste0("(^|_)", paste(stopwords, collapse = "(_|$)|(^|_)"), "(_|$)")
    lapply(w, function(x) x[-grep(matchPattern, x)])
}
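
One caveat about both functions above: negative indexing drops every token when nothing matches, because x[-integer(0)] is the same as x[integer(0)]. A minimal sketch of a safer variant (not part of the benchmarks below), using logical negation instead:

removeTokensSafe <- function(w, stopwords) {
    matchPattern <- paste0("(^|_)", paste(stopwords, collapse = "(_|$)|(^|_)"), "(_|$)")
    # !grepl() keeps all tokens when there are no stopword matches
    lapply(w, function(x) x[!grepl(matchPattern, x)])
}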

removeTokensOP1(tokens1, stopwords)
## $text1
## [1] "test"  "text"  "with"  "few"   "words"
## 
## $text2
## [1] "some"  "more"  "words" "test"  "text" 

removeTokensOP2(tokens1, stopwords)
## $text1
## [1] "test"  "text"  "with"  "few"   "words"
## 
## $text2
## [1] "some"  "more"  "words" "test"  "text" 

removeTokensOP2(tokens2, stopwords)
## $text1
## [1] "test_text" "text_with" "few_words"
## 
## $text2
## [1] "some_more"  "more_words" "text_text" 

removeTokensOP2(tokens3, stopwords)
## $text1
## [1] "test_text_with"
## 
## $text2
## [1] "some_more_words"

# performance benchmarks for answers to build on
require(microbenchmark)
microbenchmark(OP1_1 = removeTokensOP1(tokens1, stopwords),
               OP2_1 = removeTokensOP2(tokens1, stopwords),
               OP2_2 = removeTokensOP2(tokens2, stopwords),
               OP2_3 = removeTokensOP2(tokens3, stopwords),
               unit = "relative")
## Unit: relative
## expr      min       lq     mean   median       uq      max neval
## OP1_1 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000   100
## OP2_1 5.119066 3.812845 3.438076 3.714492 3.547187 2.838351   100
## OP2_2 5.230429 3.903135 3.509935 3.790143 3.631305 2.510629   100
## OP2_3 5.204924 3.884746 3.578178 3.753979 3.553729 8.240244   100

Isn't the stopword removal in tm or qdap sufficient? Although those work the other way around: remove the stopwords first, then create the n-grams. - phiver
No, that part is easy; I am trying to figure out an efficient way to remove ngrams that contain stopwords after the ngrams have been constructed. - Ken Benoit
Have you looked at Tyler Rinker's new termco package on GitHub? It looks promising. I haven't had time to look at it yet. - phiver
Basically a vectorized version of grepl, written in C, for long vectors. Yes, I wish someone would write that too :} @Rcore - rawr
stringi comes close in some ways, but the sort of vectorization needed here is not what it excels at. That is why I did not use stringi in the example/base code (despite its many other attractive properties, it was not faster for this task in my tests). But maybe someone will prove me wrong! - Ken Benoit
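
For reference, a minimal sketch of the fixed-match approach discussed in these comments, assuming the stringi package (stri_split_fixed splits each n-gram on the underscore, and %in% then tests the component terms):

library(stringi)
removeTokensStringi <- function(w, stopwords) {
  lapply(w, function(x) {
    parts <- stri_split_fixed(x, "_")   # split each ngram into component terms
    x[!vapply(parts, function(p) any(p %in% stopwords), logical(1))]
  })
}
removeTokensStringi(tokens2, stopwords)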
3 Answers

This is not really an answer, but more of a reply to rawr's comment. With a longer list of stopwords, an approach using something like %in% does not seem to suffer from the dimensionality problem.

library(purrr)
removetokenstst <- function(tokens, stopwords)
  map2(tokens,
       lapply(tokens, function(x) {  # flag tokens with any component in stopwords
         unlist(lapply(strsplit(x, "_"), function(y) {
           any(y %in% stopwords)
         }))
       }),
       ~ .x[!.y])

require(microbenchmark)
microbenchmark(OP1_1 = removeTokensOP1(tokens1, morestopwords),
               OP2_1 = removeTokensOP2(tokens1, morestopwords),
               OP2_2 = removeTokensOP2(tokens2, morestopwords),
               OP2_3 = removeTokensOP2(tokens3, morestopwords),
               Ak_3 = removetokenstst(tokens3, stopwords),
               Ak_3msw = removetokenstst(tokens3, morestopwords),
               unit = "relative")

Unit: relative
    expr       min        lq       mean    median        uq      max neval
   OP1_1   1.00000   1.00000   1.000000  1.000000  1.000000  1.00000   100
   OP2_1 278.48260 176.22273  96.462854 79.787932 76.904987 38.31767   100
   OP2_2 280.90242 181.22013  98.545148 81.407928 77.637006 64.94842   100
   OP2_3 279.43728 183.11366 114.879904 81.404236 82.614739 72.04741   100
    Ak_3  15.74301  14.83731   9.340444  7.902213  8.164234 11.27133   100
 Ak_3msw  18.57697  14.45574  12.936594  8.513725  8.997922 24.03969   100

Stopwords

morestopwords = c("a", "about", "above", "after", "again", "against", "all", 
"am", "an", "and", "any", "are", "arent", "as", "at", "be", "because", 
"been", "before", "being", "below", "between", "both", "but", 
"by", "cant", "cannot", "could", "couldnt", "did", "didnt", "do", 
"does", "doesnt", "doing", "dont", "down", "during", "each", 
"few", "for", "from", "further", "had", "hadnt", "has", "hasnt", 
"have", "havent", "having", "he", "hed", "hell", "hes", "her", 
"here", "heres", "hers", "herself", "him", "himself", "his", 
"how", "hows", "i", "id", "ill", "im", "ive", "if", "in", "into", 
"is", "isnt", "it", "its", "its", "itself", "lets", "me", "more", 
"most", "mustnt", "my", "myself", "no", "nor", "not", "of", "off", 
"on", "once", "only", "or", "other", "ought", "our", "ours", 
"ourselves", "out", "over", "own", "same", "shant", "she", "shed", 
"shell", "shes", "should", "shouldnt", "so", "some", "such", 
"than", "that", "thats", "the", "their", "theirs", "them", "themselves", 
"then", "there", "theres", "these", "they", "theyd", "theyll", 
"theyre", "theyve", "this", "those", "through", "to", "too", 
"under", "until", "up", "very", "was", "wasnt", "we", "wed", 
"well", "were", "weve", "were", "werent", "what", "whats", "when", 
"whens", "where", "wheres", "which", "while", "who", "whos", 
"whom", "why", "whys", "with", "wont", "would", "wouldnt", "you", 
"youd", "youll", "youre", "youve", "your", "yours", "yourself", 
"yourselves", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", 
"k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", 
"x", "y", "z")

But it is not exactly the same, since %in% only matches against the table (see https://github.com/wch/r-source/blob/b156e3a711967f58131e23c1b1dc1ea90e2f0c43/src/main/unique.c#L922), i.e. against the length of the stopwords, or of whatever you get from splitting the strings, whereas grepl has to match character by character (see https://github.com/wch/r-source/blob/b156e3a711967f58131e23c1b1dc1ea90e2f0c43/src/main/grep.c#L679). So for stopwords <- c("is", "a", "in", "this"), %in% has four things to do, whereas for grepl it depends on the length of the target vector and the lengths of those strings. - rawr
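
A rough sketch illustrating the scaling difference rawr describes, using the tokens3 and morestopwords objects defined above (relative timings are indicative only):

words <- unlist(strsplit(unlist(tokens3), "_"))
pattern <- paste0("^(", paste(morestopwords, collapse = "|"), ")$")
microbenchmark(hash  = words %in% morestopwords,  # hash-table lookup, cost ~ table size
               regex = grepl(pattern, words),     # scans each string against the alternation
               unit = "relative")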

We can use the parallel package to improve on lapply, especially when there are many elements in the list. First, create many elements:
tokens2 <- list(text1 = c("this_is", "is_a", "a_test", "test_text", "text_with", "with_a", "a_few", "few_words"), 
                text2 = c("some_more", "more_words", "words_in", "in_this", "this_text", "text_text"))
tokens2 <- lapply(1:500,function(x) sample(tokens2,1)[[1]])

We do this because the parallel package has a lot of setup overhead, so simply increasing the number of microbenchmark iterations would keep incurring that cost. By increasing the size of the list, you see the real improvement.
library(parallel)
#Setup
cl <- detectCores()
cl <- makeCluster(cl)

#Two functions:

#original
removeTokensOP2 <- function(w, stopwords) { 
  matchPattern <- paste0("(^|_)", paste(stopwords, collapse = "(_|$)|(^|_)"), "(_|$)")
  lapply(w, function(x) x[-grep(matchPattern, x)])
}

#new: operates on a single character vector; parLapply supplies each list element
removeTokensOPP <- function(w, stopwords) {
  matchPattern <- paste0("(^|_)", paste(stopwords, collapse = "(_|$)|(^|_)"), "(_|$)")
  return(w[-grep(matchPattern, w)])
}

#compare

microbenchmark(
  OP2_P = parLapply(cl,tokens2,removeTokensOPP,stopwords),
  OP2_2 = removeTokensOP2(tokens2, stopwords),
  unit = 'relative'
)

Unit: relative
  expr      min       lq     mean   median       uq      max neval
 OP2_P 1.000000 1.000000 1.000000 1.000000 1.000000  1.00000   100
 OP2_2 1.730565 1.653872 1.678781 1.562258 1.471347 10.11306   100

Performance will improve further as the number of elements in the list increases.
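
One housekeeping note (not in the original answer): release the worker processes once you are done benchmarking:

stopCluster(cl)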

You might consider simplifying the regular expression, since the ^ and $ anchors add overhead.
remove_short <- function(x, stopwords) {
  stopwords_regexp <- paste0('(^|_)(', paste(stopwords, collapse = '|'), ')(_|$)')
  lapply(x, function(x) x[!grepl(stopwords_regexp, x)])
}
require(microbenchmark)
microbenchmark(OP1_1 = removeTokensOP1(tokens1, stopwords),
               OP2_1 = removeTokensOP2(tokens2, stopwords),
               OP2_2 = remove_short(tokens2, stopwords),
               unit = "relative")
Unit: relative
  expr      min       lq     mean   median       uq      max neval cld
 OP1_1 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000   100 a  
 OP2_1 5.178565 4.768749 4.465138 4.441130 4.262399 4.266905   100   c
 OP2_2 3.452386 3.247279 3.063660 3.068571 2.963794 2.948189   100  b 

But I get a positive match on "beautiful" from stopwords such as "if", etc. - Ken Benoit
You are right. However, there is still a small optimization for your regexp: you can write it as (^|_)(is|a|in|this)(_|$) instead of (^|_)is(_|$)|(^|_)a(_|$)|(^|_)in(_|$)|(^|_)this(_|$). I have edited my answer to reflect this difference. - Vlados
