Efficiently handling values repeated within groups in data.table

There are two ways to grab a single value from a column whose value is repeated within a group (i.e., the same value on every row of the group): use variable[1], or include the variable in the by statement and use .BY$variable. Assume throughout that I want the returned values to include variable as a column.
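For concreteness, here is a minimal sketch of the two idioms on toy data (the names dt, g, and rep_val are hypothetical, not from the benchmarks below):

library(data.table)
dt <- data.table(g = rep(1:2, each = 3), rep_val = rep(c(10, 20), each = 3))

# Approach 1: take the first element of the repeated column
dt[, list(v = rep_val[1]), keyby = g]

# Approach 2: put the repeated column in the by and read it via .BY
dt[, list(v = .BY$rep_val), keyby = list(g, rep_val)]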
The tests below make it clear that putting additional variables in the by statement slows things down, even after excluding the cost of keying by those extra variables (or of using a trick to tell data.table that no additional keying is needed). Why do additional, already-keyed by variables slow things down?

I had hoped that including already-keyed by variables would be a convenient syntactic trick to return those variables in the result without naming them explicitly in the j statement, but it seems this is inadvisable, since there is some overhead associated with additional by variables even when they are already keyed. So my question is: what causes this overhead?
Some example data:
library(data.table)
n <- 1e8
y <- data.table(sample(1:5,n,replace=TRUE),rnorm(n),rnorm(n))
y[,sumV2:=sum(V2),keyby=V1]

The timings show that the variable[1] approach (here, sumV2[1]) is faster.

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])
system.time(x[, list(out=sum(V3*V2)/.BY$sumV2),keyby=list(V1,sumV2)])

I guess this isn't surprising, since data.table has no way of knowing that the groups defined by setkey(V1) and by setkey(V1,sumV2) are in fact identical.

What does surprise me is that even when the data.table is keyed on setkey(V1,sumV2) (and we entirely ignore the time required to set the new key), using sumV2[1] is still faster. Why is that?

x <- copy(y)
setkey(x,V1,sumV2)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),by=V1])
system.time(x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)])

Moreover, the time needed to run setkey(x,V1,sumV2) is non-negligible. Is there any way to trick data.table into skipping the actual re-keying step by telling it that the key isn't really changing in any meaningful way?
x <- copy(y)
system.time(setkey(x,V1,sumV2))

Answering my own question: it seems we can skip the sort when setting the key by just assigning the "sorted" attribute directly. Is this allowed? Will it break anything?
x <- copy(y)
system.time({
  setattr(x, "sorted", c("V1","sumV2"))
  x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})

Whether that is bad practice or likely to break things, I don't know. But using the setattr trick is considerably faster than explicit keying:
x <- copy(y)
system.time({
  setkey(x,V1,sumV2)
  x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})

But even the setattr trick combined with putting sumV2 in the by statement is still not as fast as leaving sumV2 out of the by statement altogether:

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])

I would have thought that setting the key via the attribute and having sumV2 be a length-1 by variable within each group would be faster than keying on V1 alone and using sumV2[1]. If sumV2 is not a by variable, then the entire vector of repeated sumV2 values has to be materialized for each group before being subset down to sumV2[1]; compare that to when sumV2 is a by variable, where sumV2 is a length-1 vector in each group. Obviously my reasoning here is incorrect. Can anyone explain why? Why is sumV2[1] the fastest option, even compared to making sumV2 a by variable after using the setattr trick?
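To make the intuition concrete, here is a small sketch (toy data, not from the benchmarks above) showing what each form hands to j:

library(data.table)
dt <- data.table(g = rep(1:2, each = 4), s = rep(c(1, 2), each = 4))

# With by=g, s arrives in j as the full per-group vector (length 4 here),
# which must be gathered before s[1] can be taken:
dt[, length(s), by = g]

# With s in the by, .BY$s is always length 1:
dt[, length(.BY$s), by = list(g, s)]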
As an aside, I was surprised to learn that attr<- is just as fast as setattr (both complete instantly, suggesting no copying at all). This is contrary to my understanding that base R foo<- replacement functions copy their data.
x <- copy(y)
system.time(setattr(x, "sorted", c("V1","sumV2")))
x <- copy(y)
system.time(attr(x,"sorted") <- c("V1","sumV2"))
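One way to check the no-copy claim directly is base R's tracemem(), which prints a message whenever the object it watches gets duplicated (this requires an R build with memory profiling enabled, the default for CRAN binaries; whether a copy happens can also depend on R's reference counting). A sketch:

x <- copy(y)
tracemem(x)                           # start watching x for duplications
attr(x, "sorted") <- c("V1","sumV2")  # no tracemem output => no copy was made
untracemem(x)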

The sessionInfo() relevant to this question:

data.table version 1.12.2
R version 3.5.3

I'm not sure I understand. Since you create sumV2 using := with keyby, x is already keyed by V1 after x[,sumV2:=sum(V2),keyby=V1], and data.table knows it. You can verify this by calling key(x) before calling setkey, or by setting verbose=TRUE in the setkey call (in your last example). - Alexis
Your example doesn't need a by clause at all; V3*V2/sumV2 can be run directly as a vectorized operation. - Frank
@Frank -- oops, I forgot to wrap V3*V2 in sum(). The idea here is to take a weighted average of the values of V3, with weights defined by V2, within categories defined by V1. I've edited the question to fix this. - Michael
I'll add that this is a bit of a straw man, since the fastest approach probably doesn't require creating sumV2 as a column of x at all; instead you would create sumV2 as a temporary variable inside a single compound j statement and use it to compute the weighted average (see the sketch below). But you might well want both the repeated sumV2 values in x and a separate, shorter data.table containing the weighted averages. - Michael
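For reference, the compound-j variant described in the last comment might look like this (a sketch based on the question's data, not code from the original thread):

# Compute the per-group sum once inside j and reuse it, without ever
# materializing sumV2 as a column of y:
out <- y[, {
  s <- sum(V2)
  list(sumV2 = s, out = sum(V3 * V2) / s)
}, keyby = V1]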
1 Answer


OK, so I don't have a great technical answer, but I think I've figured this out conceptually with the help of options(datatable.verbose=TRUE).

Create the data:

library(data.table)
n <- 1e8

y_unkeyed_5groups <- data.table(sample(1:5,n,replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_5groups[,sumV2:=sum(V2),keyby=V1]
y_unkeyed_10000groups <- data.table(sample(1:10000,n,replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_10000groups[,sumV2:=sum(V2),keyby=V1]

Run it the slow way:

x <- copy(y)
system.time({
  setattr(x, "sorted", c("V1","sumV2"))
  x[, list(out=sum(V3*V2)/.BY$sumV2),by=list(V1,sumV2)]
})
# Detected that j uses these columns: V3,V2 
# Finding groups using uniqlist on key ... 1.050s elapsed (1.050s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V3 * V2)/.BY$sumV2)'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.305s for 6 groups
# eval(j) took 0.254s for 6 calls
# 0.560s elapsed (0.510s cpu) 
# user  system elapsed 
# 1.81    0.09    1.72 

Run it the fast way:

x <- copy(y)
system.time(x[, list(out=sum(V3*V2)/sumV2[1],sumV2[1]),keyby=V1])
# Detected that j uses these columns: V3,V2,sumV2 
# Finding groups using uniqlist on key ... 0.060s elapsed (0.070s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V3 * V2)/sumV2[1], sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.328s for 6 groups
# eval(j) took 0.291s for 6 calls
# 0.610s elapsed (0.580s cpu) 
# user  system elapsed 
# 1.08    0.08    0.82 

The "finding groups" part is the cause of the difference. My guess at what's going on: setting a key really is just sorting (as the attribute's name hints!) and does nothing to record where groups begin and end. So even though data.table knows sumV2 is sorted, it doesn't know that the values within a group are all identical, and it still has to find where the groups in sumV2 begin and end.

My guess is that it would technically be possible to write data.table so that keying not only sorts but also stores the start/end rows of each group for each keyed variable, but that this could take up a lot of memory for data.tables with many groups.
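To illustrate (plain R, not data.table internals): even on an already-sorted key column, locating group boundaries requires a linear scan over the data, which is roughly what the "Finding groups using uniqlist on key" step spends its time on.

v <- sort(sample(1:5, 20, replace = TRUE))
starts <- c(1L, which(diff(v) != 0L) + 1L)  # first row of each run of equal values
ends   <- c(starts[-1L] - 1L, length(v))    # last row of each run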

Knowing this, the recommendation seems to be: don't repeat the same by statement over and over; instead, do everything that needs doing in a single by statement. That is probably good advice in general, but less so with a small number of groups. See the following counterexample:

I rewrote this in what I thought would be the fastest possible way with data.table (a single by statement, taking advantage of GForce):

library(data.table)
n <- 1e8
y_unkeyed_5groups <- data.table(sample(1:5,n, replace=TRUE),rnorm(n),rnorm(n))
y_unkeyed_10000groups <- data.table(sample(1:10000,n, replace=TRUE),rnorm(n),rnorm(n))

x <- copy(y_unkeyed_5groups)
system.time({
  x[, product:=V3*V2]
  outDT <- x[,list(sumV2=sum(V2),sumProduct=sum(product)),keyby=V1]
  outDT[,`:=`(out=sumProduct/sumV2,sumProduct=NULL) ]
  setkey(x,V1)
  x[outDT,sumV2:=sumV2,all=TRUE]
  x[,product:=NULL]
  outDT
})

# Detected that j uses these columns: V3,V2 
# Assigning to all 100000000 rows
# Direct plonk of unnamed RHS, no copy.
# Detected that j uses these columns: V2,product 
# Finding groups using forderv ... 0.350s elapsed (0.810s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V2), sum(product))'
# GForce optimized j to 'list(gsum(V2), gsum(product))'
# Making each group and running j (GForce TRUE) ... 1.610s elapsed (4.550s cpu) 
# Detected that j uses these columns: sumProduct,sumV2 
# Assigning to all 5 rows
# RHS for item 1 has been duplicated because NAMED is 3, but then is being plonked. length(values)==2; length(cols)==2)
# forder took 0.98 sec
# reorder took 3.35 sec
# Starting bmerge ...done in 0.000s elapsed (0.000s cpu) 
# Detected that j uses these columns: sumV2 
# Assigning to 100000000 row subset of 100000000 rows
# Detected that j uses these columns: product 
# Assigning to all 100000000 rows
# user  system elapsed 
# 11.00    1.75    5.33 


x2 <- copy(y_unkeyed_5groups)
system.time({
  x2[,sumV2:=sum(V2),keyby=V1]
  outDT2 <- x2[, list(sumV2=sumV2[1],out=sum(V3*V2)/sumV2[1]),keyby=V1]
})
# Detected that j uses these columns: V2 
# Finding groups using forderv ... 0.310s elapsed (0.700s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'sum(V2)'
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# collecting discontiguous groups took 0.714s for 5 groups
# eval(j) took 0.079s for 5 calls
# 1.210s elapsed (1.160s cpu) 
# setkey() after the := with keyby= ... forder took 1.03 sec
# reorder took 3.21 sec
# 1.600s elapsed (3.700s cpu) 
# Detected that j uses these columns: sumV2,V3,V2 
# Finding groups using uniqlist on key ... 0.070s elapsed (0.070s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sumV2[1], sum(V3 * V2)/sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.347s for 5 groups
# eval(j) took 0.265s for 5 calls
# 0.630s elapsed (0.620s cpu) 
# user  system elapsed 
# 6.57    0.98    3.99 

all.equal(x,x2)
# TRUE
all.equal(outDT,outDT2)
# TRUE

OK, so it turns out the efficiencies gained from not repeating the by statement and from using GForce don't matter much when there are only 5 groups. For larger numbers of groups, though, this does make a difference (although I haven't written it in a way that separates the benefit of a single by statement without GForce from the benefit of GForce with multiple by statements):

x <- copy(y_unkeyed_10000groups)
system.time({
  x[, product:=V3*V2]
  outDT <- x[,list(sumV2=sum(V2),sumProduct=sum(product)),keyby=V1]
  outDT[,`:=`(out=sumProduct/sumV2,sumProduct=NULL) ]
  setkey(x,V1)
  x[outDT,sumV2:=sumV2,all=TRUE]
  x[,product:=NULL]
  outDT
})
# 
# Detected that j uses these columns: V3,V2 
# Assigning to all 100000000 rows
# Direct plonk of unnamed RHS, no copy.
# Detected that j uses these columns: V2,product 
# Finding groups using forderv ... 0.740s elapsed (1.220s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sum(V2), sum(product))'
# GForce optimized j to 'list(gsum(V2), gsum(product))'
# Making each group and running j (GForce TRUE) ... 0.810s elapsed (2.390s cpu) 
# Detected that j uses these columns: sumProduct,sumV2 
# Assigning to all 10000 rows
# RHS for item 1 has been duplicated because NAMED is 3, but then is being plonked. length(values)==2; length(cols)==2)
# forder took 1.97 sec
# reorder took 11.95 sec
# Starting bmerge ...done in 0.000s elapsed (0.000s cpu) 
# Detected that j uses these columns: sumV2 
# Assigning to 100000000 row subset of 100000000 rows
# Detected that j uses these columns: product 
# Assigning to all 100000000 rows
# user  system elapsed 
# 18.37    2.30    7.31 

x2 <- copy(y_unkeyed_10000groups)
system.time({
  x2[,sumV2:=sum(V2),keyby=V1]
  outDT2 <- x2[, list(sumV2=sumV2[1],out=sum(V3*V2)/sumV2[1]),keyby=V1]
})

# Detected that j uses these columns: V2 
# Finding groups using forderv ... 0.770s elapsed (1.490s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'sum(V2)'
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# collecting discontiguous groups took 1.792s for 10000 groups
# eval(j) took 0.111s for 10000 calls
# 3.960s elapsed (3.890s cpu) 
# setkey() after the := with keyby= ... forder took 1.62 sec
# reorder took 13.69 sec
# 4.660s elapsed (14.4s cpu) 
# Detected that j uses these columns: sumV2,V3,V2 
# Finding groups using uniqlist on key ... 0.070s elapsed (0.070s cpu) 
# Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
# lapply optimization is on, j unchanged as 'list(sumV2[1], sum(V3 * V2)/sumV2[1])'
# GForce is on, left j unchanged
# Old mean optimization is on, left j unchanged.
# Making each group and running j (GForce FALSE) ... 
# memcpy contiguous groups took 0.395s for 10000 groups
# eval(j) took 0.284s for 10000 calls
# 0.690s elapsed (0.650s cpu) 
# user  system elapsed 
# 20.49    1.67   10.19 

all.equal(x,x2)
# TRUE
all.equal(outDT,outDT2)
# TRUE

More generally, data.table is blazing fast, but to extract the fastest, most efficient computation that makes full use of the underlying C code, you have to pay close attention to data.table's internals. I recently learned about data.table's GForce optimization: when there is a by statement, certain forms of j statement (involving simple functions like mean and sum) get parsed and executed directly in C.
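A quick way to see whether GForce applies to a given j is the verbose output (the exact messages can vary across data.table versions; this sketch reflects 1.12.x behavior as I understand it):

library(data.table)
dt <- data.table(g = sample(1:100, 1e6, replace = TRUE), v = rnorm(1e6))
options(datatable.verbose = TRUE)
dt[, list(s = sum(v), m = mean(v)), by = g]  # simple calls: "GForce optimized j"
dt[, list(r = sum(v) / mean(v)), by = g]     # compound expression: GForce stays off
options(datatable.verbose = FALSE)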
