Efficiently collapsing a matrix


I have a matrix of this format:

set.seed(1)
mat <- matrix(round(runif(25,0,1)),nrow=5,ncol=5)
colnames(mat) <- c("a1::C","a1::A","a1::B","b1::D","b1::A")

     a1::C a1::A a1::B b1::D b1::A
[1,]     0     1     0     0     1
[2,]     0     1     0     1     0
[3,]     1     1     1     1     1
[4,]     1     1     0     0     0
[5,]     0     0     1     1     0

Each column represents a subject and a feature (encoded in the column name, with the two parts separated by a double colon). Within each row, a 1 means the subject has that feature and a 0 means it does not. A subject may have all zeros in a given row.
I want to build a new matrix in which the columns are the subjects (one column per subject) and each cell holds the features that subject has in that row, sorted alphabetically and separated by commas. If a subject has no features in a row (i.e. all of its columns are 0 there), the value should be "W".
Here is what the new matrix based on mat would look like:
cnames = unique(sapply(colnames(mat), function(x) strsplit(x,split="::")[[1]][1]))
new_mat <- matrix(c("A","A","A,B,C","A,C","B",
                    "A","D","A,D","W","D"),
                  nrow=nrow(mat),ncol=length(cnames))
colnames(new_mat) = cnames

     a1      b1   
[1,] "A"     "A"  
[2,] "A"     "D"  
[3,] "A,B,C" "A,D"
[4,] "A,C"   "W"  
[5,] "B"     "D"

Is there an efficient and elegant way to do this?
2 Answers


Step 1: sort the matrix columns (so the sub levels of each main level sit next to each other)

mat <- mat[, order(colnames(mat))]

#      a1::A a1::B a1::C b1::A b1::D
# [1,]     1     0     0     1     0
# [2,]     1     0     0     0     1
# [3,]     1     1     1     1     1
# [4,]     1     0     1     0     0
# [5,]     0     1     0     0     1

Step 2.1: split the column names


In this step we split each column name into its two parts.
## decompose levels, get main levels (before ::) and sub levels (post ::)
decom <- strsplit(colnames(mat), "::")

main_levels <- sapply(decom, "[", 1)
# [1] "a1" "a1" "a1" "b1" "b1"

sub_levels <- sapply(decom, "[", 2)
# [1] "A" "B" "C" "A" "D"

Step 2.2: generate the grouping index
## generating grouping index
main_index <- paste(rep(main_levels, each = nrow(mat)), rep(1:nrow(mat), times = ncol(mat)), sep = "#")
sub_index <- rep(sub_levels, each = nrow(mat))
sub_index[!as.logical(mat)] <- ""  ## 0 values in mat become ""

## if unclear what "main_index" and "sub_index" look like, check:

## matrix(main_index, nrow(mat))
#      [,1]   [,2]   [,3]   [,4]   [,5]  
# [1,] "a1#1" "a1#1" "a1#1" "b1#1" "b1#1"
# [2,] "a1#2" "a1#2" "a1#2" "b1#2" "b1#2"
# [3,] "a1#3" "a1#3" "a1#3" "b1#3" "b1#3"
# [4,] "a1#4" "a1#4" "a1#4" "b1#4" "b1#4"
# [5,] "a1#5" "a1#5" "a1#5" "b1#5" "b1#5"

## matrix(sub_index, nrow(mat))
#      [,1] [,2] [,3] [,4] [,5]
# [1,] "A"  ""   ""   "A"  ""  
# [2,] "A"  ""   ""   ""   "D" 
# [3,] "A"  "B"  "C"  "A"  "D" 
# [4,] "A"  ""   "C"  ""   ""  
# [5,] ""   "B"  ""   ""   "D" 

Step 2.3: conditional collapsed paste

## collapsed paste of "sub_index" conditional on "main_index"
x <- unname(tapply(sub_index, main_index, paste0, collapse = ""))
x[x == ""] <- "W"
# [1] "A"   "A"   "ABC" "AC"  "B"   "A"   "D"   "AD"  "W"   "D" 

Step 3: post-processing

I am not particularly happy with this step, but I have not found a better alternative.

x <- sapply(strsplit(x, ""), paste0, collapse = ",")
#  [1] "A"   "A"   "A,B,C"  "A,C"   "B"   "A"   "D"   "A,D"  "W"  "D"

Step 4: reshape into a matrix

x <- matrix(x, nrow = nrow(mat))
colnames(x) <- unique(main_levels)

#      a1      b1   
# [1,] "A"     "A"  
# [2,] "A"     "D"  
# [3,] "A,B,C" "A,D"
# [4,] "A,C"   "W"  
# [5,] "B"     "D" 

Efficiency considerations

Using a vectorised approach like this is itself fairly efficient, and it requires no grouping information to be typed by hand. The same code can be used, for example, when you have hundreds of main levels (before the ::) and hundreds of sub levels (after the ::).
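
To illustrate, here is a sketch with made-up dimensions and names (not from the original answer): the same pipeline applied to a random matrix with 200 main levels ("g001" .. "g200") and 26 sub levels per main level. The sub levels are kept single characters because step 3 splits character by character.

## sketch with made-up dimensions: 200 main levels, 26 sub levels each
set.seed(2)
cn      <- paste0(rep(sprintf("g%03d", 1:200), each = 26), "::", LETTERS)
big_mat <- matrix(rbinom(nrow(mat) * length(cn), 1, 0.05),
                  nrow = nrow(mat), dimnames = list(NULL, cn))

decom_big <- strsplit(sort(colnames(big_mat)), "::")
main_big  <- sapply(decom_big, "[", 1)

sub_big <- rep(sapply(decom_big, "[", 2), each = nrow(big_mat))
sub_big[!as.logical(big_mat[, order(colnames(big_mat))])] <- ""

x_big <- unname(tapply(sub_big,
                       paste(rep(main_big, each = nrow(big_mat)),
                             rep(1:nrow(big_mat), times = ncol(big_mat)),
                             sep = "#"),
                       paste0, collapse = ""))
x_big[x_big == ""] <- "W"

x_big <- matrix(sapply(strsplit(x_big, ""), paste0, collapse = ","),
                nrow = nrow(big_mat))
colnames(x_big) <- unique(main_big)

dim(x_big)
# [1]   5 200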

The only thing to keep in mind is reducing unnecessary memory copies. In that respect we should, where possible, use intermediate results anonymously rather than assigning them to explicit objects as demonstrated step by step above. The following works (tested):

 decom <- strsplit(sort(colnames(mat)), "::")
 main_levels <- sapply(decom, "[", 1)

 sub_index <- rep(sapply(decom, "[", 2), each = nrow(mat))
 sub_index[!as.logical(mat[, order(colnames(mat))])] <- ""

 x <- unname(tapply(sub_index,
                    paste(rep(main_levels, each = nrow(mat)),
                          rep(1:nrow(mat), times = ncol(mat)),
                          sep = "#"),
                    paste0, collapse = ""))

 x[x == ""] <- "W"   ## as in step 2.3: a subject with no features gets "W"

 x <- matrix(sapply(strsplit(x, ""), paste0, collapse = ","),
             nrow = nrow(mat))

 colnames(x) <- unique(main_levels)
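
A quick sanity check (assuming new_mat from the question is still in the workspace):

## the condensed version reproduces the expected matrix from the question
all(x == new_mat)
# [1] TRUE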


Here is a starting point. Depending on how many variables you have, this could get tedious.

library(data.table)
dt = data.table(id = seq_len(nrow(mat)), mat)
longDt <- melt(dt, id.vars = "id", measure = patterns("^a1::", "^b1::"))

longDt[, .(a1 = list(sort(c("C", "A", "B")[as.logical(value1)])), 
           b1 = list(sort(c("D", "A")[as.logical(value2)]))), .(id)]
   id    a1  b1
1:  1     A   A
2:  2     A   D
3:  3 A,B,C A,D
4:  4   A,C    
5:  5     B   D
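
One possible follow-up (a sketch, not part of the original answer): flatten the list columns into comma-separated strings and fill empty cells with "W" so the output matches the format requested in the question; res is just an illustrative name and reuses longDt from above.

res <- longDt[, .(a1 = list(sort(c("C", "A", "B")[as.logical(value1)])),
                  b1 = list(sort(c("D", "A")[as.logical(value2)]))), .(id)]
## turn each list element into an "A,B,C"-style string; empty sets become "W"
res[, c("a1", "b1") := lapply(.SD, function(col)
      vapply(col, function(v) if (length(v)) paste(v, collapse = ",") else "W",
             character(1))),
    .SDcols = c("a1", "b1")]
res
#    id    a1  b1
# 1:  1     A   A
# 2:  2     A   D
# 3:  3 A,B,C A,D
# 4:  4   A,C   W
# 5:  5     B   D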
