Approach #1
Considering that each column in the first input has exactly one entry, we can use np.bincount on the inputs rows, values and X, thereby avoiding the creation of the sparse matrix S -
def sparse_matrix_mult(rows, values, X):
    nrows = rows.max()+1
    ncols = X.shape[1]
    nelem = nrows * ncols

    # Flattened (row, col) bin id for each entry of the output
    ids = rows + nrows*np.arange(ncols)[:,None]

    # Weighted bincount sums the scaled entries into their output bins
    sums = np.bincount(ids.ravel(), (X.T*values).ravel(), minlength=nelem)
    out = sums.reshape(ncols,-1).T
    return out
Sample run -
In [746]: import numpy as np
...: from scipy.sparse import csr_matrix
...: import scipy as sp
...:
In [747]: np.random.seed(1234)
...: m,n = 3,4
...: rows = np.random.randint(0, m, n)
...: values = np.random.randint(2,10,n)
...: X = np.random.randint(2, 10, (n,5))
...:
In [748]: S = csr_matrix( (values, (rows, sp.arange(n))), shape = (m,n))
In [749]: S.dot(X)
Out[749]:
array([[42, 27, 45, 78, 87],
[24, 18, 18, 12, 24],
[18, 6, 8, 16, 10]])
In [750]: sparse_matrix_mult(rows, values, X)
Out[750]:
array([[ 42., 27., 45., 78., 87.],
[ 24., 18., 18., 12., 24.],
[ 18., 6., 8., 16., 10.]])
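To see why the weighted bincount reproduces the product, here's a minimal sketch on tiny made-up inputs (the names mirror those in the function above): each scaled entry of X is scattered into a flattened (row, column) bin id, and np.bincount sums every bin -

```python
import numpy as np

# Tiny made-up inputs: S has one entry per column, at row index rows[j]
rows = np.array([0, 2, 1, 0])
values = np.array([3.0, 2.0, 5.0, 1.0])
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])

nrows, ncols = rows.max() + 1, X.shape[1]
ids = rows + nrows * np.arange(ncols)[:, None]      # flattened bin id per entry
sums = np.bincount(ids.ravel(), (X.T * values).ravel(),
                   minlength=nrows * ncols)
out = sums.reshape(ncols, -1).T

# Dense reference: build S explicitly and multiply
S = np.zeros((nrows, X.shape[0]))
S[rows, np.arange(X.shape[0])] = values
assert np.allclose(out, S @ X)
```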
Approach #2

Replacing np.bincount with np.add.reduceat -
def sparse_matrix_mult_v2(rows, values, X):
    nrows = rows.max()+1
    ncols = X.shape[1]

    # Scale each row of X by its value, then sort to group equal row ids
    scaled_ar = X*values[:,None]
    sidx = rows.argsort()
    rows_s = rows[sidx]

    # Start index of each run of identical row ids
    cut_idx = np.concatenate(([0],np.flatnonzero(rows_s[1:] != rows_s[:-1])+1))
    sums = np.add.reduceat(scaled_ar[sidx],cut_idx,axis=0)

    # Scatter the per-run sums back; zeros so absent row ids stay zero
    out = np.zeros((nrows, ncols),dtype=sums.dtype)
    row_idx = rows_s[cut_idx]
    out[row_idx] = sums
    return out
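The argsort/reduceat step above can be illustrated on a tiny made-up example: after sorting by row id, np.add.reduceat sums each contiguous run of identical ids in a single call -

```python
import numpy as np

# Tiny stand-ins for the intermediates in sparse_matrix_mult_v2
rows = np.array([2, 0, 1, 0, 2])
scaled = np.arange(10.0).reshape(5, 2)   # plays the role of X * values[:, None]

sidx = rows.argsort()                    # group identical row ids together
rows_s = rows[sidx]

# cut_idx marks where each run of equal row ids begins
cut_idx = np.concatenate(([0], np.flatnonzero(rows_s[1:] != rows_s[:-1]) + 1))

# One reduceat call sums every run; sums[i] is the total for row id rows_s[cut_idx][i]
sums = np.add.reduceat(scaled[sidx], cut_idx, axis=0)
```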
Runtime test

I couldn't run it with the sizes mentioned in the question, as those were too big for me. So, on reduced datasets, here's what I got -
In [149]: m,n = 1000, 100000
...: rows = np.random.randint(0, m, n)
...: values = np.random.randint(2,10,n)
...: X = np.random.randint(2, 10, (n,2500))
...:
In [150]: S = csr_matrix( (values, (rows, sp.arange(n))), shape = (m,n))
In [151]: %timeit csr_matrix( (values, (rows, sp.arange(n))), shape = (m,n))
100 loops, best of 3: 16.1 ms per loop
In [152]: %timeit S.dot(X)
1 loop, best of 3: 193 ms per loop
In [153]: %timeit sparse_matrix_mult(rows, values, X)
1 loop, best of 3: 4.4 s per loop
In [154]: %timeit sparse_matrix_mult_v2(rows, values, X)
1 loop, best of 3: 2.81 s per loop
So, the proposed methods don't seem to beat the sparse S.dot on performance, but they should be good on memory efficiency.
For sparse X

For sparse X, we need some modifications, as listed in the modified method below -
from scipy.sparse import find

def sparse_matrix_mult_sparseX(rows, values, Xs):
    nrows = rows.max()+1
    ncols = Xs.shape[1]
    nelem = nrows * ncols

    # Scale the sparse matrix row-wise, then pull out its nonzeros
    scaled_vals = Xs.multiply(values[:,None])
    r,c,v = find(scaled_vals)

    # Flattened (row, col) bin id per nonzero, summed via weighted bincount
    ids = rows[r] + c*nrows
    sums = np.bincount(ids, v, minlength=nelem)
    out = sums.reshape(ncols,-1).T
    return out
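As a quick sanity check on small made-up sizes, the sparse-X variant can be compared against the densified S.dot(Xs) (the routine is repeated inline here so the snippet runs standalone) -

```python
import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random, find

# Condensed copy of the sparse-X routine, so this check is self-contained
def sparse_matrix_mult_sparseX(rows, values, Xs):
    nrows = rows.max()+1
    ncols = Xs.shape[1]
    scaled_vals = Xs.multiply(values[:,None])   # row-wise scaling, stays sparse
    r,c,v = find(scaled_vals)                   # nonzero coordinates and values
    sums = np.bincount(rows[r] + c*nrows, v, minlength=nrows*ncols)
    return sums.reshape(ncols,-1).T

m, n = 4, 6
rows = np.array([0, 3, 1, 0, 2, 3])             # every row id 0..m-1 appears
values = np.arange(2.0, 8.0)
Xs = sparse_random(n, 5, density=0.4, format='csr', random_state=0)

# Dense reference: build S, multiply, densify
S = csr_matrix((values, (rows, np.arange(n))), shape=(m, n))
ref = S.dot(Xs).toarray()
out = sparse_matrix_mult_sparseX(rows, values, Xs)
assert np.allclose(out, ref)
```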