Efficient multi-level pivoting in Pandas


I am using Pandas to process roughly 300 MB of financial data, representing the limit orders of an auction. It is multi-dimensional data that looks like this:

                                bid                                                                                                                                                                                                                                                                                                                                                                                       ask                                                                                                                                                                                                                                                                                                                                                                           
                                  0                  1                  2                  3                  4                  5                  6                  7                  8                  9                 10                 11                 12                 13                 14                 15                 16                 17                 18               19                  0                  1                  2                  3                  4                  5                  6                  7                  8                  9                 10                 11                 12                 13                 14                 15               16               17               18               19          
                              price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity  price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity    price  quantity  price  quantity  price  quantity  price  quantity  price  quantity
2014-05-13 08:47:16.180000  102.298   1000000  102.297   1500000  102.296   6500000  102.295   8000000  102.294   3000000  102.293  24300000  102.292   6000000  102.291   1000000  102.290   1000000  102.289   2500000  102.288  11000000  102.287   4000000  102.286  10100000  102.284   5000000  102.280   1500000  102.276   3000000  102.275   8100000  102.265   9500000      NaN       NaN    NaN       NaN  102.302   2000000  102.303   6100000  102.304  14700000  102.305   3500000  102.307   9800000  102.308  15500000  102.310   5000000  102.312   7000000  102.313   1000000  102.315   8000000  102.316   4500000  102.320   4000000  102.321   1000000  102.324   4000000  102.325   9500000      NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN
2014-05-13 08:47:17.003000  102.298   1000000  102.297   2500000  102.296   6500000  102.295   7000000  102.294   3000000  102.293  24300000  102.292   6000000  102.291   1000000  102.290   1000000  102.289   2500000  102.288  11000000  102.287   4000000  102.286  10100000  102.284   5000000  102.280   1500000  102.276   3000000  102.275   8100000  102.265   9500000      NaN       NaN    NaN       NaN  102.302   2000000  102.303   5100000  102.304  14700000  102.305   4500000  102.307   9800000  102.308  15500000  102.310   5000000  102.312   7000000  102.313   1000000  102.315   8000000  102.316   4500000  102.320   4000000  102.321   1000000  102.324   4000000  102.325   9500000      NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN
2014-05-13 08:47:17.005000  102.298   3000000  102.297   3500000  102.296   6000000  102.295   9300000  102.294   4000000  102.293  17500000  102.292   2000000  102.291   4000000  102.290   1000000  102.289   2500000  102.288   6000000  102.287   4000000  102.286  10100000  102.284   5000000  102.280   1500000  102.276   3000000  102.275   8100000  102.265   9500000      NaN       NaN    NaN       NaN  102.302   2000000  102.303   5100000  102.304  14700000  102.305   4500000  102.307   9000000  102.308  16300000  102.310   5000000  102.312   7000000  102.313   1000000  102.315   8000000  102.316   4500000  102.320   4000000  102.321   1000000  102.324   4000000  102.325   9500000      NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN
2014-05-13 08:47:17.006000  102.299   1000000  102.298   3000000  102.297   6500000  102.296   5000000  102.295   5300000  102.294   4000000  102.293  15500000  102.292   2000000  102.291   4000000  102.290   1000000  102.289   2500000  102.288   6000000  102.287   4000000  102.286  10100000  102.284   5000000  102.280   1500000  102.276   3000000  102.275   8100000  102.265   9500000    NaN       NaN  102.302   2000000  102.303   5100000  102.304  11700000  102.305   7500000  102.307   9000000  102.308  11300000  102.309   5000000  102.310   5000000  102.312   7000000  102.313   1000000  102.315   8000000  102.316   4500000  102.320   4000000  102.321   1000000  102.324   4000000  102.325   9500000    NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN
2014-05-13 08:47:17.007000  102.299   1000000  102.298   3000000  102.297   8500000  102.296   4000000  102.295   4300000  102.294   5000000  102.293  14500000  102.292   2000000  102.291   4000000  102.290   1000000  102.289   2500000  102.288   6000000  102.287   4000000  102.286  10100000  102.284   5000000  102.280   1500000  102.276   3000000  102.275   8100000  102.265   9500000    NaN       NaN  102.302   2000000  102.303   4100000  102.304  13700000  102.305   7500000  102.307   8000000  102.308  12300000  102.309   5000000  102.310   5000000  102.312   7000000  102.313   1000000  102.315   8000000  102.316   4500000  102.320   4000000  102.321   1000000  102.324   4000000  102.325   9500000    NaN       NaN    NaN       NaN    NaN       NaN    NaN       NaN

Note that the first level changes once you reach 20. Apologies for the long table format...

I need to perform some pivoting operations on the data. For example, instead of indexing by 0, 1, 2, 3... (the order's relative position in the queue), I want to index by the order price: 102.297, 102.296, and so on. Here is an example of such an operation:

x.stack([0,0]).reset_index(drop=True,level=2).set_index("price",append=True).unstack([1,2]).fillna(0).diff().stack([1,1])

yielding:

                                         quantity
                           side price            
2014-05-13 08:47:17.003000 ask  102.300         0
                                102.301         0
                                102.302         0
                                102.303  -1000000
                                102.304         0
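Broken into named steps, that chain can be sketched on a toy two-snapshot book (a sketch only: the data is made up, and level names are used in place of the positional [0,0] / [1,1] in the original, which resolve to the same levels here):

```python
import pandas as pd

# Toy book: 2 snapshots, 2 sides, 2 queue levels, (price, quantity) pairs.
cols = pd.MultiIndex.from_product(
    [['bid', 'ask'], [0, 1], ['price', 'quantity']],
    names=['side', 'level', 'value'])
idx = pd.to_datetime(['2014-05-13 08:47:16', '2014-05-13 08:47:17'])
x = pd.DataFrame(
    [[102.298, 1e6, 102.297, 2e6, 102.302, 3e6, 102.303, 4e6],
     [102.298, 2e6, 102.297, 2e6, 102.302, 3e6, 102.303, 5e6]],
    index=idx, columns=cols)

y = (x.stack(['side', 'level'])               # move side and level into the rows
      .reset_index(level='level', drop=True)  # discard the queue position...
      .set_index('price', append=True)        # ...and key rows by price instead
      .unstack(['side', 'price'])             # back to wide, one column per price
      .fillna(0)
      .diff()                                 # change between consecutive snapshots
      .stack(['side', 'price']))              # long again: (time, side, price) rows
```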

This can be achieved with a combination of stack/unstack/reset_index, but it seems very inefficient. I haven't looked at the code, but my guess is that each stack/unstack copies the table, making my 8 GB system run out of memory and start hitting the page file. I don't think I can use pivot here either, since the required columns sit inside a MultiIndex.
Any suggestions to speed this up?
Here is a sample input CSV file, as requested in the comments:
side,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,bid,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask,ask
level,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15,16,16,17,17,18,18,19,19,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15,16,16,17,17,18,18,19,19
value,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity,price,quantity
2014-05-13 08:47:16.18,102.298,1000000.0,102.297,1500000.0,102.296,6500000.0,102.295,8000000.0,102.294,3000000.0,102.293,2.43E7,102.292,6000000.0,102.291,1000000.0,102.29,1000000.0,102.289,2500000.0,102.288,1.1E7,102.287,4000000.0,102.286,1.01E7,102.284,5000000.0,102.28,1500000.0,102.276,3000000.0,102.275,8100000.0,102.265,9500000.0,N/A,N/A,N/A,N/A,102.302,2000000.0,102.303,6100000.0,102.304,1.47E7,102.305,3500000.0,102.307,9800000.0,102.308,1.55E7,102.31,5000000.0,102.312,7000000.0,102.313,1000000.0,102.315,8000000.0,102.316,4500000.0,102.32,4000000.0,102.321,1000000.0,102.324,4000000.0,102.325,9500000.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
2014-05-13 08:47:17.003,102.298,1000000.0,102.297,2500000.0,102.296,6500000.0,102.295,7000000.0,102.294,3000000.0,102.293,2.43E7,102.292,6000000.0,102.291,1000000.0,102.29,1000000.0,102.289,2500000.0,102.288,1.1E7,102.287,4000000.0,102.286,1.01E7,102.284,5000000.0,102.28,1500000.0,102.276,3000000.0,102.275,8100000.0,102.265,9500000.0,N/A,N/A,N/A,N/A,102.302,2000000.0,102.303,5100000.0,102.304,1.47E7,102.305,4500000.0,102.307,9800000.0,102.308,1.55E7,102.31,5000000.0,102.312,7000000.0,102.313,1000000.0,102.315,8000000.0,102.316,4500000.0,102.32,4000000.0,102.321,1000000.0,102.324,4000000.0,102.325,9500000.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
2014-05-13 08:47:17.005,102.298,3000000.0,102.297,3500000.0,102.296,6000000.0,102.295,9300000.0,102.294,4000000.0,102.293,1.75E7,102.292,2000000.0,102.291,4000000.0,102.29,1000000.0,102.289,2500000.0,102.288,6000000.0,102.287,4000000.0,102.286,1.01E7,102.284,5000000.0,102.28,1500000.0,102.276,3000000.0,102.275,8100000.0,102.265,9500000.0,N/A,N/A,N/A,N/A,102.302,2000000.0,102.303,5100000.0,102.304,1.47E7,102.305,4500000.0,102.307,9000000.0,102.308,1.63E7,102.31,5000000.0,102.312,7000000.0,102.313,1000000.0,102.315,8000000.0,102.316,4500000.0,102.32,4000000.0,102.321,1000000.0,102.324,4000000.0,102.325,9500000.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
2014-05-13 08:47:17.006,102.299,1000000.0,102.298,3000000.0,102.297,6500000.0,102.296,5000000.0,102.295,5300000.0,102.294,4000000.0,102.293,1.55E7,102.292,2000000.0,102.291,4000000.0,102.29,1000000.0,102.289,2500000.0,102.288,6000000.0,102.287,4000000.0,102.286,1.01E7,102.284,5000000.0,102.28,1500000.0,102.276,3000000.0,102.275,8100000.0,102.265,9500000.0,N/A,N/A,102.302,2000000.0,102.303,5100000.0,102.304,1.17E7,102.305,7500000.0,102.307,9000000.0,102.308,1.13E7,102.309,5000000.0,102.31,5000000.0,102.312,7000000.0,102.313,1000000.0,102.315,8000000.0,102.316,4500000.0,102.32,4000000.0,102.321,1000000.0,102.324,4000000.0,102.325,9500000.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
2014-05-13 08:47:17.007,102.299,1000000.0,102.298,3000000.0,102.297,8500000.0,102.296,4000000.0,102.295,4300000.0,102.294,5000000.0,102.293,1.45E7,102.292,2000000.0,102.291,4000000.0,102.29,1000000.0,102.289,2500000.0,102.288,6000000.0,102.287,4000000.0,102.286,1.01E7,102.284,5000000.0,102.28,1500000.0,102.276,3000000.0,102.275,8100000.0,102.265,9500000.0,N/A,N/A,102.302,2000000.0,102.303,4100000.0,102.304,1.37E7,102.305,7500000.0,102.307,8000000.0,102.308,1.23E7,102.309,5000000.0,102.31,5000000.0,102.312,7000000.0,102.313,1000000.0,102.315,8000000.0,102.316,4500000.0,102.32,4000000.0,102.321,1000000.0,102.324,4000000.0,102.325,9500000.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
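For reference, a file in this shape can be loaded back into a frame with three column levels like so (a sketch: the inline string stands in for the actual file, and only two book levels per side are shown):

```python
import io
import pandas as pd

# Minimal stand-in for the CSV above: three header rows (side, level, value),
# timestamps in the first column, N/A where the book has no entry.
csv = """side,bid,bid,ask,ask
level,0,0,0,0
value,price,quantity,price,quantity
2014-05-13 08:47:16.18,102.298,1000000.0,102.302,2000000.0
2014-05-13 08:47:17.003,102.298,2000000.0,N/A,N/A
"""

df = pd.read_csv(io.StringIO(csv), header=[0, 1, 2], index_col=0,
                 parse_dates=True, na_values='N/A')
# df.columns is now a 3-level MultiIndex named ('side', 'level', 'value'),
# and the timestamps become a parsed DatetimeIndex.
```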

You should show what you want in an easily copy-pasteable example. - Jeff
@Jeff Not sure whether you mean the result or the input data. I have added a sample input CSV to the post, and an example of the result as well. But it could be any of various transformations - the specific result doesn't matter much; I'm more interested in how to perform pivot operations efficiently on a multi-index DataFrame. Apologies if that wasn't clear. - Luciano
That's a long chain of dots. Could you do it one pandas step at a time? My knowledge of pandas internals and how they interact with the garbage collector is fuzzy too, but my feeling is that you end up with more copies of the data in memory than you actually need. - Raman Shah
Possibly related: garbage collection in Pandas can be subtle, but there are workarounds to keep old copies from eating all your memory. Breaking up the processing (e.g. y = y.reset_index(drop=True,level=2), etc.) and adding some gc.collect() calls between the lines might fix it. https://github.com/pydata/pandas/issues/2659 - Raman Shah
Thanks @RamanShah, I tried that, but surprisingly it doesn't seem to make much difference. Not sure why, since I agree with your logic. - Luciano
1 Answer


Unstack essentially creates an enumeration of index × columns, so with many rows and columns it can materialize a huge amount of memory.
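A small illustration of that blow-up (with made-up numbers): a long frame holding one observation per (row, column) pair materializes the full grid, NaNs included, once unstacked:

```python
import numpy as np
import pandas as pd

# 1,000 observations, each at a distinct (t, price) pair along the diagonal...
idx = pd.MultiIndex.from_arrays(
    [np.arange(1000), np.arange(1000)], names=['t', 'price'])
s = pd.Series(1.0, index=idx)

# ...unstack materializes the whole 1,000 x 1,000 grid: 999,000 NaN cells
# allocated just to hold 1,000 real values.
wide = s.unstack('price')
```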

Here is a solution that is slower, but whose peak memory usage should be much lower (I think). It yields a slightly smaller total result, because zero entries present in the original data may be missing here (but you can reindex and fill to fix that).

Define this function; it could probably be optimized further for this case (since the data is already grouped by level).

In [79]: def f(x):
   ....:     try:
   ....:         y = x.stack([0,0]).reset_index(drop=True,level=2).set_index("price",append=True).unstack([1,2]).fillna(0).diff().stack([1,1])
   ....:         return y[y!=0].dropna()
   ....:     except:
   ....:         return None

Group the columns by 'level' and apply f to each group; rather than using apply directly, concatenate the results as rows (this is the "unstacking" part).
This creates duplicates (at the price level), however, so they need to be aggregated.
In [76]: concat([ f(grp) for g, grp in df.groupby(level='level',axis=1) ]).groupby(level=[0,1,2]).sum().sortlevel()
Out[76]: 
value                                 quantity
                        side price            
2014-05-13 08:47:17.003 ask  102.303  -1000000
                             102.305   1000000
                        bid  102.295  -1000000
                             102.297   1000000
2014-05-13 08:47:17.005 ask  102.307   -800000
                             102.308    800000
                        bid  102.288  -5000000
                             102.291   3000000
                             102.292  -4000000
                             102.293  -6800000
                             102.294   1000000
                             102.295   2300000
                             102.296   -500000
                             102.297   1000000
                             102.298   2000000
2014-05-13 08:47:17.006 ask  102.304  -3000000
                             102.305   3000000
                             102.308  -5000000
                             102.309   5000000
                             102.310         0
                             102.312         0
                             102.313         0
                             102.315         0
                             102.316         0
                             102.320         0
                             102.321         0
                             102.324         0
                             102.325         0
                        bid  102.265  -9500000
                             102.275         0
                             102.276         0
                             102.280         0
                             102.284         0
                             102.286         0
                             102.287         0
                             102.288         0
                             102.289         0
                             102.290         0
                             102.291         0
                             102.292         0
                             102.293  -2000000
                             102.294         0
                             102.295  -4000000
                             102.296  -1000000
                             102.297   3000000
                             102.298         0
                             102.299   1000000
2014-05-13 08:47:17.007 ask  102.303  -1000000
                             102.304   2000000
                             102.307  -1000000
                             102.308   1000000
                        bid  102.293  -1000000
                             102.294   1000000
                             102.295  -1000000
                             102.296  -1000000
                             102.297   2000000

Timings (I think optimizing f would make this run faster)
In [77]: %timeit concat([ f(grp) for g, grp in df.groupby(level='level',axis=1) ]).groupby(level=[0,1,2]).sum().sortlevel()
1 loops, best of 3: 319 ms per loop

In [78]: %memit concat([ f(grp) for g, grp in df.groupby(level='level',axis=1) ]).groupby(level=[0,1,2]).sum().sortlevel()
maximum of 1: 67.515625 MB per loop

The original method

In [7]: %timeit df.stack([0,0]).reset_index(drop=True,level=2).set_index("price",append=True).unstack([1,2]).fillna(0).diff().stack([1,1])
10 loops, best of 3: 56.4 ms per loop

In [8]: %memit df.stack([0,0]).reset_index(drop=True,level=2).set_index("price",append=True).unstack([1,2]).fillna(0).diff().stack([1,1])
maximum of 1: 61.187500 MB per loop

Thanks for the reply. I don't think the time/memory trade-off in your benchmark is worthwhile in itself. The point of reducing the memory footprint is to avoid paging, which is what really causes the slowdown. Essentially, a result similar to yours can be achieved by reading the file with a chunksize and adapting the algorithm to aggregate results - that's what I'm currently doing to avoid running out of memory. Also, this doesn't help if diff() is omitted (i.e. if the matrix isn't sparse)... - Luciano
Sounds reasonable; I think you simply need more memory. Or break it into separate sub-problems and combine them. As I said, I think the actual algorithm (e.g. f) can be simplified a lot, but it depends on how sparse the price space is. - Jeff
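The chunked approach Luciano describes can be sketched like this (everything here is hypothetical: `process` stands in for the full pivot chain and the inline CSV for the real file; the key point is carrying the last row of each chunk forward so that diff() still sees consecutive snapshots across chunk boundaries):

```python
import io
import pandas as pd

# Tiny stand-in for the real file; chunksize=2 forces several chunks.
csv = "t,a\n1,10\n2,13\n3,17\n4,20\n"

def process(df):
    # Stand-in for the stack/set_index/unstack/diff chain in the question.
    return df.diff().dropna()

pieces, carry = [], None
for chunk in pd.read_csv(io.StringIO(csv), index_col=0, chunksize=2):
    if carry is not None:
        # Prepend the previous chunk's last row so diff() spans the boundary.
        chunk = pd.concat([carry, chunk])
    pieces.append(process(chunk))
    carry = chunk.iloc[[-1]]
result = pd.concat(pieces)
```

The aggregated `result` matches what a single `process` over the whole frame would give, while only one chunk (plus one carried row) is in memory at a time.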
