You can't get good performance inserting that many rows consecutively.

I'm an IndexedDB dev with real-world experience at the scale you're talking about (consecutively writing hundreds of thousands of rows), and it isn't pretty.
In my opinion, IDB is not a good fit when a large amount of data has to be written consecutively. If I were architecting an IndexedDB app that needed lots of data, I would figure out a way to seed it slowly over time.
The issue is writes: as I see it, the slowness of writes, combined with their I/O-intensive nature, gets worse over time. (For what it's worth, reads are always lightning fast in IDB.)
To start, you save overhead by reusing transactions, so your first instinct might be to cram everything into a single transaction. But in Chrome, for example, I've found that the browser seems to dislike long-running writes, perhaps because of some mechanism meant to throttle misbehaving tabs.
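One middle-ground approach is to reuse a transaction per batch rather than one giant transaction for everything, pausing between batches so the tab stays responsive. A minimal sketch, assuming an already-open `IDBDatabase` with an object store named `"rows"` (the store name, batch size, and pause length are my own placeholder choices, not anything from the question):

```javascript
// Pure helper: split an array into fixed-size batches.
function chunk(rows, size) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// Hypothetical usage against an open IDBDatabase `db`: one readwrite
// transaction per batch, with a short pause between batches so the
// seeding happens slowly instead of in one long-running write.
async function seedInBatches(db, rows, size = 1000, pauseMs = 50) {
  for (const batch of chunk(rows, size)) {
    await new Promise((resolve, reject) => {
      const tx = db.transaction("rows", "readwrite");
      const store = tx.objectStore("rows");
      batch.forEach((row) => store.put(row));
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error);
    });
    // Yield to the event loop between transactions.
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}
```

Tuning the batch size is a trade-off: bigger batches amortize transaction overhead, smaller ones avoid the long-running-write behavior described above.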
I'm not sure what kind of performance you're seeing, but average numbers might fool you depending on the size of your test. The limiting factor is throughput, but if you're trying to insert large amounts of data consecutively, pay attention to writes over time specifically.
I happen to be working on a demo with several hundred thousand rows at my disposal, and have statistics. With my visualization disabled, running pure dash on IDB, here's what I see right now in Chrome 32 on a single object store with a single non-unique index with an auto-incrementing primary key.
For a much smaller dataset of 27k rows, I saw 60-70 entries/second:
* Around 30 seconds: 921 entries/second on average (there's always a great burst of inserts at the start), 62/second at the moment I sampled
* Around 60 seconds: 389/second on average (sustained decreases starting to outweigh the effect of the initial burst), 71/second at the moment
* Around 1:30: 258/second on average, 67/second at the moment
* Around 2:00 (~1/3 done): 188/second on average, 66/second at the moment
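The gap between those two columns is why averages fool you. A small sketch of how I track both numbers — overall entries/second since the start versus entries/second over the last sample window (the "at the moment" figure). The helper names are mine, not from any library; the burst/decay numbers at the bottom are synthetic:

```javascript
// Tracks total throughput since start ("average") alongside throughput
// over the most recent sampling window ("at the moment").
function makeRateSampler(startMs) {
  let total = 0;
  let windowCount = 0;
  let windowStart = startMs;
  return {
    record(n = 1) {
      total += n;
      windowCount += n;
    },
    sample(nowMs) {
      const average = total / ((nowMs - startMs) / 1000);
      const atTheMoment = windowCount / ((nowMs - windowStart) / 1000);
      windowCount = 0;   // reset the window for the next sample
      windowStart = nowMs;
      return { average, atTheMoment };
    },
  };
}

// Synthetic decay: a big burst up front, then slow sustained inserts.
const s = makeRateSampler(0);
s.record(25000);            // burst during the first 10 s
const r1 = s.sample(10000); // average 2500/s, moment 2500/s
s.record(1000);             // only 1000 more over the next 20 s
const r2 = s.sample(30000); // average ~867/s, but only 50/s at the moment
```

The average still looks healthy long after the instantaneous rate has collapsed, which is exactly the pattern in the numbers above.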
Some tests with a much smaller dataset performed better, but showed similar characteristics. Likewise, with much larger datasets the effect is exaggerated, and I've seen rates of less than 1 entry/second after leaving it running for multiple hours.