I want to measure how many clock cycles an addition operation takes in Python 3.
I wrote a program that computes the average cost of an addition:
from timeit import timeit

def test(n):
    for i in range(n):
        1 + 1

if __name__ == '__main__':
    times = {}
    for i in [2 ** n for n in range(10)]:
        # timeit was imported directly, so call it as timeit(...),
        # not timeit.timeit(...)
        t = timeit("test(%d)" % i, setup="from __main__ import test", number=100000)
        times[i] = t
        print("%d additions takes %f" % (i, t))
    keys = sorted(times.keys())
    for i in range(len(keys) - 1):
        print("1 addition takes %f" % ((times[keys[i + 1]] - times[keys[i]]) / (keys[i + 1] - keys[i])))
Output:
16 additions takes 0.288647
32 additions takes 0.422229
64 additions takes 0.712617
128 additions takes 1.275438
256 additions takes 2.415222
512 additions takes 5.050155
1024 additions takes 10.381530
2048 additions takes 21.185604
4096 additions takes 43.122559
8192 additions takes 88.323853
16384 additions takes 194.353927
1 addition takes 0.008292
1 addition takes 0.010068
1 addition takes 0.008654
1 addition takes 0.010318
1 addition takes 0.008349
1 addition takes 0.009075
1 addition takes 0.008794
1 addition takes 0.008905
1 addition takes 0.010293
1 addition takes 0.010413
1 addition takes 0.010551
1 addition takes 0.010711
1 addition takes 0.011035
So according to this output, one addition takes about 0.0095 microseconds. Based on the description on this page, I calculated that one addition takes 25 CPU cycles. Is that value normal, and why? After all, the assembly ADD instruction takes only 1–2 CPU cycles.
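The conversion from time to cycles can be sketched as follows. Note that the CPU frequency is not stated in the question; the 2.6 GHz figure below is an assumption chosen only because it makes the arithmetic land near the quoted 25 cycles:

```python
# Convert a per-operation wall time into CPU cycles:
# cycles = clock_frequency (Hz) * time_per_op (s).
clock_hz = 2.6e9         # assumed 2.6 GHz clock; not stated in the question
time_per_op = 0.0095e-6  # 0.0095 microseconds, from the measured output

cycles = clock_hz * time_per_op
print(round(cycles, 1))  # roughly 25 cycles at this assumed frequency
```

The same arithmetic at a different clock speed gives a proportionally different cycle count, so the "25 cycles" figure only holds for one particular frequency.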
Comments:

- The `for` loop inside the `test` function affects the time regardless of the iteration count. – Michael
- You could just use `times[keys[i]] / keys[i]`. Your calculation of how many cycles 0.0095 seconds corresponds to is also wrong: a CPU running at 2 GHz executes 19,000,000 cycles in 0.0095 seconds. Python is very slow, and its operations take far longer than the equivalent assembly instructions. – Ross Ridge
- There is nothing wrong with this code: by dividing by the delta, `((times[keys[i+1]] - times[keys[i]]) / (keys[i+1] - keys[i]))`, I can adjust the precision of the calculation. – Arsen
- Use an `RDTSC` call. – not2qubit
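The gap between a 1–2 cycle ADD instruction and the measured Python time comes from interpreter overhead: a single Python-level addition is compiled to several bytecode instructions, each dispatched by the CPython evaluation loop and operating on boxed `int` objects. A quick way to see this is to disassemble a small function (a sketch; the `add` function below is illustrative, not from the question):

```python
import dis

def add(a, b):
    # One Python-level addition compiles to multiple bytecode
    # instructions: load each operand, perform the add, return.
    return a + b

# Print the bytecode; the addition shows up as BINARY_ADD
# (renamed BINARY_OP in CPython 3.11+), surrounded by loads and a return.
dis.dis(add)
```

Each of those bytecode instructions costs many machine instructions of dispatch and object handling, which is why one Python addition takes dozens of CPU cycles rather than one or two.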