Is it possible to create a first-in-first-out (FIFO) queue with PyTorch?

4

I need to create a fixed-length Tensor in PyTorch that behaves like a FIFO queue.

I have the following function that does this:

def push_to_tensor(tensor, x):
    tensor[:-1] = tensor[1:]
    tensor[-1] = x
    return tensor

For example, given:
tensor = Tensor([1,2,3,4])

>> tensor([ 1.,  2.,  3.,  4.])

applying the function gives:

push_to_tensor(tensor, 5)

>> tensor([ 2.,  3.,  4.,  5.])

However, I was wondering:

  • Does PyTorch have a native method for doing this?
  • If not, is there a smarter way of doing it?

3
As far as I know, there is no native method for this, and I don't think you can improve on your implementation. – iacolippo
3 Answers

9

I implemented another FIFO queue:

def push_to_tensor_alternative(tensor, x):
    return torch.cat((tensor[1:], Tensor([x])))

It does the same thing, but I compared the two in terms of speed:

# Small Tensor
tensor = Tensor([1,2,3,4])

%timeit push_to_tensor(tensor, 5)
>> 30.9 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit push_to_tensor_alternative(tensor, 5)
>> 22.1 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

# Larger Tensor
tensor = torch.arange(10000)

%timeit push_to_tensor(tensor, 5)
>> 57.7 µs ± 4.88 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit push_to_tensor_alternative(tensor, 5)
>> 28.9 µs ± 570 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

It looks like push_to_tensor_alternative, which uses torch.cat (instead of shifting all items to the left), is faster.
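For completeness, a roll-based variant is also possible (a sketch not taken from the answers above): `torch.roll` shifts every element in a single call, after which the last slot can be overwritten in place. Whether it beats `torch.cat` depends on tensor size and device, so it is worth timing in your own setting.

```python
import torch

def push_to_tensor_roll(tensor, x):
    # Shift all elements one position to the left; the first
    # element wraps around to the last slot...
    tensor = torch.roll(tensor, -1)
    # ...and is then overwritten with the new value.
    tensor[-1] = x
    return tensor

t = torch.tensor([1., 2., 3., 4.])
t = push_to_tensor_roll(t, 5)
print(t)  # tensor([2., 3., 4., 5.])
```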


2
Maybe a bit late, but I found another way of doing this that saves some time. In my case, I needed a similar FIFO structure, but I only had to actually parse the FIFO tensor once every N iterations. That is, I needed a FIFO structure holding n integers, and every n iterations I had to pass that tensor through my model. I found it much faster to implement a collections.deque structure and cast it to a torch tensor.
import time
import torch
from collections import deque
length = 5000

que = deque([0]*200)

ten = torch.tensor(que)

s = time.time()
for i in range(length):
    for j in range(200):  
        que.pop()      
        que.appendleft(j*10)        
    torch.tensor(que)
    # after some appending/popping elements, cast to tensor
print("finish deque:", time.time()-s)


s = time.time()
for i in range(length):
    for j in range(200):
        newelem = torch.tensor([j*10])
        ten = torch.cat((ten[1:], newelem))
        #using tensor as FIFO, no need to cast to tensor
print("finish tensor:", time.time()-s)

Here are the results:
finish deque: 0.15857529640197754
finish tensor: 9.483643531799316

I also noticed that using a deque and always casting it to torch.tensor, instead of using push_alternative, gave me a ~20% speed-up.

s = time.time()
for j in range(length):
    que.pop()
    que.appendleft(j*10)
    torch.tensor(que)
print("finish queue:", time.time()-s)


s = time.time()
for j in range(length):
    newelem = torch.tensor([j*10])
    ten = torch.cat((ten[1:], newelem))
print("finish tensor:", time.time()-s)

Results:

finish queue: 8.422480821609497
finish tensor: 11.169137477874756

0
A more general version that lets you control the deque size, supports batched insertion, and lets you choose the push dimension, building on @Bruno_Lubascher's answer:
def push_to_deque(deque, x, deque_size=None, dim=0):
    """Handling `deque` tensor as a (set of) deque/FIFO, push the content of `x` into it."""
    if deque_size is None:
        deque_size = deque.shape[dim]
    deque_dims = deque.dim()
    input_size = x.shape[dim]
    dims_right = deque_dims - dim - 1
    deque_slicing = (
        (slice(None),) * dim
        + (
            slice(
                input_size - deque_size
                if input_size < deque_size
                else deque.shape[dim],
                None,
            ),
        )
        + (slice(None),) * dims_right
    )
    input_slicing = (
        (slice(None),) * dim + (slice(-deque_size, None),) + (slice(None),) * dims_right
    )
    deque = torch.cat((deque[deque_slicing], x[input_slicing]), dim=dim)
    return deque

Example:

>>> # Consider batched deques containing vectors of shape (2,):
>>> batch_size, vector_size = 1, 2  
>>> deque_size = 4
>>> # Initialize the empty deques:
>>> deque = torch.empty((batch_size, 0, vector_size))
>>> # Push at once more vectors than the batched FIFOs can contain:
>>> vals = torch.arange(10).view((batch_size, 5, vector_size))
>>> deque = push_to_deque(deque, vals, deque_size=deque_size, dim=1) 
>>> deque
tensor([[[2., 3.],
         [4., 5.],
         [6., 7.],
         [8., 9.]]])
>>> # Push some more:
>>> vals = torch.arange(10, 20).view((batch_size, 5, vector_size))
>>> deque = push_to_deque(deque, vals, deque_size=deque_size, dim=1) 
>>> deque
tensor([[[12., 13.],
         [14., 15.],
         [16., 17.],
         [18., 19.]]])
>>> vals = torch.arange(20, 24).view((batch_size, 2, vector_size))
>>> deque = push_to_deque(deque, vals, deque_size=deque_size, dim=1) 
>>> deque
tensor([[[16., 17.],
         [18., 19.],
         [20., 21.],
         [22., 23.]]])
>>> # Verify the method can also handle oversized FIFOs:
>>> deque = torch.zeros(batch_size, 10, vector_size)
>>> vals = torch.arange(4).view((batch_size, 2, vector_size))
>>> deque = push_to_deque(deque, vals, deque_size=deque_size, dim=1)
>>> deque
tensor([[[0., 0.],
         [0., 0.],
         [0., 1.],
         [2., 3.]]])
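If the repeated allocations made by `torch.cat` ever become the bottleneck, a preallocated circular buffer is another option (a sketch of my own, not from the answers above): keep a write index, overwrite slots in place, and materialize the FIFO in oldest-to-newest order only when needed.

```python
import torch

class RingBuffer:
    """Fixed-size FIFO over a preallocated tensor; pushes are O(1) in-place writes."""

    def __init__(self, size):
        self.buf = torch.zeros(size)
        self.idx = 0  # position of the next write

    def push(self, x):
        self.buf[self.idx] = x
        self.idx = (self.idx + 1) % len(self.buf)

    def as_tensor(self):
        # Roll so the oldest element (the next-write slot) comes first.
        return torch.roll(self.buf, -self.idx)

rb = RingBuffer(4)
for v in [1., 2., 3., 4., 5.]:
    rb.push(v)
print(rb.as_tensor())  # tensor([2., 3., 4., 5.])
```

The trade-off: pushes no longer allocate, but reading the queue out in order costs one `torch.roll`, so this pays off when pushes far outnumber reads.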
