Alternatives to numpy.gradient


I have been trying to find a robust function that can compute the gradient of a 3-dimensional array. numpy.gradient only supports second-order accuracy. Is there another way to compute the gradient with higher accuracy? Thanks.
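For reference, numpy.gradient's second-order behaviour on a 3-D array looks like this (a minimal sketch; the array and spacing are made up):

import numpy as np

f = np.random.rand(32, 32, 32)
# second-order central differences in the interior; edge_order=2 also uses
# second-order one-sided differences at the boundaries
dfdx, dfdy, dfdz = np.gradient(f, 0.1, edge_order=2)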


As an alternative to the excellent Theano mentioned by @Mehdi: autograd. - sascha
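Note that autograd differentiates Python functions rather than sampled arrays; a minimal sketch (the function here is made up for illustration):

import autograd.numpy as anp
from autograd import grad

def g(x):                # hypothetical scalar function
    return anp.sin(x) * x

dgdx = grad(g)
dgdx(1.5)                # derivative to machine precision, no stencil error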
3 Answers


In the end I found this: a fourth-order-accurate gradient.

Hopefully numpy will integrate this functionality too...

https://gist.github.com/deeplycloudy/1b9fa46d5290314d9be02a5156b48741

import numpy as np

def gradientO4(f, *varargs):
    """Calculate the fourth-order-accurate gradient of an N-dimensional scalar function.

    Uses fourth-order central differences on the interior and first differences
    on the boundaries, so the output has the same shape as f.

    Inputs:
      f -- An N-dimensional array giving samples of a scalar function
      varargs -- 0, 1, or N scalars giving the sample distances in each direction

    Outputs:
      N arrays of the same shape as f giving the derivative of f with respect
      to each dimension.
    """
    N = len(f.shape)  # number of dimensions
    n = len(varargs)
    if n == 0:
        dx = [1.0] * N
    elif n == 1:
        dx = [varargs[0]] * N
    elif n == N:
        dx = list(varargs)
    else:
        raise SyntaxError("invalid number of arguments")

    # use central differences on the interior and first differences on endpoints
    outvals = []

    # create slice objects -- initially all are [:, :, ..., :]
    slice0 = [slice(None)] * N
    slice1 = [slice(None)] * N
    slice2 = [slice(None)] * N
    slice3 = [slice(None)] * N
    slice4 = [slice(None)] * N

    otype = f.dtype.char
    if otype not in ['f', 'd', 'F', 'D']:
        otype = 'd'

    for axis in range(N):
        # select out appropriate parts for this dimension
        out = np.zeros(f.shape, otype)

        slice0[axis] = slice(2, -2)
        slice1[axis] = slice(None, -4)
        slice2[axis] = slice(1, -3)
        slice3[axis] = slice(3, -1)
        slice4[axis] = slice(4, None)
        # 1D equivalent -- out[2:-2] = (f[:-4] - 8*f[1:-3] + 8*f[3:-1] - f[4:])/12.0
        out[tuple(slice0)] = (f[tuple(slice1)] - 8.0 * f[tuple(slice2)]
                              + 8.0 * f[tuple(slice3)] - f[tuple(slice4)]) / 12.0

        slice0[axis] = slice(None, 2)
        slice1[axis] = slice(1, 3)
        slice2[axis] = slice(None, 2)
        # 1D equivalent -- out[0:2] = f[1:3] - f[0:2]
        out[tuple(slice0)] = f[tuple(slice1)] - f[tuple(slice2)]

        slice0[axis] = slice(-2, None)
        slice1[axis] = slice(-2, None)
        slice2[axis] = slice(-3, -1)
        # 1D equivalent -- out[-2:] = f[-2:] - f[-3:-1]
        out[tuple(slice0)] = f[tuple(slice1)] - f[tuple(slice2)]

        # divide by the step size
        outvals.append(out / dx[axis])

        # reset the slice objects in this dimension to ":"
        slice0[axis] = slice(None)
        slice1[axis] = slice(None)
        slice2[axis] = slice(None)
        slice3[axis] = slice(None)
        slice4[axis] = slice(None)

    if N == 1:
        return outvals[0]
    else:
        return outvals
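
A quick usage sketch (the grid and field below are made up for illustration):

x = np.linspace(0, 1, 32)
xx, yy, zz = np.meshgrid(x, x, x, indexing='ij')
f3d = np.sin(xx) * yy + zz ** 2           # hypothetical 3-D sample field
dfdx, dfdy, dfdz = gradientO4(f3d, x[1] - x[0])
# dfdx should approximate cos(xx) * yy to fourth order in the interior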

I found this: findiff, a Python package for finite-difference numerical derivatives and partial differential equations in any number of dimensions.

https://github.com/maroba/findiff
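
Not part of the original answer, but a minimal sketch of how findiff is used, following the package README; the grid, field, and accuracy order here are illustrative:

import numpy as np
from findiff import FinDiff

x = np.linspace(0, 1, 32)
dx = x[1] - x[0]
f = np.sin(x)[:, None, None] * np.ones((32, 32, 32))  # hypothetical 3-D field

d_dx = FinDiff(0, dx, 1, acc=4)  # first derivative along axis 0, 4th-order accurate
dfdx = d_dx(f)                   # same shape as f; should approximate cos(x)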

Cheers!

I suggest using a symbolic library called Theano (http://deeplearning.net/software/theano/). It is mainly designed for fields such as neural networks and deep learning, but fits your need quite well.
Once Theano is installed, here is simple code that computes the gradient of a 1-d vector. You can extend it to 3-d yourself.
    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    # scan over indices 0..n-2, emitting half the forward difference at each one
    J, updates = theano.scan(lambda i, x: (x[i + 1] - x[i]) / 2,
                             sequences=T.arange(x.shape[0] - 1),
                             non_sequences=[x])
    f = theano.function([x], J, updates=updates)

    f(np.array([1, 2, 4, 7, 11, 16], dtype='float32'))
    f(np.array([1, 2, 4, 7.12345, 11, 16], dtype='float32'))
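
If the field comes from an analytic expression rather than sampled data, Theano can also differentiate it symbolically via T.grad, with no finite-difference truncation error at all; a minimal sketch (the expression is made up):

    import theano
    import theano.tensor as T

    x = T.dvector('x')
    cost = T.sum(T.sin(x))       # hypothetical scalar expression of x
    g = T.grad(cost, x)          # exact symbolic gradient, here cos(x) elementwise
    grad_fn = theano.function([x], g)
    grad_fn([0.0, 1.0, 2.0])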

Looks very handy! I guess Theano users would have little incentive to hand-write all the higher-order differentiation themselves. - gastro
