Scipy minimize / Scipy curve_fit / lmfit


log(VA) = gamma - (1/eta) * log[alpha*L^(-eta) + beta*K^(-eta)]

I am trying to estimate the function above by nonlinear least squares. I used three different packages (scipy.optimize.minimize, scipy.optimize.curve_fit and lmfit's Model), but each one returns different parameter estimates, and I don't understand why. I would be very grateful if someone could explain this or suggest another way to do the fit.

SCIPY-MINIMIZE

import numpy as np
from scipy.optimize import minimize, curve_fit
from lmfit import Model, Parameters

L  = np.array([0.299, 0.295, 0.290, 0.284, 0.279, 0.273, 0.268, 0.262, 0.256, 0.250])
K  = np.array([2.954, 3.056, 3.119, 3.163, 3.215, 3.274, 3.351, 3.410, 3.446, 3.416])
VA = np.array([0.919, 0.727, 0.928, 0.629, 0.656, 0.854, 0.955, 0.981, 0.908, 0.794])

def f(param):
    gamma = param[0]
    alpha = param[1]
    beta  = param[2]
    eta   = param[3]
    # model prediction of log(VA); the objective is the sum of squared log-residuals
    VA_est = gamma - (1/eta)*np.log(alpha*L**-eta + beta*K**-eta)
    
    return np.sum((np.log(VA) - VA_est)**2)

bnds = [(1, np.inf), (0,1),(0,1),(-1, np.inf)]
x0 = (1,0.01,0.98, 1)

result = minimize(f, x0, bounds = bnds)

print(result.fun)
print(result.message)
print(result.x[0],result.x[1],result.x[2],result.x[3])

SCIPY-MINIMIZE - OUTPUT

0.30666062040617503
CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
1.0 0.5587147011643757 0.9371430857380681 5.873041615873815

SCIPY-CURVE_FIT

def f(X, gamma, alpha, beta, eta):
    L,K = X
  
    return gamma - (1/eta) * np.log(alpha*L**-eta + beta*K**-eta)

p0 = 1,0.01,0.98, 1

res, cov = curve_fit(f, (L, K), np.log(VA), p0,  bounds = ((1,0,0,-1),(np.inf,1,1,np.inf)) )
gamma, alpha, beta, eta = res[0],res[1],res[2],res[3] 
gamma, alpha, beta, eta

SCIPY-CURVE_FIT - OUTPUT

(1.000000000062141,
 0.26366547263939205,
 0.9804436474926481,
 13.449747863921704)

LMFIT-MODEL

def f(x, gamma, alpha, beta, eta):
    L = x[0]
    K = x[1]
    
    return gamma - (1/eta)*np.log(alpha*L**-eta + beta*K**-eta)

fmodel = Model(f)
params = Parameters()
params.add('gamma', value = 1,    vary=True, min = 1)
params.add('alpha', value = 0.01, vary=True, max = 1, min = 0)
params.add('beta',  value = 0.98, vary=True, max = 1, min = 0)
params.add('eta',   value = 1,    vary=True, min = -1)

result = fmodel.fit(np.log(VA), params, x=(L,K))
print(result.fit_report())

LMFIT-MODEL - OUTPUT
[[Model]]
    Model(f)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 103
    # data points      = 10
    # variables        = 4
    chi-square         = 0.31749840
    reduced chi-square = 0.05291640
    Akaike info crit   = -26.4986758
    Bayesian info crit = -25.2883354
##  Warning: uncertainties could not be estimated:
    gamma:  at initial value
    gamma:  at boundary
    alpha:  at boundary
[[Variables]]
    gamma:  1.00000000 (init = 1)
    alpha:  1.3245e-13 (init = 0.01)
    beta:   0.20130064 (init = 0.98)
    eta:    447.960413 (init = 1)
1 Answer

A fitting algorithm always searches for a local minimizer of the underlying least-squares problem. Note that your problem is convex but not strictly convex, so there is no unique global minimizer; however, every local minimizer is also a global one. By evaluating the first function f at each of the solutions found, we can observe that they all share the same objective value. Hence, each solution is a global minimizer.
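
One way to check this is to plug the parameter estimates reported by each package back into the first objective function f from the question. A minimal sketch, reusing the data arrays and the objective from the SCIPY-MINIMIZE block, with the parameter values copied from the three outputs above:

import numpy as np

L  = np.array([0.299, 0.295, 0.290, 0.284, 0.279, 0.273, 0.268, 0.262, 0.256, 0.250])
K  = np.array([2.954, 3.056, 3.119, 3.163, 3.215, 3.274, 3.351, 3.410, 3.446, 3.416])
VA = np.array([0.919, 0.727, 0.928, 0.629, 0.656, 0.854, 0.955, 0.981, 0.908, 0.794])

def f(param):
    # sum of squared residuals in log space, as in the question
    gamma, alpha, beta, eta = param
    VA_est = gamma - (1/eta)*np.log(alpha*L**-eta + beta*K**-eta)
    return np.sum((np.log(VA) - VA_est)**2)

# parameter estimates (gamma, alpha, beta, eta) as reported in the question
solutions = {
    "minimize (L-BFGS-B)": (1.0, 0.5587147011643757, 0.9371430857380681, 5.873041615873815),
    "curve_fit (TRF)":     (1.000000000062141, 0.26366547263939205, 0.9804436474926481, 13.449747863921704),
    "lmfit (leastsq)":     (1.0, 1.3245e-13, 0.20130064, 447.960413),
}

for name, p in solutions.items():
    print(name, f(p))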
Why does each method find a different minimizer? The reason is simple: each one uses a different algorithm to solve the underlying nonlinear optimization problem. For example, scipy.optimize.minimize uses the 'L-BFGS-B' algorithm, while scipy.optimize.curve_fit uses scipy.optimize.least_squares with the trust-region-reflective ('TRF') algorithm. In short, you can only expect different algorithms to converge to the same solution for a strictly convex problem.
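
For illustration (this sketch is not part of the original answer), the curve_fit result can also be obtained by calling scipy.optimize.least_squares directly with method='trf' on the same log-space residuals and bounds; curve_fit delegates to this solver when bounds are supplied, so the result should match the curve_fit output above up to solver tolerances. The sketch assumes numpy is imported as np and the data arrays L, K, VA from the question are already in scope:

from scipy.optimize import least_squares

def residuals(param):
    gamma, alpha, beta, eta = param
    # residuals in log space, matching the objective minimized in the question
    return np.log(VA) - (gamma - (1/eta)*np.log(alpha*L**-eta + beta*K**-eta))

res = least_squares(
    residuals,
    x0=(1, 0.01, 0.98, 1),                           # same start values as in the question
    bounds=((1, 0, 0, -1), (np.inf, 1, 1, np.inf)),  # same bounds as the curve_fit call
    method="trf",
)
print(res.x)         # gamma, alpha, beta, eta
print(2 * res.cost)  # sum of squared residuals (res.cost is half of it)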
