I am using scipy.optimize to minimize a function of 12 arguments.
I started the optimization a while ago and am still waiting for the result.
Is there a way to force scipy.optimize to display its progress
(e.g. how much is done already, what the current best point is)?
As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately, leastsq does not permit this at the moment). Below is an example using the fmin_bfgs routine, where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.
import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1  # iteration counter shared with the callback

def rosen(X):  # Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    # called by fmin_bfgs after each iteration with the current point Xi
    global Nfeval
    print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen,
              x0,
              callback=callbackF,
              maxiter=2000,
              full_output=True,
              retall=False)
The output looks like this:
Iter X1 X2 X3 f(X)
1 1.031582 1.062553 1.130971 0.005550
2 1.031100 1.063194 1.130732 0.004973
3 1.027805 1.055917 1.114717 0.003927
4 1.020343 1.040319 1.081299 0.002193
5 1.005098 1.009236 1.016252 0.000739
6 1.004867 1.009274 1.017836 0.000197
7 1.001201 1.002372 1.004708 0.000007
8 1.000124 1.000249 1.000483 0.000000
9 0.999999 0.999999 0.999998 0.000000
10 0.999997 0.999995 0.999989 0.000000
11 0.999997 0.999995 0.999989 0.000000
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 11
Function evaluations: 85
Gradient evaluations: 17
At least this way you can watch the optimizer track the minimum.
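Since leastsq does not accept a callback, one workaround (not from the original answer) is to print progress from inside the residual function itself. A minimal sketch; residuals_verbose, the toy linear model, and the print interval are all illustrative choices:

import numpy as np
from scipy.optimize import leastsq

neval = [0]  # mutable counter avoids a global

def residuals(p, x, y):
    # residuals of a toy linear model y = p[0]*x + p[1]
    return y - (p[0] * x + p[1])

def residuals_verbose(p, x, y):
    # leastsq offers no callback hook, so print every 50th evaluation here
    res = residuals(p, x, y)
    if neval[0] % 50 == 0:
        print(neval[0], p, np.sum(res**2))
    neval[0] += 1
    return res

x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0
p_opt, _ = leastsq(residuals_verbose, [0.0, 0.0], args=(x, y))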
Following up on @joel's example, there is a neat and efficient way to do a similar thing. The following example shows how to get rid of the global variable, the call_back function, and re-evaluating the objective function multiple times.
import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info):  # Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
          (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

    # display information every 100th evaluation
    if info['Nfeval'] % 100 == 0:
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen,
              x0,
              args=({'Nfeval': 0},),  # mutable dict carries the evaluation counter
              maxiter=1000,
              full_output=True,
              retall=False,
              )
This will generate output like:
Iter X1 X2 X3 f(X)
0 1.100000 1.100000 1.100000 2.440000
100 1.000000 0.999999 0.999998 0.000000
200 1.000000 0.999999 0.999998 0.000000
300 1.000000 0.999999 0.999998 0.000000
400 1.000000 0.999999 0.999998 0.000000
500 1.000000 0.999999 0.999998 0.000000
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.000000
Iterations: 12
Function evaluations: 502
Gradient evaluations: 98
There is no free lunch here: I am counting by the number of function evaluations rather than by algorithmic iterations, since some algorithms evaluate the objective function several times within a single iteration.
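To see that difference concretely, the OptimizeResult returned by scipy.optimize.minimize reports both counts; a minimal sketch:

from scipy.optimize import minimize, rosen

res = minimize(rosen, [1.1, 1.1, 1.1], method='BFGS')
# nfev counts objective evaluations, nit counts iterations;
# nfev is usually much larger because each iteration may evaluate
# the function several times (line searches, finite-difference gradients)
print(res.nit, res.nfev)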
Try using:
options={'disp': True}
to force scipy.optimize.minimize to print intermediate results.
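For instance, a minimal sketch (note that for most methods this prints a convergence summary at the end rather than per-iteration progress):

from scipy.optimize import minimize, rosen

res = minimize(rosen, [1.1, 1.1, 1.1], method='BFGS',
               options={'disp': True})
# prints the termination message, final function value,
# and the iteration/evaluation counts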
With disp: True set in scipy.optimize.brute I get no output either - just the end of the minimization, as @Juanjo says. - tsando
I ran into similarly odd behaviour with differential_evolution, though it may be related to how I was saving intermediate results --> https://github.com/scipy/scipy/issues/10325#event-2422335734 - jjrr
Many of the optimizers in scipy lack verbose output (the 'trust-constr' method of scipy.optimize.minimize is an exception). I faced a similar issue and solved it by creating a wrapper around the objective function and using the callback function. No additional function evaluations are performed here, so this should be an efficient solution.
import numpy as np
class Simulator:
    def __init__(self, function):
        self.f = function  # actual objective function
        self.num_calls = 0  # how many times f has been called
        self.callback_count = 0  # number of times callback has been called, also measures iteration count
        self.list_calls_inp = []  # input of all calls
        self.list_calls_res = []  # result of all calls
        self.decreasing_list_calls_inp = []  # input of calls that resulted in decrease
        self.decreasing_list_calls_res = []  # result of calls that resulted in decrease
        self.list_callback_inp = []  # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = []  # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x, *args):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x, *args)  # the actual evaluation of the function
        if not self.num_calls:  # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument. Pass
        to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1
A simple test can be defined:
from scipy.optimize import minimize, rosen
ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})
print(f"Number of calls to Simulator instance {ros_sim.num_calls}")
Comp-1 Comp-2 Objective
1.76348e-01 -1.31390e-07 7.75116e-01
2.85778e-01 4.49433e-02 6.44992e-01
3.14130e-01 9.14198e-02 4.75685e-01
4.26061e-01 1.66413e-01 3.52251e-01
5.47657e-01 2.69948e-01 2.94496e-01
5.59299e-01 3.00400e-01 2.09631e-01
6.49988e-01 4.12880e-01 1.31733e-01
7.29661e-01 5.21348e-01 8.53096e-02
7.97441e-01 6.39950e-01 4.26607e-02
8.43948e-01 7.08872e-01 2.54921e-02
8.73649e-01 7.56823e-01 2.01121e-02
9.05079e-01 8.12892e-01 1.29502e-02
9.38085e-01 8.78276e-01 4.13206e-03
9.73116e-01 9.44072e-01 1.55308e-03
9.86552e-01 9.73498e-01 1.85366e-04
9.99529e-01 9.98598e-01 2.14298e-05
9.99114e-01 9.98178e-01 1.04837e-06
9.99913e-01 9.99825e-01 7.61051e-09
9.99995e-01 9.99989e-01 2.83979e-11
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 19
Function evaluations: 96
Gradient evaluations: 24
Number of calls to Simulator instance 96
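As a side note on the 'trust-constr' exception mentioned above: that method has a built-in verbose option. A hedged sketch (the exact output format depends on your scipy version):

from scipy.optimize import minimize, rosen

# 'trust-constr' prints per-iteration progress itself;
# verbose=2 prints a table of iterates, verbose=3 adds more detail
res = minimize(rosen, [0.0, 0.0], method='trust-constr',
               options={'verbose': 2})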
… def simulate(self, x, *args) and result = self.f(x, *args). - Antoine Collet
Here is the original answer:
from scipy import optimize

def f_(x):  # the Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]

    def store(X):  # callback: record each iterate
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))

    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i
For example:
conjugate_gradient([2, -1], f_)
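The returned lists hold the full trajectory, so you can inspect how the objective decreased afterwards. A hedged usage sketch:

all_x_i, all_y_i, all_f_i = conjugate_gradient([2, -1], f_)
# one entry per iteration: current point and objective value
for x, y, fv in zip(all_x_i, all_y_i, all_f_i):
    print(f"{x: .6f}  {y: .6f}  {fv: .6e}")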
It is also possible to include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper.
import numpy as np
from scipy.optimize import minimize

def rosen(X):  # Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, x0)
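If the objective is imported from elsewhere and cannot be edited, a minimal wrapper along the lines the answer suggests (here using scipy's bundled rosen as the stand-in for the imported function):

from scipy.optimize import minimize, rosen  # imported objective

def rosen_verbose(X):
    # wrapper: print each evaluation point, then delegate
    print(X)
    return rosen(X)

minimize(rosen_verbose, [1.1, 1.1, 1.1])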
Here you go! (Note: in most cases, global variables are bad practice.)
from scipy.optimize import minimize
import numpy as np

f = lambda x, b=.1: x[0]**2 + b * x[1]**2
x0 = np.array([2., 2.])
P = [x0]

def save(x):  # callback: append each iterate to the global list
    global P
    P.append(x)

minimize(f, x0=x0, callback=save)
fun: 4.608946876190852e-13
hess_inv: array([[4.99995194e-01, 3.78976566e-04],
[3.78976566e-04, 4.97011817e+00]])
jac: array([ 5.42429092e-08, -4.27698767e-07])
message: 'Optimization terminated successfully.'
nfev: 24
nit: 7
njev: 8
status: 0
success: True
x: array([ 1.96708740e-08, -2.14594442e-06])
print(P)
[array([2., 2.]),
array([0.99501244, 1.89950125]),
array([-0.0143533, 1.4353279]),
array([-0.0511755 , 1.11283405]),
array([-0.03556007, 0.39608524]),
array([-0.00393046, -0.00085631]),
array([-0.00053407, -0.00042556]),
array([ 1.96708740e-08, -2.14594442e-06])]
Have you tried the callback parameter? - mg007
… x f g values, which you can then write to a file for plotting. - denis
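A minimal sketch of denis's suggestion (the file name and line format are illustrative): log each iterate from the callback to a file, then plot it later:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([1.1, 1.1, 1.1])
with open("progress.log", "w") as fh:
    def log_point(xk):
        # one line per iteration: x, f(x), |grad f(x)|
        fh.write(f"{xk}  {rosen(xk):.6e}  {np.linalg.norm(rosen_der(xk)):.6e}\n")
    minimize(rosen, x0, jac=rosen_der, callback=log_point)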