As noted in the comments: neither the docs nor the source code indicate that this is supported in sklearn!
I just tried the alternative of using off-the-shelf convex-optimization solvers. This is only a simple prototype approach and might not be a good fit for your task, which is not fully defined (sample size?).
Some comments:
- implementation / model formulation is easy
- the problem is harder to solve than I thought
- solver ECOS has general problems
- solver SCS reaches good accuracy (though worse than sklearn's)
- but: tuning the number of iterations to improve accuracy breaks the solver
- the problem then becomes infeasible for SCS!
- SCS + a bigM-based formulation (the constraint posted as a penalty inside the objective; see the 2nd cvxpy approach below) looks usable, but might need tuning
- only open-source solvers were tested; commercial ones might be better.
Things to try further:
- For solving huge problems (where performance matters more than robustness and accuracy), (accelerated) projected stochastic gradient methods look promising; a rough sketch follows this list.
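To make that last point concrete, here is a minimal, self-contained sketch of a (non-accelerated) projected stochastic gradient loop for min 0.5/m * ||Ax - b||^2 s.t. sum(x) == 1. The function name, step size and batch size are illustrative and untuned, not part of the original experiment. The Euclidean projection onto the hyperplane {sum(x) == 1} has a closed form; with the nonnegativity constraint discussed in Edit 2 below, you would swap in a simplex projection instead.

import numpy as np

def projected_sgd(A, b, lr=1e-3, epochs=100, batch=32, seed=0):
    # sketch: minimize 0.5/m * ||Ax - b||^2  s.t.  sum(x) == 1
    # (illustrative hyperparameters; no momentum, no line-search)
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.full(n, 1.0 / n)                                      # feasible start
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(m), max(m // batch, 1)):
            g = A[idx].T.dot(A[idx].dot(x) - b[idx]) / len(idx)  # mini-batch gradient
            x = x - lr * g                                       # gradient step
            x = x - (x.sum() - 1.0) / n                          # closed-form projection onto {sum(x) == 1}
    return x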
Code
""" data """
from time import perf_counter as pc
import numpy as np
from sklearn import datasets
diabetes = datasets.load_diabetes()
A = diabetes.data
y = diabetes.target
alpha=0.1
print('Problem-size: ', A.shape)
def obj(x):
    return (1. / (2*A.shape[0])) * np.square(np.linalg.norm(A.dot(x) - y, 2)) + alpha * np.linalg.norm(x, 1)
""" sklearn """
print('\nsklearn classic l1')
from sklearn import linear_model
clf = linear_model.Lasso(alpha=alpha, fit_intercept=False)
t0 = pc()
clf.fit(A, y)
print('used (secs): ', pc() - t0)
print(obj(clf.coef_))
print('sum x: ', np.sum(clf.coef_))
""" cvxpy """
print('\ncvxpy + scs classic l1')
from cvxpy import *  # written against an older cvxpy API; with cvxpy >= 1.0, prefer A @ x over A*x below
x = Variable(A.shape[1])
objective = Minimize((1. / (2*A.shape[0])) * sum_squares(A*x - y) + alpha * norm(x, 1))
problem = Problem(objective, [])
t0 = pc()
problem.solve(solver=SCS, use_indirect=False, max_iters=10000, verbose=False)
print('used (secs): ', pc() - t0)
print(obj(x.value.flat))
print('sum x: ', np.sum(x.value.flat))
""" cvxpy -> sum x == 1 """
print('\ncvxpy + scs sum == 1 / 1st approach')
objective = Minimize((1. / (2*A.shape[0])) * sum_squares(A*x - y))
constraints = [sum(x) == 1]
problem = Problem(objective, constraints)
t0 = pc()
problem.solve(solver=SCS, use_indirect=False, max_iters=10000, verbose=False)
print('used (secs): ', pc() - t0)
print(obj(x.value.flat))
print('sum x: ', np.sum(x.value.flat))
""" cvxpy approach 2 -> sum x == 1 """
print('\ncvxpy + scs sum == 1 / 2nd approach')
M = 1e6  # bigM: the sum-constraint is additionally penalized inside the objective
objective = Minimize((1. / (2*A.shape[0])) * sum_squares(A*x - y) + M*(sum(x) - 1))
constraints = [sum(x) == 1]
problem = Problem(objective, constraints)
t0 = pc()
problem.solve(solver=SCS, use_indirect=False, max_iters=10000, verbose=False)
print('used (secs): ', pc() - t0)
print(obj(x.value.flat))
print('sum x: ', np.sum(x.value.flat))
Output
Problem-size: (442, 10)
sklearn classic l1
used (secs): 0.001451024380348898
13201.3508496
sum x: 891.78869298
cvxpy + scs classic l1
used (secs): 0.011165673357417458
13203.6549995
sum x: 872.520510561
cvxpy + scs sum == 1 / 1st approach
used (secs): 0.15350853891775978
13400.1272148
sum x: -8.43795102327
cvxpy + scs sum == 1 / 2nd approach
used (secs): 0.012579569383536493
13397.2932976
sum x: 1.01207061047
Edit
Just for fun, I implemented a very slow, non-optimized prototype solver using the approach of accelerated projected gradient (remarks in the code!).
Despite being slow here (since it is not optimized), this first-order method should scale much better to huge problems. There should be a lot of potential!
Warning: might be seen as advanced numerical optimization by some people :-).
Edit 2: I forgot to add the nonnegativity constraint to the projection (sum(x) == 1 does not make much sense if x can be negative!). This makes solving much harder (numerical trouble), and it is obvious that one of those fast specialized projection algorithms should be used (I'm too lazy right now; I think an n*log(n) algorithm is available; a sketch follows right below). Again: this APG solver is a prototype and not ready for real tasks.
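For reference, that n*log(n) projection is short to write down; below is a minimal sketch of the classic sort-based Euclidean projection onto the probability simplex (the function name project_simplex is mine; see e.g. Duchi et al., 2008). Swapping it in for the cvxpy-based projection inside solve_pg below would remove the per-iteration solver call.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) == 1},
    # via the classic sort-based O(n log n) algorithm
    u = np.sort(v)[::-1]                                   # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)                 # optimal shift
    return np.maximum(v - theta, 0.0)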
Code
""" accelerated pg -> sum x == 1 """
def solve_pg(A, b, momentum=0.9, maxiter=1000):
""" remarks:
algorithm: accelerated projected gradient
projection: proj on probability-simplex
-> naive and slow using cvxpy + ecos
line-search: armijo-rule along projection-arc (Bertsekas book)
-> suffers from slow projection
stopping-criterion: naive
gradient-calculation: precomputes AtA
-> not needed and not recommended for huge sparse data!
"""
M, N = A.shape
x = np.zeros(N)
AtA = A.T.dot(A)
Atb = A.T.dot(b)
stop_count = 0
x_ = Variable(N)
v_ = Parameter(N)
objective_ = Minimize(0.5 * square(norm(x_ - v_, 2)))
constraints_ = [sum(x_) == 1]
problem_ = Problem(objective_, constraints_)
def gradient(x):
return AtA.dot(x) - Atb
def obj(x):
return 0.5 * np.linalg.norm(A.dot(x) - b)**2
it = 0
while True:
grad = gradient(x)
alpha = 1
beta = 0.5
sigma=1e-2
old_obj = obj(x)
while True:
new_x = x - alpha * grad
new_obj = obj(new_x)
if old_obj - new_obj >= sigma * grad.dot(x - new_x):
break
else:
alpha *= beta
x_old = x[:]
x = x - alpha*grad
v_.value = x
problem_.solve()
x = np.array(x_.value.flat)
y = x + momentum * (x - x_old)
if np.abs(old_obj - obj(x)) < 1e-2:
stop_count += 1
else:
stop_count = 0
if stop_count == 3:
print('early-stopping @ it: ', it)
return x
it += 1
if it == maxiter:
return x
print('\n acc pg')
t0 = pc()
x = solve_pg(A, y)
print('used (secs): ', pc() - t0)
print(obj(x))
print('sum x: ', np.sum(x))
Output
acc pg
early-stopping @ it: 367
used (secs): 0.7714511330487027
13396.8642379
sum x: 1.00000000002