Why doesn't the regularization in PyTorch match my from-scratch code, and what regularization formula does PyTorch use?


I have been trying to apply L2 regularization to a binary classification model in PyTorch, but when I compare the PyTorch results with those from my from-scratch code, they do not match.

PyTorch code:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class LogisticRegression(nn.Module):
  def __init__(self, n_input_features):
    super(LogisticRegression, self).__init__()
    self.linear = nn.Linear(n_input_features, 1)
    # start from all-zero parameters so both implementations begin identically
    self.linear.weight.data.fill_(0.0)
    self.linear.bias.data.fill_(0.0)

  def forward(self,x):
    y_predicted=torch.sigmoid(self.linear(x))
    return y_predicted

model=LogisticRegression(4)

criterion=nn.BCELoss()
optimizer=torch.optim.SGD(model.parameters(),lr=0.05,weight_decay=0.1)
dataset=Data()  # custom Dataset (definition not shown)
train_data=DataLoader(dataset=dataset,batch_size=1096,shuffle=False)

num_epochs=1000
for epoch in range(num_epochs):
  for x,y in train_data:
    y_pred=model(x)
    loss=criterion(y_pred,y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

From-scratch code:

import numpy as np

def sigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s

def yinfer(X, beta):
  return sigmoid(beta[0] + np.dot(X,beta[1:]))

def cost(X, Y, beta, lam):
    sum = 0
    sum1 = 0
    n = len(beta)
    m = len(Y)
    for i in range(m): 
        sum = sum + Y[i]*(np.log( yinfer(X[i],beta)))+ (1 -Y[i])*np.log(1-yinfer(X[i],beta))
    for i in range(0, n): 
        sum1 = sum1 + beta[i]**2
        
    return  (-sum + (lam/2) * sum1)/(1.0*m)

def pred(X,beta):
  if ( yinfer(X, beta) > 0.5):
    ypred = 1
  else :
    ypred = 0
  return ypred

beta = np.zeros(5)
lam = 0.1     # L2 strength (same as weight_decay in the PyTorch code)
alpha = 0.05  # learning rate (same as lr in the PyTorch code)
iterations = 1000
arr_cost = np.zeros((iterations,4))
print(beta)
n = len(Y_train)
for i in range(iterations):
    Y_prediction_train=np.zeros(len(Y_train))
    Y_prediction_test=np.zeros(len(Y_test)) 

    for l in range(len(Y_train)):
        Y_prediction_train[l]=pred(X[l,:],beta)
    
    for l in range(len(Y_test)):
        Y_prediction_test[l]=pred(X_test[l,:],beta)
    
    train_acc = 100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100
    test_acc = 100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100   
    arr_cost[i,:] = [i,cost(X,Y_train,beta,lam),train_acc,test_acc]
    temp_beta = np.zeros(len(beta))

    ''' main code from below '''

    for j in range(n): 
        temp_beta[0] = temp_beta[0] + yinfer(X[j,:], beta) - Y_train[j]
        temp_beta[1:] = temp_beta[1:] + (yinfer(X[j,:], beta) - Y_train[j])*X[j,:]
    
    for k in range(0, len(beta)):
        temp_beta[k] = temp_beta[k] +  lam * beta[k]  #regularization here
    
    temp_beta= temp_beta / (1.0*n)
    
    beta = beta - alpha*temp_beta

Loss plot

Training accuracy plot

Test accuracy plot

Can someone please tell me why this happens? L2 value = 0.1


Could you share the code of the PyTorch model? - Girish Hegde
@GirishDattatrayHegde I have edited the code and added the model. - Rest1ve
1 Answer

Great question. I dug through the PyTorch documentation quite a bit and found the answer: it is rather subtle. Basically, there are two ways to compute the regularization. (Skip to the last section if you just want the summary.)
PyTorch uses the first type (where the regularization factor is not divided by the batch size). Here is sample code that demonstrates it:
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import torch.optim as optim
 
class model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)
        self.linear.weight.data.fill_(1.0)
        self.linear.bias.data.fill_(1.0)

    def forward(self, x):
        return self.linear(x)


model     = model()
optimizer = optim.SGD(model.parameters(), lr=0.1, weight_decay=1.0)

input     = torch.tensor([[2], [4]], dtype=torch.float32)
target    = torch.tensor([[7], [11]], dtype=torch.float32)

optimizer.zero_grad()
pred      = model(input)
loss      = F.mse_loss(pred, target)

print(f'input: {input[0].data, input[1].data}')
print(f'prediction: {pred[0].data, pred[1].data}')
print(f'target: {target[0].data, target[1].data}')

print(f'\nMSEloss: {loss.item()}\n')

loss.backward()

print('Before updation:')
print('--------------------------------------------------------------------------')
print(f'weight [data, gradient]: {model.linear.weight.data, model.linear.weight.grad}')
print(f'bias [data, gradient]: {model.linear.bias.data, model.linear.bias.grad}')
print('--------------------------------------------------------------------------')
 
optimizer.step()

print('After updation:')
print('--------------------------------------------------------------------------')
print(f'weight [data]: {model.linear.weight.data}')
print(f'bias [data]: {model.linear.bias.data}')
print('--------------------------------------------------------------------------')

This outputs:

input: (tensor([2.]), tensor([4.]))
prediction: (tensor([3.]), tensor([5.]))
target: (tensor([7.]), tensor([11.]))

MSEloss: 26.0

Before updation:
--------------------------------------------------------------------------
weight [data, gradient]: (tensor([[1.]]), tensor([[-32.]]))
bias [data, gradient]: (tensor([1.]), tensor([-10.]))
--------------------------------------------------------------------------
After updation:
--------------------------------------------------------------------------
weight [data]: tensor([[4.1000]])
bias [data]: tensor([1.9000])
--------------------------------------------------------------------------

Here, m = batch size = 2, lr = learning rate = 0.1, and lambda = weight decay = 1.

Now consider the weight tensor, whose value is 1 and whose gradient is -32.

Case 1 (type 1 regularization):

 weight = weight - lr * (grad + weight_decay * weight)
 weight = 1 - 0.1 * (-32 + 1 * 1)
 weight = 4.1

Case 2 (type 2 regularization):

 weight = weight - lr * (grad + (weight_decay / batch_size) * weight)
 weight = 1 - 0.1 * (-32 + (1/2) * 1)
 weight = 4.15

In the output we can see that the updated weight is 4.1000, which shows that PyTorch uses type 1 regularization.
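
As a quick sanity check, here is a minimal NumPy sketch (written just for this answer, not part of either code base) that applies both update rules to the numbers above:

import numpy as np

# values taken from the demo output above
w, grad = 1.0, -32.0               # weight and its MSE gradient
lr, weight_decay, m = 0.1, 1.0, 2  # learning rate, lambda, batch size

# type 1: the weight-decay term is NOT divided by the batch size
w_type1 = w - lr * (grad + weight_decay * w)

# type 2: the weight-decay term IS divided by the batch size
w_type2 = w - lr * (grad + (weight_decay / m) * w)

print(w_type1)  # 4.1  -> matches the weight PyTorch prints after optimizer.step()
print(w_type2)  # 4.15

Only the type 1 result reproduces PyTorch's 4.1000.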

So in your code you are following type 2 regularization. Just change the last few lines to:

# for k in range(0, len(beta)):
#    temp_beta[k] = temp_beta[k] +  lam * beta[k]  #regularization here

temp_beta= temp_beta / (1.0*n)

beta = beta - alpha*(temp_beta + lam * beta)
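
Conversely, if you would rather make PyTorch match your original type 2 formulation, a minimal sketch (reusing the model, criterion, train_data, lam and the full batch size from your question) is to set weight_decay=0 and add the penalty to the loss yourself:

lam, m = 0.1, 1096  # L2 strength and batch size from the question

optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=0.0)

for x, y in train_data:
    y_pred = model(x)
    # BCE (already averaged over the batch) plus (lam / (2*m)) * sum of squared parameters
    l2 = sum((p ** 2).sum() for p in model.parameters())
    loss = criterion(y_pred, y) + (lam / (2 * m)) * l2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()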

Also, PyTorch's loss functions do not include the regularization term (it is implemented inside the optimizer), so remove the regularization term from your custom cost function as well.
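
For reference, a minimal sketch of your cost function with the L2 term dropped (so it reports the same quantity as nn.BCELoss) could look like this:

def cost(X, Y, beta):
    # plain binary cross-entropy averaged over the m samples;
    # the L2 penalty now lives only in the gradient-update step
    m = len(Y)
    total = 0.0
    for i in range(m):
        p = yinfer(X[i], beta)
        total += Y[i] * np.log(p) + (1 - Y[i]) * np.log(1 - p)
    return -total / m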
To summarize:
1. PyTorch uses the following regularization (https://istack.dev59.com/AT2oH.webp): weight = weight - lr * (grad + weight_decay * weight), i.e. the weight-decay term is not divided by the batch size.
2. The regularization is already implemented inside the optimizer (the weight_decay parameter).
3. PyTorch's loss functions do not include the regularization term.
4. With weight_decay, the bias is regularized as well (see the sketch below for how to exclude it).
5. To use regularization, try: torch.optim.optimizer_name(model.parameters(), lr=lr, weight_decay=lam).
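
Regarding point 4: if you want weight decay on the weights but not on the bias, one common option is optimizer parameter groups. A minimal sketch, assuming the LogisticRegression model from the question:

optimizer = torch.optim.SGD(
    [
        # only the weight matrix gets the L2 penalty
        {"params": [model.linear.weight], "weight_decay": 0.1},
        # the bias is left unregularized
        {"params": [model.linear.bias], "weight_decay": 0.0},
    ],
    lr=0.05,
)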
