Help me implement the backpropagation algorithm in Python


Edit 2:

New training set...

Inputs:

[
 [0.0, 0.0], 
 [0.0, 1.0], 
 [0.0, 2.0], 
 [0.0, 3.0], 
 [0.0, 4.0], 
 [1.0, 0.0], 
 [1.0, 1.0], 
 [1.0, 2.0], 
 [1.0, 3.0], 
 [1.0, 4.0], 
 [2.0, 0.0], 
 [2.0, 1.0], 
 [2.0, 2.0], 
 [2.0, 3.0], 
 [2.0, 4.0], 
 [3.0, 0.0], 
 [3.0, 1.0], 
 [3.0, 2.0], 
 [3.0, 3.0], 
 [3.0, 4.0],
 [4.0, 0.0], 
 [4.0, 1.0], 
 [4.0, 2.0], 
 [4.0, 3.0], 
 [4.0, 4.0]
]

Outputs:

[
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [1.0], 
 [1.0], 
 [0.0], 
 [0.0], 
 [0.0], 
 [1.0], 
 [1.0]
]

Edit 1:

I have updated my code. I fixed some minor issues, but after the network learns, the output is still the same for every input combination.

The backpropagation algorithm is explained here: backpropagation algorithm


Yes, this is a homework assignment: implement a simple backpropagation algorithm on a simple kind of neural network.

I chose Python as the language, and a neural network like this:

3 layers: 1 input layer, 1 hidden layer and 1 output layer:

O         O

                    O

O         O

Each input neuron holds an integer, and the output neuron holds a 1 or a 0.
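For reference, the forward pass of such a 2-2-1 sigmoid network can be sketched as below. This is a minimal standalone sketch; the weights are arbitrary illustrative values, not values from my network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_ih, w_ho):
    # w_ih[j][i] is the weight from input i to hidden neuron j,
    # w_ho[j] is the weight from hidden neuron j to the single output
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_ih]
    return sigmoid(sum(w * h for w, h in zip(w_ho, hidden)))

# arbitrary illustrative weights
print(forward([1.0, 2.0], [[0.5, -0.3], [-0.2, 0.4]], [0.7, -0.1]))
```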

Here is my full implementation (it's a bit long). Below it, I will pick out some shorter, relevant snippets where I think the mistake might be:

import math
import random

#------------------------------ class definitions

class Weight:
    def __init__(self, fromNeuron, toNeuron):
        self.value = random.uniform(-0.5, 0.5)
        self.fromNeuron = fromNeuron
        self.toNeuron = toNeuron
        fromNeuron.outputWeights.append(self)
        toNeuron.inputWeights.append(self)
        self.delta = 0.0 # delta value; this accumulates and after each training cycle is used to adjust the weight value

    def calculateDelta(self, network):
        self.delta += self.fromNeuron.value * self.toNeuron.error

class Neuron:
    def __init__(self):
        self.value = 0.0        # the output
        self.idealValue = 0.0   # the ideal output
        self.error = 0.0        # error between output and ideal output
        self.inputWeights = []
        self.outputWeights = []

    def activate(self, network):
        x = 0.0
        for weight in self.inputWeights:
            x += weight.value * weight.fromNeuron.value
        # sigmoid function
        if x < -320:
            self.value = 0
        elif x > 320:
            self.value = 1
        else:
            self.value = 1 / (1 + math.exp(-x))

class Layer:
    def __init__(self, neurons):
        self.neurons = neurons

    def activate(self, network):
        for neuron in self.neurons:
            neuron.activate(network)

class Network:
    def __init__(self, layers, learningRate):
        self.layers = layers
        self.learningRate = learningRate # the rate at which the network learns
        self.weights = []
        for hiddenNeuron in self.layers[1].neurons:
            for inputNeuron in self.layers[0].neurons:
                self.weights.append(Weight(inputNeuron, hiddenNeuron))
            for outputNeuron in self.layers[2].neurons:
                self.weights.append(Weight(hiddenNeuron, outputNeuron))

    def setInputs(self, inputs):
        self.layers[0].neurons[0].value = float(inputs[0])
        self.layers[0].neurons[1].value = float(inputs[1])

    def setExpectedOutputs(self, expectedOutputs):
        self.layers[2].neurons[0].idealValue = expectedOutputs[0]

    def calculateOutputs(self, expectedOutputs):
        self.setExpectedOutputs(expectedOutputs)
        self.layers[1].activate(self) # activation function for hidden layer
        self.layers[2].activate(self) # activation function for output layer        

    def calculateOutputErrors(self):
        for neuron in self.layers[2].neurons:
            neuron.error = (neuron.idealValue - neuron.value) * neuron.value * (1 - neuron.value)

    def calculateHiddenErrors(self):
        for neuron in self.layers[1].neurons:
            error = 0.0
            for weight in neuron.outputWeights:
                error += weight.toNeuron.error * weight.value
            neuron.error = error * neuron.value * (1 - neuron.value)

    def calculateDeltas(self):
        for weight in self.weights:
            weight.calculateDelta(self)

    def train(self, inputs, expectedOutputs):
        self.setInputs(inputs)
        self.calculateOutputs(expectedOutputs)
        self.calculateOutputErrors()
        self.calculateHiddenErrors()
        self.calculateDeltas()

    def learn(self):
        for weight in self.weights:
            weight.value += self.learningRate * weight.delta

    def calculateSingleOutput(self, inputs):
        self.setInputs(inputs)
        self.layers[1].activate(self)
        self.layers[2].activate(self)
        #return round(self.layers[2].neurons[0].value, 0)
        return self.layers[2].neurons[0].value


#------------------------------ initialize objects etc


inputLayer = Layer([Neuron() for n in range(2)])
hiddenLayer = Layer([Neuron() for n in range(100)])
outputLayer = Layer([Neuron() for n in range(1)])

learningRate = 0.5

network = Network([inputLayer, hiddenLayer, outputLayer], learningRate)

# just for debugging, the real training set is much larger
trainingInputs = [
    [0.0, 0.0],
    [1.0, 0.0],
    [2.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [2.0, 1.0],
    [0.0, 2.0],
    [1.0, 2.0],
    [2.0, 2.0]
]
trainingOutputs = [
    [0.0],
    [1.0],
    [1.0],
    [0.0],
    [1.0],
    [0.0],
    [0.0],
    [0.0],
    [1.0]
]

#------------------------------ let's train

for i in range(500):
    for j in range(len(trainingOutputs)):
        network.train(trainingInputs[j], trainingOutputs[j])
        network.learn()

#------------------------------ let's check


for pattern in trainingInputs:
    print network.calculateSingleOutput(pattern)

The problem now is that, after learning, the network seems to return a float very close to 0.0 for every input combination, even for the combinations whose output should be close to 1.0.
I train the network for 100 cycles, and in each cycle I do the following for each set of inputs in the training set:

- set the network inputs
- calculate the outputs using the sigmoid function
- calculate the errors in the output layer
- calculate the errors in the hidden layer
- calculate the deltas of the weights

Then I adjust the weights based on the learning rate and the accumulated deltas.
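Condensed to a single sigmoid neuron and a single training pattern (so not my actual network, just the same cycle), the procedure looks like this, and it does drive the output toward the target:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# one sigmoid neuron, two inputs, one training pattern
w = [0.1, -0.2]
inputs, ideal = [1.0, 2.0], 1.0
rate = 0.5

for cycle in range(200):
    # set inputs / calculate the output with the sigmoid function
    out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)))
    # calculate the error term of the (single-neuron) output layer
    error = (ideal - out) * out * (1 - out)
    # calculate the deltas of the weights, then adjust by the learning rate
    w = [wi + rate * xi * error for wi, xi in zip(w, inputs)]

final = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)))
print(final)  # moves toward 1.0 over the cycles
```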
Here is the activation function of my neurons:
def activationFunction(self, network):
    """
    Calculate an activation function of a neuron which is a sum of all input weights * neurons where those weights start
    """
    x = 0.0
    for weight in self.inputWeights:
        x += weight.value * weight.getFromNeuron(network).value
    # sigmoid function
    self.value = 1 / (1 + math.exp(-x))

Here is how I calculate the deltas:
def calculateDelta(self, network):
    self.delta += self.getFromNeuron(network).value * self.getToNeuron(network).error

Here is the general flow of my algorithm:
for i in range(numberOfIterations):
    for k,expectedOutput in trainingSet.iteritems():
        coordinates = k.split(",")
        network.setInputs((float(coordinates[0]), float(coordinates[1])))
        network.calculateOutputs([float(expectedOutput)])
        network.calculateOutputErrors()
        network.calculateHiddenErrors()
        network.calculateDeltas()
    oldWeights = [weight.value for weight in network.weights] # copy the values; network.weights alone would just alias the same objects
    network.adjustWeights()
    network.resetDeltas()
    print "Iteration ", i
    j = 0
    for weight in network.weights:
        print "Weight W", weight.i, weight.j, ": ", oldWeights[j], " ............ Adjusted value : ", weight.value
        j += 1

The last two lines of the output are:
0.552785449458 # this should be close to 1
0.552785449458 # this should be close to 0

In fact, it returns that same output number for every input combination.

Am I missing something?


I think you need to do some more work on this yourself -- this code is beyond the scale that people can reasonably debug. Add logging statements in all the important places to track the edge weights, and step through the numbers with a calculator to see where they become inconsistent. - Katriel
Read this link: https://dev59.com/1nA65IYBdhLWcg3wqAWt. For Bayesian filters this is a standard problem with a standard solution. You appear to be hitting the same standard problem with very small floating-point numbers. - S.Lott
@S.Lott: the problem can't come from there, because the OP is already using logarithms for the weights, which is why math.exp is needed. That leads to another issue: Python raises an exception when x becomes too small or too large, but that is unrelated to the spurious behaviour observed (it's just an ordinary bug). - kriss
Just add self.layers[2].runActivationFunctionForAllNeurons(self) in calculateSingleOutput and it works. But beyond fixing the bug, convergence is worse than in the first version after your edit, which is surprising. I don't know which change caused that effect. - kriss
Yes, it works now. I added self.layers[1].runActivationFunctionForAllNeurons(self) in the calculateSingleOutput method. But learning is a bit slow; I expected the learning process to be faster. - Richard Knop
1 Answer

It looks like what you are getting is the initial state of the neuron (almost self.idealValue). Maybe that neuron should not be initialized before real data is supplied?

EDIT: OK, I dug into the code a bit and simplified it (the simplified version is posted below). Basically your code has two small bugs (things you apparently overlooked), but they lead to a network that absolutely cannot work.

- In the learning phase, you forgot to set the expectedOutput value in the output layer. Without it the network could not possibly learn anything and would always stay stuck at the initial ideal value. That is the behaviour I spotted on a first reading. It could even have been found from the training steps you described had you not posted the code (this is one of the very few cases I know of where posting the code actually hid the bug instead of making it obvious). You fixed this one after Edit 1.

- When activating the network in calculateSingleOutputs, you forgot to activate the hidden layer.

Obviously, either of these two problems alone leads to a dysfunctional network.

Once they are corrected, it works (well, it works in my simplified version of your code).
The bugs were not easy to spot because the initial code was overly complicated. You should think twice before introducing new classes or new methods. Creating too few methods or classes makes code hard to read and maintain, but creating too many can also make it hard to read and maintain. You have to find the right balance. My personal way of finding this balance is to follow code smells and refactoring techniques wherever they lead: sometimes that means adding methods or creating classes, sometimes removing them. It is certainly not perfect, but it works for me.
Below is my version of the code after some refactoring. I spent about an hour changing your code, while always keeping it functionally equivalent. I took it as a good refactoring exercise, as the initial code was really hard to read. After the refactoring it took only 5 minutes to spot the problems.
import os
import math

"""
A simple backprop neural network. It has 3 layers:
    Input layer: 2 neurons
    Hidden layer: 2 neurons
    Output layer: 1 neuron
"""

class Weight:
    """
    Class representing a weight between two neurons
    """
    def __init__(self, value, from_neuron, to_neuron):
        self.value = value
        self.from_neuron = from_neuron
        from_neuron.outputWeights.append(self)
        self.to_neuron = to_neuron
        to_neuron.inputWeights.append(self)

        # delta value, this will accumulate and after each training cycle
        # will be used to adjust the weight value
        self.delta = 0.0

class Neuron:
    """
    Class representing a neuron.
    """
    def __init__(self):
        self.value = 0.0        # the output
        self.idealValue = 0.0   # the ideal output
        self.error = 0.0        # error between output and ideal output
        self.inputWeights = []    # weights that end in the neuron
        self.outputWeights = []  # weights that starts in the neuron

    def activate(self):
        """
        Calculate an activation function of a neuron which is 
        a sum of all input weights * neurons where those weights start
        """
        x = 0.0
        for weight in self.inputWeights:
            x += weight.value * weight.from_neuron.value
        # sigmoid function
        self.value = 1 / (1 + math.exp(-x))

class Network:
    """
    Class representing a whole neural network. Contains layers.
    """
    def __init__(self, layers, learningRate, weights):
        self.layers = layers
        self.learningRate = learningRate    # the rate at which the network learns
        self.weights = weights

    def training(self, entries, expectedOutput):
        for i in range(len(entries)):
            self.layers[0][i].value = entries[i]
        for i in range(len(expectedOutput)):
            self.layers[2][i].idealValue = expectedOutput[i]
        for layer in self.layers[1:]:
            for n in layer:
                n.activate()
        for n in self.layers[2]:
            error = (n.idealValue - n.value) * n.value * (1 - n.value)
            n.error = error
        for n in self.layers[1]:
            error = 0.0
            for w in n.outputWeights:
                error += w.to_neuron.error * w.value
            n.error = error
        for w in self.weights:
            w.delta += w.from_neuron.value * w.to_neuron.error

    def updateWeights(self):
        for w in self.weights:
            w.value += self.learningRate * w.delta

    def calculateSingleOutput(self, entries):
        """
        Calculate a single output for input values.
        This will be used to debug the already learned network after training.
        """
        for i in range(len(entries)):
            self.layers[0][i].value = entries[i]
        # activation function for output layer
        for layer in self.layers[1:]:
            for n in layer:
                n.activate()
        print self.layers[2][0].value


#------------------------------ initialize objects etc

neurons = [Neuron() for n in range(5)]

w1 = Weight(-0.79, neurons[0], neurons[2])
w2 = Weight( 0.51, neurons[0], neurons[3])
w3 = Weight( 0.27, neurons[1], neurons[2])
w4 = Weight(-0.48, neurons[1], neurons[3])
w5 = Weight(-0.33, neurons[2], neurons[4])
w6 = Weight( 0.09, neurons[3], neurons[4])

weights = [w1, w2, w3, w4, w5, w6]
inputLayer  = [neurons[0], neurons[1]]
hiddenLayer = [neurons[2], neurons[3]]
outputLayer = [neurons[4]]
learningRate = 0.3
network = Network([inputLayer, hiddenLayer, outputLayer], learningRate, weights)

# just for debugging, the real training set is much larger
trainingSet = [([0.0,0.0],[0.0]),
               ([1.0,0.0],[1.0]),
               ([2.0,0.0],[1.0]),
               ([0.0,1.0],[0.0]),
               ([1.0,1.0],[1.0]),
               ([2.0,1.0],[0.0]),
               ([0.0,2.0],[0.0]),
               ([1.0,2.0],[0.0]),
               ([2.0,2.0],[1.0])]

#------------------------------ let's train
for i in range(100): # training iterations
    for entries, expectedOutput in trainingSet:
        network.training(entries, expectedOutput)
    network.updateWeights()

#network has learned, let's check
network.calculateSingleOutput((1, 0)) # this should be close to 1
network.calculateSingleOutput((0, 0)) # this should be close to 0

By the way, I did not correct a third problem (but it is easy to correct). math.exp() raises an exception when x gets too large or too small (> 320 or < -320). This will happen if you run, say, a few thousand training iterations. I believe the simplest correction is to check the value of x and, when it is too large or too small, set the neuron's value to 0 or 1 accordingly, i.e. to the limit value.
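A sketch of that guard (320 is the cutoff used in the question's code; in CPython, math.exp only actually overflows once its argument exceeds roughly 710, so any cutoff below that works):

```python
import math

def safe_sigmoid(x):
    # clamp the input to avoid OverflowError from math.exp for very negative x
    if x < -320:
        return 0.0
    if x > 320:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

print(safe_sigmoid(-1000.0), safe_sigmoid(0.0), safe_sigmoid(1000.0))
# -> 0.0 0.5 1.0
```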

OK, I will try it tomorrow. - Richard Knop
Thank you very much. Yes, I think I overcomplicated it. I was just trying to avoid procedural programming and make everything object-oriented, and I got a bit carried away. - Richard Knop
By the way, try printing network.calculateSingleOutput(2.0, 1.0). It prints an incorrect output :) - Richard Knop
@Richard Knop: yes, there is a problem with my version. I will check and post a corrected version. - kriss
Damn. I found out my code doesn't work properly either :) I have to investigate what went wrong in it. It exhibits the same incorrect behaviour as your code. - Richard Knop
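For what it's worth, one suspect for the remaining misbehaviour in both versions is that the accumulated deltas are never cleared, so every update re-applies all the gradients accumulated since the start of training. A hedged sketch of that fix, using the same names as the answer's code (this is a guess, not a confirmed correction):

```python
# Hypothetical correction: clear each accumulated delta after applying it,
# so the next training cycle starts its accumulation from zero.
class Weight:
    def __init__(self, value):
        self.value = value
        self.delta = 0.0

class Network:
    def __init__(self, weights, learningRate):
        self.weights = weights
        self.learningRate = learningRate

    def updateWeights(self):
        for w in self.weights:
            w.value += self.learningRate * w.delta
            w.delta = 0.0  # without this, old gradients are re-applied every cycle

net = Network([Weight(0.5)], 0.3)
net.weights[0].delta = 1.0
net.updateWeights()
net.updateWeights()  # second call must now be a no-op
print(net.weights[0].value)  # 0.8, not 1.1
```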

Page content provided by Stack Overflow.