Machine Learning: An Algorithmic Perspective (2009)

The Multi-Layer Perceptron (MLP)

Training the MLP consists of two parts:

  1. working out what the outputs are for the given inputs and the current weights;
  2. updating the weights according to the error, which is a function of the difference between the outputs and the targets.

These two phases are generally known as going forwards and backwards through the network; a sketch of the forward phase is given below.
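
To make the forward phase concrete, here is a minimal NumPy sketch for a network with one hidden layer. The sigmoid activation, the bias input fixed at -1, and the names inputs, weights1, and weights2 are illustrative assumptions, not code from the book.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(inputs, weights1, weights2):
        # Append a constant bias input of -1 to every sample (an assumption
        # of this sketch): inputs (n, d) -> (n, d+1), weights1 is (d+1, h)
        inputs = np.concatenate((inputs, -np.ones((inputs.shape[0], 1))), axis=1)
        # Hidden-layer activations
        hidden = sigmoid(inputs @ weights1)
        # Append a bias unit for the hidden layer as well; weights2 is (h+1, o)
        hidden = np.concatenate((hidden, -np.ones((hidden.shape[0], 1))), axis=1)
        # Output-layer activations
        outputs = sigmoid(hidden @ weights2)
        return hidden, outputs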

Going Backwards: Back-Propagation of Error

The name makes it clear that the errors are sent backwards through the network. Back-propagation is a form of gradient descent.
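
Continuing the sketch above, one gradient-descent step might look like the following. It assumes a sum-of-squares error and the same sigmoid activations; the learning rate eta and its value 0.25 are illustrative assumptions.

    import numpy as np

    def backward(inputs, targets, hidden, outputs, weights1, weights2, eta=0.25):
        # Error at the output layer: the difference from the targets, scaled
        # by the derivative of the sigmoid (sum-of-squares error assumed)
        delta_o = (outputs - targets) * outputs * (1.0 - outputs)
        # Send the error backwards through the hidden-to-output weights
        delta_h = hidden * (1.0 - hidden) * (delta_o @ weights2.T)
        # Re-append the bias input used in the forward phase
        inputs = np.concatenate((inputs, -np.ones((inputs.shape[0], 1))), axis=1)
        # Gradient-descent updates; the hidden bias unit has no incoming
        # weights, so its column of delta_h is dropped
        weights2 -= eta * (hidden.T @ delta_o)
        weights1 -= eta * (inputs.T @ delta_h[:, :-1])
        return weights1, weights2

Training then alternates calls to forward and backward over the training set until the error stops decreasing.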