Multilayer perceptron



In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable.[1]

Modern neural networks are trained using backpropagation[2][3][4][5][6] and are colloquially referred to as "vanilla" networks.[7] MLPs grew out of an effort to improve single-layer perceptrons, which could only be applied to linearly separable data. A perceptron traditionally used a Heaviside step function as its nonlinear activation function. However, the backpropagation algorithm requires that modern MLPs use continuous activation functions such as sigmoid or ReLU.[8]

Multilayer perceptrons form the basis of deep learning,[9] and are applicable across a vast set of diverse domains.[10]

Timeline

  • In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks.[11]
  • In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections.[12]
  • In 1962, Rosenblatt published many variants and experiments on perceptrons in his book Principles of Neurodynamics, including up to 2 trainable layers by "back-propagating errors".[13] However, it was not the backpropagation algorithm, and he did not have a general method for training multiple layers.
  • In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent,[17] which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.[16]
  • In 2021, a very simple neural network architecture combining two deep MLPs with skip connections and layer normalizations was designed and called MLP-Mixer; its realizations, featuring 19 to 431 million parameters, were shown to be comparable to vision transformers of similar size on ImageNet and similar image classification tasks.[25]

Mathematical foundations

Activation function

If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then linear algebra shows that any number of layers can be reduced to a two-layer input-output model. In MLPs some neurons use a nonlinear activation function that was developed to model the frequency of action potentials, or firing, of biological neurons.
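
To see why, consider two successive linear layers with weight matrices $W_1, W_2$ and bias vectors $b_1, b_2$ (notation introduced here only for illustration). Their composition is itself a single affine map:

$y = W_2(W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2),$

so the stacked layers are equivalent to a single layer with weights $W_2 W_1$ and bias $W_2 b_1 + b_2$; no representational depth is gained without a nonlinearity.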

The two historically common activation functions are both sigmoids, and are described by

$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}.$

The first is a hyperbolic tangent that ranges from −1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).

In recent developments of deep learning, the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids, such as vanishing gradients.
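
As a minimal sketch (not drawn from the cited sources), the activation functions discussed above can be written directly in Python with NumPy; the function names here are chosen for illustration:

```python
import numpy as np

def tanh(v):
    # Hyperbolic tangent: output ranges from -1 to 1
    return np.tanh(v)

def logistic(v):
    # Logistic sigmoid: similar shape, but output ranges from 0 to 1
    return 1.0 / (1.0 + np.exp(-v))

def relu(v):
    # Rectified linear unit: avoids the saturation of the sigmoids
    return np.maximum(0.0, v)

def softplus(v):
    # Smooth approximation to the rectifier
    return np.log1p(np.exp(v))
```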

Layers

The MLP consists of three or more layers (an input and an output layer with one or more hidden layers) of nonlinearly-activating nodes. Since MLPs are fully connected, each node in one layer connects with a certain weight $w_{ij}$ to every node in the following layer.
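
A fully connected layer can be sketched as follows; this is an illustrative example, and the array shapes and function name are assumptions rather than anything prescribed by the sources:

```python
import numpy as np

def dense_forward(y_prev, W, b, activation=np.tanh):
    """One fully connected (dense) layer.

    y_prev: outputs of the previous layer, shape (n_in,)
    W:      weight matrix, shape (n_out, n_in); W[j, i] holds the weight
            connecting node i of the previous layer to node j of this layer
    b:      bias vector, shape (n_out,)
    """
    v = W @ y_prev + b      # weighted sum of the input connections
    return activation(v)    # nonlinear activation

# An MLP with one hidden layer is just two such layers composed:
# y = dense_forward(dense_forward(x, W1, b1), W2, b2)
```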

Learning

Learning occurs in the perceptron by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron.

We can represent the degree of error in an output node $j$ for the $n$th data point (training example) by $e_j(n) = d_j(n) - y_j(n)$, where $d_j(n)$ is the desired target value for the $n$th data point at node $j$, and $y_j(n)$ is the value produced by the perceptron at node $j$ when the $n$th data point is given as an input.

The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $n$th data point, given by

$\mathcal{E}(n) = \frac{1}{2} \sum_{\text{output node } j} e_j^2(n).$

Using gradient descent, the change in each weight $w_{ji}$ is

$\Delta w_{ji}(n) = -\eta \frac{\partial \mathcal{E}(n)}{\partial v_j(n)}\, y_i(n),$

where $y_i(n)$ is the output of the previous neuron $i$, and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, $\frac{\partial \mathcal{E}(n)}{\partial v_j(n)}$ denotes the partial derivative of the error $\mathcal{E}(n)$ with respect to the weighted sum $v_j(n)$ of the input connections of neuron $j$.

The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\, \phi'(v_j(n)),$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k \left(-\frac{\partial \mathcal{E}(n)}{\partial v_k(n)}\right) w_{kj}(n).$

This depends on the derivatives at the $k$th nodes, which belong to the output layer. So to change the hidden layer weights, the output-layer error terms are propagated backwards through the output layer weights and the derivative of the activation function, which is why this algorithm is called a backpropagation of errors.[26]
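
The equations above translate directly into code. The following sketch trains a small MLP on XOR, a classic non-linearly-separable problem; the layer sizes, learning rate, and number of epochs are illustrative assumptions, and the logistic activation is used so that $\phi'$ has a simple closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def logistic_prime(v):
    y = logistic(v)
    return y * (1.0 - y)                 # phi'(v) for the logistic activation

# Illustrative sizes: 2 inputs, 3 hidden nodes, 1 output node
W1 = rng.normal(scale=0.5, size=(3, 2)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(1, 3)); b2 = np.zeros(1)
eta = 0.5                                # learning rate

def train_step(x, d):
    """One stochastic gradient descent step on a single training example."""
    global W1, b1, W2, b2
    # Forward pass: induced local fields v and outputs y for each layer
    v1 = W1 @ x + b1;  y1 = logistic(v1)
    v2 = W2 @ y1 + b2; y2 = logistic(v2)

    # Output node:  e_j(n) = d_j(n) - y_j(n),  delta_j = e_j * phi'(v_j)
    e = d - y2
    delta2 = e * logistic_prime(v2)

    # Hidden node:  delta_j = phi'(v_j) * sum_k delta_k * w_kj
    delta1 = logistic_prime(v1) * (W2.T @ delta2)

    # Weight updates:  Delta w_ji = eta * delta_j * y_i
    W2 += eta * np.outer(delta2, y1); b2 += eta * delta2
    W1 += eta * np.outer(delta1, x);  b1 += eta * delta1
    return 0.5 * np.sum(e ** 2)          # error E(n) for this example

data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for epoch in range(5000):
    for x, d in data:
        train_step(np.array(x, float), np.array(d, float))
```

After training, an output above 0.5 can be read as class 1; the per-example updates correspond to the stochastic (online) form of gradient descent described in the text.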

References

Template:Reflist


  1. Cybenko, G. 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4), 303–314.
  2. Template:Cite thesis
  3. Template:Cite journal
  4. Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington DC, 1961.
  5. Template:Cite book
  6. Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". David E. Rumelhart, James L. McClelland, and the PDP research group. (editors), Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1: Foundation. MIT Press, 1986.
  7. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, NY, 2009.
  8. Template:Cite web
  9. Template:Cite book
  10. Template:Cite journal
  11. Template:Cite journal
  12. Template:Cite journal
  13. Template:Cite book
  14. Template:Cite book
  15. Template:Cite book
  16. Template:Cite arXiv
  17. Template:Cite journal
  18. Template:Cite thesis
  19. Template:Cite journal
  20. Template:Cite book
  21. Template:Cite book
  22. Template:Cite journal
  23. Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". David E. Rumelhart, James L. McClelland, and the PDP research group. (editors), Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1: Foundation. MIT Press, 1986.
  24. Template:Cite journal
  25. Template:Cite web
  26. Template:Cite book