Feedforward neural network


In a feedforward network, information always moves in one direction; it never goes backwards.

Feedforward refers to the recognition-inference architecture of neural networks: inputs are multiplied by weights to obtain outputs (inputs-to-output).[1] Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing.[2] However, at every stage of inference a feedforward multiplication remains the core, and it is essential for backpropagation[3][4][5][6][7] and backpropagation through time. Thus neural networks trained by backpropagation cannot contain feedback, such as negative feedback or positive feedback, where the outputs feed back to the very same inputs and modify them, because this would form an infinite loop that cannot be unrolled in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks.[8]

Mathematical foundations

Activation function

The two historically common activation functions are both sigmoids, and are described by

$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}.$

The first is a hyperbolic tangent that ranges from −1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).

In recent developments of deep learning, the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.
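
As a concrete illustration, here is a minimal NumPy sketch of the three activation functions mentioned above; the function names and sample inputs are chosen for readability and are not taken from any particular library:

```python
import numpy as np

def tanh(v):
    """Hyperbolic tangent: ranges from -1 to 1."""
    return np.tanh(v)

def logistic(v):
    """Logistic sigmoid: similar S-shape, but ranges from 0 to 1."""
    return 1.0 / (1.0 + np.exp(-v))

def relu(v):
    """Rectified linear unit: zero for negative inputs, identity otherwise."""
    return np.maximum(0.0, v)

v = np.linspace(-3.0, 3.0, 7)   # sample weighted sums v_i
print(tanh(v))      # symmetric about 0, saturates at -1 and 1
print(logistic(v))  # saturates at 0 and 1
print(relu(v))      # does not saturate for positive inputs
```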

Learning

Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation.

We can represent the degree of error in an output node $j$ for the $n$th data point (training example) by $e_j(n) = d_j(n) - y_j(n)$, where $d_j(n)$ is the desired target value for the $n$th data point at node $j$, and $y_j(n)$ is the value produced at node $j$ when the $n$th data point is given as an input.

The node weights can then be adjusted based on corrections that minimize the error in the entire output for the nth data point, given by

$\mathcal{E}(n) = \frac{1}{2}\sum_{\text{output node } j} e_j^2(n).$
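
As a toy numerical illustration of these two definitions (the target and output values below are made up):

```python
import numpy as np

d = np.array([1.0, 0.0, 0.0])   # desired targets d_j(n) at the output nodes
y = np.array([0.8, 0.2, 0.1])   # actual outputs y_j(n) for the n-th data point

e = d - y                        # per-node error e_j(n) = d_j(n) - y_j(n)
loss = 0.5 * np.sum(e ** 2)      # E(n) = 1/2 * sum_j e_j(n)^2
print(e, loss)                   # approximately [0.2 -0.2 -0.1] and 0.045
```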

Using gradient descent, the change in each weight $w_{ji}$ is

$\Delta w_{ji}(n) = -\eta\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} y_i(n)$

where $y_i(n)$ is the output of the previous neuron $i$, and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, $\frac{\partial\mathcal{E}(n)}{\partial v_j(n)}$ denotes the partial derivative of the error $\mathcal{E}(n)$ with respect to the weighted sum $v_j(n)$ of the input connections of neuron $j$.

The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\,\phi'(v_j(n))$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n))\sum_k -\frac{\partial\mathcal{E}(n)}{\partial v_k(n)} w_{kj}(n).$

This depends on the change in weights of the $k$th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.[9]
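
Putting these pieces together, the following is a minimal NumPy sketch of one backpropagation step for a single-hidden-layer network with logistic activations, directly implementing the error, delta, and weight-update formulas above; the network shape, learning rate, and training example are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(v):          # logistic activation
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):    # its derivative: phi'(v) = phi(v) * (1 - phi(v))
    s = phi(v)
    return s * (1.0 - s)

# Illustrative shapes: 2 inputs, 3 hidden nodes, 1 output node.
W1 = rng.normal(size=(3, 2))   # hidden-layer weights w_ji
W2 = rng.normal(size=(1, 3))   # output-layer weights w_kj
eta = 0.5                      # learning rate

x = np.array([0.0, 1.0])       # one training input
d = np.array([1.0])            # its desired output

# Forward pass: weighted sums v and activations y at each layer.
v1 = W1 @ x;  y1 = phi(v1)
v2 = W2 @ y1; y2 = phi(v2)

# Output layer: delta_k = -dE/dv_k = e_k * phi'(v_k).
e = d - y2
delta2 = e * phi_prime(v2)

# Hidden layer: delta_j = phi'(v_j) * sum_k delta_k * w_kj.
delta1 = phi_prime(v1) * (W2.T @ delta2)

# Weight updates: Delta w_ji = eta * delta_j * y_i.
W2 += eta * np.outer(delta2, y1)
W1 += eta * np.outer(delta1, x)
```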

History

Timeline

  • In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks.[15]
  • In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections.[16] R. D. Joseph (1960)[17] mentions an even earlier perceptron-like device:[12] "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
  • In 1960, Joseph[17] also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962)[18] cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
  • In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published the Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks.[19][20] It is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."[12] It was used to train an eight-layer neural net in 1971.
  • In 1967, Shun'ichi Amari reported[21] the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.[12]

Linear regression

Perceptron

If using a threshold, i.e. a linear activation function, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function.[30]

Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.
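
A minimal sketch of such delta-rule training for a single threshold unit is shown below; the AND-function dataset, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

# Training data: a single threshold unit learning the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)    # weights
b = 0.0            # bias (equivalently, a learnable threshold)
eta = 0.1          # learning rate

for epoch in range(20):
    for x, target in zip(X, d):
        y = 1.0 if (w @ x + b) >= 0 else 0.0   # threshold activation
        error = target - y                      # sample output minus calculated output
        w += eta * error * x                    # delta-rule weight adjustment
        b += eta * error

print([1.0 if (w @ x + b) >= 0 else 0.0 for x in X])  # [0.0, 0.0, 0.0, 1.0]
```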

Multilayer perceptron

A two-layer neural network capable of calculating XOR. The numbers within the neurons represent each neuron's explicit threshold. The numbers that annotate arrows represent the weight of the inputs. Note that if the threshold of 2 is met then a value of 1 is used for the weight multiplication to the next layer, while not meeting the threshold results in 0 being used. The bottom layer of inputs is not always considered a real neural network layer.
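
The figure's exact weights and thresholds are not reproduced in the text, but a two-layer threshold network of this kind can be sketched as follows, with one common (assumed) weight assignment that computes XOR:

```python
def unit(inputs, weights, threshold):
    """Fires (outputs 1) only if the weighted sum meets the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def xor(x1, x2):
    h = unit([x1, x2], [1, 1], 2)            # hidden AND unit, threshold 2
    return unit([x1, x2, h], [1, 1, -2], 1)  # output fires for OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```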

A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network consisting of fully connected neurons (hence the synonym sometimes used, fully connected network (FCN)), often with a nonlinear activation function, organized in at least three layers, and notable for being able to distinguish data that is not linearly separable.[31]

Other feedforward networks

1D convolutional neural network feedforward example

Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
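
As a brief illustration of the feedforward computation in a 1D convolutional layer (the signal and kernel values below are made up), each output is a dot product of the kernel with a sliding window of the input:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])  # input signal
k = np.array([1.0, 0.0, -1.0])                      # convolution kernel

# Valid (no-padding) sliding window, followed by a ReLU nonlinearity.
# (As in most CNN libraries, this is technically cross-correlation.)
conv = np.array([x[i:i + len(k)] @ k for i in range(len(x) - len(k) + 1)])
out = np.maximum(0.0, conv)
print(conv)  # [-2. -2.  0.  2.  2.]
print(out)   # [ 0.  0.  0.  2.  2.]
```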

See also

References


  1. Template:Cite book
  2. Template:Cite journal
  3. Template:Cite thesis
  4. Template:Cite journal
  5. Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington DC, 1961.
  6. Template:Cite book
  7. Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". In David E. Rumelhart, James L. McClelland, and the PDP Research Group (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986.
  8. Template:Cite journal
  9. Template:Cite book
  10. Merriman, Mansfield. A List of Writings Relating to the Method of Least Squares: With Historical and Critical Notes. Vol. 4. Academy, 1877.
  11. Template:Cite journal
  12. Template:Cite arXiv
  13. Template:Cite book
  14. Template:Cite book
  15. Template:Cite journal
  16. Template:Cite journal
  17. Template:Cite book
  18. Template:Cite book
  19. Template:Cite book
  20. Template:Cite book
  21. Template:Cite journal
  22. Template:Cite thesis
  23. Template:Cite journal
  24. Ostrovski, G.M., Volin, Y.M., and Boris, W.W. (1971). On the computation of derivatives. Wiss. Z. Tech. Hochschule for Chemistry, 13:382–384.
  25. Template:Cite web
  26. Template:Cite book
  27. Template:Cite book
  28. Template:Cite journal
  29. Template:Cite journal
  30. Template:Cite journal
  31. Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems, 2(4), 303–314.