Search results


Page title matches

  • [[File:Dropout_mechanism.png|thumb|On the left is a fully connected neural network with two hidden layers. On the right is the same network after appl ...date=2013-12-20|title=An empirical analysis of dropout in piecewise linear networks|eprint=1312.6197|class=stat.ML}}</ref><ref name="MyUser_Arxiv.org_July_26_2 ...
    10 KB (1,477 words) - 05:06, 29 August 2024
  • ...gmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/ |website=Machine Learning Mastery |access-date=8 April 2021 |date=8 Januar ...bott |first3 = L. F. |title = Lyapunov spectra of chaotic recurrent neural networks |date = 2020-06-03 |class = nlin.CD |eprint=2006.02427}}</ref> ...
    17 KB (2,434 words) - 14:21, 3 February 2025
  • {{see also|Physical neural network}} ...sics-informed nerural networks.png|thumb|upright=3|Physics-informed neural networks for solving Navier–Stokes equations|307x307px]] ...
    35 KB (4,886 words) - 02:48, 19 January 2025
  • {{Main|Artificial neural network}} An '''artificial neural network''' (ANN) combines biological principles with advanced statistics to ...
    12 KB (1,793 words) - 12:34, 24 February 2025
  • {{Short description|Classification of Artificial Neural Networks (ANNs)}} There are many '''types of artificial neural networks''' ('''ANN'''). ...
    89 KB (12,410 words) - 12:07, 29 January 2025
  • {{Short description|Feature of artificial neural networks}} ...ncreases, the output distribution simplifies, ultimately converging to a [[Neural network Gaussian process]] in the infinite width limit.]] ...
    9 KB (1,185 words) - 12:20, 5 February 2024
  • ...escription|THE LOTTERY TICKET HYPOTHESIS: FINDING SPARSE, TRAINABLE NEURAL NETWORKS}} ...han |title=The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks |date=2019-03-04 |eprint=1803.03635 |last2=Carbin |first2=Michael|class=cs. ...
    15 KB (2,052 words) - 06:56, 5 November 2024

Page text matches

  • ...lication/303969910 |title=Evaluation of QuickProp for Learning Deep Neural Networks -- A Critical Review}}</ref> following an algorithm inspired by the [[Newto ...ications/qp-tr.ps An Empirical Study of Learning Speed in Back-Propagation Networks]'', September 1988 ...
    2 KB (272 words) - 16:16, 19 July 2023
  • ...934 |pmid=18282872 |last1=Specht |first1=D. F. |title=A general regression neural network |s2cid=6266210 }}</ref> GRNN represents an improved technique in the neural networks based on the [[nonparametric regression]]. The idea is that every training ...
    3 KB (481 words) - 15:35, 18 May 2023
  • ....<ref>''[http://homepages.rpi.edu/~bennek/papers/NeuralNetworkTraining.pdf Neural Network Training via Linear Programing]'', Advances in Optimization and Par [[Category:Artificial neural networks]] ...
    2 KB (282 words) - 20:44, 27 October 2022
  • ...r=Martin T. Hagan |author2=Howard B. Demuth |author3=Mark H. Beale |title=Neural Network Design |edition=1st |date=January 2002 |orig-date=1996 |publisher=P The shunting model is one of Grossberg's neural network models, based on a [[Leaky integrator]], described by the different ...
    1 KB (203 words) - 20:55, 12 December 2024
  • ...qualities |url=http://link.springer.com/10.1007/s11063-018-9788-6 |journal=Neural Processing Letters |language=en |volume=48 |issue=3 |pages=1543–1561 |doi=1 ...
    3 KB (359 words) - 09:29, 18 November 2024
  • ...le models (typically [[Neural network (machine learning)|artificial neural networks]]) and combining them to produce a desired output, as opposed to creating j .... "Optimal ensemble averaging of neural networks." Network: Computation in Neural Systems 8, no. 3 (1997): 283&ndash;296.</ref> ...
    6 KB (952 words) - 16:06, 18 November 2024
  • {{Short description|Finding important subnetworks in neural networks}} ...etworks can contain sparse subnetworks capable of approximating any target neural network of smaller size without requiring additional training. This hypothe ...
    6 KB (753 words) - 11:32, 28 February 2025
  • {{short description|Technique for training recurrent neural networks}} ...uch as [[Recurrent neural network#Elman networks and Jordan networks|Elman networks]]. The algorithm was independently derived by numerous researchers.<ref>{{C ...
    6 KB (841 words) - 20:41, 12 November 2024
  • {{Short description|Feature of artificial neural networks}} ...ncreases, the output distribution simplifies, ultimately converging to a [[Neural network Gaussian process]] in the infinite width limit.]] ...
    9 KB (1,185 words) - 12:20, 5 February 2024
  • ...Systems}}</ref> In particular, a '''neural ordinary differential equation (neural ODE)''' is an [[ordinary differential equation]] of the form ...s each positive index ''t'' to a real value, representing the state of the neural network at that layer. ...
    6 KB (876 words) - 03:07, 25 February 2025
  • ...mperceptible to humans, exploit the high-dimensional space in which neural networks operate, revealing geometric vulnerabilities. ...g specifically on the geometric structure of decision boundaries in neural networks. The decision boundary is the surface in high-dimensional space that separa ...
    4 KB (581 words) - 13:30, 21 October 2024
  • {{short description|Type of neural network which utilizes recursion}} {{Distinguish|recurrent neural network}} ...
    8 KB (1,121 words) - 23:20, 2 January 2025
  • ...Reduction and Scaled Rprop-Based Training"]. ''IEEE Transactions of Neural Networks'' '''2''':673–686.</ref> .... Palm (2001). "Three Learning Phases for Radial-Basis-Function Network" ''Neural Netw.'' '''14''':439-458.</ref> ...
    4 KB (671 words) - 04:50, 31 July 2024
  • ...variational]] [[quantum states]] parameterized in terms of an [[artificial neural network]]. It was first introduced in 2017 by the [[physicists]] [[Giuseppe where <math> F(s_1 \ldots s_N; W) </math> is an [[artificial neural network]] of parameters (weights) <math> W </math>, <math> N </math> input ...
    5 KB (765 words) - 10:46, 21 February 2024
  • ...learning''' is a form of [[unsupervised learning]] in [[artificial neural networks]], in which nodes compete for the right to respond to a subset of the input ...f><ref>Haykin, Simon, "Neural Network. A comprehensive foundation." Neural Networks 2.2004 (2004).</ref> ...
    6 KB (835 words) - 00:05, 17 November 2024
  • [[neural networks]] as basic interpolating functions.<ref>{{Cite book |last1=Miao |first1=Qin ...chi^2</math>) of a set of PDFs parametrized by [[artificial neural network|neural network]]s on each of the above MC replicas of the data. PDFs are parametri ...
    5 KB (701 words) - 11:11, 27 November 2024
  • ...G |first=Diganta |last=Misra |title=Mish: A Self Regularized Non-Monotonic Neural Activation Function |date=2019}}</ref> SiLU was first proposed alongside the [[Rectifier (neural networks)|GELU]] in 2016,<ref name="Hendrycks-Gimpel_2016" /> then again proposed in ...
    6 KB (760 words) - 01:35, 21 February 2025
  • ...ing |title=EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks |date=2020-09-11 |arxiv=1905.11946 |last2=Le |first2=Quoc V.}}</ref> Its ke Architecturally, they optimized the choice of modules by [[neural architecture search]] (NAS), and found that the inverted bottleneck convolu ...
    6 KB (720 words) - 18:33, 20 October 2024
  • ...Vugar|date=December 2021|title= Ridge functions and applications in neural networks|url= https://www.ams.org/books/surv/263/ |location= Providence, RI |publish ...
    3 KB (509 words) - 14:47, 22 January 2025
  • {{short description|Type of artificial neural network}} ...d on work done in my labs |url=https://people.idsia.ch/~juergen/most-cited-neural-nets.html |access-date=2022-04-30 |work=AI Blog |location=IDSIA, Switzerlan ...
    11 KB (1,593 words) - 23:49, 19 January 2025