3 Matching Annotations
 Dec 2022

www.zhihu.com

What is the current state of development of the various schools of AI?


www.zhihu.com

What are some good application tasks for graph convolutional networks?

 Oct 2016

www.quora.com

Backprop is just gradient descent on individual errors. You compare the predictions of the neural network with the desired output and then compute the gradient of the errors with respect to the weights of the neural network. This gives you a direction in the weight space in which the error would become smaller. Interestingly, due to the layered structure of the neural network and the chain rule for derivatives, the formulas you get can be interpreted as propagating the error back through the network. But that's mostly a computational aside; what you really (just) do is gradient descent, that is, changing the weights of the neural network a little bit to make the error on your training examples smaller.
An explanation of gradient descent
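The quoted explanation can be made concrete with a minimal sketch: a one-hidden-layer network trained on XOR, where the chain rule yields the "error propagated backward" terms and the weight update is plain gradient descent. All names here (`W1`, `W2`, `lr`, the layer sizes) are illustrative choices, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
lr = 1.0  # learning rate (assumed value)

losses = []
for step in range(5000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1)        # hidden activations
    pred = sigmoid(h @ W2)     # output

    # Compare predictions with the desired output.
    err = pred - y
    losses.append(float(np.mean(err ** 2)))

    # Backward pass: the chain rule "propagates the error back".
    d_pred = err * pred * (1 - pred)        # gradient at the output layer
    d_h = (d_pred @ W2.T) * h * (1 - h)     # gradient at the hidden layer

    # Gradient descent: nudge each weight a little downhill on the error.
    W2 -= lr * (h.T @ d_pred)
    W1 -= lr * (X.T @ d_h)

print(f"mean squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The "propagating back" view is visible in `d_h`: the output-layer gradient `d_pred` is pushed backward through `W2.T` and scaled by the local sigmoid derivative, exactly the computational shortcut the quote describes.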
