
NN Basic

The document outlines the basic concepts of neural networks, including the training process, which involves forward propagation, loss calculation, backpropagation, and gradient descent. It emphasizes the importance of non-linear activation functions in creating complex functions rather than simple linear combinations. The goal of training is to minimize the total error by adjusting weights and biases through iterative updates.


Neural Network
Basic ideas

What is a Neural Network?
Process of training a neural network

1. Forward propagate the data points through the network to get the outputs

2. Use a loss function to calculate the total error

3. Use the backpropagation algorithm to calculate the gradient of the loss function with respect to each weight and bias

4. Use gradient descent to update the weights and biases at each layer

5. Repeat the above steps to minimize the total error.
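The five steps above can be sketched end-to-end on a single sigmoid neuron trained on one data point (a hypothetical toy example, not from the slides):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y = 1.5, 1.0          # one training example (input, target) - made up
w, b = 0.1, 0.0          # initial weight and bias
lr = 0.5                 # learning rate

for step in range(100):
    # 1. Forward propagate
    a = sigmoid(w * x + b)
    # 2. Loss: binary cross-entropy on this one example
    loss = -(y * math.log(a) + (1 - y) * math.log(1 - a))
    # 3. Backpropagation: for sigmoid + cross-entropy, dL/dz = a - y
    dz = a - y
    dw, db = dz * x, dz
    # 4. Gradient descent update
    w -= lr * dw
    b -= lr * db
    # 5. The loop repeats, driving the loss toward zero
```

After the loop the prediction `a` is close to the target and the loss is small, which is exactly the "minimize the total error" goal of step 5.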


Passing the information through — Feed forward

 z = Wx + b
 a = f(z)
Activation function

 Without a non-linear activation function, the neural network is merely computing linear combinations of values, not creating a new function.
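The point above can be demonstrated directly: stacking two linear layers with no activation gives a function that a single linear layer reproduces exactly (the weights below are arbitrary illustrative values):

```python
# Two linear layers with no activation collapse into one linear map.
def linear(x, w, b):
    return w * x + b

w1, b1 = 2.0, 1.0        # first "layer" (arbitrary values)
w2, b2 = 3.0, -0.5       # second "layer"

x = 4.0
two_layers = linear(linear(x, w1, b1), w2, b2)

# The same result from a single combined linear layer:
# w2*(w1*x + b1) + b2 = (w2*w1)*x + (w2*b1 + b2)
one_layer = linear(x, w2 * w1, w2 * b1 + b2)
print(two_layers == one_layer)  # True: no new function was created
```

Inserting a non-linearity such as a sigmoid between the two layers breaks this algebraic collapse, which is why deep networks need activation functions at all.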
Loss function — Cross-entropy
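The slide names the loss but not its formula; a minimal sketch, assuming one-hot targets, of cross-entropy L = -Σᵢ yᵢ·log(pᵢ):

```python
import math

def cross_entropy(p, y):
    # p: predicted probabilities, y: one-hot target vector
    return -sum(yi * math.log(pi) for pi, yi in zip(p, y))

y = [0, 1, 0]                          # true class is index 1
good = cross_entropy([0.1, 0.8, 0.1], y)  # confident, correct
bad = cross_entropy([0.6, 0.2, 0.2], y)   # confident, wrong
print(good < bad)  # True: better predictions give lower loss
```

Because only the true class's term survives the sum, the loss reduces to -log of the probability assigned to the correct class, so it punishes confident wrong answers heavily.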
Passing the error — Back-propagation

 Our goal here is to modify the weights and biases in such a way that the loss is minimized.
Back-propagation

 Using the chain rule
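The chain rule the slide refers to can be sketched on a single sigmoid neuron with a squared-error loss (all values here are hypothetical), and checked against a numerical finite-difference gradient:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y, w, b = 2.0, 1.0, 0.5, 0.1   # made-up input, target, weight, bias

def loss(w):
    a = sigmoid(w * x + b)
    return 0.5 * (a - y) ** 2

# Chain rule: dL/dw = dL/da * da/dz * dz/dw
a = sigmoid(w * x + b)
dL_da = a - y            # derivative of 0.5*(a-y)^2 w.r.t. a
da_dz = a * (1 - a)      # derivative of sigmoid
dz_dw = x                # derivative of z = w*x + b w.r.t. w
analytic = dL_da * da_dz * dz_dw

# Finite-difference check of the same gradient
eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print(abs(analytic - numeric) < 1e-6)  # True
```

In a real multi-layer network, back-propagation is this same chain-rule product applied layer by layer, reusing each layer's intermediate derivative as it passes the error backwards.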
Gradient Descent: updating the weights

1. We start from a point on the graph of the function

2. We find the direction from that point in which the function decreases fastest

3. We travel a small step in that direction (down along the path) to arrive at a new point
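The three steps above can be sketched on f(x) = x², a toy function not from the slides: the gradient points uphill, so stepping against it moves toward the minimum.

```python
def f(x):
    return x * x

def grad_f(x):
    return 2 * x

x = 5.0              # 1. start from a point on the graph
lr = 0.1             # small step size (learning rate)
for _ in range(50):
    direction = -grad_f(x)   # 2. direction of fastest decrease
    x += lr * direction      # 3. take a small step to a new point
print(abs(x) < 1e-3)  # True: x approaches the minimum at 0
```

In network training, x is replaced by every weight and bias, and the gradient comes from back-propagation rather than a hand-written derivative.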
