Neural Networks from Scratch in Python
Acknowledgements
Harrison Kinsley:
My wife, Stephanie, for her unfailing support and faith in me throughout the years. You’ve never
doubted me.
Each and every viewer and person who supported this book and project. Without my audience,
none of this would have been possible.
The Python programming community in general for being awesome!
Daniel Kukieła for your unwavering effort with this massive project that Neural Networks from
Scratch became. From learning C++ to make mods in GTA V, to Python for various projects, to
the calculus behind neural networks, there doesn’t seem to be any problem you cannot solve and
it is a pleasure to do this for a living with you. I look forward to seeing what’s next!
Daniel Kukieła:
My son, Oskar, for his patience and understanding during the busy days. My wife, Katarzyna,
for the boundless love, faith and support in all the things I do, have ever done, and plan to do,
the sunlight during most stormy days and the morning coffee every single day.
Harrison for challenging me to learn Python then pushing me towards learning neural networks.
For showing me that things do not have to be perfectly done, all the support, and making me a
part of so many interesting projects including “let’s make a tutorial on neural networks from
scratch,” which turned into one of the biggest challenges of my life — this book. I wouldn’t be
where I am now if all of that didn’t happen.
The Python community for making me a better programmer and for helping me to improve my
language skills.
Copyright
No part of this book may be reproduced in any form or by any electronic or mechanical means,
with the following exceptions:
The Python code/software in this book is contained under the following MIT License:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute,
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
Readme
The objective of this book is to break down an extremely complex topic, neural networks, into
small pieces, consumable by anyone wishing to embark on this journey. Beyond breaking down
this topic, the hope is to dramatically demystify neural networks. As you will soon see, this
subject, when explored from scratch, can be an educational and engaging experience. This book is
for anyone willing to put in the time to sit down and work through it. In return, you will gain a far
deeper understanding than most when it comes to neural networks and deep learning.
This book will be easier to understand if you already have an understanding of Python or
another programming language. Python is one of the clearest and most understandable
programming languages; we have no real interest in padding page counts and exhausting an
entire first chapter with a basics of Python tutorial. If you need one, we suggest you start here:
https://pythonprogramming.net/python-fundamental-tutorials/

To cite this material:

Harrison Kinsley & Daniel Kukieła, Neural Networks from Scratch (NNFS), https://nnfs.io
Chapter 1
We begin with a general idea of what neural networks are and why you might be interested in
them. Neural networks, also called Artificial Neural Networks (though it seems, in recent
years, we’ve dropped the “artificial” part), are a type of machine learning often conflated with
deep learning. The defining characteristic of a deep neural network is having two or more
hidden layers — a concept that will be explained shortly, but these hidden layers are ones that
the neural network controls. It’s reasonably safe to say that most neural networks in use are a
form of deep learning.
Fig 1.01: Depicting the various fields of artificial intelligence and where they fit in overall.
A Brief History
Since the advent of computers, scientists have been formulating ways to enable machines to take
input and produce desired output for tasks like classification and regression. Additionally, in
general, there’s supervised and unsupervised machine learning. Supervised machine learning
is used when you have pre-established and labeled data that can be used for training. Let’s say
you have sensor data for a server with metrics such as upload/download rates, temperature, and
humidity, all recorded at 10-minute intervals. Normally, this server operates as intended
and has no outages, but sometimes parts fail and cause an outage. We might collect data and
then divide it into two classes: one class for times/observations when the server is operating
normally, and another class for times/observations when the server is experiencing an outage.
When the server is failing, we want to label that sensor data leading up to failure as data that
preceded a failure. When the server is operating normally, we simply label that data as
“normal.”
What each sensor measures in this example is called a feature. A group of features makes up a
feature set (represented as vectors/arrays), and the values of a feature set can be referred to as a
sample. Samples are fed into neural network models to train them to fit desired outputs from these
inputs or to predict based on them during the inference phase.
The “normal” and “failure” labels are classifications or labels. You may also see these referred
to as targets or ground-truths while we fit a machine learning algorithm. These targets are the
classifications that are the goal or target, known to be true and correct, for the algorithm to
learn. For this example, the aim is to eventually train an algorithm to read sensor data and
accurately predict when a failure is imminent. This is just one example of supervised learning
in the form of classification. In addition to classification, there’s also regression, which is used
to predict numerical values, like stock prices. There’s also unsupervised machine learning,
where the machine finds structure in data without knowing the labels/classes ahead of time.
There are additional concepts (e.g., reinforcement learning and semi-supervised machine
learning) that fall under the umbrella of neural networks. For this book, we will focus on
classification and regression with neural networks, but what we cover here leads to other
use-cases.
Neural networks were conceived in the 1940s, but figuring out how to train them remained a
mystery for 20 years. The concept of backpropagation (explained later) came in the 1960s, but
neural networks still did not receive much attention until they started winning competitions in
2010. Since then, neural networks have been on a meteoric rise due to their sometimes seemingly
magical ability to solve problems previously deemed unsolvable, such as image captioning,
language translation, audio and video synthesis, and more.
Currently, neural networks are the primary solution to most competitions and challenging
technological problems like self-driving cars, calculating risk, detecting fraud, and early cancer
detection, to name a few.
“Artificial” neural networks are inspired by the organic brain, translated to the computer. It’s not
a perfect comparison, but there are neurons, activations, and lots of interconnectivity, even if the
underlying processes are quite different.
A single neuron by itself is relatively useless, but, when combined with hundreds or thousands
(or many more) of other neurons, the interconnectivity produces relationships and results that
frequently outperform any other machine learning methods.
Fig 1.03: Example of a neural network with 3 hidden layers of 16 neurons each.
The figure above shows an example of a model structure and the number of parameters the
model has to learn to adjust in order to produce the desired outputs. The details of what is seen
here are the subject of future chapters.
It might seem rather complicated when you look at it this way. Neural networks are considered
to be “black boxes” in that we often have no idea why they reach the conclusions they do. We do
understand how they do this, though.
Dense layers, the most common layers, consist of interconnected neurons. In a dense layer, each
neuron of a given layer is connected to every neuron of the next layer, which means that its output
value becomes an input for the next neurons. Each connection between neurons has a weight
associated with it, which is a trainable factor of how much of this input to use, and this weight
gets multiplied by the input value. Once all of the inputs·weights flow into our neuron, they are
summed, and a bias, another trainable parameter, is added. The purpose of the bias is to offset the
output positively or negatively, which can further help us map more real-world types of dynamic
data. In chapter 4, we will show some examples of how this works.
The concept of weights and biases can be thought of as “knobs” that we can tune to fit our model
to data. In a neural network, we often have thousands or even millions of these parameters tuned
by the optimizer during training. Some may ask, “why not just have biases or just weights?”
Biases and weights are both tunable parameters, and both will impact the neurons’ outputs, but
they do so in different ways. Since weights are multiplied by the inputs, they change the
magnitude of the input or can even completely flip its sign from positive to negative, or vice
versa. Output = weight·input + bias is not unlike the equation for a line, y = mx + b. We can
visualize this with:
Fig 1.04: Graph of a single-input neuron’s output with a weight of 1, bias of 0 and input x.
Fig 1.05: Graph of a single-input neuron’s output with a weight of 2, bias of 0 and input x.
As we increase the value of the weight, the slope will get steeper. If we decrease the weight, the
slope will decrease. If we negate the weight, the slope turns negative:
Fig 1.06: Graph of a single-input neuron’s output with a weight of -0.70, bias of 0 and input x.
This should give you an idea of how the weight impacts the neuron’s output value that we get
from inputs·weights+bias. Now, how about the bias parameter? The bias offsets the overall
function. For example, with a weight of 1.0 and a bias of 2.0:
Fig 1.07: Graph of a single-input neuron’s output with a weight of 1, bias of 2 and input x.
As we increase the bias, the function output overall shifts upward. If we decrease the bias, then
the overall function output will move downward. For example, with a negative bias:
Fig 1.08: Graph of a single-input neuron’s output with a weight of 1.0, bias of -0.70 and input x.
As you can see, weights and biases help to impact the outputs of neurons, but they do so in
slightly different ways. This will make even more sense when we cover activation functions in
chapter 4. Still, you can hopefully already see the differences between weights and biases and
how they might individually help to influence output. Why this matters will be conveyed shortly.
As a very general overview, the step function is meant to mimic a neuron in the brain, either “firing”
or not — like an on-off switch. In programming, an on-off switch as a function would be called a
step function because it looks like a step if we graph it.
For a step function, if the neuron’s output value, which is calculated by sum(inputs · weights)
+ bias, is greater than 0, the neuron fires (so it would output a 1). Otherwise, it does not fire
and would pass along a 0. The formula for a single neuron might look something like:
output = sum(inputs * weights) + bias
output = activation(output)
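To make that concrete, here is a minimal sketch of a step-activated neuron (the input, weight, and bias values are made up purely for illustration):

inputs = [1.0, -2.0, 3.0]
weights = [0.5, 0.5, 0.5]
bias = 1.0

# Weighted sum of the inputs plus the bias
output = sum(i*w for i, w in zip(inputs, weights)) + bias  # 2.0

# Step activation: "fire" (1) if the output is greater than 0, otherwise 0
output = 1 if output > 0 else 0
print(output)  # 1, since the weighted sum of 2.0 is greater than 0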
While you can use a step function for your activation function, we tend to use something slightly
more advanced. Neural networks of today tend to use more informative activation functions
(rather than a step function), such as the Rectified Linear (ReLU) activation function, which we
will cover in-depth in Chapter 4. Each neuron’s output could be a part of the ending output layer,
as well as the input to another layer of neurons. While the full function of a neural network can
get very large, let’s start with a simple example with 2 hidden layers of 4 neurons each.
Along with these 2 hidden layers, there are also two more layers here — the input and output
layers. The input layer represents your actual input data, for example, pixel values from an image
or data from a temperature sensor. While this data can be “raw” in the exact form it was collected,
you will typically preprocess your data through functions like normalization and scaling, and
your input needs to be in numeric form. Concepts like scaling and normalization will be covered
later in this book. However, it is common to preprocess data while retaining its features and
having the values in similar ranges between 0 and 1 or -1 and 1. To achieve this, you will use
either or both scaling and normalization functions. The output layer is whatever the neural
network returns. With classification, where we aim to predict the class of the input, the output
layer often has as many neurons as the training dataset has classes, but can also have a single
output neuron for binary (two classes) classification. We’ll discuss this type of model later and,
for now, focus on a classifier that uses a separate output neuron for each class. For example, if
our goal is to classify a collection of pictures as a “dog” or “cat,” then there are two classes in
total. This means our output layer will consist of two neurons; one neuron associated with “dog”
and the other with “cat.” You could also have just a single output neuron that is “dog” or “not
dog.”
Fig 1.11: Visual depiction of passing image data through a neural network, getting a classification
For each image passed through this neural network, the final output will have a calculated value
in the “cat” output neuron, and a calculated value in the “dog” output neuron. The output neuron
that received the highest score becomes the class prediction for the image used as input.
Fig 1.12: Visual depiction of passing image data through a neural network, getting a classification
When represented as one giant function, an example of a neural network’s forward pass would be
computed with:
Fig 1.13: Full formula for the forward pass of an example neural network model.
Naturally, that looks extremely confusing, and the above is actually the easy part of neural
networks. This turns people away, understandably. In this book, however, we’re going to be
coding everything from scratch, and, when doing this, you should find that there’s no step along
the way to producing the above function that is very challenging to understand. For example, the
above function can also be represented as nested Python functions like:
Fig 1.14: Python code for the forward pass of an example neural network model.
There may be some functions there that you don’t understand yet. For example, maybe you do not
know what a log function is, but this is something simple that we’ll cover. Then we have a sum
operation, an exponentiating operation (again, you may not exactly know what this does, but it’s
nothing hard). Then we have a dot product, which is still just a matter of understanding how it
works; there’s nothing there that is over your head if you know how multiplication works! Finally, we
have some transposes, noted as .T, which, again, once you learn what that operation does, is not a
challenging concept. Once we’ve separated each of these elements, learning what they do and
how they work, suddenly, things will not appear to be as daunting or foreign. Nothing in this
forward pass requires education beyond basic high school algebra! For an animation that depicts
how all of this works in Python, you can check out the following animation, but it’s certainly not
expected that you’d immediately understand what’s going on. The point is that this seemingly
complex topic can be broken down into small, easy to understand parts, which is the purpose of
the coming chapters!
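As a small taste (a sketch only; the dot product is covered properly in Chapter 2), a dot product is nothing more than multiplying pairs of numbers and summing the results:

a = [1, 2, 3]
b = [2, 3, 4]

# Multiply matching elements, then sum: 1*2 + 2*3 + 3*4
dot_product = sum(x*y for x, y in zip(a, b))
print(dot_product)  # 20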
A typical neural network has thousands or even millions of adjustable parameters (weights
and biases). In this way, neural networks act as enormous functions with vast numbers of
parameters. The concept of a long function with millions of variables that could be used to solve
a problem isn’t all that difficult to imagine. With that many variables related to neurons, arranged as
interconnected layers, we can imagine there exist some combinations of values for these variables
that will yield desired outputs. Finding that combination of parameter (weight and bias) values is
the challenging part.
The end goal for neural networks is to adjust their weights and biases (the parameters), so when
applied to a yet-unseen example in the input, they produce the desired output. When supervised
machine learning algorithms are trained, we show the algorithm examples of inputs and their
associated desired outputs. One major issue with this concept is overfitting — when the
algorithm only learns to fit the training data but doesn’t actually “understand” anything about
underlying input-output dependencies. The network basically just “memorizes” the training data.
Thus, we tend to use “in-sample” data to train a model and then use “out-of-sample” data to
validate an algorithm (or a neural network model in our case). Certain percentages are set aside
for both datasets to partition the data. For example, if there is a dataset of 100,000 samples of data
and labels, you will immediately take 10,000 and set them aside to be your “out-of-sample” or
“validation” data. You will then train your model with the other 90,000 in-sample or “training”
data and finally validate your model with the 10,000 out-of-sample data that the model hasn’t yet
seen. The goal is for the model to not only accurately predict on the training data, but also to be
similarly accurate while predicting on the withheld out-of-sample validation data.
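As a rough sketch of that split (the dataset here is randomly generated purely for illustration, and NumPy is introduced in the next chapter):

import numpy as np

# A hypothetical dataset: 100,000 samples with 3 features each, plus labels
X = np.random.randn(100_000, 3)
y = np.random.randint(0, 2, 100_000)

# Shuffle samples and labels together
keys = np.random.permutation(len(X))
X, y = X[keys], y[keys]

# Set aside 10,000 samples as out-of-sample "validation" data
X_val, y_val = X[:10_000], y[:10_000]
# Train with the remaining 90,000 in-sample "training" data
X_train, y_train = X[10_000:], y[10_000:]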
This is called generalization, which means learning to fit the data instead of memorizing it. The
idea is that we “train” (slowly adjusting weights and biases) a neural network on many examples
of data. We then take out-of-sample data that the neural network has never been presented with
and hope it can accurately predict on these data too.
You should now have a general understanding of what neural networks are, or at least what the
objective is, and how we plan to meet this objective. To train these neural networks, we calculate
how “wrong” they are using algorithms to calculate the error (called loss), and attempt to slowly
adjust their parameters (weights and biases) so that, over many iterations, the network gradually
becomes less wrong. The goal of all neural networks is to generalize, meaning the network can
see many examples of never-before-seen data, and accurately output the values we hope to
achieve. Neural networks can be used for more than just classification. They can perform
regression (predict a scalar, singular value), clustering (assign unstructured data into groups), and
many other tasks. Classification is just a common task for neural networks.
Chapter 2
While we assume that we’re all beyond beginner programmers here, we will still try to start
slowly and explain things the first time we see them. To begin, we will be using Python 3.7
(although any version of Python 3+ will likely work). We will also be using NumPy after
showing the pure-Python methods and Matplotlib for some visualizations. It should be the case
that a huge variety of versions should work, but you may wish to match ours exactly to rule out
any version issues. Specifically, we are using:
Python 3.7.5
NumPy 1.15.0
Matplotlib 3.1.1
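To check which versions you have installed, a quick sketch:

import sys
import numpy
import matplotlib

print('Python:', sys.version)
print('NumPy:', numpy.__version__)
print('Matplotlib:', matplotlib.__version__)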
Since this is a Neural Networks from Scratch in Python book, we will demonstrate how to do
things without NumPy as well, but NumPy is Python’s all-things-numbers package. Building
from scratch is the point of this book, but ignoring NumPy would be a disservice, since it is
among the most, if not the most, important and useful packages for data science in Python.
A Single Neuron
Let’s say we have a single neuron, and there are three inputs to this neuron. As in most cases,
when you initialize parameters in a neural network, weights are initialized randomly and biases
are set to zero to start. Why we do this will become apparent later on. The input
will be either actual training data or the outputs of neurons from the previous layer in the neural
network. We’re just going to make up values to start with as input for now:
Each input also needs a weight associated with it. Inputs are the data that we pass into the model
to get desired outputs, while the weights are the parameters that we’ll tune later on to get these
results. Weights are one of the types of values that change inside the model during the training
phase, along with biases that also change during training. The values for weights and biases are
what get “trained,” and they are what make a model actually work (or not work). We’ll start by
making up weights for now. Let’s say the first input, at index 0, which is a 1, has a weight of
0.2, the second input has a weight of 0.8, and the third input has a weight of -0.5. Our input and
weights lists should now be:
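inputs = [1, 2, 3]
weights = [0.2, 0.8, -0.5]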
Next, we need the bias. At the moment, we’re modeling a single neuron with three inputs. Since
we’re modeling a single neuron, we only have one bias, as there’s just one bias value per neuron.
The bias is an additional tunable value but is not associated with any input in contrast to the
weights. We’ll randomly select a value of 2 as the bias for this example:
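bias = 2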
output = (inputs[0]*weights[0] +
          inputs[1]*weights[1] +
          inputs[2]*weights[2] + bias)
print(output)
>>>
2.3
The output here should be 2.3. We will use >>> to denote output in this book.
Fig 2.01: Visualizing the code that makes up the math of a basic neuron.
Fig 2.02: Visualizing how the inputs, weights, and biases from the code interact with the neuron.
Let’s now add a fourth input with a value of 2.5 and an associated weight of 1, keeping the bias at 2. All together in code, including the new input and weight, to produce the output:
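inputs = [1, 2, 3, 2.5]
weights = [0.2, 0.8, -0.5, 1]
bias = 2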
output = (inputs[0]*weights[0] +
          inputs[1]*weights[1] +
          inputs[2]*weights[2] +
          inputs[3]*weights[3] + bias)
print(output)
>>>
4.8
Visually:
Fig 2.03: Visualizing the code that makes up a basic neuron, with 4 inputs this time.
A Layer of Neurons
Neural networks typically have layers that consist of more than one neuron. Layers are nothing
more than groups of neurons. Each neuron in a layer takes exactly the same input — the input
given to the layer (which can be either the training data or the output from the previous layer),
but contains its own set of weights and its own bias, producing its own unique output. The layer’s
output is a set of each of these outputs — one per neuron. Let’s say we have a scenario with
3 neurons in a layer and 4 inputs:
inputs = [1, 2, 3, 2.5]

weights1 = [0.2, 0.8, -0.5, 1]
weights2 = [0.5, -0.91, 0.26, -0.5]
weights3 = [-0.26, -0.27, 0.17, 0.87]

bias1 = 2
bias2 = 3
bias3 = 0.5
outputs = [
    # Neuron 1:
    inputs[0]*weights1[0] +
    inputs[1]*weights1[1] +
    inputs[2]*weights1[2] +
    inputs[3]*weights1[3] + bias1,

    # Neuron 2:
    inputs[0]*weights2[0] +
    inputs[1]*weights2[1] +
    inputs[2]*weights2[2] +
    inputs[3]*weights2[3] + bias2,

    # Neuron 3:
    inputs[0]*weights3[0] +
    inputs[1]*weights3[1] +
    inputs[2]*weights3[2] +
    inputs[3]*weights3[3] + bias3]
print(outputs)
>>>
[4.8, 1.21, 2.385]
In this code, we have three sets of weights and three biases, which define three neurons. Each
neuron is “connected” to the same inputs. The difference is in the separate weights and bias
that each neuron applies to the input. This is called a fully connected neural network — every
neuron in the current layer has connections to every neuron from the previous layer. This is a
very common type of neural network, but it should be noted that there is no requirement to
fully connect everything like this. At this point, we have only shown code for a single layer
with very few neurons. Imagine coding many more layers and more neurons. This would get
very challenging to code using our current methods. Instead, we could use a loop to scale and
handle dynamically-sized inputs and layers. We’ve turned the separate weight variables into a
list of weights so we can iterate over them, and we changed the code to use loops instead of
the hardcoded operations.
inputs = [1, 2, 3, 2.5]
weights = [[0.2, 0.8, -0.5, 1],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2, 3, 0.5]
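# Output of the current layer
layer_outputs = []
# For each neuron
for neuron_weights, neuron_bias in zip(weights, biases):
    # Zeroed output of the given neuron
    neuron_output = 0
    # For each input and weight to the neuron
    for n_input, weight in zip(inputs, neuron_weights):
        # Multiply this input by its associated weight
        # and add to the neuron's output variable
        neuron_output += n_input*weight
    # Add the bias
    neuron_output += neuron_bias
    # Append the neuron's result to the layer's output list
    layer_outputs.append(neuron_output)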
print(layer_outputs)
>>>
[4.8, 1.21, 2.385]
This does the same thing as before, just in a more dynamic and scalable way. If you find yourself
confused at one of the steps, print() out the objects to see what they are and what’s happening.
The zip() function lets us iterate over multiple iterables (lists in this case) simultaneously.
Again, all we’re doing is, for each neuron (the outer loop in the code above, over neuron weights
and biases), taking each input value multiplied by the associated weight for that input (the inner
loop in the code above, over inputs and weights), adding all of these together, then adding a bias
at the end. Finally, sending the neuron’s output to the layer’s output list.
That’s it! How do we know we have three neurons? Why do we have three? We can tell we have
three neurons because there are 3 sets of weights and 3 biases. When you make a neural network
of your own, you also get to decide how many neurons you want for each of the layers. You can
combine however many inputs you are given with however many neurons that you desire. As you
progress through this book, you will gain some intuition of how many neurons to try using. We
will start by using trivial numbers of neurons to aid in understanding how neural networks work
at their core.
With our above code that uses loops, we could modify our number of inputs or neurons in our
layer to be whatever we wanted, and our loop would handle it. As we said earlier, it would be
a disservice not to show NumPy here since Python alone doesn’t do matrix/tensor/array math
very efficiently. But first, the reason the most popular deep learning library in Python is
called “TensorFlow” is that it’s all about doing operations on tensors.
Tensors are closely-related to arrays. If you interchange tensor/array/matrix when it comes to
machine learning, people probably won’t give you too hard of a time. But there are subtle
differences, and they are primarily either the context or attributes of the tensor object. To
understand a tensor, let’s compare and describe some of the other data containers in Python
(things that hold data). Let’s start with a list. A Python list is defined by comma-separated
objects contained in brackets. So far, we’ve been using lists.
l = [1,5,6,2]
A list of lists:
lol = [[1,5,6,2],
       [3,2,1,3]]
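A list of lists of lists: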
lolol = [[[1,5,6,2],
          [3,2,1,3]],
         [[5,2,1,2],
          [6,4,8,4]],
         [[2,8,5,3],
          [1,1,9,4]]]
Everything shown so far could also be an array or an array representation of a tensor. A list is just
a list, and it can do pretty much whatever it wants, including:
another_list_of_lists = [[4,2,3],
                         [5,1]]
The above list of lists cannot be an array because it is not homologous. A list of lists is
homologous if each list along a dimension is identically long, and this must be true for each
dimension. In the case of the list shown above, it’s a 2-dimensional list. The first dimension’s
length is the number of sublists in the total list (2). The second dimension is the length of each of
those sublists (3, then 2). In the above example, when reading across the “row” dimension (also
called the second dimension), the first list is 3 elements long, and the second list is 2 elements
long — this is not homologous and, therefore, cannot be an array. While failing to be consistent in
one dimension is enough to show that this example is not homologous, we could also read down
the “column” dimension (the first dimension); the first two columns are 2 elements long while the
third column only contains 1 element. Note that every dimension does not necessarily need to be
the same length; it is perfectly acceptable to have an array with 4 rows and 3 columns (i.e., 4x3).
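To see homology in practice, here is a quick sketch with NumPy (covered shortly); note that, depending on your NumPy version, the ragged case either raises an error or produces an inefficient object array:

import numpy as np

# Homologous: every sublist is the same length, forming a 2x3 array
print(np.array([[4, 2, 3],
                [5, 1, 8]]).shape)  # (2, 3)

# Not homologous: sublists differ in length, so this cannot be an array
try:
    np.array([[4, 2, 3],
              [5, 1]])
except ValueError as error:
    print(error)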
A matrix is pretty simple. It’s a rectangular array. It has columns and rows. It is two dimensional.
So a matrix can be an array (a 2D array). Can all arrays be matrices? No. An array can be far
more than just columns and rows, as it could have four dimensions, twenty dimensions, and so on.
list_matrix_array = [[4,2],
                     [5,1],
                     [8,2]]
The above list could also be a valid matrix (because of its columns and rows), which
automatically means it could also be an array. The “shape” of this array would be 3x2, or more
formally described as a shape of (3, 2) as it has 3 rows and 2 columns.
To denote a shape, we need to check every dimension. As we’ve already learned, a matrix is a
2-dimensional array. The first dimension is what’s inside the most outer brackets, and if we look
at the above matrix, we can see 3 lists there: [4,2], [5,1], and [8,2]; thus, the size in this
dimension is 3, and each of those lists has to be the same shape to form an array (and matrix in this
case). The next dimension’s size is the number of elements inside this more inner pair of brackets,
and we see that it’s 2, as all of them contain 2 elements.
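A quick sketch to confirm this shape with NumPy:

import numpy as np

list_matrix_array = [[4, 2],
                     [5, 1],
                     [8, 2]]
print(np.array(list_matrix_array).shape)  # (3, 2) - 3 rows, 2 columns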
With 3-dimensional arrays, like in lolol below, we’ll have a 3rd level of brackets:
lolol = [[[1,5,6,2],