STRUCTURE AND FUNCTIONS OF ARTIFICIAL NEURON.
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural
network. Artificial neurons are the elementary units of an artificial neural network. The artificial neuron
receives one or more inputs (representing excitatory postsynaptic potentials and inhibitory postsynaptic
potentials at neural dendrites) and sums them to produce an output (or activation, representing a neuron's
action potential which is transmitted along its axon). Usually each input is separately weighted, and the
sum is passed through a non-linear function known as an activation function or transfer function. The
transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear
functions, piecewise linear functions, or step functions. They are also often monotonically increasing,
continuous, differentiable and bounded. The thresholding function has inspired the building of logic gates
referred to as threshold logic, applicable to constructing logic circuits that resemble brain processing. For
example, new devices such as memristors have been used extensively in recent times to develop such logic.
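The weighted sum and non-linear activation described above can be sketched as follows; this is a minimal illustration (the function names and the example input values are chosen here for demonstration, not taken from any particular library):

```python
import math

def sigmoid(z):
    # A typical transfer function: monotonically increasing, continuous,
    # differentiable and bounded between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

def artificial_neuron(inputs, weights, bias):
    # Each input is separately weighted, then the weighted inputs are summed
    # (analogous to summed postsynaptic potentials at the dendrites).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The sum is passed through the non-linear activation function,
    # producing the neuron's output ("activation").
    return sigmoid(z)

# Example: three inputs with illustrative weights and a small bias.
output = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.1)
```

A step function or piecewise linear function could be substituted for `sigmoid` to obtain the other transfer-function shapes mentioned above.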
STATE THE MAJOR DIFFERENCES BETWEEN BIOLOGICAL AND ARTIFICIAL NEURAL
NETWORKS
1. Size: Our brain contains about 86 billion neurons and more than 100 trillion synapses
(connections). The number of “neurons” in artificial networks is much smaller than that.
2. Signal transport and processing: The human brain works asynchronously, ANNs
work synchronously.
3. Processing speed: Single biological neurons are slow, while standard neurons in
ANNs are fast.
4. Topology: Biological neural networks have complicated topologies, while ANNs
are often organized in simple layered, feed-forward structures.
5. Speed: certain biological neurons can fire around 200 times a second on average.
Signals travel at different speeds depending on the type of nerve impulse, ranging
from 0.61 m/s up to 119 m/s. Signal travel speed also varies from person to person
depending on sex, age, height, temperature, medical condition, lack of sleep, etc.
Information in artificial neurons is instead carried by the continuous floating-point
values of the synaptic weights. Artificial neural networks have no refractory periods
(intervals during which it is impossible to fire another action potential because the
sodium channels are locked shut), and artificial neurons do not experience
“fatigue”: they are functions that can be evaluated as many times, and as fast, as
the computer architecture allows.
6. Fault-tolerance: biological neural networks, thanks to their topology, are also fault-
tolerant. Artificial neural networks are not modeled for fault tolerance or self-
regeneration (as with fatigue, these ideas are not applicable to matrix
operations), though recovery is possible by saving the current state (weight values)
of the model and resuming training from that saved state.
7. Power consumption: the brain consumes about 20% of the human body’s
energy; despite its large share, an adult brain operates on about 20 watts (barely
enough to dimly light a bulb), making it extremely efficient. Considering that
humans can keep functioning for a while on nothing more than some vitamin-C-rich
lemon juice and beef tallow, this is quite remarkable. For a benchmark: a single
Nvidia GeForce Titan X GPU draws 250 watts on its own and requires a power
supply. Our machines are far less efficient than biological systems. Computers also
generate a lot of heat when used, with consumer GPUs operating safely between
50–80 °C instead of 36.5–37.5 °C.
8. Learning: we still do not understand how brains learn, or how redundant
connections store and recall information. When we learn, we build on
information that is already stored in the brain. Our knowledge deepens through repetition
and during sleep, and tasks that once required focus can be executed automatically
once mastered. Artificial neural networks, on the other hand, have a predefined
model in which no further neurons or connections can be added or removed. Only the
weights of the connections (and the biases representing thresholds) can change during
training. The network starts with random weight values and slowly tries to reach
a point where further changes to the weights no longer improve
performance. Biological networks never really stop or start learning; ANNs have
distinct fitting (training) and prediction (evaluation) phases.
9. Field of application: ANNs are specialized: they can perform one task. They
might be perfect at playing chess but fail at playing Go (or vice versa).
Biological neural networks can learn completely new tasks.
10. Training algorithm: ANNs typically use gradient descent for learning. Human brains
use something different (but we don't know exactly what).