Competitive Learning
Lecture 10
Competitive Learning

- A form of unsupervised training where output units are said to be in competition for input patterns.
  - During training, the output unit that produces the highest activation for a given input pattern is declared the winner, and its weights are moved closer to the input pattern, whereas the rest of the neurons are left unchanged.
  - This strategy is also called winner-take-all, since only the winning neuron is updated.
  - Output units may have lateral inhibitory connections, so that a winner neuron can inhibit the others by an amount proportional to its activation level.
[Figure: single-layer competitive network with inputs x1, x2, ..., xd fully connected to output units O1, O2, O3]
Competitive Learning

- With normalized vectors, the activation of the i-th unit can be computed as the inner product of the unit's weight vector w_i and a particular input pattern x^(n):

      g_i(x^(n)) = w_i^T x^(n)

  - Note: the inner product of two unit-norm vectors is the cosine of the angle between them.
- The neuron with the largest activation is then adapted to become more like the input that caused the excitation:

      w_i(t+1) = w_i(t) + η x^(n)

  - Following the update, the weight vector is renormalized (||w_i|| = 1).
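A small numerical sketch of this normalized winner-take-all step (the number of units, dimensionality, and learning rate below are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 3 output units, 4-dimensional input space.
W = rng.normal(size=(3, 4))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm weight vectors

x = rng.normal(size=4)
x /= np.linalg.norm(x)                          # unit-norm input pattern

eta = 0.1                                       # learning rate (assumed value)

g = W @ x                      # activations g_i = w_i^T x (cosines of the angles)
i = np.argmax(g)               # winner-take-all: unit with the largest activation

W[i] += eta * x                # move the winner toward the input...
W[i] /= np.linalg.norm(W[i])   # ...then renormalize so that ||w_i|| = 1
```

After the update, the winner's cosine similarity with x can only increase, while all weight vectors remain on the unit sphere.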
Competitive Learning

- If weights and input patterns are un-normalized, the activation function becomes the Euclidean distance (and the neuron with the smallest distance wins):

      g_i(x^(n)) = sqrt( Σ_j (w_ij − x_j^(n))² )

- The learning rule then becomes:

      w_i(t+1) = w_i(t) + η (x^(n) − w_i(t))
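A sketch of one distance-based update (sizes and the learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: 3 output units, 2-dimensional input space.
W = rng.normal(size=(3, 2))
x = rng.normal(size=2)
eta = 0.5                                # learning rate (assumed value)

# Activation is now a distance: g_i = ||w_i - x||, and the smallest value wins.
g = np.linalg.norm(W - x, axis=1)
i = np.argmin(g)

# Learning rule: w_i(t+1) = w_i(t) + eta * (x - w_i(t))
W[i] += eta * (x - W[i])
```

The update moves the winner a fraction eta of the way toward x, so its distance to x shrinks by exactly the factor (1 − eta).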
Competitive Learning

- Competitive Learning Algorithm
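The algorithm on this slide is not reproduced in the extracted text; a minimal winner-take-all training loop consistent with the update rules above might look like this (initialization scheme, hyperparameters, and function name are assumptions, not from the lecture):

```python
import numpy as np

def competitive_learning(X, n_units=3, n_iter=200, eta=0.1, seed=0):
    """Minimal winner-take-all training loop (a sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize weights to randomly chosen input patterns; this common trick
    # helps avoid "dead" units that never win.
    W = X[rng.choice(len(X), size=n_units, replace=False)].copy()
    for _ in range(n_iter):
        x = X[rng.integers(len(X))]                   # pick a pattern at random
        i = np.argmin(np.linalg.norm(W - x, axis=1))  # competition (Euclidean)
        W[i] += eta * (x - W[i])                      # update only the winner
    return W

# Toy usage: two well-separated 2-D clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)),
               rng.normal(3, 0.1, (50, 2))])
W = competitive_learning(X, n_units=2)
```

With two units and two clusters, each weight vector ends up near one cluster center, i.e., the units behave as cluster prototypes.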
Competitive Learning

- Demo
Direction maps

mcb.berkeley.edu/
Tonotopic maps

http://www.d.umn.edu/~jfitzake/Lectures/UndergradPharmacy/SensoryPhysiology/Audition/TonotopicMaps.html
Phantom Digits

http://www.scholarpedia.org/article/Phantom_touch
Kohonen Self-Organizing Maps

- Kohonen Self-Organizing Maps (SOMs) produce a mapping from a multidimensional input space onto a lattice of clusters (or neurons).
  - Unlike MLPs trained with the back-propagation algorithm, SOMs have a strong neurobiological basis.
  - The key feature of SOMs is that the mapping is topology-preserving, in that neighboring neurons respond to similar input patterns.
  - SOMs are typically organized as one- or two-dimensional lattices (i.e., a string or a mesh) for the purposes of visualization and dimensionality reduction.
  - In the mammalian brain, visual, auditory, and tactile inputs are mapped onto a number of sheets (folded planes) of cells [Gallant, 1993].
  - Topology is preserved in these sheets; for example, if we touch parts of the body that are close together, groups of cells that are also close together will fire.
- Kohonen SOMs result from the synergy of three basic processes:
  - Competition
  - Cooperation
  - Adaptation
Competition

- Each neuron in a SOM is assigned a weight vector with the same dimensionality d as the input space.
- Any given input pattern is compared to the weight vector of each neuron, and the closest neuron is declared the winner.
  - The Euclidean metric is commonly used to measure distance.
Cooperation

- The activation of the winning neuron is spread to neurons in its immediate neighborhood.
  - The winner's neighborhood is determined by the lattice topology.
  - This allows topologically close neurons to become sensitive to similar patterns.
  - Distance on the lattice is a function of the number of lateral connections to the winner (as in city-block distance).
- The size of the neighborhood is initially large, but shrinks over time.
  - An initially large neighborhood promotes a topology-preserving mapping.
  - Smaller neighborhoods allow neurons to specialize in the latter stages of training.
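One common way to realize this (an assumption here; the lecture does not fix the exact form) is a Gaussian neighborhood over lattice distance, with a radius that decays over time:

```python
import numpy as np

def neighborhood(winner, grid_shape, t, sigma0=3.0, tau=10.0):
    """Return a weight h_j in [0, 1] for every neuron j of a 2-D lattice.

    winner     : (row, col) of the winning neuron on the lattice
    grid_shape : (rows, cols) of the lattice
    t          : current training step (the radius decays with t)
    sigma0, tau: illustrative hyperparameters, not prescribed by the lecture
    """
    rows, cols = np.indices(grid_shape)
    # City-block (Manhattan) distance on the lattice, as in the slide.
    d = np.abs(rows - winner[0]) + np.abs(cols - winner[1])
    sigma = sigma0 * np.exp(-t / tau)      # neighborhood shrinks over time
    return np.exp(-d**2 / (2.0 * sigma**2))

h_early = neighborhood((2, 2), (5, 5), t=0)   # wide neighborhood
h_late = neighborhood((2, 2), (5, 5), t=50)   # sharply peaked at the winner
```

Early in training the Gaussian is wide, so distant neurons still receive sizable weights; later it collapses onto the winner, letting neurons specialize.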
Adaptation

- During training, the winner neuron and its topological neighbors are adapted to make their weight vectors more similar to the input pattern that caused the activation.
  - The adaptation rule is similar to the un-normalized competitive learning rule presented earlier.
  - Neurons that are closer to the winner adapt more heavily than neurons that are farther away.
  - The magnitude of the adaptation is controlled by a learning rate, which decays over time to ensure convergence of the SOM.
SOM Algorithm
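The three processes above can be combined into a minimal SOM training loop. This is a sketch under assumed decay schedules and hyperparameter names (nothing below is prescribed by the lecture):

```python
import numpy as np

def train_som(X, grid_shape=(10, 10), n_iter=1000, eta0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM sketch: competition, cooperation, adaptation."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    W = rng.normal(size=(rows, cols, X.shape[1]))  # one weight vector per node
    grid_r, grid_c = np.indices(grid_shape)

    for t in range(n_iter):
        x = X[rng.integers(len(X))]                # sample an input pattern

        # Competition: the closest neuron (Euclidean distance) wins.
        dist = np.linalg.norm(W - x, axis=2)
        wr, wc = np.unravel_index(np.argmin(dist), grid_shape)

        # Cooperation: Gaussian neighborhood over lattice distance, shrinking over time.
        sigma = sigma0 * np.exp(-t / (n_iter / 4))
        lat = (grid_r - wr) ** 2 + (grid_c - wc) ** 2
        h = np.exp(-lat / (2.0 * sigma**2))

        # Adaptation: decaying learning rate; neighbors move toward x, scaled by h.
        eta = eta0 * np.exp(-t / n_iter)
        W += eta * h[..., None] * (x - W)

    return W

# Toy usage: 2-D inputs drawn uniformly from the unit square.
rng = np.random.default_rng(1)
X = rng.random((500, 2))
W = train_som(X, grid_shape=(8, 8), n_iter=500)
```

Because neighbors of the winner are dragged along with it, lattice-adjacent neurons end up with similar weight vectors, which is exactly the topology-preserving property described above.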
SOM Example (1D)
SOM Example (2D)
SOM Demo