Cellular automata, entropy and box-counting dimension
Cellular Automata
Cellular automata (CA) models epitomize the idea that simple rules can generate complex patterns. A CA consists of an array of cells, each with an integer 'state'. On each time step a local update rule is applied to the cells. The update rule defines how a particular cell will update its state as a function of its neighbours' states. The CA is run over time and the evolution of the state is observed.
Elementary cellular automata
An elementary one-dimensional CA is defined by how three cells influence a single cell. For example, the rules can be expressed in binary form, where a 0 means that a particular configuration gives a white output and a 1 denotes a black output. The above rule written in binary is 00011110. Converting this binary number to base 10, we call this rule 30. We thus have a set of 256 different rules for elementary cellular automata, numbered from 0 up to 255.
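
As a rough illustration (not from the original lecture), here is a minimal Python sketch of an elementary CA; the lattice size, the single-black-cell initial condition and the periodic boundary conditions are our own assumptions:

import numpy as np

def step(cells, rule=30):
    # The 8-bit binary expansion of `rule` gives the new colour for each
    # of the 8 possible (left, centre, right) neighbourhoods.
    table = [(rule >> i) & 1 for i in range(8)]
    left = np.roll(cells, 1)     # periodic boundaries (our assumption)
    right = np.roll(cells, -1)
    index = 4 * left + 2 * cells + right   # neighbourhood as an integer 0..7
    return np.array([table[i] for i in index])

# Start from a single black cell and print 16 steps of rule 30.
cells = np.zeros(33, dtype=int)
cells[16] = 1
for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)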
Different rules
Elementary CA simulators are easily found on the internet. For example, a NetLogo implementation can be found at http://[Link]/netlogo/. We will look at rules 254, 250, 150, 90, 110, 30 and 193 in this simulator.
Classifying elementary CA
An empirical observation is that cellular automata can be classified according to the complexity and information produced by the behaviour of the pattern they produce:

Class 1: Fixed; all cells converge to a constant black or white set.
Class 2: Periodic; repeats the same pattern, like a loop.
Class 3: Chaotic; pseudo-random.
Class 4: Complex local structures; exhibits behaviours of both class 2 and class 3, with long-lived, hard-to-classify structures.

This classification has a bit of a feeling of saying that there are three types we roughly understand plus one (class 4) we don't. In particular, classes 1 to 3 can be characterised by entropy and the Lyapunov exponent; this is less true of class 4.
Game of Life
Cellular automata can be extended to have longer-range interactions, e.g. the five nearest neighbours determine the new state of the cell. They can also be extended to two or more dimensions. A particularly well-studied 2D cellular automaton is called the Game of Life. In this CA, each cell checks the state of itself and its eight surrounding neighbours and then sets itself to either alive or dead (hence the name 'Game of Life'). If there are fewer than two alive neighbours, the cell dies. If there are more than three alive neighbours, the cell dies. If there are exactly two alive neighbours, the cell remains in the state it is in. If there are exactly three alive neighbours, the cell becomes alive. This process is continued indefinitely.
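
As a rough sketch, these rules can be written as one synchronous update in Python; the periodic (wrap-around) boundary and the small test grid below are our own assumptions:

import numpy as np

def life_step(grid):
    # Count the eight neighbours of every cell by summing shifted copies
    # of the grid (periodic boundaries are our assumption).
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # Exactly 3 neighbours gives birth; an alive cell with 2 or 3 survives.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A 'blinker': three cells in a row oscillate with period 2.
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
grid = life_step(grid)   # the row becomes a column, and back again next step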
Game of Life is also available at the NetLogo site.
Stable shapes, gliders etc.
There are many interesting patterns generated by this simple rule. One of these is the glider. This configuration travels in the same direction as long as it doesn't crash into anything else. Another, pictured below, is a glider gun, which creates a line of gliders.
Complexity Explorer
After this lecture watch videos:
6.1 Introduction
6.2 Game of Life
6.3 Elementary Cellular Automata
6.4 Cellular Automata as Dynamical Systems
Measuring complexity
• The patterns arising from cellular automata give us a lot to think about.
• We see a very simple set of rules generate very complex patterns. Moreover, many of these patterns are life-like. The problem is defining what we mean by life-like and quantifying this idea.
• Doing this is by no means a completed scientific activity.
Kolmogorov Complexity
One of the first definitions of the complexity of a string of 1s and 0s was proposed by Kolmogorov in the 1960s. Such a string may, for example, be the output of a cellular automaton. The basic idea is that the length of the string's shortest description in terms of a computer program is its complexity. For example, an array of 0s (i.e. 000[Link]) has low complexity, because we can write a program that says 'keep writing 0s'. All periodic arrays have low complexity, since we can write a program that says, for example, "write 0110 ten thousand times".
Kolmogorov Complexity and Entropy
By Kolmogorov's definition, the strings which are most complex are those which are most difficult to compress without losing information. We denote Kolmogorov complexity by K. This returns us to our second example of entropy. Imagine we can divide our original string into a finite set of components and label these components A, B, C, etc. (example on the board). We can now replace A, B, C, etc. with the binary strings 1, 01, 001, 0001, …, with the most common components receiving the shortest encodings. If S is the set of all possible letters and p_s is the probability that letter s appears, then (by the argument in example 2 of entropy) the length of this coding will be at most the Shannon entropy plus 1:
H = -\sum_{s \in S} p_s \log_2(p_s) + 1
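
As a rough sketch, the empirical per-symbol Shannon entropy of a string can be computed as follows; treating each single character as a 'letter' is one possible choice of components:

from collections import Counter
from math import log2

def shannon_entropy(string):
    # Empirical per-symbol entropy H = -sum_s p_s log2(p_s).
    counts = Counter(string)
    n = len(string)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("0" * 1000))       # 0.0 bits per symbol: no surprise
print(shannon_entropy("0110" * 1000))    # 1.0: 0 and 1 are equally likely

Note that "0110" repeated has maximal per-symbol entropy even though its Kolmogorov complexity is low, which is exactly the tension discussed next.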
Kolmogorov Complexity
• Random strings then have the highest complexity!
• This goes against our intuition of what we mean by complexity.
• There are, however, examples of strings with high entropy but low Kolmogorov complexity (for example, the middle line of rule 30).
• Maybe H − K is a good measure of complexity?
• But then finding K for a given string is difficult!
Universal Computing
• One possible solution to defining complexity in cellular automata is to say that a CA rule is complex if it can act as a universal computing machine.
• A universal computing machine (or universal Turing machine) is a computer program that can simulate any other arbitrary computer program to produce the output that program would produce.
• The main point comes down to whether one can construct logic gates with a particular CA rule.
Rule 110
• It turns out that both the 'Game of Life' and even rule 110 from an elementary CA fulfil the criteria for universal computation.
• Rule 110, written in binary, is 01101110.
Rule 110
The pictures below give some feeling for how structures in rule 110 can interact to build a computing device. They show a right-moving, a left-moving, and a stationary structure.
Rule 110
When interacting, depending on how they meet, these structures can either pass through each other or cause the formation of the stationary shape. Putting these sorts of shapes together allows a computer program to be executed, although the complexity of the program itself is likely to be enormous.
Universal Computing
• Although interesting, this classification of CA in terms of how much computation they can do has not really shed much light on the question of how to define and measure complexity.
Complexity Explorer
After this lecture watch videos:
4.3 Entropy and statistical mechanics
4.4 Shannon Information
6.5 Cellular Automata as Computers
Box counting dimension
A property of the patterns made by "complex" CA is that they appear to repeat on very different scales. For example, rule 90 gives the following pattern.
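
The pattern is easy to regenerate: under rule 90 a cell's new state is the XOR of its two neighbours. A minimal Python sketch (the grid width, number of steps and periodic boundaries are our own choices):

import numpy as np

n = 32
row = np.zeros(2 * n + 1, dtype=int)
row[n] = 1                                # single black cell in the middle
for _ in range(n):
    print(''.join('#' if c else ' ' for c in row))
    row = np.roll(row, 1) ^ np.roll(row, -1)   # rule 90: left XOR right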
Box counting dimension
This structure is identical to a geometric shape known as the Sierpinski triangle, which can also be constructed by the following iteration. The Sierpinski triangle is one example of a fractal: "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole" (Barnsley, Michael F., Fractals Everywhere, 1993).
Box counting dimension
An important property of the Sierpinski triangle, and of fractals in general, is self-similarity. As we zoom in on the Sierpinski triangle we see another identical triangle, and so on forever. For the Sierpinski triangle constructed using the iterative method, this is a consequence of the way we constructed it. For the cellular automata construction we did not directly encode the self-similarity in the resulting output, but we can now use that self-similarity to analyse the structure.

The main tool for categorising fractals is their box-counting dimension. The basic idea is this: we cover an object with square boxes of some particular size and ask how many such boxes are needed to cover all the black parts of the object.
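
A rough Python sketch of this procedure, applied to a rule 90 pattern (the image size and the box sizes are our own choices; the dimension is estimated as the slope of log N against log(1/size)):

import numpy as np

def box_count(img, s):
    # Number of s-by-s boxes containing at least one black (non-zero) cell.
    h, w = img.shape
    return sum(img[i:i+s, j:j+s].any()
               for i in range(0, h, s) for j in range(0, w, s))

# Build 128 rows of rule 90: a Sierpinski-like image.
n = 128
img = np.zeros((n, 2 * n), dtype=int)
img[0, n] = 1
for t in range(1, n):
    img[t] = np.roll(img[t - 1], 1) ^ np.roll(img[t - 1], -1)

# Fit the slope of log N(size) against log(1/size).
sizes = np.array([1, 2, 4, 8, 16, 32])
counts = np.array([box_count(img, s) for s in sizes])
slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
print(slope)

For the Sierpinski triangle the count roughly triples each time the box size halves, so the slope comes out near log 3 / log 2 ≈ 1.585.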
Fractals in Nature
Complexity Explorer
After this lecture watch videos:
3.2 The Koch curve
3.3 Fractal dimension
3.4 Examples of fractal dimension
3.6 Box counting dimension