
Tuesday, October 13, 2015

Game of Life with Python

The game of life consists of a grid of cells, where each cell can be dead or alive, and the state of the cells can change at each time step. The state of a cell at time step t depends on the state of the grid at time t-1 and is determined by a very simple rule:

A cell is alive if it's already alive and has two living neighbours, or if it has three living neighbours.

We call the grid the universe and the living cells the population. At each time step the population evolves and we get a new generation. The evolution of the population is a fascinating process to observe because it can generate an incredible variety of patterns (and also puzzles!).

Implementing the game of life in Python is quite straightforward:
import numpy as np

def life(X, steps):
    """
     Conway's Game of Life.
     - X, matrix with the initial state of the game.
     - steps, number of generations.
    """
    def roll_it(x, y):
        # rolls the matrix X in a given direction
        # x=1, y=0 on the left;  x=-1, y=0 right;
        # x=0, y=1 top; x=0, y=-1 down; x=1, y=1 top left; ...
        return np.roll(np.roll(X, y, axis=0), x, axis=1)

    for _ in range(steps):
        # count the number of neighbours 
        # the universe is considered toroidal
        Y = roll_it(1, 0) + roll_it(0, 1) + roll_it(-1, 0) \
            + roll_it(0, -1) + roll_it(1, 1) + roll_it(-1, -1) \
            + roll_it(1, -1) + roll_it(-1, 1)
        # game of life rules
        X = np.logical_or(np.logical_and(X, Y == 2), Y == 3)
        X = X.astype(int)
        yield X
The function life takes as input a matrix X which represents the universe of the game, where each cell is alive if its corresponding element has value 1 and dead if it has value 0. Being a generator, the function yields the next steps generations one at a time. At each time step the number of neighbours of each cell is counted and the rule of the game is applied. Now we can create a universe with an initial state:
X = np.zeros((40, 40)) # 40 by 40 dead cells

# R-pentomino
X[23, 22:24] = 1
X[24, 21:23] = 1
X[25, 22] = 1
This initial state is known as the R-pentomino. It consists of five living cells organized as in the following diagram, where X marks a living cell and a dot a dead one (the layout matches the indices used above):
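.XX
XX.
.X.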

It is by far the most active polyomino with fewer than six cells; all of the others stabilize in at most 10 generations. Let's create a video to visualize the evolution of the system:
from matplotlib import pyplot as plt
import matplotlib.animation as manimation

FFMpegWriter = manimation.writers['ffmpeg']
metadata = dict(title='Game of life', artist='JustGlowing')
writer = FFMpegWriter(fps=10, metadata=metadata)

fig = plt.figure()
fig.patch.set_facecolor('black')
with writer.saving(fig, "game_of_life.mp4", 200):
    plt.spy(X)
    plt.axis('off')
    writer.grab_frame()
    plt.clf()
    for x in life(X, 800):
        plt.spy(x)
        plt.axis('off')
        writer.grab_frame()
        plt.clf()
The result is as follows:


In the video we can notice a few very well known patterns like gliders and blinkers. Also an exploding pattern at 0:55!
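To make one of those patterns easier to spot, we can seed the universe with a single glider instead of the R-pentomino (the glider is a standard five-cell pattern; the placement below is arbitrary):

X = np.zeros((40, 40))

# glider
X[1, 2] = 1
X[2, 3] = 1
X[3, 1:4] = 1

Passing this X to life yields a glider that travels diagonally and, since the universe is toroidal, reappears on the opposite side of the grid when it reaches an edge.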

Saturday, May 25, 2013

A colorful Trefoil Knot

Some time ago I was looking for a cool way to visualize parametric equations and I started working on the trefoil knot. It is the simplest example of a nontrivial knot and it can be defined with the following parametric equations:

x(phi) = sin(phi) + 2*sin(2*phi)
y(phi) = cos(phi) - 2*cos(2*phi)
z(phi) = -sin(3*phi)
After a while this snippet came out:
from numpy import linspace,pi,cos,sin
phi = linspace(0,2*pi,100)
x = sin(phi)+2*sin(2*phi)
y = cos(phi)-2*cos(2*phi)
z = -sin(3*phi)

from pylab import scatter,subplot,cm,show
from numpy import abs
s = abs((y+5)*10) # marker sizes derived from the y coordinate
scatter(x,y,c=x,s=s,edgecolor='w',cmap=cm.gist_rainbow)
scatter(x/1.4,y/1.2,c=x,s=s,edgecolor='w',cmap=cm.gist_rainbow_r)
scatter(x/1.8,y/1.5,c=x,s=s,edgecolor='w',cmap=cm.gist_rainbow)
show()
So, here's my colorful version of the trefoil knot:
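The z coordinate is computed but never used by the 2-D scatter above. To see the actual knot in space, here's a minimal sketch of a 3-D version with matplotlib's mplot3d toolkit (same parametric equations, a plain line plot instead of the layered scatters):

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from numpy import linspace,pi,cos,sin

phi = linspace(0,2*pi,100)
x = sin(phi)+2*sin(2*phi)
y = cos(phi)-2*cos(2*phi)
z = -sin(3*phi)

fig = plt.figure()
ax = fig.gca(projection='3d') # in newer matplotlib: fig.add_subplot(projection='3d')
ax.plot(x,y,z,color='m')
plt.show()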

Monday, February 18, 2013

Visualizing the tangent

The tangent to a curve is the straight line that touches the curve at a given point. One reason that tangents are so important is that they give the slope of the curve at the point of contact. So, if your curve represents a time series, you can tell the rate of change of your values just by looking at the tangent.
Suppose that a curve is given as the graph of a function, y = f(x). The slope of the curve at the point (a, f(a)) is equal to the derivative of f at a:

m = f'(a)

and the equation of the tangent line can be stated as follows:

y = f(a) + f'(a)*(x - a)
With this in mind we can write a snippet of code which visualizes the tangent of a curve:
from numpy import sin,linspace,power
from pylab import plot,show

def f(x): # sample function
 return x*sin(power(x,2))

# evaluation of the function
x = linspace(-2,4,150)
y = f(x)

a = 1.4
h = 0.1
fprime = (f(a+h)-f(a))/h # forward difference approximation of f'(a)
tan = f(a)+fprime*(x-a)  # tangent line at x=a

# plot of the function and the tangent
plot(x,y,'b',a,f(a),'om',x,tan,'--r')
show()
The result is as follows:


The function f is plotted as the blue curve and the tangent is the dashed red line. Looking at the graph, it is easy to see that the tangent gives us a way to visualize the slope of the curve at a point. Let's see how this can help us in a practical example. Consider the fresh potatoes consumer price index between the years 1949 and 2006:
from numpy import arange
# Fresh potatoes: Annual Consumer price index, 1949-2006
# obtained at https://explore.data.gov/Agriculture/U-S-Potato-Statistics/cgk7-6ccj
price_index = [21.0,17.6,19.3,28.9,21.1,20.5,22.1,26.4,22.3,24.4,
24.6,28.0,24.7,24.9,25.7,31.6,39.1,31.3,31.3,32.1,34.4,38.0,36.7,
39.6,58.8,71.8,57.7,62.6,63.8,66.3,63.6,81.0,109.5,92.7,91.3,116.0,
101.5,96.1,116.0,119.1,153.5,162.6,144.6,141.5,154.6,174.3,174.7,
180.6,174.1,185.2,193.1,196.3,202.3,238.6,228.1,231.1,247.7,273.1]
t = arange(1949,2007)
From calculus we know that the derivative is positive when f is increasing, negative when f is decreasing and zero at stationary points. So, if we look at the tangent to the curve of the consumer price index in a certain year, it has a positive slope when the price index is increasing, a negative slope when the price index is decreasing, and it is flat when the trend is about to change. Using an interpolation of the data we loaded above, we can plot the tangent in any year we want:
from scipy import interpolate

def draw_tangent(x,y,a):
 # interpolate the data with a spline
 spl = interpolate.splrep(x,y)
 small_t = arange(a-5,a+5)
 fa = interpolate.splev(a,spl,der=0)     # f(a)
 fprime = interpolate.splev(a,spl,der=1) # f'(a)
 tan = fa+fprime*(small_t-a) # tangent
 plot(a,fa,'om',small_t,tan,'--r')

draw_tangent(t,price_index,1991)
draw_tangent(t,price_index,1998)

plot(t,price_index,alpha=0.5)
show()



The graph shows the data contained in the array price_index along with the tangent to the curve in the years 1991 and 1998. Using the tangent, this graph emphasizes that the price index is decreasing during the years around 1991 and increasing around 1998.
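Pushing the same idea further, a short sketch (reusing the spline machinery of draw_tangent) can evaluate the sign of the interpolated derivative over the whole period and list the years in which the index was falling:

from numpy import arange, sign
from scipy import interpolate

spl = interpolate.splrep(t,price_index)
trend = sign(interpolate.splev(t,spl,der=1)) # +1 rising, -1 falling
print 'falling years:', t[trend < 0]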

Find out more about derivative approximation in the post Finite differences with Toeplitz matrix.

Friday, March 16, 2012

SVD decomposition with numpy

The SVD decomposition is a factorization of a matrix, with many useful applications in signal processing and statistics. In this post we will see
  • how to compute the SVD decomposition of a matrix A using numpy,
  • how to compute the inverse of A using the matrices computed by the decomposition,
  • and how to solve a linear equation system Ax=b using the SVD.
The SVD decomposition of a matrix A is of the form

A = U*Sigma*V^T

where U and V are orthogonal matrices and Sigma is a diagonal matrix containing the singular values of A. Since U and V are orthogonal (this means that U^T*U=I and V^T*V=I) we can write the inverse of A as (see Solving overdetermined systems with the QR decomposition for the tricks)

A^-1 = V*Sigma^-1*U^T
So, let's start computing the factorization and the inverse
from numpy import *

A = floor(random.rand(4,4)*20-10) # generating a random
b = floor(random.rand(4,1)*20-10) # system Ax=b

U,s,V = linalg.svd(A) # SVD decomposition of A (note: linalg.svd returns V^T as its third output)

# computing the inverse using pinv
pinv = linalg.pinv(A)
# computing the inverse using the SVD decomposition
pinv_svd = dot(dot(V.T,linalg.inv(diag(s))),U.T)

print "Inverse computed by lingal.pinv()\n",pinv
print "Inverse computed using SVD\n",pinv_svd
As we can see, the output shows that pinv and pinv_svd are equal
Inverse computed by linalg.pinv()
[[ 0.06578301 -0.04663721  0.0436917   0.089838  ]
 [ 0.15243004  0.044919   -0.03681885  0.00294551]
 [ 0.18213058 -0.01718213  0.06872852 -0.07216495]
 [ 0.03976436  0.09867452  0.03387334 -0.04270987]]
Inverse computed using SVD
[[ 0.06578301 -0.04663721  0.0436917   0.089838  ]
 [ 0.15243004  0.044919   -0.03681885  0.00294551]
 [ 0.18213058 -0.01718213  0.06872852 -0.07216495]
 [ 0.03976436  0.09867452  0.03387334 -0.04270987]]
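Rather than comparing the two matrices by eye, we can also ask numpy directly; the following check should print True up to the default floating point tolerances:
print allclose(pinv,pinv_svd)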
Now, we can solve Ax=b using the inverse:

x = A^-1*b

or solving the system

U*Sigma*V^T*x = b

Multiplying by U^T we obtain

Sigma*V^T*x = U^T*b

Then, letting c be U^T*b and w be V^T*x, we see

Sigma*w = c

Since Sigma is diagonal, we can easily obtain w solving the system above. And finally, we can obtain x solving

V^T*x = w
Let's compare the results of those methods:

x = linalg.solve(A,b) # solve Ax=b using linalg.solve

xPinv = dot(pinv_svd,b) # solving Ax=b computing x = A^-1*b

# solving Ax=b using the equations above
c = dot(U.T,b) # c = U^T*b
w = linalg.solve(diag(s),c) # solving Sigma*w = c
xSVD = dot(V.T,w) # x = V*w

print "Ax=b solutions compared"
print x.T
print xSVD.T
print xPinv.T
As expected, we have the same solutions:
Ax=b solutions compared
[[ 0.13549337 -0.37260677  1.62886598 -0.09720177]]
[[ 0.13549337 -0.37260677  1.62886598 -0.09720177]]
[[ 0.13549337 -0.37260677  1.62886598 -0.09720177]]

Tuesday, February 28, 2012

Finite differences with Toeplitz matrix

A Toeplitz matrix is a matrix in which each descending diagonal from left to right is constant. In this post we will see how to approximate the derivative of a function f(x) as a matrix-vector product between a Toeplitz matrix and a vector of equally spaced values of f. Let's see how to generate the matrices we need using the function toeplitz(...) provided by scipy:
from numpy import *
from scipy.linalg import toeplitz
import pylab

def forward(size):
 """ returns a toeplitz matrix
   for forward differences
 """
 r = zeros(size)
 c = zeros(size)
 r[0] = -1
 r[size-1] = 1
 c[1] = 1
 return toeplitz(r,c)

def backward(size):
 """ returns a toeplitz matrix
   for backward differences
 """
 r = zeros(size)
 c = zeros(size)
 r[0] = 1
 r[size-1] = -1
 c[1] = -1
 return toeplitz(r,c).T

def central(size):
 """ returns a toeplitz matrix
   for central differences
 """
 r = zeros(size)
 c = zeros(size)
 r[1] = .5
 r[size-1] = -.5
 c[1] = -.5
 c[size-1] = .5
 return toeplitz(r,c).T

# testing the functions printing some 4-by-4 matrices
print 'Forward matrix'
print forward(4)
print 'Backward matrix'
print backward(4)
print 'Central matrix'
print central(4)
The result of the test above is as follows:
Forward matrix
[[-1.  1.  0.  0.]
 [ 0. -1.  1.  0.]
 [ 0.  0. -1.  1.]
 [ 1.  0.  0. -1.]]
Backward matrix
[[ 1.  0.  0. -1.]
 [-1.  1.  0.  0.]
 [ 0. -1.  1.  0.]
 [ 0.  0. -1.  1.]]
Central matrix
[[ 0.   0.5  0.  -0.5]
 [-0.5  0.   0.5  0. ]
 [ 0.  -0.5  0.   0.5]
 [ 0.5  0.  -0.5  0. ]]
We can observe that the matrix-vector product between those matrices and the vector of equally spaced values of f(x) implements, respectively, the following equations:

Forward difference: f'(x) ≈ ( f(x+h) - f(x) ) / h
Backward difference: f'(x) ≈ ( f(x) - f(x-h) ) / h
And central difference: f'(x) ≈ ( f(x+h) - f(x-h) ) / (2*h)

where h is the step size between the samples. Those equations are called Finite Differences and they give us an approximate derivative of f. So, let's approximate some derivatives!
x = linspace(0,10,15)
y = cos(x) # recall, the derivative of cos(x) is -sin(x)
# we need the step h to compute f'(x) 
# because the product gives h*f'(x)
h = x[1]-x[0]
# generating the matrices
Tf = forward(15)/h 
Tb = backward(15)/h
Tc = central(15)/h

pylab.subplot(211)
# approximation and plotting
pylab.plot(x,dot(Tf,y),'g',x,dot(Tb,y),'r',x,dot(Tc,y),'m')
pylab.plot(x,-sin(x),'b--',linewidth=3)
pylab.axis([0,10,-1,1])

# the same experiment with more samples (h is smaller)
x = linspace(0,10,50)
y = cos(x)
h = x[1]-x[0]
Tf = forward(50)/h
Tb = backward(50)/h
Tc = central(50)/h

pylab.subplot(212)
pylab.plot(x,dot(Tf,y),'g',x,dot(Tb,y),'r',x,dot(Tc,y),'m')
pylab.plot(x,-sin(x),'b--',linewidth=3)
pylab.axis([0,10,-1,1])
pylab.legend(['Forward', 'Backward', 'Central', 'True f prime'],loc=4)
pylab.show()
The resulting plot would appear as follows:


As the theory suggests, the approximation is better when h is smaller, and the central differences are more accurate (note that they have a higher order of accuracy with respect to the forward and backward ones).
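We can quantify this with a quick sketch that reuses the functions defined above: measure the maximum error against the true derivative -sin(x), excluding the first and last samples where the wrap-around entries of the matrices spoil the approximation:

n = 100
x = linspace(0,10,n)
h = x[1]-x[0]
y = cos(x)
err_forward = abs(dot(forward(n)/h,y) + sin(x))[1:-1].max()
err_central = abs(dot(central(n)/h,y) + sin(x))[1:-1].max()
print 'max forward error:', err_forward
print 'max central error:', err_central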

Saturday, January 21, 2012

Monte Carlo estimate for pi with numpy

In this post we will use a Monte Carlo method to approximate pi. The idea behind the method that we are going to see is the following:

Draw the unit square and the unit circle. Consider only the part of the circle inside the square and pick uniformly a large number of points at random over the square. The part of the circle inside the square (a quarter of the circle) has area pi/4, while the square has area 1. So, the fraction of random points that fall inside the circle quadrant is proportional to its area and approximates pi/4. Multiplying this ratio by 4 gives us an estimate of pi.

Let's see the Python script that implements the method discussed above using numpy's indexing facilities:
from pylab import plot,show,axis
from numpy import random,sqrt,pi

# scattering n points over the unit square
n = 1000000
p = random.rand(n,2)

# counting the points inside the unit circle
idx = sqrt(p[:,0]**2+p[:,1]**2) < 1

plot(p[idx,0],p[idx,1],'b.') # points inside
plot(p[~idx,0],p[~idx,1],'r.') # points outside
axis([-0.1,1.1,-0.1,1.1]) 
show()

# estimation of pi
print '%0.16f' % (sum(idx).astype('double')/n*4),'result'
print '%0.16f' % pi,'real pi'
The program will print the pi approximation on the standard out:
3.1457199999999998 result
3.1415926535897931 real pi
and will show a graph with the generated points:



Note that only three lines of code are needed to estimate pi!
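To get a feel for how the estimate improves as the number of points grows (the error of a Monte Carlo estimate is expected to shrink roughly like 1/sqrt(n)), here's a small sketch:

for n in [100,10000,1000000]:
 p = random.rand(n,2)
 estimate = 4.0*(sqrt(p[:,0]**2+p[:,1]**2) < 1).sum()/n
 print n, estimate, abs(estimate-pi)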

Saturday, January 14, 2012

How to plot a function of two variables with matplotlib

In this post we will see how to visualize a function of two variables in two ways. First, we will create an intensity image of the function and, second, we will use the 3D plotting capabilities of matplotlib to create a shaded surface plot. So, let's go with the code:
from numpy import exp,arange
from pylab import meshgrid,cm,imshow,contour,clabel,colorbar,axis,title,show

# the function that I'm going to plot
def z_func(x,y):
 return (1-(x**2+y**3))*exp(-(x**2+y**2)/2)
 
x = arange(-3.0,3.0,0.1)
y = arange(-3.0,3.0,0.1)
X,Y = meshgrid(x, y) # grid of points
Z = z_func(X, Y) # evaluation of the function on the grid

im = imshow(Z,cmap=cm.RdBu) # drawing the function
# adding the Contour lines with labels
cset = contour(Z,arange(-1,1.5,0.2),linewidths=2,cmap=cm.Set2)
clabel(cset,inline=True,fmt='%1.1f',fontsize=10)
colorbar(im) # adding the colorbar on the right
# latex fashion title
title('$z=(1-(x^2+y^3)) e^{-(x^2+y^2)/2}$')
show()
The script would have the following output:



And now we are going to use the values stored in X,Y and Z to make a 3D plot using the mplot3d toolkit. Here's the snippet:
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.gca(projection='3d') # in newer matplotlib: fig.add_subplot(projection='3d')
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, 
                      cmap=cm.RdBu,linewidth=0, antialiased=False)

ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))

fig.colorbar(surf, shrink=0.5, aspect=5)

plt.show()
And this is the result:
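A nice variation is to project contour lines on a plane below the surface; this sketch assumes the X, Y and Z arrays from above are still in scope and uses the zdir and offset arguments of Axes3D.contour:

fig = plt.figure()
ax = fig.gca(projection='3d') # in newer matplotlib: fig.add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1,
                cmap=cm.RdBu, linewidth=0, antialiased=False)
# contour lines projected on the plane z=-2
ax.contour(X, Y, Z, zdir='z', offset=-2, cmap=cm.RdBu)
ax.set_zlim(-2, Z.max())
plt.show()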

Tuesday, January 3, 2012

Fixed point iteration

A fixed point for a function is a point at which the value of the function does not change when the function is applied. More formally, x is a fixed point for a given function f if

x = f(x)

and the fixed point iteration

x_{n+1} = f(x_n)

converges to a fixed point: if the sequence converges and f is continuous, its limit is a fixed point of f.
The following function implements the fixed point iteration algorithm:
from pylab import plot,show
from numpy import array,linspace,sqrt,sin
from numpy.linalg import norm

def fixedp(f,x0,tol=10e-5,maxiter=100):
 """ Fixed point algorithm """
 e = 1
 itr = 0
 xp = []
 while(e > tol and itr < maxiter):
  x = f(x0)      # fixed point equation
  e = norm(x0-x) # error at the current step
  x0 = x
  xp.append(x0)  # save the solution of the current step
  itr = itr + 1
 return x,xp
Let's find the fixed point of the square root function starting from x = 0.5 and plot the result:
f = lambda x : sqrt(x)

x_start = .5
xf,xp = fixedp(f,x_start)

x = linspace(0,2,100)
y = f(x)
plot(x,y,xp,f(xp),'bo',
     x_start,f(x_start),'ro',xf,f(xf),'go',x,x,'k')
show()
The result of the program would appear as follows:
The red dot is the starting point, the blue dots are the sequence x_1, x_2, x_3, ... and the green dot is the fixed point found.
In a similar way, we can compute the fixed point of a function of multiple variables:
# 2 variables function
def g(x):
 x[0] = 1/4*(x[0]*x[0] + x[1]*x[1]) # note: in Python 2, 1/4 is integer division and yields 0
 x[1] = sin(x[0]+1)
 return array(x)

x,xf = fixedp(g,[0, 1])
print '   x =',x
print 'f(x) =',g(xf[len(xf)-1])
In this case g is a function of two variables and x is a vector, so the fixed point is a vector and the output is as follows:
   x = [ 0.          0.84147098]
f(x) = [ 0.          0.84147098]
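As a cross-check, scipy ships an accelerated fixed point solver, scipy.optimize.fixed_point. Here's a minimal sketch on the square root example, which should agree with the value found above:

from scipy.optimize import fixed_point
print fixed_point(lambda x: sqrt(x), .5)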

Thursday, December 8, 2011

Lissajous curves

And after the Epitrochoids, we're going to see another family of wonderful figures: the Lissajous curves. The equations that describe these curves are the following:

x(t) = sin(a*t + δ)
y(t) = sin(b*t)

The curves vary with respect to the parameter t and their appearance is determined by the ratio a/b and by the value of δ.
As usual, I made a snippet to visualize them:
from numpy import sin,pi,linspace
from pylab import plot,show,subplot

a = [1,3,5,3] # plotting the curves for
b = [1,5,7,4] # different values of a/b
delta = pi/2
t = linspace(-pi,pi,300)

for i in range(0,4):
 x = sin(a[i] * t + delta)
 y = sin(b[i] * t)
 subplot(2,2,i+1)
 plot(x,y)

show()
This is the result (note that the first curve, with a = b = 1 and δ = π/2, is a circle):

Thursday, November 17, 2011

Fun with Epitrochoids

An epitrochoid is a curve traced by a point attached to a circle of radius r rolling around the outside of a fixed circle of radius R, where the point is at a distance d from the center of the rolling circle [Ref]. Lately I found the epitrochoid's parametric equations, which for the script below take the form:

x(t) = (R - r)*cos(t) - d*cos((R + r)*t / r)
y(t) = (R - r)*sin(t) - d*sin((R + r)*t / r)
So, I decided to plot them with pylab. This is the script I made:
from numpy import sin,cos,linspace,pi
import pylab

# curve parameters
R = 14
r = 1
d = 18

t = linspace(0,2*pi,300)

# Epitrochoid parametric equations
x = (R-r)*cos(t)-d*cos( (R+r)*t / r )
y = (R-r)*sin(t)-d*sin( (R+r)*t / r )

pylab.plot(x,y,'r')
pylab.axis('equal')
pylab.show()
And this is the result
Isn't it fascinating? :)

Thursday, September 22, 2011

The sampling theorem explained with numpy

The sampling theorem states that a continuous signal x(t) bandlimited to B Hz can be recovered from its samples x[n] = x(n*T), where n is an integer, if T is less than or equal to 1/(2B), without loss of any information. We call 2B the Nyquist rate.
Sampling at a rate below the Nyquist rate is called undersampling and it leads to the aliasing effect. Let's observe the aliasing effect with the following script:
from numpy import linspace,cos,pi,ceil,floor,arange
from pylab import plot,show,axis

# a signal bandlimited to 40 Hz, sampled very densely
# to approximate its continuous version
f = 40 # Hz
tmin = -0.3
tmax = 0.3
t = linspace(tmin, tmax, 400)
x = cos(2*pi*t) + cos(2*pi*f*t) # the signal
plot(t, x)

# sampling the signal with a sampling rate of 80 Hz
# in this case, we are using the Nyquist rate.
T = 1/80.0
nmin = ceil(tmin / T)
nmax = floor(tmax / T)
n = arange(nmin,nmax)
x1 = cos(2*pi*n*T) + cos(2*pi*f*n*T)
plot(n*T, x1, 'bo')

# sampling the signal with a sampling rate of 35 Hz
# note that 35 Hz is under the Nyquist rate.
T = 1/35.0
nmin = ceil(tmin / T)
nmax = floor(tmax / T)
n = arange(nmin,nmax)
x2 = cos(2*pi*n*T) + cos(2*pi*f*n*T)
plot(n*T, x2, '-r.',markersize=8)

axis([-0.3, 0.3, -1.5, 2.3])
show()
The following figure is the result:
The blue curve is the original signal, the blue dots are the samples obtained at the Nyquist rate and the red dots are the samples obtained at 35 Hz. It's easy to see that the blue samples are enough to recover the blue curve, while the red ones are not enough to capture the oscillations of the signal.
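To see the recovery side of the theorem in action, we can rebuild an approximation of the continuous signal from the Nyquist-rate samples with the Whittaker-Shannon interpolation formula, x(t) = sum over n of x[n]*sinc((t - n*T)/T). Here's a sketch reusing the variables of the script above (the reconstruction degrades near the window edges because we only have a finite number of samples):

from numpy import sinc, newaxis

T = 1/80.0 # back to the Nyquist rate
n = arange(ceil(tmin / T), floor(tmax / T))
samples = cos(2*pi*n*T) + cos(2*pi*f*n*T)
# Whittaker-Shannon interpolation formula
x_rec = (samples * sinc((t[:,newaxis] - n*T) / T)).sum(axis=1)
plot(t, x_rec, 'g')
show()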

Wednesday, August 10, 2011

Applying a Moebius transformation to an image

The Moebius transformation is defined as

f(z) = (a*z + b) / (c*z + d)

where z is a complex variable different from -d/c, and a, b, c, d are complex numbers.
In this post we will see how to apply the Moebius transform to an image.
Given a set of three distinct points on the complex plane z1, z2, z3 and a second set of distinct points w1, w2, w3, there exists precisely one Moebius transformation f(z) which maps the zs to the ws. So, in the first step we have to determine f(z) from the given sets of points. We will use the explicit determinant formula to compute the coefficients:
from pylab import *
from numpy import *

zp=[157+148j, 78+149j, 54+143j]; # (zs) the complex point zp[i]
wa=[147+143j, 78+140j, 54+143j]; # (ws) will be in wa[i]

# transformation parameters
a = linalg.det([[zp[0]*wa[0], wa[0], 1], 
                [zp[1]*wa[1], wa[1], 1], 
                [zp[2]*wa[2], wa[2], 1]]);

b = linalg.det([[zp[0]*wa[0], zp[0], wa[0]], 
                [zp[1]*wa[1], zp[1], wa[1]], 
                [zp[2]*wa[2], zp[2], wa[2]]]);

c = linalg.det([[zp[0], wa[0], 1], 
                [zp[1], wa[1], 1], 
                [zp[2], wa[2], 1]]);

d = linalg.det([[zp[0]*wa[0], zp[0], 1], 
                [zp[1]*wa[1], zp[1], 1], 
                [zp[2]*wa[2], zp[2], 1]]);
Now we can apply the transformation to the pixel coordinates.
img = fliplr(imread('mondrian.jpg')) # load an image

r = ones((500,500,3),dtype=uint8)*255 # empty-white image
for i in range(img.shape[0]):
 for j in range(img.shape[1]):
  z = complex(i,j)
  # transformation applied to the pixels coordinates
  w = (a*z+b)/(c*z+d)
  r[int(real(w)),int(imag(w)),:] = img[i,j,:] # copy of the pixel

subplot(1,2,1)
title('Original Mondrian')
imshow(img)
subplot(1,2,2)
title('Mondrian after the transformation')
imshow(roll(r,120,axis=1))
show()
And this is the result.
This is another image obtained by changing the vectors zp and wa.
As we can see, the result of the transformation has some empty areas that we should fill using interpolation. So let's look at another way to implement a geometric transformation on an image. The following code uses scipy.ndimage.geometric_transform to implement the inverse Moebius transform:
from scipy.ndimage import geometric_transform

def shift_func(coords):
 """ Define the moebius transformation, though backwards """
 #turn the first two coordinates into an imaginary number
 z = coords[0] + 1j*coords[1]
 w = (d*z-b)/(-c*z+a) #the inverse mobius transform
 #take the color along for the ride
 return real(w),imag(w),coords[2]
    
r = geometric_transform(img,shift_func,cval=255,output_shape=(450,350,3)) 
It gives the following result:
We can see that geometric_transform provides the pixel interpolation automatically.
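A quick way to validate the coefficients computed with the determinant formula is to check that f(z) really maps each point of zp onto the corresponding point of wa:

# each printed pair should match (up to floating point error)
for z, w in zip(zp, wa):
 print (a*z+b)/(c*z+d), w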


Monday, July 11, 2011

Prime factor decomposition of a number

The following function computes the prime factors of an integer:
def factors(n):
 result = []
 for i in range(2,n+1): # test all integers between 2 and n
  while n % i == 0: # as long as i divides n,
   result.append(i) # i is a prime factor of n
   n = n // i
  if n == 1:
   return result

# test
print factors(90)
The result will be
[2, 3, 3, 5]
This means that 90 is equal to 2*3*3*5.
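As a quick sanity check, multiplying the returned factors together should give back the original number (reduce is a builtin in Python 2):

print reduce(lambda i, j: i*j, factors(90)) == 90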