Cities are interesting. Some people enjoy living in cities, others avoid them and prefer the countryside. Whatever your preference, one thing is true: cities generate a huge amount of data, and these data are a good source for analyses.
The following is a brief tutorial to get quick insight into Python data analysis and visualization. It is meant as an introduction for beginners1. Much of this is already documented on the web, sometimes in more detail. So why another guide?
Many guides treat every topic separately and use a new dataset every time a new data problem is presented. There are however advantages to using the same dataset for learning a large variety of analyses. One advantage is time-related: all attention can be focused on coding, without losing "switching" time on understanding new datasets. More importantly, working on a single dataset lets one experience the whole evolution from preprocessing to visualization.
The goal of this tutorial is not to solve an important data science problem; for that, the presented data is in any case too small. The goal is to use an intuitive dataset to learn Python, intuitive in the sense that it contains data that everyone can understand without prior domain knowledge. It should therefore be suitable for someone who is just starting in data science.
We start this tutorial by creating data, then move on to loading data, data munging, and finally several visualizations. Do not hesitate to contact me if you have any questions or suggestions: [email protected]
First start up IPython or Jupyter notebook. An installation guide can be found at the following page2.
The current tutorial treats a great variety of data types, various preprocessing steps (for example selecting data, merging data, replacing data, finding missing values), and data visualization (for example boxplots, barplots, pie charts, geographic mapping) using different packages (for example Matplotlib, Seaborn, Plotly).
In Python, several abbreviations are conventionally used, such as pd for pandas and np for NumPy. The abbreviations you use for imported packages are arbitrary, but in this tutorial I use those that are most common on platforms such as StackOverflow. The following packages are needed throughout the tutorial.
import os
import csv
import pandas as pd
import numpy as np
from datetime import datetime
The following packages are needed for the visualizations with Matplotlib, Seaborn, and Plotly:
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib import cm
import seaborn as sns
import plotly
import chart_studio.plotly as py  # formerly plotly.plotly; moved to the chart-studio package
import plotly.graph_objs as go
The following packages will be used for geographic mapping:
from mpl_toolkits.basemap import Basemap
from matplotlib.colors import rgb2hex
from matplotlib.patches import Polygon
from geopy.geocoders import Nominatim
import math
There is no problem of continuity if any of the visualization or geographic mapping parts are skipped.
We are going to use data from European cities, consisting of categorical and numerical data. The data were obtained from Wikipedia. Temperatures are daily means in Celsius. Population is urban population. Altitude is the highest level reported.
We are going to create a DataFrame manually; later we will see how to load one from a file. After running the following code, your DataFrame df1 is created.
data = {'Place': ['Amsterdam', 'Barcelona', 'Paris', 'Geneva', 'Munich', 'Athens', 'Vienna'],
'Temp_Jan': [3.4, 11.8, 5.0, 1.5, 0.3, 9.9, 1.2],
'Temp_Jul': [17.6, 25.7, 20.6, 20.2, 19.4, 28.5, 19.1],
'Language': ['Dutch', 'Catalan and Spanish', 'French', 'French', 'German', 'Greek', 'German'],
'Altitude': [-2, 12, np.nan, 375, 520, 338, 542],
'MeasureDate':[np.nan, np.nan, '01-01-2013', '01-12-2015', '31-12-2015', np.nan, '01-01-2017'],
'Population': [1351587,4740000,10601122,198072,1450381, 3090508, 2600000]}
df1 = pd.DataFrame(data, columns = ['Place', 'Temp_Jan', 'Temp_Jul', 'Language', 'Altitude','MeasureDate', 'Population'])
Inspecting the data will show us the following table. From now on, the method .head() will often be used; by default it shows the first five rows, and putting a number between the parentheses changes the number of rows. The method .tail() does exactly the same, but starting from the end.
df1.head(4)
To select only one column or variable, use the following:
df1['Temp_Jan'].head(3)
Another notation that is used and that gives the same result:
df1.Temp_Jan.head(3)
If you desire selecting multiple columns, do the following:
df1.loc[:,['Temp_Jan','Temp_Jul']].head(3)
To select a row, for example the first row, do the following. Python starts indexing from 0 (0=first row, 1=second row, etc.):
df1.iloc[0]
To select the first column:
df1.iloc[:, [0]].head(3)
To select multiple rows:
df1[0:2]
Or use the following code:
df1.iloc[[0, 1]]
To select multiple columns:
df1.iloc[:, [0, 1]].head(3)
To check the data type of df1:
type(df1)
Or to check the variable types:
df1.dtypes
Convert the variable Language into a categorical variable.
df1['Language'] = df1['Language'].astype('category')
After this, we verify that we have the right type.
df1['Language'].dtypes
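As a small standalone illustration (with made-up values, not the tutorial data), a categorical Series also stores its set of categories, which can be inspected via the .cat accessor:

```python
import pandas as pd

# converting a string Series to a categorical one
s = pd.Series(['German', 'French', 'German']).astype('category')

print(s.dtype)                 # category
print(list(s.cat.categories))  # the sorted unique values: ['French', 'German']
```

Categoricals can save memory when a column has few distinct values that repeat often.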
Now we change the column MeasureDate to a date variable. Because the dates are written day-first (European style, for example 31-12-2015), we pass dayfirst=True so that all entries are parsed consistently.
df1['MeasureDate'].dtypes
df1['MeasureDate'] = pd.to_datetime(df1['MeasureDate'], dayfirst=True)
df1['MeasureDate'].dtypes
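Day-first strings such as 31-12-2015 are worth a quick check, since by default to_datetime tries a month-first interpretation where possible. A minimal standalone example (with made-up dates):

```python
import pandas as pd

# two day-first date strings; 01-12-2015 is ambiguous, 31-12-2015 is not
s = pd.Series(['01-12-2015', '31-12-2015'])

# dayfirst=True parses both as day-month-year
dates = pd.to_datetime(s, dayfirst=True)
print(dates.dt.day.tolist())    # [1, 31]
print(dates.dt.month.tolist())  # [12, 12]: both dates are in December
```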
df1.dtypes
The DataFrame can be dissected in its column- and rownames, and the values inside:
df1.columns
df1.index
df1.values
And if we wanted to select the first column name, the following can be done:
df1.columns[0]
The size of the DataFrame can be obtained with the shape attribute:
df1.shape
And the length of the DataFrame, in other words the number of rows, with the len function:
len(df1)
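The relation between shape and len can be checked on a small throwaway DataFrame (toy values, not the tutorial data):

```python
import pandas as pd

d = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

nrows, ncols = d.shape  # shape is a (rows, columns) tuple
print(nrows, ncols)     # 3 2
print(len(d))           # 3: len counts rows only
```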
We created a fairly small dataset. Most of the time, however, data are loaded from a file. In the following section we will see how to load data using pd.read_csv. The file is from my GitHub page. CSV means that a file with comma-separated values is expected. An important remark: when loading a file, one needs to check carefully whether the decimals are dots or commas (this can differ between countries and systems).3
url = 'https://raw.githubusercontent.com/RRighart/City/master/df2.csv'
df2 = pd.read_csv(url, parse_dates=True, delimiter=",", decimal=",")
So this should give the following data of cities:
df2.head(3)
The columns Language and MeasureDate do not have the right type, so it is best to change this directly.
df2.dtypes
Language should be of type "category".
df2['Language'] = df2['Language'].astype('category')
MeasureDate should be of type "datetime".
df2['MeasureDate']=pd.to_datetime(df2['MeasureDate'])
df2.dtypes
For categorical variables, it is essential to know the frequency of each category. Note that a slightly different notation, namely df2['Language'].value_counts(), works here as well.
df2.Language.value_counts()
If loading the data did not work, an indirect alternative is to download the data manually from my GitHub page: https://github.com/RRighart/City . Then make sure the data are saved in your current directory. Use the following to check the path:
os.getcwd()
Using the command os.listdir(os.curdir) will produce a list of files that is in your directory. Now it is possible to verify if your copied .csv file is really in the right folder. To change the directory you can use the command os.chdir('path').
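A safe way to experiment with these directory commands is a temporary directory; the following standalone sketch uses Python's tempfile module and a dummy df2.csv file:

```python
import os
import tempfile

start = os.getcwd()  # remember the current working directory
with tempfile.TemporaryDirectory() as tmp:
    os.chdir(tmp)                 # os.chdir('path') changes the directory
    open('df2.csv', 'w').close()  # stand-in for the manually saved file
    found = 'df2.csv' in os.listdir(os.curdir)
os.chdir(start)  # switch back so the rest of the session is unaffected
print(found)     # True
```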
It is often the case that data are acquired from disparate sources, as we have here. So now the trick is to bind the two datasets together.
df = pd.concat([df1, df2], ignore_index=True)
Displaying the whole DataFrame df, we can now see that there are 23 rows.
df
Note that older pandas versions also offered df1.append(df2, ignore_index=True) for this4, but DataFrame.append was deprecated and then removed in pandas 2.0, so pd.concat is the recommended function.
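To see what ignore_index does, here is a small self-contained example (toy frames, not the city data): without it, the original row labels are kept and can repeat; with it, the rows are renumbered.

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'x': [3]})

kept = pd.concat([a, b])                      # keeps the original row labels
renum = pd.concat([a, b], ignore_index=True)  # renumbers the rows 0..n-1
print(kept.index.tolist())   # [0, 1, 0]
print(renum.index.tolist())  # [0, 1, 2]
```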
One of the first data inspections is whether and where there are missing values (data that are not available for various reasons). As we can see, there are missing values in the data; missing values are labeled differently in Python for numeric data (NaN) and dates (NaT)5. First let's see how to detect where missing values occur.
The following command will indicate those cells with "True" where there is a missing value.
pd.isnull(df).head(3)
The cells marked "True" are missing values. They occur only for Altitude and MeasureDate. It is also possible to inspect specific variables, to avoid printing the whole datasheet.
pd.isnull(df['Altitude']).head(5)
There is a simple way to check the number of missing values per variable, using the isnull method.
df.isnull().sum()
The inverse is also possible, that is, counting the number of non-missing values.
df.count()
Statistics can be computed while ignoring the missing values. Another possibility is to drop the row(s) that contain any missing value, using df.dropna(how='any'). With how='all' it only drops the row(s) in which all values are missing. Note that dropna does not modify the DataFrame unless the result is written back to it: only df = df.dropna(how='any') would change df.
df.dropna(how='any')
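To make the difference between how='any' and how='all' concrete, here is a small self-contained example (toy data, not the city dataset):

```python
import numpy as np
import pandas as pd

# row 0 is complete, row 1 is partly missing, row 2 is fully missing
d = pd.DataFrame({'a': [1.0, np.nan, np.nan],
                  'b': [4.0, 5.0, np.nan]})

print(len(d.dropna(how='any')))  # 1: only the complete row survives
print(len(d.dropna(how='all')))  # 2: only the fully missing row is dropped
print(len(d))                    # 3: d itself is unchanged
```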
Missing values are automatically omitted from summary statistics, such as describe().
df['Altitude'].describe()
It is possible to replace values that fulfill a certain condition with NaNs, for example if Altitude is larger than 350 m. It is advised to refrain from chained indexing, such as df['Altitude'][df['Altitude']>350]=np.nan6
df.loc[df['Altitude']>350, 'Altitude'] = np.nan
df['Altitude']
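The same .loc pattern can be checked on a tiny toy DataFrame (made-up altitudes):

```python
import numpy as np
import pandas as pd

d = pd.DataFrame({'Altitude': [100.0, 400.0, 600.0]})

# a single .loc call combines the row condition and the column, no chaining
d.loc[d['Altitude'] > 350, 'Altitude'] = np.nan
print(d['Altitude'].isnull().sum())  # 2: the values 400 and 600 became NaN
```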
The columns of a DataFrame can be alphabetically sorted on the column name. For this we use sort_index with axis=1, and ascending=True.
df.sort_index(axis=1, ascending=True).head(3)
It is also possible to sort the rows based on a certain column, using sort_values. If the column is a string, it is sorted alphabetically. If the column is numeric, it is ranked according to value. For example, in the following the DataFrame is sorted alphabetically on Place, and the result is written to df.
df = df.sort_values(by='Place')
Displaying the first five rows, we can confirm that the sorting worked.
df.head(5)
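sort_values also accepts an ascending parameter (True by default). A small standalone example (toy data) sorting in descending order:

```python
import pandas as pd

d = pd.DataFrame({'Place': ['B', 'A', 'C'], 'Population': [2, 3, 1]})

# ascending=False sorts from largest to smallest population
out = d.sort_values(by='Population', ascending=False)
print(out['Place'].tolist())  # ['A', 'B', 'C']
```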
It is sometimes important to inspect specific cells. For example, if you need to look up the data that is in the first row, first column, you could do the following:
df.iloc[0,0]
If you want to have multiple rows and columns, the following can be done:
df.iloc[:3,:2]
Select rows and columns within a given range, for example the second and third rows and the first till fourth columns (remember that iloc takes rows first, then columns):
df.iloc[1:3,0:4]
To select certain columns, here the second, fourth, and third, switching the positions of the last two:
df.iloc[:,[1,3,2]].head(5)
There are circumstances in which you'd only want to select cases that fulfill a certain condition. The following only selects data if the average temperature in January is higher than 9 degrees Celsius:
df[df.Temp_Jan > 9]
Select cases where the average July temperature was equal to 28.5.
df[df.Temp_Jul == 28.5]
Select cases if July temperature was unequal to 28.5.
df[df.Temp_Jul != 28.5].head(3)
It is also possible to select data that fulfill multiple conditions, for example cities where the January temperature is larger than 5 and the July temperature is smaller than 17 degrees Celsius.
df[(df.Temp_Jan > 5) & (df.Temp_Jul < 17)]
The following selects the data if exactly one of the two conditions holds (exclusive or): January temperature larger than 5 or July temperature smaller than 17, but not both. Note that Cork is not selected in this case, since it fulfills both conditions.
df[(df.Temp_Jan > 5) ^ (df.Temp_Jul < 17)]
The following includes the Cork case. It selects the data if the January temperature is larger than 5 and/or the July temperature is smaller than 17. That means it also includes the cases that fulfill both conditions.
df[(df.Temp_Jan > 5) | (df.Temp_Jul < 17)]
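The three operators can be compared side by side on a small toy DataFrame (made-up temperatures):

```python
import pandas as pd

d = pd.DataFrame({'Temp_Jan': [6, 2, 8], 'Temp_Jul': [15, 16, 25]})

both = d[(d.Temp_Jan > 5) & (d.Temp_Jul < 17)]    # and: both conditions hold
one = d[(d.Temp_Jan > 5) ^ (d.Temp_Jul < 17)]     # xor: exactly one holds
either = d[(d.Temp_Jan > 5) | (d.Temp_Jul < 17)]  # or: at least one holds
print(len(both), len(one), len(either))  # 1 2 3
```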
Imagine that you have a very large dataset and you want to check the data for the city Paris. Because we arranged the data alphabetically, it is not difficult to find the data back, but in some cases you need a search function to select rows based on a value in a categorical variable. So if we want to find the "Paris" data, do the following:
df[df['Place'].str.contains('Paris')]
And it also works if you use only part of a string, for example "Stock":
df[df['Place'].str.contains('Stock')]
Another example, searching for "Spanish" gives the following hits:
df[df['Language'].str.contains('Spanish')]
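Note that str.contains is case-sensitive by default; pandas also offers a case parameter. A small standalone example (toy values):

```python
import pandas as pd

s = pd.Series(['Catalan and Spanish', 'French', 'spanish'])

print(s.str.contains('Spanish').tolist())              # [True, False, False]
print(s.str.contains('spanish', case=False).tolist())  # [True, False, True]
```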
Data can be added manually as well. Adding a row can be done in at least two ways:
First, we use df.loc to add a row at the next free index, using len(df), the length of the DataFrame (since indexing starts at 0, the label len(df) is not yet in use). This always places the new row at the end.
df.loc[len(df),:] = ['Rome', 7.5, 24.1, 'Italian', 21, np.nan, 4353775]
As we can see, the new entry "Rome" is added as the last row.
df.tail(3)
A second way to do this is to first create the new row as a Series:
newrow = pd.Series(['Madrid', 6.3, 25.6, 'Spanish', 667, np.nan, 6240000], index= ['Place', 'Temp_Jan', 'Temp_Jul', 'Language', 'Altitude', 'MeasureDate', 'Population'])
type(newrow)
newrow
Once the row is created, we convert it to a one-row DataFrame and concatenate it (in older pandas versions, df.append(newrow, ignore_index=True) did the same, but DataFrame.append was removed in pandas 2.0):
df = pd.concat([df, pd.DataFrame([newrow])], ignore_index=True)
df.tail(3)
To get a summary of a variable, use the describe() command. Note that the median is the 50% indicator. It is also possible to use, for example, min(), mean(), median() and other statistics separately. From these statistics it is immediately clear that the temperature in July is higher than in January (not unexpected for European cities).
df['Temp_Jan'].describe()
df['Temp_Jul'].describe()
We are going to create boxplots using Matplotlib. Before running any code, we make sure that the plot layout is reset to the Matplotlib defaults. As you will see later, we are going to use Seaborn, and importing it may change the layout of other plots7
sns.reset_orig()
Boxplots can be used to check for outliers and distribution of the variables. To position the two boxplots, fig.add_subplot(a,b,c) is used, where a is number of rows, b is the number of columns, and c is the position that increments rowwise.
fig = plt.figure()
fig.add_subplot(1,2,1)
df[['Temp_Jan']].boxplot(sym='.')
fig.add_subplot(1,2,2)
df[['Temp_Jul']].boxplot(sym='.')
plt.show()
These plots are not bad: Matplotlib gives quick insight into the data distribution and possible outliers. But the ranges of the axes differ between the two plots, and unnecessary space is wasted on two y-axes. So the question arises whether we could change this.
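One alternative worth knowing (not used in this tutorial) is the sharey option of plt.subplots, which forces both panels onto the same y-axis. A standalone sketch with toy temperature data:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so no window is needed
import matplotlib.pyplot as plt
import pandas as pd

# toy data standing in for the Temp_Jan / Temp_Jul columns
d = pd.DataFrame({'Temp_Jan': [3.4, 11.8, 5.0, 1.5],
                  'Temp_Jul': [17.6, 25.7, 20.6, 20.2]})

# sharey=True gives both subplots an identical y-axis range
fig, axes = plt.subplots(1, 2, sharey=True)
d[['Temp_Jan']].boxplot(sym='.', ax=axes[0])
d[['Temp_Jul']].boxplot(sym='.', ax=axes[1])
fig.savefig('boxplots.png')
```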
First, we are going to select the two columns and make a new temporary DataFrame called temp.
temp = df.iloc[:,[1,2]]
Second, we adapt the column names slightly; the following code does that:
temp.columns = ['January','July']
The following code puts the two boxplots in one figure; the line plt.ylim adjusts the range of the y-axis.
temp.boxplot(sym='.')
plt.ylim(-10, 30)
plt.show()