Pandas - Cleaning Data
Data Cleaning
Data cleaning means fixing bad data in your data set.
Bad data could be:
- Empty cells
- Data in wrong format
- Wrong data
- Duplicates
Our Data Set
In the next chapters we will use this data set:
Duration Date Pulse Maxpulse Calories
0 60 '2020/12/01' 110 130 409.1
1 60 '2020/12/02' 117 145 479.0
2 60 '2020/12/03' 103 135 340.0
3 45 '2020/12/04' 109 175 282.4
4 45 '2020/12/05' 117 148 406.0
5 60 '2020/12/06' 102 127 300.0
6 60 '2020/12/07' 110 136 374.0
7 450 '2020/12/08' 104 134 253.3
8 30 '2020/12/09' 109 133 195.1
9 60 '2020/12/10' 98 124 269.0
10 60 '2020/12/11' 103 147 329.3
11 60 '2020/12/12' 100 120 250.7
12 60 '2020/12/12' 100 120 250.7
13 60 '2020/12/13' 106 128 345.3
14 60 '2020/12/14' 104 132 379.3
15 60 '2020/12/15' 98 123 275.0
16 60 '2020/12/16' 98 120 215.2
17 60 '2020/12/17' 100 120 300.0
18 45 '2020/12/18' 90 112 NaN
19 60 '2020/12/19' 103 123 323.0
20 45 '2020/12/20' 97 125 243.0
21 60 '2020/12/21' 108 131 364.2
22 45 NaN 100 119 282.0
23 60 '2020/12/23' 130 101 300.0
24 45 '2020/12/24' 105 132 246.0
25 60 '2020/12/25' 102 126 334.5
26 60 2020/12/26 100 120 250.0
27 60 '2020/12/27' 92 118 241.0
28 60 '2020/12/28' 103 132 NaN
29 60 '2020/12/29' 100 132 280.0
30 60 '2020/12/30' 102 129 380.3
31 60 '2020/12/31' 92 115 243.0
The data set contains some empty cells ("Date" in row 22, and "Calories" in rows 18 and 28).
The data set contains data in the wrong format ("Date" in row 26).
The data set contains wrong data ("Duration" in row 7).
The data set contains duplicates (rows 11 and 12).
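Before cleaning, it helps to measure how much bad data there is. Below is a minimal sketch using a small made-up DataFrame (the column names follow this chapter, but the values are illustrative, not the data set above):

```python
import numpy as np
import pandas as pd

# Illustrative frame with one empty cell and one duplicate row
df = pd.DataFrame({
    "Duration": [60, 45, 60, 60],
    "Calories": [409.1, np.nan, 250.7, 250.7],
})

print(df.isna().sum())        # count of empty cells per column
print(df.duplicated().sum())  # count of duplicate rows
```

Running the same two checks on a freshly loaded CSV gives a quick overview before deciding whether to drop or fill.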
Pandas - Cleaning Empty Cells
Empty Cells
Empty cells can potentially give you a wrong result when you analyze data.
Remove Rows
One way to deal with empty cells is to remove rows that contain empty cells.
This is usually OK, since data sets can be very big, and removing a few rows will not have a big impact on the result.
In [2]:
#Example
#Return a new DataFrame with no empty cells:
#Import the pandas library.
import pandas as pd
#Read the CSV file into a DataFrame.
df = pd.read_csv(r"D:\Anaconda\[Link]")
#Create a new DataFrame by dropping rows with any NaN values.
new_df = df.dropna()
#Print the new DataFrame.
print(new_df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Note: By default, the dropna() method returns a new DataFrame, and will not change the original.
If you want to change the original DataFrame, use the inplace = True argument:
In [3]:
#Example
#Remove all rows with NULL values:
# Import the pandas library
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Drop rows with any NaN values in place
df.dropna(inplace=True)
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Note: Now, the dropna(inplace = True) will NOT return a new DataFrame, but it will remove all rows
containing NULL values from the original DataFrame.
Replace Empty Values
Another way of dealing with empty cells is to insert a new value instead.
This way you do not have to delete entire rows just because of some empty cells.
The fillna() method allows us to replace empty cells with a value:
In [4]:
# Example
# Replace NULL values with the number 130:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Replace NULL values with 130
df.fillna(130, inplace=True)
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 130.0
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 130 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 130.0
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Replace Only For Specified Columns
The example above replaces all empty cells in the whole DataFrame.
To only replace empty values for one column, specify the column name for the DataFrame:
In [1]:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Fill NaN values in the "Calories" column with 130
df = df.assign(Calories=df["Calories"].fillna(130))
# Print the DataFrame
print(df.to_string())
#This operation inserts 130 in empty cells in the "Calories" column (rows 18 and 28)
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 130.0
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 130.0
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Replace Using Mean, Median, or Mode
A common way to replace empty cells is to calculate the mean, median, or mode value of the column.
Pandas uses the mean(), median(), and mode() methods to calculate the respective values for a specified column:
In [4]:
#Example
#Calculate the MEAN, and replace any empty values with it:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Calculate the mean of the "Calories" column
x = df["Calories"].mean()
# Fill NaN values in the "Calories" column with the mean value
df = df.assign(Calories=df["Calories"].fillna(x))
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.10
1 60 02-12-2020 117 145 479.00
2 60 03-12-2020 103 135 340.00
3 45 04-12-2020 109 175 282.40
4 45 05-12-2020 117 148 406.00
5 60 06-12-2020 102 127 300.00
6 60 07-12-2020 110 136 374.00
7 450 08-12-2020 104 134 253.30
8 30 09-12-2020 109 133 195.10
9 60 10-12-2020 98 124 269.00
10 60 11-12-2020 103 147 329.30
11 60 12-12-2020 100 120 250.70
12 60 12-12-2020 100 120 250.70
13 60 13-12-2020 106 128 345.30
14 60 14-12-2020 104 132 379.30
15 60 15-12-2020 98 123 275.00
16 60 16-12-2020 98 120 215.20
17 60 17-12-2020 100 120 300.00
18 45 18-12-2020 90 112 304.68
19 60 19-12-2020 103 123 323.00
20 45 20-12-2020 97 125 243.00
21 60 21-12-2020 108 131 364.20
22 45 NaN 100 119 282.00
23 60 23-12-2020 130 101 300.00
24 45 24-12-2020 105 132 246.00
25 60 25-12-2020 102 126 334.50
26 60 20201226 100 120 250.00
27 60 27-12-2020 92 118 241.00
28 60 28-12-2020 103 132 304.68
29 60 29-12-2020 100 132 280.00
30 60 30-12-2020 102 129 380.30
31 60 31-12-2020 92 115 243.00
Mean = the average value (the sum of all values divided by the number of values).
In [6]:
#Example
#Calculate the MEDIAN, and replace any empty values with it:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Calculate the median of the "Calories" column
x = df["Calories"].median()
# Fill NaN values in the "Calories" column with the median value
df = df.assign(Calories=df["Calories"].fillna(x))
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 291.2
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 291.2
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Median = the value in the middle, after you have sorted all values ascending.
In [9]:
#Example
#Calculate the MODE, and replace any empty values with it:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Calculate the mode of the "Calories" column
x = df["Calories"].mode()[0]
# Fill NaN values in the "Calories" column with the mode value
df = df.assign(Calories=df["Calories"].fillna(x))
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 300.0
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 300.0
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Mode = the value that appears most frequently.
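The three statistics can also be combined, so each column gets its own replacement in a single fillna() call with a dictionary. A minimal sketch on a made-up frame (column names follow this chapter; the values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Pulse":    [110, np.nan, 100, 100],
    "Calories": [409.1, 250.7, np.nan, 250.7],
})

# Fill "Pulse" with its mean and "Calories" with its mode, in one pass
df = df.fillna({
    "Pulse": df["Pulse"].mean(),
    "Calories": df["Calories"].mode()[0],
})

print(df.to_string())
```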
Pandas - Cleaning Data of Wrong Format
Data of Wrong Format
Cells with data of wrong format can make it difficult, or even impossible, to analyze data.
To fix it, you have two options: remove the rows, or convert all cells in the columns into the same format.
Convert Into a Correct Format
In our DataFrame, we have two cells with the wrong format. Check out rows 22 and 26: the 'Date' column should be a string that represents a date.
Let's try to convert all cells in the 'Date' column into dates.
Pandas has a to_datetime() method for this:
In [14]:
#Example
#Convert to date:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Convert the 'Date' column to datetime format
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 2020-01-12 110 130 409.1
1 60 2020-02-12 117 145 479.0
2 60 2020-03-12 103 135 340.0
3 45 2020-04-12 109 175 282.4
4 45 2020-05-12 117 148 406.0
5 60 2020-06-12 102 127 300.0
6 60 2020-07-12 110 136 374.0
7 450 2020-08-12 104 134 253.3
8 30 2020-09-12 109 133 195.1
9 60 2020-10-12 98 124 269.0
10 60 2020-11-12 103 147 329.3
11 60 2020-12-12 100 120 250.7
12 60 2020-12-12 100 120 250.7
13 60 NaT 106 128 345.3
14 60 NaT 104 132 379.3
15 60 NaT 98 123 275.0
16 60 NaT 98 120 215.2
17 60 NaT 100 120 300.0
18 45 NaT 90 112 NaN
19 60 NaT 103 123 323.0
20 45 NaT 97 125 243.0
21 60 NaT 108 131 364.2
22 45 NaT 100 119 282.0
23 60 NaT 130 101 300.0
24 45 NaT 105 132 246.0
25 60 NaT 102 126 334.5
26 60 NaT 100 120 250.0
27 60 NaT 92 118 241.0
28 60 NaT 103 132 NaN
29 60 NaT 100 132 280.0
30 60 NaT 102 129 380.3
31 60 NaT 92 115 243.0
As you can see from the result, the empty date in row 22 got a NaT (Not a Time) value, in other words an empty value. Notice, however, that the conversion was not clean: because no format was specified, the dates in rows 0-12 were read month-first ('01-12-2020' became 2020-01-12), while every date from row 13 onwards, where the day is greater than 12, was coerced to NaT along with the oddly formatted row 26. Passing dayfirst=True or an explicit format to to_datetime() avoids this. One way to deal with empty values is simply removing the entire row.
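Before dropping rows, note that the NaT values for valid day-first dates can be avoided by telling to_datetime() how to read the strings. A minimal sketch with made-up values (the real data set's dates look like '13-12-2020'):

```python
import pandas as pd

dates = pd.Series(["01-12-2020", "13-12-2020", None])

# An explicit day-month-year format parses every valid string;
# errors='coerce' still turns genuinely missing values into NaT
parsed = pd.to_datetime(dates, format="%d-%m-%Y", errors="coerce")
print(parsed)
```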
Removing Rows
The conversion in the example above gave us NaT values, which can be handled as NULL values, and we can remove those rows by using the dropna() method.
In [15]:
#Example
#Remove rows with a NULL value in the "Date" column:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Drop rows where "Date" is NULL
df.dropna(subset=['Date'], inplace = True)
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 NaN
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 NaN
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Pandas - Fixing Wrong Data
Wrong Data
"Wrong data" does not have to be "empty cells" or "wrong format", it can just be wrong, like if someone registered "199" instead of "1.99".
Sometimes you can spot wrong data by looking at the data set, because you have an expectation of
what it should be.
If you take a look at our data set, you can see that in row 7, the duration is 450, but for all the other
rows the duration is between 30 and 60.
It doesn't have to be wrong, but taking into consideration that this is the data set of someone's workout sessions, we can conclude that this person probably did not work out for 450 minutes.
How can we fix wrong values, like the one for "Duration" in row 7?
Replacing Values
One way to fix wrong values is to replace them with something else.
In our example, it is most likely a typo, and the value should be "45" instead of "450", so we could just insert "45" in row 7:
In [18]:
#Example
#Set "Duration" = 45 in row 7:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Replace the wrong value in row 7
df.loc[7, 'Duration'] = 45
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 45 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 NaN
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 NaN
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
For small data sets you might be able to replace the wrong data one by one, but not for big data sets.
To replace wrong data for larger data sets you can create some rules, e.g. set some boundaries for
legal values, and replace any values that are outside of the boundaries.
In [20]:
#Example
#Loop through all values in the "Duration" column.
#If the value is higher than 120, set it to 120:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
for x in df.index:
    if df.loc[x, "Duration"] > 120:
        df.loc[x, "Duration"] = 120
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 120 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 NaN
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 NaN
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
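The same boundary rule can be applied without an explicit loop: a boolean mask with loc updates every offending cell at once, which is the more idiomatic pandas style (sketch with made-up values):

```python
import pandas as pd

df = pd.DataFrame({"Duration": [60, 450, 45, 30]})

# Cap every "Duration" above 120 at 120 in one vectorized step
df.loc[df["Duration"] > 120, "Duration"] = 120

print(df["Duration"].tolist())  # [60, 120, 45, 30]
```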
Removing Rows
Another way of handling wrong data is to remove the rows that contain wrong data.
This way you do not have to find out what to replace them with, and there is a good chance you do not need them for your analyses.
In [21]:
#Example
#Delete rows where "Duration" is higher than 120:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
for x in df.index:
    if df.loc[x, "Duration"] > 120:
        df.drop(x, inplace = True)
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 NaN
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 NaN
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
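As with replacing, the removal can also be written without a loop, by keeping only the rows that pass the test (sketch with made-up values):

```python
import pandas as pd

df = pd.DataFrame({"Duration": [60, 450, 45, 30]})

# Keep only the rows where "Duration" is within bounds
df = df[df["Duration"] <= 120]

print(df["Duration"].tolist())  # [60, 45, 30]
```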
Pandas - Removing Duplicates
Discovering Duplicates
Duplicate rows are rows that have been registered more than one time.
By taking a look at our test data set, we can assume that rows 11 and 12 are duplicates.
To discover duplicates, we can use the duplicated() method.
The duplicated() method returns a Boolean value for each row:
In [22]:
#Example
#Returns True for every row that is a duplicate, otherwise False:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
print(df.duplicated())
# Print the DataFrame
print(df.to_string())
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 True
13 False
14 False
15 False
16 False
17 False
18 False
19 False
20 False
21 False
22 False
23 False
24 False
25 False
26 False
27 False
28 False
29 False
30 False
31 False
dtype: bool
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
12 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 NaN
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 NaN
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Removing Duplicates
To remove duplicates, use the drop_duplicates() method.
In [23]:
#Example
#Remove all duplicates:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
df.drop_duplicates(inplace = True)
# Print the DataFrame
print(df.to_string())
Duration Date Pulse Maxpulse Calories
0 60 01-12-2020 110 130 409.1
1 60 02-12-2020 117 145 479.0
2 60 03-12-2020 103 135 340.0
3 45 04-12-2020 109 175 282.4
4 45 05-12-2020 117 148 406.0
5 60 06-12-2020 102 127 300.0
6 60 07-12-2020 110 136 374.0
7 450 08-12-2020 104 134 253.3
8 30 09-12-2020 109 133 195.1
9 60 10-12-2020 98 124 269.0
10 60 11-12-2020 103 147 329.3
11 60 12-12-2020 100 120 250.7
13 60 13-12-2020 106 128 345.3
14 60 14-12-2020 104 132 379.3
15 60 15-12-2020 98 123 275.0
16 60 16-12-2020 98 120 215.2
17 60 17-12-2020 100 120 300.0
18 45 18-12-2020 90 112 NaN
19 60 19-12-2020 103 123 323.0
20 45 20-12-2020 97 125 243.0
21 60 21-12-2020 108 131 364.2
22 45 NaN 100 119 282.0
23 60 23-12-2020 130 101 300.0
24 45 24-12-2020 105 132 246.0
25 60 25-12-2020 102 126 334.5
26 60 20201226 100 120 250.0
27 60 27-12-2020 92 118 241.0
28 60 28-12-2020 103 132 NaN
29 60 29-12-2020 100 132 280.0
30 60 30-12-2020 102 129 380.3
31 60 31-12-2020 92 115 243.0
Remember: With inplace = True, the method does NOT return a new DataFrame; it removes all duplicates from the original DataFrame.
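By default, drop_duplicates() compares whole rows and keeps the first occurrence; both behaviors can be changed with the subset and keep arguments. A minimal sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({
    "Date":  ["12-12-2020", "12-12-2020", "13-12-2020"],
    "Pulse": [100, 102, 106],
})

# Treat rows as duplicates when "Date" alone matches, and keep the last one
result = df.drop_duplicates(subset=["Date"], keep="last")
print(result["Pulse"].tolist())  # [102, 106]
```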
Pandas - Data Correlations
Finding Relationships
A great aspect of the Pandas module is the corr() method.
The corr() method calculates the relationship between each column in your data set.
The examples on this page use a CSV file called '[Link]'.
In [27]:
#Example
#Show the relationship between the columns:
import pandas as pd
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Select only numeric columns
numeric_df = df.select_dtypes(include=[float, int])
# Calculate the correlation matrix
correlation_matrix = numeric_df.corr()
# Print the correlation matrix
print(correlation_matrix.to_string())
Duration Pulse Maxpulse Calories
Duration 1.000000 0.004410 0.049959 -0.114169
Pulse 0.004410 1.000000 0.276583 0.513186
Maxpulse 0.049959 0.276583 1.000000 0.357460
Calories -0.114169 0.513186 0.357460 1.000000
Note: The corr() method only works on numeric values; non-numeric columns such as "Date" must be excluded, which is what the select_dtypes() step above does.
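In recent pandas versions (1.5 and later, an assumption about your install), the select_dtypes step can be replaced by corr()'s own numeric_only argument. A minimal sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({
    "Duration": [60, 45, 60],
    "Date": ["01-12-2020", "02-12-2020", "03-12-2020"],  # non-numeric, excluded
    "Calories": [409.1, 282.4, 300.0],
})

# numeric_only=True drops non-numeric columns before correlating
matrix = df.corr(numeric_only=True)
print(matrix.to_string())
```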
Result Explained
The result of the corr() method is a table of numbers that represent how strong the relationship is between each pair of columns.
The number varies from -1 to 1.
1 means that there is a 1-to-1 relationship (a perfect correlation): for this data set, each time a value went up in the first column, the other one went up as well.
0.9 is also a good relationship: if you increase one value, the other will probably increase as well.
-0.9 would be just as good a relationship as 0.9, but if you increase one value, the other will probably go down.
0.2 means NOT a good relationship: if one value goes up, it does not mean that the other will.
What is a good correlation? It depends on the use, but I think it is safe to say you have to have at least
0.6 (or -0.6) to call it a good correlation.
Perfect Correlation: We can see that "Duration" and "Duration" got the number 1.000000, which makes sense: each column always has a perfect relationship with itself.
Good Correlation: Note that the matrix above was computed on the uncleaned data, so the 450-minute "Duration" outlier distorts that column's results; "Duration" and "Calories" came out at only -0.114169 here, while on the cleaned data set the same pair gives 0.922721, a very good correlation. From it we can predict that the longer you work out, the more calories you burn, and the other way around: if you burned a lot of calories, you probably had a long workout.
Bad Correlation: "Duration" and "Maxpulse" got a 0.049959 correlation (0.009403 on the cleaned data), which is a very bad correlation, meaning that we cannot predict the max pulse by just looking at the duration of the workout, and vice versa.
Pandas - Plotting
Plotting
Pandas uses the plot() method to create diagrams.
We can use Pyplot, a submodule of the Matplotlib library, to visualize the diagram on the screen.
In [28]:
#Example
#Import pyplot from Matplotlib and visualize our DataFrame:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(r'D:\Anaconda\[Link]')
df.plot()
plt.show()
Scatter Plot
Specify that you want a scatter plot with the kind argument:
kind = 'scatter'
A scatter plot needs an x- and a y-axis.
In the example below we will use "Duration" for the x-axis and "Calories" for the y-axis.
Include the x and y arguments like this:
x = 'Duration', y = 'Calories'
In [29]:
#Example
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(r'D:\Anaconda\[Link]')
df.plot(kind = 'scatter', x = 'Duration', y = 'Calories')
plt.show()
Remember: In the previous example, we learned that the correlation between "Duration" and "Calories" on the cleaned data set was 0.922721, and we concluded that a higher duration means more calories burned.
By looking at the scatter plot, I will agree.
Let's create another scatterplot, where there is a bad relationship between the columns, like "Duration"
and "Maxpulse", with the correlation 0.009403:
In [32]:
#Example
#A scatter plot where there is no relationship between the columns:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(r'D:\Anaconda\[Link]')
df.plot(kind = 'scatter', x = 'Duration', y = 'Maxpulse')
plt.show()
Histogram
Use the kind argument to specify that you want a histogram:
kind = 'hist'
A histogram needs only one column.
A histogram shows us the frequency of each interval, e.g. how many workouts lasted between 50 and
60 minutes?
In the example below we will use the "Duration" column to create the histogram:
In [43]:
# Three lines to make our compiler able to draw:
import sys
import matplotlib
matplotlib.use('Agg')
import pandas as pd
import matplotlib.pyplot as plt
# Read the CSV file
df = pd.read_csv(r'D:\Anaconda\[Link]')
# Plot the "Duration" column as a histogram
df["Duration"].plot(kind='hist')
# Save the plot to a file
plt.savefig('[Link]')
# Flush the buffer
sys.stdout.flush()
# Display the plot
from IPython.display import Image
Image(filename='[Link]')
Out [43]: