pandas: powerful Python data analysis toolkit
Release 0.18.1
CONTENTS

1 What's New . . . 3
  1.1 v0.18.1 (May 3, 2016) . . . 3
  1.2 v0.18.0 (March 13, 2016) . . . 20
  1.3 v0.17.1 (November 21, 2015) . . . 49
  1.4 v0.17.0 (October 9, 2015) . . . 54
  1.5 v0.16.2 (June 12, 2015) . . . 80
  1.6 v0.16.1 (May 11, 2015) . . . 83
  1.7 v0.16.0 (March 22, 2015) . . . 94
  1.8 v0.15.2 (December 12, 2014) . . . 109
  1.9 v0.15.1 (November 9, 2014) . . . 115
  1.10 v0.15.0 (October 18, 2014) . . . 121
  1.11 v0.14.1 (July 11, 2014) . . . 148
  1.12 v0.14.0 (May 31, 2014) . . . 153
  1.13 v0.13.1 (February 3, 2014) . . . 180
  1.14 v0.13.0 (January 3, 2014) . . . 187
  1.15 v0.12.0 (July 24, 2013) . . . 210
  1.16 v0.11.0 (April 22, 2013) . . . 221
  1.17 v0.10.1 (January 22, 2013) . . . 230
  1.18 v0.10.0 (December 17, 2012) . . . 236
  1.19 v0.9.1 (November 14, 2012) . . . 247
  1.20 v0.9.0 (October 7, 2012) . . . 251
  1.21 v0.8.1 (July 22, 2012) . . . 253
  1.22 v0.8.0 (June 29, 2012) . . . 254
  1.23 v0.7.3 (April 12, 2012) . . . 259
  1.24 v0.7.2 (March 16, 2012) . . . 263
  1.25 v0.7.1 (February 29, 2012) . . . 264
  1.26 v0.7.0 (February 9, 2012) . . . 264
  1.27 v0.6.1 (December 13, 2011) . . . 269
  1.28 v0.6.0 (November 25, 2011) . . . 270
  1.29 v0.5.0 (October 24, 2011) . . . 272
  1.30 v0.4.3 through v0.4.1 (September 25 - October 9, 2011) . . . 273

2 Installation . . . 275
  2.1 Python version support . . . 275
  2.2 Installing pandas . . . 275
  2.3 Dependencies . . . 278

3 Contributing to pandas . . . 281
  3.1 Where to start? . . . 281
  3.2 Bug reports and enhancement requests . . . 282
  3.3 Working with the code . . . 282

5 Package overview . . . 299
  5.1 Data structures at a glance . . . 299
  5.2 Mutability and copying of data . . . 300
  5.3 Getting Support . . . 300
  5.4 Credits . . . 300
  5.5 Development Team . . . 300
  5.6 License . . . 300

6 10 Minutes to pandas . . . 303
  6.1 Object Creation . . . 303
  6.2 Viewing Data . . . 305
  6.3 Selection . . . 306
  6.4 Missing Data . . . 311
  6.5 Operations . . . 311
  6.6 Merge . . . 314
  6.7 Grouping . . . 316
  6.8 Reshaping . . . 317
  6.9 Time Series . . . 319
  6.10 Categoricals . . . 320
  6.11 Plotting . . . 322
  6.12 Getting Data In/Out . . . 323
  6.13 Gotchas . . . 325

7 Tutorials . . . 327
  7.1 Internal Guides . . . 327
  7.2 pandas Cookbook . . . 327
  7.3 Lessons for New pandas Users . . . 328
  7.4 Practical data analysis with Python . . . 328
  7.5 Modern Pandas . . . 328
  7.6 Excel charts with pandas, vincent and xlsxwriter . . . 329
  7.7 Various Tutorials . . . 329

8 Cookbook . . . 331
  8.1 Idioms . . . 331
  8.2 Selection . . . 334
  8.3 MultiIndexing . . . 338
  8.4 Missing Data . . . 342
  8.5 Grouping . . . 343
  8.6 Timeseries . . . 351
  8.7 Merge . . . 352
  8.8 Plotting . . . 353
  8.9 Data In/Out . . . 354
  8.10 Computation . . . 359
  8.11 Timedeltas . . . 359
  8.12 Aliasing Axis Names . . . 360
  8.13 Creating Example Data . . . 361

  13.15 Duplicate Data . . . 503
  13.16 Dictionary-like get() method . . . 506
  13.17 The select() Method . . . 506
  13.18 The lookup() Method . . . 506
  13.19 Index objects . . . 506
  13.20 Set / Reset Index . . . 509
  13.21 Returning a view versus a copy . . . 512

15 Computational tools . . . 541
  15.1 Statistical Functions . . . 541
  15.2 Window Functions . . . 545
  15.3 Aggregation . . . 554
  15.4 Expanding Windows . . . 558
  15.5 Exponentially Weighted Windows . . . 560

21 Time Deltas . . . 713
  21.1 Parsing . . . 713
  21.2 Operations . . . 715
  21.3 Reductions . . . 719
  21.4 Frequency Conversion . . . 719
  21.5 Attributes . . . 721
  21.6 TimedeltaIndex . . . 722
  21.7 Resampling . . . 725

22 Categorical Data . . . 727
  22.1 Object Creation . . . 727
  22.2 Description . . . 730
  22.3 Working with categories . . . 731
  22.4 Sorting and Order . . . 734
  22.5 Comparisons . . . 737
  22.6 Operations . . . 739
  22.7 Data munging . . . 740
  22.8 Getting Data In/Out . . . 745
  22.9 Missing Data . . . 746
  22.10 Differences to R's factor . . . 747
  22.11 Gotchas . . . 748

23 Visualization . . . 753
  23.1 Basic Plotting: plot . . . 753
  23.2 Other Plots . . . 756
  23.3 Plotting with Missing Data . . . 787
  23.4 Plotting Tools . . . 788
  23.5 Plot Formatting . . . 796
  23.6 Plotting directly with matplotlib . . . 819
  23.7 Trellis plotting interface . . . 820

24 Style . . . 835

25 IO Tools (Text, CSV, HDF5, ...) . . . 837
  25.1 CSV & Text files . . . 838
  25.2 JSON . . . 863
  25.3 HTML . . . 871
  25.4 Excel files . . . 878
  25.5 Clipboard . . . 884
  25.6 Pickling . . . 884
  25.7 msgpack (experimental) . . . 885
  25.8 HDF5 (PyTables) . . . 888
  25.9 SQL Queries . . . 917
  25.10 Google BigQuery (Experimental) . . . 926
  25.11 Stata Format . . . 930
  25.12 SAS Formats . . . 933
  25.13 Other file formats . . . 933
  25.14 Performance Considerations . . . 933

27 Enhancing Performance . . . 947
  27.1 Cython (Writing C extensions for pandas) . . . 947
  27.2 Using numba . . . 951
  27.3 Expression Evaluation via eval() (Experimental) . . . 953

28 Sparse data structures . . . 963
  28.1 SparseArray . . . 965
  28.2 SparseList . . . 965
  28.3 SparseIndex objects . . . 966
  28.4 Interaction with scipy.sparse . . . 966

30 rpy2 / R interface . . . 981
  30.1 Updating your code to use rpy2 functions . . . 981
  30.2 R interface with rpy2 . . . 982
  30.3 Transferring R data sets into Python . . . 982
  30.4 Converting DataFrames into R objects . . . 982
  30.5 Calling R functions with pandas objects . . . 983
  30.6 High-level interface to R estimators . . . 983

31 pandas Ecosystem . . . 985
  31.1 Statistics and Machine Learning . . . 985
  31.2 Visualization . . . 985
  31.3 IDE . . . 986

35 API Reference . . . 1027
  35.1 Input/Output . . . 1027
  35.2 General functions . . . 1057
  35.3 Series . . . 1078
  35.4 DataFrame . . . 1277
  35.5 Panel . . . 1491
  35.6 Panel4D . . . 1595
  35.7 Index . . . 1658
  35.8 CategoricalIndex . . . 1696
  35.9 MultiIndex . . . 1726
  35.10 DatetimeIndex . . . 1758
  35.11 TimedeltaIndex . . . 1795
  35.12 Window . . . 1821
  35.13 GroupBy . . . 1831
  35.14 Resampling . . . 1855
  35.15 Style . . . 1864
  35.16 General utility functions . . . 1877

36 Internals . . . 1891
  36.1 Indexing . . . 1891
  36.2 Subclassing pandas Data Structures . . . 1892

37 Release Notes . . . 1897
  37.1 pandas 0.18.1 . . . 1897
  37.2 pandas 0.18.0 . . . 1899
  37.3 pandas 0.17.1 . . . 1903
  37.4 pandas 0.17.0 . . . 1905
  37.5 pandas 0.16.2 . . . 1908
  37.6 pandas 0.16.1 . . . 1910
  37.7 pandas 0.16.0 . . . 1912
  37.8 pandas 0.15.2 . . . 1914
  37.9 pandas 0.15.1 . . . 1915
  37.10 pandas 0.15.0 . . . 1916
  37.11 pandas 0.14.1 . . . 1919
  37.12 pandas 0.14.0 . . . 1921
  37.13 pandas 0.13.1 . . . 1924
  37.14 pandas 0.13.0 . . . 1927
  37.15 pandas 0.12.0 . . . 1941
  37.16 pandas 0.11.0 . . . 1947
  37.17 pandas 0.10.1 . . . 1953
  37.18 pandas 0.10.0 . . . 1956
  37.19 pandas 0.9.1 . . . 1960
  37.20 pandas 0.9.0 . . . 1963
  37.21 pandas 0.8.1 . . . 1968
  37.22 pandas 0.8.0 . . . 1970
  37.23 pandas 0.7.3 . . . 1974
  37.24 pandas 0.7.2 . . . 1976
  37.25 pandas 0.7.1 . . . 1977
  37.26 pandas 0.7.0 . . . 1978
  37.27 pandas 0.6.1 . . . 1985
  37.28 pandas 0.6.0 . . . 1987
  37.29 pandas 0.5.0 . . . 1991
  37.30 pandas 0.4.3 . . . 1995
  37.31 pandas 0.4.2 . . . 1996
  37.32 pandas 0.4.1 . . . 1997
  37.33 pandas 0.4.0 . . . 1999
  37.34 pandas 0.3.0 . . . 2003
PDF Version
Zipped HTML
Date: May 03, 2016  Version: 0.18.1
Binary Installers: http://pypi.python.org/pypi/pandas
Source Repository: http://github.com/pydata/pandas
Issues & Ideas: https://github.com/pydata/pandas/issues
Q&A Support: http://stackoverflow.com/questions/tagged/pandas
Developer Mailing List: http://groups.google.com/group/pydata
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
relational or labeled data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful
and flexible open source data analysis / manipulation tool available in any language. It is already well on its way
toward this goal.
pandas is well suited for many different kinds of data:
- Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
- Ordered and unordered (not necessarily fixed-frequency) time series data
- Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
- Any other form of observational / statistical data sets; the data actually need not be labeled at all to be placed into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the
vast majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R users,
DataFrame provides everything that R's data.frame provides and much more. pandas is built on top of NumPy
and is intended to integrate well within a scientific computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
- Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
- Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
- Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
- Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
- Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects
- Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
- Intuitive merging and joining data sets
- Flexible reshaping and pivoting of data sets
- Hierarchical labeling of axes (possible to have multiple labels per tick)
- Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading data from the ultrafast HDF5 format
- Time series-specific functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
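As a minimal illustrative sketch of the alignment and groupby behavior listed above (the labels and values here are invented for the example):

import pandas as pd

# Arithmetic between differently-labeled Series aligns on the union
# of the labels; non-overlapping entries come back as NaN.
a = pd.Series([1.0, 2.0, 3.0], index=['x', 'y', 'z'])
b = pd.Series([10.0, 20.0], index=['y', 'z'])
print(a + b)   # x: NaN, y: 12.0, z: 23.0

# Split-apply-combine: group rows on a key column, then aggregate.
df = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1, 2, 3]})
print(df.groupby('key')['val'].sum())   # a: 3, b: 3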
Many of these principles are here to address the shortcomings frequently experienced using other languages / scientific
research environments. For data scientists, working with data is typically divided into multiple stages: munging and
cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for plotting or
tabular display. pandas is the ideal tool for all of these tasks.
Some other notes:

- pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However, as with anything else, generalization usually sacrifices performance. So if you focus on one feature for your application you may be able to create a faster specialized tool.
- pandas is a dependency of statsmodels, making it an important part of the statistical computing ecosystem in Python.
- pandas has been used extensively in production in financial applications.
Note: This documentation assumes general familiarity with NumPy. If you haven't used NumPy much or at all, do invest some time in learning about NumPy first.
See the package overview for more detail about what's in the library.
CHAPTER ONE

WHAT'S NEW
You can now use .rolling(..) and .expanding(..) as methods on groupbys. These return another deferred
object (similar to what .rolling() and .expanding() do on ungrouped pandas objects). You can then operate
on these RollingGroupby objects in a similar manner.
Previously you would have to do this to get a rolling window mean per-group:
In [7]: df = pd.DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
   ...:                    'B': np.arange(40)})
   ...:

In [8]: df
Out[8]:
    A   B
0   1   0
1   1   1
2   1   2
3   1   3
4   1   4
5   1   5
6   1   6
.. ..  ..
33  3  33
34  3  34
35  3  35
36  3  36
37  3  37
38  3  38
39  3  39

[40 rows x 2 columns]
In [9]: df.groupby('A').apply(lambda x: x.rolling(4).B.mean())
Out[9]:
A
1  0      NaN
   1      NaN
   2      NaN
   3      1.5
   4      2.5
   5      3.5
   6      4.5
          ...
3  33     NaN
   34     NaN
   35    33.5
   36    34.5
   37    35.5
   38    36.5
   39    37.5
Name: B, dtype: float64

Now you can do:

In [10]: df.groupby('A').rolling(4).B.mean()
Out[10]:
A
1  0      NaN
   1      NaN
   2      NaN
   3      1.5
   4      2.5
   5      3.5
   6      4.5
          ...
3  33     NaN
   34     NaN
   35    33.5
   36    34.5
   37    35.5
   38    36.5
   39    37.5
Name: B, dtype: float64
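The deferred pattern extends to .expanding() on a groupby in the same way; a small sketch with the same df (the truncated output shown in the comments is illustrative):

# expanding (cumulative) mean of B within each group; first rows of group 1:
df.groupby('A').expanding().B.mean().head(3)
# A
# 1  0    0.0
#    1    0.5
#    2    1.0
# Name: B, dtype: float64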
Similarly, .resample(..) is now available on groupbys. First build a small weekly DataFrame to upsample per group:

In [11]: df = pd.DataFrame({'date': pd.date_range(start='2016-01-01',
   ....:                                          periods=4,
   ....:                                          freq='W'),
   ....:                    'group': [1, 1, 2, 2],
   ....:                    'val': [5, 6, 7, 8]}).set_index('date')

In [12]: df
Out[12]:
            group  val
date
2016-01-03      1    5
2016-01-10      1    6
2016-01-17      2    7
2016-01-24      2    8

In [13]: df.groupby('group').resample('1D').ffill()
Out[13]:
                  group  val
group date
1     2016-01-03      1    5
      2016-01-04      1    5
      2016-01-05      1    5
      2016-01-06      1    5
      2016-01-07      1    5
      2016-01-08      1    5
      2016-01-09      1    5
...                 ...  ...
2     2016-01-18      2    7
      2016-01-19      2    7
      2016-01-20      2    7
      2016-01-21      2    7
      2016-01-22      2    7
      2016-01-23      2    7
      2016-01-24      2    8
.where() and .mask() can now accept a callable for the condition and other arguments.

In [15]: df = pd.DataFrame({'A': [1, 2, 3],
   ....:                    'B': [4, 5, 6],
   ....:                    'C': [7, 8, 9]})

In [16]: df.where(lambda x: x > 4, lambda x: x + 10)
Out[16]:
    A   B  C
0  11  14  7
1  12   5  8
2  13   6  9
.loc, .iloc and .ix can accept a callable, and a tuple of callables as a slicer. The callable can return a valid boolean indexer or anything which is valid for these indexers' input.

# callable returns bool indexer
In [17]: df.loc[lambda x: x.A >= 2, lambda x: x.sum() > 10]
Out[17]:
   B  C
1  5  8
2  6  9
[] indexing

Finally, you can use a callable in [] indexing of Series, DataFrame and Panel. The callable must return a valid input for [] indexing depending on its class and index type.

In [19]: df[lambda x: 'A']
Out[19]:
0    1
1    2
2    3
Name: A, dtype: int64
Using these methods / indexers, you can chain data selection operations without using a temporary variable.

In [20]: bb = pd.read_csv('data/baseball.csv', index_col='id')

In [21]: (bb.groupby(['year', 'team'])
   ....:    .sum()
   ....:    .loc[lambda df: df.r > 100]
   ....: )
Out[21]:
           stint    g    ab    r    h  X2b  X3b  hr    rbi    sb   cs   bb     so   ibb   hbp    sh    sf  gidp
year team
2007 CIN       6  379   745  101  203   35    2  36  125.0  10.0  1.0  105  127.0  14.0   1.0   1.0  15.0  18.0
     DET       5  301  1062  162  283   54    4  37  144.0  24.0  7.0   97  176.0   3.0  10.0   4.0   8.0  28.0
     HOU       4  311   926  109  218   47    6  14   77.0  10.0  4.0   60  212.0   3.0   9.0  16.0   6.0  17.0
     LAN      11  413  1021  153  293   61    3  36  154.0   7.0  5.0  114  141.0   8.0   9.0   3.0   8.0  29.0
     NYN      13  622  1854  240  509  101    3  61  243.0  22.0  4.0  174  310.0  24.0  23.0  18.0  15.0  48.0
     SFN       5  482  1305  198  337   67    6  40  171.0  26.0  7.0  235  188.0  51.0   8.0  16.0   6.0  41.0
     TEX       2  198   729  115  200   40    4  28  115.0  21.0  4.0   73  140.0   4.0   5.0   2.0   8.0  16.0
     TOR       4  459  1408  187  378   96    2  58  223.0   4.0  2.0  190  265.0  16.0  12.0   4.0  16.0  38.0
Partial string indexing now matches on a DatetimeIndex when it is part of a MultiIndex.

In [23]: dft2
Out[23]:
                              A
2013-01-01 00:00:00 a  1.129167
                    b  0.231299
2013-01-01 12:00:00 a -0.184695
                    b -0.138561
2013-01-02 00:00:00 a -0.924325
                    b  0.232465
2013-01-02 12:00:00 a -0.789552
...                         ...
2013-01-04 00:00:00 a  1.813962
                    b -1.053571
2013-01-04 12:00:00 a  0.009412
                    b -0.165966
2013-01-05 00:00:00 a -0.848662
                    b -0.495553
2013-01-05 12:00:00 a -0.176421

On other levels:

In [25]: idx = pd.IndexSlice

In [26]: dft2 = dft2.swaplevel(0, 1).sort_index()

In [27]: dft2
Out[27]:
                              A
a 2013-01-01 00:00:00  1.129167
  2013-01-01 12:00:00 -0.184695
  2013-01-02 00:00:00 -0.924325
  2013-01-02 12:00:00 -0.789552
  2013-01-03 00:00:00 -0.534541
  2013-01-03 12:00:00 -0.443109
  2013-01-04 00:00:00 -0.460149
...                         ...
b 2013-01-02 12:00:00 -0.364308
  2013-01-03 00:00:00  0.822239
  2013-01-03 12:00:00 -2.119990
  2013-01-04 00:00:00  1.813962
  2013-01-04 12:00:00  0.009412
  2013-01-05 00:00:00 -0.848662
  2013-01-05 12:00:00 -0.176421
Assembling Datetimes

pd.to_datetime() has gained the ability to assemble datetimes from a passed-in DataFrame or a dict (GH8158).

In [29]: df = pd.DataFrame({'year': [2015, 2016],
   ....:                    'month': [2, 3],
   ....:                    'day': [4, 5],
   ....:                    'hour': [2, 3]})
   ....:

In [30]: df
Out[30]:
   day  hour  month  year
0    4     2      2  2015
1    5     3      3  2016
You can pass only the columns that you need to assemble.

In [32]: pd.to_datetime(df[['year', 'month', 'day']])
Out[32]:
0   2015-02-04
1   2016-03-05
dtype: datetime64[ns]
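Since a dict of array-likes is also accepted (per the note above), the same assembly can be sketched without building a DataFrame first:

pd.to_datetime({'year': [2015, 2016], 'month': [2, 3], 'day': [4, 5]})
# 0   2015-02-04
# 1   2016-03-05
# dtype: datetime64[ns]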
Other Enhancements

- pd.read_csv() now supports delim_whitespace=True for the Python engine (GH12958)
- pd.read_csv() now supports opening ZIP files that contain a single CSV, via extension inference or explicit compression='zip' (GH12175)
- pd.read_csv() now supports opening files using xz compression, via extension inference or when explicit compression='xz' is specified; xz compression is also supported by DataFrame.to_csv in the same way (GH11852)
- pd.read_msgpack() now always gives writeable ndarrays even when compression is used (GH12359)
- pd.read_msgpack() now supports serializing and de-serializing categoricals with msgpack (GH12573)
- .to_json() now supports NDFrames that contain categorical and sparse data (GH10778)
- interpolate() now supports method='akima' (GH7588)
- pd.read_excel() now accepts path objects (e.g. pathlib.Path, py.path.local) for the file path, in line with other read_* functions (GH12655)
- Added a .weekday_name property as a component to DatetimeIndex and the .dt accessor (GH11128)
- Index.take now handles allow_fill and fill_value consistently (GH12631)

  In [33]: idx = pd.Index([1., 2., 3., 4.], dtype='float')

  # default, allow_fill=True, fill_value=None
  In [34]: idx.take([2, -1])
  Out[34]: Float64Index([3.0, 4.0], dtype='float64')

  In [35]: idx.take([2, -1], fill_value=True)
  Out[35]: Float64Index([3.0, nan], dtype='float64')

- Index now supports .str.get_dummies(), which returns a MultiIndex, see Creating Indicator Variables (GH10008, GH10103)

  In [36]: idx = pd.Index(['a|b', 'a|c', 'b|c'])

  In [37]: idx.str.get_dummies('|')
  Out[37]:
  MultiIndex(levels=[[0, 1], [0, 1], [0, 1]],
             labels=[[1, 1, 0], [1, 0, 1], [0, 1, 1]],
             names=[u'a', u'b', u'c'])

- pd.crosstab() has gained a normalize argument for normalizing frequency tables (GH12569); examples are in the updated docs, and a small sketch follows this list
- .resample(..).interpolate() is now supported (GH12925)
- .isin() now accepts passed sets (GH12988)
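A sketch of the crosstab normalize argument referenced above (the data are invented for the example; normalize also accepts 'all', 'index', or 'columns'):

a = pd.Series(['x', 'x', 'y', 'y'])
b = pd.Series(['u', 'v', 'u', 'u'])

pd.crosstab(a, b, normalize='index')
# col_0    u    v
# row_0
# x      0.5  0.5
# y      1.0  0.0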
- Bug in SparseDataFrame.loc[], .iloc[] may result in a dense Series, rather than a SparseSeries (GH12787)
- Bug in SparseArray addition ignores fill_value of the right hand side (GH12910)
- Bug in SparseArray mod raises AttributeError (GH12910)
- Bug in SparseArray pow calculates 1 ** np.nan as np.nan, which must be 1 (GH12910)
- Bug in SparseArray comparison output may give an incorrect result or raise ValueError (GH12971)
- Bug in SparseSeries.__repr__ raises TypeError when it is longer than max_rows (GH10560)
- Bug in SparseSeries.shape ignores fill_value (GH10452)
- Bug in SparseSeries and SparseArray may have a different dtype from their dense values (GH12908)
- Bug in SparseSeries.reindex incorrectly handles fill_value (GH12797)
- Bug in SparseArray.to_frame() results in DataFrame, rather than SparseDataFrame (GH9850)
- Bug in SparseSeries.value_counts() does not count fill_value (GH6749)
- Bug in SparseArray.to_dense() does not preserve dtype (GH10648)
- Bug in SparseArray.to_dense() incorrectly handles fill_value (GH12797)
- Bug in pd.concat() of SparseSeries results in dense (GH10536)
- Bug in pd.concat() of SparseDataFrame incorrectly handles fill_value (GH9765)
- Bug in pd.concat() of SparseDataFrame may raise AttributeError (GH12174)
- Bug in SparseArray.shift() may raise NameError or TypeError (GH12908)
The output of .groupby(..).nth() is now more consistent with the as_index argument. Given this DataFrame:

   A  B
0  a  1
1  b  2
2  a  3
Previous Behavior:

In [3]: df.groupby('A', as_index=True)['B'].nth(0)
Out[3]:
0    1
1    2
Name: B, dtype: int64

In [4]: df.groupby('A', as_index=False)['B'].nth(0)
Out[4]:
0    1
1    2
Name: B, dtype: int64

New Behavior:

In [43]: df.groupby('A', as_index=True)['B'].nth(0)
Out[43]:
A
a    1
b    2
Name: B, dtype: int64

In [44]: df.groupby('A', as_index=False)['B'].nth(0)
Out[44]:
0    1
1    2
Name: B, dtype: int64
Furthermore, previously, a .groupby would always sort, regardless of whether sort=False was passed with .nth().

In [45]: np.random.seed(1234)

In [46]: df = pd.DataFrame(np.random.randn(100, 2), columns=['a', 'b'])

In [47]: df['c'] = np.random.randint(0, 4, 100)

Previous Behavior:

In [4]: df.groupby('c', sort=True).nth(1)
Out[4]:
          a         b
c
0 -0.334077  0.002118
1  0.036142 -2.074978
2 -0.720589  0.887163
3  0.859588 -0.636524

In [5]: df.groupby('c', sort=False).nth(1)
Out[5]:
          a         b
c
0 -0.334077  0.002118
1  0.036142 -2.074978
2 -0.720589  0.887163
3  0.859588 -0.636524

New Behavior:

In [48]: df.groupby('c', sort=True).nth(1)
Out[48]:
          a         b
c
0 -0.334077  0.002118
1  0.036142 -2.074978
2 -0.720589  0.887163
3  0.859588 -0.636524

In [49]: df.groupby('c', sort=False).nth(1)
Out[49]:
          a         b
c
2 -0.720589  0.887163
3  0.859588 -0.636524
0 -0.334077  0.002118
1  0.036142 -2.074978
Previous behaviour:

In [2]: np.cumsum(sp, axis=0)
...
TypeError: cumsum() takes at most 2 arguments (4 given)

New behaviour:

In [52]: np.cumsum(sp, axis=0)
Out[52]:
     0
0  1.0
1  3.0
2  6.0
The behavior of .groupby(..).apply(..) with a TimeGrouper has changed as well. Given a DataFrame with a date column and a value column holding [10, 13]:
Previous behavior:

In [1]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x.value.sum())
Out[1]:
...
TypeError: cannot concatenate a non-NDFrame object

# Output is a Series
In [2]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x[['value']].sum())
Out[2]:
date
2000-10-31  value    10
2000-11-30  value    13
dtype: int64

New Behavior:

# Output is a Series
In [55]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x.value.sum())
Out[55]:
date
2000-10-31    10
2000-11-30    13
Freq: M, dtype: int64

# Output is a DataFrame
In [56]: df.groupby(pd.TimeGrouper(key='date', freq='M')).apply(lambda x: x[['value']].sum())
Out[56]:
            value
date
2000-10-31     10
2000-11-30     13
pd.read_csv() now raises an EmptyDataError when given an empty file or buffer, with both engines. New behaviour:
In [1]: df = pd.read_csv(StringIO(''), engine='c')
...
pandas.io.common.EmptyDataError: No columns to parse from file
In [2]: df = pd.read_csv(StringIO(''), engine='python')
...
pandas.io.common.EmptyDataError: No columns to parse from file
In addition to this error change, several others have been made as well:

- CParserError now sub-classes ValueError instead of just an Exception (GH12551)
- A CParserError is now raised instead of a generic Exception in read_csv when the c engine cannot parse a column (GH12506)
- A ValueError is now raised instead of a generic Exception in read_csv when the c engine encounters a NaN value in an integer column (GH12506)
- A ValueError is now raised instead of a generic Exception in read_csv when true_values is specified, and the c engine encounters an element in a column containing unencodable bytes (GH12506)
- The pandas.parser.OverflowError exception has been removed and has been replaced with Python's built-in OverflowError exception (GH12506)
- pd.read_csv() no longer allows a combination of strings and integers for the usecols parameter (GH12678)
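Because the parser exceptions now derive from ValueError, a plain ValueError handler covers them; a sketch (assuming EmptyDataError follows the same hierarchy as CParserError now does):

from pandas.compat import StringIO
import pandas as pd

try:
    pd.read_csv(StringIO(''))
except ValueError as err:
    # EmptyDataError (and CParserError) are caught here now
    print('could not parse:', err)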
to_datetime error changes

Bugs in pd.to_datetime() when passing a unit with convertible entries and errors='coerce', or with non-convertible entries and errors='ignore', have been fixed. Furthermore, an OutOfBoundsDatetime exception is now raised when an out-of-range value is encountered for that unit with errors='raise' (GH11758, GH13052, GH13059).
Previous behaviour:
In [27]: pd.to_datetime(1420043460, unit='s', errors='coerce')
Out[27]: NaT
In [28]: pd.to_datetime(11111111, unit='D', errors='ignore')
OverflowError: Python int too large to convert to C long
In [29]: pd.to_datetime(11111111, unit='D', errors='raise')
OverflowError: Python int too large to convert to C long
New behaviour:
In [2]: pd.to_datetime(1420043460, unit='s', errors='coerce')
Out[2]: Timestamp('2014-12-31 16:31:00')
In [3]: pd.to_datetime(11111111, unit='D', errors='ignore')
Out[3]: 11111111
In [4]: pd.to_datetime(11111111, unit='D', errors='raise')
OutOfBoundsDatetime: cannot convert input with unit 'D'
- Bug in Period and PeriodIndex creation raises KeyError if freq="Minute" is specified. Note that Minute freq is deprecated in v0.17.0, and it is recommended to use freq="T" instead (GH11854)
- Bug in .resample(...).count() with a PeriodIndex always raising a TypeError (GH12774)
- Bug in .resample(...) with a PeriodIndex casting to a DatetimeIndex when empty (GH12868)
- Bug in .resample(...) with a PeriodIndex when resampling to an existing frequency (GH12770)
- Bug in printing data which contains Period with different freq raises ValueError (GH12615)
- Bug in Series construction with Categorical when dtype='category' is specified (GH12574)
- Bugs in concatenation with a coercible dtype were too aggressive, resulting in different dtypes in output formatting when an object was longer than display.max_rows (GH12411, GH12045, GH11594, GH10571, GH12211)
- Bug in the float_format option with the option not being validated as a callable (GH12706)
- Bug in GroupBy.filter when dropna=False and no groups fulfilled the criteria (GH12768)
- Bug in __name__ of .cum* functions (GH12021)
- Bug in .astype() of a Float64Index/Int64Index to an Int64Index (GH12881)
- Bug in roundtripping an integer based index in .to_json()/.read_json() when orient='index' (the default) (GH12866)
- Bug in plotting Categorical dtypes causing an error when attempting a stacked bar plot (GH13019)
- Compat with numpy >= 1.11 for NaT comparisons (GH12969)
- Bug in .drop() with a non-unique MultiIndex (GH12701)
- Bug in .concat of datetime tz-aware and naive DataFrames (GH12467)
- Bug in correctly raising a ValueError in .resample(..).fillna(..) when passing a non-string (GH12952)
- Bug fixes in various encoding and header processing issues in pd.read_sas() (GH12659, GH12654, GH12647, GH12809)
- Bug in pd.crosstab() which would silently ignore aggfunc if values=None (GH12569)
- Potential segfault in DataFrame.to_json when serialising datetime.time (GH11473)
- Potential segfault in DataFrame.to_json when attempting to serialise a 0d array (GH11299)
- Segfault in to_json when attempting to serialise a DataFrame or Series with non-ndarray values; now supports serialization of category, sparse, and datetime64[ns, tz] dtypes (GH10778)
- Bug in DataFrame.to_json with an unsupported dtype not passed to the default handler (GH12554)
- Bug in .align not returning the sub-class (GH12983)
- Bug in aligning a Series with a DataFrame (GH13037)
- Bug in ABCPanel in which Panel4D was not being considered as a valid instance of this generic type (GH12810)
- Bug in consistency of .name on .groupby(..).apply(..) cases (GH12363)
- Bug in Timestamp.__repr__ that caused pprint to fail in nested structures (GH12622)
- Bug in Timedelta.min and Timedelta.max; the properties now report the true minimum/maximum timedeltas as recognized by pandas. See the documentation. (GH12727)
- Bug in .quantile() with interpolation may coerce to float unexpectedly (GH12772)
- Bug in .quantile() with an empty Series may return scalar rather than empty Series (GH12772)
- Bug in .loc with out-of-bounds in a large indexer would raise IndexError rather than KeyError (GH12527)
- Bug in resampling when using a TimedeltaIndex and .asfreq(); would previously not include the final fencepost (GH12926)
- Bug in equality testing with a Categorical in a DataFrame (GH12564)
- Bug in GroupBy.first(), .last() returns an incorrect row when TimeGrouper is used (GH7453)
- Bug in pd.read_csv() with the c engine when specifying skiprows with newlines in quoted items (GH10911, GH12775)
- Bug in DataFrame timezone lost when assigning a tz-aware datetime Series with alignment (GH12981)
- Bug in .value_counts() when normalize=True and dropna=True where nulls still contributed to the normalized count (GH12558)
- Bug in Series.value_counts() loses name if its dtype is category (GH12835)
- Bug in Series.value_counts() loses timezone info (GH12835)
- Bug in Series.value_counts(normalize=True) with Categorical raises UnboundLocalError (GH12835)
v0.18.0 (March 13, 2016)

Warning: pandas >= 0.18.0 no longer supports compatibility with Python versions 2.6 and 3.3 (GH7718,

Warning: numexpr version 2.4.4 will now show a warning and not be used as a computation back-end for pandas because of some buggy behavior. This does not affect other versions (>= 2.1 and >= 2.4.6). (GH12489)

Highlights include:

- Moving and expanding window functions are now methods on Series and DataFrame, similar to .groupby, see here.
- Adding support for a RangeIndex as a specialized form of the Int64Index for memory savings, see here.
- API breaking change to the .resample method to make it more .groupby like, see here.
- Removal of support for positional indexing with floats, which was deprecated since 0.14.0. This will now raise a TypeError, see here.
- The .to_xarray() function has been added for compatibility with the xarray package, see here.
- The read_sas function has been enhanced to read sas7bdat files, see here.
- Addition of the .str.extractall() method, and API changes to the .str.extract() method and .str.cat() method.
- pd.test() top-level nose test runner is available (GH4327).

Check the API Changes and deprecations before updating.
What's new in v0.18.0:

- New features
  - Window functions are now methods
  - Changes to rename
  - Range Index
  - Changes to str.extract
  - Addition of str.extractall
  - Changes to str.cat
  - Datetimelike rounding
  - Formatting of Integers in FloatIndex
  - Changes to dtype assignment behaviors
  - to_xarray
  - Latex Representation
  - pd.read_sas() changes
  - Other enhancements
- Backwards incompatible API changes
  - NaT and Timedelta operations
  - Changes to msgpack
  - Signature change for .rank
  - Bug in QuarterBegin with n=0
  - Resample API
  - Changes to eval
  - Other API Changes
  - Deprecations
  - Removal of deprecated float indexers
  - Removal of prior version deprecations/changes
- Performance Improvements
- Bug Fixes
Window functions are now methods

In [3]: df
Out[3]:
   A         B
0  0  0.471435
1  1 -1.190976
2  2  1.432707
3  3 -0.312652
4  4 -0.720589
5  5  0.887163
6  6  0.859588
7  7 -0.636524
8  8  0.015696
9  9 -2.242685
Previous Behavior:

In [8]: pd.rolling_mean(df, window=3)
FutureWarning: pd.rolling_mean is deprecated for DataFrame and will be removed in a future version, replace with
               DataFrame.rolling(window=3,center=False).mean()
Out[8]:
     A         B
0  NaN       NaN
1  NaN       NaN
2  1.0  0.237722
3  2.0 -0.023640
4  3.0  0.133155
5  4.0 -0.048693
6  5.0  0.342054
7  6.0  0.370076
8  7.0  0.079587
9  8.0 -0.954504
New Behavior:

In [4]: r = df.rolling(window=3)

Tab-completing on r shows the available methods and properties:

r.agg          r.aggregate    r.apply        r.corr         r.count
r.cov          r.exclusions   r.kurt         r.max          r.mean
r.median       r.min          r.n            r.q

In [5]: r.mean()
Out[5]:
     A         B
0  NaN       NaN
1  NaN       NaN
2  1.0  0.237722
3  2.0 -0.023640
4  3.0  0.133155
5  4.0 -0.048693
6  5.0  0.342054
7  6.0  0.370076
8  7.0  0.079587
9  8.0 -0.954504

In [6]: r.agg({'A': ['mean', 'std'],
   ...:        'B': ['mean', 'std']})
Out[6]:
     A              B
  mean  std      mean       std
0  NaN  NaN       NaN       NaN
1  NaN  NaN       NaN       NaN
2  1.0  1.0  0.237722  1.327364
3  2.0  1.0 -0.023640  1.335505
4  3.0  1.0  0.133155  1.143778
5  4.0  1.0 -0.048693  0.835747
6  5.0  1.0  0.342054  0.920379
7  6.0  1.0  0.370076  0.871850
8  7.0  1.0  0.079587  0.750099
9  8.0  1.0 -0.954504  1.162285
Changes to rename

Series.rename and NDFrame.rename_axis can now take a scalar or list-like argument for altering the Series or axis name, in addition to their old behaviors of altering labels. (GH9494, GH11965)

In [9]: s = pd.Series(np.random.randn(5))

In [10]: s.rename('newname')
Out[10]:
0    1.150036
1    0.991946
2    0.953324
3   -2.021255
4   -0.334077
Name: newname, dtype: float64

In [11]: df = pd.DataFrame(np.random.randn(5, 2))

In [12]: (df.rename_axis("indexname")
   ....:    .rename_axis("columns_name", axis="columns"))
   ....:
Out[12]:
columns_name         0         1
indexname
0             0.002118  0.405453
1             0.289092  1.321158
2            -1.546906 -0.202646
3            -0.655969  0.193421
4             0.553439  1.318152
The new functionality works well in method chains. Previously these methods only accepted functions or dicts mapping a label to a new label. This continues to work as before for function or dict-like values.
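A sketch of the scalar rename in such a chain (the labels here are invented; dict-like renaming still relabels the index as before):

(pd.Series([1, 2, 3])
   .rename('total')       # new: a scalar sets the Series name
   .rename({0: 'zero'}))  # old: a dict relabels the index entry 0 -> 'zero'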
Range Index

A RangeIndex has been added to the Int64Index sub-classes to support a memory-saving alternative for common use cases. This has a similar implementation to the python range object (xrange in python 2), in that it only stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to Int64Index if needed.

This will now be the default constructed index for NDFrame objects, rather than previously an Int64Index. (GH939, GH12070, GH12071, GH12109, GH12888)

Previous Behavior:

In [3]: s = pd.Series(range(1000))

In [4]: s.index
Out[4]:
Int64Index([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,
            ...
            990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
           dtype='int64', length=1000)

In [6]: s.index.nbytes
Out[6]: 8000

New Behavior:

In [13]: s = pd.Series(range(1000))

In [14]: s.index
Out[14]: RangeIndex(start=0, stop=1000, step=1)

In [15]: s.index.nbytes
Out[15]: 72
Changes to str.extract
The .str.extract method takes a regular expression with capture groups, finds the first match in each subject string, and
returns the contents of the capture groups (GH11386).
In v0.18.0, the expand argument was added to extract.
expand=False: it returns a Series, Index, or DataFrame, depending on the subject and regular expression pattern (same behavior as pre-0.18.0).
expand=True: it always returns a DataFrame, which is more consistent and less confusing from the perspective of a user.
Currently the default is expand=None which gives a FutureWarning and uses expand=False. To avoid this
warning, please explicitly specify expand.
In [1]: pd.Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)', expand=None)
FutureWarning: currently extract(expand=None) means expand=False (return Index/Series/DataFrame)
but in a future version of pandas this will be changed to expand=True (return DataFrame)
Out[1]:
0      1
1      2
2    NaN
dtype: object
Calling on an Index with a regex with exactly one capture group returns an Index if expand=False.
In [18]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"])
In [19]: s.index
Out[19]: Index([u'A11', u'B22', u'C33'], dtype='object')
In [20]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[20]: Index([u'A', u'B', u'C'], dtype='object', name=u'letter')
Calling on an Index with a regex with more than one capture group raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
With expand=True, such a regex returns a DataFrame instead, with one column per capture group.

In [21]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[21]:
  letter   1
0      A  11
1      B  22
2      C  33
In summary, extract(expand=True) always returns a DataFrame with a row for every subject string, and a
column for every capture group.
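A minimal sketch of the two modes on a Series with a single capture group:

```python
import pandas as pd

s = pd.Series(['a1', 'b2', 'c3'])

# expand=False with one group returns a Series
print(s.str.extract('[ab](\\d)', expand=False))

# expand=True always returns a DataFrame, even for a single group
print(s.str.extract('[ab](\\d)', expand=True))
```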
Addition of str.extractall
The .str.extractall method was added (GH11386). Unlike extract, which returns only the first match, extractall returns all matches.
In [23]: s = pd.Series(["a1a2", "b1", "c1"], ["A", "B", "C"])

In [24]: s
Out[24]:
A    a1a2
B      b1
C      c1
dtype: object

In [25]: s.str.extract("(?P<letter>[ab])(?P<digit>\d)", expand=False)
Out[25]:
  letter digit
A      a     1
B      b     1
C    NaN   NaN
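By contrast, extractall produces one row per match, with an extra match level in the index counting the matches within each subject string; a minimal sketch on the same s:

```python
import pandas as pd

s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])

# 'a1a2' yields two rows (match 0 and 1); 'c1' has no match, so no rows
print(s.str.extractall("(?P<letter>[ab])(?P<digit>\\d)"))
```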
Changes to str.cat
The method .str.cat() concatenates the members of a Series. Before, if NaN values were present in the Series,
calling .str.cat() on it would return NaN, unlike the rest of the Series.str.* API. This behavior has been
amended to ignore NaN values by default. (GH11435).
A new, friendlier ValueError is added to protect against the mistake of supplying the sep as an arg, rather than as
a kwarg. (GH11334).
In [27]: pd.Series(['a','b',np.nan,'c']).str.cat(sep=' ')
Out[27]: 'a b c'
In [28]: pd.Series(['a','b',np.nan,'c']).str.cat(sep=' ', na_rep='?')
Out[28]: 'a b ? c'
In [2]: pd.Series(['a','b',np.nan,'c']).str.cat(' ')
ValueError: Did you mean to supply a `sep` keyword?
Datetimelike rounding
DatetimeIndex, Timestamp, TimedeltaIndex, and Timedelta have gained the .round(), .floor() and
.ceil() methods for datetimelike rounding, flooring and ceiling. (GH4314, GH11963)
Naive datetimes
In [29]: dr = pd.date_range('20130101 09:12:56.1234', periods=3)
In [30]: dr
Out[30]:
DatetimeIndex(['2013-01-01 09:12:56.123400', '2013-01-02 09:12:56.123400',
'2013-01-03 09:12:56.123400'],
dtype='datetime64[ns]', freq='D')
In [31]: dr.round('s')
Out[31]:
DatetimeIndex(['2013-01-01 09:12:56', '2013-01-02 09:12:56',
'2013-01-03 09:12:56'],
dtype='datetime64[ns]', freq=None)
# Timestamp scalar
In [32]: dr[0]
Out[32]: Timestamp('2013-01-01 09:12:56.123400', offset='D')
In [33]: dr[0].round('10s')
Out[33]: Timestamp('2013-01-01 09:13:00')
Timedeltas
In [37]: t = timedelta_range('1 days 2 hr 13 min 45 us',periods=3,freq='d')
In [38]: t
Out[38]:
TimedeltaIndex(['1 days 02:13:00.000045', '2 days 02:13:00.000045',
'3 days 02:13:00.000045'],
dtype='timedelta64[ns]', freq='D')
In [39]: t.round('10min')
Out[39]: TimedeltaIndex(['1 days 02:10:00', '2 days 02:10:00', '3 days 02:10:00'], dtype='timedelta64[ns]', freq='D')

# Timedelta scalar
In [40]: t[0]
Out[40]: Timedelta('1 days 02:13:00.000045')
In addition, .round(), .floor() and .ceil() are available through the .dt accessor of Series; here dr has been localized to US/Eastern.
In [42]: s = pd.Series(dr)

In [43]: s
Out[43]:
0   2013-01-01 09:12:56.123400-05:00
1   2013-01-02 09:12:56.123400-05:00
2   2013-01-03 09:12:56.123400-05:00
dtype: datetime64[ns, US/Eastern]

In [44]: s.dt.round('D')
Out[44]:
0   2013-01-01 00:00:00-05:00
1   2013-01-02 00:00:00-05:00
2   2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Formatting of Integers in FloatIndex

Integers in a FloatIndex, e.g. in a default index, are now formatted with a decimal point and a 0 digit, e.g. 0.0. This also affects the output of Series.to_csv.

New Behavior:

In [45]: s = pd.Series([1, 2, 3], index=np.arange(3.))

In [46]: s
Out[46]:
0.0    1
1.0    2
2.0    3
dtype: int64

In [47]: s.index
Out[47]: Float64Index([0.0, 1.0, 2.0], dtype='float64')

In [48]: print(s.to_csv(path=None))
0.0,1
1.0,2
2.0,3
Changes to dtype assignment behaviors

When a DataFrame's slice is updated with a new slice of the same dtype, the dtype will now stay the same (previously such an assignment could upcast, e.g. uint32 to int64).

New Behavior:

In [49]: df = pd.DataFrame({'a': [0, 1, 1],
   ....:                    'b': pd.Series([100, 200, 300], dtype='uint32')})
   ....:

In [50]: df.dtypes
Out[50]:
a     int64
b    uint32
dtype: object

In [51]: ix = df['a'] == 1

In [52]: df.loc[ix, 'b'] = df.loc[ix, 'b']

In [53]: df.dtypes
Out[53]:
a     int64
b    uint32
dtype: object
When a DataFrame's integer slice is partially updated with a new slice of floats that could potentially be downcast
to integer without losing precision, the dtype of the slice will be set to float instead of integer.
Previous Behavior:

In [4]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3),
                          columns=list('abc'),
                          index=[[4,4,8], [8,10,12]])

In [5]: df
Out[5]:
      a  b  c
4 8   1  2  3
  10  4  5  6
8 12  7  8  9

In [6]: df.ix[4, 'c'] = np.array([0., 1.])

In [7]: df
Out[7]:
      a  b  c
4 8   1  2  0
  10  4  5  1
8 12  7  8  9

New Behavior:

In [54]: df = pd.DataFrame(np.array(range(1,10)).reshape(3,3),
   ....:                   columns=list('abc'),
   ....:                   index=[[4,4,8], [8,10,12]])
   ....:

In [55]: df
Out[55]:
      a  b  c
4 8   1  2  3
  10  4  5  6
8 12  7  8  9

In [56]: df.ix[4, 'c'] = np.array([0., 1.])

In [57]: df
Out[57]:
      a  b    c
4 8   1  2  0.0
  10  4  5  1.0
8 12  7  8  9.0
to_xarray
In a future version of pandas, we will be deprecating Panel and other > 2 ndim objects. In order to provide for
continuity, all NDFrame objects have gained the .to_xarray() method in order to convert to xarray objects,
which has a pandas-like interface for > 2 ndim. (GH11972)
See the xarray full-documentation here.
In [1]: p = Panel(np.arange(2*3*4).reshape(2,3,4))

In [2]: p.to_xarray()
Out[2]:
<xarray.DataArray (items: 2, major_axis: 3, minor_axis: 4)>
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
Coordinates:
  * items       (items) int64 0 1
  * major_axis  (major_axis) int64 0 1 2
  * minor_axis  (minor_axis) int64 0 1 2 3
Latex Representation
DataFrame has gained a ._repr_latex_() method in order to allow for conversion to LaTeX in an IPython/Jupyter
notebook using nbconvert. (GH11778)
Note that this must be activated by setting the option pd.display.latex.repr=True (GH12182)
For example, if you have a Jupyter notebook you plan to convert to LaTeX using nbconvert, place the statement
pd.display.latex.repr=True in the first cell to have the contained DataFrame output also stored as LaTeX.
The options display.latex.escape and display.latex.longtable have also been added to the configuration and are used automatically by the to_latex method. See the available options docs for more info.
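A minimal sketch (output abbreviated; the exact LaTeX depends on the escape and longtable options):

```python
import pandas as pd

pd.set_option('display.latex.repr', True)

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# With the option enabled, the notebook machinery (and nbconvert)
# picks up the LaTeX representation of the frame
print(df._repr_latex_())
```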
pd.read_sas() changes
read_sas has gained the ability to read SAS7BDAT files, including compressed files. The files can be read in
their entirety, or incrementally. For full details see here. (GH4052)
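A minimal sketch, with a hypothetical file path (the chunked pattern follows the docs referenced above):

```python
import pandas as pd

# Read the whole file at once
df = pd.read_sas('data/records.sas7bdat')  # hypothetical path

# Or stream it incrementally in chunks of 10,000 rows
for chunk in pd.read_sas('data/records.sas7bdat', chunksize=10000):
    print(len(chunk))
```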
Other enhancements
Handle truncated floats in SAS xport files (GH11713)
Added option to hide index in Series.to_string (GH11729)
read_excel now supports s3 urls of the format s3://bucketname/filename (GH11447)
Added support for the AWS_S3_HOST environment variable when reading from s3 (GH12198)
A simple version of Panel.round() is now implemented (GH11763)
For Python 3.x, round(DataFrame), round(Series), round(Panel) will work (GH11763)
sys.getsizeof(obj) returns the memory usage of a pandas object, including the values it contains
(GH11597)
Series gained an is_unique attribute (GH11946)
DataFrame.quantile and Series.quantile now accept interpolation keyword (GH10174).
Added DataFrame.style.format for more flexible formatting of cell values (GH11692)
DataFrame.select_dtypes now allows the np.float16 typecode (GH11990)
pivot_table() now accepts most iterables for the values parameter (GH12017)
Added Google BigQuery service account authentication support, which enables authentication on remote
servers. (GH11881, GH12572). For further details see here
Backwards incompatible API changes

NaT and Timedelta operations

NaT may represent either a datetime64[ns] null or a timedelta64[ns] null. Given the ambiguity, it is
treated as a timedelta64[ns], which allows more operations to succeed.
In [64]: pd.NaT + pd.NaT
Out[64]: NaT

# same as
In [65]: pd.Timedelta('1s') + pd.Timedelta('1s')
Out[65]: Timedelta('0 days 00:00:02')
as opposed to
In [3]: pd.Timestamp('19900315') + pd.Timestamp('19900315')
TypeError: unsupported operand type(s) for +: 'Timestamp' and 'Timestamp'
However, when wrapped in a Series whose dtype is datetime64[ns] or timedelta64[ns], the dtype
information is respected.
In [1]: pd.Series([pd.NaT], dtype='<M8[ns]') + pd.Series([pd.NaT], dtype='<M8[ns]')
TypeError: can only operate on a datetimes for subtraction,
but the operator [__add__] was passed
In [66]: pd.Series([pd.NaT], dtype='<m8[ns]') + pd.Series([pd.NaT], dtype='<m8[ns]')
Out[66]:
0   NaT
dtype: timedelta64[ns]
NaT.isoformat() now returns 'NaT'. This change allows pd.Timestamp to rehydrate any timestamp-like
object from its isoformat (GH12300).
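A minimal sketch of the round trip:

```python
import pandas as pd

ts = pd.Timestamp('2016-03-13 12:00:00')
print(pd.Timestamp(ts.isoformat()))       # an equal Timestamp back

# NaT now round-trips the same way, since NaT.isoformat() is 'NaT'
print(pd.Timestamp(pd.NaT.isoformat()))   # NaT
```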
Changes to msgpack
Forward incompatible changes in msgpack writing format were made over 0.17.0 and 0.18.0; older versions of
pandas cannot read files packed by newer versions (GH12129, GH10527).
Bugs in to_msgpack and read_msgpack introduced in 0.17.0 and fixed in 0.18.0 caused files packed in Python
2 to be unreadable by Python 3 (GH12142). The following table describes the backward and forward compatibility of msgpacks.
Packed with              Can be unpacked with
pre-0.17 / Python 2      any
pre-0.17 / Python 3      any
0.17 / Python 2          ==0.17 / Python 2, >=0.18 / Python 2
0.17 / Python 3          >=0.18 / any Python
0.18                     >=0.18

0.18.0 is backward-compatible for reading files packed by older versions, except for files packed with 0.17 in
Python 2, which can only be unpacked in Python 2.
Signature change for .rank

Series.rank and DataFrame.rank now have the same signature.

New signature:
In [71]: pd.Series([0,1]).rank(axis=0, method='average', numeric_only=None,
   ....:                       na_option='keep', ascending=True, pct=False)
   ....:
Out[71]:
0    1.0
1    2.0
dtype: float64

In [72]: pd.DataFrame([0,1]).rank(axis=0, method='average', numeric_only=None,
   ....:                          na_option='keep', ascending=True, pct=False)
   ....:
Out[72]:
     0
0  1.0
1  2.0
Bug in QuarterBegin with n=0

The general semantics of anchored offsets for n=0 is to not move the date when it is an anchor point (e.g., a quarter
start date), and otherwise to roll forward to the next anchor point.
In [73]: d = pd.Timestamp('2014-02-01')
In [74]: d
Out[74]: Timestamp('2014-02-01 00:00:00')
In [75]: d + pd.offsets.QuarterBegin(n=0, startingMonth=2)
Out[75]: Timestamp('2014-02-01 00:00:00')
In [76]: d + pd.offsets.QuarterBegin(n=0, startingMonth=1)
Out[76]: Timestamp('2014-04-01 00:00:00')
For the QuarterBegin offset in previous versions, the date would instead be rolled backwards if the date was in the same
month as the quarter start date.
In [3]: d = pd.Timestamp('2014-02-15')
In [4]: d + pd.offsets.QuarterBegin(n=0, startingMonth=2)
Out[4]: Timestamp('2014-02-01 00:00:00')
This behavior has been corrected in version 0.18.0, which is consistent with other anchored offsets like MonthBegin
and YearBegin.
In [77]: d = pd.Timestamp('2014-02-15')
In [78]: d + pd.offsets.QuarterBegin(n=0, startingMonth=2)
Out[78]: Timestamp('2014-05-01 00:00:00')
Resample API
Like the change in the window functions API above, .resample(...) is changing to have a more groupby-like
API. (GH11732, GH12702, GH12202, GH12332, GH12334, GH12348, GH12448).
In [79]: np.random.seed(1234)
In [80]: df = pd.DataFrame(np.random.rand(10,4),
   ....:                   columns=list('ABCD'),
   ....:                   index=pd.date_range('2010-01-01 09:00:00', periods=10, freq='s'))
   ....:

In [81]: df
Out[81]:
                            A         B         C         D
2010-01-01 09:00:00  0.191519  0.622109  0.437728  0.785359
2010-01-01 09:00:01  0.779976  0.272593  0.276464  0.801872
2010-01-01 09:00:02  0.958139  0.875933  0.357817  0.500995
2010-01-01 09:00:03  0.683463  0.712702  0.370251  0.561196
2010-01-01 09:00:04  0.503083  0.013768  0.772827  0.882641
2010-01-01 09:00:05  0.364886  0.615396  0.075381  0.368824
2010-01-01 09:00:06  0.933140  0.651378  0.397203  0.788730
2010-01-01 09:00:07  0.316836  0.568099  0.869127  0.436173
2010-01-01 09:00:08  0.802148  0.143767  0.704261  0.704581
2010-01-01 09:00:09  0.218792  0.924868  0.442141  0.909316
Previous API:
You would write a resampling operation that immediately evaluates. If a how parameter was not provided, it would
default to how='mean'.

In [6]: df.resample('2s')
Out[6]:
                            A         B         C         D
2010-01-01 09:00:00  0.485748  0.447351  0.357096  0.793615
2010-01-01 09:00:02  0.820801  0.794317  0.364034  0.531096
2010-01-01 09:00:04  0.433985  0.314582  0.424104  0.625733
2010-01-01 09:00:06  0.624988  0.609738  0.633165  0.612452
2010-01-01 09:00:08  0.510470  0.534317  0.573201  0.806949
New API:
Now, you can write .resample(..) as a 2-stage operation like .groupby(...), which yields a Resampler.
In [82]: r = df.resample('2s')
In [83]: r
Out[83]: DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left, label=left, convention=start, base=0]
You can then use this object to perform operations. These are downsampling operations (going from a higher frequency
to a lower one).
In [84]: r.mean()
Out[84]:
                            A         B         C         D
2010-01-01 09:00:00  0.485748  0.447351  0.357096  0.793615
2010-01-01 09:00:02  0.820801  0.794317  0.364034  0.531096
2010-01-01 09:00:04  0.433985  0.314582  0.424104  0.625733
2010-01-01 09:00:06  0.624988  0.609738  0.633165  0.612452
2010-01-01 09:00:08  0.510470  0.534317  0.573201  0.806949

In [85]: r.sum()
Out[85]:
                            A         B         C         D
2010-01-01 09:00:00  0.971495  0.894701  0.714192  1.587231
2010-01-01 09:00:02  1.641602  1.588635  0.728068  1.062191
2010-01-01 09:00:04  0.867969  0.629165  0.848208  1.251465
2010-01-01 09:00:06  1.249976  1.219477  1.266330  1.224904
2010-01-01 09:00:08  1.020940  1.068634  1.146402  1.613897
Furthermore, resample now supports getitem operations to perform the resample on specific columns.
In [86]: r[['A','C']].mean()
Out[86]:
                            A         C
2010-01-01 09:00:00  0.485748  0.357096
2010-01-01 09:00:02  0.820801  0.364034
2010-01-01 09:00:04  0.433985  0.424104
2010-01-01 09:00:06  0.624988  0.633165
2010-01-01 09:00:08  0.510470  0.573201

and aggregation on specific columns:

In [87]: r.agg({'A': 'mean', 'B': 'sum'})
Out[87]:
                            A         B
2010-01-01 09:00:00  0.485748  0.894701
2010-01-01 09:00:02  0.820801  1.588635
2010-01-01 09:00:04  0.433985  0.629165
2010-01-01 09:00:06  0.624988  1.219477
2010-01-01 09:00:08  0.510470  1.068634
Upsampling operations take you from a lower frequency to a higher frequency. These are now performed with the
Resampler objects with backfill(), ffill(), fillna() and asfreq() methods.
In [89]: s = pd.Series(np.arange(5, dtype='int64'),
   ....:               index=date_range('2010-01-01', periods=5, freq='Q'))
   ....:

In [90]: s
Out[90]:
2010-03-31    0
2010-06-30    1
2010-09-30    2
2010-12-31    3
2011-03-31    4
Freq: Q-DEC, dtype: int64
Previously
In [6]: s.resample('M', fill_method='ffill')
Out[6]:
2010-03-31    0
2010-04-30    0
2010-05-31    0
2010-06-30    1
2010-07-31    1
2010-08-31    1
2010-09-30    2
2010-10-31    2
2010-11-30    2
2010-12-31    3
2011-01-31    3
2011-02-28    3
2011-03-31    4
Freq: M, dtype: int64
New API
In [91]: s.resample('M').ffill()
Out[91]:
2010-03-31    0
2010-04-30    0
2010-05-31    0
2010-06-30    1
2010-07-31    1
2010-08-31    1
2010-09-30    2
2010-10-31    2
2010-11-30    2
2010-12-31    3
2011-01-31    3
2011-02-28    3
2011-03-31    4
Freq: M, dtype: int64
Note: In the new API, you can either downsample OR upsample. The prior implementation would allow you to pass
an aggregator function (like mean) even though you were upsampling, which was a bit confusing.
Warning: This new API for resample includes some internal changes for the prior-to-0.18.0 API, to work with a
deprecation warning in most cases, as the resample operation returns a deferred object. We can intercept operations
and just do what the (pre-0.18.0) API did (with a warning). Here is a typical use case:

In [4]: r = df.resample('2s')

In [6]: r*10
pandas/tseries/resample.py:80: FutureWarning: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)

Out[6]:
                            A         B         C         D
2010-01-01 09:00:00  4.857476  4.473507  3.570960  7.936154
2010-01-01 09:00:02  8.208011  7.943173  3.640340  5.310957
2010-01-01 09:00:04  4.339846  3.145823  4.241039  6.257326
2010-01-01 09:00:06  6.249881  6.097384  6.331650  6.124518
2010-01-01 09:00:08  5.104699  5.343172  5.732009  8.069486
However, getting and assignment operations directly on a Resampler will raise a ValueError:
In [7]: r.iloc[0] = 5
ValueError: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)
There is a situation where the new API can not perform all the operations when using original code. This code was
intending to resample every 2s, take the mean AND then take the min of those results.

In [4]: df.resample('2s').min()
Out[4]:
A    0.433985
B    0.314582
C    0.357096
D    0.531096
dtype: float64

This now works in the new API:

In [92]: df.resample('2s').min()
Out[92]:
                            A         B         C         D
2010-01-01 09:00:00  0.191519  0.272593  0.276464  0.785359
2010-01-01 09:00:02  0.683463  0.712702  0.357817  0.500995
2010-01-01 09:00:04  0.364886  0.013768  0.075381  0.368824
2010-01-01 09:00:06  0.316836  0.568099  0.397203  0.436173
2010-01-01 09:00:08  0.218792  0.143767  0.442141  0.704581

The good news is the return dimensions will differ between the new API and the old API, so this should loudly
raise an exception.

To replicate the original operation:

In [93]: df.resample('2s').mean().min()
Out[93]:
A    0.433985
B    0.314582
C    0.357096
D    0.531096
dtype: float64
Changes to eval
In prior versions, new column assignments in an eval expression resulted in an inplace change to the DataFrame.
(GH9297)
In [94]: df = pd.DataFrame({'a': np.linspace(0, 10, 5), 'b': range(5)})

In [95]: df
Out[95]:
      a  b
0   0.0  0
1   2.5  1
2   5.0  2
3   7.5  3
4  10.0  4

In [96]: df.eval('c = a + b')

In [97]: df
Out[97]:
      a  b     c
0   0.0  0   0.0
1   2.5  1   3.5
2   5.0  2   7.0
3   7.5  3  10.5
4  10.0  4  14.0
In version 0.18.0, a new inplace keyword was added to choose whether the assignment should be done inplace or
return a copy.
In [98]: df.eval('d = c - b', inplace=False)
Out[98]:
      a  b     c     d
0   0.0  0   0.0   0.0
1   2.5  1   3.5   2.5
2   5.0  2   7.0   5.0
3   7.5  3  10.5   7.5
4  10.0  4  14.0  10.0

In [99]: df
Out[99]:
      a  b     c
0   0.0  0   0.0
1   2.5  1   3.5
2   5.0  2   7.0
3   7.5  3  10.5
4  10.0  4  14.0

In [100]: df.eval('d = c - b', inplace=True)
Warning: For backwards compatibility, inplace defaults to True if not specified. This will change in a
future version of pandas. If your code depends on an inplace assignment you should update to explicitly set
inplace=True.
The inplace keyword parameter was also added to the query method.
In [101]: df.query('a > 5')
Out[101]:
      a  b     c     d
3   7.5  3  10.5   7.5
4  10.0  4  14.0  10.0

In [102]: df.query('a > 5', inplace=True)

In [103]: df
Out[103]:
      a  b     c     d
3   7.5  3  10.5   7.5
4  10.0  4  14.0  10.0

Warning: Note that the default value for inplace in a query is False, which is consistent with prior versions.
eval has also been updated to allow multi-line expressions for multiple assignments. These expressions will be
evaluated one at a time in order. Only assignments are valid for multi-line expressions.
In [104]: df
Out[104]:
      a  b     c     d
3   7.5  3  10.5   7.5
4  10.0  4  14.0  10.0

In [105]: df.eval("""
   .....: e = d + a
   .....: f = e - 22
   .....: g = f / 2.0""", inplace=True)
   .....:

In [106]: df
Out[106]:
      a  b     c     d     e    f    g
3   7.5  3  10.5   7.5  15.0 -7.0 -3.5
4  10.0  4  14.0  10.0  20.0 -2.0 -1.0
.memory_usage() now includes values in the index, as does memory_usage in .info() (GH11597)
DataFrame.to_latex() now supports non-ascii encodings (eg utf-8) in Python 2 with the parameter
encoding (GH7061)
pandas.merge() and DataFrame.merge() will show a specific error message when trying to merge
with an object that is not of type DataFrame or a subclass (GH12081)
DataFrame.unstack and Series.unstack now take fill_value keyword to allow direct replacement of missing values when an unstack results in missing values in the resulting DataFrame. As an added
benefit, specifying fill_value will preserve the data type of the original stacked data. (GH9746)
As part of the new API for window functions and resampling, aggregation functions have been clarified, raising
more informative error messages on invalid aggregations. (GH9052). A full set of examples are presented in
groupby.
Statistical functions for NDFrame objects (like sum(), mean(), min()) will now raise if non-numpy-compatible arguments are passed in for **kwargs (GH12301)
.to_latex and .to_html gain a decimal parameter like .to_csv; the default is '.' (GH12031)
More helpful error message when constructing a DataFrame with empty data but with indices (GH8020)
.describe() will now properly handle bool dtype as a categorical (GH6625)
More helpful error message with an invalid .transform with user defined input (GH10165)
Exponentially weighted functions now allow specifying alpha directly (GH10789) and raise ValueError if
parameters violate 0 < alpha <= 1 (GH12492); see the sketch below
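A minimal sketch (data are illustrative):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# alpha is the smoothing factor itself, instead of deriving it
# from com, span, or halflife
print(s.ewm(alpha=0.5).mean())

# Out-of-range values raise: alpha must satisfy 0 < alpha <= 1
try:
    s.ewm(alpha=1.5)
except ValueError as err:
    print(err)
```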
Deprecations
The functions pd.rolling_*, pd.expanding_*, and pd.ewm* are deprecated and replaced by the corresponding method call. Note that the new suggested syntax includes all of the arguments (even if default)
(GH11603)
In [1]: s = pd.Series(range(3))
In [2]: pd.rolling_mean(s,window=2,min_periods=1)
FutureWarning: pd.rolling_mean is deprecated for Series and
will be removed in a future version, replace with
Series.rolling(min_periods=1,window=2,center=False).mean()
Out[2]:
0    0.0
1    0.5
2    1.5
dtype: float64
In [3]: pd.rolling_cov(s, s, window=2)
FutureWarning: pd.rolling_cov is deprecated for Series and
will be removed in a future version, replace with
Series.rolling(window=2).cov(other=<Series>)
Out[3]:
0    NaN
1    0.5
2    0.5
dtype: float64
The freq and how arguments to the .rolling, .expanding, and .ewm (new) functions are deprecated,
and will be removed in a future version. You can simply resample the input prior to creating a window function.
(GH11603).
For example, instead of s.rolling(window=5, freq='D').max() to get the max value on a rolling
5-day window, one could use s.resample('D').mean().rolling(window=5).max(), which first
resamples the data to daily data, then provides a rolling 5-day window.
The pd.tseries.frequencies.get_offset_name function is deprecated. Use the offset's .freqstr property as an alternative (GH11192)
pandas.stats.fama_macbeth routines are deprecated and will be removed in a future version (GH6077)
pandas.stats.ols, pandas.stats.plm and pandas.stats.var routines are deprecated and will
be removed in a future version (GH6077)
Show a FutureWarning rather than a DeprecationWarning on using long-time deprecated syntax in
HDFStore.select, where the where clause is not string-like (GH12027)
The pandas.options.display.mpl_style configuration has been deprecated and will be removed in
a future version of pandas. This functionality is better handled by matplotlib's style sheets (GH11783).
Removal of deprecated float indexers
In GH4892 indexing with floating point numbers on a non-Float64Index was deprecated (in version 0.14.0). In
0.18.0, this deprecation warning is removed and these will now raise a TypeError. (GH12165, GH12333)
In [109]: s = pd.Series([1, 2, 3], index=[4, 5, 6])

In [110]: s
Out[110]:
4    1
5    2
6    3
dtype: int64

In [111]: s2 = pd.Series([1, 2, 3], index=list('abc'))

In [112]: s2
Out[112]:
a    1
b    2
c    3
dtype: int64
Previous Behavior:
# this is label indexing
In [2]: s[5.0]
FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point
Out[2]: 2
# this is positional indexing
In [3]: s.iloc[1.0]
FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point
Out[3]: 2
# this is label indexing
In [4]: s.loc[5.0]
FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point
Out[4]: 2
# .ix would coerce 1.0 to the positional 1, and index
In [5]: s2.ix[1.0] = 10
FutureWarning: scalar indexers for index type Index should be integers and not floating point
In [6]: s2
Out[6]:
a     1
b    10
c     3
dtype: int64
New Behavior:
For iloc, getting & setting via a float scalar will always raise.
In [3]: s.iloc[2.0]
TypeError: cannot do label indexing on <class 'pandas.indexes.numeric.Int64Index'> with these indexers [2.0] of <type 'float'>
Other indexers will coerce to a like integer for both getting and setting. The FutureWarning has been dropped for
.loc, .ix and [].
In [113]: s[5.0]
Out[113]: 2
In [114]: s.loc[5.0]
Out[114]: 2
In [115]: s.ix[5.0]
Out[115]: 2
and setting
In [116]: s_copy = s.copy()

In [117]: s_copy[5.0] = 10

In [118]: s_copy
Out[118]:
4     1
5    10
6     3
dtype: int64

In [119]: s_copy = s.copy()

In [120]: s_copy.loc[5.0] = 10

In [121]: s_copy
Out[121]:
4     1
5    10
6     3
dtype: int64

In [122]: s_copy = s.copy()

In [123]: s_copy.ix[5.0] = 10

In [124]: s_copy
Out[124]:
4     1
5    10
6     3
dtype: int64
Positional setting with .ix and a float indexer will ADD this value to the index, rather than previously setting the
value by position.
In [125]: s2.ix[1.0] = 10

In [126]: s2
Out[126]:
a       1
b       2
c       3
1.0    10
dtype: int64
Note that for floats that are NOT coercible to ints, the label-based bounds will be excluded.
In [129]: s.loc[5.1:6]
Out[129]:
6    3
dtype: int64

In [130]: s.ix[5.1:6]
Out[130]:
6    3
dtype: int64
Removal of prior version deprecations/changes

Removal of the cols keyword in favor of subset in DataFrame.duplicated() and DataFrame.drop_duplicates(), deprecated since 0.14.0
Removal of the read_frame and frame_query (both aliases for pd.read_sql) and write_frame (alias of to_sql) functions in the pd.io.sql namespace, deprecated since 0.14.0 (GH6292).
Removal of the order keyword from .factorize() (GH6930)

Bug Fixes
Bug in Index creation from Timestamp with mixed tz coerces to UTC (GH11488)
Bug in to_numeric where it does not raise if input is more than one dimension (GH11776)
Bug in parsing timezone offset strings with non-zero minutes (GH11708)
Bug in df.plot using incorrect colors for bar plots under matplotlib 1.5+ (GH11614)
Bug in the groupby plot method when using keyword arguments (GH11805).
Bug in DataFrame.duplicated and drop_duplicates causing spurious matches when setting
keep=False (GH11864)
Bug in .loc result with duplicated key may have Index with incorrect dtype (GH11497)
Bug in pd.rolling_median where memory allocation failed even with sufficient memory (GH11696)
Bug in DataFrame.style with spurious zeros (GH12134)
Bug in DataFrame.style with integer columns not starting at 0 (GH12125)
Bug in .style.bar may not be rendered properly in specific browsers (GH11678)
Bug in rich comparison of Timedelta with a numpy.array of Timedelta that caused an infinite recursion (GH11835)
Bug in DataFrame.round dropping column index name (GH11986)
Bug in df.replace while replacing value in a mixed dtype DataFrame (GH11698)
Bug in Index prevents copying name of passed Index, when a new name is not provided (GH11193)
Bug in read_excel failing to read any non-empty sheets when empty sheets exist and sheetname=None
(GH11711)
Bug in read_excel failing to raise NotImplemented error when keywords parse_dates and
date_parser are provided (GH11544)
Bug in read_sql with pymysql connections failing to return chunked data (GH11522)
Bug in .to_csv ignoring formatting parameters decimal, na_rep, float_format for float indexes
(GH11553)
Bug in Int64Index and Float64Index preventing the use of the modulo operator (GH9244)
Bug in MultiIndex.drop for not lexsorted multi-indexes (GH12078)
Bug in DataFrame when masking an empty DataFrame (GH11859)
Bug in .plot potentially modifying the colors input when the number of columns didn't match the number
of series provided (GH12039).
Bug in Series.plot failing when index has a CustomBusinessDay frequency (GH7222).
Bug in .to_sql for datetime.time values with sqlite fallback (GH8341)
Bug in read_excel failing to read data with one column when squeeze=True (GH12157)
Bug in read_excel failing to read one empty column (GH12292, GH9002)
Bug in .groupby where a KeyError was not raised for a wrong column if there was only one row in the
dataframe (GH11741)
Bug in .read_csv with dtype specified on empty data producing an error (GH12048)
Bug in .read_csv where strings like 2E are treated as valid floats (GH12237)
Bug in building pandas with debugging symbols (GH12123)
Bug in .loc setitem indexer preventing the use of a TZ-aware DatetimeIndex (GH12050)
Bug in .style indexes and multi-indexes not appearing (GH11655)
Bug in to_msgpack and from_msgpack which did not correctly serialize or deserialize NaT (GH12307).
Bug in .skew and .kurt due to roundoff error for highly similar values (GH11974)
Bug in Timestamp constructor where microsecond resolution was lost if HHMMSS were not separated with
: (GH10041)
Bug in buffer_rd_bytes src->buffer could be freed more than once if reading failed, causing a segfault
(GH12098)
Bug in crosstab where arguments with non-overlapping indexes would return a KeyError (GH10291)
Bug in DataFrame.apply in which reduction was not being prevented for cases in which dtype was not a
numpy dtype (GH12244)
Bug when initializing categorical series with a scalar value. (GH12336)
Bug when specifying a UTC DatetimeIndex by setting utc=True in .to_datetime (GH11934)
Bug when increasing the buffer size of CSV reader in read_csv (GH12494)
Bug when setting columns of a DataFrame with duplicate column names (GH12344)
1.3.2 Enhancements
DatetimeIndex now supports conversion to strings with astype(str) (GH10442)
Support for compression (gzip/bz2) in pandas.DataFrame.to_csv() (GH7615)
pd.read_* functions can now also accept pathlib.Path, or py._path.local.LocalPath objects for the filepath_or_buffer argument. (GH11033)
The DataFrame and Series functions .to_csv(), .to_html() and .to_latex() can now handle paths beginning with tildes (e.g.
~/Documents/) (GH11438)
DataFrame now uses the fields of a namedtuple as columns, if columns are not supplied (GH11181)
DataFrame.itertuples() now returns namedtuple objects, when possible. (GH11269, GH11625)
Added axvlines_kwds to parallel coordinates plot (GH10709)
Option to .info() and .memory_usage() to provide for deep introspection of memory consumption. Note
that this can be expensive to compute and therefore is an optional parameter. (GH11595)
In [4]: df = DataFrame({'A' : ['foo']*1000})
In [5]: df['B'] = df['A'].astype('category')
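A runnable sketch of the deep introspection on a frame like the one above (byte counts vary by platform):

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo'] * 1000})
df['B'] = df['A'].astype('category')

# Default (shallow) accounting counts only the array overhead
# of the object column
print(df.memory_usage())

# deep=True also measures the string objects themselves,
# which is more accurate but more expensive to compute
print(df.memory_usage(deep=True))

df.info(memory_usage='deep')
```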
Series of type category now make .str.<...> and .dt.<...> accessor methods / properties available,
if the categories are of that type. (GH10661)
In [9]: s = pd.Series(list('aabb')).astype('category')

In [10]: s
Out[10]:
0    a
1    a
2    b
3    b
dtype: category
Categories (2, object): [a, b]

In [11]: s.str.contains("a")
Out[11]:
0     True
1     True
2    False
3    False
dtype: bool
In [12]: date = pd.Series(pd.date_range('1/1/2015', periods=5)).astype('category')

In [13]: date
Out[13]:
0    2015-01-01
1    2015-01-02
2    2015-01-03
3    2015-01-04
4    2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]
In [14]: date.dt.day
Out[14]:
0    1
1    2
2    3
3    4
4    5
dtype: int64
pivot_table now has a margins_name argument so you can use something other than the default of All
(GH3335)
Implement export of datetime64[ns, tz] dtypes with a fixed HDF5 store (GH11411)
Pretty printing sets (e.g. in DataFrame cells) now uses set literal syntax ({x, y}) instead of Legacy Python
syntax (set([x, y])) (GH11215)
Improve the error message in pandas.io.gbq.to_gbq() when a streaming insert fails (GH11285) and
when the DataFrame does not match the schema of the destination table (GH11359)
Bug in inference of numpy scalars and preserving dtype when setting columns (GH11638)
Bug in to_sql using unicode column names giving UnicodeEncodeError (GH11431).
Fix regression in setting of xticks in plot (GH11529).
Bug in holiday.dates where observance rules could not be applied to holiday and doc enhancement
(GH11477, GH11533)
Fix plotting issues when having plain Axes instances instead of SubplotAxes (GH11520, GH11556).
Bug in DataFrame.to_latex() produces an extra rule when header=False (GH7124)
Bug in df.groupby(...).apply(func) when a func returns a Series containing a new datetimelike
column (GH11324)
Bug in pandas.json when file to load is big (GH11344)
Bugs in to_excel with duplicate columns (GH11007, GH10982, GH10970)
Fixed a bug that prevented the construction of an empty series of dtype datetime64[ns, tz] (GH11245).
Bug in read_excel with multi-index containing integers (GH11317)
Bug in to_excel with openpyxl 2.2+ and merging (GH11408)
Bug in DataFrame.to_dict() produces a np.datetime64 object instead of Timestamp when only
datetime is present in data (GH11327)
Bug in DataFrame.corr() raising an exception when computing a Kendall correlation for DataFrames with
boolean and non-boolean columns (GH11560)
Bug in the link-time error caused by C inline functions on FreeBSD 10+ (with clang) (GH10510)
Bug in DataFrame.to_csv in passing through arguments for formatting MultiIndexes, including
date_format (GH7791)
Bug in DataFrame.join() with how=right producing a TypeError (GH11519)
Bug in Series.quantile with empty list results has Index with object dtype (GH11588)
Bug in pd.merge results in empty Int64Index rather than Index(dtype=object) when the merge
result is empty (GH11588)
Bug in Categorical.remove_unused_categories when having NaN values (GH11599)
Bug in DataFrame.to_sparse() loses column names for MultiIndexes (GH11600)
Bug in DataFrame.round() with non-unique column index producing a Fatal Python error (GH11611)
Bug in DataFrame.round() with decimals being a non-unique indexed Series producing extra columns
(GH11618)
Warning: The pandas.io.data package is deprecated and will be replaced by the pandas-datareader package. This will allow the data modules to be updated independently of your pandas installation. The API for
pandas-datareader v0.1.1 is exactly the same as in pandas v0.17.0 (GH8961, GH10861).
After installing pandas-datareader, you can easily change your imports:
from pandas.io import data, wb
becomes
from pandas_datareader import data, wb
Highlights include:
Release the Global Interpreter Lock (GIL) on some cython operations, see here
Plotting methods are now available as attributes of the .plot accessor, see here
The sorting API has been revamped to remove some long-time inconsistencies, see here
Support for a datetime64[ns] with timezones as a first-class dtype, see here
The default for to_datetime will now be to raise when presented with unparseable formats, previously
this would return the original input. Also, date parse functions now return consistent results. See here
The default for dropna in HDFStore has changed to False, to store by default all rows even if they are all
NaN, see here
Datetime accessor (dt) now supports Series.dt.strftime to generate formatted strings for datetime-likes, and Series.dt.total_seconds to compute the duration of each timedelta in seconds. See here
Period and PeriodIndex can handle multiplied freq like 3D, which corresponds to a span of 3 days. See here
Development installed versions of pandas will now have PEP440 compliant version strings (GH9518)
Development support for benchmarking with the Air Speed Velocity library (GH8361)
Support for reading SAS xport files, see here
Documentation comparing SAS to pandas, see here
Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see here
Display format with plain text can optionally align with Unicode East Asian Width, see here
Compatibility with Python 3.5 (GH11097)
Compatibility with matplotlib 1.5.0 (GH11111)
Check the API Changes and deprecations before updating.
In [3]: df.dtypes
Out[3]:
A                datetime64[ns]
B    datetime64[ns, US/Eastern]
C           datetime64[ns, CET]
dtype: object

In [4]: df.B
Out[4]:
0   2013-01-01 00:00:00-05:00
1   2013-01-02 00:00:00-05:00
2   2013-01-03 00:00:00-05:00
Name: B, dtype: datetime64[ns, US/Eastern]

In [5]: df.B.dt.tz_localize(None)
Out[5]:
0   2013-01-01
1   2013-01-02
2   2013-01-03
Name: B, dtype: datetime64[ns]
This uses a new-dtype representation as well, which is very similar in look-and-feel to its numpy cousin
datetime64[ns]
In [6]: df['B'].dtype
Out[6]: datetime64[ns, US/Eastern]
In [7]: type(df['B'].dtype)
Out[7]: pandas.types.dtypes.DatetimeTZDtype
Note: There is a slightly different string repr for the underlying DatetimeIndex as a result of the dtype changes,
but functionally these are the same.
Previous Behavior:
In [1]: pd.date_range('20130101',periods=3,tz='US/Eastern')
Out[1]: DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00',
'2013-01-03 00:00:00-05:00'],
dtype='datetime64[ns]', freq='D', tz='US/Eastern')
In [2]: pd.date_range('20130101',periods=3,tz='US/Eastern').dtype
Out[2]: dtype('<M8[ns]')
New Behavior:
In [8]: pd.date_range('20130101',periods=3,tz='US/Eastern')
Out[8]:
DatetimeIndex(['2013-01-01 00:00:00-05:00', '2013-01-02 00:00:00-05:00',
'2013-01-03 00:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='D')
In [9]: pd.date_range('20130101',periods=3,tz='US/Eastern').dtype
Out[9]: datetime64[ns, US/Eastern]
Releasing of the GIL could benefit an application that uses threads for user interactions (e.g. QT), or performing
multi-threaded computations. A nice example of a library that can handle these types of computation-in-parallel is the
dask library.
Plot submethods
The Series and DataFrame .plot() method allows for customizing plot types by supplying the kind keyword
arguments. Unfortunately, many of these kinds of plots use different required and optional keyword arguments, which
makes it difficult to discover what any given plot kind uses out of the dozens of possible arguments.
To alleviate this issue, we have added a new, optional plotting interface, which exposes each kind of plot as a
method of the .plot attribute. Instead of writing series.plot(kind=<kind>, ...), you can now also
use series.plot.<kind>(...):
In [10]: df = pd.DataFrame(np.random.rand(10, 2), columns=['a', 'b'])
In [11]: df.plot.bar()
As a result of this change, these methods are now all discoverable via tab-completion:
In [12]: df.plot.<TAB>
df.plot.area     df.plot.barh     df.plot.density  df.plot.hist     df.plot.line     df.plot.scatter
df.plot.bar      df.plot.box      df.plot.hexbin   df.plot.kde      df.plot.pie
Each method signature only includes relevant arguments. Currently, these are limited to required arguments, but in the
future these will include optional arguments, as well. For an overview, see the new Plotting API documentation.
Additional methods for dt accessor
strftime
We are now supporting a Series.dt.strftime method for datetime-likes to generate a formatted string
(GH10110). Examples:
# DatetimeIndex
In [13]: s = pd.Series(pd.date_range('20130101', periods=4))
In [14]: s
Out[14]:
0   2013-01-01
1   2013-01-02
2   2013-01-03
3   2013-01-04
dtype: datetime64[ns]

In [15]: s.dt.strftime('%Y/%m/%d')
Out[15]:
0    2013/01/01
1    2013/01/02
2    2013/01/03
3    2013/01/04
dtype: object
# PeriodIndex
In [16]: s = pd.Series(pd.period_range('20130101', periods=4))

In [17]: s
Out[17]:
0    2013-01-01
1    2013-01-02
2    2013-01-03
3    2013-01-04
dtype: object

In [18]: s.dt.strftime('%Y/%m/%d')
Out[18]:
0    2013/01/01
1    2013/01/02
2    2013/01/03
3    2013/01/04
dtype: object
The string format follows the Python standard library; details can be found here
total_seconds
pd.Series of type timedelta64 has new method .dt.total_seconds() returning the duration of the
timedelta in seconds (GH10817)
# TimedeltaIndex
In [19]: s = pd.Series(pd.timedelta_range('1 minutes', periods=4))
In [20]: s
Out[20]:
0   0 days 00:01:00
1   1 days 00:01:00
2   2 days 00:01:00
3   3 days 00:01:00
dtype: timedelta64[ns]

In [21]: s.dt.total_seconds()
Out[21]:
0        60.0
1     86460.0
2    172860.0
3    259260.0
dtype: float64
As mentioned in the highlights, Period and PeriodIndex can now handle a multiplied freq:

In [28]: idx = pd.period_range('2015-08-01', periods=4, freq='2D')

In [29]: idx
Out[29]: PeriodIndex(['2015-08-01', '2015-08-03', '2015-08-05', '2015-08-07'], dtype='int64', freq='2D')

In [30]: idx + 1
Out[30]: PeriodIndex(['2015-08-03', '2015-08-05', '2015-08-07', '2015-08-09'], dtype='int64', freq='2D')
Support for math functions in .eval() has been added. The supported math functions are sin, cos, exp, log, expm1, log1p, sqrt, sinh, cosh, tanh, arcsin, arccos, arctan, arccosh,
arcsinh, arctanh, abs and arctan2.
These functions map to the intrinsics for the NumExpr engine. For the Python engine, they are mapped to NumPy
calls.
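A minimal sketch (the column name is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': np.linspace(0, np.pi, 5)})

# Evaluated by numexpr intrinsics when available;
# engine='python' maps the same names to NumPy calls
print(df.eval('sin(x) + cos(x)'))
print(df.eval('sin(x) + cos(x)', engine='python'))
```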
Changes to Excel with MultiIndex
In version 0.16.2 a DataFrame with MultiIndex columns could not be written to Excel via to_excel. That
functionality has been added (GH10564), along with updating read_excel so that the data can be read back
with no loss of information by specifying which columns/rows make up the MultiIndex in the header and index_col
parameters (GH4679)
See the documentation for more details.
In [31]: df = pd.DataFrame([[1,2,3,4], [5,6,7,8]],
   ....:                   columns=pd.MultiIndex.from_product([['foo','bar'], ['a','b']],
   ....:                                                      names=['col1', 'col2']),
   ....:                   index=pd.MultiIndex.from_product([['j'], ['l', 'k']],
   ....:                                                    names=['i1', 'i2']))
   ....:

In [32]: df
Out[32]:
col1  foo    bar
col2    a  b   a  b
i1 i2
j  l    1  2   3  4
   k    5  6   7  8
In [33]: df.to_excel('test.xlsx')
In [34]: df = pd.read_excel('test.xlsx', header=[0,1], index_col=[0,1])
In [35]: df
Out[35]:
col1  foo    bar
col2    a  b   a  b
i1 i2
j  l    1  2   3  4
   k    5  6   7  8
Previously, it was necessary to specify the has_index_names argument in read_excel if the serialized data
had index names. For version 0.17.0 the output format of to_excel has been changed to make this keyword
unnecessary - the change is shown below.
Old
New
Warning: Excel files saved in version 0.16.2 or prior that had index names will still be able to be read in, but the
has_index_names argument must be specified as True.
InvalidColumnOrder and InvalidPageToken in the gbq module will raise ValueError instead of
IOError.
The generate_bq_schema() function is now deprecated and will be removed in a future version
(GH11121)
The gbq module will now support Python 3 (GH11094).
Display Alignment with Unicode East Asian Width
Warning: Enabling this option will affect the performance for printing of DataFrame and Series (about 2
times slower). Use only when it is actually required.
Some East Asian countries use Unicode characters whose width corresponds to two Latin characters. If a DataFrame or
Series contains these characters, the default output cannot be aligned properly. The following options are added to
enable precise handling for these characters.
display.unicode.east_asian_width: Whether to use the Unicode East Asian Width to calculate the
display text width. (GH2612)
display.unicode.ambiguous_as_wide: Whether to handle Unicode characters belonging to 'Ambiguous'
as Wide. (GH11102)
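A minimal sketch of enabling the options:

```python
import pandas as pd

# Compute display widths from Unicode East Asian Width properties
pd.set_option('display.unicode.east_asian_width', True)

# Optionally count 'Ambiguous'-width characters as wide too
pd.set_option('display.unicode.ambiguous_as_wide', True)
```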
In [36]: df = pd.DataFrame({u'国籍': ['UK', u'日本'], u'名前': ['Alice', u'しのぶ']})

In [37]: df
In [43]: foo = pd.Series([1, 2], name='foo')

In [44]: bar = pd.Series([1, 2])

In [45]: baz = pd.Series([4, 5])

Previous Behavior:

In [1]: pd.concat([foo, bar, baz], 1)
Out[1]:
   0  1  2
0  1  1  4
1  2  2  5

New Behavior:

In [46]: pd.concat([foo, bar, baz], 1)
Out[46]:
   foo  0  1
0    1  1  4
1    2  2  5
Added a DataFrame.round method to round the values to a variable number of decimal places (GH10568).
In [49]: df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'],
   ....:                   index=['first', 'second', 'third'])
   ....:

In [50]: df
Out[50]:
               A         B         C
first   0.342764  0.304121  0.417022
second  0.681301  0.875457  0.510422
third   0.669314  0.585937  0.624904

In [51]: df.round(2)
Out[51]:
           A     B     C
first   0.34  0.30  0.42
second  0.68  0.88  0.51
third   0.67  0.59  0.62

In [52]: df.round({'A': 0, 'C': 2})
Out[52]:
          A         B     C
first   0.0  0.304121  0.42
second  1.0  0.875457  0.51
third   1.0  0.585937  0.62
drop_duplicates and duplicated now accept a keep keyword to target first, last, and all duplicates.
The take_last keyword is deprecated, see here (GH6511, GH8505)
In [53]: s = pd.Series(['A', 'B', 'C', 'A', 'B', 'D'])

In [54]: s.drop_duplicates()
Out[54]:
0    A
1    B
2    C
5    D
dtype: object

In [55]: s.drop_duplicates(keep='last')
Out[55]:
2    C
3    A
4    B
5    D
dtype: object

In [56]: s.drop_duplicates(keep=False)
Out[56]:
2    C
5    D
dtype: object
Reindex now has a tolerance argument that allows for finer control of Limits on filling while reindexing
(GH10411):
In [57]: df = pd.DataFrame({'x': range(5),
   ....:                    't': pd.date_range('2000-01-01', periods=5)})
   ....:

In [58]: df.reindex([0.1, 1.9, 3.5],
   ....:            method='nearest',
   ....:            tolerance=0.2)
   ....:
Out[58]:
             t    x
0.1 2000-01-01  0.0
1.9 2000-01-03  2.0
3.5        NaT  NaN
tolerance is also exposed by the lower level Index.get_indexer and Index.get_loc methods.
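A minimal sketch with get_loc (values are illustrative):

```python
import pandas as pd

idx = pd.Index([0.0, 1.0, 2.0])

# Within tolerance of the nearest label: returns its position
print(idx.get_loc(1.02, method='nearest', tolerance=0.05))  # 1

# Outside tolerance: raises KeyError
try:
    idx.get_loc(1.2, method='nearest', tolerance=0.05)
except KeyError:
    print('no match within tolerance')
```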
Added functionality to use the base argument when resampling a TimeDeltaIndex (GH10530)
DatetimeIndex can be instantiated using strings containing NaT (GH7599)
to_datetime can now accept the yearfirst keyword (GH7599)
pandas.tseries.offsets larger than the Day offset can now be used with a Series for addition/subtraction (GH10699). See the docs for more details.
pd.Timedelta.total_seconds() now returns Timedelta duration to ns precision (previously microsecond precision) (GH10939)
PeriodIndex now supports arithmetic with np.ndarray (GH10638)
Support pickling of Period objects (GH10439)
.as_blocks will now take an optional copy argument to return a copy of the data; the default is to copy (no
change in behavior from prior versions) (GH9607)
regex argument to DataFrame.filter now handles numeric column names instead of raising
ValueError (GH10384).
Enable reading gzip compressed files via URL, either by explicitly setting the compression parameter or by
inferring from the presence of the HTTP Content-Encoding header in the response (GH8685)
Enable writing Excel files in memory using StringIO/BytesIO (GH7074)
Enable serialization of lists and dicts to strings in ExcelWriter (GH8188)
SQL io functions now accept a SQLAlchemy connectable. (GH7877)
pd.read_sql and to_sql can accept database URI as con parameter (GH10214)
read_sql_table will now allow reading from views (GH10750).
Enable writing complex values to HDFStores when using the table format (GH10447)
Enable pd.read_hdf to be used without specifying a key when the HDF file contains a single dataset
(GH10443)
pd.read_stata will now read Stata 118 type files. (GH9882)
msgpack submodule has been updated to 0.4.6 with backward compatibility (GH10581)
DataFrame.to_dict now accepts orient=index keyword argument (GH10844).
DataFrame.apply will return a Series of dicts if the passed function returns a dict and reduce=True
(GH8735).
Allow passing kwargs to the interpolation methods (GH10378).
Improved error message when concatenating an empty iterable of Dataframe objects (GH9157)
pd.read_csv can now read bz2-compressed files incrementally, and the C parser can read bz2-compressed
files from AWS S3 (GH11070, GH11072).
In pd.read_csv, recognize s3n:// and s3a:// URLs as designating S3 file storage (GH11070,
GH11071).
Read CSV files from AWS S3 incrementally, instead of first downloading the entire file. (Full file download still
required for compressed files in Python 2.) (GH11070, GH11073)
pd.read_csv is now able to infer compression type for files read from AWS S3 storage (GH11070,
GH11074).
To sort by values:

Previous                         Replacement
Series.order()                   Series.sort_values()
Series.sort()                    Series.sort_values(inplace=True)
DataFrame.sort(columns=...)      DataFrame.sort_values(by=...)

To sort by the index:

Previous                         Replacement
Series.sort_index()              Series.sort_index()
Series.sortlevel(level=...)      Series.sort_index(level=...)
DataFrame.sort_index()           DataFrame.sort_index()
DataFrame.sortlevel(level=...)   DataFrame.sort_index(level=...)
DataFrame.sort()                 DataFrame.sort_index()

We have also deprecated and changed similar methods in two Series-like classes, Index and Categorical.

Previous                         Replacement
Index.order()                    Index.sort_values()
Categorical.order()              Categorical.sort_values()
The default for pd.to_datetime error handling has changed to errors=raise. In prior versions it was
errors=ignore. Furthermore, the coerce argument has been deprecated in favor of errors=coerce.
This means that invalid parsing will raise rather than return the original input as in previous versions. (GH10636)
Previous Behavior:
In [2]: pd.to_datetime(['2009-07-31', 'asd'])
Out[2]: array(['2009-07-31', 'asd'], dtype=object)
New Behavior:
In [3]: pd.to_datetime(['2009-07-31', 'asd'])
ValueError: Unknown string format
The string parsing of to_datetime, Timestamp and DatetimeIndex has been made consistent. (GH7599)
Prior to v0.17.0, Timestamp and to_datetime may parse a year-only datetime string incorrectly using today's
date; otherwise DatetimeIndex uses the beginning of the year. Timestamp and to_datetime may raise a
ValueError on some types of datetime strings which DatetimeIndex can parse, such as a quarterly string.
Previous Behavior:
In [1]: Timestamp('2012Q2')
Traceback
...
ValueError: Unable to parse 2012Q2
# Results in today's date.
In [2]: Timestamp('2014')
Out [2]: 2014-08-12 00:00:00
Note: If you want to perform calculations based on today's date, use Timestamp.now() and
pandas.tseries.offsets.
In [66]: import pandas.tseries.offsets as offsets
In [67]: Timestamp.now()
Out[67]: Timestamp('2016-05-03 09:46:20.664866')
In [68]: Timestamp.now() + offsets.DateOffset(years=1)
Out[68]: Timestamp('2017-05-03 09:46:20.666917')
New Behavior:
In [8]: pd.Index([1, 2, 3]) == pd.Index([1, 4, 5])
Out[8]: array([ True, False, False], dtype=bool)
Note that this is different from the numpy behavior where a comparison can be broadcast:
In [69]: np.array([1, 2, 3]) == np.array([1])
Out[69]: array([ True, False, False], dtype=bool)
Boolean comparisons of a Series vs None will now be equivalent to comparing with np.nan, rather than raise TypeError.

Previous Behavior:

In [5]: s == None
TypeError: Could not compare <type 'NoneType'> type with Series

New Behavior:

In [73]: s = pd.Series(range(3))

In [74]: s == None
Out[74]:
0    False
1    False
2    False
dtype: bool
Warning: You generally will want to use isnull/notnull for these types of comparisons, as
isnull/notnull tells you which elements are null. One has to be mindful that NaNs don't compare equal,
but Nones do. Note that pandas/numpy uses the fact that np.nan != np.nan, and treats None like np.nan.

In [76]: None == None
Out[76]: True

In [77]: np.nan == np.nan
Out[77]: False
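A minimal sketch of the recommended pattern:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, None])  # None is stored as NaN here

# isnull/notnull reliably flag missing values, whether they
# came from np.nan or None
print(s.isnull())
print(s.notnull())
```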
The default for dropna in HDFStore has changed to False: rows that are all NaN are now stored by default.

New Behavior:
In [80]: df_with_missing.to_hdf('file.h5',
   ....:                        'df_with_missing',
   ....:                        format='table',
   ....:                        mode='w')
   ....:

In [81]: pd.read_hdf('file.h5', 'df_with_missing')
Out[81]:
   col1  col2
0   0.0   1.0
1   NaN   NaN
2   2.0   NaN
If interpreting precision as significant figures, this did work for scientific notation, but the same interpretation did not
work for values with standard formatting. It was also out of step with how numpy handles formatting.
Going forward the value of display.precision will directly control the number of places after the decimal, for
regular formatting as well as scientific notation, similar to how numpy's precision print option works.
In [82]: pd.set_option('display.precision', 2)
In [83]: pd.DataFrame({'x': [123.456789]})
Out[83]:
x
0 123.46
To preserve output behavior with prior versions the default value of display.precision has been reduced to 6
from 7.
Changes to Categorical.unique
Categorical.unique now returns new Categoricals with categories and codes that are unique, rather
than returning np.array (GH10508)
unordered category: values and categories are sorted by appearance order.
ordered category: values are sorted by appearance order, categories keep existing order.
In [84]: cat = pd.Categorical(['C', 'A', 'B', 'C'],
   ....:                      categories=['A', 'B', 'C'],
   ....:                      ordered=True)
   ....:

In [85]: cat
Out[85]:
[C, A, B, C]
Categories (3, object): [A < B < C]

In [86]: cat.unique()
Out[86]:
[C, A, B]
Categories (3, object): [A < B < C]

In [87]: cat = pd.Categorical(['C', 'A', 'B', 'C'],
   ....:                      categories=['A', 'B', 'C'])
   ....:

In [88]: cat
Out[88]:
[C, A, B, C]
Categories (3, object): [A, B, C]

In [89]: cat.unique()
Out[89]:
[C, A, B]
Categories (3, object): [C, A, B]
NaT's methods now either raise ValueError, or return np.nan or NaT:

Behavior                       Methods
Return np.nan                  weekday, isoweekday
Return NaT                     date, now, replace, to_datetime, today
Return np.datetime64('NaT')    to_datetime64 (unchanged)
Raise ValueError               All other public methods (names not beginning with underscores)
Deprecations
For Series the following indexing functions are deprecated (GH10177).

Deprecated Function    Replacement
.irow(i)               .iloc[i] or .iat[i]
.iget(i)               .iloc[i] or .iat[i]
.iget_value(i)         .iloc[i] or .iat[i]

For DataFrame the following indexing functions are deprecated (GH10177).

Deprecated Function    Replacement
.irow(i)               .iloc[i]
.iget_value(i, j)      .iloc[i, j] or .iat[i, j]
.icol(j)               .iloc[:, j]

Note: These indexing functions have been deprecated in the documentation since 0.11.0.
Categorical.name was deprecated to make Categorical more numpy.ndarray-like. Use
Series(cat, name="whatever") instead (GH10482).
Setting missing values (NaN) in a Categorical's categories will issue a warning (GH10748). You can
still have missing values in the values.
drop_duplicates and duplicated's take_last keyword was deprecated in favor of keep. (GH6511,
GH8505)
Series.nsmallest and nlargest's take_last keyword was deprecated in favor of keep. (GH10792)
DataFrame.combineAdd and DataFrame.combineMult are deprecated. They can easily be
replaced by using the add and mul methods: DataFrame.add(other, fill_value=0) and
DataFrame.mul(other, fill_value=1.) (GH10735); see the sketch after this list.
TimeSeries deprecated in favor of Series (note that this has been an alias since 0.13.0), (GH10890)
SparsePanel deprecated and will be removed in a future version (GH11157).
Series.is_time_series deprecated in favor of Series.index.is_all_dates (GH11135)
Legacy offsets (like 'A@JAN') listed here are deprecated (note that these have been aliases since 0.8.0)
(GH10878)
WidePanel deprecated in favor of Panel, LongPanel in favor of DataFrame (note these have been
aliases since < 0.11.0), (GH10892)
DataFrame.convert_objects has been deprecated in favor of the type-specific functions
pd.to_datetime, pd.to_timedelta and pd.to_numeric (new in 0.17.0) (GH11133).
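A minimal sketch of the combineAdd replacement mentioned above (the frames are illustrative):

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1.0, 2.0]}, index=[0, 1])
df2 = pd.DataFrame({'a': [10.0, 20.0]}, index=[1, 2])

# fill_value=0 treats labels missing on either side as 0,
# matching the old combineAdd behavior
print(df1.add(df2, fill_value=0))
```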
In [90]: np.random.seed(1234)
In [91]: df = DataFrame(np.random.randn(5,2), columns=list('AB'), index=date_range('20130101', periods=5))
In [92]: df
Out[92]:
                   A         B
2013-01-01  0.471435 -1.190976
2013-01-02  1.432707 -0.312652
2013-01-03 -0.720589  0.887163
2013-01-04  0.859588 -0.636524
2013-01-05  0.015696 -2.242685
Previously
In [3]: df + df.A
FutureWarning: TimeSeries broadcasting along DataFrame index by default is deprecated.
Please use DataFrame.<op> to explicitly broadcast arithmetic operations along the index
Out[3]:
                   A         B
2013-01-01  0.942870 -0.719541
2013-01-02  2.865414  1.120055
2013-01-03 -1.441177  0.166574
2013-01-04  1.719177  0.223065
2013-01-05  0.031393 -2.226989
Current
In [93]: df.add(df.A,axis='index')
Out[93]:
                   A         B
2013-01-01  0.942870 -0.719541
2013-01-02  2.865414  1.120055
2013-01-03 -1.441177  0.166574
2013-01-04  1.719177  0.223065
2013-01-05  0.031393 -2.226989
Bug in Series.from_csv with the header kwarg not setting the Series.name or the Series.index.name
Bug in groupby.var which caused variance to be inaccurate for small float values (GH10448)
Bug in Series.plot(kind='hist') Y Label not informative (GH10485)
Bug in read_csv when using a converter which generates a uint8 type (GH9266)
Bug causes memory leak in time-series line and area plot (GH9003)
Bug when setting a Panel sliced along the major or minor axes when the right-hand side is a DataFrame
(GH11014)
Bug that returns None and does not raise NotImplementedError when operator functions (e.g. .add) of
Panel are not implemented (GH7692)
Bug in line and kde plot cannot accept multiple colors when subplots=True (GH9894)
Bug in DataFrame.plot raises ValueError when color name is specified by multiple characters
(GH10387)
Bug in left and right align of Series with MultiIndex may be inverted (GH10665)
Bug in left and right join with a MultiIndex may be inverted (GH10741)
Bug in read_stata when reading a file with a different order set in columns (GH10757)
Bug in Categorical may not be represented properly when a category contains tz or Period (GH10713)
Bug in Categorical.__iter__ may not return correct datetime and Period (GH10713)
Bug in indexing with a PeriodIndex on an object with a PeriodIndex (GH4125)
Bug in read_csv with engine=c: EOF preceded by a comment, blank line, etc. was not handled correctly
(GH10728, GH10548)
Reading famafrench data via DataReader results in HTTP 404 error because the website url changed
(GH10591).
Bug in read_msgpack where DataFrame to decode has duplicate column names (GH9618)
Bug in io.common.get_filepath_or_buffer which caused reading of valid S3 files to fail if the
bucket also contained keys for which the user does not have read permission (GH10604)
Bug in vectorised setting of timestamp columns with python datetime.date and numpy datetime64
(GH10408, GH10412)
Bug in Index.take may add unnecessary freq attribute (GH10791)
Bug in merge with empty DataFrame may raise IndexError (GH10824)
Bug in to_latex raising an unexpected keyword argument error for some documented arguments (GH10888)
Bug in indexing of large DataFrame where IndexError is uncaught (GH10645 and GH10692)
Bug in read_csv when using the nrows or chunksize parameters if file contains only a header line
(GH9535)
Bug in serialization of category types in HDF5 in presence of alternate encodings. (GH10366)
Bug in pd.DataFrame when constructing an empty DataFrame with a string dtype (GH9428)
Bug in pd.DataFrame.diff when DataFrame is not consolidated (GH10907)
Bug in pd.unique for arrays with the datetime64 or timedelta64 dtype that meant an array with object
dtype was returned instead the original dtype (GH9431)
Bug in Timedelta raising error when slicing from 0s (GH10583)
Bug in DatetimeIndex.take and TimedeltaIndex.take may not raise IndexError against invalid
index (GH10295)
Bug in Series([np.nan]).astype('M8[ms]'), which now returns Series([NaT])
(GH10747)
The logic flows from inside out, and function names are separated from their keyword arguments. This can be rewritten
as
(df.pipe(h)
.pipe(g, arg1=1)
.pipe(f, arg2=2, arg3=3)
)
Now both the code and the logic flow from top to bottom. Keyword arguments are next to their functions. Overall the
code is much more readable.
In the example above, the functions f, g, and h each expected the DataFrame as the first positional argument. When
the function you wish to apply takes its data anywhere other than the first argument, pass a tuple of (function,
keyword) indicating where the DataFrame should flow. For example:
In [1]: import statsmodels.formula.api as sm
In [2]: bb = pd.read_csv('data/baseball.csv', index_col='id')
# sm.poisson takes (formula, data)
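# The rest of the chain, sketched: pipe the DataFrame into sm.poisson's
# 'data' keyword via a (function, keyword) tuple (the formula and column
# names are illustrative, and numpy is assumed imported as np)
(bb.query('h > 0')
   .assign(ln_h=lambda df: np.log(df.h))
   .pipe((sm.poisson, 'data'), 'hr ~ ln_h + year + g + C(lg)')
   .fit()
   .summary())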
The pipe method is inspired by unix pipes, which stream text through processes. More recently dplyr and magrittr
have introduced the popular (%>%) pipe operator for R.
See the documentation for more. (GH10129)
Other Enhancements
Added rsplit to Index/Series StringMethods (GH10303)
Removed the hard-coded size limits on the DataFrame HTML representation in the IPython notebook, and
leave this to IPython itself (only for IPython v3.0 or greater). This eliminates the duplicate scroll bars that
appeared in the notebook with large frames (GH10231).
Note that the notebook has a toggle output scrolling feature to limit the display of very large frames
(by clicking left of the output). You can also configure the way DataFrames are displayed using the pandas
options, see here.
axis parameter of DataFrame.quantile now accepts also index and column. (GH9543)
Bug in read_csv causing index name not to be set on an empty DataFrame (GH10184)
Bug in SparseSeries.abs resets name (GH10241)
Bug in TimedeltaIndex slicing may reset freq (GH10292)
Bug in GroupBy.get_group raises ValueError when group key contains NaT (GH6992)
Bug in SparseSeries constructor ignores input data name (GH10258)
Bug in Categorical.remove_categories causing a ValueError when removing the NaN category
if underlying dtype is floating-point (GH10156)
Bug where infer_freq infers timerule (WOM-5XXX) unsupported by to_offset (GH9425)
Bug in DataFrame.to_hdf() where table format would raise a seemingly unrelated error for invalid (nonstring) column names. This is now explicitly forbidden. (GH9057)
Bug to handle masking empty DataFrame (GH10126).
Bug where MySQL interface could not handle numeric table/column names (GH10255)
Bug in read_csv with a date_parser that returned a datetime64 array of other time resolution than
[ns] (GH10245)
Bug in Panel.apply when the result has ndim=0 (GH10332)
Bug in read_hdf where auto_close could not be passed (GH9327).
Bug in read_hdf where open stores could not be used (GH10330).
Bug in adding empty DataFrames, now results in a DataFrame that .equals an empty
DataFrame (GH10181).
Bug in to_hdf and HDFStore which did not check that complib choices were valid (GH4582, GH8874).
1.6.1 Enhancements
CategoricalIndex
We introduce a CategoricalIndex, a new type of index object that is useful for supporting indexing with duplicates. This is a container around a Categorical (introduced in v0.15.0) and allows efficient indexing and storage
of an index with a large number of duplicated elements. Prior to 0.16.1, setting the index of a DataFrame/Series
with a category dtype would convert this to regular object-based Index.
In [1]: df = DataFrame({'A' : np.arange(6),
   ...:                 'B' : Series(list('aabbca')).astype('category',
   ...:                                                     categories=list('cab'))
   ...:                })
In [2]: df
Out[2]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [3]: df.dtypes
Out[3]:
A       int64
B    category
dtype: object
In [4]: df.B.cat.categories
Out[4]: Index([u'c', u'a', u'b'], dtype='object')
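The step that constructs df2 is not shown in this excerpt; judging from the output below, it presumably just sets the categorical column as the index:

# presumed intermediate step: use the categorical column B as the index
df2 = df.set_index('B')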
In [6]: df2.index
Out[6]: CategoricalIndex([u'a', u'a', u'b', u'b', u'c', u'a'], categories=[u'c', u'a', u'b'], ordered
indexing with __getitem__/.iloc/.loc/.ix works similarly to an Index with duplicates. The indexers
MUST be in the category or the operation will raise.
In [7]: df2.loc['a']
Out[7]:
A
B
a 0
a 1
a 5
In [8]: df2.loc['a'].index
Out[8]: CategoricalIndex([u'a', u'a', u'a'], categories=[u'c', u'a', u'b'], ordered=False, name=u'B',
groupby operations on the index will preserve the index nature as well
In [10]: df2.groupby(level=0).sum()
Out[10]:
A
B
c 4
a 6
b 5
In [11]: df2.groupby(level=0).sum().index
Out[11]: CategoricalIndex([u'c', u'a', u'b'], categories=[u'c', u'a', u'b'], ordered=False, name=u'B'
reindexing operations, will return a resulting index based on the type of the passed indexer, meaning that passing a
list will return a plain-old-Index; indexing with a Categorical will return a CategoricalIndex, indexed
according to the categories of the PASSED Categorical dtype. This allows one to arbitrarily index these even with
values NOT in the categories, similarly to how you can reindex ANY pandas index.
In [12]: df2.reindex(['a','e'])
Out[12]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [13]: df2.reindex(['a','e']).index
Out[13]: Index([u'a', u'a', u'a', u'e'], dtype='object', name=u'B')
In [14]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
Out[14]:
A
B
a 0.0
a 1.0
a 5.0
e NaN
In [15]: df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
Out[15]: CategoricalIndex([u'a', u'a', u'a', u'e'], categories=[u'a', u'b', u'c', u'd', u'e'], ordere
When applied to a DataFrame, one may pass the name of a column to specify sampling weights when sampling from
rows.
In [24]: df = DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
In [25]: df.sample(n=3, weights='weight_column')
Out[25]:
   col1  weight_column
0     9            0.5
2     7            0.1
1     8            0.4
One special case for the .str accessor on Index is that if a string method returns bool, the .str accessor
will return a np.array instead of a boolean Index (GH8875). This enables the following expression to work
naturally:
In [28]: idx = Index(['a1', 'a2', 'b1', 'b2'])
In [29]: s = Series(range(4), index=idx)
In [30]: s
Out[30]:
a1    0
a2    1
b1    2
b2    3
dtype: int64
In [31]: idx.str.startswith('a')
Out[31]: array([ True, True, False, False], dtype=bool)
In [32]: s[s.index.str.startswith('a')]
Out[32]:
a1    0
a2    1
dtype: int64
The following new methods are accessible via the .str accessor to apply the function to each value. (GH9766,
GH9773, GH10031, GH10045, GH10052)
Methods
capitalize()    swapcase()    normalize()    partition()
rpartition()    index()       rindex()       translate()
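For instance, a small sketch of a couple of these methods, assuming Series is imported as in the examples above (values are illustrative):

s = Series(['lower', 'CAPITALS', 'swapped Case'])

# capitalize: first character upper-cased, the rest lower-cased
s.str.capitalize()   # 'Lower', 'Capitals', 'Swapped case'

# swapcase: invert the case of every character
s.str.swapcase()     # 'LOWER', 'capitals', 'SWAPPED cASE'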
split now takes expand keyword to specify whether to expand dimensionality. return_type is deprecated. (GH9847)
In [33]: s = Series(['a,b', 'a,c', 'b,c'])
# return Series
In [34]: s.str.split(',')
Out[34]:
0    [a, b]
1    [a, c]
2    [b, c]
dtype: object
# return DataFrame
In [35]: s.str.split(',', expand=True)
Out[35]:
0 1
0 a b
1 a c
2 b c
In [36]: idx = Index(['a,b', 'a,c', 'b,c'])
# return Index
In [37]: idx.str.split(',')
Out[37]: Index([[u'a', u'b'], [u'a', u'c'], [u'b', u'c']], dtype='object')
# return MultiIndex
In [38]: idx.str.split(',', expand=True)
Out[38]:
MultiIndex(levels=[[u'a', u'b'], [u'b', u'c']],
labels=[[0, 0, 1], [0, 1, 1]])
DataFrame.diff now takes an axis parameter that determines the direction of differencing (GH9727)
Allow clip, clip_lower, and clip_upper to accept array-like arguments as thresholds (This is a regression from 0.11.0). These methods now have an axis parameter which determines how the Series or DataFrame
will be aligned with the threshold(s). (GH6966)
DataFrame.mask() and Series.mask() now support same keywords as where (GH8801)
drop function can now accept errors keyword to suppress ValueError raised when any of label does not
exist in the target data. (GH6736)
In [43]: df = DataFrame(np.random.randn(3, 3), columns=['A', 'B', 'C'])
In [44]: df.drop(['A', 'X'], axis=1, errors='ignore')
Out[44]:
          B         C
0  0.991946  0.953324
1 -0.334077  0.002118
2  0.289092  1.321158
Add support for separating years and quarters using dashes, for example 2014-Q1. (GH9688)
Allow conversion of values with dtype datetime64 or timedelta64 to strings using astype(str)
(GH9757)
get_dummies function now accepts sparse keyword. If set to True, the return DataFrame is sparse, e.g.
SparseDataFrame. (GH8823)
Period now accepts datetime64 as value input. (GH9054)
Allow timedelta string conversion when leading zero is missing from time definition, ie 0:00:00 vs 00:00:00.
(GH9570)
Allow Panel.shift with axis=items (GH9890)
Trying to write an excel file now raises NotImplementedError if the DataFrame has a MultiIndex
instead of writing a broken Excel file. (GH9794)
Allow Categorical.add_categories to accept Series or np.array. (GH9927)
Add/delete str/dt/cat accessors dynamically from __dir__. (GH9910)
Add normalize as a dt accessor method. (GH10047)
DataFrame and Series now have _constructor_expanddim property as overridable constructor for
one higher dimensionality data. This should be used only when it is really needed, see here
pd.lib.infer_dtype now returns bytes in Python 3 where appropriate. (GH10032)
By default, read_csv and read_table will now try to infer the compression type based on the file extension. Set compression=None to restore the previous behavior (no decompression). (GH9770)
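A small sketch of this behaviour, assuming pandas is imported as pd (the file path is hypothetical):

import pandas as pd

# compression is inferred from the .gz extension
df = pd.read_csv('data/records.csv.gz')

# opt out of the inference and read the file with no decompression
raw = pd.read_csv('data/records.csv', compression=None)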
Deprecations
Series.str.split's return_type keyword was removed in favor of expand (GH9847)
In [3]: pd.Index(range(104),name='foo')
Out[3]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
In [4]: pd.date_range('20130101',periods=4,name='foo',tz='US/Eastern')
Out[4]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00-05:00, ..., 2013-01-04 00:00:00-05:00]
Length: 4, Freq: D, Timezone: US/Eastern
In [5]: pd.date_range('20130101',periods=104,name='foo',tz='US/Eastern')
Out[5]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00-05:00, ..., 2013-04-14 00:00:00-04:00]
Length: 104, Freq: D, Timezone: US/Eastern
New Behavior
In [45]: pd.set_option('display.width', 80)
In [46]: pd.Index(range(4), name='foo')
Out[46]: Int64Index([0, 1, 2, 3], dtype='int64', name=u'foo')
In [47]: pd.Index(range(30), name='foo')
Out[47]:
Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
dtype='int64', name=u'foo')
In [48]: pd.Index(range(104), name='foo')
Out[48]:
Int64Index([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,
            ...
             94,  95,  96,  97,  98,  99, 100, 101, 102, 103],
           dtype='int64', name=u'foo', length=104)
In [49]: pd.CategoricalIndex(['a','bb','ccc','dddd'], ordered=True, name='foobar')
Out[49]: CategoricalIndex([u'a', u'bb', u'ccc', u'dddd'], categories=[u'a', u'bb', u'ccc', u'dddd'],
Bug in index equality comparisons using == failing on Index/MultiIndex type incompatibility (GH9785)
Bug in which SparseDataFrame could not take nan as a column name (GH8822)
Bug in to_msgpack and read_msgpack zlib and blosc compression support (GH9783)
Bug GroupBy.size doesn't attach index name properly if grouped by TimeGrouper (GH9925)
Bug causing an exception in slice assignments because length_of_indexer returns wrong results
(GH9995)
Bug in csv parser causing lines with initial whitespace plus one non-space character to be skipped. (GH9710)
Bug in C csv parser causing spurious NaNs when data started with newline followed by whitespace. (GH10022)
Bug causing elements with a null group to spill into the final group when grouping by a Categorical
(GH9603)
Bug where .iloc and .loc behavior is not consistent on empty dataframes (GH9964)
Bug in invalid attribute access on a TimedeltaIndex incorrectly raised ValueError instead of
AttributeError (GH9680)
Bug in unequal comparisons between categorical data and a scalar, which was not in the categories (e.g.
Series(Categorical(list("abc"), ordered=True)) > "d". This returned False for all elements, but now raises a TypeError. Equality comparisons also now return False for == and True for !=.
(GH9848)
Bug in DataFrame __setitem__ when right hand side is a dictionary (GH9874)
Bug in where when dtype is datetime64/timedelta64, but dtype of other is not (GH9804)
Bug in MultiIndex.sortlevel() results in unicode level name breaks (GH9856)
Bug in which groupby.transform incorrectly enforced output dtypes to match input dtypes. (GH9807)
Bug in DataFrame constructor when columns parameter is set, and data is an empty list (GH9939)
Bug in bar plot with log=True raises TypeError if all values are less than 1 (GH9905)
Bug in horizontal bar plot ignores log=True (GH9905)
Bug in PyTables queries that did not return proper results using the index (GH8265, GH9676)
Bug where dividing a dataframe containing values of type Decimal by another Decimal would raise.
(GH9787)
Bug where using DataFrame's asfreq would remove the name of the index. (GH9885)
Bug causing extra index point when resample BM/BQ (GH9756)
Changed caching in AbstractHolidayCalendar to be at the instance level rather than at the class level as
the latter can result in unexpected behaviour. (GH9552)
Fixed latex output for multi-indexed dataframes (GH9778)
Bug causing an exception when setting an empty range using DataFrame.loc (GH9596)
Bug in hiding ticklabels with subplots and shared axes when adding a new plot to an existing grid of axes
(GH9158)
Bug in transform and filter when grouping on a categorical variable (GH9921)
Bug in transform when groups are equal in number and dtype to the input index (GH9700)
Google BigQuery connector now imports dependencies on a per-method basis.(GH9713)
Updated BigQuery connector to no longer use deprecated oauth2client.tools.run() (GH8327)
93
Bug in subclassed DataFrame where slicing or subsetting may not return the correct class (GH9632)
Bug in .median() where non-float null values are not handled correctly (GH10040)
Bug in Series.fillna() where it raises if a numerically convertible string is given (GH10092)
Changes to Timedelta to conform the .seconds attribute with datetime.timedelta, see here
Changes to the .loc slicing API to conform with the behavior of .ix see here
Changes to the default for ordering in the Categorical constructor, see here
Enhancement to the .str accessor to make string operations easier, see here
The pandas.tools.rplot, pandas.sandbox.qtpandas and pandas.rpy modules are deprecated.
We refer users to external packages like seaborn, pandas-qt and rpy2 for similar or equivalent functionality, see
here
Check the API Changes and deprecations before updating.
Whats new in v0.16.0
New features
DataFrame Assign
Interaction with scipy.sparse
String Methods Enhancements
Other enhancements
Backwards incompatible API changes
Changes in Timedelta
Indexing Changes
Categorical Changes
Other API Changes
Deprecations
Removal of prior version deprecations/changes
Performance Improvements
Bug Fixes
(for example, a Series or NumPy array), or a function of one argument to be called on the DataFrame. The new
values are inserted, and the entire DataFrame (with all original and new columns) is returned.
In [1]: iris = read_csv('data/iris.data')
In [2]: iris.head()
Out[2]:
   SepalLength  SepalWidth  PetalLength  PetalWidth         Name
0          5.1         3.5          1.4         0.2  Iris-setosa
1          4.9         3.0          1.4         0.2  Iris-setosa
2          4.7         3.2          1.3         0.2  Iris-setosa
3          4.6         3.1          1.5         0.2  Iris-setosa
4          5.0         3.6          1.4         0.2  Iris-setosa

In [3]: iris.assign(sepal_ratio=iris['SepalWidth'] / iris['SepalLength']).head()
Out[3]:
   SepalLength  SepalWidth  PetalLength  PetalWidth         Name  sepal_ratio
0          5.1         3.5          1.4         0.2  Iris-setosa     0.686275
1          4.9         3.0          1.4         0.2  Iris-setosa     0.612245
2          4.7         3.2          1.3         0.2  Iris-setosa     0.680851
3          4.6         3.1          1.5         0.2  Iris-setosa     0.673913
4          5.0         3.6          1.4         0.2  Iris-setosa     0.720000
Above was an example of inserting a precomputed value. We can also pass in a function to be evaluated.
In [4]: iris.assign(sepal_ratio = lambda x: (x['SepalWidth'] /
   ...:                                      x['SepalLength'])).head()
Out[4]:
   SepalLength  SepalWidth  PetalLength  PetalWidth         Name  sepal_ratio
0          5.1         3.5          1.4         0.2  Iris-setosa     0.686275
1          4.9         3.0          1.4         0.2  Iris-setosa     0.612245
2          4.7         3.2          1.3         0.2  Iris-setosa     0.680851
3          4.6         3.1          1.5         0.2  Iris-setosa     0.673913
4          5.0         3.6          1.4         0.2  Iris-setosa     0.720000
The power of assign comes when used in chains of operations. For example, we can limit the DataFrame to just
those with a Sepal Length greater than 5, calculate the ratio, and plot
In [5]: (iris.query('SepalLength > 5')
   ...:      .assign(SepalRatio = lambda x: x.SepalWidth / x.SepalLength,
   ...:              PetalRatio = lambda x: x.PetalWidth / x.PetalLength)
   ...:      .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x119d27790>
In [15]: rows
Out[15]: [(1, 2), (1, 1), (2, 1)]
In [16]: columns
Out[16]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
SparseSeries.from_coo() is a convenience method for creating a SparseSeries from a scipy.sparse.coo_matrix:
In [21]: ss = SparseSeries.from_coo(A)
In [22]: ss
Out[22]:
0  2    1.0
   3    2.0
1  0    3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
Methods
isalpha()    isnumeric()    isdigit()    isdecimal()    isspace()
isupper()    istitle()      rfind()      ljust()        rjust()
zfill()
Series.str.pad() and Series.str.center() now accept fillchar option to specify filling character (GH9352)
In [26]: s = Series(['12', '300', '25'])
In [27]: s.str.pad(5, fillchar='_')
Out[27]:
0    ___12
1    __300
2    ___25
dtype: object
Other enhancements
Reindex now supports method=nearest for frames or series with a monotonic increasing or decreasing
index (GH9258):
In [31]: df = pd.DataFrame({'x': range(5)})
In [32]: df.reindex([0.2, 1.8, 3.5], method='nearest')
Out[32]:
x
0.2 0
1.8 2
3.5 4
This method is also exposed by the lower level Index.get_indexer and Index.get_loc methods.
The read_excel() function's sheetname argument now accepts a list and None, to get multiple or all sheets
respectively. If more than one sheet is specified, a dictionary is returned. (GH9450)
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel('path_to_file.xls',sheetname=['Sheet1',3])
Allow Stata files to be read incrementally with an iterator; support for long strings in Stata files. See the docs
here (GH9493).
Paths beginning with ~ will now be expanded to begin with the user's home directory (GH9066)
Added time interval selection in get_data_yahoo (GH9071)
Added Timestamp.to_datetime64() to complement Timedelta.to_timedelta64() (GH9255)
tseries.frequencies.to_offset() now accepts Timedelta as input (GH9064)
Lag parameter was added to the autocorrelation method of Series, defaults to lag-1 autocorrelation (GH9192)
Timedelta will now accept nanoseconds keyword in constructor (GH9273)
SQL code now safely escapes table and column names (GH8986)
Added auto-complete for Series.str.<tab>, Series.dt.<tab> and Series.cat.<tab>
(GH9322)
Index.get_indexer now supports method=pad and method=backfill even for any target array, not just monotonic targets. These methods also work for monotonic decreasing as well as monotonic
increasing indexes (GH9258).
Index.asof now works on all index types (GH9258).
A verbose argument has been augmented in io.read_excel(), defaults to False. Set to True to print
sheet names as they are parsed. (GH9450)
Added days_in_month (compatibility alias daysinmonth) property to Timestamp, DatetimeIndex,
Period, PeriodIndex, and Series.dt (GH9572)
Added decimal option in to_csv to provide formatting for non-'.' decimal separators (GH781); a short sketch follows this list.
Added normalize option for Timestamp to normalize to midnight (GH8794)
Added example for DataFrame import to R using HDF5 file and rhdf5 library. See the documentation for
more (GH9636).
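A minimal sketch of the decimal option mentioned above, assuming DataFrame is imported as in the surrounding examples (values are illustrative):

df = DataFrame({'x': [1.5, 2.25]})

# write a comma as the decimal separator; use a different column separator to avoid ambiguity
csv_text = df.to_csv(decimal=',', sep=';', index=False)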
New Behavior
In [33]: t = pd.Timedelta('1 day, 10:11:12.100123')
In [34]: t.days
Out[34]: 1
In [35]: t.seconds
Out[35]: 36672
In [36]: t.microseconds
Out[36]: 100123
In [37]: t.components
Out[37]: Components(days=1, hours=10, minutes=11, seconds=12, milliseconds=100, microseconds=123, nan
In [38]: t.components.seconds
Out[38]: 12
Indexing Changes
The behavior of a small sub-set of edge cases for using .loc have changed (GH8613). Furthermore we have improved
the content of the error messages that are raised:
Slicing with .loc where the start and/or stop bound is not found in the index is now allowed; this previously
would raise a KeyError. This makes the behavior the same as .ix in this case. This change is only for
slicing, not when indexing with a single label.
In [39]: df = DataFrame(np.random.randn(5,4),
   ....:                columns=list('ABCD'),
   ....:                index=date_range('20130101',periods=5))

In [40]: df
Out[40]:
                   A         B         C         D
2013-01-01 -1.546906 -0.202646 -0.655969  0.193421
2013-01-02  0.553439  1.318152 -0.469305  0.675554
2013-01-03 -1.817027 -0.183109  1.058969 -0.397840
2013-01-04  0.337438  1.047579  1.045938  0.863717
2013-01-05 -0.122092  0.124713 -0.322795  0.841675

In [41]: s = Series(range(5),[-2,-1,1,2,3])

In [42]: s
Out[42]:
-2    0
-1    1
 1    2
 2    3
 3    4
dtype: int64
Previous Behavior
In [4]: df.loc['2013-01-02':'2013-01-10']
KeyError: 'stop bound [2013-01-10] is not in the [index]'
In [6]: s.loc[-10:3]
KeyError: 'start bound [-10] is not the [index]'
New Behavior
In [43]: df.loc['2013-01-02':'2013-01-10']
Out[43]:
                   A         B         C         D
2013-01-02  0.553439  1.318152 -0.469305  0.675554
2013-01-03 -1.817027 -0.183109  1.058969 -0.397840
2013-01-04  0.337438  1.047579  1.045938  0.863717
2013-01-05 -0.122092  0.124713 -0.322795  0.841675

In [44]: s.loc[-10:3]
Out[44]:
-2    0
-1    1
 1    2
 2    3
 3    4
dtype: int64
Allow slicing with float-like values on an integer index for .ix. Previously this was only enabled for .loc:
Previous Behavior
In [8]: s.ix[-1.0:2]
TypeError: the slice start value [-1.0] is not a proper indexer for this index type (Int64Index)
New Behavior
In [45]: s.ix[-1.0:2]
Out[45]:
-1    1
 1    2
 2    3
dtype: int64
Provide a useful exception for indexing with an invalid type for that index when using .loc. For example
trying to use .loc on an index of type DatetimeIndex or PeriodIndex or TimedeltaIndex, with an
integer (or a float).
Previous Behavior
In [4]: df.loc[2:3]
KeyError: 'start bound [2] is not the [index]'
New Behavior
In [4]: df.loc[2:3]
TypeError: Cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with <type '
Categorical Changes
In prior versions, Categoricals that had an unspecified ordering (meaning no ordered keyword was passed)
were defaulted as ordered Categoricals. Going forward, the ordered keyword in the Categorical constructor
will default to False. Ordering must now be explicit.
Furthermore, previously you could change the ordered attribute of a Categorical by just setting the attribute, e.g. cat.ordered=True; This is now deprecated and you should use cat.as_ordered() or
cat.as_unordered(). These will by default return a new object and not modify the existing object. (GH9347,
GH9190)
Previous Behavior
In [3]: s = Series([0,1,2], dtype='category')
In [4]: s
Out[4]:
0
0
1
1
2
2
dtype: category
Categories (3, int64): [0 < 1 < 2]
In [5]: s.cat.ordered
Out[5]: True
In [6]: s.cat.ordered = False
In [7]: s
Out[7]:
0
0
1
1
2
2
dtype: category
Categories (3, int64): [0, 1, 2]
New Behavior
For ease of creation of series of categorical data, we have added the ability to pass keywords when calling
.astype(). These are passed directly to the constructor.
In [55]: s = Series(["a","b","c","a"]).astype('category',ordered=True)
In [56]: s
Out[56]:
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a < b < c]

In [57]: s = Series(["a","b","c","a"]).astype('category',categories=list('abcdef'),ordered=False)

In [58]: s
Out[58]:
0    a
1    b
2    c
3    a
dtype: category
Categories (6, object): [a, b, c, d, e, f]
New Behavior. If the input dtypes are integral, the output dtype is also integral and the output values are the
result of the bitwise operation.
In [2]: pd.Series([0,1,2,3], list('abcd')) | pd.Series([4,4,4,4], list('abcd'))
Out[2]:
a    4
b    5
c    6
d    7
dtype: int64
During division involving a Series or DataFrame, 0/0 and 0//0 now give np.nan instead of np.inf.
(GH9144, GH8445)
Previous Behavior
In [2]: p = pd.Series([0, 1])
In [3]: p / 0
Out[3]:
0    inf
1    inf
dtype: float64

In [4]: p // 0
Out[4]:
0    inf
1    inf
dtype: float64
New Behavior
In [59]: p = pd.Series([0, 1])
In [60]: p / 0
Out[60]:
0    NaN
1    inf
dtype: float64

In [61]: p // 0
Out[61]:
0    NaN
1    inf
dtype: float64
Series.value_counts and Series.describe for categorical data will now put NaN entries at the
end. (GH9443)
Series.describe for categorical data will now give counts and frequencies of 0, not NaN, for unused
categories (GH9443)
Due to a bug fix, looking up a partial string label with DatetimeIndex.asof now includes values that
match the string, even if they are after the start of the partial string label (GH9258).
Old behavior:
In [4]: pd.to_datetime(['2000-01-31', '2000-02-28']).asof('2000-02')
Out[4]: Timestamp('2000-01-31 00:00:00')
Fixed behavior:
In [62]: pd.to_datetime(['2000-01-31', '2000-02-28']).asof('2000-02')
Out[62]: Timestamp('2000-02-28 00:00:00')
To reproduce the old behavior, simply add more precision to the label (e.g., use 2000-02-01 instead of
2000-02).
Deprecations
The rplot trellis plotting interface is deprecated and will be removed in a future version. We refer to external
packages like seaborn for similar but more refined functionality (GH3445). The documentation includes some
examples how to convert your existing code using rplot to seaborn: rplot docs.
The pandas.sandbox.qtpandas interface is deprecated and will be removed in a future version. We refer
users to the external package pandas-qt. (GH9615)
The pandas.rpy interface is deprecated and will be removed in a future version. Similar functionality can
be accessed through the rpy2 project (GH9602)
Adding DatetimeIndex/PeriodIndex to another DatetimeIndex/PeriodIndex is being deprecated as a set-operation. This will be changed to a TypeError in a future version. .union() should be used
for the union set operation. (GH9094)
Subtracting DatetimeIndex/PeriodIndex from another DatetimeIndex/PeriodIndex is being deprecated as a set-operation. This will be changed to an actual numeric subtraction yielding a
TimeDeltaIndex in a future version. .difference() should be used for the differencing set operation.
(GH9094)
Removal of prior version deprecations/changes
DataFrame.pivot_table and crosstabs rows and cols keyword arguments were removed in favor
of index and columns (GH6581)
DataFrame.to_excel and DataFrame.to_csv cols keyword argument was removed in favor of
columns (GH6581)
Removed convert_dummies in favor of get_dummies (GH6581)
Removed value_range in favor of describe (GH6581)
Bug in using grouper functions that need passed-through arguments (e.g. axis) when using a wrapped function (e.g.
fillna) (GH9221)
DataFrame now properly supports simultaneous copy and dtype arguments in constructor (GH9099)
Bug in read_csv when using skiprows on a file with CR line endings with the c engine. (GH9079)
isnull now detects NaT in PeriodIndex (GH9129)
Bug in groupby .nth() with a multiple column groupby (GH8979)
Bug in DataFrame.where and Series.where coerce numerics to string incorrectly (GH9280)
Bug in DataFrame.where and Series.where raise ValueError when string list-like is passed.
(GH9280)
Accessing Series.str methods with non-string values now raises TypeError instead of producing
incorrect results (GH9184)
Bug in DatetimeIndex.__contains__ when index has duplicates and is not monotonic increasing
(GH9512)
Fixed division by zero error for Series.kurt() when all values are equal (GH9197)
Fixed issue in the xlsxwriter engine where it added a default 'General' format to cells if no other format
was applied. This prevented other row or column formatting being applied. (GH9167)
Fixes issue with index_col=False when usecols is also specified in read_csv. (GH9082)
Bug where wide_to_long would modify the input stubnames list (GH9204)
Bug in to_sql not storing float64 values using double precision. (GH9009)
SparseSeries and SparsePanel now accept zero argument constructors (same as their non-sparse counterparts) (GH9272).
Regression in merging Categorical and object dtypes (GH9426)
Bug in read_csv with buffer overflows with certain malformed input files (GH9205)
Bug in groupby MultiIndex with missing pair (GH9049, GH9344)
Fixed bug in Series.groupby where grouping on MultiIndex levels would ignore the sort argument
(GH9444)
Fix bug in DataFrame.groupby where sort=False is ignored in the case of Categorical columns.
(GH8868)
Fixed bug with reading CSV files from Amazon S3 on python 3 raising a TypeError (GH9452)
Bug in the Google BigQuery reader where the jobComplete key may be present but False in the query results
(GH8728)
Bug in Series.value_counts with excluding NaN for categorical type Series with dropna=True
(GH9443)
Fixed missing numeric_only option for DataFrame.std/var/sem (GH9201)
Support constructing Panel or Panel4D with scalar data (GH8285)
Series text representation disconnected from max_rows/max_columns (GH7508).
Series number formatting inconsistent when truncated (GH8532).
Previous Behavior
In [2]: pd.options.display.max_rows = 10
In [3]: s = pd.Series([1,1,1,1,1,1,1,1,1,1,0.9999,1,1]*10)
In [4]: s
Out[4]:
0      1
1      1
2      1
...
127    0.9999
128    1.0000
129    1.0000
Length: 130, dtype: float64

New Behavior

0      1.0000
1      1.0000
2      1.0000
3      1.0000
4      1.0000
        ...
125    1.0000
126    1.0000
127    0.9999
128    1.0000
129    1.0000
dtype: float64
A Spurious SettingWithCopy Warning was generated when setting a new item in a frame in some cases
(GH8730)
The following would previously report a SettingWithCopy Warning.
In [1]: df1 = DataFrame({'x': Series(['a','b','c']), 'y': Series(['d','e','f'])})
In [2]: df2 = df1[['x']]
In [3]: df2['y'] = ['g', 'h', 'i']
In [3]: df.index.lexsort_depth
Out[3]: 1
# in prior versions this would raise a KeyError
# will now show a PerformanceWarning
In [4]: df.loc[(1, 'z')]
Out[4]:
            jolie
jim joe
1   z    0.329668
# lexically sorting
In [5]: df2 = df.sortlevel()
In [6]: df2
Out[6]:
            jolie
jim joe
0   x    0.043324
    x    0.561433
1   y    0.502967
    z    0.329668
In [7]: df2.index.lexsort_depth
Out[7]: 2
In [8]: df2.loc[(1,'z')]
Out[8]:
            jolie
jim joe
1   z    0.329668
Bug in unique of Series with category dtype, which returned all categories regardless whether they were
used or not (see GH8559 for the discussion). Previous behaviour was to return all categories:
In [3]: cat = pd.Categorical(['a', 'b', 'a'], categories=['a', 'b', 'c'])
In [4]: cat
Out[4]:
[a, b, a]
Categories (3, object): [a < b < c]
In [5]: cat.unique()
Out[5]: array(['a', 'b', 'c'], dtype=object)
Now, only the categories that do effectively occur in the array are returned:
In [9]: cat = pd.Categorical(['a', 'b', 'a'], categories=['a', 'b', 'c'])
In [10]: cat.unique()
Out[10]:
[a, b]
Categories (2, object): [a, b]
Series.all and Series.any now support the level and skipna parameters. Series.all,
Series.any, Index.all, and Index.any no longer support the out and keepdims parameters, which
existed for compatibility with ndarray. Various index types no longer support the all and any aggregation
functions and will now raise TypeError. (GH8302).
Allow equality comparisons of Series with a categorical dtype and object dtype; previously these would raise
TypeError (GH8938)
Bug in NDFrame: conflicting attribute/column names now behave consistently between getting and setting.
Previously, when both a column and attribute named y existed, data.y would return the attribute, while
data.y = z would update the column (GH8994)
In [11]: data = pd.DataFrame({'x':[1, 2, 3]})
In [12]: data.y = 2
In [13]: data['y'] = [2, 4, 6]
In [14]: data
Out[14]:
x y
0 1 2
1 2 4
2 3 6
# this assignment was inconsistent
In [15]: data.y = 5
Old behavior:
In [6]: data.y
Out[6]: 2
In [7]: data['y'].values
Out[7]: array([5, 5, 5])
New behavior:
In [16]: data.y
Out[16]: 5
In [17]: data['y'].values
Out[17]: array([2, 4, 6])
Timestamp('now') is now equivalent to Timestamp.now() in that it returns the local time rather than
UTC. Also, Timestamp('today') is now equivalent to Timestamp.today() and both have tz as a
possible argument. (GH9000)
Fix negative step support for label-based slices (GH8753)
Old behavior:
New behavior:
In [18]: s = pd.Series(np.arange(3), ['a', 'b', 'c'])
In [19]: s.loc['c':'a':-1]
Out[19]:
c    2
b    1
a    0
dtype: int64
1.8.2 Enhancements
Categorical enhancements:
Added ability to export Categorical data to Stata (GH8633). See here for limitations of categorical variables
exported to Stata data files.
Added flag order_categoricals to StataReader and read_stata to select whether to order imported categorical data (GH8836). See here for more information on importing categorical variables from Stata
data files.
Added ability to export Categorical data to to/from HDF5 (GH7621). Queries work the same as if it was an
object array. However, the category dtyped data is stored in a more efficient manner. See here for an
example and caveats w.r.t. prior versions of pandas.
Added support for searchsorted() on Categorical class (GH8420).
Other enhancements:
Added the ability to specify the SQL type of columns when writing a DataFrame to a database (GH8778). For
example, specifying to use the sqlalchemy String type instead of the default Text type for string columns:
from sqlalchemy.types import String
data.to_sql('data_dtype', engine, dtype={'Col_1': String})
Series.all and Series.any now support the level and skipna parameters (GH8302):
In [20]: s = pd.Series([False, True, False], index=[0, 0, 1])
In [21]: s.any(level=0)
Out[21]:
0     True
1    False
dtype: bool
Panel now supports the all and any aggregation functions. (GH8302):
1.8.3 Performance
Reduce memory usage when skiprows is an integer in read_csv (GH8681)
Performance boost for to_datetime conversions with a passed format=, and the exact=False
(GH8904)
Bug where index name was still used when plotting a series with use_index=False (GH8558).
Bugs when trying to stack multiple columns, when some (or all) of the level names are numbers (GH8584).
Bug in MultiIndex where __contains__ returns wrong result if index is not lexically sorted or unique
(GH7724)
BUG CSV: fix problem with trailing whitespace in skipped rows, (GH8679), (GH8661), (GH8983)
Regression in Timestamp does not parse Z zone designator for UTC (GH8771)
Bug in StataWriter that writes strings with 244 characters irrespective of actual size (GH8969)
Fixed ValueError raised by cummin/cummax when datetime64 Series contains NaT. (GH8965)
Bug in Datareader returns object dtype if there are missing values (GH8980)
Bug in plotting if sharex was enabled and index was a timeseries, would show labels on multiple axes (GH3964).
Bug where passing a unit to the TimedeltaIndex constructor applied the to-nanosecond conversion twice.
(GH9011).
Bug in plotting of a period-like array (GH9012)
previous behavior:
In [6]: s.dt.hour
Out[6]:
0    0
1    0
2   -1
3    0
4    0
dtype: int64

current behavior:

In [4]: s.dt.hour
Out[4]:
0    0.0
1    0.0
2    NaN
3    0.0
4    0.0
dtype: float64
groupby with as_index=False will not add erroneous extra columns to result (GH8582):
In [5]: np.random.seed(2718281)
In [6]: df = pd.DataFrame(np.random.randint(0, 100, (10, 2)),
   ...:                   columns=['jim', 'joe'])

In [7]: df.head()
Out[7]:
   jim  joe
0   61   81
1   96   49
2   55   65
3   72   51
4   77   12
In [8]: ts = pd.Series(5 * np.random.randint(0, 3, 10))
previous behavior:
In [4]: df.groupby(ts, as_index=False).max()
Out[4]:
   NaN  jim  joe
0    0   72   83
1    5   77   84
2   10   96   65

current behavior:

In [9]: df.groupby(ts, as_index=False).max()
Out[9]:
   jim  joe
0   72   83
1   77   84
2   96   65
groupby will not erroneously exclude columns if the column name conflicts with the grouper name (GH8112):
In [10]: df = pd.DataFrame({'jim': range(5), 'joe': range(5, 10)})
In [11]: df
Out[11]:
   jim  joe
0    0    5
1    1    6
2    2    7
3    3    8
4    4    9
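The definition of the grouper gr and the "previous behavior" block are not shown in this excerpt; judging from the output below, the grouper is presumably a boolean keyed on the jim column, e.g.:

# presumed grouper: group rows by whether jim is less than 2
gr = df.groupby(df['jim'] < 2)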
current behavior:
In [13]: gr.apply(sum)
Out[13]:
       jim  joe
jim
False    9   24
True     1   11
Support for slicing with monotonic decreasing indexes, even if start or stop is not found in the index
(GH7860):
In [14]: s = pd.Series(['a', 'b', 'c', 'd'], [4, 3, 2, 1])
In [15]: s
Out[15]:
4    a
3    b
2    c
1    d
dtype: object
previous behavior:
In [8]: s.loc[3.5:1.5]
KeyError: 3.5
current behavior:
In [16]: s.loc[3.5:1.5]
Out[16]:
3    b
2    c
dtype: object
io.data.Options has been fixed for a change in the format of the Yahoo Options page (GH8612),
(GH8741)
Note: As a result of a change in Yahoos option page layout, when an expiry date is given, Options methods
now return data for a single expiry date. Previously, methods returned all data for the selected month.
The month and year parameters have been undeprecated and can be used to get all options data for a given
month.
If an expiry date that is not valid is given, data for the next expiry after the given date is returned.
Option data frames are now saved on the instance as callsYYMMDD or putsYYMMDD. Previously they were
saved as callsMMYY and putsMMYY. The next expiry is saved as calls and puts.
New features:
The expiry parameter can now be a single date or a list-like object containing dates.
A new property expiry_dates was added, which returns all available expiry dates.
Current behavior:
In [17]: from pandas.io.data import Options
In [18]: aapl = Options('aapl','yahoo')
In [19]: aapl.get_call_data().iloc[0:5,0:1]
Out[19]:
                                              Last
Strike Expiry     Type Symbol
55.0   2016-05-06 call AAPL160506C00055000  37.74
80.0   2016-05-06 call AAPL160506C00080000  13.75
85.0   2016-05-06 call AAPL160506C00085000   9.25
86.0   2016-05-06 call AAPL160506C00086000   6.91
87.0   2016-05-06 call AAPL160506C00087000   7.21
In [20]: aapl.expiry_dates
Out[20]:
[datetime.date(2016, 5, 6),
datetime.date(2016, 5, 13),
datetime.date(2016, 5, 20),
datetime.date(2016, 5, 27),
datetime.date(2016, 6, 3),
datetime.date(2016, 6, 10),
datetime.date(2016, 6, 17),
datetime.date(2016, 7, 15),
datetime.date(2016, 8, 19),
datetime.date(2016, 10, 21),
datetime.date(2017, 1, 20),
datetime.date(2017, 3, 17),
datetime.date(2017, 6, 16),
datetime.date(2018, 1, 19)]
In [21]: aapl.get_near_stock_price(expiry=aapl.expiry_dates[0:3]).iloc[0:5,0:1]
Out[21]:
                                             Last
Strike Expiry     Type Symbol
93.5   2016-05-13 call AAPL160513C00093500  1.52
       2016-05-20 call AAPL160520C00093500  2.15
94.0   2016-05-06 call AAPL160506C00094000  0.95
       2016-05-13 call AAPL160513C00094000  1.45
       2016-05-20 call AAPL160520C00094000  1.79
1.9.2 Enhancements
concat permits a wider variety of iterables of pandas objects to be passed as the first parameter (GH8645):
In [22]: from collections import deque
In [23]: df1 = pd.DataFrame([1, 2, 3])
In [24]: df2 = pd.DataFrame([4, 5, 6])
previous behavior:
current behavior:
In [25]: pd.concat(deque((df1, df2)))
Out[25]:
0
0 1
1 2
2 3
0 4
1 5
2 6
Represent MultiIndex labels with a dtype that utilizes memory based on the level size. In prior versions,
the memory usage was a constant 8 bytes per element in each level. In addition, in prior versions, the reported
memory usage was incorrect as it didn't show the usage for the memory occupied by the underlying data array.
(GH8456)
In [26]: dfi = DataFrame(1,index=pd.MultiIndex.from_product([['a'],range(1000)]),columns=['A'])
previous behavior:
# this was underreported in prior versions
In [1]: dfi.memory_usage(index=True)
Out[1]:
Index    8000  # took about 24008 bytes in < 0.15.1
A        8000
dtype: int64
current behavior:
In [27]: dfi.memory_usage(index=True)
Out[27]:
Index    8000
A        8000
dtype: int64
World Bank data requests now will warn/raise based on an errors argument, as well as a list of hard-coded
country codes and the World Bank's JSON response. In prior versions, the error messages didn't look at the
World Bank's JSON response. Problem-inducing inputs were simply dropped prior to the request. The issue was
that many good countries were cropped in the hard-coded approach. All countries will work now, but some bad
countries will raise exceptions because some edge cases break the entire response. (GH8482)
Added option to Series.str.split() to return a DataFrame rather than a Series (GH8428)
Added option to df.info(null_counts=None|True|False) to override the default display options
and force showing of the null-counts (GH8701)
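A minimal sketch of forcing the null counts, assuming pandas as pd and numpy as np (column contents are illustrative):

df = pd.DataFrame({'a': [1, np.nan, 3], 'b': ['x', 'y', None]})

# show per-column non-null counts regardless of the display defaults
df.info(null_counts=True)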
Bug in date_range where partially-specified dates would incorporate current date (GH6961)
Bug in setting by indexer to a scalar value with a mixed-dtype Panel4d was failing (GH8702)
Bug where DataReaders would fail if one of the symbols passed was invalid. Now returns data for valid
symbols and np.nan for invalid (GH8494)
Bug in get_quote_yahoo that wouldn't allow non-float return values (GH5229).
pandas.core.group_agg and pandas.core.factor_agg were removed. As an alternative, construct a dataframe and use df.groupby(<group>).agg(<func>) (a sketch follows below).
Supplying codes/labels and levels to the Categorical constructor is not supported anymore. Supplying
two arguments to the constructor is now interpreted as values and levels (now called categories). Please
change your code to use the from_codes() constructor (see the sketch below).
The Categorical.labels attribute was renamed to Categorical.codes and is read only. If you want
to manipulate codes, please use one of the API methods on Categoricals.
The Categorical.levels attribute is renamed to Categorical.categories.
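A minimal sketch of both replacements mentioned above, assuming pandas is imported as pd (names and values are illustrative):

import pandas as pd

# instead of pandas.core.group_agg / factor_agg: build a frame and aggregate
df = pd.DataFrame({'group': ['a', 'a', 'b'], 'value': [1, 2, 3]})
df.groupby('group').agg('sum')

# instead of passing codes and levels to the Categorical constructor
cat = pd.Categorical.from_codes(codes=[0, 1, 0, 2],
                                categories=['low', 'mid', 'high'])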
TimedeltaIndex/Scalar
We introduce a new scalar type Timedelta, which is a subclass of datetime.timedelta, and behaves in a
similar manner, but allows compatibility with np.timedelta64 types as well as a host of custom representation,
parsing, and attributes. This type is very similar to how Timestamp works for datetimes. It is a nice-API box for
the type. See the docs. (GH3009, GH4533, GH8209, GH8187, GH8190, GH7869, GH7661, GH8345, GH8471)
Warning: Timedelta scalars (and TimedeltaIndex) component fields are not the same as the component
fields on a datetime.timedelta object. For example, .seconds on a datetime.timedelta object
returns the total number of seconds combined between hours, minutes and seconds. In contrast, the pandas
Timedelta breaks out hours, minutes, microseconds and nanoseconds separately.
# Timedelta accessor
In [9]: tds = Timedelta('31 days 5 min 3 sec')
In [10]: tds.minutes
Out[10]: 5L
In [11]: tds.seconds
Out[11]: 3L
# datetime.timedelta accessor
# this is 5 minutes * 60 + 3 seconds
In [12]: tds.to_pytimedelta().seconds
Out[12]: 303
Note: this is no longer true starting from v0.16.0, where full compatibility with datetime.timedelta is
introduced. See the 0.16.0 whatsnew entry
Warning: Prior to 0.15.0 pd.to_timedelta would return a Series for list-like/Series input, and a
np.timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, Series for
Series input, and Timedelta for scalar input.
The arguments to pd.to_timedelta are now (arg,unit=ns,box=True,coerce=False), previously were (arg,box=True,unit=ns) as these are more logical.
Construct a scalar
In [9]: Timedelta('1 days 06:05:01.00003')
Out[9]: Timedelta('1 days 06:05:01.000030')
In [10]: Timedelta('15.5us')
Out[10]: Timedelta('0 days 00:00:00.000015')
In [11]: Timedelta('1 hour 15.5us')
Out[11]: Timedelta('0 days 01:00:00.000015')
# negative Timedeltas have this string repr
# to be more consistent with datetime.timedelta conventions
In [12]: Timedelta('-1us')
Out[12]: Timedelta('-1 days +23:59:59.999999')
# a NaT
In [13]: Timedelta('nan')
Out[13]: NaT
Construct a TimedeltaIndex
In [18]: TimedeltaIndex(['1 days','1 days, 00:00:05',
   ....:                 np.timedelta64(2,'D'),timedelta(days=2,seconds=2)])
Out[18]:
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00',
                '2 days 00:00:02'],
               dtype='timedelta64[ns]', freq=None)
Finally, the combination of TimedeltaIndex with DatetimeIndex allow certain combination operations that
are NaT preserving:
In [25]: tdi = TimedeltaIndex(['1 days',pd.NaT,'2 days'])
In [26]: tdi.tolist()
Out[26]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]
In [27]: dti = date_range('20130101',periods=3)
In [28]: dti.tolist()
Out[28]:
[Timestamp('2013-01-01 00:00:00', offset='D'),
Timestamp('2013-01-02 00:00:00', offset='D'),
Timestamp('2013-01-03 00:00:00', offset='D')]
In [29]: (dti + tdi).tolist()
Out[29]: [Timestamp('2013-01-02 00:00:00'), NaT, Timestamp('2013-01-05 00:00:00')]
In [30]: (dti - tdi).tolist()
Out[30]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2013-01-01 00:00:00')]
Memory Usage
Implemented methods to find memory usage of a DataFrame. See the FAQ for more. (GH6852).
A new display option display.memory_usage (see Options and Settings) sets the default behavior of the
memory_usage argument in the df.info() method. By default display.memory_usage is True.
In [31]: dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
   ....:           'complex128', 'object', 'bool']

In [32]: n = 5000

In [33]: data = dict([ (t, np.random.randint(100, size=n).astype(t))
   ....:               for t in dtypes])
In [34]: df = DataFrame(data)
In [35]: df['categorical'] = df['object'].astype('category')
In [36]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
bool               5000 non-null bool
complex128         5000 non-null complex128
datetime64[ns]     5000 non-null datetime64[ns]
float64            5000 non-null float64
int64              5000 non-null int64
object             5000 non-null object
timedelta64[ns]    5000 non-null timedelta64[ns]
categorical        5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
memory usage: 284.1+ KB
Additionally memory_usage() is an available method for a dataframe object which returns the memory usage of
each column.
In [37]: df.memory_usage(index=True)
Out[37]:
Index                 72
bool                5000
complex128         80000
datetime64[ns]     40000
float64            40000
int64              40000
object             40000
timedelta64[ns]    40000
categorical         5800
dtype: int64
.dt accessor
Series has gained an accessor to succinctly return datetime-like properties for the values of the Series, if it's a
datetime/period-like Series. (GH7207) This will return a Series, indexed like the existing Series. See the docs
# datetime
In [38]: s = Series(date_range('20130101 09:10:12',periods=4))
In [39]: s
Out[39]:
0   2013-01-01 09:10:12
1   2013-01-02 09:10:12
2   2013-01-03 09:10:12
3   2013-01-04 09:10:12
dtype: datetime64[ns]

In [40]: s.dt.hour
Out[40]:
0    9
1    9
2    9
3    9
dtype: int64

In [41]: s.dt.second
Out[41]:
0    12
1    12
2    12
3    12
dtype: int64

In [42]: s.dt.day
Out[42]:
0    1
1    2
2    3
3    4
dtype: int64
In [43]: s.dt.freq
Out[43]: <Day>
Out[56]:
0    5
1    6
2    7
3    8
dtype: int64

In [57]: s.dt.components
Out[57]:
   days  hours  minutes  seconds  milliseconds  microseconds  nanoseconds
0     1      0        0        5             0             0            0
1     1      0        0        6             0             0            0
2     1      0        0        7             0             0            0
3     1      0        0        8             0             0            0
tz_localize now accepts the ambiguous keyword which allows for passing an array of bools indicating
whether the date belongs in DST or not, NaT for setting transition times to NaT, infer for inferring DST/non-DST, and raise (default) for an AmbiguousTimeError to be raised. See the docs for more details (GH7943); a small sketch follows at the end of this list.
DataFrame.tz_localize and DataFrame.tz_convert now accepts an optional level argument
for localizing a specific level of a MultiIndex (GH7846)
Timestamp.tz_localize and Timestamp.tz_convert now raise TypeError in error cases, rather
than Exception (GH8025)
a timeseries/index localized to UTC when inserted into a Series/DataFrame will preserve the UTC timezone
(rather than being a naive datetime64[ns]) as object dtype (GH8411)
Timestamp.__repr__ displays dateutil.tz.tzoffset info (GH7907)
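A minimal sketch of the ambiguous keyword mentioned above, assuming pandas as pd and numpy as np (times and zone are illustrative):

# 01:30 occurs twice on the US/Eastern DST fall-back date
rng = pd.DatetimeIndex(['2014-11-02 01:30', '2014-11-02 01:30'])

# mark the first occurrence as DST and the second as standard time
rng.tz_localize('US/Eastern', ambiguous=np.array([True, False]))

# or turn ambiguous times into NaT
rng.tz_localize('US/Eastern', ambiguous='NaT')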
Rolling/Expanding Moments improvements
rolling_min(), rolling_max(), rolling_cov(), and rolling_corr() now return objects with
all NaN when len(arg) < min_periods <= window rather than raising. (This makes all rolling functions consistent in this behavior). (GH7766)
Prior to 0.15.0
In [64]: s = Series([10, 11, 12, 13])
In [15]: rolling_min(s, window=10, min_periods=5)
ValueError: min_periods (5) must be <= window (4)
New behavior
In [4]: pd.rolling_min(s, window=10, min_periods=5)
Out[4]:
0   NaN
1   NaN
2   NaN
3   NaN
dtype: float64
rolling_window() now normalizes the weights properly in rolling mean mode (mean=True) so that the
calculated weighted means (e.g. triang, gaussian) are distributed about the same means as those calculated
without weighting (i.e. boxcar). See the note on normalization for further details. (GH7618)
In [65]: s = Series([10.5, 8.8, 11.4, 9.7, 9.3])
New behavior
In [10]: pd.rolling_window(s, window=3, win_type='triang', center=True)
Out[10]:
0       NaN
1     9.875
2    10.325
3    10.025
4       NaN
dtype: float64
Removed center argument from all expanding_ functions (see list), as the results produced when
center=True did not make much sense. (GH7925)
Added optional ddof argument to expanding_cov() and rolling_cov(). The default value of 1 is
backwards-compatible. (GH8279)
Documented the ddof argument to expanding_var(), expanding_std(), rolling_var(), and
rolling_std(). These functions' support of a ddof argument (with a default value of 1) was previously
undocumented. (GH8064)
ewma(), ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now interpret min_periods
in the same manner that the rolling_*() and expanding_*() functions do: a given result entry will be
NaN if the (expanding, in this case) window does not contain at least min_periods values. The previous
behavior was to set to NaN the min_periods entries starting with the first non- NaN value. (GH7977)
Prior behavior (note values start at index 2, which is min_periods after index 0 (the index of the first nonempty value)):
In [66]: s
New behavior (note values start at index 4, the location of the 2nd (since min_periods=2) non-empty value):
ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now have an optional adjust argument, just like ewma() does, affecting how the weights are calculated. The default value of adjust is True,
which is backwards-compatible. See Exponentially weighted moment functions for details. (GH7911)
ewma(), ewmstd(), ewmvol(), ewmvar(), ewmcov(), and ewmcorr() now have an optional
ignore_na argument. When ignore_na=False (the default), missing values are taken into account in
the weights calculation. When ignore_na=True (which reproduces the pre-0.15.0 behavior), missing values
are ignored in the weights calculation. (GH7543)
In [7]: pd.ewma(Series([None, 1., 8.]), com=2.)
Out[7]:
0    NaN
1    1.0
2    5.2
dtype: float64

# pre-0.15.0 behavior
In [8]: pd.ewma(Series([1., None, 8.]), com=2., ignore_na=True)
Out[8]:
0    1.0
1    1.0
2    5.2
dtype: float64

# new default
In [9]: pd.ewma(Series([1., None, 8.]), com=2., ignore_na=False)
Out[9]:
0    1.000000
1    1.000000
2    5.846154
dtype: float64
Warning: By default (ignore_na=False) the ewm*() functions weights calculation in the presence
of missing values is different than in pre-0.15.0 versions. To reproduce the pre-0.15.0 calculation of weights
in the presence of missing values one must specify explicitly ignore_na=True.
Bug in expanding_cov(), expanding_corr(), rolling_cov(), rolling_cor(), ewmcov(),
and ewmcorr() returning results with columns sorted by name and producing an error for non-unique
columns; now handles non-unique columns and returns columns in original order (except for the case of two
DataFrames with pairwise=False, where behavior is unchanged) (GH7542)
Bug in rolling_count() and expanding_*() functions unnecessarily producing error message for
zero-length data (GH8056)
Bug in rolling_apply() and expanding_apply() interpreting min_periods=0 as min_periods=1 (GH8080)
Bug in expanding_std() and expanding_var() for a single value producing a confusing error message
(GH7900)
Bug in rolling_std() and rolling_var() for a single value producing 0 rather than NaN (GH7900)
Bug in ewmstd(), ewmvol(), ewmvar(), and ewmcov() calculation of de-biasing factors when
bias=False (the default). Previously an incorrect constant factor was used, based on adjust=True,
ignore_na=True, and an infinite number of observations. Now a different factor is used for each entry,
based on the actual weights (analogous to the usual N/(N-1) factor). In particular, for a single point a value of
NaN is returned when bias=False, whereas previously a value of (approximately) 0 was returned.
For example, consider the following pre-0.15.0 results for ewmvar(..., bias=False), and the corresponding debiasing factors:
In [67]: s = Series([1., 2., 0., 4.])
In [89]: ewmvar(s, com=2., bias=False)
Out[89]:
0   -2.775558e-16
1    3.000000e-01
2    9.556787e-01
3    3.585799e+00
dtype: float64

In [90]: ewmvar(s, com=2., bias=False) / ewmvar(s, com=2., bias=True)
Out[90]:
0    1.25
1    1.25
2    1.25
3    1.25
dtype: float64
Note that entry 0 is approximately 0, and the debiasing factors are a constant 1.25. By comparison, the following
0.15.0 results have a NaN for entry 0, and the debiasing factors are decreasing (towards 1.25):
In [14]: pd.ewmvar(s, com=2., bias=False)
Out[14]:
0         NaN
1    0.500000
2    1.210526
3    4.089069
dtype: float64

In [15]: pd.ewmvar(s, com=2., bias=False) / pd.ewmvar(s, com=2., bias=True)
Out[15]:
0         NaN
1    2.083333
2    1.583333
3    1.425439
dtype: float64
Added support for specifying a schema to read from/write to with read_sql_table and to_sql
(GH7441, GH7952). For example:
df.to_sql('table', engine, schema='other_schema')
pd.read_sql_table('table', engine, schema='other_schema')
API changes related to the introduction of the Timedelta scalar (see above for more details):
Prior to 0.15.0 to_timedelta() would return a Series for list-like/Series input, and a
np.timedelta64 for scalar input. It will now return a TimedeltaIndex for list-like input, Series
for Series input, and Timedelta for scalar input.
For API changes related to the rolling and expanding functions, see detailed overview above.
Other notable API changes:
Consistency when indexing with .loc and a list-like indexer when no values are found.
In [68]: df = DataFrame([['a'],['b']],index=[1,2])
In [69]: df
Out[69]:
0
1 a
2 b
Furthermore, .loc will raise if no values are found in a multi-index with a list-like indexer:
In [75]: s = Series(np.arange(3,dtype='int64'),
   ....:            index=MultiIndex.from_product([['A'],['foo','bar','baz']],
   ....:                                          names=['one','two'])
   ....:            ).sortlevel()

In [76]: s
Out[76]:
one  two
A    bar    1
     baz    2
     foo    0
dtype: int64
In [77]: try:
....:
s.loc[['D']]
Assigning values to None now considers the dtype when choosing an empty value (GH7941).
Previously, assigning to None in numeric containers changed the dtype to object (or errored, depending on the
call). It now uses NaN:
In [78]: s = Series([1, 2, 3])
In [79]: s.loc[0] = None
In [80]: s
Out[80]:
0    NaN
1    2.0
2    3.0
dtype: float64
To insert a NaN, you must explicitly use np.nan. See the docs.
In prior versions, updating a pandas object inplace would not reflect in other python references to this object.
(GH8511, GH5104)
In [84]: s = Series([1, 2, 3])
In [85]: s2 = s
In [86]: s += 1.5
Out[7]:
0    1
1    2
2    3
dtype: int64
Made both the C-based and Python engines for read_csv and read_table ignore empty lines in input as well as
whitespace-filled lines, as long as sep is not whitespace. This is an API change that can be controlled by the
keyword parameter skip_blank_lines. See the docs (GH4466)
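A small sketch, assuming pandas is imported as pd (the file path is hypothetical):

# blank and whitespace-only lines are now skipped by default
df = pd.read_csv('data/survey.csv')

# restore the previous behaviour and keep the blank lines
df_old = pd.read_csv('data/survey.csv', skip_blank_lines=False)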
A timeseries/index localized to UTC when inserted into a Series/DataFrame will preserve the UTC timezone
and inserted as object dtype rather than being converted to a naive datetime64[ns] (GH8411).
Bug in passing a DatetimeIndex with a timezone that was not being retained in DataFrame construction
from a dict (GH7822)
In prior versions this would drop the timezone, now it retains the timezone, but gives a column of object
dtype:
In [89]: i = date_range('1/1/2011', periods=3, freq='10s', tz = 'US/Eastern')
In [90]: i
Out[90]:
DatetimeIndex(['2011-01-01 00:00:00-05:00', '2011-01-01 00:00:10-05:00',
'2011-01-01 00:00:20-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='10S')
In [91]: df = DataFrame( {'a' : i } )
In [92]: df
Out[92]:
a
0 2011-01-01 00:00:00-05:00
1 2011-01-01 00:00:10-05:00
2 2011-01-01 00:00:20-05:00
In [93]: df.dtypes
Out[93]:
a
datetime64[ns, US/Eastern]
dtype: object
Previously this would have yielded a column of datetime64 dtype, but without timezone info.
The behaviour of assigning a column to an existing dataframe as df['a'] = i remains unchanged (this already
returned an object column with a timezone).
When passing multiple levels to stack(), it will now raise a ValueError when the levels aren't all level
names or all level numbers (GH7660). See Reshaping by stacking and unstacking.
Raise a ValueError in df.to_hdf with fixed format, if df has non-unique columns as the resulting file
will be broken (GH7761)
SettingWithCopy raise/warnings (according to the option mode.chained_assignment) will now be
issued when setting a value on a sliced mixed-dtype DataFrame using chained-assignment. (GH7845, GH7950)
In [1]: df = DataFrame(np.arange(0,9), columns=['count'])
In [2]: df['group'] = 'b'
In [3]: df.iloc[0:5]['group'] = 'a'
/usr/local/bin/ipython:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
merge, DataFrame.merge, and ordered_merge now return the same type as the left argument
(GH7737).
Previously an enlargement with a mixed-dtype frame would act unlike .append which will preserve dtypes
(related GH2578, GH8176):
In [94]: df = DataFrame([[True, 1],[False, 2]],
   ....:                columns=["female","fitness"])

In [95]: df
Out[95]:
   female  fitness
0    True        1
1   False        2

In [96]: df.dtypes
Out[96]:
female      bool
fitness    int64
dtype: object

# dtypes are now preserved
In [97]: df.loc[2] = df.loc[1]

In [98]: df
Out[98]:
   female  fitness
0    True        1
1   False        2
2   False        2

In [99]: df.dtypes
Out[99]:
female      bool
fitness    int64
dtype: object
Series.to_csv() now returns a string when path=None, matching the behaviour of DataFrame.to_csv() (GH8215).
read_hdf now raises IOError when a file that doesn't exist is passed in. Previously, a new, empty file was
created, and a KeyError raised (GH7715).
DataFrame.info() now ends its output with a newline character (GH8114)
Concatenating no objects will now raise a ValueError rather than a bare Exception.
Merge errors will now be sub-classes of ValueError rather than raw Exception (GH8501)
DataFrame.plot and Series.plot keywords now have consistent orders (GH8037)
Internal Refactoring
In 0.15.0 Index has internally been refactored to no longer sub-class ndarray but instead sub-class
PandasObject, similarly to the rest of the pandas objects. This change allows very easy sub-classing and creation of new index types. This should be a transparent change with only very limited API implications (GH5080,
GH7439, GH7796, GH8024, GH8367, GH7997, GH8522):
you may need to unpickle pandas version < 0.15.0 pickles using pd.read_pickle rather than
pickle.load. See pickle docs
when plotting with a PeriodIndex, the matplotlib internal axes will now be arrays of Period rather than a
PeriodIndex (this is similar to how a DatetimeIndex passes arrays of datetimes now)
MultiIndexes will now raise similarly to other pandas objects w.r.t. truth testing, see here (GH7897).
When plotting a DatetimeIndex directly with matplotlib's plot function, the axis labels will no longer be formatted as dates but as integers (the internal representation of a datetime64). UPDATE This is fixed in 0.15.1,
see here.
Deprecations
The Categorical labels and levels attributes are deprecated and renamed to codes and
categories.
The outtype argument to pd.DataFrame.to_dict has been deprecated in favor of orient. (GH7840)
The convert_dummies method has been deprecated in favor of get_dummies (GH8140)
The infer_dst argument in tz_localize will be deprecated in favor of ambiguous to allow for more
flexibility in dealing with DST transitions. Replace infer_dst=True with ambiguous='infer' for the
same behavior (GH7943). See the docs for more details.
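As a small sketch of the new keyword, using made-up wall times around the end of DST in US/Eastern (the dates are assumptions chosen for illustration):

import pandas as pd

# 01:00 occurs twice on 2011-11-06 in US/Eastern, so it is ambiguous
rng = pd.DatetimeIndex(['11/06/2011 00:00', '11/06/2011 01:00',
                        '11/06/2011 01:00', '11/06/2011 02:00'])
rng.tz_localize('US/Eastern', ambiguous='infer')   # previously: infer_dst=True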
The top-level pd.value_range has been deprecated and can be replaced by .describe() (GH8481)
The Index set operations + and - were deprecated in order to provide these for numeric type operations on
certain index types. + can be replaced by .union() or |, and - by .difference(). Further the method
name Index.diff() is deprecated and can be replaced by Index.difference() (GH8226)
# +
Index(['a','b','c']) + Index(['b','c','d'])
# should be replaced by
Index(['a','b','c']).union(Index(['b','c','d']))

# -
Index(['a','b','c']) - Index(['b','c','d'])
# should be replaced by
Index(['a','b','c']).difference(Index(['b','c','d']))
The infer_types argument to read_html() now has no effect and is deprecated (GH7762, GH7032).
Removal of prior version deprecations/changes
Remove DataFrame.delevel method in favor of DataFrame.reset_index
1.10.3 Enhancements
Enhancements in the importing/exporting of Stata files:
Added support for bool, uint8, uint16 and uint32 datatypes in to_stata (GH7097, GH7365)
Added conversion option when importing Stata files (GH8527)
DataFrame.to_stata and StataWriter check string length for compatibility with limitations imposed
in dta files where fixed-width strings must contain 244 or fewer characters. Attempting to write Stata dta files
with strings longer than 244 characters raises a ValueError. (GH7858)
read_stata and StataReader can import missing data information into a DataFrame by setting
the argument convert_missing to True. When using this option, missing values are returned as
StataMissingValue objects and columns containing missing values have object data type. (GH8045)
Enhancements in the plotting functions:
Added layout keyword to DataFrame.plot. You can pass a tuple of (rows, columns), one of which
can be -1 to automatically infer (GH6667, GH8071).
Allow to pass multiple axes to DataFrame.plot, hist and boxplot (GH5353, GH6970, GH7069)
Added support for c, colormap and colorbar arguments for DataFrame.plot with
kind='scatter' (GH7780)
140
count
unique
top
freq
catA catB
24
24
2
4
foo
d
16
6
Without those arguments, describe will behave as before, including only numerical columns or, if none are,
only categorical columns. See also the docs
Added split as an option to the orient argument in pd.DataFrame.to_dict. (GH7840)
The get_dummies method can now be used on DataFrames. By default only categorical columns are encoded
as 0s and 1s, while other columns are left untouched.
In [104]: df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'],
   .....:                 'C': [1, 2, 3]})
   .....:

In [105]: pd.get_dummies(df)
Out[105]:
   C  A_a  A_b  B_b  B_c
0  1  1.0  0.0  0.0  1.0
1  2  0.0  1.0  0.0  1.0
2  3  1.0  0.0  1.0  0.0
pandas.tseries.holiday has added support for additional holidays and ways to observe holidays (GH7070)
pandas.tseries.holiday.Holiday now supports a list of offsets in Python3 (GH7070)
pandas.tseries.holiday.Holiday now supports a days_of_week parameter (GH7070)
GroupBy.nth() now supports selecting multiple nth values (GH7910)
In [106]: business_dates = date_range(start='4/1/2014', end='6/30/2014', freq='B')
In [107]: df = DataFrame(1, index=business_dates, columns=['a', 'b'])
# get the first, 4th, and last date index for each month
In [108]: df.groupby((df.index.year, df.index.month)).nth([0, 3, -1])
Out[108]:
        a  b
2014 4  1  1
     4  1  1
     4  1  1
     5  1  1
     5  1  1
     5  1  1
     6  1  1
     6  1  1
     6  1  1
In [114]: idx
Out[114]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='int64', fr
Added experimental compatibility with openpyxl for versions >= 2.0. The DataFrame.to_excel method
engine keyword now recognizes openpyxl1 and openpyxl2, which will explicitly require openpyxl v1
and v2 respectively, failing if the requested version is not available. The openpyxl engine is now a meta-engine that automatically uses whichever version of openpyxl is installed. (GH7177)
DataFrame.fillna can now accept a DataFrame as a fill value (GH8377)
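A minimal sketch, with made-up frames: missing entries are filled from the aligned entries of the other DataFrame.

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan], 'B': [np.nan, 4.0]})
other = pd.DataFrame({'A': [10.0, 20.0], 'B': [30.0, 40.0]})
df.fillna(other)   # NaNs are replaced by the values of `other` at the same labels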
Passing multiple levels to stack() will now work when multiple level numbers are passed (GH7660). See
Reshaping by stacking and unstacking.
set_names(), set_labels(), and set_levels() methods now take an optional level keyword argument to allow modification of specific level(s) of a MultiIndex. Additionally set_names() now accepts a
scalar string value when operating on an Index or on a specific level of a MultiIndex (GH7792)
Index.isin now supports a level argument to specify which index level to use for membership tests
(GH7892, GH7890)
In [1]: idx = MultiIndex.from_product([[0, 1], ['a', 'b', 'c']])
In [2]: idx.values
Out[2]: array([(0, 'a'), (0, 'b'), (0, 'c'), (1, 'a'), (1, 'b'), (1, 'c')], dtype=object)
In [3]: idx.isin(['a', 'c', 'e'], level=1)
Out[3]: array([ True, False,  True,  True, False,  True], dtype=bool)
In [124]: idx.drop_duplicates()
add copy=True argument to pd.concat to enable pass-through of complete blocks (GH8252)
Added support for numpy 1.8+ data types (bool_, int_, float_, string_) for conversion to R dataframe
(GH8400)
1.10.4 Performance
Performance improvements in DatetimeIndex.__iter__ to allow faster iteration (GH7683)
Performance improvements in Period creation (and PeriodIndex setitem) (GH5155)
Improvements in Series.transform for significant performance gains (revised) (GH6496)
Performance improvements in StataReader when reading large files (GH8040, GH8073)
Performance improvements in StataWriter when writing large files (GH8079)
Performance and memory usage improvements in multi-key groupby (GH8128)
Performance improvements in groupby .agg and .apply where builtins max/min were not mapped to
numpy/cythonized versions (GH7722)
Performance improvement in writing to sql (to_sql) of up to 50% (GH8208).
Performance benchmarking of groupby for large value of ngroups (GH6787)
Performance improvement in CustomBusinessDay, CustomBusinessMonth (GH8236)
Performance improvement for MultiIndex.values for multi-level indexes containing datetimes (GH8543)
Bug Fixes
Bug in DataFrame.plot with subplots=True may draw unnecessary minor xticks and yticks (GH7801)
Bug in StataReader which did not read variable labels in 117 files due to difference between Stata documentation and implementation (GH7816)
Bug in StataReader where strings were always converted to 244 characters-fixed width irrespective of underlying string size (GH7858)
Bug in DataFrame.plot and Series.plot may ignore rot and fontsize keywords (GH7844)
Bug in DatetimeIndex.value_counts doesn't preserve tz (GH7735)
Bug in PeriodIndex.value_counts results in Int64Index (GH7735)
Bug in DataFrame.join when doing left join on index and there are multiple matches (GH5391)
Bug in GroupBy.transform() where int groups with a transform that didn't preserve the index were incorrectly truncated (GH7972).
Bug in groupby where callable objects without name attributes would take the wrong path, and produce a
DataFrame instead of a Series (GH7929)
Bug in groupby error message when a DataFrame grouping column is duplicated (GH7511)
Bug in read_html where the infer_types argument forced coercion of date-likes incorrectly (GH7762,
GH7032).
Bug in Series.str.cat with an index which was filtered as to not include the first item (GH7857)
Bug in Timestamp cannot parse nanosecond from string (GH7878)
Bug in Timestamp with string offset and tz results incorrect (GH7833)
Bug in tslib.tz_convert and tslib.tz_convert_single may return different results (GH7798)
Bug in DatetimeIndex.intersection of non-overlapping timestamps with tz raises IndexError
(GH7880)
Bug in alignment with TimeOps and non-unique indexes (GH8363)
Bug in GroupBy.filter() where fast path vs. slow path made the filter return a non scalar value that
appeared valid but wasn't (GH7870).
Bug in date_range()/DatetimeIndex() when the timezone was inferred from input dates yet incorrect
times were returned when crossing DST boundaries (GH7835, GH7901).
Bug in to_excel() where a negative sign was being prepended to positive infinity and was absent for negative
infinity (GH7949)
Bug in area plot draws legend with incorrect alpha when stacked=True (GH8027)
Period and PeriodIndex addition/subtraction with np.timedelta64 results in incorrect internal representations (GH7740)
Bug in Holiday with no offset or observance (GH7987)
Bug in DataFrame.to_latex formatting when columns or index is a MultiIndex (GH7982).
Bug in DateOffset around Daylight Savings Time produces unexpected results (GH5175).
Bug in DataFrame.shift where empty columns would throw ZeroDivisionError on numpy 1.7
(GH8019)
Bug in installation where html_encoding/*.html wasn't installed and therefore some tests were not running correctly (GH7927).
Bug in read_html where bytes objects were not tested for in _read (GH7927).
Bug in DataFrame.stack() when one of the column levels was a datelike (GH8039)
Bug in broadcasting numpy scalars with DataFrame (GH8116)
Bug in pivot_table performed with nameless index and columns raises KeyError (GH8103)
Bug in DataFrame.plot(kind=scatter) draws points and errorbars with different colors when the
color is specified by c keyword (GH8081)
Bug in Float64Index where iat and at were not testing and were failing (GH8092).
Bug in DataFrame.boxplot() where y-limits were not set correctly when producing multiple axes
(GH7528, GH5517).
Bug in read_csv where line comments were not handled correctly given a custom line terminator or
delim_whitespace=True (GH8122).
Bug in read_html where empty tables caused a StopIteration (GH7575)
Bug in casting when setting a column in a same-dtype block (GH7704)
Bug in accessing groups from a GroupBy when the original grouper was a tuple (GH8121).
Bug in .at that would accept integer indexers on a non-integer index and do fallback (GH7814)
Bug with kde plot and NaNs (GH8182)
Bug in GroupBy.count with float32 data type where nan values were not excluded (GH8169).
Bug with stacked barplots and NaNs (GH8175).
Bug in resample with non evenly divisible offsets (e.g. 7s) (GH8371)
Bug in interpolation methods with the limit keyword when no values needed interpolating (GH7173).
Bug where col_space was ignored in DataFrame.to_string() when header=False (GH8230).
Bug with DatetimeIndex.asof incorrectly matching partial strings and returning the wrong date
(GH8245).
Bug in plotting methods modifying the global matplotlib rcParams (GH8242).
Bug in DataFrame.__setitem__ that caused errors when setting a dataframe column to a sparse array
(GH8131)
Bug where DataFrame.boxplot() failed when entire column was empty (GH8181).
Bug with messed variables in radviz visualization (GH8199).
Bug in to_clipboard that would clip long column data (GH8305)
Bug in DataFrame terminal display: Setting max_column/max_rows to zero did not trigger auto-resizing of
dfs to fit terminal width/height (GH7180).
Bug in OLS where running with cluster and nw_lags parameters did not work correctly, but also did not
throw an error (GH5884).
Bug in DataFrame.dropna that interpreted non-existent columns in the subset argument as the last column
(GH8303)
Bug in Index.intersection on non-monotonic non-unique indexes (GH8362).
Bug in masked series assignment where mismatching types would break alignment (GH8387)
Bug in NDFrame.equals gives false negatives with dtype=object (GH8437)
Bug in assignment with indexer where type diversity would break alignment (GH8258)
Bug in NDFrame.loc indexing when row/column names were lost when target was a list/ndarray (GH6552)
Regression in NDFrame.loc indexing when rows/columns were converted to Float64Index if target was an
empty list/ndarray (GH7774)
Bug in Series that allows it to be indexed by a DataFrame which has unexpected results. Such indexing is
no longer permitted (GH8444)
Bug in item assignment of a DataFrame with multi-index columns where right-hand-side columns were not
aligned (GH7655)
Suppress FutureWarning generated by NumPy when comparing object arrays containing NaN for equality
(GH7065)
Bug in DataFrame.eval() where the dtype of the not operator (~) was not correctly inferred as bool.
# new behaviour
In [1]: d + offsets.MonthEnd()
Out[1]: Timestamp('2014-01-31 09:00:00')
In [2]: d + offsets.MonthEnd(normalize=True)
Out[2]: Timestamp('2014-01-31 00:00:00')
Note that for the other offsets the default behaviour did not change.
Add back #N/A N/A as a default NA value in text parsing, (regression from 0.12) (GH5521)
Raise a TypeError on inplace-setting with a .where and a non np.nan value as this is inconsistent with a
set-item expression like df[mask] = None (GH7656)
1.11.2 Enhancements
Add dropna argument to value_counts and nunique (GH5569).
Add select_dtypes() method to allow selection of columns based on dtype (GH7316). See the docs.
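A minimal sketch with a made-up frame, showing dtype-based column selection:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [1.5, 2.5], 'c': ['x', 'y'], 'd': [True, False]})
df.select_dtypes(include=[np.number])   # just the numeric columns 'a' and 'b'
df.select_dtypes(exclude=['object'])    # everything except the string column 'c'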
All offsets support the normalize keyword to specify whether offsets.apply, rollforward
and rollback reset the time (hour, minute, etc.) or not (default False, preserves time) (GH7156):
In [3]: import pandas.tseries.offsets as offsets
In [4]: day = offsets.Day()
In [5]: day.apply(Timestamp('2014-01-01 09:00'))
Out[5]: Timestamp('2014-01-02 09:00:00')
In [6]: day = offsets.Day(normalize=True)
In [7]: day.apply(Timestamp('2014-01-01 09:00'))
Out[7]: Timestamp('2014-01-02 00:00:00')
In [9]: rng.tz
Out[9]: tzfile('/usr/share/zoneinfo/Europe/London')
1.11.3 Performance
Improvements in dtype inference for numeric operations, yielding performance gains for dtypes:
int64, timedelta64, datetime64 (GH7223)
Improvements in Series.transform for significant performance gains (GH6496)
Improvements in DataFrame.transform with ufuncs and built-in grouper functions for significant performance
gains (GH7383)
Regression in groupby aggregation of datetime64 dtypes (GH7555)
Improvements in MultiIndex.from_product for large iterables (GH7627)
1.11.4 Experimental
pandas.io.data.Options has a new method, get_all_data method, and now consistently returns a
multi-indexed DataFrame, see the docs. (GH5602)
io.gbq.read_gbq and io.gbq.to_gbq were refactored to remove the dependency on the Google
bq.py command line client. This submodule now uses httplib2 and the Google apiclient and
oauth2client API client libraries which should be more stable and, therefore, reliable than bq.py. See the
docs. (GH6937).
Bug in groupby .nth with a Series and integer-like column name (GH7559)
Bug in Series.get with a boolean accessor (GH7407)
Bug in value_counts where NaT did not qualify as missing (NaN) (GH7423)
Bug in to_timedelta that accepted invalid units and misinterpreted m/h (GH7611, GH6423)
Bug in line plot doesn't set correct xlim if secondary_y=True (GH7459)
Bug in grouped hist and scatter plots use old figsize default (GH7394)
Bug in plotting subplots with DataFrame.plot, hist clears passed ax even if the number of subplots is
one (GH7391).
Bug in plotting subplots with DataFrame.boxplot with by kw raises ValueError if the number of
subplots exceeds 1 (GH7391).
Bug in subplots displays ticklabels and labels in different rule (GH5897)
Bug in Panel.apply with a multi-index as an axis (GH7469)
Bug in DatetimeIndex.insert doesn't preserve name and tz (GH7299)
Bug in DatetimeIndex.asobject doesn't preserve name (GH7299)
Bug in multi-index slicing with datetimelike ranges (strings and Timestamps), (GH7429)
Bug in Index.min and max doesn't handle nan and NaT properly (GH7261)
Bug in PeriodIndex.min/max results in int (GH7609)
Bug in resample where fill_method was ignored if you passed how (GH2073)
Bug in TimeGrouper doesn't exclude column specified by key (GH7227)
Bug in DataFrame and Series bar and barh plot raises TypeError when bottom and left keyword is
specified (GH7226)
Bug in DataFrame.hist raises TypeError when it contains non numeric column (GH7277)
Bug in Index.delete does not preserve name and freq attributes (GH7302)
Bug in DataFrame.query()/eval where local string variables with the @ sign were being treated as
temporaries attempting to be deleted (GH7300).
Bug in Float64Index which didn't allow duplicates (GH7149).
Bug in DataFrame.replace() where truthy values were being replaced (GH7140).
Bug in StringMethods.extract() where a single match group Series would use the matchers name
instead of the group name (GH7313).
Bug in isnull() when mode.use_inf_as_null == True where isnull wouldn't test True when it
encountered an inf/-inf (GH7315).
Bug in inferred_freq results in None for eastern hemisphere timezones (GH7310)
Bug in Easter returns incorrect date when offset is negative (GH7195)
Bug in broadcasting with .div, integer dtypes and divide-by-zero (GH7325)
Bug in CustomBusinessDay.apply raises NameError when np.datetime64 object is passed
(GH7196)
Bug in MultiIndex.append, concat and pivot_table don't preserve timezone (GH6606)
Bug in .loc with a list of indexers on a single-multi index level (that is not nested) (GH7349)
Bug in Series.map when mapping a dict with tuple keys of different lengths (GH7333)
Bug all StringMethods now work on empty Series (GH7242)
Fix delegation of read_sql to read_sql_query when query does not contain select (GH7324).
Bug where a string column name assignment to a DataFrame with a Float64Index raised a TypeError
during a call to np.isnan (GH7366).
Bug where NDFrame.replace() didnt correctly replace objects with Period values (GH7379).
Bug in .ix getitem should always return a Series (GH7150)
Bug in multi-index slicing with incomplete indexers (GH7399)
Bug in multi-index slicing with a step in a sliced level (GH7400)
Bug where negative indexers in DatetimeIndex were not correctly sliced (GH7408)
Bug where NaT wasn't repr'd correctly in a MultiIndex (GH7406, GH7409).
Bug where bool objects were converted to nan in convert_objects (GH7416).
Bug in quantile ignoring the axis keyword argument (GH7306)
Bug where nanops._maybe_null_out doesn't work with complex numbers (GH7353)
Bug in several nanops functions when axis==0 for 1-dimensional nan arrays (GH7354)
Bug where nanops.nanmedian doesn't work when axis==None (GH7352)
Bug where nanops._has_infs doesn't work with many dtypes (GH7357)
Bug in StataReader.data where reading a 0-observation dta failed (GH7369)
Bug in StataReader when reading Stata 13 (117) files containing fixed width strings (GH7360)
Bug in StataWriter where encoding was ignored (GH7286)
Bug in DatetimeIndex comparison doesn't handle NaT properly (GH7529)
Bug in passing input with tzinfo to some offsets apply, rollforward or rollback resets tzinfo or
raises ValueError (GH7465)
Bug in DatetimeIndex.to_period, PeriodIndex.asobject, PeriodIndex.to_timestamp
doesn't preserve name (GH7485)
Bug in DatetimeIndex.to_period and PeriodIndex.to_timestamp handle NaT incorrectly
(GH7228)
Bug in offsets.apply, rollforward and rollback may return normal datetime (GH7502)
Bug in resample raises ValueError when target contains NaT (GH7227)
Bug in Timestamp.tz_localize resets nanosecond info (GH7534)
Bug in DatetimeIndex.asobject raises ValueError when it contains NaT (GH7539)
Bug in Timestamp.__new__ doesn't preserve nanosecond properly (GH7610)
Bug in Index.astype(float) where it would return an object dtype Index (GH7464).
Bug in DataFrame.reset_index loses tz (GH3950)
Bug in DatetimeIndex.freqstr raises AttributeError when freq is None (GH7606)
Bug in GroupBy.size created by TimeGrouper raises AttributeError (GH7453)
Bug in single column bar plot is misaligned (GH7498).
Bug in area plot with tz-aware time series raises ValueError (GH7471)
Bug in non-monotonic Index.union may preserve name incorrectly (GH7458)
Bug in DatetimeIndex.intersection doesnt preserve timezone (GH4690)
Bug in rolling_var where a window larger than the array would raise an error (GH7297)
Bug with last plotted timeseries dictating xlim (GH2960)
Bug with secondary_y axis not being considered for timeseries xlim (GH3490)
Bug in Float64Index assignment with a non scalar indexer (GH7586)
Bug in pandas.core.strings.str_contains does not properly match in a case insensitive fashion
when regex=False and case=False (GH7505)
Bug in expanding_cov, expanding_corr, rolling_cov, and rolling_corr for two arguments
with mismatched index (GH7512)
Bug in to_sql taking the boolean column as text column (GH7678)
Bug in grouped hist doesn't handle rot kw and sharex kw properly (GH7234)
Bug in .loc performing fallback integer indexing with object dtype indices (GH7496)
Bug (regression) in PeriodIndex constructor when passed Series objects (GH7701).
Known Issues
Bug Fixes
Warning: In 0.14.0 all NDFrame based containers have undergone significant internal refactoring. Before that
each block of homogeneous data had its own labels and extra care was necessary to keep those in sync with the
parent container's labels. This should not have any visible user/API behavior changes (GH6745)
Slicing with negative start, stop & step values handles corner cases better (GH6531):
df.iloc[:-len(df)] is now empty
df.iloc[len(df)::-1] now enumerates all elements in reverse
The DataFrame.interpolate() keyword downcast default has been changed from infer to None.
This is to preserve the original dtype unless explicitly requested otherwise (GH6290).
When converting a DataFrame to HTML it used to return Empty DataFrame. This special case has been removed;
instead a header with the column names is returned (GH6062).
Series and Index now internally share more common operations, e.g.
factorize(), nunique(), value_counts() are now supported on Index types as well.
The Series.weekday property is removed from Series for API consistency. Using a
DatetimeIndex/PeriodIndex method on a Series will now raise a TypeError. (GH4551, GH4056,
GH5519, GH6380, GH7206).
Add is_month_start, is_month_end, is_quarter_start, is_quarter_end,
is_year_start, is_year_end accessors for DatetimeIndex / Timestamp which return a
boolean array of whether the timestamp(s) are at the start/end of the month/quarter/year defined by the
frequency of the DatetimeIndex / Timestamp (GH4565, GH6998)
Local variable usage has changed in pandas.eval()/DataFrame.eval()/DataFrame.query()
(GH5987). For the DataFrame methods, two things have changed:
Column names are now given precedence over locals
Local variables must be referred to explicitly. This means that even if you have a local variable that is not
a column, you must still refer to it with the @ prefix.
You can have an expression like df.query('@a < a') with no complaints from pandas about ambiguity of the name a.
The top-level pandas.eval() function does not allow you to use the @ prefix and provides you with
an error message telling you so.
NameResolutionError was removed because it isn't necessary anymore.
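A minimal sketch of the new resolution rule, with a made-up frame and a local variable that shadows a column name:

import pandas as pd

a = 3                                        # local variable shadowing the column 'a'
df = pd.DataFrame({'a': [1, 2, 5], 'b': [4, 5, 6]})
df.query('a > 2')     # 'a' resolves to the column
df.query('a > @a')    # '@a' resolves to the local variable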
Define and document the order of column vs index names in query/eval (GH6676)
concat will now concatenate mixed Series and DataFrames using the Series name or numbering columns as
needed (GH2385). See the docs
Slicing and advanced/boolean indexing operations on Index classes as well as Index.delete() and
Index.drop() methods will no longer change the type of the resulting index (GH6440, GH7040)
In [6]: i = pd.Index([1, 2, 3, 'a' , 'b', 'c'])
In [7]: i[[0,1,2]]
Out[7]: Index([1, 2, 3], dtype='object')
In [8]: i.drop(['a', 'b', 'c'])
Out[8]: Index([1, 2, 3], dtype='object')
In [9]: i[[0,1,2]].astype(np.int_)
Out[9]: Int64Index([1, 2, 3], dtype='int64')
set_index no longer converts MultiIndexes to an Index of tuples. For example, the old behavior returned an
Index in this case (GH6459):
pairwise keyword was added to the statistical moment functions rolling_cov, rolling_corr,
ewmcov, ewmcorr, expanding_cov, expanding_corr to allow the calculation of moving window
covariance and correlation matrices (GH4950). See Computing rolling pairwise covariances and correlations
in the docs.
In [1]: df = DataFrame(np.random.randn(10,4), columns=list('ABCD'))

In [4]: covs = pd.rolling_cov(df[['A','B','C']], df[['B','C','D']], 5, pairwise=True)

In [5]: covs[df.index[-1]]
Out[5]:
          B         C         D
A  0.035310  0.326593 -0.505430
Series.iteritems() is now lazy (returns an iterator rather than a list). This was the documented behavior
prior to 0.14. (GH6760)
Added nunique and value_counts functions to Index for counting unique elements. (GH6734)
stack and unstack now raise a ValueError when the level keyword refers to a non-unique item in the
Index (previously raised a KeyError). (GH6738)
drop unused order argument from Series.sort; args now are in the same order as Series.order; add
na_position arg to conform to Series.order (GH6847)
default sorting algorithm for Series.order is now quicksort, to conform with Series.sort (and
numpy defaults)
add inplace keyword to Series.order/sort to make them inverses (GH6859)
DataFrame.sort now places NaNs at the beginning or end of the sort according to the na_position
parameter. (GH3917)
accept TextFileReader in concat, which was affecting a common user idiom (GH6583), this was a
regression from 0.13.1
Added factorize functions to Index and Series to get indexer and unique values (GH7090)
describe on a DataFrame with a mix of Timestamp and string like objects returns a different Index (GH7088).
Previously the index was unintentionally sorted.
Arithmetic operations with only bool dtypes now give a warning indicating that they are evaluated in Python
space for +, -, and * operations and raise for all others (GH7011, GH6762, GH7015, GH7210)
x = Series(np.random.rand(10) > 0.5)
y = True
x + y   # warns: evaluated in Python space; use x | y instead
x / y   # raises, as division of bools is not defined
In HDFStore, select_as_multiple will always raise a KeyError, when a key or the selector is not
found (GH6177)
df[col] = value and df.loc[:,col] = value are now completely equivalent; previously the
.loc would not necessarily coerce the dtype of the resultant series (GH6149)
dtypes and ftypes now return a series with dtype=object on empty containers (GH5740)
df.to_csv will now return a string of the CSV data if neither a target path nor a buffer is provided (GH6061)
pd.infer_freq() will now raise a TypeError if given an invalid Series/Index type (GH6407,
GH6463)
A tuple passed to DataFrame.sort_index will be interpreted as the levels of the index, rather than requiring
a list of tuples (GH4370)
all offset operations now return Timestamp types (rather than datetime), Business/Week frequencies were
incorrect (GH4069)
to_excel now converts np.inf into a string representation, customizable by the inf_rep keyword argument (Excel has no native inf representation) (GH6782)
Replace pandas.compat.scipy.scoreatpercentile with numpy.percentile (GH6810)
In the current version, large DataFrames are centrally truncated, showing a preview of head and tail in both
dimensions.
allow option truncate for display.show_dimensions to only show the dimensions if the frame is
truncated (GH6547).
The default for display.show_dimensions will now be truncate. This is consistent with how Series
display their length.
Regression in the display of a MultiIndexed Series with display.max_rows is less than the length of the
series (GH7101)
Fixed a bug in the HTML repr of a truncated Series or DataFrame not showing the class name with the large_repr
set to info (GH7105)
The verbose keyword in DataFrame.info(), which controls whether to shorten the info representation,
is now None by default. This will follow the global setting in display.max_info_columns. The global
setting can be overridden with verbose=True or verbose=False.
Fixed a bug with the info repr not honoring the display.max_info_columns setting (GH6939)
Offset/freq info now in Timestamp __repr__ (GH4553)
Raise ValueError when sep is specified with delim_whitespace=True in
read_csv()/read_table() (GH6607)
Raise ValueError when engine='c' is specified with unsupported options in
read_csv()/read_table() (GH6607)
Raise ValueError when fallback to python parser causes options to be ignored (GH6607)
Produce ParserWarning on fallback to python parser when no options are ignored (GH6607)
groupby nth now reduces by default; filtering can be achieved by passing as_index=False, with an
optional dropna argument to ignore NaN. See the docs.
Reducing
In [24]: df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
In [25]: g = df.groupby('A')
In [26]: g.nth(0)
Out[26]:
B
A
1 NaN
5 6.0
# this is equivalent to g.first()
In [27]: g.nth(0, dropna='any')
Out[27]:
B
A
1 4.0
5 6.0
# this is equivalent to g.last()
Filtering
In [29]: gf = df.groupby('A',as_index=False)
In [30]: gf.nth(0)
Out[30]:
   A    B
0  1  NaN
2  5  6.0

In [31]: gf.nth(0, dropna='any')
Out[31]:
     A    B
A
1    1  4.0
5    5  6.0
groupby will now not return the grouped column for non-cython functions (GH5610, GH5614, GH6732), as its
already the index
In [32]: df = DataFrame([[1, np.nan], [1, 4], [5, 6], [5, 8]], columns=['A', 'B'])
In [33]: g = df.groupby('A')
In [34]: g.count()
Out[34]:
B
A
1 1
5 2
In [35]: g.describe()
Out[35]:
                B
A
1 count  1.000000
  mean   4.000000
  std         NaN
  min    4.000000
  25%         NaN
  50%         NaN
  75%         NaN
...           ...
5 mean   7.000000
  std    1.414214
  min    6.000000
  25%    6.500000
  50%    7.000000
  75%    7.500000
  max    8.000000

[16 rows x 1 columns]
passing as_index will leave the grouped column in-place (this is not a change in 0.14.0)
In [36]: df = DataFrame([[1, np.nan], [1, 4], [5, 6], [5, 8]], columns=['A', 'B'])
In [37]: g = df.groupby('A',as_index=False)
In [38]: g.count()
Out[38]:
A B
0 1 1
1 5 2
In [39]: g.describe()
Out[39]:
           A         B
0 count  2.0  1.000000
  mean   1.0  4.000000
  std    0.0       NaN
  min    1.0  4.000000
  25%    1.0       NaN
  50%    1.0       NaN
  75%    1.0       NaN
...      ...       ...
1 mean   5.0  7.000000
  std    0.0  1.414214
  min    5.0  6.000000
  25%    5.0  6.500000
  50%    5.0  7.000000
  75%    5.0  7.500000
  max    5.0  8.000000

[16 rows x 2 columns]
Allow specification of a more complex groupby via pd.Grouper, such as grouping by a Time and a string
field simultaneously. See the docs. (GH3794)
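A minimal sketch with made-up data (the column names and values below are assumptions for illustration): a pd.Grouper keyed on a datetime column can be combined with an ordinary column in the same groupby.

import pandas as pd

df = pd.DataFrame({'Branch': list('AABBA'),
                   'Date': pd.date_range('2014-01-01', periods=5, freq='10D'),
                   'Quantity': [1, 3, 5, 8, 2]})
# group by month (derived from 'Date') and by 'Branch' simultaneously
df.groupby([pd.Grouper(freq='1M', key='Date'), 'Branch'])['Quantity'].sum()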
Better propagation/preservation of Series names when performing groupby operations:
SeriesGroupBy.agg will ensure that the name attribute of the original series is propagated to the
result (GH6265).
If the function provided to GroupBy.apply returns a named series, the name of the series will be kept as
the name of the column index of the DataFrame returned by GroupBy.apply (GH6124). This facilitates
DataFrame.stack operations where the name of the column index is used as the name of the inserted
column containing the pivoted data.
1.12.5 SQL
The SQL reading and writing functions now support more database flavors through SQLAlchemy (GH2717, GH4163,
GH5950, GH6292). All databases supported by SQLAlchemy can be used, such as PostgreSQL, MySQL, Oracle,
Microsoft SQL server (see documentation of SQLAlchemy on included dialects).
The functionality of providing DBAPI connection objects will only be supported for sqlite3 in the future. The
mysql flavor is deprecated.
The new functions read_sql_query() and read_sql_table() are introduced. The function read_sql()
is kept as a convenience wrapper around the other two and will delegate to the specific function depending on the provided
input (database table name or SQL query).
In practice, you have to provide a SQLAlchemy engine to the sql functions. To connect with SQLAlchemy you use
the create_engine() function to create an engine object from database URI. You only need to create the engine
once per database you are connecting to. For an in-memory sqlite database:
In [40]: from sqlalchemy import create_engine
# Create your connection.
In [41]: engine = create_engine('sqlite:///:memory:')
This engine can then be used to write or read data to/from this database:
In [42]: df = pd.DataFrame({'A': [1,2,3], 'B': ['a', 'b', 'c']})
In [43]: df.to_sql('db_table', engine, index=False)
You can read data from a database by specifying the table name:
In [44]: pd.read_sql_table('db_table', engine)
Out[44]:
A B
0 1 a
1 2 b
2 3 c
As usual, both sides of the slicers are included as this is label indexing.
See the docs See also issues (GH6134, GH4036, GH3057, GH2598, GH5641, GH7106)
Warning: You should specify all axes in the .loc specifier, meaning the indexer for the index and for the
columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing both
axes, rather than into, say, the MultiIndex for the rows.
You should do this:
df.loc[(slice('A1','A3'),.....),:]
Warning: You will need to make sure that the selection axes are fully lexsorted!
In [46]: def mklbl(prefix,n):
   ....:     return ["%s%s" % (prefix,i) for i in range(n)]
   ....:
lvl0           a         b
lvl1         bar  foo  bah  foo
A0 B0 C0 D0    1    0    3    2
         D1    5    4    7    6
      C1 D0    9    8   11   10
         D1   13   12   15   14
      C2 D0   17   16   19   18
         D1   21   20   23   22
      C3 D0   25   24   27   26
...          ...  ...  ...  ...
A3 B1 C0 D1  229  228  231  230
      C1 D0  233  232  235  234
         D1  237  236  239  238
      C2 D0  241  240  243  242
         D1  245  244  247  246
      C3 D0  249  248  251  250
         D1  253  252  255  254

[64 rows x 4 columns]
It is possible to perform quite complicated selections using this method on multiple axes at the same time.
In [54]: df.loc['A1',(slice(None),'foo')]
Out[54]:
lvl0           a    b
lvl1         foo  foo
B0 C0 D0      64   66
      D1      68   70
   C1 D0      72   74
      D1      76   78
   C2 D0      80   82
      D1      84   86
   C3 D0      88   90
...          ...  ...
B1 C0 D1     100  102
   C1 D0     104  106
      D1     108  110
   C2 D0     112  114
      D1     116  118
   C3 D0     120  122
      D1     124  126

[16 rows x 2 columns]
Using a boolean indexer you can provide selection related to the values.
In [56]: mask = df[('a','foo')]>200
In [57]: df.loc[idx[mask,:,['C1','C3']],idx[:,'foo']]
Out[57]:
lvl0           a    b
lvl1         foo  foo
A3 B0 C1 D1  204  206
      C3 D0  216  218
         D1  220  222
   B1 C1 D0  232  234
         D1  236  238
      C3 D0  248  250
         D1  252  254
You can also specify the axis argument to .loc to interpret the passed slicers on a single axis.
In [58]: df.loc(axis=0)[:,:,['C1','C3']]
Out[58]:
lvl0           a         b
lvl1         bar  foo  bah  foo
A0 B0 C1 D0    9    8   11   10
         D1   13   12   15   14
      C3 D0   25   24   27   26
         D1   29   28   31   30
   B1 C1 D0   41   40   43   42
         D1   45   44   47   46
      C3 D0   57   56   59   58
...          ...  ...  ...  ...
A3 B0 C1 D1  205  204  207  206
      C3 D0  217  216  219  218
         D1  221  220  223  222
   B1 C1 D0  233  232  235  234
         D1  237  236  239  238
      C3 D0  249  248  251  250
         D1  253  252  255  254

[32 rows x 4 columns]
You can also use these slicers to set values; a scalar right-hand side (for example, -10) is broadcast to the
selected C1/C3 rows, and an alignable right-hand side (for example, the selection multiplied by 1000) is aligned
to the selection before assignment.
1.12.7 Plotting
Hexagonal bin plots from DataFrame.plot with kind=hexbin (GH5478), See the docs.
DataFrame.plot and Series.plot now supports area plot with specifying kind=area (GH6656),
See the docs
Pie plots from Series.plot and DataFrame.plot with kind=pie (GH6976), See the docs.
Plotting with Error Bars is now supported in the .plot method of DataFrame and Series objects (GH3796,
GH6834), See the docs.
DataFrame.plot and Series.plot now support a table keyword for plotting matplotlib.Table,
see the docs. The table keyword can receive the following values:
False: Do nothing (default).
True: Draw a table using the DataFrame or Series on which the plot method is called. Data will be transposed to
meet matplotlib's default layout.
DataFrame or Series: Draw a matplotlib.table using the passed data. The data will be
drawn as displayed in the print method (not transposed automatically).
Also, the helper function pandas.tools.plotting.table is added to create a table from DataFrame and Series, and
add it to a matplotlib.Axes.
plot(legend='reverse') will now reverse the order of legend labels for most plot kinds. (GH6014)
Line plot and area plot can be stacked by stacked=True (GH6656)
Following keywords are now acceptable for DataFrame.plot() with kind=bar and kind=barh:
width: Specify the bar width. In previous versions, static value 0.5 was passed to matplotlib and it cannot
be overwritten. (GH6604)
align: Specify the bar alignment. Default is center (different from matplotlib). In previous versions,
pandas passed align='edge' to matplotlib and adjusted the location to center by itself; as a result the align
keyword was not applied as expected. (GH4525)
position: Specify relative alignments for bar plot layout. From 0 (left/bottom-end) to 1(right/top-end).
Default is 0.5 (center). (GH6604)
Because of the default align value change, coordinates of bar plots are now located on integer values (0.0, 1.0,
2.0 ...). This is intended to make bar plots be located on the same coordinates as line plots. However, bar plots
may differ unexpectedly when you manually adjust the bar location or drawing area, such as using set_xlim,
set_ylim, etc. In these cases, please modify your script to meet the new coordinates.
The parallel_coordinates() function now takes argument color instead of colors.
A
FutureWarning is raised to alert that the old colors argument will not be supported in a future release.
(GH6956)
The parallel_coordinates() and andrews_curves() functions now take positional argument
frame instead of data. A FutureWarning is raised if the old data argument is used by name. (GH6956)
DataFrame.boxplot() now supports layout keyword (GH6769)
DataFrame.boxplot() has a new keyword argument, return_type. It accepts dict, axes, or
both, in which case a namedtuple with the matplotlib axes and a dict of matplotlib Lines is returned.
1.12.9 Deprecations
The pivot_table()/DataFrame.pivot_table() and crosstab() functions now take arguments
index and columns instead of rows and cols. A FutureWarning is raised to alert that the old rows
and cols arguments will not be supported in a future release (GH5505)
The DataFrame.drop_duplicates() and DataFrame.duplicated() methods now take argument
subset instead of cols to better align with DataFrame.dropna(). A FutureWarning is raised to
alert that the old cols arguments will not be supported in a future release (GH6680)
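As a minimal sketch of the new keyword, with a made-up frame:

import pandas as pd

df = pd.DataFrame({'k': [1, 1, 2], 'v': [10, 20, 30]})
df.drop_duplicates(subset='k')   # keep the first row for each value of 'k'
df.duplicated(subset='k')        # boolean mask marking the later duplicates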
The DataFrame.to_csv() and DataFrame.to_excel() functions now take the argument columns instead of cols. A FutureWarning is raised to alert that the old cols arguments will not be supported in a
future release (GH6645)
Indexers will warn FutureWarning when used with a scalar indexer and a non-floating point Index (GH4892,
GH6960)
In [2]: Series(1,np.arange(5)).iloc[3.0]
pandas/core/index.py:469: FutureWarning: scalar indexers for index type Int64Index shoul
Out[2]: 1
In [3]: Series(1,np.arange(5)).iloc[3.0:4]
pandas/core/index.py:527: FutureWarning: slice indexers when using iloc should be intege
Out[3]:
3
1
dtype: int64
# these are Float64Indexes, so integer or floating point is acceptable
In [4]: Series(1,np.arange(5.))[3]
Out[4]: 1
In [5]: Series(1,np.arange(5.))[3.0]
Out[6]: 1
1.12.11 Enhancements
DataFrame and Series will create a MultiIndex object if passed a tuples dict, See the docs (GH3323)
In [65]: Series({('a', 'b'): 1, ('a', 'a'): 0,
   ....:         ('a', 'c'): 2, ('b', 'a'): 3, ('b', 'b'): 4})
   ....:
Out[65]:
a  a    0
   b    1
   c    2
b  a    3
   b    4
dtype: int64
In [66]: DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},
   ....:            ('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},
   ....:            ('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},
   ....:            ('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},
   ....:            ('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})
   ....:
Out[66]:
       a              b
       a    b    c    a     b
A B  4.0  1.0  5.0  8.0  10.0
  C  3.0  2.0  6.0  7.0   NaN
  D  NaN  NaN  NaN  NaN   9.0
In [70]: portfolio
Out[70]:
                                                     name  share
household_id asset_id
1            nl0000301109                        ABN Amro   1.00
2            nl0000289783                          Robeco   0.40
             gb00b03mlx29               Royal Dutch Shell   0.60
3            gb00b03mlx29               Royal Dutch Shell   0.15
             lu0197800237  AAB Eastern Europe Equity Fund   0.60
             nl0000289965          Postbank BioTech Fonds   0.25
4            NaN                                      NaN   1.00
quotechar, doublequote, and escapechar can now be specified when using DataFrame.to_csv
(GH5414, GH4528)
Partially sort by only the specified levels of a MultiIndex with the sort_remaining boolean kwarg.
(GH3984)
Added to_julian_date to Timestamp and DatetimeIndex. The Julian Date is used primarily in
astronomy and represents the number of days from noon, January 1, 4713 BC. Because nanoseconds are used
to define the time in pandas, the actual range of dates that you can use is 1678 AD to 2262 AD. (GH4041)
DataFrame.to_stata will now check data for compatibility with Stata data types and will upcast when
needed. When it is not possible to losslessly upcast, a warning is issued (GH6327)
DataFrame.to_stata and StataWriter will accept keyword arguments time_stamp and data_label
which allow the time stamp and dataset label to be set when creating a file. (GH6545)
pandas.io.gbq now handles reading unicode strings properly. (GH5940)
Holidays Calendars are now available and can be used with the CustomBusinessDay offset (GH6719)
Float64Index is now backed by a float64 dtype ndarray instead of an object dtype array (GH6471).
Implemented Panel.pct_change (GH6904)
Added how option to rolling-moment functions to dictate how to handle resampling; rolling_max() defaults to max, rolling_min() defaults to min, and all others default to mean (GH6297)
CustomBusinessMonthBegin and CustomBusinessMonthEnd are now available (GH6866)
Series.quantile() and DataFrame.quantile() now accept an array of quantiles.
describe() now accepts an array of percentiles to include in the summary statistics (GH4196)
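A small sketch of both of these, on a made-up Series:

import pandas as pd

s = pd.Series(range(10))
s.quantile([0.25, 0.5, 0.75])                     # a Series indexed by the requested quantiles
s.describe(percentiles=[0.05, 0.25, 0.75, 0.95])  # extra percentiles in the summary statistics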
pivot_table can now accept Grouper by index and columns keywords (GH6913)
In [72]: import datetime
In [73]: df = DataFrame({
   ....:     'Branch' : 'A A A A A B'.split(),
   ....:     'Buyer': 'Carl Mark Carl Carl Joe Joe'.split(),
   ....:     'Quantity': [1, 3, 5, 1, 8, 1],
   ....:     'Date' : [datetime.datetime(2013,11,1,13,0), datetime.datetime(2013,9,1,13,5),
   ....:               datetime.datetime(2013,10,1,20,0), datetime.datetime(2013,10,2,10,0),
   ....:               datetime.datetime(2013,11,1,20,0), datetime.datetime(2013,10,2,10,0)],
   ....:     'PayDay' : [datetime.datetime(2013,10,4,0,0), datetime.datetime(2013,10,15,13,5),
   ....:                 datetime.datetime(2013,9,5,20,0), datetime.datetime(2013,11,2,10,0),
   ....:                 datetime.datetime(2013,10,7,20,0), datetime.datetime(2013,9,5,10,0)]})
   ....:
In [74]: df
Out[74]:
  Branch Buyer                Date              PayDay  Quantity
0      A  Carl 2013-11-01 13:00:00 2013-10-04 00:00:00         1
1      A  Mark 2013-09-01 13:05:00 2013-10-15 13:05:00         3
2      A  Carl 2013-10-01 20:00:00 2013-09-05 20:00:00         5
3      A  Carl 2013-10-02 10:00:00 2013-11-02 10:00:00         1
4      A   Joe 2013-11-01 20:00:00 2013-10-07 20:00:00         8
5      B   Joe 2013-10-02 10:00:00 2013-09-05 10:00:00         1
In [78]: ps
Out[78]:
2013-01-01 09:00    0.015696
2013-01-01 10:00   -2.242685
2013-01-01 11:00    1.150036
2013-01-01 12:00    0.991946
2013-01-01 13:00    0.953324
2013-01-01 14:00   -2.021255
2013-01-01 15:00   -0.334077
...
2013-01-05 06:00    0.566534
2013-01-05 07:00    0.503592
2013-01-05 08:00    0.285296
2013-01-05 09:00    0.484288
2013-01-05 10:00    1.363482
2013-01-05 11:00   -0.781105
2013-01-05 12:00   -0.468018
Freq: H, dtype: float64
In [79]: ps['2013-01-02']
Out[79]:
2013-01-02 00:00    0.553439
2013-01-02 01:00    1.318152
2013-01-02 02:00   -0.469305
2013-01-02 03:00    0.675554
2013-01-02 04:00   -1.817027
2013-01-02 05:00   -0.183109
2013-01-02 06:00    1.058969
...
2013-01-02 17:00    0.076200
2013-01-02 18:00   -0.566446
2013-01-02 19:00    0.036142
2013-01-02 20:00   -2.074978
2013-01-02 21:00    0.247792
2013-01-02 22:00   -0.897157
2013-01-02 23:00   -0.136795
Freq: H, dtype: float64
read_excel can now read milliseconds in Excel dates and times with xlrd >= 0.9.3. (GH5945)
pd.stats.moments.rolling_var now uses Welford's method for increased numerical stability
(GH6817)
pd.expanding_apply and pd.rolling_apply now take args and kwargs that are passed on to the func (GH6289)
DataFrame.rank() now has a percentage rank option (GH5971)
Series.rank() now has a percentage rank option (GH5971)
Series.rank() and DataFrame.rank() now accept method=dense for ranks without gaps
(GH6514)
Support passing encoding with xlwt (GH3710)
Refactor Block classes removing Block.items attributes to avoid duplication in item handling (GH6745,
GH6988).
Testing statements updated to use specialized asserts (GH6175)
1.12.12 Performance
Performance improvement when converting DatetimeIndex to floating ordinals using
DatetimeConverter (GH6636)
1.12.13 Experimental
There are no experimental changes in 0.14.0
Bug in io.data.DataReader when passed "F-F_Momentum_Factor" and data_source="famafrench" (GH6460)
Bug in read_html tests where redirected invalid URLs would make one test fail (GH6445).
Bug in multi-axis indexing using .loc on non-unique indices (GH6504)
Bug that caused _ref_locs corruption when slice indexing across columns axis of a DataFrame (GH6525)
Regression from 0.13 in the treatment of numpy datetime64 non-ns dtypes in Series creation (GH6529)
.names attribute of MultiIndexes passed to set_index are now preserved (GH6459).
Bug in setitem with a duplicate index and an alignable rhs (GH6541)
Bug in setitem with .loc on mixed integer Indexes (GH6546)
Bug in pd.read_stata which would use the wrong data types and missing values (GH6327)
Bug in DataFrame.to_stata that led to data loss in certain cases, and could export data using the wrong
data types and missing values (GH6335)
StataWriter replaces missing values in string columns by empty string (GH6802)
Inconsistent types in Timestamp addition/subtraction (GH6543)
Bug in preserving frequency across Timestamp addition/subtraction (GH4547)
Bug in empty list lookup caused IndexError exceptions (GH6536, GH6551)
Series.quantile raising on an object dtype (GH6555)
Bug in .xs with a nan in level when dropped (GH6574)
Bug in fillna with method=bfill/ffill and datetime64[ns] dtype (GH6587)
Bug in sql writing with mixed dtypes possibly leading to data loss (GH6509)
Bug in Series.pop (GH6600)
Bug in iloc indexing when positional indexer matched Int64Index of the corresponding axis and no reordering happened (GH6612)
Bug in fillna with limit and value specified
Bug in DataFrame.to_stata when columns have non-string names (GH4558)
Bug in compat with np.compress, surfaced in (GH6658)
Bug in binary operations with a rhs of a Series not aligning (GH6681)
Bug in DataFrame.to_stata which incorrectly handles nan values and ignores with_index keyword
argument (GH6685)
Bug in resample with extra bins when using an evenly divisible frequency (GH4076)
Bug in consistency of groupby aggregation when passing a custom function (GH6715)
Bug in resample when how=None resample freq is the same as the axis frequency (GH5955)
Bug in downcasting inference with empty arrays (GH6733)
Bug in obj.blocks on sparse containers dropping all but the last items of same for dtype (GH6748)
Bug in unpickling NaT (NaTType) (GH4606)
Bug in DataFrame.replace() where regex metacharacters were being treated as regexs even when
regex=False (GH6777).
Bug in timedelta ops on 32-bit platforms (GH6808)
Bug in setting a tz-aware index directly via .index (GH6785)
Bug in expressions.py where numexpr would try to evaluate arithmetic ops (GH6762).
Bug in Makefile where it didn't remove Cython generated C files with make clean (GH6768)
Bug with numpy < 1.7.2 when reading long strings from HDFStore (GH6166)
Bug in DataFrame._reduce where non bool-like (0/1) integers were being converted into bools. (GH6806)
Regression from 0.13 with fillna and a Series on datetime-like (GH6344)
Bug in adding np.timedelta64 to DatetimeIndex with timezone outputs incorrect results (GH6818)
Bug in DataFrame.replace() where changing a dtype through replacement would only replace the first
occurrence of a value (GH6689)
Better error message when passing a frequency of MS in Period construction (GH5332)
Bug in Series.__unicode__ when max_rows=None and the Series has more than 1000 rows. (GH6863)
Bug in groupby.get_group where a datelike wasn't always accepted (GH5267)
Bug in GroupBy.get_group created by TimeGrouper raises AttributeError (GH6914)
Bug in DatetimeIndex.tz_localize and DatetimeIndex.tz_convert converting NaT incorrectly (GH5546)
Bug in arithmetic operations affecting NaT (GH6873)
Bug in Series.str.extract where the resulting Series from a single group match wasn't renamed to
the group name
Bug in DataFrame.to_csv where setting index=False ignored the header kwarg (GH6186)
Bug in DataFrame.plot and Series.plot, where the legend behave inconsistently when plotting to the
same axes repeatedly (GH6678)
Internal tests for patching __finalize__ / bug in merge not finalizing (GH6923, GH6927)
accept TextFileReader in concat, which was affecting a common user idiom (GH6583)
Bug in C parser with leading whitespace (GH3374)
Bug in C parser with delim_whitespace=True and \r-delimited lines
Bug in python parser with explicit multi-index in row following column header (GH6893)
Bug in Series.rank and DataFrame.rank that caused small floats (<1e-13) to all receive the same rank
(GH6886)
Bug in DataFrame.apply with functions that used *args or **kwargs and returned an empty result
(GH6952)
Bug in sum/mean on 32-bit platforms on overflows (GH6915)
Moved Panel.shift to NDFrame.slice_shift and fixed to respect multiple dtypes. (GH6959)
Bug where enabling subplots=True in DataFrame.plot with only a single column raised a TypeError, and
Series.plot raised an AttributeError (GH6951)
Bug in DataFrame.plot draws unnecessary axes when enabling subplots and kind=scatter
(GH6951)
Bug in read_csv from a filesystem with non-utf-8 encoding (GH6807)
Bug in iloc when setting / aligning (GH6766)
Bug causing UnicodeEncodeError when get_dummies called with unicode values and a prefix (GH6885)
Bug in timeseries-with-frequency plot cursor display (GH5453)
Add show_dimensions display option for the new DataFrame repr to control whether the dimensions print.
In [14]: df = DataFrame([[1, 2], [3, 4]])
In [15]: pd.set_option('show_dimensions', False)
In [16]: df
Out[16]:
0 1
0 1 2
1 3 4
In [17]: pd.set_option('show_dimensions', True)
In [18]: df
Out[18]:
0 1
0 1 2
1 3 4
[2 rows x 2 columns]
The ArrayFormatter for datetime and timedelta64 now intelligently limits precision based on the
values in the array (GH3401)
Previously output might look like:
                   age                today                 diff
0  2001-01-01 00:00:00  2013-04-19 00:00:00  4491 days, 00:00:00
1  2004-06-01 00:00:00  2013-04-19 00:00:00  3244 days, 00:00:00
s.str.get_dummies(sep='|')
   a  b  c
0  1  0  0
1  1  1  0
2  0  0  0
3  1  0  1

[4 rows x 3 columns]
Added the NDFrame.equals() method to test whether two NDFrames have equal axes, dtypes, and
values. Added the array_equivalent function to test whether two ndarrays are equal. NaNs in identical
locations are treated as equal. (GH5283) See also the docs for a motivating example.
DataFrame.apply will use the reduce argument to determine whether a Series or a DataFrame
should be returned when the DataFrame is empty (GH6007).
Previously, calling DataFrame.apply on an empty DataFrame would return either a DataFrame if there
were no columns, or the function being applied would be called with an empty Series to guess whether a
Series or DataFrame should be returned:
In [32]: def applied_func(col):
   ....:     print("Apply function being called with: ", col)
   ....:     return col.sum()
   ....:
In [33]: empty = DataFrame(columns=['a', 'b'])
In [34]: empty.apply(applied_func)
('Apply function being called with: ', Series([], dtype: float64))
Out[34]:
a
NaN
b
NaN
dtype: float64
Now, when apply is called on an empty DataFrame: if the reduce argument is True a Series will be
returned, if it is False a DataFrame will be returned, and if it is None (the default) the function being
applied will be called with an empty Series to try and guess the return type.
In [35]: empty.apply(applied_func, reduce=True)
Out[35]:
a
NaN
b
NaN
dtype: float64
In [36]: empty.apply(applied_func, reduce=False)
Out[36]:
Empty DataFrame
Columns: [a, b]
Index: []
[0 rows x 2 columns]
1.13.4 Deprecations
There are no deprecations of prior behavior in 0.13.1
1.13.5 Enhancements
pd.read_csv and pd.to_datetime learned a new infer_datetime_format keyword which greatly
improves parsing perf in many cases. Thanks to @lexual for suggesting and @danbirken for rapidly implementing. (GH5490, GH6021)
If parse_dates is enabled and this flag is set, pandas will attempt to infer the format of the datetime strings
in the columns, and if it can be inferred, switch to a faster method of parsing them. In some cases this can
increase the parsing speed by ~5-10x.
# Try to infer the format for the index column
df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
infer_datetime_format=True)
date_format and datetime_format keywords can now be specified when writing to excel files
(GH4133)
MultiIndex.from_product convenience function for creating a MultiIndex from the cartesian product of
a set of iterables (GH6055):
In [37]: shades = ['light', 'dark']
In [38]: colors = ['red', 'green', 'blue']
In [39]: MultiIndex.from_product([shades, colors], names=['shade', 'color'])
Out[39]:
MultiIndex(levels=[[u'dark', u'light'], [u'blue', u'green', u'red']],
labels=[[1, 1, 1, 0, 0, 0], [2, 1, 0, 2, 1, 0]],
names=[u'shade', u'color'])
This is equivalent to
In [46]: panel.sum('major_axis')
Out[46]:
ItemA
ItemB
ItemC
A 2.579643 3.062757 0.379252
B 1.416120 -1.960855 0.923558
C 0.595222 -1.079772 -3.118269
D 1.487226 -0.734611 -1.979310
[4 rows x 3 columns]
A transformation operation that returns a Panel, but is computing the z-score across the major_axis
In [47]: result = panel.apply(lambda x: (x-x.mean())/x.std(),
   ....:                      axis='major_axis')
   ....:
In [48]: result
Out[48]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D
In [49]: result['ItemA']
Out[49]:
                   A         B         C         D
2000-01-03  0.595800  0.907552 -1.556260 -1.244875
2000-01-04  0.544058  0.200868  0.915883  0.953747
2000-01-05 -0.924165 -0.701810  0.569325 -0.891290
2000-01-06 -1.219530 -1.334852 -0.418654  0.437589
2000-01-07  1.003837  0.928242  0.489705  0.744830

[5 rows x 4 columns]
1.13.6 Performance
Performance improvements for 0.13.1
Series datetime/timedelta binary operations (GH5801)
DataFrame count/dropna for axis=1
Series.str.contains now has a regex=False keyword which can be faster for plain (non-regex) string patterns.
(GH5879)
Series.str.extract (GH5944)
dtypes/ftypes methods (GH5968)
indexing with object dtypes (GH5968)
DataFrame.apply (GH6013)
Regression in JSON IO (GH5765)
Index construction from Series (GH6150)
1.13.7 Experimental
There are no experimental changes in 0.13.1
All division with NDFrame objects is now true division, regardless of the future import. This means that operating on pandas objects will by default use floating point division, and return a floating point dtype. You can use
// and floordiv to do integer division.
Integer division
In [3]: arr = np.array([1, 2, 3, 4])
In [4]: arr2 = np.array([5, 3, 2, 1])
True Division
In [7]: pd.Series(arr) / pd.Series(arr2) # no future import required
Out[7]:
0    0.200000
1    0.666667
2    1.500000
3    4.000000
dtype: float64
Added the .bool() method to NDFrame objects to facilitate evaluating of single-element boolean Series:
In [1]: Series([True]).bool()
Out[1]: True
In [2]: Series([False]).bool()
Out[2]: False
In [3]: DataFrame([[True]]).bool()
Out[3]: True
In [4]: DataFrame([[False]]).bool()
Out[4]: False
All non-Index NDFrames (Series, DataFrame, Panel, Panel4D, SparsePanel, etc.), now support the
entire set of arithmetic operators and arithmetic flex methods (add, sub, mul, etc.). SparsePanel does not
support pow or mod with non-scalars. (GH3765)
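As an illustration (the frame and values below are made up, not from the original notes), a flex method aligns its argument before operating:
df = DataFrame({'a': [1, 2], 'b': [3, 4]})
# subtract a Series aligned against the columns
df.sub(Series({'a': 1, 'b': 1}), axis='columns')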
Series and DataFrame now have a mode() method to calculate the statistical mode(s) by axis/Series.
(GH5367)
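A minimal sketch (values are illustrative):
Series([1, 1, 2, 3]).mode()                          # most common value(s), here 1
DataFrame({'a': [1, 1, 2], 'b': [2, 3, 3]}).mode()   # column-wise modes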
Chained assignment will now by default warn if the user is assigning to a copy. This can be changed with the option mode.chained_assignment; allowed options are raise/warn/None. See the docs.
In [5]: dfc = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]})
In [6]: pd.set_option('chained_assignment','warn')
In [7]: dfc
Out[7]:
     A  B
0  aaa  1
1  bbb  2
2  ccc  3
[3 rows x 2 columns]
Series.argmin and Series.argmax are now aliased to Series.idxmin and Series.idxmax. These return the index of the min or max element respectively. Prior to 0.13.0 these would return the position of the min / max element. (GH6214)
1.14.3 Deprecations
Deprecated in 0.13.0
deprecated iterkv, which will be removed in a future release (this was an alias of iteritems used to bypass 2to3's changes). (GH4384, GH4375, GH4372)
deprecated the string method match, whose role is now performed more idiomatically by extract. In a
future release, the default behavior of match will change to become analogous to contains, which returns
a boolean indexer. (Their distinction is strictness: match relies on re.match while contains relies on
re.search.) In this release, the deprecated behavior is the default, but the new behavior is available through
the keyword argument as_indexer=True.
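An illustrative sketch of the distinction (values are made up):
s = Series(['a1', 'b2', 'c3'])
s.str.extract('([ab])\d')                  # extracted group: 'a', 'b', NaN
s.str.match('([ab])\d', as_indexer=True)   # new behaviour: True, True, False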
The intermediate output is not reproduced here: dfi gains a new column C (values 0, 2, 4) via setting-with-enlargement, and then a new row, growing from 3 rows x 3 columns to 4 rows x 3 columns.
A Panel setting operation on an arbitrary axis aligns the input to the Panel
In [20]: p = pd.Panel(np.arange(16).reshape(2,4,2),
   ....:              items=['Item1','Item2'],
   ....:              major_axis=pd.date_range('2001/1/12',periods=4),
   ....:              minor_axis=['A','B'], dtype='float64')
   ....:
In [21]: p
Out[21]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 2 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2001-01-12 00:00:00 to 2001-01-15 00:00:00
Minor_axis axis: A to B
In [22]: p.loc[:,:,'C'] = Series([30,32],index=p.items)
In [23]: p
Out[23]:
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 4 (major_axis) x 3 (minor_axis)
Items axis: Item1 to Item2
Major_axis axis: 2001-01-12 00:00:00 to 2001-01-15 00:00:00
Minor_axis axis: A to C
In [24]: p.loc[:,:,'C']
Out[24]:
            Item1  Item2
2001-01-12   30.0   32.0
2001-01-13   30.0   32.0
2001-01-14   30.0   32.0
2001-01-15   30.0   32.0
[4 rows x 2 columns]
Scalar selection for [], .ix and .loc will always be label based. An integer will match an equal float index (e.g. 3 is equivalent to 3.0):
In [29]: s[3]
Out[29]: 2
In [30]: s.ix[3]
Out[30]: 2
In [31]: s.loc[3]
Out[31]: 2
Indexing on other index types is preserved (and positional fallback for [] and .ix), with the exception that floating point slicing on non-Float64Index indexes will now raise a TypeError.
In [1]: Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
Using a scalar float indexer will be deprecated in a future version, but is allowed for now.
In [3]: Series(range(5))[3.0]
Out[3]: 3
the format keyword now replaces the table keyword; allowed values are fixed(f) or table(t). The same defaults as prior to 0.13.0 remain, e.g. put implies fixed format and append implies table format. This default format can be set as an option by setting io.hdf.default_format.
In [44]: path = 'test.h5'
In [45]: df = DataFrame(randn(10,2))
In [46]: df.to_hdf(path,'df_table',format='table')
In [47]: df.to_hdf(path,'df_table2',append=True)
In [48]: df.to_hdf(path,'df_fixed')
In [49]: with get_store(path) as store:
   ....:     print(store)
   ....:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df_fixed             frame        (shape->[10,2])
/df_table             frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
/df_table2            frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [51]: df = DataFrame(randn(10,2))
In [52]: store1 = HDFStore(path)
In [53]: store2 = HDFStore(path)
In [54]: store1.append('df',df)
In [55]: store2.append('df2',df)
In [56]: store1
Out[56]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df                   frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [57]: store2
Out[57]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df                   frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
/df2                  frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [58]: store1.close()
In [59]: store2
Out[59]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
/df                   frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
/df2                  frame_table  (typ->appendable,nrows->10,ncols->2,indexers->[index])
In [60]: store2.close()
In [61]: store2
Out[61]:
<class 'pandas.io.pytables.HDFStore'>
File path: test.h5
File is CLOSED
removed the _quiet attribute, replaced by a DuplicateWarning if retrieving duplicate rows from a table (GH4367)
removed the warn argument from open. Instead a PossibleDataLossError exception will be raised if
you try to use mode=w with an OPEN file handle (GH4367)
allow a passed locations array or mask as a where condition (GH4467). See the docs for an example.
added the keyword dropna=True to append to control whether ALL-nan rows are written to the store (default is True: ALL-nan rows are NOT written); also settable via the option io.hdf.dropna_table (GH4625)
pass through store creation arguments; can be used to support in-memory stores
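A minimal sketch, assuming PyTables is installed and that the driver arguments are passed straight through to the underlying tables.open_file call:
# an in-memory store: with backing_store=0 nothing is written to disk
store = HDFStore('in_memory.h5', mode='w',
                 driver='H5FD_CORE', driver_core_backing_store=0)
store['df'] = DataFrame(randn(5, 2))
store.close()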
To get the info view, call DataFrame.info(). If you prefer the info view as the repr for large DataFrames, you can set this by running set_option('display.large_repr', 'info').
1.14.8 Enhancements
df.to_clipboard() learned a new excel keyword that lets you paste df data directly into Excel (enabled by default). (GH5070)
read_html now raises a URLError instead of catching and raising a ValueError (GH4303, GH4305)
Added a test for read_clipboard() and to_clipboard() (GH4282)
Clipboard functionality now works with PySide (GH4282)
Added a more informative error message when plot arguments contain overlapping color and style arguments
(GH4402)
to_dict now takes records as a possible outtype. Returns an array of column-keyed dictionaries. (GH4936)
NaN handling in get_dummies (GH4446) with dummy_na
# previously, nan was erroneously counted as 2 here
# now it is not counted at all
In [62]: get_dummies([1, 2, np.nan])
Out[62]:
   1.0  2.0
0  1.0  0.0
1  0.0  1.0
2  0.0  0.0
[3 rows x 2 columns]
# unless requested
In [63]: get_dummies([1, 2, np.nan], dummy_na=True)
Out[63]:
   1.0  2.0  NaN
0  1.0  0.0  0.0
1  0.0  1.0  0.0
2  0.0  0.0  1.0
[3 rows x 3 columns]
Using the new top-level to_timedelta, you can convert a scalar or array from the standard timedelta format
(produced by to_csv) into a timedelta type (np.timedelta64 in nanoseconds).
In [64]: to_timedelta('1 days 06:05:01.00003')
Out[64]: Timedelta('1 days 06:05:01.000030')
In [65]: to_timedelta('15.5us')
Out[65]: Timedelta('0 days 00:00:00.000015')
In [67]: to_timedelta(np.arange(5), unit='s')
Out[67]: TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04'], dtype='timedelta64[ns]', freq=None)
In [68]: to_timedelta(np.arange(5), unit='d')
Out[68]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None)
In [77]: td.astype('timedelta64[s]')
Out[77]:
0    2678400.0
1    2678400.0
2    2678703.0
3          NaN
dtype: float64
plot(kind='kde') now accepts the optional parameters bw_method and ind, passed to scipy.stats.gaussian_kde() (for scipy >= 0.11.0) to set the bandwidth, and to gkde.evaluate() to specify the indices at which it is evaluated, respectively. See scipy docs. (GH4298)
DataFrame constructor now accepts a numpy masked record array (GH3478)
The new vectorized string method extract returns regular expression matches more conveniently.
In [86]: Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)')
Out[86]:
0      1
1      2
2    NaN
dtype: object
Elements that do not match return NaN. Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [87]: Series(['a1', 'b2', 'c3']).str.extract('([ab])(\d)')
Out[87]:
     0    1
0    a    1
1    b    2
2  NaN  NaN
[3 rows x 2 columns]
Elements that do not match return a row of NaN. Thus, a Series of messy strings can be converted into a like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples or re.match objects.
Named groups can also be used:
In [88]: Series(['a1', 'b2', 'c3']).str.extract(
   ....:     '(?P<letter>[ab])(?P<digit>\d)')
   ....:
Out[88]:
  letter digit
0      a     1
1      b     2
2    NaN   NaN
[3 rows x 2 columns]
A new method, isin for DataFrames, which plays nicely with boolean indexing. The argument to isin, what we're comparing the DataFrame to, can be a DataFrame, Series, dict, or array of values. See the docs for more.
To get the rows where any of the conditions are met:
In [94]: dfi = DataFrame({'A': [1, 2, 3, 4], 'B': ['a', 'b', 'f', 'n']})
In [95]: dfi
Out[95]:
A B
0 1 a
1 2 b
2 3 f
3 4 n
[4 rows x 2 columns]
In [96]: other = DataFrame({'A': [1, 3, 3, 7], 'B': ['e', 'f', 'f', 'e']})
In [97]: mask = dfi.isin(other)
In [98]: mask
Out[98]:
       A      B
0   True  False
1  False  False
2   True   True
3  False  False
[4 rows x 2 columns]
In [99]: dfi[mask.any(1)]
Out[99]:
A B
0 1 a
2 3 f
[2 rows x 2 columns]
tz_localize can infer a fall daylight savings transition based on the structure of the unlocalized data
(GH4230), see the docs
DatetimeIndex is now in the API documentation, see the docs
json_normalize() is a new method to allow you to create a flat table from semi-structured JSON data. See
the docs (GH1067)
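A minimal sketch of json_normalize (the nested records are made up):
from pandas.io.json import json_normalize
data = [{'state': 'Florida',
         'counties': [{'name': 'Dade', 'population': 12345},
                      {'name': 'Broward', 'population': 40000}]},
        {'state': 'Ohio',
         'counties': [{'name': 'Summit', 'population': 1234}]}]
json_normalize(data, 'counties', ['state'])   # one flat row per county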
Added PySide support for the qtpandas DataFrameModel and DataFrameWidget.
Python csv parser now supports usecols (GH4335)
Frequencies gained several new offsets:
LastWeekOfMonth (GH4637)
FY5253, and FY5253Quarter (GH4511)
DataFrame has a new interpolate method, similar to Series (GH4434, GH1892)
In [100]: df = DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],
   .....:                 'B': [.25, np.nan, np.nan, 4, 12.2, 14.4]})
   .....:
In [101]: df.interpolate()
Out[101]:
     A      B
0  1.0   0.25
1  2.1   1.50
2  3.4   2.75
3  4.7   4.00
4  5.6  12.20
5  6.8  14.40
[6 rows x 2 columns]
Additionally, the method argument to interpolate has been expanded to include nearest, zero, slinear, quadratic, cubic, barycentric, krogh, piecewise_polynomial, pchip, polynomial, and spline. The new methods require scipy. Consult the Scipy reference guide and documentation for more information about when the various methods are appropriate. See the docs.
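A minimal sketch of one of the scipy-backed methods (values are illustrative):
ser = Series([1, 3, np.nan, np.nan, np.nan, 11])
ser.interpolate(method='polynomial', order=2)   # requires scipy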
Interpolate now also accepts a limit keyword argument. This works similar to fillna's limit:
In [102]: ser = Series([1, 3, np.nan, np.nan, np.nan, 11])
In [103]: ser.interpolate(limit=2)
Out[103]:
0     1.0
1     3.0
2     5.0
3     7.0
4     NaN
5    11.0
dtype: float64
2 : "c"},
2 : "f"},
2 : .7},
2 : .1},
np.random.randn(3)))
B1970
2.5
1.2
0.7
B1980
X
3.2 -1.085631
1.3 0.997345
0.1 0.282978
id
0
1
2
[3 rows x 6 columns]
In [108]: wide_to_long(df, ["A", "B"], i="id", j="year")
Out[108]:
                X  A    B
id year
0  1970 -1.085631  a  2.5
1  1970  0.997345  b  1.2
2  1970  0.282978  c  0.7
0  1980 -1.085631  d  3.2
1  1980  0.997345  e  1.3
2  1980  0.282978  f  0.1
[6 rows x 3 columns]
to_csv now takes a date_format keyword argument that specifies how output datetime objects should
be formatted. Datetimes encountered in the index, columns, and values will all have this formatting applied.
(GH4313)
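A minimal sketch (the file name and format string are illustrative):
df = DataFrame({'when': date_range('2013-01-01', periods=3)})
df.to_csv('dates.csv', date_format='%Y%m%d')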
1.14.9 Experimental
The new eval() function implements expression evaluation using numexpr behind the scenes. This results
in large speedups for complicated expressions involving large DataFrames/Series. For example,
In [109]: nrows, ncols = 20000, 100
In [110]: df1, df2, df3, df4 = [DataFrame(randn(nrows, ncols))
   .....:                       for _ in range(4)]
   .....:
# eval with NumExpr backend
In [111]: %timeit pd.eval('df1 + df2 + df3 + df4')
100 loops, best of 3: 15.1 ms per loop
# pure Python evaluation
In [112]: %timeit df1 + df2 + df3 + df4
10 loops, best of 3: 24.1 ms per loop
A query() method has been added that allows you to select elements of a DataFrame using a natural query syntax nearly identical to Python syntax. For example,
In [115]: n = 20
In [116]: df = DataFrame(np.random.randint(n, size=(n, 3)), columns=['a', 'b', 'c'])
In [117]: df.query('a < b < c')
Out[117]:
    a   b   c
11  1   5   8
15  8  16  19
[2 rows x 3 columns]
selects all the rows of df where a < b < c evaluates to True. For more details see the docs.
pd.read_msgpack() and pd.to_msgpack() are now a supported method of serialization of arbitrary pandas (and Python) objects in a lightweight portable binary format. See the docs
Warning: Since this is an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future
release.
In [118]: df = DataFrame(np.random.rand(5,2),columns=list('AB'))
In [119]: df.to_msgpack('foo.msg')
In [120]: pd.read_msgpack('foo.msg')
Out[120]:
          A         B
0  0.251082  0.017357
1  0.347915  0.929879
2  0.546233  0.203368
3  0.064942  0.031722
4  0.355309  0.524575
[5 rows x 2 columns]
In [121]: s = Series(np.random.rand(5),index=date_range('20130101',periods=5))
In [122]: pd.to_msgpack('foo.msg', df, s)
In [123]: pd.read_msgpack('foo.msg')
Out[123]:
[          A         B
 0  0.251082  0.017357
 1  0.347915  0.929879
 2  0.546233  0.203368
 3  0.064942  0.031722
 4  0.355309  0.524575
 [5 rows x 2 columns], 2013-01-01    0.022321
 2013-01-02    0.227025
 2013-01-03    0.383282
 2013-01-04    0.193225
 2013-01-05    0.110977
 Freq: D, dtype: float64]
pandas.io.gbq provides a simple way to extract from, and load data into, Google's BigQuery Data Sets by way of pandas DataFrames. BigQuery is a high performance SQL-like database service, useful for performing ad-hoc queries against extremely large datasets. See the docs
from pandas.io import gbq
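The commented example query from the original is not reproduced here. A minimal sketch (the query, dataset, and read_gbq invocation are illustrative, and the API has varied between pandas versions):
query = """SELECT station_number as STATION, month as MONTH,
                  AVG(mean_temp) as MEAN_TEMP
           FROM publicdata:samples.gsod
           GROUP BY STATION, MONTH
           ORDER BY STATION, MONTH ASC"""
df = gbq.read_gbq(query)   # result set returned as a DataFrame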
(The example output, a table of monthly minimum, mean, and maximum temperatures returned by the query, is not reproduced here.)
Warning: To use this module, you will need a BigQuery account. See <https://cloud.google.com/products/big-query> for details.
As of 10/10/13, there is a bug in Google's API preventing result sets from being larger than 100,000 rows. A patch is scheduled for the week of 10/14/13.
Numpy Usage (here s is a Series such as Series([1, 2, 3, 4]))
In [126]: np.ones_like(s)
Out[126]: array([1, 1, 1, 1])
In [127]: np.diff(s)
Out[127]: array([1, 1, 1])
In [128]: np.where(s>1,s,np.nan)
Out[128]: array([ nan,   2.,   3.,   4.])
Pandonic Usage
In [129]: Series(1,index=s.index)
Out[129]:
0    1
1    1
2    1
3    1
dtype: int64
In [130]: s.diff()
Out[130]:
0    NaN
1    1.0
2    1.0
3    1.0
dtype: float64
In [131]: s.where(s>1)
Out[131]:
0    NaN
1    2.0
2    3.0
3    4.0
dtype: float64
Passing a Series directly to a cython function expecting an ndarray type will no longer work directly; you must pass Series.values. See Enhancing Performance.
Series(0.5) would previously return the scalar 0.5; instead this will return a 1-element Series.
This change breaks rpy2<=2.3.8. An issue has been opened against rpy2 and a workaround is detailed in GH5698. Thanks @JanSchulz.
Pickle compatibility is preserved for pickles created prior to 0.13; these must be read with pd.read_pickle, see Pickling.
* __iter__,keys,__contains__,__len__,__neg__,__invert__
* convert_objects,as_blocks,as_matrix,values
* __getstate__,__setstate__ (compat remains in frame/panel)
* __getattr__,__setattr__
* _indexed_same,reindex_like,align,where,mask
* fillna,replace (Series replace is now consistent with DataFrame)
* filter (also added axis argument to selectively filter on a different axis)
* reindex,reindex_axis,take
* truncate (moved to become part of NDFrame)
These are API changes which make Panel more consistent with DataFrame
swapaxes on a Panel with the same axes specified now returns a copy
support attribute access for setting
filter supports the same API as the original DataFrame filter
Reindex called with no arguments will now return a copy of the input object
TimeSeries is now an alias for Series. The property is_time_series can be used to distinguish (if desired)
Refactor of Sparse objects to use BlockManager
Created a new block type in internals, SparseBlock, which can hold multi-dtypes and is non-consolidatable. SparseSeries and SparseDataFrame now inherit more methods from their hierarchy (Series/DataFrame), and no longer inherit from SparseArray (which instead is the object of the SparseBlock)
Sparse suite now supports integration with non-sparse data. Non-float sparse data is supportable (partially
implemented)
Operations on sparse structures within DataFrames should preserve sparseness, merging type operations
will convert to dense (and back to sparse), so might be somewhat inefficient
enable setitem on SparseSeries for boolean/integer/slices
SparsePanels implementation is unchanged (e.g. not using BlockManager, needs work)
added ftypes method to Series/DataFrame, similar to dtypes, but indicates if the underlying is sparse/dense
(as well as the dtype)
All NDFrame objects can now use __finalize__() to specify various values to propagate to new objects
from an existing one (e.g. name in Series will follow more automatically now)
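A minimal illustration (not from the original notes), using the Series name as the propagated metadata:
s = Series([1, 2, 3], name='measurements')
(s + 1).name    # -> 'measurements', propagated via __finalize__
s[:2].name      # -> 'measurements'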
Internal type checking is now done via a suite of generated classes, allowing isinstance(value, klass)
without having to directly import the klass, courtesy of @jtratner
Bug in Series update where the parent frame is not updating its cache based on changes (GH4080) or types
(GH3217), fillna (GH3386)
Indexing with dtype conversions fixed (GH4463, GH4204)
Refactor Series.reindex to core/generic.py (GH4604, GH4618), allow method= in reindexing on a Series to work
Series.copy no longer accepts the order parameter and is now consistent with NDFrame copy
Refactor rename methods to core/generic.py; fixes Series.rename for (GH4605), and adds rename with
the same signature for Panel
Refactor clip methods to core/generic.py (GH4798)
Refactor of _get_numeric_data/_get_bool_data to core/generic.py, allowing Series/Panel functionality
Series (for index) / Panel (for items) now allow attribute access to their elements (GH1903)
In [132]: s = Series([1,2,3],index=list('abc'))
In [133]: s.b
Out[133]: 2
In [134]: s.a = 5
In [135]: s
Out[135]:
a    5
b    2
c    3
dtype: int64
read_clipboard
The corresponding writer functions are object methods that are accessed like df.to_csv():
to_csv, to_excel, to_hdf, to_sql, to_json, to_html, to_stata, to_clipboard
Fix modulo and integer division on Series/DataFrames to act similarly to float dtypes and return np.nan or np.inf as appropriate (GH3590). This corrects a numpy bug that treats integer and float dtypes differently.
In [1]: p = DataFrame({ 'first' : [4,5,8], 'second' : [0,0,3] })
In [2]: p % 0
Out[2]:
   first  second
0    NaN     NaN
1    NaN     NaN
2    NaN     NaN
[3 rows x 2 columns]
In [3]: p % p
Out[3]:
   first  second
0    0.0     NaN
1    0.0     NaN
2    0.0     0.0
[3 rows x 2 columns]
In [4]: p / p
Out[4]:
   first  second
0    1.0     NaN
1    1.0     NaN
2    1.0     1.0
[3 rows x 2 columns]
In [5]: p / 0
Out[5]:
   first  second
0    inf     NaN
1    inf     NaN
2    inf     inf
[3 rows x 2 columns]
Add squeeze keyword to groupby to allow reduction from DataFrame -> Series if groups are unique. This is a regression from 0.10.1. We are reverting back to the prior behavior. This means groupby will return the same shaped objects whether the groups are unique or not. Reverting this issue (GH2893) with (GH3596).
In [6]: df2 = DataFrame([{"val1": 1, "val2" : 20}, {"val1":1, "val2": 19},
   ...:                  {"val1":1, "val2": 27}, {"val1":1, "val2": 12}])
   ...:
In [7]: def func(dataf):
   ...:     return dataf["val2"] - dataf["val2"].mean()
   ...:
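A hedged sketch of how the example might continue (the exact call is assumed, not taken from the original):
# with a single unique group, squeeze=True reduces the apply result to a Series;
# squeeze=False keeps the same shaped object regardless of group uniqueness
df2.groupby("val1", squeeze=True).apply(func)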
Raise on iloc when boolean indexing with a label-based indexer mask: e.g. a boolean Series, even with integer labels, will raise. Since iloc is purely positional based, the labels on the Series are not alignable (GH3631). This case is rarely used, and there are plenty of alternatives. This preserves the iloc API to be purely positional based.
In [10]: df = DataFrame(lrange(5), list('ABCDE'), columns=['a'])
In [11]: mask = (df.a%2 == 0)
In [12]: mask
Out[12]:
A     True
B    False
C     True
D    False
E     True
Name: a, dtype: bool
# this is what you should use
In [13]: df.loc[mask]
Out[13]:
a
A 0
C 2
E 4
[3 rows x 1 columns]
With
import pandas as pd
pd.read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
DataFrame.to_html and DataFrame.to_latex now accept a path for their first argument (GH3702)
Do not allow astypes on datetime64[ns] except to object, and timedelta64[ns] to object/int
(GH3425)
The behavior of datetime64 dtypes has changed with respect to certain so-called reduction operations
(GH3726). The following operations now raise a TypeError when performed on a Series and return an
empty Series when performed on a DataFrame similar to performing these operations on, for example, a
DataFrame of slice objects:
sum, prod, mean, std, var, skew, kurt, corr, and cov
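A minimal sketch of the new behaviour (illustrative data):
s = Series(date_range('2013-01-01', periods=3))
try:
    s.sum()           # a reduction on datetime64[ns] data
except TypeError as err:
    print(err)        # now raises rather than returning a value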
read_html now defaults to None when reading, and falls back on bs4 + html5lib when lxml fails to parse. A list of parsers to try until success is also valid.
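As an illustration (the URL is hypothetical), a list of parsers can be supplied via the flavor argument:
# try lxml first, then fall back to bs4 + html5lib
dfs = pd.read_html('https://example.com/tables.html', flavor=['lxml', 'bs4'])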
The internal pandas class hierarchy has changed (slightly). The previous PandasObject now is called
PandasContainer and a new PandasObject has become the baseclass for PandasContainer as well
as Index, Categorical, GroupBy, SparseList, and SparseArray (+ their base classes). Currently,
PandasObject provides string methods (from StringMixin). (GH4090, GH4092)
New StringMixin that, given a __unicode__ method, gets python 2 and python 3 compatible string
methods (__str__, __bytes__, and __repr__). Plus string safety throughout. Now employed in many
places throughout the pandas library. (GH4090, GH4092)
print(df == alist[0])
      a     b
0  True  True
1  True  True
2  True  True
[3 rows x 2 columns]
Note that alist here is a Python list so pd.read_html() and DataFrame.to_html() are not inverses.
pd.read_html() no longer performs hard conversion of date strings (GH3656).
Warning: You may have to install an older version of BeautifulSoup4, See the installation docs
Added module for reading and writing Stata files: pandas.io.stata (GH1512), accessible via the read_stata top-level function for reading, and the to_stata DataFrame method for writing. See the docs
Added module for reading and writing json format files: pandas.io.json, accessible via the read_json top-level function for reading, and the to_json DataFrame method for writing. See the docs; various issues (GH1226, GH3804, GH3876, GH3867, GH1305)
MultiIndex column support for reading and writing csv format files
The header option in read_csv now accepts a list of the rows from which to read the index.
The option tupleize_cols can now be specified in both to_csv and read_csv, to provide compatibility for the pre-0.12 behavior of writing and reading MultiIndex columns via a list of tuples. The default in 0.12 is to write lists of tuples and not interpret a list of tuples as a MultiIndex column.
Note: The default behavior in 0.12 remains unchanged from prior versions, but starting with 0.13, the default to write and read MultiIndex columns will be in the new format. (GH3571, GH1651, GH3141)
If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.
In [20]: from pandas.util.testing import makeCustomDataframe as mkdf
In [21]: df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
In [22]: df.to_csv('mi.csv',tupleize_cols=False)
In [23]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [24]: pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
Out[24]:
C0              C_l0_g0 C_l0_g1 C_l0_g2
C1              C_l1_g0 C_l1_g1 C_l1_g2
C2              C_l2_g0 C_l2_g1 C_l2_g2
C3              C_l3_g0 C_l3_g1 C_l3_g2
R0      R1
R_l0_g0 R_l1_g0    R0C0    R0C1    R0C2
R_l0_g1 R_l1_g1    R1C0    R1C1    R1C2
R_l0_g2 R_l1_g2    R2C0    R2C1    R2C2
R_l0_g3 R_l1_g3    R3C0    R3C1    R3C2
R_l0_g4 R_l1_g4    R4C0    R4C1    R4C2
[5 rows x 3 columns]
read_csv will now throw a more informative error message when a file contains no columns, e.g., all newline
characters
DataFrame.replace() now allows regular expressions; for example, a regex such as r'\s*\.\s*' can be used to replace all occurrences of the string '.' with zero or more instances of surrounding whitespace with NaN. Regular string replacement still works as expected. For example, you can do
In [27]: df.replace('.', np.nan)
Out[27]:
     a  b
0    a  1
1    b  2
2  NaN  3
3  NaN  4
[4 rows x 2 columns]
In [28]: pd.get_option('a.b')
Out[28]: 2
In [29]: pd.get_option('b.c')
Out[29]: 3
In [30]: pd.set_option('a.b', 1, 'b.c', 4)
In [31]: pd.get_option('a.b')
Out[31]: 1
In [32]: pd.get_option('b.c')
Out[32]: 4
The filter method for group objects returns a subset of the original object. Suppose we want to take only
elements that belong to groups with a group sum greater than 2.
In [33]: sf = Series([1, 1, 2, 3, 3, 3])
In [34]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[34]:
3    3
4    3
5    3
dtype: int64
The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple members.
In [35]: dff = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})
In [36]: dff.groupby('B').filter(lambda x: len(x) > 2)
Out[36]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
[4 rows x 2 columns]
Alternatively, instead of dropping the offending groups, we can return a like-indexed object where the groups that do not pass the filter are filled with NaNs.
In [37]: dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
Out[37]:
     A    B
0  NaN  NaN
1  NaN  NaN
2  2.0    b
3  3.0    b
4  4.0    b
5  5.0    b
6  NaN  NaN
7  NaN  NaN
[8 rows x 2 columns]
Series and DataFrame hist methods now take a figsize argument (GH3834)
DatetimeIndexes no longer try to convert mixed-integer indexes during join operations (GH3877)
Timestamp.min and Timestamp.max now represent valid Timestamp instances instead of the default datetime.min and datetime.max (respectively), thanks @SleepingPills
read_html now raises when no tables are found and BeautifulSoup==4.2.0 is detected (GH4214)
The last element yielded by the iterator will be a Series containing the last element of the longest string in the Series, with all other elements being NaN. Here, since 'slow' is the longest string and there are no other strings with the same length, 'w' is the only non-null string in the yielded Series.
HDFStore
will retain index attributes (freq,tz,name) on recreation (GH3499)
will warn with an AttributeConflictWarning if you are attempting to append an index with a different frequency than the existing, or attempting to append an index with a different name than the existing
support datelike columns with a timezone as data_columns (GH2852)
Non-unique index support clarified (GH3468).
Fix assigning a new index to a duplicate index in a DataFrame would fail (GH3468)
Fix construction of a DataFrame with a duplic