Pandas Python Data Analysis Toolkit

This document is the user guide for pandas 1.0.3, the Python data analysis library. It describes what's new in version 1.0.1 (fixed regressions, deprecations, and bug fixes), covers getting-started material such as installation, an introduction to pandas, and tutorials, and provides a user guide detailing core functionality such as reading and writing data and indexing and selecting data.
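
As a brief illustration of the kind of functionality the guide documents, here is a minimal, self-contained sketch (not taken from the guide itself; the column names and values are invented for the example) that round-trips a small table through CSV text and then selects data by label and by a boolean condition:

    import io
    import pandas as pd

    # Build a small frame and round-trip it through CSV text
    # (reading and writing data).
    df = pd.DataFrame({"city": ["Oslo", "Lima", "Pune"], "temp_c": [4, 19, 31]})
    csv_text = df.to_csv(index=False)
    df2 = pd.read_csv(io.StringIO(csv_text))

    # Label-based and boolean selection (indexing and selecting data).
    print(df2.loc[0, "city"])       # 'Oslo'
    print(df2[df2["temp_c"] > 10])  # rows where temp_c exceeds 10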


pandas: powerful Python data analysis toolkit
Release 1.0.3

Wes McKinney and the Pandas Development Team


Mar 18, 2020

CONTENTS

1 What’s new in 1.0.1 (February 5, 2020) 3


1.1 Fixed regressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Deprecations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Bug fixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Getting started 5
2.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Intro to pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Coming from. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Community tutorials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4.2 Package overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.3 10 minutes to pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.4 Getting started tutorials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.5 Essential basic functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.4.6 Intro to data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
2.4.7 Comparison with other tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
2.4.8 Tutorials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

3 User Guide 227


3.1 IO tools (text, CSV, HDF5, . . . ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
3.1.1 CSV & text files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.1.2 JSON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
3.1.3 HTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
3.1.4 Excel files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
3.1.5 OpenDocument Spreadsheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
3.1.6 Binary Excel (.xlsb) files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
3.1.7 Clipboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
3.1.8 Pickling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
3.1.9 msgpack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
3.1.10 HDF5 (PyTables) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
3.1.11 Feather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
3.1.12 Parquet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
3.1.13 ORC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
3.1.14 SQL queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
3.1.15 Google BigQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
3.1.16 Stata format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
3.1.17 SAS formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
3.1.18 SPSS formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339

3.1.19 Other file formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
3.1.20 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
3.2 Indexing and selecting data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
3.2.1 Different choices for indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
3.2.2 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
3.2.3 Attribute access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
3.2.4 Slicing ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
3.2.5 Selection by label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
3.2.6 Selection by position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
3.2.7 Selection by callable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
3.2.8 IX indexer is deprecated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
3.2.9 Indexing with list with missing labels is deprecated . . . . . . . . . . . . . . . . . . . . . . 360
3.2.10 Selecting random samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
3.2.11 Setting with enlargement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
3.2.12 Fast scalar value getting and setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
3.2.13 Boolean indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
3.2.14 Indexing with isin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
3.2.15 The where() Method and Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
3.2.16 The query() Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
3.2.17 Duplicate data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
3.2.18 Dictionary-like get() method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
3.2.19 The lookup() method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
3.2.20 Index objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
3.2.21 Set / reset index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
3.2.22 Returning a view versus a copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
3.3 MultiIndex / advanced indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
3.3.1 Hierarchical indexing (MultiIndex) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
3.3.2 Advanced indexing with hierarchical index . . . . . . . . . . . . . . . . . . . . . . . . . . 403
3.3.3 Sorting a MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
3.3.4 Take methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
3.3.5 Index types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
3.3.6 Miscellaneous indexing FAQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
3.4 Merge, join, and concatenate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
3.4.1 Concatenating objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
3.4.2 Database-style DataFrame or named Series joining/merging . . . . . . . . . . . . . . . . . 444
3.4.3 Timeseries friendly merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
3.5 Reshaping and pivot tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
3.5.1 Reshaping by pivoting DataFrame objects . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
3.5.2 Reshaping by stacking and unstacking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
3.5.3 Reshaping by Melt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
3.5.4 Combining with stats and GroupBy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
3.5.5 Pivot tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
3.5.6 Cross tabulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
3.5.7 Tiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
3.5.8 Computing indicator / dummy variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
3.5.9 Factorizing values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
3.5.10 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
3.5.11 Exploding a list-like column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
3.6 Working with text data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
3.6.1 Text Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
3.6.2 String Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
3.6.3 Splitting and replacing strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
3.6.4 Concatenation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
3.6.5 Indexing with .str . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506

3.6.6 Extracting substrings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
3.6.7 Testing for Strings that match or contain a pattern . . . . . . . . . . . . . . . . . . . . . . . 511
3.6.8 Creating indicator variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
3.6.9 Method summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
3.7 Working with missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
3.7.1 Values considered “missing” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
3.7.2 Inserting missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
3.7.3 Calculations with missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
3.7.4 Sum/prod of empties/nans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
3.7.5 NA values in GroupBy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
3.7.6 Filling missing values: fillna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
3.7.7 Filling with a PandasObject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
3.7.8 Dropping axis labels with missing data: dropna . . . . . . . . . . . . . . . . . . . . . . . . 523
3.7.9 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
3.7.10 Replacing generic values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
3.7.11 String/regular expression replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
3.7.12 Numeric replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
3.7.13 Experimental NA scalar to denote missing values . . . . . . . . . . . . . . . . . . . . . . . 539
3.8 Categorical data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
3.8.1 Object creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
3.8.2 CategoricalDtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
3.8.3 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
3.8.4 Working with categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
3.8.5 Sorting and order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
3.8.6 Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
3.8.7 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
3.8.8 Data munging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
3.8.9 Getting data in/out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
3.8.10 Missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
3.8.11 Differences to R’s factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
3.8.12 Gotchas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
3.9 Nullable integer data type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
3.9.1 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
3.9.2 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
3.9.3 Scalar NA Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
3.10 Nullable Boolean Data Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
3.10.1 Indexing with NA values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
3.10.2 Kleene Logical Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
3.11 Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
3.11.1 Basic plotting: plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
3.11.2 Other plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
3.11.3 Plotting with missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
3.11.4 Plotting Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
3.11.5 Plot Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
3.11.6 Plotting directly with matplotlib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
3.12 Computational tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
3.12.1 Statistical functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
3.12.2 Window Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
3.12.3 Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
3.12.4 Expanding windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
3.12.5 Exponentially weighted windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
3.13 Group By: split-apply-combine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
3.13.1 Splitting an object into groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
3.13.2 Iterating through groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687

3.13.3 Selecting a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
3.13.4 Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
3.13.5 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
3.13.6 Filtration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
3.13.7 Dispatching to instance methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
3.13.8 Flexible apply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
3.13.9 Other useful features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
3.13.10 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
3.14 Time series / date functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
3.14.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
3.14.2 Timestamps vs. Time Spans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
3.14.3 Converting to timestamps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
3.14.4 Generating ranges of timestamps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
3.14.5 Timestamp limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
3.14.6 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
3.14.7 Time/date components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
3.14.8 DateOffset objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
3.14.9 Time Series-Related Instance Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
3.14.10 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
3.14.11 Time span representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
3.14.12 Converting between representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
3.14.13 Representing out-of-bounds spans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
3.14.14 Time zone handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
3.15 Time deltas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
3.15.1 Parsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
3.15.2 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
3.15.3 Reductions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
3.15.4 Frequency conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
3.15.5 Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
3.15.6 TimedeltaIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
3.15.7 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
3.16 Styling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
3.16.1 Building styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
3.16.2 Finer control: slicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
3.16.3 Finer Control: Display Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
3.16.4 Builtin styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
3.16.5 Sharing styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
3.16.6 Other Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
3.16.7 Fun stuff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
3.16.8 Export to Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
3.16.9 Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
3.17 Options and settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
3.17.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
3.17.2 Getting and setting options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
3.17.3 Setting startup options in Python/IPython environment . . . . . . . . . . . . . . . . . . . . 809
3.17.4 Frequently Used Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
3.17.5 Available options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
3.17.6 Number formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
3.17.7 Unicode formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
3.17.8 Table schema display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
3.18 Enhancing performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
3.18.1 Cython (writing C extensions for pandas) . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
3.18.2 Using Numba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
3.18.3 Expression evaluation via eval() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827

3.19 Scaling to large datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
3.19.1 Load less data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
3.19.2 Use efficient datatypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
3.19.3 Use chunking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
3.19.4 Use other libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
3.20 Sparse data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
3.20.1 SparseArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
3.20.2 SparseDtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
3.20.3 Sparse accessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
3.20.4 Sparse calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
3.20.5 Migrating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
3.20.6 Interaction with scipy.sparse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
3.21 Frequently Asked Questions (FAQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
3.21.1 DataFrame memory usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
3.21.2 Using if/truth statements with pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
3.21.3 NaN, Integer NA values and NA type promotions . . . . . . . . . . . . . . . . . . . . . . . . 857
3.21.4 Differences with NumPy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
3.21.5 Thread-safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
3.21.6 Byte-Ordering issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
3.22 Cookbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
3.22.1 Idioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
3.22.2 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
3.22.3 MultiIndexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
3.22.4 Missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
3.22.5 Grouping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
3.22.6 Timeseries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
3.22.7 Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
3.22.8 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
3.22.9 Data In/Out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
3.22.10 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
3.22.11 Timedeltas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
3.22.12 Aliasing axis names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
3.22.13 Creating example data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895

4 API reference 897


4.1 Input/output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
4.1.1 Pickling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
4.1.2 Flat file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
4.1.3 Clipboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
4.1.4 Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
4.1.5 JSON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
4.1.6 HTML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
4.1.7 HDFStore: PyTables (HDF5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
4.1.8 Feather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
4.1.9 Parquet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
4.1.10 ORC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
4.1.11 SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
4.1.12 SPSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
4.1.13 SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
4.1.14 Google BigQuery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
4.1.15 STATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
4.2 General functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
4.2.1 Data manipulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
4.2.2 Top-level missing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967

4.2.3 Top-level conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
4.2.4 Top-level dealing with datetimelike . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
4.2.5 Top-level dealing with intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
4.2.6 Top-level evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
4.2.7 Hashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
4.2.8 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
4.3 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
4.3.1 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
4.3.2 Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
4.3.3 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
4.3.4 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
4.3.5 Binary operator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
4.3.6 Function application, groupby & window . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
4.3.7 Computations / descriptive stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
4.3.8 Reindexing / selection / label manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
4.3.9 Missing data handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
4.3.10 Reshaping, sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
4.3.11 Combining / joining / merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
4.3.12 Time series-related . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
4.3.13 Accessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
4.3.14 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
4.3.15 Serialization / IO / conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
4.4 DataFrame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
4.4.1 Constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
4.4.2 Attributes and underlying data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
4.4.3 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
4.4.4 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
4.4.5 Binary operator functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
4.4.6 Function application, GroupBy & window . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
4.4.7 Computations / descriptive stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1681
4.4.8 Reindexing / selection / label manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . 1682
4.4.9 Missing data handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
4.4.10 Reshaping, sorting, transposing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1683
4.4.11 Combining / joining / merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
4.4.12 Time series-related . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
4.4.13 Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
4.4.14 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
4.4.15 Sparse accessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
4.4.16 Serialization / IO / conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1732
4.5 Pandas arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
4.5.1 pandas.array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
4.5.2 Datetime data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1737
4.5.3 Timedelta data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1757
4.5.4 Timespan data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1766
4.5.5 Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1766
4.5.6 Interval data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1781
4.5.7 Nullable integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1795
4.5.8 Categorical data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1800
4.5.9 Sparse data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1805
4.5.10 Text data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
4.5.11 Boolean data with missing values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1809
4.6 Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1811
4.7 Index objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1811
4.7.1 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1811

4.7.2 Numeric Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1867
4.7.3 CategoricalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1871
4.7.4 IntervalIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1880
4.7.5 MultiIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1892
4.7.6 DatetimeIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1909
4.7.7 TimedeltaIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1936
4.7.8 PeriodIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1944
4.8 Date offsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1951
4.8.1 DateOffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1951
4.8.2 BusinessDay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1956
4.8.3 BusinessHour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1959
4.8.4 CustomBusinessDay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1963
4.8.5 CustomBusinessHour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1967
4.8.6 MonthOffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1971
4.8.7 MonthEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1974
4.8.8 MonthBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1977
4.8.9 BusinessMonthEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1980
4.8.10 BusinessMonthBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1983
4.8.11 CustomBusinessMonthEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1986
4.8.12 CustomBusinessMonthBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1990
4.8.13 SemiMonthOffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1995
4.8.14 SemiMonthEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1998
4.8.15 SemiMonthBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2001
4.8.16 Week . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2004
4.8.17 WeekOfMonth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2007
4.8.18 LastWeekOfMonth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2011
4.8.19 QuarterOffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2015
4.8.20 BQuarterEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2018
4.8.21 BQuarterBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2021
4.8.22 QuarterEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2025
4.8.23 QuarterBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2028
4.8.24 YearOffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2031
4.8.25 BYearEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2034
4.8.26 BYearBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2037
4.8.27 YearEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2040
4.8.28 YearBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2043
4.8.29 FY5253 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2046
4.8.30 FY5253Quarter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2050
4.8.31 Easter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2055
4.8.32 Tick . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2059
4.8.33 Day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2063
4.8.34 Hour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2067
4.8.35 Minute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2070
4.8.36 Second . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2074
4.8.37 Milli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2078
4.8.38 Micro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2081
4.8.39 Nano . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2085
4.8.40 BDay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2089
4.8.41 BMonthEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2092
4.8.42 BMonthBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2094
4.8.43 CBMonthEnd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2097
4.8.44 CBMonthBegin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2101
4.8.45 CDay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2104
4.9 Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2107

4.9.1 pandas.tseries.frequencies.to_offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2108
4.10 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2108
4.10.1 Standard moving window functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2109
4.10.2 Standard expanding window functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2127
4.10.3 Exponentially-weighted moving window functions . . . . . . . . . . . . . . . . . . . . . . 2140
4.10.4 Window Indexer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2142
4.11 GroupBy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2143
4.11.1 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2143
4.11.2 Function application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2145
4.11.3 Computations / descriptive stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2147
4.12 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2189
4.12.1 Indexing, iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2189
4.12.2 Function application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2190
4.12.3 Upsampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2194
4.12.4 Computations / descriptive stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2206
4.13 Style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2211
4.13.1 Styler constructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2211
4.13.2 Styler properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2226
4.13.3 Style application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2226
4.13.4 Builtin styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2226
4.13.5 Style export and import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2227
4.14 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2227
4.14.1 pandas.plotting.andrews_curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2227
4.14.2 pandas.plotting.autocorrelation_plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2228
4.14.3 pandas.plotting.bootstrap_plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2228
4.14.4 pandas.plotting.boxplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2229
4.14.5 pandas.plotting.deregister_matplotlib_converters . . . . . . . . . . . . . . . . . . . . . . . 2236
4.14.6 pandas.plotting.lag_plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
4.14.7 pandas.plotting.parallel_coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
4.14.8 pandas.plotting.plot_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
4.14.9 pandas.plotting.radviz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
4.14.10 pandas.plotting.register_matplotlib_converters . . . . . . . . . . . . . . . . . . . . . . . . . 2238
4.14.11 pandas.plotting.scatter_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2240
4.14.12 pandas.plotting.table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2240
4.15 General utility functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2241
4.15.1 Working with options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2241
4.15.2 Testing functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2255
4.15.3 Exceptions and warnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2258
4.15.4 Data types related functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2261
4.16 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2288
4.16.1 pandas.api.extensions.register_extension_dtype . . . . . . . . . . . . . . . . . . . . . . . . 2289
4.16.2 pandas.api.extensions.register_dataframe_accessor . . . . . . . . . . . . . . . . . . . . . . 2289
4.16.3 pandas.api.extensions.register_series_accessor . . . . . . . . . . . . . . . . . . . . . . . . . 2290
4.16.4 pandas.api.extensions.register_index_accessor . . . . . . . . . . . . . . . . . . . . . . . . . 2292
4.16.5 pandas.api.extensions.ExtensionDtype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2293
4.16.6 pandas.api.extensions.ExtensionArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2296
4.16.7 pandas.arrays.PandasArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2309
4.16.8 pandas.api.indexers.check_array_indexer . . . . . . . . . . . . . . . . . . . . . . . . . . . 2309

5 Development 2313
5.1 Contributing to pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2313
5.1.1 Where to start? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2314
5.1.2 Bug reports and enhancement requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2315
5.1.3 Working with the code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2315

5.1.4 Contributing to the documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2319
5.1.5 Contributing to the code base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2337
5.1.6 Contributing your changes to pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2350
5.2 pandas code style guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
5.2.1 Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
5.2.2 String formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
5.3 Pandas Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
5.3.1 Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
5.3.2 Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
5.3.3 Issue Triage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
5.3.4 Closing Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2356
5.3.5 Reviewing Pull Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
5.3.6 Cleaning up old Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
5.3.7 Cleaning up old Pull Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
5.3.8 Becoming a pandas maintainer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
5.4 Internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2358
5.4.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2358
5.4.2 Subclassing pandas data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2360
5.5 Extending pandas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2360
5.5.1 Registering custom accessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2360
5.5.2 Extension types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2361
5.5.3 Subclassing pandas data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2364
5.5.4 Plotting backends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2367
5.6 Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2367
5.6.1 Storing pandas DataFrame objects in Apache Parquet format . . . . . . . . . . . . . . . . . 2367
5.7 Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2370
5.7.1 Version Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2370
5.7.2 Python Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2371
5.8 Roadmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2371
5.8.1 Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2371
5.8.2 String data type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2371
5.8.3 Apache Arrow interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2372
5.8.4 Block manager rewrite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2372
5.8.5 Decoupling of indexing and internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2372
5.8.6 Numba-accelerated operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2372
5.8.7 Documentation improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
5.8.8 Package docstring validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
5.8.9 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
5.8.10 Roadmap Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
5.9 Developer Meetings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2374
5.9.1 Minutes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2374
5.9.2 Calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2374

6 Release Notes 2375


6.1 Version 1.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2375
6.1.1 What’s new in 1.0.3 (March 17, 2020) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2375
6.1.2 What’s new in 1.0.2 (March 12, 2020) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2376
6.1.3 What’s new in 1.0.0 (January 29, 2020) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2379
6.2 Version 0.25 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2418
6.2.1 What’s new in 0.25.3 (October 31, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2418
6.2.2 What’s new in 0.25.2 (October 15, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2418
6.2.3 What’s new in 0.25.1 (August 21, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2419
6.2.4 What’s new in 0.25.0 (July 18, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2422
6.3 Version 0.24 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2460

6.3.1 Whats new in 0.24.2 (March 12, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2460
6.3.2 Whats new in 0.24.1 (February 3, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2463
6.3.3 What’s new in 0.24.0 (January 25, 2019) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2464
6.4 Version 0.23 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2521
6.4.1 What’s new in 0.23.4 (August 3, 2018) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2521
6.4.2 What’s new in 0.23.3 (July 7, 2018) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2522
6.4.3 What’s new in 0.23.2 (July 5, 2018) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2523
6.4.4 What’s new in 0.23.1 (June 12, 2018) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
6.4.5 What’s new in 0.23.0 (May 15, 2018) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2530
6.5 Version 0.22 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2580
6.5.1 v0.22.0 (December 29, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2580
6.6 Version 0.21 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2585
6.6.1 v0.21.1 (December 12, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2585
6.6.2 v0.21.0 (October 27, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2590
6.7 Version 0.20 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2622
6.7.1 v0.20.3 (July 7, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2622
6.7.2 v0.20.2 (June 4, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
6.7.3 v0.20.1 (May 5, 2017) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2629
6.8 Version 0.19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2677
6.8.1 v0.19.2 (December 24, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2677
6.8.2 v0.19.1 (November 3, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2680
6.8.3 v0.19.0 (October 2, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2683
6.9 Version 0.18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2728
6.9.1 v0.18.1 (May 3, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2728
6.9.2 v0.18.0 (March 13, 2016) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2747
6.10 Version 0.17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2782
6.10.1 v0.17.1 (November 21, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2782
6.10.2 v0.17.0 (October 9, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2789
6.11 Version 0.16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2819
6.11.1 v0.16.2 (June 12, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2819
6.11.2 v0.16.1 (May 11, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2824
6.11.3 v0.16.0 (March 22, 2015) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2837
6.12 Version 0.15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
6.12.1 v0.15.2 (December 12, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
6.12.2 v0.15.1 (November 9, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2862
6.12.3 v0.15.0 (October 18, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2869
6.13 Version 0.14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2901
6.13.1 v0.14.1 (July 11, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2901
6.13.2 v0.14.0 (May 31, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2908
6.14 Version 0.13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2938
6.14.1 v0.13.1 (February 3, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2938
6.14.2 v0.13.0 (January 3, 2014) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2949
6.15 Version 0.12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2979
6.15.1 v0.12.0 (July 24, 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2979
6.16 Version 0.11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2991
6.16.1 v0.11.0 (April 22, 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2991
6.17 Version 0.10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3001
6.17.1 v0.10.1 (January 22, 2013) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3001
6.17.2 v0.10.0 (December 17, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3007
6.18 Version 0.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3019
6.18.1 v0.9.1 (November 14, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3019
6.18.2 v0.9.0 (October 7, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3024
6.19 Version 0.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3026
6.19.1 v0.8.1 (July 22, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3026

6.19.2 v0.8.0 (June 29, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3027
6.20 Version 0.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3034
6.20.1 v.0.7.3 (April 12, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3034
6.20.2 v.0.7.2 (March 16, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3036
6.20.3 v.0.7.1 (February 29, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3037
6.20.4 v.0.7.0 (February 9, 2012) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3038
6.21 Version 0.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3044
6.21.1 v.0.6.1 (December 13, 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3044
6.21.2 v.0.6.0 (November 25, 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3045
6.22 Version 0.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3047
6.22.1 v.0.5.0 (October 24, 2011) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3047
6.23 Version 0.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3049
6.23.1 v.0.4.1 through v0.4.3 (September 25 - October 9, 2011) . . . . . . . . . . . . . . . . . . . 3049

Bibliography 3051

Python Module Index 3053

pandas: powerful Python data analysis toolkit, Release 1.0.3

Date: Mar 18, 2020 Version: 1.0.3


Download documentation: PDF Version | Zipped HTML
Useful links: Binary Installers | Source Repository | Issues & Ideas | Q&A Support | Mailing List
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data
analysis tools for the Python programming language.
To the getting started guides
To the user guide
To the reference guide
To the development guide

CHAPTER

ONE

WHAT’S NEW IN 1.0.1 (FEBRUARY 5, 2020)

These are the changes in pandas 1.0.1. See Release Notes for a full changelog including other versions of pandas.

1.1 Fixed regressions

• Fixed regression in DataFrame setting values with a slice (e.g. df[-4:] = 1) indexing by label instead of
position (GH31469)
• Fixed regression when indexing a Series or DataFrame indexed by DatetimeIndex with a slice containg
a datetime.date (GH31501)
• Fixed regression in DataFrame.__setitem__ raising an AttributeError with a MultiIndex and
a non-monotonic indexer (GH31449)
• Fixed regression in Series multiplication when multiplying a numeric Series with >10000 elements with a
timedelta-like scalar (GH31457)
• Fixed regression in .groupby().agg() raising an AssertionError for some reductions like min on
object-dtype columns (GH31522)
• Fixed regression in .groupby() aggregations with categorical dtype using Cythonized reduction functions
(e.g. first) (GH31450)
• Fixed regression in GroupBy.apply() if called with a function which returned a non-pandas non-scalar
object (e.g. a list or numpy array) (GH31441)
• Fixed regression in DataFrame.groupby() whereby taking the minimum or maximum of a column with
period dtype would raise a TypeError. (GH31471)
• Fixed regression in DataFrame.groupby() with an empty DataFrame grouping by a level of a MultiIndex
(GH31670).
• Fixed regression in DataFrame.apply() with object dtype and non-reducing function (GH31505)
• Fixed regression in to_datetime() when parsing non-nanosecond resolution datetimes (GH31491)
• Fixed regression in to_csv() where specifying an na_rep might truncate the values written (GH31447)
• Fixed regression in Categorical construction with numpy.str_ categories (GH31499)
• Fixed regression in DataFrame.loc() and DataFrame.iloc() when selecting a row containing a single
datetime64 or timedelta64 column (GH31649)
• Fixed regression where setting pd.options.display.max_colwidth was not accepting negative inte-
ger. In addition, this behavior has been deprecated in favor of using None (GH31532)
• Fixed a return-type warning regression in objToJSON.c (GH31463)


• Fixed regression in qcut() when passed a nullable integer. (GH31389)


• Fixed regression in assigning to a Series using a nullable integer dtype (GH31446)
• Fixed performance regression when indexing a DataFrame or Series with a MultiIndex for the index
using a list of labels (GH31648)
• Fixed regression in read_csv() where the encoding option was not recognized when reading from a file-like object based on RawIOBase (GH31575)

1.2 Deprecations

• Support for a negative integer for pd.options.display.max_colwidth is deprecated in favor of using None (GH31532)
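A minimal sketch of the migration, assuming you previously relied on a negative value to disable truncation:

import pandas as pd

# Previously a negative integer meant "do not truncate wide columns"; that usage is deprecated.
# pd.set_option("display.max_colwidth", -1)

# Use None instead to disable truncation when printing.
pd.set_option("display.max_colwidth", None)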

1.3 Bug fixes

Datetimelike
• Fixed bug in to_datetime() raising when cache=True and out-of-bound values are present (GH31491)
Numeric
• Bug in dtypes being lost in DataFrame.__invert__ (~ operator) with mixed dtypes (GH31183) and for
extension-array backed Series and DataFrame (GH23087)
Plotting
• Plotting tz-aware timeseries no longer gives UserWarning (GH31205)

Interval
• Bug in Series.shift() with interval dtype raising a TypeError when shifting an interval array of
integers or datetimes (GH34195)

1.4 Contributors

A total of 7 people contributed patches to this release. People with a “+” by their names contributed a patch for the
first time.
• Guillaume Lemaitre
• Jeff Reback
• Joris Van den Bossche
• Kaiqi Dong
• MeeseeksMachine
• Pandas Development Team
• Tom Augspurger

CHAPTER

TWO

GETTING STARTED

2.1 Installation

Before you can use pandas, you’ll need to get it installed.


Pandas is part of the Anaconda distribution and can be installed with Anaconda or Miniconda:

conda install pandas

Pandas can be installed via pip from PyPI.

pip install pandas

Learn more
2.2 Intro to pandas

Straight to tutorial. . .
When working with tabular data, such as data stored in spreadsheets or databases, Pandas is the right tool for you.
Pandas will help you to explore, clean and process your data. In Pandas, a data table is called a DataFrame.

To introduction tutorial
To user guide
Straight to tutorial. . .
Pandas supports integration with many file formats and data sources out of the box (csv, excel, sql, json, parquet, ...).
Importing data from each of these sources is provided by a function with the prefix read_*. Similarly, the to_*
methods are used to store data.

To introduction tutorial
To user guide
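Tying back to the read_* and to_* naming described above, a minimal round trip through CSV might look like this (the file names are placeholders):

import pandas as pd

# Read a CSV file into a DataFrame (hypothetical file name).
df = pd.read_csv("data.csv")

# Write the (possibly cleaned) DataFrame back out, without the index column.
df.to_csv("cleaned.csv", index=False)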
Straight to tutorial. . .
Selecting or filtering specific rows and/or columns? Filtering the data on a condition? Methods for slicing, selecting,
and extracting the data you need are available in Pandas.


To introduction tutorial
To user guide
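For example, selecting a couple of columns and filtering rows on a condition might look like this (the column names are invented):

import pandas as pd

df = pd.DataFrame({"name": ["Ann", "Bob", "Cyd"], "age": [25, 32, 19]})

# Keep only the rows matching a condition and only the listed columns.
adults = df.loc[df["age"] >= 21, ["name", "age"]]
print(adults)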
Straight to tutorial. . .
Pandas provides plotting of your data out of the box, using the power of Matplotlib. You can pick the plot type (scatter,
bar, boxplot, ...) corresponding to your data.

To introduction tutorial
To user guide
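As a small sketch, assuming Matplotlib is installed (the columns are made up):

import pandas as pd

df = pd.DataFrame({"height": [1.6, 1.7, 1.8], "weight": [55, 70, 80]})

# Pick the plot type via the .plot accessor; a Matplotlib Axes object is returned.
ax = df.plot.scatter(x="height", y="weight")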
Straight to tutorial. . .
There is no need to loop over all rows of your data table to do calculations. Data manipulations on a column work
elementwise. Adding a column to a DataFrame based on existing data in other columns is straightforward.

To introduction tutorial
To user guide
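A short sketch with invented columns:

import pandas as pd

df = pd.DataFrame({"price": [10.0, 12.5, 9.0], "quantity": [3, 1, 4]})

# Arithmetic on columns works element-wise; no explicit loop over rows is needed.
df["total"] = df["price"] * df["quantity"]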
Straight to tutorial. . .
Basic statistics (mean, median, min, max, counts. . . ) are easily calculable. These or custom aggregations can be
applied on the entire data set, a sliding window of the data or grouped by categories. The latter is also known as the
split-apply-combine approach.

To introduction tutorial
To user guide
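A minimal split-apply-combine sketch (the data is made up):

import pandas as pd

df = pd.DataFrame({"city": ["Oslo", "Oslo", "Bergen"], "temp": [12.1, 14.3, 9.8]})

# A statistic over the whole column...
print(df["temp"].mean())

# ...and the same statistic computed per group.
print(df.groupby("city")["temp"].mean())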
Straight to tutorial. . .
Change the structure of your data table in multiple ways. You can melt() your data table from wide to long/tidy form
or pivot() from long to wide format. With aggregations built in, a pivot table is created with a single command.

To introduction tutorial
To user guide
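For instance, reshaping from wide to long form with melt() might look like this (invented measurement columns):

import pandas as pd

wide = pd.DataFrame({"station": ["A", "B"], "2019": [1.2, 3.4], "2020": [2.1, 3.9]})

# Wide to long/tidy form: one row per (station, year) observation.
tidy = wide.melt(id_vars="station", var_name="year", value_name="value")
print(tidy)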
Straight to tutorial. . .
Multiple tables can be concatenated both column-wise and row-wise, and database-like join/merge operations are pro-
vided to combine multiple tables of data.

To introduction tutorial
To user guide
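A compact sketch of both operations (the tables are invented):

import pandas as pd

left = pd.DataFrame({"key": ["a", "b"], "x": [1, 2]})
right = pd.DataFrame({"key": ["a", "b"], "y": [3, 4]})

# Row-wise concatenation of pieces of a table...
stacked = pd.concat([left.iloc[:1], left.iloc[1:]])

# ...and a database-style join on a common key.
joined = pd.merge(left, right, on="key")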
Straight to tutorial. . .
Pandas has great support for time series and has an extensive set of tools for working with dates, times, and time-
indexed data.
To introduction tutorial
To user guide
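A tiny sketch of date handling and resampling, using synthetic data:

import numpy as np
import pandas as pd

idx = pd.date_range("2020-01-01", periods=10, freq="H")
ts = pd.Series(np.arange(10), index=idx)

# Downsample the hourly series to 3-hourly means.
print(ts.resample("3H").mean())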


Straight to tutorial. . .
Data sets do not only contain numerical data. Pandas provides a wide range of functions to clean textual data and
extract useful information from it.
To introduction tutorial
To user guide
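A short sketch with made-up strings:

import pandas as pd

names = pd.Series([" alice ", "BOB", None])

# Vectorized string methods live under the .str accessor; missing values stay missing.
cleaned = names.str.strip().str.title()
print(cleaned)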

2.3 Coming from. . .

Currently working with other software for data manipulation in a tabular format? You're probably familiar with typical
data operations and know what to do with your tabular data, but lack the syntax to execute these operations. Get to
know the pandas syntax by looking for equivalents from the software you already know:
Learn more
Learn more
Learn more
Learn more

2.4 Community tutorials

The community produces a wide variety of tutorials available online. Some of this material is listed in the community-
contributed Tutorials.
2.4.1 Installation

The easiest way to install pandas is to install it as part of the Anaconda distribution, a cross platform distribution for
data analysis and scientific computing. This is the recommended installation method for most users.
Instructions for installing from source, PyPI, ActivePython, various Linux distributions, or a development version are
also provided.

Python version support

Officially Python 3.6.1 and above, 3.7, and 3.8.

Installing pandas

Installing with Anaconda

Installing pandas and the rest of the NumPy and SciPy stack can be a little difficult for inexperienced users.
The simplest way to install not only pandas, but Python and the most popular packages that make up the SciPy
stack (IPython, NumPy, Matplotlib, . . . ) is with Anaconda, a cross-platform (Linux, Mac OS X, Windows) Python
distribution for data analytics and scientific computing.
After running the installer, the user will have access to pandas and the rest of the SciPy stack without needing to install
anything else, and without needing to wait for any software to be compiled.


Installation instructions for Anaconda can be found here.


A full list of the packages available as part of the Anaconda distribution can be found here.
Another advantage of installing Anaconda is that you don't need admin rights to install it. Anaconda can install in the
user's home directory, which makes it trivial to remove later if you decide to (just delete that folder).

Installing with Miniconda

The previous section outlined how to get pandas installed as part of the Anaconda distribution. However, this approach
means you will install well over one hundred packages and involves downloading an installer that is a few hundred
megabytes in size.
If you want more control over which packages are installed, or have limited internet bandwidth, then installing pandas with
Miniconda may be a better solution.
Conda is the package manager that the Anaconda distribution is built upon. It is a package manager that is both
cross-platform and language agnostic (it can play a similar role to a pip and virtualenv combination).
Miniconda allows you to create a minimal self contained Python installation, and then use the Conda command to
install additional packages.
First you will need Conda to be installed; downloading and running the Miniconda installer will do this for you. The
installer can be found here.
The next step is to create a new conda environment. A conda environment is like a virtualenv that allows you to specify
a specific version of Python and set of libraries. Run the following commands from a terminal window:

conda create -n name_of_my_env python


This will create a minimal environment with only Python installed in it. To put yourself inside this environment run:

source activate name_of_my_env

On Windows the command is:

activate name_of_my_env

The final step required is to install pandas. This can be done with the following command:

conda install pandas

To install a specific pandas version:

conda install pandas=0.20.3

To install other packages, IPython for example:

conda install ipython

To install the full Anaconda distribution:

conda install anaconda

If you need packages that are available to pip but not conda, then install pip, and then use pip to install those packages:

conda install pip


pip install django


Installing from PyPI

pandas can be installed via pip from PyPI.


pip install pandas

Installing with ActivePython

Installation instructions for ActivePython can be found here. Versions 2.7, 3.5 and 3.6 include pandas.

Installing using your Linux distribution’s package manager.

The commands in this table will install pandas for Python 3 from your distribution. To install pandas for Python 2,
you may need to use the python-pandas package.

Distribution      Status                       Download / Repository Link    Install method
Debian            stable                       official Debian repository    sudo apt-get install python3-pandas
Debian & Ubuntu   unstable (latest packages)   NeuroDebian                   sudo apt-get install python3-pandas
Ubuntu            stable                       official Ubuntu repository    sudo apt-get install python3-pandas
OpenSuse          stable                       OpenSuse Repository           zypper in python3-pandas
Fedora            stable                       official Fedora repository    dnf install python3-pandas
Centos/RHEL       stable                       EPEL repository               yum install python3-pandas

However, the packages in the Linux package managers are often a few versions behind, so to get the newest version of
pandas, it's recommended to install using the pip or conda methods described above.

Installing from source

See the contributing guide for complete instructions on building from the git source tree. Further, see creating a
development environment if you wish to create a pandas development environment.

Running the test suite

pandas is equipped with an exhaustive set of unit tests, covering about 97% of the code base as of this writing. To
run it on your machine to verify that everything is working (and that you have all of the dependencies, soft and hard,
installed), make sure you have pytest >= 5.0.1 and Hypothesis >= 3.58, then run:
>>> pd.test()
running: pytest --skip-slow --skip-network C:\Users\TP\Anaconda3\envs\py36\lib\site-packages\pandas

============================= test session starts =============================


platform win32 -- Python 3.6.2, pytest-3.6.0, py-1.4.34, pluggy-0.4.0
rootdir: C:\Users\TP\Documents\Python\pandasdev\pandas, inifile: setup.cfg
collected 12145 items / 3 skipped

..................................................................S......
........S................................................................
.........................................................................

==================== 12130 passed, 12 skipped in 368.339 seconds =====================

Dependencies

Package Minimum supported version


setuptools 24.2.0
NumPy 1.13.3
python-dateutil 2.6.1
pytz 2017.2

Recommended dependencies

• numexpr: for accelerating certain numerical operations. numexpr uses multiple cores as well as smart chunk-
ing and caching to achieve large speedups. If installed, must be Version 2.6.2 or higher.
• bottleneck: for accelerating certain types of nan evaluations. bottleneck uses specialized cython routines
to achieve large speedups. If installed, must be Version 1.2.1 or higher.

Note: You are highly encouraged to install these libraries, as they provide speed improvements, especially when
working with large data sets.

Optional dependencies

Pandas has many optional dependencies that are only used for specific methods. For example, pandas.
read_hdf() requires the pytables package, while DataFrame.to_markdown() requires the tabulate
package. If the optional dependency is not installed, pandas will raise an ImportError when the method requiring
that dependency is called.

Dependency Minimum Version Notes


BeautifulSoup4 4.6.0 HTML parser for read_html (see note)
Jinja2 Conditional formatting with DataFrame.style
PyQt4 Clipboard I/O
PyQt5 Clipboard I/O
PyTables 3.4.2 HDF5-based reading / writing
SQLAlchemy 1.1.4 SQL support for databases other than sqlite
SciPy 0.19.0 Miscellaneous statistical functions
XLsxWriter 0.9.8 Excel writing
blosc Compression for HDF5
fastparquet 0.3.2 Parquet reading / writing
gcsfs 0.2.2 Google Cloud Storage access
html5lib HTML parser for read_html (see note)
lxml 3.8.0 HTML parser for read_html (see note)
matplotlib 2.2.2 Visualization
numba 0.46.0 Alternative execution engine for rolling operations
openpyxl 2.5.7 Reading / writing for xlsx files
pandas-gbq 0.8.0 Google Big Query access
psycopg2 PostgreSQL engine for sqlalchemy
pyarrow 0.12.0 Parquet, ORC (requires 0.13.0), and feather reading / writing
pymysql 0.7.11 MySQL engine for sqlalchemy
pyreadstat SPSS files (.sav) reading
pytables 3.4.2 HDF5 reading / writing
pyxlsb 1.0.6 Reading for xlsb files
qtpy Clipboard I/O
s3fs 0.3.0 Amazon S3 access
tabulate 0.8.3 Printing in Markdown-friendly format (see tabulate)
xarray 0.8.2 pandas-like API for N-dimensional data
xclip Clipboard I/O on linux
xlrd 1.1.0 Excel reading
xlwt 1.2.0 Excel writing
xsel Clipboard I/O on linux
zlib Compression for HDF5

Optional dependencies for parsing HTML

One of the following combinations of libraries is needed to use the top-level read_html() function:
Changed in version 0.23.0.
• BeautifulSoup4 and html5lib
• BeautifulSoup4 and lxml
• BeautifulSoup4 and html5lib and lxml
• Only lxml, although see HTML Table Parsing for reasons as to why you should probably not take this approach.

Warning:
• if you install BeautifulSoup4 you must install either lxml or html5lib or both. read_html() will not work
with only BeautifulSoup4 installed.
• You are highly encouraged to read HTML Table Parsing gotchas. It explains issues surrounding the installa-
tion and usage of the above three libraries.
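Assuming BeautifulSoup4 plus one of the parsers above is installed, a call might look like this (the URL is only a placeholder):

import pandas as pd

# Parse every <table> element on the page into a list of DataFrames;
# flavor selects the underlying HTML parser.
tables = pd.read_html("https://example.com/some_page.html", flavor="lxml")
print(len(tables))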


2.4.2 Package overview

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful
and flexible open source data analysis / manipulation tool available in any language. It is already well on its way
toward this goal.
pandas is well suited for many different kinds of data:
• Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
• Ordered and unordered (not necessarily fixed-frequency) time series data.
• Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
• Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed
into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the
vast majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R users,
DataFrame provides everything that R’s data.frame provides and much more. pandas is built on top of NumPy
and is intended to integrate well within a scientific computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
• Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
• Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
• Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can
simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
• Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both ag-
gregating and transforming data
• Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into
DataFrame objects
• Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
• Intuitive merging and joining data sets
• Flexible reshaping and pivoting of data sets
• Hierarchical labeling of axes (possible to have multiple labels per tick)
• Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading
data from the ultrafast HDF5 format
• Time series-specific functionality: date range generation and frequency conversion, moving window statistics,
date shifting and lagging.
Many of these principles are here to address the shortcomings frequently experienced using other languages / scientific
research environments. For data scientists, working with data is typically divided into multiple stages: munging and
cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for plotting or
tabular display. pandas is the ideal tool for all of these tasks.
Some other notes
• pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However,
as with anything else generalization usually sacrifices performance. So if you focus on one feature for your
application you may be able to create a faster specialized tool.


• pandas is a dependency of statsmodels, making it an important part of the statistical computing ecosystem in
Python.
• pandas has been used extensively in production in financial applications.

Data structures

Dimensions  Name       Description
1           Series     1D labeled homogeneously-typed array
2           DataFrame  General 2D labeled, size-mutable tabular structure with potentially heterogeneously-typed columns

Why more than one data structure?

The best way to think about the pandas data structures is as flexible containers for lower dimensional data. For
example, DataFrame is a container for Series, and Series is a container for scalars. We would like to be able to insert
and remove objects from these containers in a dictionary-like fashion.
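For instance (a sketch with invented column names):

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Insert a column dictionary-style...
df["c"] = df["a"] + df["b"]

# ...and remove one again, either with del or by popping it out as a Series.
del df["b"]
col = df.pop("c")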
Also, we would like sensible default behaviors for the common API functions which take into account the typical
orientation of time series and cross-sectional data sets. When using ndarrays to store 2- and 3-dimensional data, a
burden is placed on the user to consider the orientation of the data set when writing functions; axes are considered
more or less equivalent (except when C- or Fortran-contiguousness matters for performance). In pandas, the axes are
intended to lend more semantic meaning to the data; i.e., for a particular data set there is likely to be a “right” way to
orient the data. The goal, then, is to reduce the amount of mental effort required to code up data transformations in
downstream functions.
For example, with tabular data (DataFrame) it is more semantically helpful to think of the index (the rows) and the
columns rather than axis 0 and axis 1. Iterating through the columns of the DataFrame thus results in more readable
code:

for col in df.columns:
    series = df[col]
    # do something with series

Mutability and copying of data

All pandas data structures are value-mutable (the values they contain can be altered) but not always size-mutable. The
length of a Series cannot be changed, but, for example, columns can be inserted into a DataFrame. However, the vast
majority of methods produce new objects and leave the input data untouched. In general we like to favor immutability
where sensible.
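A brief illustration of that distinction, using made-up data:

import pandas as pd

s = pd.Series([1, 2, 3])
df = pd.DataFrame({"a": s})

# Values are mutable in place...
s.iloc[0] = 99

# ...and a DataFrame is size-mutable: columns can be inserted or dropped.
df["b"] = [4, 5, 6]

# Most methods, however, return a new object and leave the input untouched.
df2 = df.rename(columns={"a": "alpha"})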

Getting support

The first stop for pandas issues and ideas is the Github Issue Tracker. If you have a general question, pandas community
experts can answer through Stack Overflow.


Community

pandas is actively supported today by a community of like-minded individuals around the world who contribute their
valuable time and energy to help make open source pandas possible. Thanks to all of our contributors.
If you’re interested in contributing, please visit the contributing guide.
pandas is a NumFOCUS sponsored project. This will help ensure the success of development of pandas as a world-
class open-source project, and makes it possible to donate to the project.

Project governance

The governance process that pandas project has used informally since its inception in 2008 is formalized in Project
Governance documents. The documents clarify how decisions are made and how the various elements of our commu-
nity interact, including the relationship between open source collaborative development and work that may be funded
by for-profit or non-profit entities.
Wes McKinney is the Benevolent Dictator for Life (BDFL).

Development team

The list of the Core Team members and more detailed information can be found on the people’s page of the governance
repo.

Institutional partners
The information about current institutional partners can be found on the pandas website.

License

BSD 3-Clause License

Copyright (c) 2008-2012, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


2.4.3 10 minutes to pandas

This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.
Customarily, we import as follows:

In [1]: import numpy as np

In [2]: import pandas as pd

Object creation

See the Data Structure Intro section.


Creating a Series by passing a list of values, letting pandas create a default integer index:
In [3]: s = pd.Series([1, 3, 5, np.nan, 6, 8])

In [4]: s
Out[4]:
0 1.0
1 3.0
2 5.0
3 NaN
4 6.0
5 8.0
dtype: float64

Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:

In [5]: dates = pd.date_range('20130101', periods=6)

In [6]: dates
Out[6]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')

In [7]: df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

In [8]: df
Out[8]:
A B C D
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
2013-01-05 0.126389 -0.162619 0.780624 -0.213437
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590

Creating a DataFrame by passing a dict of objects that can be converted to series-like.

In [9]: df2 = pd.DataFrame({'A': 1.,
   ...:                     'B': pd.Timestamp('20130102'),
   ...:                     'C': pd.Series(1, index=list(range(4)), dtype='float32'),
   ...:                     'D': np.array([3] * 4, dtype='int32'),
   ...:                     'E': pd.Categorical(["test", "train", "test", "train"]),
   ...:                     'F': 'foo'})
   ...:

In [10]: df2
Out[10]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo

The columns of the resulting DataFrame have different dtypes.

In [11]: df2.dtypes
Out[11]:
A float64
B datetime64[ns]
C float32
D int32
E category
F object
dtype: object

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled.
Here’s a subset of the attributes that will be completed:

In [12]: df2.<TAB> # noqa: E225, E999


df2.A              df2.bool
df2.abs            df2.boxplot
df2.add            df2.C
df2.add_prefix     df2.clip
df2.add_suffix     df2.clip_lower
df2.align          df2.clip_upper
df2.all            df2.columns
df2.any            df2.combine
df2.append         df2.combine_first
df2.apply          df2.consolidate
df2.applymap       df2.D

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes
have been truncated for brevity.


Viewing data

See the Basics section.


Here is how to view the top and bottom rows of the frame:
In [13]: df.head()
Out[13]:
A B C D
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
2013-01-05 0.126389 -0.162619 0.780624 -0.213437

In [14]: df.tail(3)
Out[14]:
A B C D
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
2013-01-05 0.126389 -0.162619 0.780624 -0.213437
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590

Display the index, columns:


In [15]: df.index
Out[15]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
In [16]: df.columns
Out[16]: Index(['A', 'B', 'C', 'D'], dtype='object')

DataFrame.to_numpy() gives a NumPy representation of the underlying data. Note that this can be an expensive
operation when your DataFrame has columns with different data types, which comes down to a fundamental differ-
ence between pandas and NumPy: NumPy arrays have one dtype for the entire array, while pandas DataFrames
have one dtype per column. When you call DataFrame.to_numpy(), pandas will find the NumPy dtype that
can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a
Python object.
For df, our DataFrame of all floating-point values, DataFrame.to_numpy() is fast and doesn’t require copying
data.
In [17]: df.to_numpy()
Out[17]:
array([[ 0.53725033, -0.31500536, -0.93578271, 1.19968629],
[-1.09344303, 1.27962224, -0.08537764, 1.15689587],
[ 0.04582511, -0.27488522, -0.21329122, 1.03342476],
[-0.22181841, -0.53074538, 0.64564452, 2.90949261],
[ 0.12638926, -0.16261927, 0.78062425, -0.21343653],
[ 0.04573531, -0.55419961, -1.40462594, -0.28659015]])

For df2, the DataFrame with multiple dtypes, DataFrame.to_numpy() is relatively expensive.
In [18]: df2.to_numpy()
Out[18]:
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo']],
dtype=object)

Note: DataFrame.to_numpy() does not include the index or column labels in the output.

describe() shows a quick statistic summary of your data:

In [19]: df.describe()
Out[19]:
A B C D
count 6.000000 6.000000 6.000000 6.000000
mean -0.093344 -0.092972 -0.202135 0.966579
std 0.547968 0.689294 0.858199 1.169000
min -1.093443 -0.554200 -1.404626 -0.286590
25% -0.154930 -0.476810 -0.755160 0.098279
50% 0.045780 -0.294945 -0.149334 1.095160
75% 0.106248 -0.190686 0.462889 1.188989
max 0.537250 1.279622 0.780624 2.909493

Transposing your data:

In [20]: df.T
Out[20]:
2013-01-01 2013-01-02 2013-01-03 2013-01-04 2013-01-05 2013-01-06
A 0.537250 -1.093443 0.045825 -0.221818 0.126389 0.045735
B -0.315005 1.279622 -0.274885 -0.530745 -0.162619 -0.554200
C -0.935783 -0.085378 -0.213291 0.645645 0.780624 -1.404626
D 1.199686 1.156896 1.033425 2.909493 -0.213437 -0.286590

Sorting by an axis:

In [21]: df.sort_index(axis=1, ascending=False)


Out[21]:
D C B A
2013-01-01 1.199686 -0.935783 -0.315005 0.537250
2013-01-02 1.156896 -0.085378 1.279622 -1.093443
2013-01-03 1.033425 -0.213291 -0.274885 0.045825
2013-01-04 2.909493 0.645645 -0.530745 -0.221818
2013-01-05 -0.213437 0.780624 -0.162619 0.126389
2013-01-06 -0.286590 -1.404626 -0.554200 0.045735

Sorting by values:

In [22]: df.sort_values(by='B')
Out[22]:
A B C D
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-05 0.126389 -0.162619 0.780624 -0.213437
2013-01-02 -1.093443 1.279622 -0.085378 1.156896


Selection

Note: While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for
interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc
and .iloc.

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.

Getting

Selecting a single column, which yields a Series, equivalent to df.A:

In [23]: df['A']
Out[23]:
2013-01-01 0.537250
2013-01-02 -1.093443
2013-01-03 0.045825
2013-01-04 -0.221818
2013-01-05 0.126389
2013-01-06 0.045735
Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

In [24]: df[0:3]
Out[24]:
A B C D
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425

In [25]: df['20130102':'20130104']
Out[25]:
A B C D
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-04 -0.221818 -0.530745 0.645645 2.909493

Selection by label

See more in Selection by Label.


For getting a cross section using a label:

In [26]: df.loc[dates[0]]
Out[26]:
A 0.537250
B -0.315005
C -0.935783
D 1.199686
Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:


In [27]: df.loc[:, ['A', 'B']]


Out[27]:
A B
2013-01-01 0.537250 -0.315005
2013-01-02 -1.093443 1.279622
2013-01-03 0.045825 -0.274885
2013-01-04 -0.221818 -0.530745
2013-01-05 0.126389 -0.162619
2013-01-06 0.045735 -0.554200

Showing label slicing, both endpoints are included:


In [28]: df.loc['20130102':'20130104', ['A', 'B']]
Out[28]:
A B
2013-01-02 -1.093443 1.279622
2013-01-03 0.045825 -0.274885
2013-01-04 -0.221818 -0.530745

Reduction in the dimensions of the returned object:


In [29]: df.loc['20130102', ['A', 'B']]
Out[29]:
A -1.093443
B 1.279622
Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:


[email protected]
T56GZSRVAHIn [30]: df.loc[dates[0], 'A']
Out[30]: 0.5372503299875379

For getting fast access to a scalar (equivalent to the prior method):


In [31]: df.at[dates[0], 'A']
Out[31]: 0.5372503299875379

Selection by position

See more in Selection by Position.


Select via the position of the passed integers:
In [32]: df.iloc[3]
Out[32]:
A -0.221818
B -0.530745
C 0.645645
D 2.909493
Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similar to numpy/python:


In [33]: df.iloc[3:5, 0:2]
Out[33]:
A B
2013-01-04 -0.221818 -0.530745
2013-01-05 0.126389 -0.162619

By lists of integer position locations, similar to the numpy/python style:

In [34]: df.iloc[[1, 2, 4], [0, 2]]


Out[34]:
A C
2013-01-02 -1.093443 -0.085378
2013-01-03 0.045825 -0.213291
2013-01-05 0.126389 0.780624

For slicing rows explicitly:

In [35]: df.iloc[1:3, :]
Out[35]:
A B C D
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425

For slicing columns explicitly:

In [36]: df.iloc[:, 1:3]


Out[36]:
B C
2013-01-01 -0.315005 -0.935783
2013-01-02 1.279622 -0.085378
2013-01-03 -0.274885 -0.213291
2013-01-04 -0.530745 0.645645
2013-01-05 -0.162619 0.780624
2013-01-06 -0.554200 -1.404626

For getting a value explicitly:

In [37]: df.iloc[1, 1]
Out[37]: 1.2796222412458425

For getting fast access to a scalar (equivalent to the prior method):

In [38]: df.iat[1, 1]
Out[38]: 1.2796222412458425

Boolean indexing

Using a single column’s values to select data.

In [39]: df[df['A'] > 0]


Out[39]:
A B C D
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-05 0.126389 -0.162619 0.780624 -0.213437
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590

Selecting values from a DataFrame where a boolean condition is met.


In [40]: df[df > 0]


Out[40]:
A B C D
2013-01-01 0.537250 NaN NaN 1.199686
2013-01-02 NaN 1.279622 NaN 1.156896
2013-01-03 0.045825 NaN NaN 1.033425
2013-01-04 NaN NaN 0.645645 2.909493
2013-01-05 0.126389 NaN 0.780624 NaN
2013-01-06 0.045735 NaN NaN NaN

Using the isin() method for filtering:

In [41]: df2 = df.copy()

In [42]: df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']

In [43]: df2
Out[43]:
A B C D E
2013-01-01 0.537250 -0.315005 -0.935783 1.199686 one
2013-01-02 -1.093443 1.279622 -0.085378 1.156896 one
2013-01-03 0.045825 -0.274885 -0.213291 1.033425 two
2013-01-04 -0.221818 -0.530745 0.645645 2.909493 three
2013-01-05 0.126389 -0.162619 0.780624 -0.213437 four
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590 three

In [44]: df2[df2['E'].isin(['two', 'four'])]


Out[44]:
A B C D E
2013-01-03 0.045825 -0.274885 -0.213291 1.033425 two
2013-01-05 0.126389 -0.162619 0.780624 -0.213437 four

Setting

Setting a new column automatically aligns the data by the indexes.

In [45]: s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130102', periods=6))

In [46]: s1
Out[46]:
2013-01-02 1
2013-01-03 2
2013-01-04 3
2013-01-05 4
2013-01-06 5
2013-01-07 6
Freq: D, dtype: int64

In [47]: df['F'] = s1

Setting values by label:

In [48]: df.at[dates[0], 'A'] = 0

Setting values by position:


In [49]: df.iat[0, 1] = 0

Setting by assigning with a NumPy array:

In [50]: df.loc[:, 'D'] = np.array([5] * len(df))

The result of the prior setting operations.

In [51]: df
Out[51]:
A B C D F
2013-01-01 0.000000 0.000000 -0.935783 5 NaN
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0
2013-01-03 0.045825 -0.274885 -0.213291 5 2.0
2013-01-04 -0.221818 -0.530745 0.645645 5 3.0
2013-01-05 0.126389 -0.162619 0.780624 5 4.0
2013-01-06 0.045735 -0.554200 -1.404626 5 5.0

A where operation with setting.

In [52]: df2 = df.copy()

In [53]: df2[df2 > 0] = -df2

In [54]: df2
Out[54]:
A B C D F
2013-01-01 0.000000 0.000000 -0.935783 -5 NaN
2013-01-02 -1.093443 -1.279622 -0.085378 -5 -1.0
2013-01-03 -0.045825 -0.274885 -0.213291 -5 -2.0
2013-01-04 -0.221818 -0.530745 -0.645645 -5 -3.0
2013-01-05 -0.126389 -0.162619 -0.780624 -5 -4.0
2013-01-06 -0.045735 -0.554200 -1.404626 -5 -5.0

Missing data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See
the Missing Data section.
Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.

In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])

In [56]: df1.loc[dates[0]:dates[1], 'E'] = 1

In [57]: df1
Out[57]:
A B C D F E
2013-01-01 0.000000 0.000000 -0.935783 5 NaN 1.0
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0 1.0
2013-01-03 0.045825 -0.274885 -0.213291 5 2.0 NaN
2013-01-04 -0.221818 -0.530745 0.645645 5 3.0 NaN

To drop any rows that have missing data.


In [58]: df1.dropna(how='any')
Out[58]:
A B C D F E
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0 1.0

Filling missing data.

In [59]: df1.fillna(value=5)
Out[59]:
A B C D F E
2013-01-01 0.000000 0.000000 -0.935783 5 5.0 1.0
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0 1.0
2013-01-03 0.045825 -0.274885 -0.213291 5 2.0 5.0
2013-01-04 -0.221818 -0.530745 0.645645 5 3.0 5.0

To get the boolean mask where values are nan.

In [60]: pd.isna(df1)
Out[60]:
A B C D F E
2013-01-01 False False False False True False
2013-01-02 False False False False False False
2013-01-03 False False False False False True
2013-01-04 False False False False False True

Operations
See the Basic section on Binary Ops.

Stats

Operations in general exclude missing data.


Performing a descriptive statistic:

In [61]: df.mean()
Out[61]:
A -0.182885
B -0.040471
C -0.202135
D 5.000000
F 3.000000
dtype: float64

Same operation on the other axis:

In [62]: df.mean(1)
Out[62]:
2013-01-01 1.016054
2013-01-02 1.220160
2013-01-03 1.311530
2013-01-04 1.578616
2013-01-05 1.948879
2013-01-06 1.617382
Freq: D, dtype: float64


Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically
broadcasts along the specified dimension.

In [63]: s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)

In [64]: s
Out[64]:
2013-01-01 NaN
2013-01-02 NaN
2013-01-03 1.0
2013-01-04 3.0
2013-01-05 5.0
2013-01-06 NaN
Freq: D, dtype: float64

In [65]: df.sub(s, axis='index')


Out[65]:
A B C D F
2013-01-01 NaN NaN NaN NaN NaN
2013-01-02 NaN NaN NaN NaN NaN
2013-01-03 -0.954175 -1.274885 -1.213291 4.0 1.0
2013-01-04 -3.221818 -3.530745 -2.354355 2.0 0.0
2013-01-05 -4.873611 -5.162619 -4.219376 0.0 -1.0
2013-01-06 NaN NaN NaN NaN NaN

Apply

Applying functions to the data:
In [66]: df.apply(np.cumsum)
Out[66]:
A B C D F
2013-01-01 0.000000 0.000000 -0.935783 5 NaN
2013-01-02 -1.093443 1.279622 -1.021160 10 1.0
2013-01-03 -1.047618 1.004737 -1.234452 15 3.0
2013-01-04 -1.269436 0.473992 -0.588807 20 6.0
2013-01-05 -1.143047 0.311372 0.191817 25 10.0
2013-01-06 -1.097312 -0.242827 -1.212809 30 15.0

In [67]: df.apply(lambda x: x.max() - x.min())


Out[67]:
A 1.219832
B 1.833822
C 2.185250
D 0.000000
F 4.000000
dtype: float64


Histogramming

See more at Histogramming and Discretization.

In [68]: s = pd.Series(np.random.randint(0, 7, size=10))

In [69]: s
Out[69]:
0 3
1 4
2 4
3 3
4 1
5 3
6 3
7 5
8 4
9 0
dtype: int64

In [70]: s.value_counts()
Out[70]:
3 4
4 3
5 1
1 1
0 1
dtype: int64
String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each
element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions
by default (and in some cases always uses them). See more at Vectorized String Methods.

In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

In [72]: s.str.lower()
Out[72]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object


Merge

Concat

pandas provides various facilities for easily combining together Series and DataFrame objects with various kinds of
set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
See the Merging section.
Concatenating pandas objects together with concat():

In [73]: df = pd.DataFrame(np.random.randn(10, 4))

In [74]: df
Out[74]:
0 1 2 3
0 0.734273 -0.935628 0.902144 0.063131
1 -0.493928 0.905459 -0.736241 0.330944
2 0.101657 -2.083426 0.254902 0.026104
3 0.347046 0.407484 0.130171 -0.146293
4 1.094031 0.941765 -0.698465 1.187225
5 0.781335 -0.858982 -0.051083 -0.894259
6 -1.818150 0.571072 -0.639691 -0.103313
7 -1.528309 0.684885 -0.450234 0.121959
8 -1.545637 -1.075357 -0.377368 0.937646
9 0.960006 1.657349 0.973478 -0.746665

# break it into pieces


In [75]: pieces = [df[:3], df[3:7], df[7:]]
In [76]: pd.concat(pieces)
Out[76]:
0 1 2 3
0 0.734273 -0.935628 0.902144 0.063131
1 -0.493928 0.905459 -0.736241 0.330944
2 0.101657 -2.083426 0.254902 0.026104
3 0.347046 0.407484 0.130171 -0.146293
4 1.094031 0.941765 -0.698465 1.187225
5 0.781335 -0.858982 -0.051083 -0.894259
6 -1.818150 0.571072 -0.639691 -0.103313
7 -1.528309 0.684885 -0.450234 0.121959
8 -1.545637 -1.075357 -0.377368 0.937646
9 0.960006 1.657349 0.973478 -0.746665

Note: Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be
expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a
DataFrame by iteratively appending records to it. See Appending to dataframe for more.
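A quick sketch of the recommended pattern (the records are invented):

import pandas as pd

records = []
for i in range(3):
    # Collect plain Python records first...
    records.append({"id": i, "value": i ** 2})

# ...then build the DataFrame once, instead of appending row by row.
df = pd.DataFrame(records)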


Join

SQL style merges. See the Database style joining section.

In [77]: left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

In [78]: right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

In [79]: left
Out[79]:
key lval
0 foo 1
1 foo 2

In [80]: right
Out[80]:
key rval
0 foo 4
1 foo 5

In [81]: pd.merge(left, right, on='key')


Out[81]:
key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5

Another example that can be given is:
In [82]: left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

In [83]: right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

In [84]: left
Out[84]:
key lval
0 foo 1
1 bar 2

In [85]: right
Out[85]:
key rval
0 foo 4
1 bar 5

In [86]: pd.merge(left, right, on='key')


Out[86]:
key lval rval
0 foo 1 4
1 bar 2 5


Grouping

By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria
• Applying a function to each group independently
• Combining the results into a data structure
See the Grouping section.

In [87]: df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',


....: 'foo', 'bar', 'foo', 'foo'],
....: 'B': ['one', 'one', 'two', 'three',
....: 'two', 'two', 'one', 'three'],
....: 'C': np.random.randn(8),
....: 'D': np.random.randn(8)})
....:

In [88]: df
Out[88]:
A B C D
0 foo one -1.708850 -0.063695
1 bar one 0.378037 0.550798
2 foo two 0.215822 1.216365
3 bar three 0.753779 0.590228
4 foo two 0.178214 -0.849016
5 bar two 1.623137 1.803818
6 foo one 2.769917 1.362410
7 foo three 0.645515 0.412037
Grouping and then applying the sum() function to the resulting groups.

In [89]: df.groupby('A').sum()
Out[89]:
C D
A
bar 2.754954 2.944844
foo 2.100618 2.078100

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

In [90]: df.groupby(['A', 'B']).sum()


Out[90]:
C D
A B
bar one 0.378037 0.550798
three 0.753779 0.590228
two 1.623137 1.803818
foo one 1.061067 1.298715
three 0.645515 0.412037
two 0.394036 0.367349


Reshaping

See the sections on Hierarchical Indexing and Reshaping.

Stack

In [91]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',


....: 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two',
....: 'one', 'two', 'one', 'two']]))
....:

In [92]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

In [93]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])

In [94]: df2 = df[:4]

In [95]: df2
Out[95]:
A B
first second
bar one 1.294317 1.636713
two 0.986587 -0.877156
baz one -0.649757 1.186025
two -2.445453 -0.108421

[email protected]
T56GZSRVAHThe stack() method “compresses” a level in the DataFrame’s columns.
In [96]: stacked = df2.stack()

In [97]: stacked
Out[97]:
first second
bar one A 1.294317
B 1.636713
two A 0.986587
B -0.877156
baz one A -0.649757
B 1.186025
two A -2.445453
B -0.108421
dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is
unstack(), which by default unstacks the last level:
In [98]: stacked.unstack()
Out[98]:
A B
first second
bar one 1.294317 1.636713
two 0.986587 -0.877156
baz one -0.649757 1.186025
two -2.445453 -0.108421



In [99]: stacked.unstack(1)
Out[99]:
second one two
first
bar A 1.294317 0.986587
B 1.636713 -0.877156
baz A -0.649757 -2.445453
B 1.186025 -0.108421

In [100]: stacked.unstack(0)
Out[100]:
first bar baz
second
one A 1.294317 -0.649757
B 1.636713 1.186025
two A 0.986587 -2.445453
B -0.877156 -0.108421

Pivot tables

See the section on Pivot Tables.


In [101]: df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
.....: 'B': ['A', 'B', 'C'] * 4,
.....: 'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
.....: 'D': np.random.randn(12),
[email protected]
.....: 'E': np.random.randn(12)})
T56GZSRVAH
.....:

In [102]: df
Out[102]:
A B C D E
0 one A foo -0.618069 -0.814689
1 one B foo 0.846151 1.033482
2 two C foo -0.494035 -0.541444
3 three A bar -1.118823 0.254531
4 one B bar -0.340439 -0.604735
5 one C bar 0.945814 -0.955822
6 two A foo 0.823720 0.544094
7 three B foo 0.812442 1.461520
8 one C foo 2.212842 -1.555660
9 one A bar 0.632421 0.290112
10 two B bar 0.387412 -0.880864
11 three C bar 1.778351 -0.353401

We can produce pivot tables from this data very easily:


In [103]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
Out[103]:
C bar foo
A B
one A 0.632421 -0.618069
B -0.340439 0.846151
C 0.945814 2.212842
three A -1.118823 NaN


B NaN 0.812442
C 1.778351 NaN
two A NaN 0.823720
B 0.387412 NaN
C NaN -0.494035

Time series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency con-
version (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial
applications. See the Time Series section.
In [104]: rng = pd.date_range('1/1/2012', periods=100, freq='S')

In [105]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)

In [106]: ts.resample('5Min').sum()
Out[106]:
2012-01-01 23247
Freq: 5T, dtype: int64

Time zone representation:


In [107]: rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')

In [108]: ts = pd.Series(np.random.randn(len(rng)), rng)


[email protected]
T56GZSRVAHIn [109]: ts
Out[109]:
2012-03-06 1.215326
2012-03-07 0.265352
2012-03-08 -0.142587
2012-03-09 0.134160
2012-03-10 -0.842578
Freq: D, dtype: float64

In [110]: ts_utc = ts.tz_localize('UTC')

In [111]: ts_utc
Out[111]:
2012-03-06 00:00:00+00:00 1.215326
2012-03-07 00:00:00+00:00 0.265352
2012-03-08 00:00:00+00:00 -0.142587
2012-03-09 00:00:00+00:00 0.134160
2012-03-10 00:00:00+00:00 -0.842578
Freq: D, dtype: float64

Converting to another time zone:


In [112]: ts_utc.tz_convert('US/Eastern')
Out[112]:
2012-03-05 19:00:00-05:00 1.215326
2012-03-06 19:00:00-05:00 0.265352
2012-03-07 19:00:00-05:00 -0.142587
2012-03-08 19:00:00-05:00 0.134160


2012-03-09 19:00:00-05:00 -0.842578
Freq: D, dtype: float64

Converting between time span representations:

In [113]: rng = pd.date_range('1/1/2012', periods=5, freq='M')

In [114]: ts = pd.Series(np.random.randn(len(rng)), index=rng)

In [115]: ts
Out[115]:
2012-01-31 2.872280
2012-02-29 -0.138958
2012-03-31 -0.006695
2012-04-30 0.114531
2012-05-31 0.061088
Freq: M, dtype: float64

In [116]: ps = ts.to_period()

In [117]: ps
Out[117]:
2012-01 2.872280
2012-02 -0.138958
2012-03 -0.006695
2012-04 0.114531
2012-05 0.061088
Freq: M, dtype: float64
In [118]: ps.to_timestamp()
Out[118]:
2012-01-01 2.872280
2012-02-01 -0.138958
2012-03-01 -0.006695
2012-04-01 0.114531
2012-05-01 0.061088
Freq: MS, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:

In [119]: prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')

In [120]: ts = pd.Series(np.random.randn(len(prng)), prng)

In [121]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9

In [122]: ts.head()
Out[122]:
1990-03-01 09:00 -0.047052
1990-06-01 09:00 2.133754
1990-09-01 09:00 0.694554
1990-12-01 09:00 1.031604
1991-03-01 09:00 -0.477875
Freq: H, dtype: float64


Categoricals

pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API
documentation.

In [123]: df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],


.....: "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
.....:

Convert the raw grades to a categorical data type.

In [124]: df["grade"] = df["raw_grade"].astype("category")

In [125]: df["grade"]
Out[125]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!).

In [126]: df["grade"].cat.categories = ["very good", "good", "very bad"]

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

In [127]: df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium",


.....: "good", "very good"])
.....:

In [128]: df["grade"]
Out[128]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]

Sorting is per order in the categories, not lexical order.

In [129]: df.sort_values(by="grade")
Out[129]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good


Grouping by a categorical column also shows empty categories.

In [130]: df.groupby("grade").size()
Out[130]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64

Plotting

See the Plotting docs.


We use the standard convention for referencing the matplotlib API:

In [131]: import matplotlib.pyplot as plt

In [132]: plt.close('all')

In [133]: ts = pd.Series(np.random.randn(1000),
.....: index=pd.date_range('1/1/2000', periods=1000))
.....:

In [134]: ts = ts.cumsum()
[email protected]
T56GZSRVAHIn [135]: ts.plot()
Out[135]: <matplotlib.axes._subplots.AxesSubplot at 0x7f69fa500bd0>


On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

In [136]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,


.....: columns=['A', 'B', 'C', 'D'])
.....:

In [137]: df = df.cumsum()

In [138]: plt.figure()
Out[138]: <Figure size 640x480 with 0 Axes>

In [139]: df.plot()
Out[139]: <matplotlib.axes._subplots.AxesSubplot at 0x7f69f9a960d0>

In [140]: plt.legend(loc='best')
Out[140]: <matplotlib.legend.Legend at 0x7f69f9a96d90>


Getting data in/out

CSV

Writing to a csv file.

In [141]: df.to_csv('foo.csv')

Reading from a csv file.

In [142]: pd.read_csv('foo.csv')
Out[142]:
Unnamed: 0 A B C D
0 2000-01-01 -0.024395 -0.459905 0.424974 0.460299
1 2000-01-02 0.441403 0.309116 0.295536 -1.331048
2 2000-01-03 1.412170 0.519094 0.759803 -0.798177
3 2000-01-04 -0.280951 -0.284814 1.419353 -1.598425
4 2000-01-05 -1.961733 0.986828 3.894422 -2.294805
.. ... ... ... ... ...
995 2002-09-22 -19.414236 27.809222 39.064016 20.429488
996 2002-09-23 -20.199321 28.740891 36.143194 20.148467
997 2002-09-24 -21.278959 29.251941 36.579199 20.988765
998 2002-09-25 -21.462526 27.865121 36.807859 19.868755


999 2002-09-26 -18.016134 27.587711 37.633386 19.861016

[1000 rows x 5 columns]

HDF5

Reading and writing to HDFStores.


Writing to a HDF5 Store.
In [143]: df.to_hdf('foo.h5', 'df')

Reading from a HDF5 Store.


In [144]: pd.read_hdf('foo.h5', 'df')
Out[144]:
A B C D
2000-01-01 -0.024395 -0.459905 0.424974 0.460299
2000-01-02 0.441403 0.309116 0.295536 -1.331048
2000-01-03 1.412170 0.519094 0.759803 -0.798177
2000-01-04 -0.280951 -0.284814 1.419353 -1.598425
2000-01-05 -1.961733 0.986828 3.894422 -2.294805
... ... ... ... ...
2002-09-22 -19.414236 27.809222 39.064016 20.429488
2002-09-23 -20.199321 28.740891 36.143194 20.148467
2002-09-24 -21.278959 29.251941 36.579199 20.988765
[email protected]
2002-09-25 -21.462526 27.865121 36.807859 19.868755
T56GZSRVAH2002-09-26 -18.016134 27.587711 37.633386 19.861016

[1000 rows x 4 columns]

Excel

Reading and writing to MS Excel.


Writing to an excel file.
In [145]: df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an excel file.


In [146]: pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
Out[146]:
Unnamed: 0 A B C D
0 2000-01-01 -0.024395 -0.459905 0.424974 0.460299
1 2000-01-02 0.441403 0.309116 0.295536 -1.331048
2 2000-01-03 1.412170 0.519094 0.759803 -0.798177
3 2000-01-04 -0.280951 -0.284814 1.419353 -1.598425
4 2000-01-05 -1.961733 0.986828 3.894422 -2.294805
.. ... ... ... ... ...
995 2002-09-22 -19.414236 27.809222 39.064016 20.429488
996 2002-09-23 -20.199321 28.740891 36.143194 20.148467
997 2002-09-24 -21.278959 29.251941 36.579199 20.988765
998 2002-09-25 -21.462526 27.865121 36.807859 19.868755

999 2002-09-26 -18.016134 27.587711 37.633386 19.861016

[1000 rows x 5 columns]

Gotchas

If you are attempting to perform an operation you might see an exception like:

>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback
    ...
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
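A minimal sketch of what the error message suggests, reducing the Series to a single boolean explicitly (the printed messages are illustrative only):

s = pd.Series([False, True, False])

if s.any():        # True if at least one element is True
    print("at least one value was true")

if not s.empty:    # True if the Series contains at least one element
    print("the series is not empty")

if s.all():        # False here, because not every element is True
    print("every value was true")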

See Comparisons for an explanation and what to do.


See Gotchas as well.

2.4.4 Getting started tutorials

What kind of data does pandas handle?

I want to start using pandas

In [1]: import pandas as pd


[email protected]
T56GZSRVAH
To load the pandas package and start working with it, import the package. The community agreed alias for pandas is
pd, so loading pandas as pd is assumed standard practice for all of the pandas documentation.

Pandas data table representation

I want to store passenger data of the Titanic. For a number of passengers, I know the name (characters), age (integers)
and sex (male/female) data.

In [2]: df = pd.DataFrame({
...: "Name": ["Braund, Mr. Owen Harris",
...: "Allen, Mr. William Henry",
...: "Bonnell, Miss. Elizabeth"],
...: "Age": [22, 35, 58],
...: "Sex": ["male", "male", "female"]}
...: )
...:

In [3]: df
Out[3]:
Name Age Sex
0 Braund, Mr. Owen Harris 22 male
1 Allen, Mr. William Henry 35 male
2 Bonnell, Miss. Elizabeth 58 female


To manually store data in a table, create a DataFrame. When using a Python dictionary of lists, the dictionary keys
will be used as column headers and the values in each list as rows of the DataFrame.
A DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers,
floating point values, categorical data and more) in columns. It is similar to a spreadsheet, a SQL table or the data.frame in R.
• The table has 3 columns, each of them with a column label. The column labels are respectively Name, Age and
Sex.
• The column Name consists of textual data with each value a string, the column Age consists of numbers and the column Sex is textual data.
In spreadsheet software, the table representation of our data would look very similar.

Each column in a DataFrame is a Series

I’m just interested in working with the data in the column Age

In [4]: df["Age"]
Out[4]:
0 22
1 35
2 58
Name: Age, dtype: int64

When selecting a single column of a pandas DataFrame, the result is a pandas Series. To select the column, use
the column label in between square brackets [].

Note: If you are familiar with Python dictionaries, the selection of a single column is very similar to the selection of dictionary values based on the key.

You can create a Series from scratch as well:

In [5]: ages = pd.Series([22, 35, 58], name="Age")

In [6]: ages
Out[6]:
0 22
1 35
2 58
Name: Age, dtype: int64

A pandas Series has no column labels, as it is just a single column of a DataFrame. A Series does have row
labels.

Do something with a DataFrame or Series

I want to know the maximum Age of the passengers


We can do this on the DataFrame by selecting the Age column and applying max():

In [7]: df["Age"].max()
Out[7]: 58

Or to the Series:
[email protected]
In [8]: ages.max()
T56GZSRVAHOut[8]: 58

As illustrated by the max() method, you can do things with a DataFrame or Series. pandas provides a lot of
functionalities, each of them a method you can apply to a DataFrame or Series. As methods are functions, do not
forget to use parentheses ().
I’m interested in some basic statistics of the numerical data of my data table

In [9]: df.describe()
Out[9]:
Age
count 3.000000
mean 38.333333
std 18.230012
min 22.000000
25% 28.500000
50% 35.000000
75% 46.500000
max 58.000000

The describe() method provides a quick overview of the numerical data in a DataFrame. As the Name and Sex
columns are textual data, these are by default not taken into account by the describe() method.
Many pandas operations return a DataFrame or a Series. The describe() method is an example of a pandas
operation returning a pandas Series.
Check more options on describe in the user guide section about aggregations with describe


Note: This is just a starting point. Similar to spreadsheet software, pandas represents data as a table with columns and rows. Beyond the representation, the data manipulations and calculations you would do in spreadsheet software are also supported by pandas. Continue reading the next tutorials to get started!

• Import the package, aka import pandas as pd


• A table of data is stored as a pandas DataFrame
• Each column in a DataFrame is a Series
• You can do things by applying a method to a DataFrame or Series
A more extended explanation to DataFrame and Series is provided in the introduction to data structures.

In [1]: import pandas as pd

This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has values 0 and 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
• SibSp: Indicates whether the passenger has siblings or a spouse aboard.
• Parch: Indicates whether the passenger is alone or has family aboard.
• Ticket: Ticket number of the passenger.
• Fare: The fare paid by the passenger.
• Cabin: The cabin of the passenger.
• Embarked: The port of embarkation.

How do I read and write tabular data?

I want to analyse the titanic passenger data, available as a CSV file.

In [2]: titanic = pd.read_csv("data/titanic.csv")

pandas provides the read_csv() function to read data stored as a csv file into a pandas DataFrame. pandas
supports many different file formats or data sources out of the box (csv, excel, sql, json, parquet, . . . ), each of them
with the prefix read_*.
Make sure to always check the data after reading it in. When displaying a DataFrame, the first and last 5 rows will be shown by default:

In [3]: titanic
Out[3]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked


0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→ female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→ female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→ female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S
.. ... ... ... ...
˓→ ... ... ... ... ... ... ... ...
886 887 0 2 Montvila, Rev. Juozas
˓→ male 27.0 0 0 211536 13.0000 NaN S
887 888 1 1 Graham, Miss. Margaret Edith
˓→ female 19.0 0 0 112053 30.0000 B42 S
888 889 0 3 Johnston, Miss. Catherine Helen "Carrie"
˓→ female NaN 1 2 W./C. 6607 23.4500 NaN S
889 890 1 1 Behr, Mr. Karl Howell
˓→ male 26.0 0 0 111369 30.0000 C148 C
890 891 0 3 Dooley, Mr. Patrick
˓→ male 32.0 0 0 370376 7.7500 NaN Q

[891 rows x 12 columns]

I want to see the first 8 rows of a pandas DataFrame.


In [4]: titanic.head(8)
[email protected]
T56GZSRVAHOut[4]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S
5 6 0 3 Moran, Mr. James
˓→ male NaN 0 0 330877 8.4583 NaN Q
6 7 0 1 McCarthy, Mr. Timothy J
˓→ male 54.0 0 0 17463 51.8625 E46 S
7 8 0 3 Palsson, Master. Gosta Leonard
˓→ male 2.0 3 1 349909 21.0750 NaN S

To see the first N rows of a DataFrame, use the head() method with the required number of rows (in this case 8)
as argument.

Note: Interested in the last N rows instead? pandas also provides a tail() method. For example, titanic.
tail(10) will return the last 10 rows of the DataFrame.

A check on how pandas interpreted each of the column data types can be done by requesting the pandas dtypes
attribute:


In [5]: titanic.dtypes
Out[5]:
PassengerId int64
Survived int64
Pclass int64
Name object
Sex object
Age float64
SibSp int64
Parch int64
Ticket object
Fare float64
Cabin object
Embarked object
dtype: object

For each of the columns, the data type used is listed. The data types in this DataFrame are integers (int64), floats (float64) and strings (object).

Note: When asking for the dtypes, no brackets are used! dtypes is an attribute of a DataFrame and Series. Attributes of a DataFrame or Series do not need brackets. Attributes represent a characteristic of a DataFrame/Series, whereas a method (which requires brackets) does something with the DataFrame/Series, as introduced in the first tutorial.

My colleague requested the titanic data as a spreadsheet.

In [6]: titanic.to_excel('titanic.xlsx', sheet_name='passengers', index=False)


[email protected]
T56GZSRVAH
Whereas read_* functions are used to read data into pandas, the to_* methods are used to store data. The to_excel() method stores the data as an excel file. In the example here, the sheet_name is named passengers instead of the default Sheet1. By setting index=False the row index labels are not saved in the spreadsheet.
The equivalent read function read_excel() will reload the data to a DataFrame:

In [7]: titanic = pd.read_excel('titanic.xlsx', sheet_name='passengers')

In [8]: titanic.head()
Out[8]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S

I’m interested in a technical summary of a DataFrame

In [9]: titanic.info()
<class 'pandas.core.frame.DataFrame'>


RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB

The method info() provides technical information about a DataFrame, so let’s explain the output in more detail:
• It is indeed a DataFrame.
• There are 891 entries, i.e. 891 rows.
• Each row has a row label (aka the index) with values ranging from 0 to 890.
• The table has 12 columns. Most columns have a value for each of the rows (all 891 values are non-null).
Some columns do have missing values and less than 891 non-null values.
[email protected]
T56GZSRVAH • The columns Name, Sex, Cabin and Embarked consists of textual data (strings, aka object). The other
columns are numerical data with some of them whole numbers (aka integer) and others are real numbers
(aka float).
• The kind of data (characters, integers,. . . ) in the different columns are summarized by listing the dtypes.
• The approximate amount of RAM used to hold the DataFrame is provided as well.
• Getting data into pandas from many different file formats or data sources is supported by read_* functions.
• Exporting data out of pandas is provided by different to_* methods.
• The head/tail/info methods and the dtypes attribute are convenient for a first check.
For a complete overview of the input and output possibilities from and to pandas, see the user guide section about reader and writer functions.
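As a small sketch of this naming pattern (the file names below are placeholders, not files shipped with the tutorial), the same DataFrame can be written to and read back from several formats:

titanic.to_csv('titanic_copy.csv', index=False)   # write a CSV file
titanic.to_json('titanic_copy.json')              # write a JSON file
pd.read_csv('titanic_copy.csv')                   # read the CSV back into a DataFrame
pd.read_json('titanic_copy.json')                 # read the JSON back into a DataFrame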

In [1]: import pandas as pd

This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has values 0 and 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
• SibSp: Indicates whether the passenger has siblings or a spouse aboard.
• Parch: Indicates whether the passenger is alone or has family aboard.
• Ticket: Ticket number of the passenger.
• Fare: The fare paid by the passenger.
• Cabin: The cabin of the passenger.
• Embarked: The port of embarkation.

In [2]: titanic = pd.read_csv("data/titanic.csv")

In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S

How do I select a subset of a DataFrame?


[email protected]
T56GZSRVAH
How do I select specific columns from a DataFrame?

I’m interested in the age of the titanic passengers.

In [4]: ages = titanic["Age"]

In [5]: ages.head()
Out[5]:
0 22.0
1 38.0
2 26.0
3 35.0
4 35.0
Name: Age, dtype: float64

To select a single column, use square brackets [] with the column name of the column of interest.
Each column in a DataFrame is a Series. As a single column is selected, the returned object is a pandas Series. We can verify this by checking the type of the output:

In [6]: type(titanic["Age"])
Out[6]: pandas.core.series.Series

And have a look at the shape of the output:


In [7]: titanic["Age"].shape
Out[7]: (891,)

DataFrame.shape is an attribute (remember the tutorial on reading and writing; do not use parentheses for attributes)
of a pandas Series and DataFrame containing the number of rows and columns: (nrows, ncolumns). A pandas
Series is 1-dimensional and only the number of rows is returned.
I’m interested in the age and sex of the titanic passengers.
In [8]: age_sex = titanic[["Age", "Sex"]]

In [9]: age_sex.head()
Out[9]:
Age Sex
0 22.0 male
1 38.0 female
2 26.0 female
3 35.0 female
4 35.0 male

To select multiple columns, use a list of column names within the selection brackets [].

Note: The inner square brackets define a Python list with column names, whereas the outer brackets are used to select
the data from a pandas DataFrame as seen in the previous example.

The returned data type is a pandas DataFrame:


[email protected]
In [10]: type(titanic[["Age", "Sex"]])
T56GZSRVAHOut[10]: pandas.core.frame.DataFrame

In [11]: titanic[["Age", "Sex"]].shape


Out[11]: (891, 2)

The selection returned a DataFrame with 891 rows and 2 columns. Remember, a DataFrame is 2-dimensional
with both a row and column dimension.
For basic information on indexing, see the user guide section on indexing and selecting data.

How do I filter specific rows from a DataFrame?

I’m interested in the passengers older than 35 years.


In [12]: above_35 = titanic[titanic["Age"] > 35]

In [13]: above_35.head()
Out[13]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
6 7 0 1 McCarthy, Mr. Timothy J
˓→ male 54.0 0 0 17463 51.8625 E46 S
11 12 1 1 Bonnell, Miss. Elizabeth
˓→female 58.0 0 0 113783 26.5500 C103 S



13 14 0 3 Andersson, Mr. Anders Johan
˓→ male 39.0 1 5 347082 31.2750 NaN S
15 16 1 2 Hewlett, Mrs. (Mary D Kingcome)
˓→ female 55.0 0 0 248706 16.0000 NaN S

To select rows based on a conditional expression, use a condition inside the selection brackets [].
The condition inside the selection brackets titanic["Age"] > 35 checks for which rows the Age column has a
value larger than 35:

In [14]: titanic["Age"] > 35


Out[14]:
0 False
1 True
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Age, Length: 891, dtype: bool

The output of the conditional expression (>, but also ==, !=, <, <=,. . . would work) is actually a pandas Series of
boolean values (either True or False) with the same number of rows as the original DataFrame. Such a Series
of boolean values can be used to filter the DataFrame by putting it in between the selection brackets []. Only rows
[email protected]
T56GZSRVAHfor which the value is True will be selected.
We know from before that the original titanic DataFrame consists of 891 rows. Let’s have a look at the amount of
rows which satisfy the condition by checking the shape attribute of the resulting DataFrame above_35:

In [15]: above_35.shape
Out[15]: (217, 12)

I’m interested in the titanic passengers from cabin class 2 and 3.

In [16]: class_23 = titanic[titanic["Pclass"].isin([2, 3])]

In [17]: class_23.head()
Out[17]:
PassengerId Survived Pclass Name Sex Age SibSp
˓→ Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1
˓→ 0 A/5 21171 7.2500 NaN S
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0
˓→ 0 STON/O2. 3101282 7.9250 NaN S
4 5 0 3 Allen, Mr. William Henry male 35.0 0
˓→ 0 373450 8.0500 NaN S
5 6 0 3 Moran, Mr. James male NaN 0
˓→ 0 330877 8.4583 NaN Q
7 8 0 3 Palsson, Master. Gosta Leonard male 2.0 3
˓→ 1 349909 21.0750 NaN S

Similar to the conditional expression, the isin() conditional function returns a True for each row the values are in
the provided list. To filter the rows based on such a function, use the conditional function inside the selection brackets


[]. In this case, the condition inside the selection brackets titanic["Pclass"].isin([2, 3]) checks for
which rows the Pclass column is either 2 or 3.
The above is equivalent to filtering by rows for which the class is either 2 or 3 and combining the two statements with
an | (or) operator:

In [18]: class_23 = titanic[(titanic["Pclass"] == 2) | (titanic["Pclass"] == 3)]

In [19]: class_23.head()
Out[19]:
PassengerId Survived Pclass Name Sex Age SibSp
˓→ Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1
˓→ 0 A/5 21171 7.2500 NaN S
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0
˓→ 0 STON/O2. 3101282 7.9250 NaN S
4 5 0 3 Allen, Mr. William Henry male 35.0 0
˓→ 0 373450 8.0500 NaN S
5 6 0 3 Moran, Mr. James male NaN 0
˓→ 0 330877 8.4583 NaN Q
7 8 0 3 Palsson, Master. Gosta Leonard male 2.0 3
˓→ 1 349909 21.0750 NaN S

Note: When combining multiple conditional statements, each condition must be surrounded by parentheses (). Moreover, you cannot use or/and but need to use the or operator | and the and operator &.

See the dedicated section in the user guide about boolean indexing or about the isin function.
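For example, a minimal sketch (not part of the original tutorial) combining two conditions with the and operator &, each wrapped in parentheses:

# Female passengers older than 35: both conditions must hold for a row to be kept.
older_women = titanic[(titanic["Sex"] == "female") & (titanic["Age"] > 35)]
older_women.head()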
[email protected]
T56GZSRVAHI want to work with passenger data for which the age is known.
In [20]: age_no_na = titanic[titanic["Age"].notna()]

In [21]: age_no_na.head()
Out[21]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S

The notna() conditional function returns True for each row where the value is not a null value. As such, this can be combined with the selection brackets [] to filter the data table.
You might wonder what actually changed, as the first 5 lines are still the same values. One way to verify is to check if
the shape has changed:

In [22]: age_no_na.shape
Out[22]: (714, 12)

For more dedicated functions on missing values, see the user guide section about handling missing data.


How do I select specific rows and columns from a DataFrame?

I’m interested in the names of the passengers older than 35 years.


In [23]: adult_names = titanic.loc[titanic["Age"] > 35, "Name"]

In [24]: adult_names.head()
Out[24]:
1 Cumings, Mrs. John Bradley (Florence Briggs Th...
6 McCarthy, Mr. Timothy J
11 Bonnell, Miss. Elizabeth
13 Andersson, Mr. Anders Johan
15 Hewlett, Mrs. (Mary D Kingcome)
Name: Name, dtype: object

In this case, a subset of both rows and columns is made in one go and just using selection brackets [] is not sufficient
anymore. The loc/iloc operators are required in front of the selection brackets []. When using loc/iloc, the
part before the comma is the rows you want, and the part after the comma is the columns you want to select.
When using the column names, row labels or a condition expression, use the loc operator in front of the selection
brackets []. For both the part before and after the comma, you can use a single label, a list of labels, a slice of labels,
a conditional expression or a colon. Using a colon specifies you want to select all rows or columns.
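For instance, a small sketch (hypothetical selections, not part of the original example) of using a colon to select everything along one of the two axes:

titanic.loc[:, "Name"]                  # all rows, only the Name column
titanic.loc[titanic["Age"] > 35, :]     # rows matching the condition, all columns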
I’m interested in rows 10 till 25 and columns 3 to 5.
In [25]: titanic.iloc[9:25, 2:5]
Out[25]:
[email protected]
Pclass Name Sex
T56GZSRVAH9 2 Nasser, Mrs. Nicholas (Adele Achem) female
10 3 Sandstrom, Miss. Marguerite Rut female
11 1 Bonnell, Miss. Elizabeth female
12 3 Saundercock, Mr. William Henry male
13 3 Andersson, Mr. Anders Johan male
.. ... ... ...
20 2 Fynney, Mr. Joseph J male
21 2 Beesley, Mr. Lawrence male
22 3 McGowan, Miss. Anna "Annie" female
23 1 Sloper, Mr. William Thompson male
24 3 Palsson, Miss. Torborg Danira female

[16 rows x 3 columns]

Again, a subset of both rows and columns is made in one go and just using selection brackets [] is not sufficient
anymore. When specifically interested in certain rows and/or columns based on their position in the table, use the
iloc operator in front of the selection brackets [].
When selecting specific rows and/or columns with loc or iloc, new values can be assigned to the selected data. For
example, to assign the name anonymous to the first 3 elements of the third column:
In [26]: titanic.iloc[0:3, 3] = "anonymous"

In [27]: titanic.head()
Out[27]:
PassengerId Survived Pclass Name
˓→Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 anonymous
˓→male 22.0 1 0 A/5 21171 7.2500 NaN S


1 2 1 1 anonymous
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 anonymous
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→male 35.0 0 0 373450 8.0500 NaN S

See the user guide section on different choices for indexing to get more insight in the usage of loc and iloc.
• When selecting subsets of data, square brackets [] are used.
• Inside these brackets, you can use a single column/row label, a list of column/row labels, a slice of labels, a
conditional expression or a colon.
• Select specific rows and/or columns using loc when using the row and column names
• Select specific rows and/or columns using iloc when using the positions in the table
• You can assign new values to a selection based on loc/iloc.
A full overview about indexing is provided in the user guide pages on indexing and selecting data.

In [1]: import pandas as pd

In [2]: import matplotlib.pyplot as plt

For this tutorial, air quality data about NO2 is used, made available by openaq and using the py-openaq package. The air_quality_no2.csv data set provides NO2 values for the measurement stations FR04014, BETR801 and London Westminster in Paris, Antwerp and London respectively.

In [3]: air_quality = pd.read_csv("data/air_quality_no2.csv",


...: index_col=0, parse_dates=True)
...:

In [4]: air_quality.head()
Out[4]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN

Note: The index_col and parse_dates parameters of the read_csv function are used to define the first (0th) column as the index of the resulting DataFrame and to convert the dates in that column to Timestamp objects, respectively.


How to create plots in pandas?

I want a quick visual check of the data.

In [5]: air_quality.plot()
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d255a9f90>

[email protected]
T56GZSRVAH

With a DataFrame, pandas creates by default one line plot for each of the columns with numeric data.
I want to plot only the columns of the data table with the data from Paris.

In [6]: air_quality["station_paris"].plot()
Out[6]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d2561db10>


To plot a specific column, use the selection method of the subset data tutorial in combination with the plot()
method. Hence, the plot() method works on both Series and DataFrame.
I want to visually compare the NO2 values measured in London versus Paris.

In [7]: air_quality.plot.scatter(x="station_london",
...: y="station_paris",
...: alpha=0.5)
...:
Out[7]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d27809c90>


Apart from the default line plot when using the plot function, a number of alternatives are available to plot data.
Let’s use some standard Python to get an overview of the available plot methods:

In [8]: [method_name for method_name in dir(air_quality.plot)


...: if not method_name.startswith("_")]
...:
Out[8]:
['area',
'bar',
'barh',
'box',
'density',
'hexbin',
'hist',
'kde',
'line',
'pie',
'scatter']

Note: In many development environments as well as ipython and jupyter notebook, use the TAB button to get an
overview of the available methods, for example air_quality.plot. + TAB.

One of the options is DataFrame.plot.box(), which refers to a boxplot. The box method is applicable on the
air quality example data:


In [9]: air_quality.plot.box()
Out[9]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d2553a9d0>

[email protected]
T56GZSRVAH

For an introduction to plots other than the default line plot, see the user guide section about supported plot styles.
I want each of the columns in a separate subplot.

In [10]: axs = air_quality.plot.area(figsize=(12, 4), subplots=True)

Separate subplots for each of the data columns are supported by the subplots argument of the plot functions. The built-in options available in each of the pandas plot functions are worth having a look at.


Some more formatting options are explained in the user guide section on plot formatting.
I want to further customize, extend or save the resulting plot.

In [11]: fig, axs = plt.subplots(figsize=(12, 4));

In [12]: air_quality.plot.area(ax=axs);

In [13]: axs.set_ylabel("NO$_2$ concentration");

In [14]: fig.savefig("no2_concentrations.png")

Each of the plot objects created by pandas is a matplotlib object. As Matplotlib provides plenty of options to customize plots, making the link between pandas and Matplotlib explicit brings all the power of matplotlib to the plot. This strategy is applied in the previous example:
fig, axs = plt.subplots(figsize=(12, 4))   # Create an empty matplotlib Figure and Axes
air_quality.plot.area(ax=axs)              # Use pandas to put the area plot on the prepared Figure/Axes
axs.set_ylabel("NO$_2$ concentration")     # Do any matplotlib customization you like
fig.savefig("no2_concentrations.png")      # Save the Figure/Axes using the existing matplotlib method.

• The .plot.* methods are applicable on both Series and DataFrames


• By default, each of the columns is plotted as a different element (line, boxplot,. . . )
• Any plot created by pandas is a Matplotlib object.
A full overview of plotting in pandas is provided in the visualization pages.

In [1]: import pandas as pd

For this tutorial, air quality data about NO2 is used, made available by openaq and using the py-openaq package. The air_quality_no2.csv data set provides NO2 values for the measurement stations FR04014, BETR801 and London Westminster in Paris, Antwerp and London respectively.

In [2]: air_quality = pd.read_csv("data/air_quality_no2.csv",


...: index_col=0, parse_dates=True)
...:

In [3]: air_quality.head()


Out[3]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN

How to create new columns derived from existing columns?

I want to express the NO2 concentration of the station in London in mg/m³.


(If we assume temperature of 25 degrees Celsius and pressure of 1013 hPa, the conversion factor is 1.882)
In [4]: air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882

In [5]: air_quality.head()
Out[5]:
                     station_antwerp  station_paris  station_london  london_mg_per_cubic
datetime
2019-05-07 02:00:00              NaN            NaN            23.0               43.286
2019-05-07 03:00:00             50.5           25.0            19.0               35.758
2019-05-07 04:00:00             45.0           27.7            19.0               35.758
2019-05-07 05:00:00              NaN           50.4            16.0               30.112
2019-05-07 06:00:00              NaN           61.9             NaN                  NaN

To create a new column, use the [] brackets with the new column name at the left side of the assignment.

Note: The calculation of the values is done element-wise. This means all values in the given column are multiplied by the value 1.882 at once. You do not need to use a loop to iterate over each of the rows!

I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column
In [6]: air_quality["ratio_paris_antwerp"] = \
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...:

In [7]: air_quality.head()
Out[7]:
                     station_antwerp  station_paris  station_london  london_mg_per_cubic  ratio_paris_antwerp
datetime
2019-05-07 02:00:00              NaN            NaN            23.0               43.286                  NaN
2019-05-07 03:00:00             50.5           25.0            19.0               35.758             0.495050
2019-05-07 04:00:00             45.0           27.7            19.0               35.758             0.615556
2019-05-07 05:00:00              NaN           50.4            16.0               30.112                  NaN
2019-05-07 06:00:00              NaN           61.9             NaN                  NaN                  NaN

The calculation is again element-wise, so the / is applied to the values in each row.
Other mathematical operators (+, -, *, /) and logical operators (<, >, ==, ...) also work element-wise. The latter was already used in the subset data tutorial to filter rows of a table using a conditional expression.
I want to rename the data columns to the corresponding station identifiers used by openAQ
I want to rename the data columns to the corresponding station identifiers used by openAQ

In [8]: air_quality_renamed = air_quality.rename(


...: columns={"station_antwerp": "BETR801",
...: "station_paris": "FR04014",
...: "station_london": "London Westminster"})
...:

In [9]: air_quality_renamed.head()
Out[9]:
                     BETR801  FR04014  London Westminster  london_mg_per_cubic  ratio_paris_antwerp
datetime
2019-05-07 02:00:00      NaN      NaN                23.0               43.286                  NaN
2019-05-07 03:00:00     50.5     25.0                19.0               35.758             0.495050
2019-05-07 04:00:00     45.0     27.7                19.0               35.758             0.615556
2019-05-07 05:00:00      NaN     50.4                16.0               30.112                  NaN
2019-05-07 06:00:00      NaN     61.9                 NaN                  NaN                  NaN

The rename() function can be used for both row labels and column labels. Provide a dictionary with the current names as keys and the new names as values to update the corresponding names.
The mapping should not be restricted to fixed names only, but can be a mapping function as well. For example, converting the column names to lowercase letters can be done using a function as well:

In [10]: air_quality_renamed = air_quality_renamed.rename(columns=str.lower)

In [11]: air_quality_renamed.head()
Out[11]:
                     betr801  fr04014  london westminster  london_mg_per_cubic  ratio_paris_antwerp
datetime
2019-05-07 02:00:00      NaN      NaN                23.0               43.286                  NaN
2019-05-07 03:00:00     50.5     25.0                19.0               35.758             0.495050
2019-05-07 04:00:00     45.0     27.7                19.0               35.758             0.615556
2019-05-07 05:00:00      NaN     50.4                16.0               30.112                  NaN
2019-05-07 06:00:00      NaN     61.9                 NaN                  NaN                  NaN

Details about column or row label renaming are provided in the user guide section on renaming labels.
• Create a new column by assigning the output to the DataFrame with a new column name in between the [].
• Operations are element-wise, no need to loop over rows.
• Use rename with a dictionary or function to rename row labels or column names.
The user guide contains a separate section on column addition and deletion.
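For completeness, a minimal sketch of column deletion using drop() (an illustrative extra step, not used elsewhere in this tutorial; the column dropped is the ratio column created above):

# drop() returns a new DataFrame without the given column(s).
air_quality_smaller = air_quality.drop(columns=["ratio_paris_antwerp"])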

In [1]: import pandas as pd

This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has values 0 and 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
• SibSp: Indicates whether the passenger has siblings or a spouse aboard.
• Parch: Indicates whether the passenger is alone or has family aboard.
• Ticket: Ticket number of the passenger.
• Fare: The fare paid by the passenger.
• Cabin: The cabin of the passenger.
• Embarked: The port of embarkation.

In [2]: titanic = pd.read_csv("data/titanic.csv")

In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S


How to calculate summary statistics?

Aggregating statistics

What is the average age of the titanic passengers?

In [4]: titanic["Age"].mean()
Out[4]: 29.69911764705882

Different statistics are available and can be applied to columns with numerical data. Operations in general exclude
missing data and operate across rows by default.
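As a small illustration of that default (an extra check not shown in the original tutorial), the skipna argument controls whether missing values are excluded:

titanic["Age"].mean()               # missing ages are skipped by default
titanic["Age"].mean(skipna=False)   # returns NaN, because the Age column contains missing values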

What is the median age and ticket fare price of the titanic passengers?

In [5]: titanic[["Age", "Fare"]].median()


Out[5]:
Age 28.0000
Fare 14.4542
dtype: float64

The statistic applied to multiple columns of a DataFrame (the selection of two columns returns a DataFrame, see the subset data tutorial) is calculated for each numeric column.
The aggregating statistic can be calculated for multiple columns at the same time. Remember the describe function from the first tutorial?
[email protected]
T56GZSRVAHIn [6]: titanic[["Age", "Fare"]].describe()
Out[6]:
Age Fare
count 714.000000 891.000000
mean 29.699118 32.204208
std 14.526497 49.693429
min 0.420000 0.000000
25% 20.125000 7.910400
50% 28.000000 14.454200
75% 38.000000 31.000000
max 80.000000 512.329200

Instead of the predefined statistics, specific combinations of aggregating statistics for given columns can be defined
using the DataFrame.agg() method:

In [7]: titanic.agg({'Age': ['min', 'max', 'median', 'skew'],


...: 'Fare': ['min', 'max', 'median', 'mean']})
...:
Out[7]:
Age Fare
max 80.000000 512.329200
mean NaN 32.204208
median 28.000000 14.454200
min 0.420000 0.000000
skew 0.389108 NaN

Details about descriptive statistics are provided in the user guide section on descriptive statistics.


Aggregating statistics grouped by category

What is the average age for male versus female titanic passengers?

In [8]: titanic[["Sex", "Age"]].groupby("Sex").mean()


Out[8]:
Age
Sex
female 27.915709
male 30.726645

As our interest is the average age for each gender, a subselection on these two columns is made first: titanic[[
"Sex", "Age"]]. Next, the groupby() method is applied on the Sex column to make a group per category.
The average age for each gender is calculated and returned.
Calculating a given statistic (e.g. mean age) for each category in a column (e.g. male/female in the Sex column) is a
common pattern. The groupby method is used to support this type of operation. More generally, this fits the split-apply-combine pattern:
• Split the data into groups
• Apply a function to each group independently
• Combine the results into a data structure
The apply and combine steps are typically done together in pandas.
In the previous example, we explicitly selected the 2 columns first. If not, the mean method is applied to each column containing numerical data:
In [9]: titanic.groupby("Sex").mean()
Out[9]:
PassengerId Survived Pclass Age SibSp Parch Fare
Sex
female 431.028662 0.742038 2.159236 27.915709 0.694268 0.649682 44.479818
male 454.147314 0.188908 2.389948 30.726645 0.429809 0.235702 25.523893

It does not make much sense to get the average value of the Pclass. If we are only interested in the average age for each gender, the selection of columns (rectangular brackets [] as usual) is supported on the grouped data as well:

In [10]: titanic.groupby("Sex")["Age"].mean()
Out[10]:
Sex
female 27.915709
male 30.726645
Name: Age, dtype: float64

Note: The Pclass column contains numerical data but actually represents 3 categories (or factors) with respectively
the labels ‘1’, ‘2’ and ‘3’. Calculating statistics on these does not make much sense. Therefore, pandas provides a
Categorical data type to handle this type of data. More information is provided in the user guide Categorical data
section.
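A minimal sketch of that conversion (an illustrative extra step, not used in the rest of this tutorial):

# Treat the class labels as categories instead of plain numbers.
titanic["Pclass"] = titanic["Pclass"].astype("category")
titanic["Pclass"].dtype   # a CategoricalDtype with categories [1, 2, 3]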

What is the mean ticket fare price for each of the sex and cabin class combinations?


In [11]: titanic.groupby(["Sex", "Pclass"])["Fare"].mean()


Out[11]:
Sex Pclass
female 1 106.125798
2 21.970121
3 16.118810
male 1 67.226127
2 19.741782
3 12.661633
Name: Fare, dtype: float64

Grouping can be done by multiple columns at the same time. Provide the column names as a list to the groupby()
method.
A full description on the split-apply-combine approach is provided in the user guide section on groupby operations.

Count number of records by category

What is the number of passengers in each of the cabin classes?

In [12]: titanic["Pclass"].value_counts()
Out[12]:
3 491
1 216
2 184
Name: Pclass, dtype: int64
[email protected]
T56GZSRVAH
The value_counts() method counts the number of records for each category in a column.
The function is a shortcut, as it is actually a groupby operation in combination with counting of the number of records
within each group:

In [13]: titanic.groupby("Pclass")["Pclass"].count()
Out[13]:
Pclass
1 216
2 184
3 491
Name: Pclass, dtype: int64

Note: Both size and count can be used in combination with groupby. Whereas size includes NaN values and
just provides the number of rows (size of the table), count excludes the missing values. In the value_counts
method, use the dropna argument to include or exclude the NaN values.
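As a small illustration of this difference (a sketch, using the Age column, which contains missing values):

titanic.groupby("Pclass")["Age"].size()     # number of rows per class, missing Age values included
titanic.groupby("Pclass")["Age"].count()    # number of non-missing Age values per class
titanic["Age"].value_counts(dropna=False)   # include NaN as a separate category in the counts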

The user guide has a dedicated section on value_counts, see the page on discretization.
• Aggregation statistics can be calculated on entire columns or rows
• groupby provides the power of the split-apply-combine pattern
• value_counts is a convenient shortcut to count the number of entries in each category of a variable
A full description on the split-apply-combine approach is provided in the user guide pages about groupby operations.


In [1]: import pandas as pd

This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has the value 0 or 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of passenger.
• Sex: Gender of passenger.
• Age: Age of passenger.
• SibSp: Indication that the passenger has siblings and a spouse.
• Parch: Whether a passenger is alone or has family.
• Ticket: Ticket number of passenger.
• Fare: Indicating the fare.
• Cabin: The cabin of passenger.
• Embarked: The embarked category.

In [2]: titanic = pd.read_csv("data/titanic.csv")

In [3]: titanic.head()
Out[3]:
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3         1       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4         1       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5         0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S

This tutorial uses air quality data about NO2 and Particulate matter less than 2.5 micrometers, made available by
openaq and using the py-openaq package. The air_quality_long.csv data set provides NO2 and PM25 values
for the measurement stations FR04014, BETR801 and London Westminster in respectively Paris, Antwerp and London.
The air-quality data set has the following columns:
• city: city where the sensor is used, either Paris, Antwerp or London
• country: country where the sensor is used, either FR, BE or GB
• location: the id of the sensor, either FR04014, BETR801 or London Westminster
• parameter: the parameter measured by the sensor, either NO2 or Particulate matter
• value: the measured value
• unit: the unit of the measured parameter, in this case ‘µg/m3’


and the index of the DataFrame is datetime, the datetime of the measurement.

Note: The air-quality data is provided in a so-called long format data representation with each observation on a
separate row and each variable a separate column of the data table. The long/narrow format is also known as the tidy
data format.

In [4]: air_quality = pd.read_csv("data/air_quality_long.csv",
   ...:                           index_col="date.utc", parse_dates=True)
   ...:

In [5]: air_quality.head()
Out[5]:
city country location parameter value unit
date.utc
2019-06-18 06:00:00+00:00 Antwerpen BE BETR801 pm25 18.0 µg/m3
2019-06-17 08:00:00+00:00 Antwerpen BE BETR801 pm25 6.5 µg/m3
2019-06-17 07:00:00+00:00 Antwerpen BE BETR801 pm25 18.5 µg/m3
2019-06-17 06:00:00+00:00 Antwerpen BE BETR801 pm25 16.0 µg/m3
2019-06-17 05:00:00+00:00 Antwerpen BE BETR801 pm25 7.5 µg/m3

How to reshape the layout of tables?

Sort table rows

I want to sort the titanic data according to the age of the passengers.
[email protected]
T56GZSRVAHIn [6]: titanic.sort_values(by="Age").head()
Out[6]:
     PassengerId  Survived  Pclass                             Name     Sex   Age  SibSp  Parch  Ticket     Fare Cabin Embarked
803          804         1       3  Thomas, Master. Assad Alexander    male  0.42      0      1    2625   8.5167   NaN        C
755          756         1       2        Hamalainen, Master. Viljo    male  0.67      1      1  250649  14.5000   NaN        S
644          645         1       3           Baclini, Miss. Eugenie  female  0.75      2      1    2666  19.2583   NaN        C
469          470         1       3    Baclini, Miss. Helene Barbara  female  0.75      2      1    2666  19.2583   NaN        C
78            79         1       2    Caldwell, Master. Alden Gates    male  0.83      0      2  248738  29.0000   NaN        S

I want to sort the titanic data according to the cabin class and age in descending order.
In [7]: titanic.sort_values(by=['Pclass', 'Age'], ascending=False).head()
Out[7]:
     PassengerId  Survived  Pclass                       Name     Sex   Age  SibSp  Parch  Ticket    Fare Cabin Embarked
851          852         0       3        Svensson, Mr. Johan    male  74.0      0      0  347060  7.7750   NaN        S
116          117         0       3       Connors, Mr. Patrick    male  70.5      0      0  370369  7.7500   NaN        Q
280          281         0       3           Duane, Mr. Frank    male  65.0      0      0  336439  7.7500   NaN        Q
483          484         1       3     Turkula, Mrs. (Hedwig)  female  63.0      0      0    4134  9.5875   NaN        S
326          327         0       3  Nysveen, Mr. Johan Hansen    male  61.0      0      0  345364  6.2375   NaN        S

With DataFrame.sort_values(), the rows in the table are sorted according to the defined column(s). The index
will follow the row order.
More details about sorting of tables are provided in the user guide section on sorting data.

Long to wide table format

Let’s use a small subset of the air quality data set. We focus on 𝑁 𝑂2 data and only use the first two measurements of
each location (i.e. the head of each group). The subset of data will be called no2_subset

# filter for no2 data only


In [8]: no2 = air_quality[air_quality["parameter"] == "no2"]

# use 2 measurements (head) for each location (groupby)


In [9]: no2_subset = no2.sort_index().groupby(["location"]).head(2)

In [10]: no2_subset
Out[10]:
                                city country            location parameter  value   unit
date.utc
2019-04-09 01:00:00+00:00  Antwerpen      BE             BETR801       no2   22.5  µg/m3
2019-04-09 01:00:00+00:00      Paris      FR             FR04014       no2   24.4  µg/m3
2019-04-09 02:00:00+00:00     London      GB  London Westminster       no2   67.0  µg/m3
2019-04-09 02:00:00+00:00  Antwerpen      BE             BETR801       no2   53.5  µg/m3
2019-04-09 02:00:00+00:00      Paris      FR             FR04014       no2   27.4  µg/m3
2019-04-09 03:00:00+00:00     London      GB  London Westminster       no2   67.0  µg/m3

I want the values for the three stations as separate columns next to each other

In [11]: no2_subset.pivot(columns="location", values="value")


Out[11]:
location BETR801 FR04014 London Westminster
date.utc
2019-04-09 01:00:00+00:00 22.5 24.4 NaN
2019-04-09 02:00:00+00:00 53.5 27.4 67.0
2019-04-09 03:00:00+00:00 NaN NaN 67.0

The pivot() function is purely reshaping of the data: a single value for each index/column combination is
required.
As pandas supports plotting of multiple columns (see plotting tutorial) out of the box, the conversion from long to wide
table format enables the plotting of the different time series at the same time:


In [12]: no2.head()
Out[12]:
city country location parameter value unit
date.utc
2019-06-21 00:00:00+00:00 Paris FR FR04014 no2 20.0 µg/m3
2019-06-20 23:00:00+00:00 Paris FR FR04014 no2 21.8 µg/m3
2019-06-20 22:00:00+00:00 Paris FR FR04014 no2 26.5 µg/m3
2019-06-20 21:00:00+00:00 Paris FR FR04014 no2 24.9 µg/m3
2019-06-20 20:00:00+00:00 Paris FR FR04014 no2 21.4 µg/m3

In [13]: no2.pivot(columns="location", values="value").plot()


Out[13]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d253f1990>

[email protected]
T56GZSRVAH

Note: When the index parameter is not defined, the existing index (row labels) is used.

For more information about pivot(), see the user guide section on pivoting DataFrame objects.


Pivot table

I want the mean concentrations for NO2 and PM2.5 in each of the stations in table form

In [14]: air_quality.pivot_table(values="value", index="location",
   ....:                         columns="parameter", aggfunc="mean")
   ....:
Out[14]:
parameter no2 pm25
location
BETR801 26.950920 23.169492
FR04014 29.374284 NaN
London Westminster 29.740050 13.443568

In the case of pivot(), the data is only rearranged. When multiple values need to be aggregated (in this specific
case, the values on different time steps), pivot_table() can be used, providing an aggregation function (e.g. mean)
on how to combine these values.
Pivot table is a well-known concept in spreadsheet software. When interested in summary columns for each variable
separately as well, set the margins parameter to True:

In [15]: air_quality.pivot_table(values="value", index="location",
   ....:                         columns="parameter", aggfunc="mean",
   ....:                         margins=True)
   ....:
Out[15]:
parameter                 no2       pm25        All
location
BETR801             26.950920  23.169492  24.982353
FR04014             29.374284        NaN  29.374284
London Westminster  29.740050  13.443568  21.491708
All                 29.430316  14.386849  24.222743

For more information about pivot_table(), see the user guide section on pivot tables.

Note: In case you are wondering, pivot_table() is indeed directly linked to groupby(). The same result can
be derived by grouping on both parameter and location:

air_quality.groupby(["parameter", "location"]).mean()

Have a look at groupby() in combination with unstack() at the user guide section on combining stats and
groupby.
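A minimal sketch of that equivalence, reshaping the grouped result back into the same table layout with unstack():

air_quality.groupby(["location", "parameter"])["value"].mean().unstack("parameter")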


Wide to long format

Starting again from the wide format table created in the previous section:

In [16]: no2_pivoted = no2.pivot(columns="location", values="value").reset_index()

In [17]: no2_pivoted.head()
Out[17]:
location date.utc BETR801 FR04014 London Westminster
0 2019-04-09 01:00:00+00:00 22.5 24.4 NaN
1 2019-04-09 02:00:00+00:00 53.5 27.4 67.0
2 2019-04-09 03:00:00+00:00 54.5 34.2 67.0
3 2019-04-09 04:00:00+00:00 34.5 48.5 41.0
4 2019-04-09 05:00:00+00:00 46.5 59.5 41.0

I want to collect all air quality NO2 measurements in a single column (long format)

In [18]: no_2 = no2_pivoted.melt(id_vars="date.utc")

In [19]: no_2.head()
Out[19]:
date.utc location value
0 2019-04-09 01:00:00+00:00 BETR801 22.5
1 2019-04-09 02:00:00+00:00 BETR801 53.5
2 2019-04-09 03:00:00+00:00 BETR801 54.5
3 2019-04-09 04:00:00+00:00 BETR801 34.5
4 2019-04-09 05:00:00+00:00 BETR801 46.5
[email protected]
T56GZSRVAH
The pandas.melt() method on a DataFrame converts the data table from wide format to long format. The
column headers become the variable names in a newly created column.
The solution is the short version on how to apply pandas.melt(). The method will melt all columns NOT
mentioned in id_vars together into two columns: a column with the column header names and a column with the
values itself. The latter column gets the name value by default.
The pandas.melt() method can be defined in more detail:

In [20]: no_2 = no2_pivoted.melt(id_vars="date.utc",
   ....:                         value_vars=["BETR801",
   ....:                                     "FR04014",
   ....:                                     "London Westminster"],
   ....:                         value_name="NO_2",
   ....:                         var_name="id_location")
   ....:

In [21]: no_2.head()
Out[21]:
date.utc id_location NO_2
0 2019-04-09 01:00:00+00:00 BETR801 22.5
1 2019-04-09 02:00:00+00:00 BETR801 53.5
2 2019-04-09 03:00:00+00:00 BETR801 54.5
3 2019-04-09 04:00:00+00:00 BETR801 34.5
4 2019-04-09 05:00:00+00:00 BETR801 46.5

The result is the same, but defined in more detail:


• value_vars defines explicitly which columns to melt together


• value_name provides a custom column name for the values column instead of the default column name
value
• var_name provides a custom column name for the column collecting the column header names. Otherwise it
takes the index name or a default variable name.
Hence, the arguments value_name and var_name are just user-defined names for the two generated columns. The
columns to melt are defined by id_vars and value_vars.
Conversion from wide to long format with pandas.melt() is explained in the user guide section on reshaping by
melt.
• Sorting by one or more columns is supported by sort_values
• The pivot function is purely restructuring of the data, pivot_table supports aggregations
• The reverse of pivot (long to wide format) is melt (wide to long format)
A full overview is available in the user guide on the pages about reshaping and pivoting.

In [1]: import pandas as pd

For this tutorial, air quality data about NO2 is used, made available by openaq and downloaded using the py-openaq
package.
The air_quality_no2_long.csv data set provides NO2 values for the measurement stations FR04014,
BETR801 and London Westminster in respectively Paris, Antwerp and London.

In [2]: air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
   ...:                               parse_dates=True)
   ...:
[email protected]
T56GZSRVAHIn [3]: air_quality_no2 = air_quality_no2[["date.utc", "location",
...: "parameter", "value"]]
...:

In [4]: air_quality_no2.head()
Out[4]:
date.utc location parameter value
0 2019-06-21 00:00:00+00:00 FR04014 no2 20.0
1 2019-06-20 23:00:00+00:00 FR04014 no2 21.8
2 2019-06-20 22:00:00+00:00 FR04014 no2 26.5
3 2019-06-20 21:00:00+00:00 FR04014 no2 24.9
4 2019-06-20 20:00:00+00:00 FR04014 no2 21.4

For this tutorial, air quality data about Particulate matter less than 2.5 micrometers is used, made available by openaq
and downloaded using the py-openaq package.
The air_quality_pm25_long.csv data set provides PM25 values for the measurement stations FR04014,
BETR801 and London Westminster in respectively Paris, Antwerp and London.

In [5]: air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
   ...:                                parse_dates=True)
   ...:

In [6]: air_quality_pm25 = air_quality_pm25[["date.utc", "location",
   ...:                                      "parameter", "value"]]
   ...:

In [7]: air_quality_pm25.head()
Out[7]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5

How to combine data from multiple tables?

Concatenating objects

I want to combine the measurements of NO2 and PM25, two tables with a similar structure, in a single table

In [8]: air_quality = pd.concat([air_quality_pm25, air_quality_no2], axis=0)

In [9]: air_quality.head()
Out[9]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
[email protected]
T56GZSRVAH
The concat() function performs concatenation operations of multiple tables along one of the axes (row-wise or
column-wise).
By default concatenation is along axis 0, so the resulting table combines the rows of the input tables. Let’s check the
shape of the original and the concatenated tables to verify the operation:

In [10]: print('Shape of the `air_quality_pm25` table: ', air_quality_pm25.shape)


Shape of the `air_quality_pm25` table: (1110, 4)

In [11]: print('Shape of the `air_quality_no2` table: ', air_quality_no2.shape)


Shape of the `air_quality_no2` table: (2068, 4)

In [12]: print('Shape of the resulting `air_quality` table: ', air_quality.shape)


Shape of the resulting `air_quality` table: (3178, 4)

Hence, the resulting table has 3178 = 1110 + 2068 rows.

Note: The axis argument occurs in a number of pandas methods that can be applied along an axis. A
DataFrame has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second
running horizontally across columns (axis 1). Most operations like concatenation or summary statistics are by default
across rows (axis 0), but can be applied across columns as well.
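A small, hypothetical illustration of the difference between the two axes (df_a and df_b are not part of the tutorial
data):

df_a = pd.DataFrame({"x": [1, 2]})
df_b = pd.DataFrame({"y": [3, 4]})
pd.concat([df_a, df_b], axis=0)  # stack rows: 4 rows, columns x and y (NaN where a column is absent)
pd.concat([df_a, df_b], axis=1)  # place side by side: 2 rows, columns x and y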

Sorting the table on the datetime information illustrates also the combination of both tables, with the parameter
column defining the origin of the table (either no2 from table air_quality_no2 or pm25 from table
air_quality_pm25):


In [13]: air_quality = air_quality.sort_values("date.utc")

In [14]: air_quality.head()
Out[14]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0

In this specific example, the parameter column provided by the data ensures that each of the original tables can be
identified. This is not always the case: the concat function provides a convenient solution with the keys argument,
adding an additional (hierarchical) row index. For example:
In [15]: air_quality_ = pd.concat([air_quality_pm25, air_quality_no2],
....: keys=["PM25", "NO2"])
....:

In [16]: air_quality_.head()
Out[16]:
date.utc location parameter value
PM25 0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
[email protected]
T56GZSRVAH
Note: The existence of multiple row/column indices at the same time has not been mentioned within these tutorials.
Hierarchical indexing or MultiIndex is an advanced and powerful pandas feature to analyze higher dimensional data.
Multi-indexing is out of scope for this pandas introduction. For the moment, remember that the function
reset_index can be used to convert any level of an index to a column, e.g. air_quality.reset_index(level=0).
Feel free to dive into the world of multi-indexing at the user guide section on advanced indexing.
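As a quick illustration of that reset_index remark (a sketch), applied to the table created above with the keys
argument:

air_quality_.reset_index(level=0).head()  # the "PM25"/"NO2" key level becomes a regular column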

More options on table concatenation (row and column wise) and how concat can be used to define the logic (union
or intersection) of the indexes on the other axes are provided in the section on object concatenation.

Join tables using a common identifier

Add the station coordinates, provided by the stations metadata table, to the corresponding rows in the measurements
table.

Warning: The air quality measurement station coordinates are stored in a data file air_quality_stations.csv,
downloaded using the py-openaq package.

In [17]: stations_coord = pd.read_csv("data/air_quality_stations.csv")

In [18]: stations_coord.head()
Out[18]:
location coordinates.latitude coordinates.longitude
0 BELAL01 51.23619 4.38522
1 BELHB23 51.17030 4.34100
2 BELLD01 51.10998 5.00486
3 BELLD02 51.12038 5.02155
4 BELR833 51.32766 4.36226

Note: The stations used in this example (FR04014, BETR801 and London Westminster) are just three entries listed
in the metadata table. We only want to add the coordinates of these three to the measurements table, each on the
corresponding rows of the air_quality table.

In [19]: air_quality.head()
Out[19]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0

In [20]: air_quality = pd.merge(air_quality, stations_coord,
   ....:                        how='left', on='location')
   ....:
[email protected]
T56GZSRVAH
In [21]: air_quality.head()
Out[21]:
                    date.utc            location parameter  value  coordinates.latitude  coordinates.longitude
0  2019-05-07 01:00:00+00:00  London Westminster       no2   23.0              51.49467               -0.13193
1  2019-05-07 01:00:00+00:00             FR04014       no2   25.0              48.83724                2.39390
2  2019-05-07 01:00:00+00:00             FR04014       no2   25.0              48.83722                2.39390
3  2019-05-07 01:00:00+00:00             BETR801      pm25   12.5              51.20966                4.43182
4  2019-05-07 01:00:00+00:00             BETR801       no2   50.5              51.20966                4.43182

Using the merge() function, for each of the rows in the air_quality table, the corresponding coordinates are
added from the air_quality_stations_coord table. Both tables have the column location in common
which is used as a key to combine the information. By choosing the left join, only the locations available in the
air_quality (left) table, i.e. FR04014, BETR801 and London Westminster, end up in the resulting table. The
merge function supports multiple join options similar to database-style operations.
Add the parameter full description and name, provided by the parameters metadata table, to the measurements table

Warning: The air quality parameters metadata are stored in a data file air_quality_parameters.csv,
downloaded using the py-openaq package.


In [22]: air_quality_parameters = pd.read_csv("data/air_quality_parameters.csv")

In [23]: air_quality_parameters.head()
Out[23]:
id description name
0 bc Black Carbon BC
1 co Carbon Monoxide CO
2 no2 Nitrogen Dioxide NO2
3 o3 Ozone O3
4 pm10 Particulate matter less than 10 micrometers in... PM10

In [24]: air_quality = pd.merge(air_quality, air_quality_parameters,
   ....:                        how='left', left_on='parameter', right_on='id')
   ....:

In [25]: air_quality.head()
Out[25]:
                    date.utc            location parameter  value  coordinates.latitude  coordinates.longitude    id                                        description   name
0  2019-05-07 01:00:00+00:00  London Westminster       no2   23.0              51.49467               -0.13193   no2                                   Nitrogen Dioxide    NO2
1  2019-05-07 01:00:00+00:00             FR04014       no2   25.0              48.83724                2.39390   no2                                   Nitrogen Dioxide    NO2
2  2019-05-07 01:00:00+00:00             FR04014       no2   25.0              48.83722                2.39390   no2                                   Nitrogen Dioxide    NO2
3  2019-05-07 01:00:00+00:00             BETR801      pm25   12.5              51.20966                4.43182  pm25  Particulate matter less than 2.5 micrometers i...  PM2.5
4  2019-05-07 01:00:00+00:00             BETR801       no2   50.5              51.20966                4.43182   no2                                   Nitrogen Dioxide    NO2

Compared to the previous example, there is no common column name. However, the parameter column in the
air_quality table and the id column in the air_quality_parameters table both provide the measured
variable in a common format. The left_on and right_on arguments are used here (instead of just on) to make
the link between the two tables.
pandas also supports inner, outer, and right joins. More information on join/merge of tables is provided in the user
guide section on database style merging of tables. Or have a look at the comparison with SQL page.
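As an illustration of the syntax only (the air_quality table above has already been merged, so this is a sketch rather
than a step of the tutorial), an inner join would keep only the rows whose parameter has a matching id in the
parameters table:

pd.merge(air_quality, air_quality_parameters,
         how='inner', left_on='parameter', right_on='id')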
• Multiple tables can be concatenated both column-wise and row-wise using the concat function.
• For database-like merging/joining of tables, use the merge function.
See the user guide for a full description of the various facilities to combine data tables.

In [1]: import pandas as pd

In [2]: import matplotlib.pyplot as plt

For this tutorial, air quality data about NO2 and Particulate matter less than 2.5 micrometers is used, made available
by openaq and downloaded using the py-openaq package. The air_quality_no2_long.csv data set provides
NO2 values for the measurement stations FR04014, BETR801 and London Westminster in respectively Paris, Antwerp
and London.


In [3]: air_quality = pd.read_csv("data/air_quality_no2_long.csv")

In [4]: air_quality = air_quality.rename(columns={"date.utc": "datetime"})

In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m3
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m3
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m3
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m3
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m3

In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)

How to handle time series data with ease?

Using pandas datetime properties

I want to work with the dates in the column datetime as datetime objects instead of plain text

In [7]: air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])

In [8]: air_quality["datetime"]
Out[8]:
[email protected]
0 2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]

Initially, the values in datetime are character strings and do not provide any datetime operations (e.g. extract the
year, day of the week, ...). By applying the to_datetime function, pandas interprets the strings and converts these to
datetime (i.e. datetime64[ns, UTC]) objects. In pandas we call these datetime objects, similar to
datetime.datetime from the standard library, a pandas.Timestamp.

Note: As many data sets do contain datetime information in one of the columns, pandas input functions like
pandas.read_csv() and pandas.read_json() can do the transformation to dates when reading the data, using the
parse_dates parameter with a list of the columns to read as Timestamp:

pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])

Why are these pandas.Timestamp objects useful? Let's illustrate the added value with some example cases.
What is the start and end date of the time series data set we are working with?


In [9]: air_quality["datetime"].min(), air_quality["datetime"].max()


Out[9]:
(Timestamp('2019-05-07 01:00:00+0000', tz='UTC'),
Timestamp('2019-06-21 00:00:00+0000', tz='UTC'))

Using pandas.Timestamp for datetimes enables us to calculate with date information and make them comparable.
Hence, we can use this to get the length of our time series:

In [10]: air_quality["datetime"].max() - air_quality["datetime"].min()


Out[10]: Timedelta('44 days 23:00:00')

The result is a pandas.Timedelta object, similar to datetime.timedelta from the standard Python library
and defining a time duration.
The different time concepts supported by pandas are explained in the user guide section on time related concepts.
I want to add a new column to the DataFrame containing only the month of the measurement

In [11]: air_quality["month"] = air_quality["datetime"].dt.month

In [12]: air_quality.head()
Out[12]:
city country datetime location parameter value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m3 6
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m3 6
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m3 6
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m3 6
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m3 6
[email protected]
T56GZSRVAHBy using Timestamp objects for dates, a lot of time-related properties are provided by pandas. For example the
month, but also year, weekofyear, quarter,. . . All of these properties are accessible by the dt accessor.
An overview of the existing date properties is given in the time and date components overview table. More details
about the dt accessor to return datetime like properties is explained in a dedicated section on the dt accessor.
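A couple of these properties as a quick illustration (a sketch on the tutorial data):

air_quality["datetime"].dt.year.head()     # year of each timestamp
air_quality["datetime"].dt.weekday.head()  # day of the week, Monday=0 ... Sunday=6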
What is the average NO2 concentration for each day of the week for each of the measurement locations?

In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64

Remember the split-apply-combine pattern provided by groupby from the tutorial on statistics calculation? Here,
we want to calculate a given statistic (e.g. mean NO2) for each weekday and for each measurement location. To
group on weekdays, we use the datetime property weekday (with Monday=0 and Sunday=6) of pandas Timestamp,


which is also accessible by the dt accessor. The grouping on both locations and weekdays can be done to split the
calculation of the mean on each of these combinations.

Danger: As we are working with a very short time series in these examples, the analysis does not provide a
long-term representative result!

Plot the typical NO2 pattern during the day of our time series of all stations together. In other words, what is the
average value for each hour of the day?

In [14]: fig, axs = plt.subplots(figsize=(12, 4))

In [15]: air_quality.groupby(
....: air_quality["datetime"].dt.hour)["value"].mean().plot(kind='bar',
....: rot=0,
....: ax=axs)
....:
Out[15]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d253d7bd0>

In [16]: plt.xlabel("Hour of the day"); # custom x label using matplotlib

In [17]: plt.ylabel("$NO_2 (µg/m^3)$");

[email protected]
T56GZSRVAH

Similar to the previous case, we want to calculate a given statistic (e.g. mean NO2) for each hour of the day and we
can use the split-apply-combine approach again. For this case, we use the datetime property hour of pandas
Timestamp, which is also accessible by the dt accessor.

Datetime as index

In the tutorial on reshaping, pivot() was introduced to reshape the data table with each of the measurement
locations as a separate column:

In [18]: no_2 = air_quality.pivot(index="datetime", columns="location", values="value")

In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
2019-05-07 02:00:00+00:00 45.0 27.7 19.0
2019-05-07 03:00:00+00:00 NaN 50.4 19.0
2019-05-07 04:00:00+00:00 NaN 61.9 16.0
2019-05-07 05:00:00+00:00 NaN 72.4 NaN

Note: By pivoting the data, the datetime information became the index of the table. In general, setting a column as
an index can be achieved by the set_index function.
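For example, the same datetime index could also be obtained without pivoting (a sketch on the tutorial data):

air_quality.set_index("datetime").head()  # move the datetime column into the (row) index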

Working with a datetime index (i.e. DatetimeIndex) provides powerful functionalities. For example, we do not
need the dt accessor to get the time series properties, but have these properties available on the index directly:

In [20]: no_2.index.year, no_2.index.weekday


Out[20]:
(Int64Index([2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', name='datetime', length=1033),
Int64Index([1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
...
3, 3, 3, 3, 3, 3, 3, 3, 3, 4],
dtype='int64', name='datetime', length=1033))

Some other advantages are the convenient subsetting of time periods or the adapted time scale on plots. Let's apply this
on our data.
Create a plot of the NO2 values in the different stations from the 20th of May till the end of the 21st of May.
[email protected]
T56GZSRVAH
In [21]: no_2["2019-05-20":"2019-05-21"].plot();


By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
More information on the DatetimeIndex and the slicing by using strings is provided in the section on time series
indexing.

Resample a time series to another frequency

Aggregate the current hourly time series values to the monthly maximum value in each of the stations.

In [22]: monthly_max = no_2.resample("M").max()

In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0

A very powerful method on time series data with a datetime index is the ability to resample() time series to
another frequency (e.g., converting secondly data into 5-minutely data).
The resample() method is similar to a groupby operation:
• it provides a time-based grouping, by using a string (e.g. M, 5H, ...) that defines the target frequency
• it requires an aggregation function such as mean, max, ...
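To make the analogy with groupby explicit, the monthly maximum above could also be written with a time-based
grouper (a sketch; both should give the same table):

no_2.groupby(pd.Grouper(freq="M")).max()  # time-based grouping on the DatetimeIndex, then an aggregation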


An overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
When defined, the frequency of the time series is provided by the freq attribute:

In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>

Make a plot of the daily mean NO2 value in each of the stations.

In [25]: no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));

[email protected]
T56GZSRVAH

More details on the power of time series resampling are provided in the user guide section on resampling.
• Valid date strings can be converted to datetime objects using the to_datetime function or as part of read
functions.
• Datetime objects in pandas support calculations, logical operations and convenient date-related properties using
the dt accessor.
• A DatetimeIndex contains these date-related properties and supports convenient slicing.
• Resample is a powerful method to change the frequency of a time series.
A full overview on time series is given in the pages on time series and date functionality.

In [1]: import pandas as pd

This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has the value 0 or 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of passenger.
• Sex: Gender of passenger.
• Age: Age of passenger.


• SibSp: Indication that the passenger has siblings and a spouse.


• Parch: Whether a passenger is alone or has family.
• Ticket: Ticket number of passenger.
• Fare: Indicating the fare.
• Cabin: The cabin of passenger.
• Embarked: The embarked category.

In [2]: titanic = pd.read_csv("data/titanic.csv")

In [3]: titanic.head()
Out[3]:
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3         1       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4         1       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5         0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S

How to manipulate textual data?


[email protected]
T56GZSRVAH
Make all name characters lowercase

In [4]: titanic["Name"].str.lower()
Out[4]:
0 braund, mr. owen harris
1 cumings, mrs. john bradley (florence briggs th...
2 heikkinen, miss. laina
3 futrelle, mrs. jacques heath (lily may peel)
4 allen, mr. william henry
...
886 montvila, rev. juozas
887 graham, miss. margaret edith
888 johnston, miss. catherine helen "carrie"
889 behr, mr. karl howell
890 dooley, mr. patrick
Name: Name, Length: 891, dtype: object

To make each of the strings in the Name column lowercase, select the Name column (see tutorial on selection of data),
add the str accessor and apply the lower method. As such, each of the strings is converted element wise.
Similar to datetime objects in the time series tutorial having a dt accessor, a number of specialized string methods are
available when using the str accessor. These methods have in general matching names with the equivalent built-in
string methods for single elements, but are applied element-wise (remember element wise calculations?) on each of
the values of the columns.
Create a new column Surname that contains the surname of the Passengers by extracting the part before the comma.


In [5]: titanic["Name"].str.split(",")
Out[5]:
0 [Braund, Mr. Owen Harris]
1 [Cumings, Mrs. John Bradley (Florence Briggs ...
2 [Heikkinen, Miss. Laina]
3 [Futrelle, Mrs. Jacques Heath (Lily May Peel)]
4 [Allen, Mr. William Henry]
...
886 [Montvila, Rev. Juozas]
887 [Graham, Miss. Margaret Edith]
888 [Johnston, Miss. Catherine Helen "Carrie"]
889 [Behr, Mr. Karl Howell]
890 [Dooley, Mr. Patrick]
Name: Name, Length: 891, dtype: object

Using the Series.str.split() method, each of the values is returned as a list of 2 elements. The first element
is the part before the comma and the second element the part after the comma.

In [6]: titanic["Surname"] = titanic["Name"].str.split(",").str.get(0)

In [7]: titanic["Surname"]
Out[7]:
0 Braund
1 Cumings
2 Heikkinen
3 Futrelle
4 Allen
...
[email protected]
886 Montvila
T56GZSRVAH887 Graham
888 Johnston
889 Behr
890 Dooley
Name: Surname, Length: 891, dtype: object

As we are only interested in the first part representing the surname (element 0), we can again use the str accessor
and apply Series.str.get() to extract the relevant part. Indeed, these string functions can be concatenated to
combine multiple functions at once!
More information on extracting parts of strings is available in the user guide section on splitting and replacing strings.
Extract the passenger data about the Countess on board of the Titanic.

In [8]: titanic["Name"].str.contains("Countess")
Out[8]:
0 False
1 False
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Name, Length: 891, dtype: bool


In [9]: titanic[titanic["Name"].str.contains("Countess")]
Out[9]:
     PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch  Ticket  Fare Cabin Embarked Surname
759          760         1       1  Rothes, the Countess. of (Lucy Noel Martha Dye...  female  33.0      0      0  110152  86.5   B77        S  Rothes

(Interested in her story? See Wikipedia!)

The string method Series.str.contains() checks for each of the values in the column Name if the string
contains the word Countess and returns for each of the values True (Countess is part of the name) or False
(Countess is not part of the name). This output can be used to subselect the data using conditional (boolean) indexing
introduced in the subsetting of data tutorial. As there was only 1 Countess on the Titanic, we get one row as a result.

Note: More powerful extractions on strings are supported, as the Series.str.contains() and
Series.str.extract() methods accept regular expressions, but this is out of scope of this tutorial.
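A hypothetical example of such a regular expression (the pattern is illustrative and not part of the tutorial), extracting
the title (Mr, Mrs, Miss, ...) that precedes the dot in each name:

titanic["Name"].str.extract(r" ([A-Za-z]+)\.")  # first capture group: the word before a dot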

More information on extracting parts of strings is available in the user guide section on string matching and extracting.
Which passenger of the titanic has the longest name?

In [10]: titanic["Name"].str.len()
Out[10]:
0 23
1 51
2 22
3 44
[email protected]
4 24
T56GZSRVAH ..
886 21
887 28
888 40
889 21
890 19
Name: Name, Length: 891, dtype: int64

To get the longest name we first have to get the lengths of each of the names in the Name column. By using pandas
string methods, the Series.str.len() function is applied to each of the names individually (element-wise).

In [11]: titanic["Name"].str.len().idxmax()
Out[11]: 307

Next, we need to get the corresponding location, preferably the index label, in the table for which the name length is
the largest. The idxmax() method does exactly that. It is not a string method and is applied to integers, so no str
is used.

In [12]: titanic.loc[titanic["Name"].str.len().idxmax(), "Name"]


Out[12]: 'Penasco y Castellana, Mrs. Victor de Satode (Maria Josefa Perez de Soto y Vallejo)'

Based on the index name of the row (307) and the column (Name), we can do a selection using the loc operator,
introduced in the tutorial on subsetting.
In the ‘Sex’ column, replace values of ‘male’ by ‘M’ and all ‘female’ values by ‘F’.


In [13]: titanic["Sex_short"] = titanic["Sex"].replace({"male": "M",
   ....:                                                "female": "F"})
   ....:

In [14]: titanic["Sex_short"]
Out[14]:
0 M
1 F
2 F
3 F
4 M
..
886 M
887 F
888 F
889 M
890 M
Name: Sex_short, Length: 891, dtype: object

Whereas replace() is not a string method, it provides a convenient way to use mappings or vocabularies to translate
certain values. It requires a dictionary to define the mapping {from : to}.

Warning: There is also a replace() method available to replace a specific set of characters. However, when
having a mapping of multiple values, this would become:
titanic["Sex_short"] = titanic["Sex"].str.replace("female", "F")
titanic["Sex_short"] = titanic["Sex_short"].str.replace("male", "M")
This would become cumbersome and easily lead to mistakes. Just think (or try out yourself) what would happen if
those two statements are applied in the opposite order...

• String methods are available using the str accessor.


• String methods work element wise and can be used for conditional indexing.
• The replace method is a convenient method to convert values according to a given dictionary.
A full overview is provided in the user guide pages on working with text data.

2.4.5 Essential basic functionality

Here we discuss a lot of the essential functionality common to the pandas data structures. Here’s how to create some
of the objects used in the examples from the previous section:

In [1]: index = pd.date_range('1/1/2000', periods=8)

In [2]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [3]: df = pd.DataFrame(np.random.randn(8, 3), index=index,
   ...:                   columns=['A', 'B', 'C'])
   ...:


Head and tail

To view a small sample of a Series or DataFrame object, use the head() and tail() methods. The default number
of elements to display is five, but you may pass a custom number.

In [4]: long_series = pd.Series(np.random.randn(1000))

In [5]: long_series.head()
Out[5]:
0 -1.157892
1 -1.344312
2 0.844885
3 1.075770
4 -0.109050
dtype: float64

In [6]: long_series.tail(3)
Out[6]:
997 -0.289388
998 -1.020544
999 0.589993
dtype: float64

Attributes and underlying data

pandas objects have a number of attributes enabling you to access the metadata
• shape: gives the axis dimensions of the object, consistent with ndarray
[email protected]
T56GZSRVAH
• Axis labels
– Series: index (only axis)
– DataFrame: index (rows) and columns
Note, these attributes can be safely assigned to!

In [7]: df[:2]
Out[7]:
A B C
2000-01-01 -0.173215 0.119209 -1.044236
2000-01-02 -0.861849 -2.104569 -0.494929

In [8]: df.columns = [x.lower() for x in df.columns]

In [9]: df
Out[9]:
a b c
2000-01-01 -0.173215 0.119209 -1.044236
2000-01-02 -0.861849 -2.104569 -0.494929
2000-01-03 1.071804 0.721555 -0.706771
2000-01-04 -1.039575 0.271860 -0.424972
2000-01-05 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427
2000-01-07 0.524988 0.404705 0.577046
2000-01-08 -1.715002 -1.039268 -0.370647


Pandas objects (Index, Series, DataFrame) can be thought of as containers for arrays, which hold the actual
data and do the actual computation. For many types, the underlying array is a numpy.ndarray. However, pandas
and 3rd party libraries may extend NumPy’s type system to add support for custom arrays (see dtypes).
To get the actual data inside a Index or Series, use the .array property

In [10]: s.array
Out[10]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64

In [11]: s.index.array
Out[11]:
<PandasArray>
['a', 'b', 'c', 'd', 'e']
Length: 5, dtype: object

array will always be an ExtensionArray. The exact details of what an ExtensionArray is and why pandas
uses them is a bit beyond the scope of this introduction. See dtypes for more.
If you know you need a NumPy array, use to_numpy() or numpy.asarray().

In [12]: s.to_numpy()
Out[12]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])

In [13]: np.asarray(s)
Out[13]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
[email protected]
T56GZSRVAHWhen the Series or Index is backed by an ExtensionArray, to_numpy() may involve copying data and coercing
values. See dtypes for more.
to_numpy() gives some control over the dtype of the resulting numpy.ndarray. For example, consider date-
times with timezones. NumPy doesn’t have a dtype to represent timezone-aware datetimes, so there are two possibly
useful representations:
1. An object-dtype numpy.ndarray with Timestamp objects, each with the correct tz
2. A datetime64[ns] -dtype numpy.ndarray, where the values have been converted to UTC and the time-
zone discarded
Timezones may be preserved with dtype=object

In [14]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))

In [15]: ser.to_numpy(dtype=object)
Out[15]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
dtype=object)

Or thrown away with dtype='datetime64[ns]'

In [16]: ser.to_numpy(dtype="datetime64[ns]")
Out[16]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')


Getting the “raw data” inside a DataFrame is possibly a bit more complex. When your DataFrame only has a
single data type for all the columns, DataFrame.to_numpy() will return the underlying data:

In [17]: df.to_numpy()
Out[17]:
array([[-0.1732, 0.1192, -1.0442],
[-0.8618, -2.1046, -0.4949],
[ 1.0718, 0.7216, -0.7068],
[-1.0396, 0.2719, -0.425 ],
[ 0.567 , 0.2762, -1.0874],
[-0.6737, 0.1136, -1.4784],
[ 0.525 , 0.4047, 0.577 ],
[-1.715 , -1.0393, -0.3706]])

If a DataFrame contains homogeneously-typed data, the ndarray can actually be modified in-place, and the changes
will be reflected in the data structure. For heterogeneous data (e.g. some of the DataFrame’s columns are not all the
same dtype), this will not be the case. The values attribute itself, unlike the axis labels, cannot be assigned to.
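A minimal sketch of that behaviour for a homogeneous (all-float) DataFrame; whether the returned array is a view can
depend on pandas internals, so treat this as illustrative only (df_h is a hypothetical frame, not tutorial data):

df_h = pd.DataFrame(np.zeros((2, 2)), columns=['A', 'B'])  # hypothetical all-float frame
arr = df_h.to_numpy()
arr[0, 0] = 99.0   # write into the ndarray ...
df_h.loc[0, 'A']   # ... and the change shows up in the DataFrame when arr is a view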

Note: When working with heterogeneous data, the dtype of the resulting ndarray will be chosen to accommodate all
of the data involved. For example, if strings are involved, the result will be of object dtype. If there are only floats and
integers, the resulting array will be of float dtype.

In the past, pandas recommended Series.values or DataFrame.values for extracting the data from a Series
or DataFrame. You’ll still find references to these in old code bases and online. Going forward, we recommend
avoiding .values and using .array or .to_numpy(). .values has the following drawbacks:
1. When your Series contains an extension type, it’s unclear whether Series.values returns a NumPy array
[email protected]
or the extension array. Series.array will always return an ExtensionArray, and will never copy data.
Series.to_numpy() will always return a NumPy array, potentially at the cost of copying / coercing values.
2. When your DataFrame contains a mixture of data types, DataFrame.values may involve copying data and
coercing values to a common dtype, a relatively expensive operation. DataFrame.to_numpy(), being a
method, makes it clearer that the returned NumPy array may not be a view on the same data in the DataFrame.

Accelerated operations

pandas has support for accelerating certain types of binary numerical and boolean operations using the numexpr
and bottleneck libraries.
These libraries are especially useful when dealing with large data sets, and provide large speedups. numexpr uses
smart chunking, caching, and multiple cores. bottleneck is a set of specialized cython routines that are especially
fast when dealing with arrays that have nans.
Here is a sample (using 100 column x 100,000 row DataFrames):

Operation    0.11.0 (ms)    Prior Version (ms)    Ratio to Prior
df1 > df2          13.32                125.35            0.1063
df1 * df2          21.71                 36.63            0.5928
df1 + df2          22.04                 36.50            0.6039

You are highly encouraged to install both libraries. See the section Recommended Dependencies for more installation
info.
These are both enabled to be used by default, you can control this by setting the options:


pd.set_option('compute.use_bottleneck', False)
pd.set_option('compute.use_numexpr', False)

Flexible binary operations

With binary operations between pandas data structures, there are two key points of interest:
• Broadcasting behavior between higher- (e.g. DataFrame) and lower-dimensional (e.g. Series) objects.
• Missing data in computations.
We will demonstrate how to manage these issues independently, though they can be handled simultaneously.

Matching / broadcasting behavior

DataFrame has the methods add(), sub(), mul(), div() and related functions radd(), rsub(), ... for
carrying out binary operations. For broadcasting behavior, Series input is of primary interest. Using these functions,
you can either match on the index or columns via the axis keyword:

In [18]: df = pd.DataFrame({
....: 'one': pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
....: 'two': pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
....: 'three': pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
....:

In [19]: df
[email protected]
Out[19]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [20]: row = df.iloc[1]

In [21]: column = df['two']

In [22]: df.sub(row, axis='columns')


Out[22]:
one two three
a 1.051928 -0.139606 NaN
b 0.000000 0.000000 0.000000
c 0.352192 -0.433754 1.277825
d NaN -1.632779 -0.562782

In [23]: df.sub(row, axis=1)


Out[23]:
one two three
a 1.051928 -0.139606 NaN
b 0.000000 0.000000 0.000000
c 0.352192 -0.433754 1.277825
d NaN -1.632779 -0.562782

In [24]: df.sub(column, axis='index')


Out[24]:
one two three
a -0.377535 0.0 NaN
b -1.569069 0.0 -1.962513
c -0.783123 0.0 -0.250933
d NaN 0.0 -0.892516

In [25]: df.sub(column, axis=0)


Out[25]:
one two three
a -0.377535 0.0 NaN
b -1.569069 0.0 -1.962513
c -0.783123 0.0 -0.250933
d NaN 0.0 -0.892516

Furthermore you can align a level of a MultiIndexed DataFrame with a Series.

In [26]: dfmi = df.copy()

In [27]: dfmi.index = pd.MultiIndex.from_tuples([(1, 'a'), (1, 'b'),


....: (1, 'c'), (2, 'a')],
....: names=['first', 'second'])
....:

In [28]: dfmi.sub(column, axis=0, level='second')


Out[28]:
one two three
first second
1 a -0.377535 0.000000 NaN
b -1.569069 0.000000 -1.962513
c -0.783123 0.000000 -0.250933
2 a NaN -1.493173 -2.385688

Series and Index also support the divmod() builtin. This function takes the floor division and modulo operation at
the same time returning a two-tuple of the same type as the left hand side. For example:

In [29]: s = pd.Series(np.arange(10))

In [30]: s
Out[30]:
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
dtype: int64

In [31]: div, rem = divmod(s, 3)

In [32]: div
Out[32]:
0 0
1 0
2 0
3 1
4 1
5 1
6 2
7 2
8 2
9 3
dtype: int64

In [33]: rem
Out[33]:
0 0
1 1
2 2
3 0
4 1
5 2
6 0
7 1
8 2
9 0
dtype: int64

In [34]: idx = pd.Index(np.arange(10))

In [35]: idx
[email protected]
T56GZSRVAHOut[35]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')
In [36]: div, rem = divmod(idx, 3)

In [37]: div
Out[37]: Int64Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64')

In [38]: rem
Out[38]: Int64Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')

We can also do elementwise divmod():


In [39]: div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])

In [40]: div
Out[40]:
0 0
1 0
2 0
3 1
4 1
5 1
6 1
7 1
8 1
9 1
dtype: int64

In [41]: rem
Out[41]:
0 0
1 1
2 2
3 0
4 0
5 1
6 1
7 2
8 2
9 3
dtype: int64

Missing data / operations with fill values

In Series and DataFrame, the arithmetic functions have the option of inputting a fill_value, namely a value to substitute
when at most one of the values at a location is missing. For example, when adding two DataFrame objects, you may
wish to treat NaN as 0 unless both DataFrames are missing that value, in which case the result will be NaN (you can
later replace NaN with some other value using fillna if you wish).
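
In the example below, df2 is assumed to be a copy of df in which the single missing value at row 'a', column 'three' has been filled in (its construction is not shown in the original text), e.g.:

df2 = df.copy()
df2.loc['a', 'three'] = 1.0   # fill the one hole so df and df2 differ only at ('a', 'three')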

In [42]: df
Out[42]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [43]: df2
Out[43]:
one two three
a 1.394981 1.772517 1.000000
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [44]: df + df2
Out[44]:
one two three
a 2.789963 3.545034 NaN
b 0.686107 3.824246 -0.100780
c 1.390491 2.956737 2.454870
d NaN 0.558688 -1.226343

In [45]: df.add(df2, fill_value=0)


Out[45]:
one two three
a 2.789963 3.545034 1.000000
b 0.686107 3.824246 -0.100780
c 1.390491 2.956737 2.454870
d NaN 0.558688 -1.226343


Flexible comparisons

Series and DataFrame have the binary comparison methods eq, ne, lt, gt, le, and ge whose behavior is analogous
to the binary arithmetic operations described above:

In [46]: df.gt(df2)
Out[46]:
one two three
a False False False
b False False False
c False False False
d False False False

In [47]: df2.ne(df)
Out[47]:
one two three
a False False True
b False False False
c False False False
d True False False

These operations produce a pandas object of the same type as the left-hand-side input that is of dtype bool. These
boolean objects can be used in indexing operations; see the section on Boolean indexing.
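
For instance, such a boolean object can be used directly as a mask (a small sketch using the df defined above, not part of the original text):

mask = df['two'] > 1     # boolean Series aligned on df's index
df[mask]                 # keeps only the rows where 'two' exceeds 1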

Boolean reductions

You can apply the reductions: empty, any(), all(), and bool() to provide a way to summarize a boolean result.
In [48]: (df > 0).all()
Out[48]:
one False
two True
three False
dtype: bool

In [49]: (df > 0).any()


Out[49]:
one True
two True
three True
dtype: bool

You can reduce to a final boolean value.

In [50]: (df > 0).any().any()


Out[50]: True

You can test if a pandas object is empty, via the empty property.

In [51]: df.empty
Out[51]: False

In [52]: pd.DataFrame(columns=list('ABC')).empty
Out[52]: True

To evaluate single-element pandas objects in a boolean context, use the method bool():


In [53]: pd.Series([True]).bool()
Out[53]: True

In [54]: pd.Series([False]).bool()
Out[54]: False

In [55]: pd.DataFrame([[True]]).bool()
Out[55]: True

In [56]: pd.DataFrame([[False]]).bool()
Out[56]: False

Warning: You might be tempted to do the following:


>>> if df:
... pass

Or
>>> df and df2

These will both raise errors, as you are trying to compare multiple values:

ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().

See gotchas for a more detailed discussion.


Comparing if objects are equivalent

Often you may find that there is more than one way to compute the same result. As a simple example, consider df
+ df and df * 2. To test that these two computations produce the same result, given the tools shown above, you
might imagine using (df + df == df * 2).all(). But in fact, this expression is False:

In [57]: df + df == df * 2
Out[57]:
one two three
a True True False
b True True True
c True True True
d False True True

In [58]: (df + df == df * 2).all()


Out[58]:
one False
two True
three False
dtype: bool

Notice that the boolean DataFrame df + df == df * 2 contains some False values! This is because NaNs do
not compare as equals:

In [59]: np.nan == np.nan


Out[59]: False


So, NDFrames (such as Series and DataFrames) have an equals() method for testing equality, with NaNs in corresponding locations treated as equal.

In [60]: (df + df).equals(df * 2)


Out[60]: True

Note that the Series or DataFrame index needs to be in the same order for equality to be True:

In [61]: df1 = pd.DataFrame({'col': ['foo', 0, np.nan]})

In [62]: df2 = pd.DataFrame({'col': [np.nan, 0, 'foo']}, index=[2, 1, 0])

In [63]: df1.equals(df2)
Out[63]: False

In [64]: df1.equals(df2.sort_index())
Out[64]: True

Comparing array-like objects

You can conveniently perform element-wise comparisons when comparing a pandas data structure with a scalar value:

In [65]: pd.Series(['foo', 'bar', 'baz']) == 'foo'


Out[65]:
0 True
1 False
2 False
dtype: bool
In [66]: pd.Index(['foo', 'bar', 'baz']) == 'foo'
Out[66]: array([ True, False, False])

Pandas also handles element-wise comparisons between different array-like objects of the same length:

In [67]: pd.Series(['foo', 'bar', 'baz']) == pd.Index(['foo', 'bar', 'qux'])


Out[67]:
0 True
1 True
2 False
dtype: bool

In [68]: pd.Series(['foo', 'bar', 'baz']) == np.array(['foo', 'bar', 'qux'])


Out[68]:
0 True
1 True
2 False
dtype: bool

Trying to compare Index or Series objects of different lengths will raise a ValueError:

In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])


ValueError: Series lengths must match to compare

In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])


ValueError: Series lengths must match to compare

Note that this is different from the NumPy behavior where a comparison can be broadcast:


In [69]: np.array([1, 2, 3]) == np.array([2])


Out[69]: array([False, True, False])

or it can return False if broadcasting cannot be done:

In [70]: np.array([1, 2, 3]) == np.array([1, 2])


Out[70]: False

Combining overlapping data sets

A problem occasionally arising is the combination of two similar data sets where values in one are preferred over the
other. An example would be two data series representing a particular economic indicator where one is considered to
be of “higher quality”. However, the lower quality series might extend further back in history or have more complete
data coverage. As such, we would like to combine two DataFrame objects where missing values in one DataFrame
are conditionally filled with like-labeled values from the other DataFrame. The function implementing this operation
is combine_first(), which we illustrate:

In [71]: df1 = pd.DataFrame({'A': [1., np.nan, 3., 5., np.nan],


....: 'B': [np.nan, 2., 3., np.nan, 6.]})
....:

In [72]: df2 = pd.DataFrame({'A': [5., 2., 4., np.nan, 3., 7.],


....: 'B': [np.nan, np.nan, 3., 4., 6., 8.]})
....:

In [73]: df1
Out[73]:
A B
0 1.0 NaN
1 NaN 2.0
2 3.0 3.0
3 5.0 NaN
4 NaN 6.0

In [74]: df2
Out[74]:
A B
0 5.0 NaN
1 2.0 NaN
2 4.0 3.0
3 NaN 4.0
4 3.0 6.0
5 7.0 8.0

In [75]: df1.combine_first(df2)
Out[75]:
A B
0 1.0 NaN
1 2.0 2.0
2 3.0 3.0
3 5.0 4.0
4 3.0 6.0
5 7.0 8.0


General DataFrame combine

The combine_first() method above calls the more general DataFrame.combine(). This method takes
another DataFrame and a combiner function, aligns the input DataFrame and then passes the combiner function pairs
of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
In [76]: def combiner(x, y):
....: return np.where(pd.isna(x), y, x)
....:
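
The call that actually reproduces combine_first() is presumably the following (the original text stops after defining the combiner):

df1.combine(df2, combiner)   # same result as df1.combine_first(df2)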

Descriptive statistics

There exists a large number of methods for computing descriptive statistics and other related operations on Series,
DataFrame. Most of these are aggregations (hence producing a lower-dimensional result) like sum(), mean(), and
quantile(), but some of them, like cumsum() and cumprod(), produce an object of the same size. Generally
speaking, these methods take an axis argument, just like ndarray.{sum, std, ...}, but the axis can be specified by name
or integer:
• Series: no axis argument needed
• DataFrame: “index” (axis=0, default), “columns” (axis=1)
For example:
In [77]: df
Out[77]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [78]: df.mean(0)
Out[78]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64

In [79]: df.mean(1)
Out[79]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64

All such methods have a skipna option signaling whether to exclude missing data (True by default):
In [80]: df.sum(0, skipna=False)
Out[80]:
one NaN
two 5.442353
three NaN
dtype: float64

In [81]: df.sum(axis=1, skipna=True)


Out[81]:
a 3.167498
b 2.204786
c 3.401050
d -0.333828
dtype: float64

Combined with the broadcasting / arithmetic behavior, one can describe various statistical procedures, like standardization (rendering data zero mean and standard deviation 1), very concisely:

In [82]: ts_stand = (df - df.mean()) / df.std()

In [83]: ts_stand.std()
Out[83]:
one 1.0
two 1.0
three 1.0
dtype: float64

In [84]: xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0)

In [85]: xs_stand.std(1)
Out[85]:
a 1.0
b 1.0
c 1.0
d 1.0
dtype: float64

Note that methods like cumsum() and cumprod() preserve the location of NaN values. This is somewhat different
from expanding() and rolling(). For more details please see this note.

In [86]: df.cumsum()
Out[86]:
one two three
a 1.394981 1.772517 NaN
b 1.738035 3.684640 -0.050390
c 2.433281 5.163008 1.177045
d NaN 5.442353 0.563873

Here is a quick reference summary table of common functions. Each also takes an optional level parameter which
applies only if the object has a hierarchical index.


Function Description
count Number of non-NA observations
sum Sum of values
mean Mean of values
mad Mean absolute deviation
median Arithmetic median of values
min Minimum
max Maximum
mode Mode
abs Absolute Value
prod Product of values
std Bessel-corrected sample standard deviation
var Unbiased variance
sem Standard error of the mean
skew Sample skewness (3rd moment)
kurt Sample kurtosis (4th moment)
quantile Sample quantile (value at %)
cumsum Cumulative sum
cumprod Cumulative product
cummax Cumulative maximum
cummin Cumulative minimum
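
As a short sketch of the level parameter mentioned above (reusing the MultiIndexed dfmi frame built earlier; in this 1.0.x API it is broadly equivalent to a groupby on that level):

dfmi.sum(level='second')               # aggregate across the 'first' level, keeping 'second'
dfmi.groupby(level='second').sum()     # the equivalent groupby spelling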

Note that by chance some NumPy methods, like mean, std, and sum, will exclude NAs on Series input by default:

In [87]: np.mean(df['one'])
Out[87]: 0.8110935116651192
In [88]: np.mean(df['one'].to_numpy())
Out[88]: nan

Series.nunique() will return the number of unique non-NA values in a Series:

In [89]: series = pd.Series(np.random.randn(500))

In [90]: series[20:500] = np.nan

In [91]: series[10:20] = 5

In [92]: series.nunique()
Out[92]: 11

Summarizing data: describe

There is a convenient describe() function which computes a variety of summary statistics about a Series or the
columns of a DataFrame (excluding NAs of course):

In [93]: series = pd.Series(np.random.randn(1000))

In [94]: series[::2] = np.nan

In [95]: series.describe()
Out[95]:
count 500.000000
mean -0.021292
std 1.015906
min -2.683763
25% -0.699070
50% -0.069718
75% 0.714483
max 3.160915
dtype: float64

In [96]: frame = pd.DataFrame(np.random.randn(1000, 5),


....: columns=['a', 'b', 'c', 'd', 'e'])
....:

In [97]: frame.iloc[::2] = np.nan

In [98]: frame.describe()
Out[98]:
a b c d e
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean 0.033387 0.030045 -0.043719 -0.051686 0.005979
std 1.017152 0.978743 1.025270 1.015988 1.006695
min -3.000951 -2.637901 -3.303099 -3.159200 -3.188821
25% -0.647623 -0.576449 -0.712369 -0.691338 -0.691115
50% 0.047578 -0.021499 -0.023888 -0.032652 -0.025363
75% 0.729907 0.775880 0.618896 0.670047 0.649748
max 2.740139 2.752332 3.004229 2.728702 3.240991
You can select specific percentiles to include in the output:
In [99]: series.describe(percentiles=[.05, .25, .75, .95])
Out[99]:
count 500.000000
mean -0.021292
std 1.015906
min -2.683763
5% -1.645423
25% -0.699070
50% -0.069718
75% 0.714483
95% 1.711409
max 3.160915
dtype: float64

By default, the median is always included.


For a non-numerical Series object, describe() will give a simple summary of the number of unique values and
most frequently occurring values:
In [100]: s = pd.Series(['a', 'a', 'b', 'b', 'a', 'a', np.nan, 'c', 'd', 'a'])

In [101]: s.describe()
Out[101]:
count 9
unique 4
top a
freq 5
dtype: object

Note that on a mixed-type DataFrame object, describe() will restrict the summary to include only numerical
columns or, if none are, only categorical columns:

In [102]: frame = pd.DataFrame({'a': ['Yes', 'Yes', 'No', 'No'], 'b': range(4)})

In [103]: frame.describe()
Out[103]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000

This behavior can be controlled by providing a list of types as include/exclude arguments. The special value
all can also be used:

In [104]: frame.describe(include=['object'])
Out[104]:
a
count 4
unique 2
top Yes
freq 2

In [105]: frame.describe(include=['number'])
Out[105]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000

In [106]: frame.describe(include='all')
Out[106]:
a b
count 4 4.000000
unique 2 NaN
top Yes NaN
freq 2 NaN
mean NaN 1.500000
std NaN 1.290994
min NaN 0.000000
25% NaN 0.750000
50% NaN 1.500000
75% NaN 2.250000
max NaN 3.000000


That feature relies on select_dtypes. Refer to there for details about accepted inputs.

Index of min/max values

The idxmin() and idxmax() functions on Series and DataFrame compute the index labels with the minimum and
maximum corresponding values:

In [107]: s1 = pd.Series(np.random.randn(5))

In [108]: s1
Out[108]:
0 1.118076
1 -0.352051
2 -1.242883
3 -1.277155
4 -0.641184
dtype: float64

In [109]: s1.idxmin(), s1.idxmax()


Out[109]: (3, 0)

In [110]: df1 = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])

In [111]: df1
Out[111]:
A B C
0 -0.327863 -0.946180 -0.137570
1 -0.186235 -0.257213 -0.486567
2 -0.507027 -0.871259 -0.111110
3 2.000339 -2.430505 0.089759
4 -0.321434 -0.033695 0.096271

In [112]: df1.idxmin(axis=0)
Out[112]:
A 2
B 3
C 1
dtype: int64

In [113]: df1.idxmax(axis=1)
Out[113]:
0 C
1 A
2 C
3 A
4 C
dtype: object

When there are multiple rows (or columns) matching the minimum or maximum value, idxmin() and idxmax()
return the first matching index:

In [114]: df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=['A'], index=list('edcba'))

In [115]: df3
Out[115]:
A
e 2.0
d 1.0
c 1.0
b 3.0
a NaN

In [116]: df3['A'].idxmin()
Out[116]: 'd'

Note: idxmin and idxmax are called argmin and argmax in NumPy.

Value counts (histogramming) / mode

The value_counts() Series method and top-level function compute a histogram of a 1D array of values. It can
also be used as a function on regular arrays:
In [117]: data = np.random.randint(0, 7, size=50)

In [118]: data
Out[118]:
array([6, 6, 2, 3, 5, 3, 2, 5, 4, 5, 4, 3, 4, 5, 0, 2, 0, 4, 2, 0, 3, 2,
2, 5, 6, 5, 3, 4, 6, 4, 3, 5, 6, 4, 3, 6, 2, 6, 6, 2, 3, 4, 2, 1,
6, 2, 6, 1, 5, 4])

In [119]: s = pd.Series(data)
In [120]: s.value_counts()
Out[120]:
6 10
2 10
4 9
5 8
3 8
0 3
1 2
dtype: int64

In [121]: pd.value_counts(data)
Out[121]:
6 10
2 10
4 9
5 8
3 8
0 3
1 2
dtype: int64

Similarly, you can get the most frequently occurring value(s) (the mode) of the values in a Series or DataFrame:
In [122]: s5 = pd.Series([1, 1, 3, 3, 3, 5, 5, 7, 7, 7])

In [123]: s5.mode()
Out[123]:
0 3
1 7
dtype: int64

In [124]: df5 = pd.DataFrame({"A": np.random.randint(0, 7, size=50),


.....: "B": np.random.randint(-10, 15, size=50)})
.....:

In [125]: df5.mode()
Out[125]:
A B
0 1.0 -9
1 NaN 10
2 NaN 13

Discretization and quantiling

Continuous values can be discretized using the cut() (bins based on values) and qcut() (bins based on sample
quantiles) functions:
In [126]: arr = np.random.randn(20)

In [127]: factor = pd.cut(arr, 4)

In [128]: factor
Out[128]:
[(-0.251, 0.464], (-0.968, -0.251], (0.464, 1.179], (-0.251, 0.464], (-0.968, -0.251], ..., (-0.251, 0.464], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251]]
Length: 20
Categories (4, interval[float64]): [(-0.968, -0.251] < (-0.251, 0.464] < (0.464, 1.179] < (1.179, 1.893]]

In [129]: factor = pd.cut(arr, [-5, -1, 0, 1, 5])

In [130]: factor
Out[130]:
[(0, 1], (-1, 0], (0, 1], (0, 1], (-1, 0], ..., (-1, 0], (-1, 0], (-1, 0], (-1, 0], (-1, 0]]
Length: 20
Categories (4, interval[int64]): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]

qcut() computes sample quantiles. For example, we could slice up some normally distributed data into equal-size
quartiles like so:
In [131]: arr = np.random.randn(30)

In [132]: factor = pd.qcut(arr, [0, .25, .5, .75, 1])

In [133]: factor
Out[133]:
[(0.569, 1.184], (-2.278, -0.301], (-2.278, -0.301], (0.569, 1.184], (0.569, 1.184], ..., (-0.301, 0.569], (1.184, 2.346], (1.184, 2.346], (-0.301, 0.569], (-2.278, -0.301]]
Length: 30
Categories (4, interval[float64]): [(-2.278, -0.301] < (-0.301, 0.569] < (0.569, 1.184] < (1.184, 2.346]]

In [134]: pd.value_counts(factor)
Out[134]:
(1.184, 2.346] 8
(-2.278, -0.301] 8
(0.569, 1.184] 7
(-0.301, 0.569] 7
dtype: int64

We can also pass infinite values to define the bins:

In [135]: arr = np.random.randn(20)

In [136]: factor = pd.cut(arr, [-np.inf, 0, np.inf])

In [137]: factor
Out[137]:
[(-inf, 0.0], (0.0, inf], (0.0, inf], (-inf, 0.0], (-inf, 0.0], ..., (-inf, 0.0], (-inf, 0.0], (-inf, 0.0], (0.0, inf], (0.0, inf]]
Length: 20
Categories (2, interval[float64]): [(-inf, 0.0] < (0.0, inf]]

Function application
To apply your own or another library’s functions to pandas objects, you should be aware of the methods below.
The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or
Series, row- or column-wise, or elementwise.
1. Tablewise Function Application: pipe()
2. Row or Column-wise Function Application: apply()
3. Aggregation API: agg() and transform()
4. Applying Elementwise Functions: applymap()

Tablewise function application

DataFrames and Series can be passed into functions. However, if the function needs to be called in a chain,
consider using the pipe() method.
First some setup:

In [138]: def extract_city_name(df):


.....: """
.....: Chicago, IL -> Chicago for city_name column
.....: """
.....: df['city_name'] = df['city_and_code'].str.split(",").str.get(0)
.....: return df
.....:


In [139]: def add_country_name(df, country_name=None):
.....: """
.....: Chicago -> Chicago-US for city_name column
.....: """
.....: col = 'city_name'
.....: df['city_and_country'] = df[col] + country_name
.....: return df
.....:

In [140]: df_p = pd.DataFrame({'city_and_code': ['Chicago, IL']})

extract_city_name and add_country_name are functions taking and returning DataFrames.


Now compare the following:

In [141]: add_country_name(extract_city_name(df_p), country_name='US')


Out[141]:
city_and_code city_name city_and_country
0 Chicago, IL Chicago ChicagoUS

Is equivalent to:

In [142]: (df_p.pipe(extract_city_name)
.....: .pipe(add_country_name, country_name="US"))
.....:
Out[142]:
city_and_code city_name city_and_country
0 Chicago, IL Chicago ChicagoUS
Pandas encourages the second style, which is known as method chaining. pipe makes it easy to use your own or
another library’s functions in method chains, alongside pandas’ methods.
In the example above, the functions extract_city_name and add_country_name each expected a
DataFrame as the first positional argument. What if the function you wish to apply takes its data as, say, the
second argument? In this case, provide pipe with a tuple of (callable, data_keyword). .pipe will route
the DataFrame to the argument specified in the tuple.
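
Before the statsmodels example that follows, here is a minimal, self-contained sketch of the tuple form (the helper function and its data keyword are hypothetical, not from the original text):

def shift_values(amount, data=None):
    # 'data' receives the DataFrame because of the (shift_values, 'data') tuple below
    return data + amount

df.pipe((shift_values, 'data'), 10)   # calls shift_values(10, data=df)
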
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the
second argument, data. We pass in the function, keyword pair (sm.ols, 'data') to pipe:

In [143]: import statsmodels.formula.api as sm

In [144]: bb = pd.read_csv('data/baseball.csv', index_col='id')

In [145]: (bb.query('h > 0')


.....: .assign(ln_h=lambda df: np.log(df.h))
.....: .pipe((sm.ols, 'data'), 'hr ~ ln_h + year + g + C(lg)')
.....: .fit()
.....: .summary()
.....: )
.....:
Out[145]:
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: hr R-squared: 0.685
Model: OLS Adj. R-squared: 0.665
Method: Least Squares F-statistic: 34.28
Date: Wed, 18 Mar 2020 Prob (F-statistic): 3.48e-15
Time: 15:38:44 Log-Likelihood: -205.92
No. Observations: 68 AIC: 421.8
Df Residuals: 63 BIC: 432.9
Df Model: 4
Covariance Type: nonrobust
===============================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------
Intercept -8484.7720 4664.146 -1.819 0.074 -1.78e+04 835.780
C(lg)[T.NL] -2.2736 1.325 -1.716 0.091 -4.922 0.375
ln_h -1.3542 0.875 -1.547 0.127 -3.103 0.395
year 4.2277 2.324 1.819 0.074 -0.417 8.872
g 0.1841 0.029 6.258 0.000 0.125 0.243
==============================================================================
Omnibus: 10.875 Durbin-Watson: 1.999
Prob(Omnibus): 0.004 Jarque-Bera (JB): 17.298
Skew: 0.537 Prob(JB): 0.000175
Kurtosis: 5.225 Cond. No. 1.49e+07
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.49e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
"""

The pipe method is inspired by unix pipes and more recently dplyr and magrittr, which have introduced the popular
(%>%) (read pipe) operator for R. The implementation of pipe here is quite clean and feels right at home in python.
We encourage you to view the source code of pipe().

Row or column-wise function application

Arbitrary functions can be applied along the axes of a DataFrame using the apply() method, which, like the descriptive statistics methods, takes an optional axis argument:
In [146]: df.apply(np.mean)
Out[146]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64

In [147]: df.apply(np.mean, axis=1)


Out[147]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64

In [148]: df.apply(lambda x: x.max() - x.min())


Out[148]:
one 1.051928
two 1.632779
three 1.840607
dtype: float64

In [149]: df.apply(np.cumsum)
Out[149]:
one two three
a 1.394981 1.772517 NaN
b 1.738035 3.684640 -0.050390
c 2.433281 5.163008 1.177045
d NaN 5.442353 0.563873

In [150]: df.apply(np.exp)
Out[150]:
one two three
a 4.034899 5.885648 NaN
b 1.409244 6.767440 0.950858
c 2.004201 4.385785 3.412466
d NaN 1.322262 0.541630

The apply() method will also dispatch on a string method name.


In [151]: df.apply('mean')
Out[151]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64

In [152]: df.apply('mean', axis=1)


Out[152]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64

The return type of the function passed to apply() affects the type of the final output from DataFrame.apply for
the default behaviour:
• If the applied function returns a Series, the final output is a DataFrame. The columns match the index of
the Series returned by the applied function.
• If the applied function returns any other type, the final output is a Series.
This default behaviour can be overridden using the result_type, which accepts three options: reduce,
broadcast, and expand. These will determine how list-likes return values expand (or not) to a DataFrame.
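
As a small sketch of the non-default options (not from the original text): 'expand' turns list-like return values into new columns, while 'broadcast' pushes them back into the original shape:

df.apply(lambda x: [x.min(), x.max()], axis=1, result_type='expand')    # produces columns 0 and 1
df.apply(lambda x: [1, 2, 3], axis=1, result_type='broadcast')          # keeps the original column labels
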
apply() combined with some cleverness can be used to answer many questions about a data set. For example,
suppose we wanted to extract the date where the maximum value for each column occurred:
In [153]: tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'],
.....: index=pd.date_range('1/1/2000', periods=1000))
.....:

In [154]: tsdf.apply(lambda x: x.idxmax())


Out[154]:
A 2000-08-06
B 2001-01-18
C 2001-07-18
dtype: datetime64[ns]

You may also pass additional arguments and keyword arguments to the apply() method. For instance, consider the
following function you would like to apply:

def subtract_and_divide(x, sub, divide=1):


return (x - sub) / divide

You may then apply this function as follows:

df.apply(subtract_and_divide, args=(5,), divide=3)

Another useful feature is the ability to pass Series methods to carry out some Series operation on each column or row:

In [155]: tsdf
Out[155]:
A B C
2000-01-01 -0.158131 -0.232466 0.321604
2000-01-02 -1.810340 -3.105758 0.433834
2000-01-03 -1.209847 -1.156793 -0.136794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 -0.653602 0.178875 1.008298
2000-01-09 1.007996 0.462824 0.254472
2000-01-10 0.307473 0.600337 1.643950

In [156]: tsdf.apply(pd.Series.interpolate)
Out[156]:
A B C
2000-01-01 -0.158131 -0.232466 0.321604
2000-01-02 -1.810340 -3.105758 0.433834
2000-01-03 -1.209847 -1.156793 -0.136794
2000-01-04 -1.098598 -0.889659 0.092225
2000-01-05 -0.987349 -0.622526 0.321243
2000-01-06 -0.876100 -0.355392 0.550262
2000-01-07 -0.764851 -0.088259 0.779280
2000-01-08 -0.653602 0.178875 1.008298
2000-01-09 1.007996 0.462824 0.254472
2000-01-10 0.307473 0.600337 1.643950

Finally, apply() takes an argument raw which is False by default, which converts each row or column into a Series
before applying the function. When set to True, the passed function will instead receive an ndarray object, which has
positive performance implications if you do not need the indexing functionality.
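
A hedged sketch of the trade-off (the ndarray the function receives still supports NumPy-style reductions, just not label-based access):

df.apply(np.sum, raw=True)                          # each column arrives as a plain ndarray
df.apply(lambda x: x.max() - x.min(), raw=True)     # fine: ndarrays have max()/min()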


Aggregation API

The aggregation API allows one to express possibly multiple aggregation operations in a single concise way. This API
is similar across pandas objects, see groupby API, the window functions API, and the resample API. The entry point
for aggregation is DataFrame.aggregate(), or the alias DataFrame.agg().
We will use a similar starting frame from above:

In [157]: tsdf = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],


.....: index=pd.date_range('1/1/2000', periods=10))
.....:

In [158]: tsdf.iloc[3:7] = np.nan

In [159]: tsdf
Out[159]:
A B C
2000-01-01 1.257606 1.004194 0.167574
2000-01-02 -0.749892 0.288112 -0.757304
2000-01-03 -0.207550 -0.298599 0.116018
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.814347 -0.257623 0.869226
2000-01-09 -0.250663 -1.206601 0.896839
2000-01-10 2.169758 -1.333363 0.283157

Using a single function is equivalent to apply(). You can also pass named methods as strings. These will return a
Series of the aggregated output:

In [160]: tsdf.agg(np.sum)
Out[160]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64

In [161]: tsdf.agg('sum')
Out[161]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64

# these are equivalent to a ``.sum()`` because we are aggregating
# on a single function
In [162]: tsdf.sum()
Out[162]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64

A single aggregation on a Series will return a scalar value:

In [163]: tsdf['A'].agg('sum')
Out[163]: 3.033606102414146


Aggregating with multiple functions

You can pass multiple aggregation arguments as a list. The results of each of the passed functions will be a row in the
resulting DataFrame. These are naturally named from the aggregation function.

In [164]: tsdf.agg(['sum'])
Out[164]:
A B C
sum 3.033606 -1.803879 1.57551

Multiple functions yield multiple rows:

In [165]: tsdf.agg(['sum', 'mean'])


Out[165]:
A B C
sum 3.033606 -1.803879 1.575510
mean 0.505601 -0.300647 0.262585

On a Series, multiple functions return a Series, indexed by the function names:

In [166]: tsdf['A'].agg(['sum', 'mean'])


Out[166]:
sum 3.033606
mean 0.505601
Name: A, dtype: float64

Passing a lambda function will yield a <lambda> named row:


In [167]: tsdf['A'].agg(['sum', lambda x: x.mean()])
Out[167]:
sum 3.033606
<lambda> 0.505601
Name: A, dtype: float64

Passing a named function will yield that name for the row:

In [168]: def mymean(x):


.....: return x.mean()
.....:

In [169]: tsdf['A'].agg(['sum', mymean])


Out[169]:
sum 3.033606
mymean 0.505601
Name: A, dtype: float64

Aggregating with a dict

Passing a dictionary that maps column names to a scalar or a list of scalars to DataFrame.agg allows you to customize
which functions are applied to which columns. Note that the results are not in any particular order; you can use an
OrderedDict instead to guarantee ordering.

In [170]: tsdf.agg({'A': 'mean', 'B': 'sum'})


Out[170]:
A 0.505601
B -1.803879
dtype: float64

Passing a list-like will generate a DataFrame output. You will get a matrix-like output of all of the aggregators. The
output will consist of all unique functions. Those that are not noted for a particular column will be NaN:

In [171]: tsdf.agg({'A': ['mean', 'min'], 'B': 'sum'})


Out[171]:
A B
mean 0.505601 NaN
min -0.749892 NaN
sum NaN -1.803879

Mixed dtypes

When presented with mixed dtypes that cannot aggregate, .agg will only take the valid aggregations. This is similar
to how groupby .agg works.

In [172]: mdf = pd.DataFrame({'A': [1, 2, 3],


.....: 'B': [1., 2., 3.],
.....: 'C': ['foo', 'bar', 'baz'],
.....: 'D': pd.date_range('20130101', periods=3)})
.....:

In [173]: mdf.dtypes
Out[173]:
A int64
B float64
C object
D datetime64[ns]
dtype: object

In [174]: mdf.agg(['min', 'sum'])


Out[174]:
A B C D
min 1 1.0 bar 2013-01-01
sum 6 6.0 foobarbaz NaT

Custom describe

With .agg() it is possible to easily create a custom describe function, similar to the built-in describe function.

In [175]: from functools import partial

In [176]: q_25 = partial(pd.Series.quantile, q=0.25)

In [177]: q_25.__name__ = '25%'

In [178]: q_75 = partial(pd.Series.quantile, q=0.75)

In [179]: q_75.__name__ = '75%'


In [180]: tsdf.agg(['count', 'mean', 'std', 'min', q_25, 'median', q_75, 'max'])
Out[180]:
A B C
count 6.000000 6.000000 6.000000
mean 0.505601 -0.300647 0.262585
std 1.103362 0.887508 0.606860
min -0.749892 -1.333363 -0.757304
25% -0.239885 -0.979600 0.128907
median 0.303398 -0.278111 0.225365
75% 1.146791 0.151678 0.722709
max 2.169758 1.004194 0.896839

Transform API

The transform() method returns an object that is indexed the same (same size) as the original. This API allows
you to provide multiple operations at the same time rather than one-by-one. Its API is quite similar to the .agg API.
We create a frame similar to the one used in the above sections.

In [181]: tsdf = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],


.....: index=pd.date_range('1/1/2000', periods=10))
.....:

In [182]: tsdf.iloc[3:7] = np.nan

In [183]: tsdf
Out[183]:
A B C
2000-01-01 -0.428759 -0.864890 -0.675341
2000-01-02 -0.168731 1.338144 -1.279321
2000-01-03 -1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 -1.240447 -0.201052
2000-01-09 -0.157795 0.791197 -1.144209
2000-01-10 -0.030876 0.371900 0.061932

Transform the entire frame. .transform() allows input functions as: a NumPy function, a string function name or
a user defined function.

In [184]: tsdf.transform(np.abs)
Out[184]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932

In [185]: tsdf.transform('abs')
Out[185]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932

In [186]: tsdf.transform(lambda x: x.abs())


Out[186]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
Here transform() received a single function; this is equivalent to a ufunc application.
In [187]: np.abs(tsdf)
Out[187]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932

Passing a single function to .transform() with a Series will yield a single Series in return.
In [188]: tsdf['A'].transform(np.abs)
Out[188]:
2000-01-01 0.428759
2000-01-02 0.168731
2000-01-03 1.621034
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
2000-01-08 0.254374
2000-01-09 0.157795
2000-01-10 0.030876
Freq: D, Name: A, dtype: float64

Transform with multiple functions

Passing multiple functions will yield a column MultiIndexed DataFrame. The first level will be the original frame
column names; the second level will be the names of the transforming functions.
In [189]: tsdf.transform([np.abs, lambda x: x + 1])
Out[189]:
A B C
absolute <lambda> absolute <lambda> absolute <lambda>
2000-01-01 0.428759 0.571241 0.864890 0.135110 0.675341 0.324659
2000-01-02 0.168731 0.831269 1.338144 2.338144 1.279321 -0.279321
2000-01-03 1.621034 -0.621034 0.438107 1.438107 0.903794 1.903794
2000-01-04 NaN NaN NaN NaN NaN NaN
2000-01-05 NaN NaN NaN NaN NaN NaN
2000-01-06 NaN NaN NaN NaN NaN NaN
2000-01-07 NaN NaN NaN NaN NaN NaN
2000-01-08 0.254374 1.254374 1.240447 -0.240447 0.201052 0.798948
2000-01-09 0.157795 0.842205 0.791197 1.791197 1.144209 -0.144209
2000-01-10 0.030876 0.969124 0.371900 1.371900 0.061932 1.061932

Passing multiple functions to a Series will yield a DataFrame. The resulting column names will be the transforming
functions.
In [190]: tsdf['A'].transform([np.abs, lambda x: x + 1])
Out[190]:
absolute <lambda>
2000-01-01 0.428759 0.571241
2000-01-02 0.168731 0.831269
2000-01-03 1.621034 -0.621034
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.254374 1.254374
2000-01-09 0.157795 0.842205
2000-01-10 0.030876 0.969124

Transforming with a dict

Passing a dict of functions will allow selective transforming per column.


In [191]: tsdf.transform({'A': np.abs, 'B': lambda x: x + 1})
Out[191]:
A B
2000-01-01 0.428759 0.135110
2000-01-02 0.168731 2.338144
2000-01-03 1.621034 1.438107
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.254374 -0.240447
2000-01-09 0.157795 1.791197
2000-01-10 0.030876 1.371900

Passing a dict of lists will generate a MultiIndexed DataFrame with these selective transforms.

In [192]: tsdf.transform({'A': np.abs, 'B': [lambda x: x + 1, 'sqrt']})


Out[192]:
A B
absolute <lambda> sqrt
2000-01-01 0.428759 0.135110 NaN
2000-01-02 0.168731 2.338144 1.156782
2000-01-03 1.621034 1.438107 0.661897
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 -0.240447 NaN
2000-01-09 0.157795 1.791197 0.889493
2000-01-10 0.030876 1.371900 0.609836

Applying elementwise functions

Since not all functions can be vectorized (accept NumPy arrays and return another array or value), the methods
applymap() on DataFrame and analogously map() on Series accept any Python function taking a single value and
returning a single value. For example:

In [193]: df4
Out[193]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [194]: def f(x):


.....: return len(str(x))
.....:

In [195]: df4['one'].map(f)
Out[195]:
a 18
b 19
c 18
d 3
Name: one, dtype: int64

In [196]: df4.applymap(f)
Out[196]:
one two three
a 18 17 3
b 19 18 20
c 18 18 16
d 3 19 19

Series.map() has an additional feature; it can be used to easily “link” or “map” values defined by a secondary
series. This is closely related to merging/joining functionality:

In [197]: s = pd.Series(['six', 'seven', 'six', 'seven', 'six'],


.....: index=['a', 'b', 'c', 'd', 'e'])
.....:

In [198]: t = pd.Series({'six': 6., 'seven': 7.})

In [199]: s
Out[199]:
a six
b seven
c six
d seven
e six
dtype: object

In [200]: s.map(t)
Out[200]:
a 6.0
b 7.0
c 6.0
d 7.0
e 6.0
dtype: float64

Reindexing and altering labels

reindex() is the fundamental data alignment method in pandas. It is used to implement nearly all other features
relying on label-alignment functionality. To reindex means to conform the data to match a given set of labels along a
particular axis. This accomplishes several things:
• Reorders the existing data to match a new set of labels
• Inserts missing value (NA) markers in label locations where no data for that label existed
• If specified, fills data for missing labels using logic (highly relevant to working with time series data)
Here is a simple example:

In [201]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [202]: s
Out[202]:
a 1.695148
b 1.328614
c 1.234686
d -0.385845
e -1.326508
dtype: float64

In [203]: s.reindex(['e', 'b', 'f', 'd'])


Out[203]:
e -1.326508
b 1.328614
f NaN
d -0.385845
dtype: float64

Here, the f label was not contained in the Series and hence appears as NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:

In [204]: df
Out[204]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [205]: df.reindex(index=['c', 'f', 'b'], columns=['three', 'two', 'one'])


Out[205]:
three two one
c 1.227435 1.478369 0.695246
f NaN NaN NaN
b -0.050390 1.912123 0.343054

You may also use reindex with an axis keyword:


In [206]: df.reindex(['c', 'f', 'b'], axis='index')
Out[206]:
one two three
c 0.695246 1.478369 1.227435
f NaN NaN NaN
b 0.343054 1.912123 -0.050390

Note that the Index objects containing the actual axis labels can be shared between objects. So if we have a Series
and a DataFrame, the following can be done:

In [207]: rs = s.reindex(df.index)

In [208]: rs
Out[208]:
a 1.695148
b 1.328614
c 1.234686
d -0.385845
dtype: float64

In [209]: rs.index is df.index


Out[209]: True

This means that the reindexed Series’s index is the same Python object as the DataFrame’s index.
New in version 0.21.0.
DataFrame.reindex() also supports an “axis-style” calling convention, where you specify a single labels
argument and the axis it applies to.


In [210]: df.reindex(['c', 'f', 'b'], axis='index')


Out[210]:
one two three
c 0.695246 1.478369 1.227435
f NaN NaN NaN
b 0.343054 1.912123 -0.050390

In [211]: df.reindex(['three', 'two', 'one'], axis='columns')


Out[211]:
three two one
a NaN 1.772517 1.394981
b -0.050390 1.912123 0.343054
c 1.227435 1.478369 0.695246
d -0.613172 0.279344 NaN

See also:
MultiIndex / Advanced Indexing is an even more concise way of doing reindexing.

Note: When writing performance-sensitive code, there is a good reason to spend some time becoming a reindexing
ninja: many operations are faster on pre-aligned data. Adding two unaligned DataFrames internally triggers a
reindexing step. For exploratory analysis you will hardly notice the difference (because reindex has been heavily
optimized), but when CPU cycles matter sprinkling a few explicit reindex calls here and there can have an impact.

Reindexing to align with another object


You may wish to take an object and reindex its axes to be labeled the same as another object. While the syntax for this
is straightforward albeit verbose, it is a common enough operation that the reindex_like() method is available
to make this simpler:

In [212]: df2
Out[212]:
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369

In [213]: df3
Out[213]:
one two
a 0.583888 0.051514
b -0.468040 0.191120
c -0.115848 -0.242634

In [214]: df.reindex_like(df2)
Out[214]:
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369


Aligning objects with each other with align

The align() method is the fastest way to simultaneously align two objects. It supports a join argument (related to
joining and merging):
• join='outer': take the union of the indexes (default)
• join='left': use the calling object’s index
• join='right': use the passed object’s index
• join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:

In [215]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [216]: s1 = s[:4]

In [217]: s2 = s[1:]

In [218]: s1.align(s2)
Out[218]:
(a -0.186646
b -1.692424
c -0.303893
d -1.425662
e NaN
dtype: float64,
a NaN
b -1.692424
c -0.303893
d -1.425662
e 1.114285
dtype: float64)

In [219]: s1.align(s2, join='inner')


Out[219]:
(b -1.692424
c -0.303893
d -1.425662
dtype: float64,
b -1.692424
c -0.303893
d -1.425662
dtype: float64)

In [220]: s1.align(s2, join='left')


Out[220]:
(a -0.186646
b -1.692424
c -0.303893
d -1.425662
dtype: float64,
a NaN
b -1.692424
c -0.303893
d -1.425662
dtype: float64)


For DataFrames, the join method will be applied to both the index and the columns by default:
In [221]: df.align(df2, join='inner')
Out[221]:
( one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369,
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369)

You can also pass an axis option to only align on the specified axis:
In [222]: df.align(df2, join='inner', axis=0)
Out[222]:
( one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435,
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369)

If you pass a Series to DataFrame.align(), you can choose to align both objects either on the DataFrame’s index
or columns using the axis argument:
In [223]: df.align(df2.iloc[0], axis=1)
Out[223]:
( one three two
a 1.394981 NaN 1.772517
b 0.343054 -0.050390 1.912123
c 0.695246 1.227435 1.478369
d NaN -0.613172 0.279344,
one 1.394981
three NaN
two 1.772517
Name: a, dtype: float64)

Filling while reindexing

reindex() takes an optional parameter method which is a filling method chosen from the following table:

Method Action
pad / ffill Fill values forward
bfill / backfill Fill values backward
nearest Fill from the nearest index value

We illustrate these fill methods on a simple Series:


In [224]: rng = pd.date_range('1/3/2000', periods=8)

In [225]: ts = pd.Series(np.random.randn(8), index=rng)



In [226]: ts2 = ts[[0, 3, 6]]

In [227]: ts
Out[227]:
2000-01-03 0.183051
2000-01-04 0.400528
2000-01-05 -0.015083
2000-01-06 2.395489
2000-01-07 1.414806
2000-01-08 0.118428
2000-01-09 0.733639
2000-01-10 -0.936077
Freq: D, dtype: float64

In [228]: ts2
Out[228]:
2000-01-03 0.183051
2000-01-06 2.395489
2000-01-09 0.733639
dtype: float64

In [229]: ts2.reindex(ts.index)
Out[229]:
2000-01-03 0.183051
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 NaN
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 NaN
Freq: D, dtype: float64

In [230]: ts2.reindex(ts.index, method='ffill')


Out[230]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 0.183051
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 2.395489
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64

In [231]: ts2.reindex(ts.index, method='bfill')


Out[231]:
2000-01-03 0.183051
2000-01-04 2.395489
2000-01-05 2.395489
2000-01-06 2.395489
2000-01-07 0.733639
2000-01-08 0.733639
2000-01-09 0.733639
2000-01-10 NaN
Freq: D, dtype: float64

In [232]: ts2.reindex(ts.index, method='nearest')


Out[232]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 2.395489
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 0.733639
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64

These methods require that the indexes are ordered increasing or decreasing.
Note that the same result could have been achieved using fillna (except for method='nearest') or interpolate:

In [233]: ts2.reindex(ts.index).fillna(method='ffill')
Out[233]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 0.183051
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 2.395489
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonically increasing or decreasing. fillna() and
interpolate() will not perform any checks on the order of the index.
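
For instance, filling against a non-monotonic source index is rejected (a small sketch, not from the original text; the exact error message may differ):

s_unsorted = pd.Series([1.0, 2.0, 3.0], index=['b', 'a', 'c'])
s_unsorted.reindex(['a', 'b', 'c', 'd'], method='ffill')   # raises ValueError: index must be monotonic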

Limits on filling while reindexing

The limit and tolerance arguments provide additional control over filling while reindexing. Limit specifies the
maximum count of consecutive matches:

In [234]: ts2.reindex(ts.index, method='ffill', limit=1)


Out[234]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64

In contrast, tolerance specifies the maximum distance between the index and indexer values:

In [235]: ts2.reindex(ts.index, method='ffill', tolerance='1 day')


Out[235]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64

Notice that when used on a DatetimeIndex, TimedeltaIndex or PeriodIndex, tolerance will be coerced
into a Timedelta if possible. This allows you to specify tolerance with appropriate strings.

Dropping labels from an axis

A method closely related to reindex is the drop() function. It removes a set of labels from an axis:

In [236]: df
Out[236]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [237]: df.drop(['a', 'd'], axis=0)


Out[237]:
one two three
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435

In [238]: df.drop(['one'], axis=1)


Out[238]:
two three
a 1.772517 NaN
b 1.912123 -0.050390
c 1.478369 1.227435
d 0.279344 -0.613172

Note that the following also works, but is a bit less obvious / clean:

In [239]: df.reindex(df.index.difference(['a', 'd']))


Out[239]:
one two three
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435


Renaming / mapping labels

The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.

In [240]: s
Out[240]:
a -0.186646
b -1.692424
c -0.303893
d -1.425662
e 1.114285
dtype: float64

In [241]: s.rename(str.upper)
Out[241]:
A -0.186646
B -1.692424
C -0.303893
D -1.425662
E 1.114285
dtype: float64

If you pass a function, it must return a value when called with any of the labels (and must produce a set of unique
values). A dict or Series can also be used:

In [242]: df.rename(columns={'one': 'foo', 'two': 'bar'},


.....: index={'a': 'apple', 'b': 'banana', 'd': 'durian'})
.....:
Out[242]:
foo bar three
apple 1.394981 1.772517 NaN
banana 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
durian NaN 0.279344 -0.613172

If the mapping doesn’t include a column/index label, it isn’t renamed. Note that extra labels in the mapping don’t
throw an error.
New in version 0.21.0.
DataFrame.rename() also supports an “axis-style” calling convention, where you specify a single mapper and
the axis to apply that mapping to.

In [243]: df.rename({'one': 'foo', 'two': 'bar'}, axis='columns')


Out[243]:
foo bar three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172

In [244]: df.rename({'a': 'apple', 'b': 'banana', 'd': 'durian'}, axis='index')


Out[244]:
one two three
apple 1.394981 1.772517 NaN
banana 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
durian NaN 0.279344 -0.613172


The rename() method also provides an inplace named parameter that is by default False and copies the under-
lying data. Pass inplace=True to rename the data in place.
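For example, a small sketch of the in-place variant (working on a throwaway copy so df itself is untouched):

>>> renamed = df.copy()
>>> renamed.rename(columns={'one': 'foo'}, inplace=True)   # returns None, modifies renamed
>>> 'foo' in renamed.columns
True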
Finally, rename() also accepts a scalar or list-like for altering the Series.name attribute.

In [245]: s.rename("scalar-name")
Out[245]:
a -0.186646
b -1.692424
c -0.303893
d -1.425662
e 1.114285
Name: scalar-name, dtype: float64

New in version 0.24.0.


The methods DataFrame.rename_axis() and Series.rename_axis() allow specific names of a MultiIndex to be changed (as
opposed to the labels).

In [246]: df = pd.DataFrame({'x': [1, 2, 3, 4, 5, 6],


.....: 'y': [10, 20, 30, 40, 50, 60]},
.....: index=pd.MultiIndex.from_product([['a', 'b', 'c'], [1,
˓→2]],

.....: names=['let', 'num']))


.....:

In [247]: df
Out[247]:
x y
let num
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60

In [248]: df.rename_axis(index={'let': 'abc'})


Out[248]:
x y
abc num
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60

In [249]: df.rename_axis(index=str.upper)
Out[249]:
x y
LET NUM
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60


Iteration

The behavior of basic iteration over pandas objects depends on the type. When iterating over a Series, it is regarded
as array-like, and basic iteration produces the values. DataFrames follow the dict-like convention of iterating over the
“keys” of the objects.
In short, basic iteration (for i in object) produces:
• Series: values
• DataFrame: column labels
Thus, for example, iterating over a DataFrame gives you the column names:
In [250]: df = pd.DataFrame({'col1': np.random.randn(3),
.....: 'col2': np.random.randn(3)}, index=['a', 'b', 'c'])
.....:

In [251]: for col in df:


.....: print(col)
.....:
col1
col2

Pandas objects also have the dict-like items() method to iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
• iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs. This converts the rows to Series
objects, which can change the dtypes and has some performance implications.
• itertuples(): Iterate over the rows of a DataFrame as namedtuples of the values. This is a lot faster than
iterrows(), and is in most cases preferable to use to iterate over the values of a DataFrame.

Warning: Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is
not needed and can be avoided with one of the following approaches:
• Look for a vectorized solution: many operations can be performed using built-in methods or NumPy func-
tions, (boolean) indexing, . . .
• When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply()
instead of iterating over the values. See the docs on function application.
• If you need to do iterative manipulations on the values but performance is important, consider writing the in-
ner loop with cython or numba. See the enhancing performance section for some examples of this approach.

Warning: You should never modify something you are iterating over. This is not guaranteed to work in all cases.
Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!
For example, in the following case setting the value has no effect:
In [252]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})

In [253]: for index, row in df.iterrows():


.....: row['a'] = 10
.....:

In [254]: df
Out[254]:


a b
0 1 a
1 2 b
2 3 c

items

Consistent with the dict-like interface, items() iterates through key-value pairs:
• Series: (index, scalar value) pairs
• DataFrame: (column, Series) pairs
For example:

In [255]: for label, ser in df.items():


.....: print(label)
.....: print(ser)
.....:
a
0 1
1 2
2 3
Name: a, dtype: int64
b
0 a
1 b
2 c
Name: b, dtype: object

iterrows

iterrows() allows you to iterate through the rows of a DataFrame as Series objects. It returns an iterator yielding
each index value along with a Series containing the data in each row:

In [256]: for row_index, row in df.iterrows():


.....: print(row_index, row, sep='\n')
.....:
0
a 1
b a
Name: 0, dtype: object
1
a 2
b b
Name: 1, dtype: object
2
a 3
b c
Name: 2, dtype: object

Note: Because iterrows() returns a Series for each row, it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,


In [257]: df_orig = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])

In [258]: df_orig.dtypes
Out[258]:
int int64
float float64
dtype: object

In [259]: row = next(df_orig.iterrows())[1]

In [260]: row
Out[260]:
int 1.0
float 1.5
Name: 0, dtype: float64

All values in row, returned as a Series, are now upcast to floats, including the original integer value in column 'int':

In [261]: row['int'].dtype
Out[261]: dtype('float64')

In [262]: df_orig['int'].dtype
Out[262]: dtype('int64')

To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the
values and which is generally much faster than iterrows().

For instance, a contrived way to transpose the DataFrame would be:


In [263]: df2 = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})

In [264]: print(df2)
x y
0 1 4
1 2 5
2 3 6

In [265]: print(df2.T)
0 1 2
x 1 2 3
y 4 5 6

In [266]: df2_t = pd.DataFrame({idx: values for idx, values in df2.iterrows()})

In [267]: print(df2_t)
0 1 2
x 1 2 3
y 4 5 6


itertuples

The itertuples() method will return an iterator yielding a namedtuple for each row in the DataFrame. The first
element of the tuple will be the row’s corresponding index value, while the remaining values are the row values.
For instance:

In [268]: for row in df.itertuples():


.....: print(row)
.....:
Pandas(Index=0, a=1, b='a')
Pandas(Index=1, a=2, b='b')
Pandas(Index=2, a=3, b='c')

This method does not convert the row to a Series object; it merely returns the values inside a namedtuple. Therefore,
itertuples() preserves the data type of the values and is generally faster than iterrows().

Note: The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start
with an underscore. With a large number of columns (>255), regular tuples are returned.
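A small illustration of that renaming, using a hypothetical frame whose column names are not valid Python identifiers (the positional names _1, _2 are what current versions produce):

>>> odd = pd.DataFrame({'a column': [1], '2nd': [2]})
>>> next(odd.itertuples())
Pandas(Index=0, _1=1, _2=2)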

.dt accessor

Series has an accessor to succinctly return datetime like properties for the values of the Series, if it is a date-
time/period like Series. This will return a Series, indexed like the existing Series.

# datetime
In [269]: s = pd.Series(pd.date_range('20130101 09:10:12', periods=4))

In [270]: s
Out[270]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]

In [271]: s.dt.hour
Out[271]:
0 9
1 9
2 9
3 9
dtype: int64

In [272]: s.dt.second
Out[272]:
0 12
1 12
2 12
3 12
dtype: int64

In [273]: s.dt.day
Out[273]:


0 1
1 2
2 3
3 4
dtype: int64

This enables nice expressions like this:

In [274]: s[s.dt.day == 2]
Out[274]:
1 2013-01-02 09:10:12
dtype: datetime64[ns]

You can easily produce tz-aware transformations:

In [275]: stz = s.dt.tz_localize('US/Eastern')

In [276]: stz
Out[276]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
dtype: datetime64[ns, US/Eastern]

In [277]: stz.dt.tz
Out[277]: <DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
You can also chain these types of operations:

In [278]: s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[278]:
0 2013-01-01 04:10:12-05:00
1 2013-01-02 04:10:12-05:00
2 2013-01-03 04:10:12-05:00
3 2013-01-04 04:10:12-05:00
dtype: datetime64[ns, US/Eastern]

You can also format datetime values as strings with Series.dt.strftime() which supports the same format as
the standard strftime().

# DatetimeIndex
In [279]: s = pd.Series(pd.date_range('20130101', periods=4))

In [280]: s
Out[280]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]

In [281]: s.dt.strftime('%Y/%m/%d')
Out[281]:
0 2013/01/01
1 2013/01/02


2 2013/01/03
3 2013/01/04
dtype: object

# PeriodIndex
In [282]: s = pd.Series(pd.period_range('20130101', periods=4))

In [283]: s
Out[283]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: period[D]

In [284]: s.dt.strftime('%Y/%m/%d')
Out[284]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object

The .dt accessor works for period and timedelta dtypes.


# period
In [285]: s = pd.Series(pd.period_range('20130101', periods=4, freq='D'))
In [286]: s
Out[286]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: period[D]

In [287]: s.dt.year
Out[287]:
0 2013
1 2013
2 2013
3 2013
dtype: int64

In [288]: s.dt.day
Out[288]:
0 1
1 2
2 3
3 4
dtype: int64

# timedelta
In [289]: s = pd.Series(pd.timedelta_range('1 day 00:00:05', periods=4, freq='s'))

In [290]: s


Out[290]:
0 1 days 00:00:05
1 1 days 00:00:06
2 1 days 00:00:07
3 1 days 00:00:08
dtype: timedelta64[ns]

In [291]: s.dt.days
Out[291]:
0 1
1 1
2 1
3 1
dtype: int64

In [292]: s.dt.seconds
Out[292]:
0 5
1 6
2 7
3 8
dtype: int64

In [293]: s.dt.components
Out[293]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 1 0 0 5 0 0 0
1 1 0 0 6 0 0 0
2 1 0 0 7 0 0 0
3 1 0 0 8 0 0 0

Note: Series.dt will raise a TypeError if you access it with non-datetime-like values.
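A quick sketch of that failure mode (the accessor raises as soon as it is used on non-datetime-like values):

>>> s_num = pd.Series([1, 2, 3])
>>> s_num.dt   # raises, since the values are plain integers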

Vectorized string methods

Series is equipped with a set of string processing methods that make it easy to operate on each element of the array.
Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the Series’s
str attribute and generally have names matching the equivalent (scalar) built-in string methods. For example:

In [294]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog


˓→', 'cat'],

.....: dtype="string")
.....:

In [295]: s.str.lower()
Out[295]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba


7 dog
8 cat
dtype: string

Powerful pattern-matching methods are provided as well, but note that pattern-matching generally uses regular expres-
sions by default (and in some cases always uses them).

Note: Prior to pandas 1.0, string methods were only available on object -dtype Series. Pandas 1.0 added the
StringDtype which is dedicated to strings. See Text Data Types for more.

Please see Vectorized String Methods for a complete description.

Sorting

Pandas supports three kinds of sorting: sorting by index labels, sorting by column values, and sorting by a combination
of both.

By index

The Series.sort_index() and DataFrame.sort_index() methods are used to sort a pandas object by its
index levels.

In [296]: df = pd.DataFrame({
.....: 'one': pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
.....: 'two': pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
.....: 'three': pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
.....:

In [297]: unsorted_df = df.reindex(index=['a', 'd', 'c', 'b'],


.....: columns=['three', 'two', 'one'])
.....:

In [298]: unsorted_df
Out[298]:
three two one
a NaN -1.152244 0.562973
d -0.252916 -0.109597 NaN
c 1.273388 -0.167123 0.640382
b -0.098217 0.009797 -1.299504

# DataFrame
In [299]: unsorted_df.sort_index()
Out[299]:
three two one
a NaN -1.152244 0.562973
b -0.098217 0.009797 -1.299504
c 1.273388 -0.167123 0.640382
d -0.252916 -0.109597 NaN

In [300]: unsorted_df.sort_index(ascending=False)
Out[300]:
three two one


d -0.252916 -0.109597 NaN
c 1.273388 -0.167123 0.640382
b -0.098217 0.009797 -1.299504
a NaN -1.152244 0.562973

In [301]: unsorted_df.sort_index(axis=1)
Out[301]:
one three two
a 0.562973 NaN -1.152244
d NaN -0.252916 -0.109597
c 0.640382 1.273388 -0.167123
b -1.299504 -0.098217 0.009797

# Series
In [302]: unsorted_df['three'].sort_index()
Out[302]:
a NaN
b -0.098217
c 1.273388
d -0.252916
Name: three, dtype: float64

By values

The Series.sort_values() method is used to sort a Series by its values. The DataFrame.sort_values()
method is used to sort a DataFrame by its column or row values. The optional by parameter to DataFrame.
sort_values() may be used to specify one or more columns to use to determine the sorted order.
In [303]: df1 = pd.DataFrame({'one': [2, 1, 1, 1],
.....: 'two': [1, 3, 2, 4],
.....: 'three': [5, 4, 3, 2]})
.....:

In [304]: df1.sort_values(by='two')
Out[304]:
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2

The by parameter can take a list of column names, e.g.:


In [305]: df1[['one', 'two', 'three']].sort_values(by=['one', 'two'])
Out[305]:
one two three
2 1 2 3
1 1 3 4
3 1 4 2
0 2 1 5

These methods have special treatment of NA values via the na_position argument:
In [306]: s[2] = np.nan



In [307]: s.sort_values()
Out[307]:
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
2 <NA>
5 <NA>
dtype: string

In [308]: s.sort_values(na_position='first')
Out[308]:
2 <NA>
5 <NA>
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
dtype: string

By indexes and values
New in version 0.23.0.
Strings passed as the by parameter to DataFrame.sort_values() may refer to either columns or index level
names.

# Build MultiIndex
In [309]: idx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('a', 2),
.....: ('b', 2), ('b', 1), ('b', 1)])
.....:

In [310]: idx.names = ['first', 'second']

# Build DataFrame
In [311]: df_multi = pd.DataFrame({'A': np.arange(6, 0, -1)},
.....: index=idx)
.....:

In [312]: df_multi
Out[312]:
A
first second
a 1 6
2 5
2 4
b 2 3
1 2
1 1


Sort by ‘second’ (index) and ‘A’ (column)

In [313]: df_multi.sort_values(by=['second', 'A'])


Out[313]:
A
first second
b 1 1
1 2
a 1 6
b 2 3
a 2 4
2 5

Note: If a string matches both a column name and an index level name then a warning is issued and the column takes
precedence. This will result in an ambiguity error in a future version.

searchsorted

Series has the searchsorted() method, which works similarly to numpy.ndarray.searchsorted().

In [314]: ser = pd.Series([1, 2, 3])

In [315]: ser.searchsorted([0, 3])


Out[315]: array([0, 2])

In [316]: ser.searchsorted([0, 4])
Out[316]: array([0, 3])

In [317]: ser.searchsorted([1, 3], side='right')


Out[317]: array([1, 3])

In [318]: ser.searchsorted([1, 3], side='left')


Out[318]: array([0, 2])

In [319]: ser = pd.Series([3, 1, 2])

In [320]: ser.searchsorted([0, 3], sorter=np.argsort(ser))


Out[320]: array([0, 2])

smallest / largest values

Series has the nsmallest() and nlargest() methods which return the smallest or largest 𝑛 values. For a
large Series this can be much faster than sorting the entire Series and calling head(n) on the result.

In [321]: s = pd.Series(np.random.permutation(10))

In [322]: s
Out[322]:
0 2
1 0
2 3
3 7


4 1
5 5
6 9
7 6
8 8
9 4
dtype: int64

In [323]: s.sort_values()
Out[323]:
1 0
4 1
0 2
2 3
9 4
5 5
7 6
3 7
8 8
6 9
dtype: int64

In [324]: s.nsmallest(3)
Out[324]:
1 0
4 1
0 2
dtype: int64
In [325]: s.nlargest(3)
Out[325]:
6 9
8 8
3 7
dtype: int64

DataFrame also has the nlargest and nsmallest methods.


In [326]: df = pd.DataFrame({'a': [-2, -1, 1, 10, 8, 11, -1],
.....: 'b': list('abdceff'),
.....: 'c': [1.0, 2.0, 4.0, 3.2, np.nan, 3.0, 4.0]})
.....:

In [327]: df.nlargest(3, 'a')


Out[327]:
a b c
5 11 f 3.0
3 10 c 3.2
4 8 e NaN

In [328]: df.nlargest(5, ['a', 'c'])


Out[328]:
a b c
5 11 f 3.0
3 10 c 3.2
4 8 e NaN
2 1 d 4.0


6 -1 f 4.0

In [329]: df.nsmallest(3, 'a')


Out[329]:
a b c
0 -2 a 1.0
1 -1 b 2.0
6 -1 f 4.0

In [330]: df.nsmallest(5, ['a', 'c'])


Out[330]:
a b c
0 -2 a 1.0
1 -1 b 2.0
6 -1 f 4.0
2 1 d 4.0
4 8 e NaN

Sorting by a MultiIndex column

You must be explicit about sorting when the column is a MultiIndex, and fully specify all levels to by.

In [331]: df1.columns = pd.MultiIndex.from_tuples([('a', 'one'),


.....: ('a', 'two'),
.....: ('b', 'three')])
.....:
In [332]: df1.sort_values(by=('a', 'two'))
Out[332]:
a b
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2

Copying

The copy() method on pandas objects copies the underlying data (though not the axis indexes, since they are im-
mutable) and returns a new object. Note that it is seldom necessary to copy objects. For example, there are only a
handful of ways to alter a DataFrame in-place:
• Inserting, deleting, or modifying a column.
• Assigning to the index or columns attributes.
• For homogeneous data, directly modifying the values via the values attribute or advanced indexing.
To be clear, no pandas method has the side effect of modifying your data; almost every method returns a new object,
leaving the original object untouched. If the data is modified, it is because you did so explicitly.
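A minimal sketch of the contrast (illustrative names):

>>> original = pd.DataFrame({'a': [1, 2, 3]})
>>> duplicate = original.copy()
>>> duplicate['a'] = 0              # modifies only the copy
>>> original['a'].tolist()
[1, 2, 3]
>>> original['b'] = 99              # inserting a column does alter original in place
>>> list(original.columns)
['a', 'b']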


dtypes

For the most part, pandas uses NumPy arrays and dtypes for Series or individual columns of a DataFrame. NumPy
provides support for float, int, bool, timedelta64[ns] and datetime64[ns] (note that NumPy does not
support timezone-aware datetimes).
Pandas and third-party libraries extend NumPy’s type system in a few places. This section describes the extensions
pandas has made internally. See Extension types for how to write your own extension that works with pandas. See
ecosystem.extensions for a list of third-party libraries that have implemented an extension.
The following list covers all of pandas extension types; each entry gives the data type, scalar, array, string aliases
(which can be passed to methods requiring dtype arguments), and the relevant documentation section.

• tz-aware datetime: data type DatetimeTZDtype, scalar Timestamp, array arrays.DatetimeArray, string aliases 'datetime64[ns, <tz>]' (see Time zone handling)
• Categorical: data type CategoricalDtype, scalar (none), array Categorical, string aliases 'category' (see Categorical data)
• period (time spans): data type PeriodDtype, scalar Period, array arrays.PeriodArray, string aliases 'period[<freq>]', 'Period[<freq>]' (see Time span representation)
• sparse: data type SparseDtype, scalar (none), array arrays.SparseArray, string aliases 'Sparse', 'Sparse[int]', 'Sparse[float]' (see Sparse data structures)
• intervals: data type IntervalDtype, scalar Interval, array arrays.IntervalArray, string aliases 'interval', 'Interval', 'Interval[<numpy_dtype>]', 'Interval[datetime64[ns, <tz>]]', 'Interval[timedelta64[<freq>]]' (see IntervalIndex)
• nullable integer: data type Int64Dtype, ..., scalar (none), array arrays.IntegerArray, string aliases 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' (see Nullable integer data type)
• Strings: data type StringDtype, scalar str, array arrays.StringArray, string aliases 'string' (see Working with text data)
• Boolean (with NA): data type BooleanDtype, scalar bool, array arrays.BooleanArray, string aliases 'boolean' (see Boolean data with missing values)

Pandas has two ways to store strings.


1. object dtype, which can hold any Python object, including strings.
2. StringDtype, which is dedicated to strings.
Generally, we recommend using StringDtype. See Text Data Types for more.
Finally, arbitrary objects may be stored using the object dtype, but this should be avoided to the extent possible (for
performance and interoperability with other libraries and methods; see object conversion).
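A short sketch of the two storage options:

>>> pd.Series(['pandas', 'dtype']).dtype          # default storage: object dtype
dtype('O')
>>> pd.Series(['pandas', 'dtype'], dtype='string')
0    pandas
1     dtype
dtype: string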
A convenient dtypes attribute for DataFrame returns a Series with the data type of each column.


In [333]: dft = pd.DataFrame({'A': np.random.rand(3),


.....: 'B': 1,
.....: 'C': 'foo',
.....: 'D': pd.Timestamp('20010102'),
.....: 'E': pd.Series([1.0] * 3).astype('float32'),
.....: 'F': False,
.....: 'G': pd.Series([1] * 3, dtype='int8')})
.....:

In [334]: dft
Out[334]:
A B C D E F G
0 0.035962 1 foo 2001-01-02 1.0 False 1
1 0.701379 1 foo 2001-01-02 1.0 False 1
2 0.281885 1 foo 2001-01-02 1.0 False 1

In [335]: dft.dtypes
Out[335]:
A float64
B int64
C object
D datetime64[ns]
E float32
F bool
G int8
dtype: object

On a Series object, use the dtype attribute.


In [336]: dft['A'].dtype
Out[336]: dtype('float64')

If a pandas object contains data with multiple dtypes in a single column, the dtype of the column will be chosen to
accommodate all of the data types (object is the most general).

# these ints are coerced to floats


In [337]: pd.Series([1, 2, 3, 4, 5, 6.])
Out[337]:
0 1.0
1 2.0
2 3.0
3 4.0
4 5.0
5 6.0
dtype: float64

# string data forces an ``object`` dtype


In [338]: pd.Series([1, 2, 3, 6., 'foo'])
Out[338]:
0 1
1 2
2 3
3 6
4 foo
dtype: object

The number of columns of each type in a DataFrame can be found by calling DataFrame.dtypes.
value_counts().


In [339]: dft.dtypes.value_counts()
Out[339]:
bool 1
datetime64[ns] 1
object 1
int8 1
int64 1
float32 1
float64 1
dtype: int64

Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the dtype
keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations. Furthermore,
different numeric dtypes will NOT be combined. The following example will give you a taste.

In [340]: df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32')

In [341]: df1
Out[341]:
A
0 0.224364
1 1.890546
2 0.182879
3 0.787847
4 -0.188449
5 0.667715
6 -0.011736
7 -0.399073
In [342]: df1.dtypes
Out[342]:
A float32
dtype: object

In [343]: df2 = pd.DataFrame({'A': pd.Series(np.random.randn(8), dtype='float16'),


.....: 'B': pd.Series(np.random.randn(8)),
.....: 'C': pd.Series(np.array(np.random.randn(8),
.....: dtype='uint8'))})
.....:

In [344]: df2
Out[344]:
A B C
0 0.823242 0.256090 0
1 1.607422 1.426469 0
2 -0.333740 -0.416203 255
3 -0.063477 1.139976 0
4 -1.014648 -1.193477 0
5 0.678711 0.096706 0
6 -0.040863 -1.956850 1
7 -0.357422 -0.714337 0

In [345]: df2.dtypes
Out[345]:
A float16
B float64
C uint8


dtype: object

defaults

By default integer types are int64 and float types are float64, regardless of platform (32-bit or 64-bit). The
following will all result in int64 dtypes.

In [346]: pd.DataFrame([1, 2], columns=['a']).dtypes


Out[346]:
a int64
dtype: object

In [347]: pd.DataFrame({'a': [1, 2]}).dtypes


Out[347]:
a int64
dtype: object

In [348]: pd.DataFrame({'a': 1}, index=list(range(2))).dtypes


Out[348]:
a int64
dtype: object

Note that NumPy will choose platform-dependent types when creating arrays. The following WILL result in int32
on a 32-bit platform.

In [349]: frame = pd.DataFrame(np.array([1, 2]))



upcasting

Types can potentially be upcasted when combined with other types, meaning they are promoted from the current type
(e.g. int to float).

In [350]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2

In [351]: df3
Out[351]:
A B C
0 1.047606 0.256090 0.0
1 3.497968 1.426469 0.0
2 -0.150862 -0.416203 255.0
3 0.724370 1.139976 0.0
4 -1.203098 -1.193477 0.0
5 1.346426 0.096706 0.0
6 -0.052599 -1.956850 1.0
7 -0.756495 -0.714337 0.0

In [352]: df3.dtypes
Out[352]:
A float32
B float64
C float64
dtype: object


DataFrame.to_numpy() will return the lowest-common-denominator of the dtypes, meaning the dtype that can
accommodate ALL of the types in the resulting homogeneous dtyped NumPy array. This can force some upcasting.

In [353]: df3.to_numpy().dtype
Out[353]: dtype('float64')

astype

You can use the astype() method to explicitly convert dtypes from one to another. These will by default return a
copy, even if the dtype was unchanged (pass copy=False to change this behavior). In addition, they will raise an
exception if the astype operation is invalid.
Upcasting is always according to the numpy rules. If two different dtypes are involved in an operation, then the more
general one will be used as the result of the operation.
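For instance, a quick sketch of that rule when an integer and a float Series meet in an arithmetic operation:

>>> ints = pd.Series([1, 2], dtype='int64')
>>> floats = pd.Series([0.5, 1.5], dtype='float64')
>>> (ints + floats).dtype            # the more general dtype wins
dtype('float64')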

In [354]: df3
Out[354]:
A B C
0 1.047606 0.256090 0.0
1 3.497968 1.426469 0.0
2 -0.150862 -0.416203 255.0
3 0.724370 1.139976 0.0
4 -1.203098 -1.193477 0.0
5 1.346426 0.096706 0.0
6 -0.052599 -1.956850 1.0
7 -0.756495 -0.714337 0.0

In [355]: df3.dtypes
Out[355]:
A float32
B float64
C float64
dtype: object

# conversion of dtypes
In [356]: df3.astype('float32').dtypes
Out[356]:
A float32
B float32
C float32
dtype: object

Convert a subset of columns to a specified type using astype().

In [357]: dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})

In [358]: dft[['a', 'b']] = dft[['a', 'b']].astype(np.uint8)

In [359]: dft
Out[359]:
a b c
0 1 4 7
1 2 5 8
2 3 6 9

In [360]: dft.dtypes


Out[360]:
a uint8
b uint8
c int64
dtype: object

Convert certain columns to a specific dtype by passing a dict to astype().

In [361]: dft1 = pd.DataFrame({'a': [1, 0, 1], 'b': [4, 5, 6], 'c': [7, 8, 9]})

In [362]: dft1 = dft1.astype({'a': np.bool, 'c': np.float64})

In [363]: dft1
Out[363]:
a b c
0 True 4 7.0
1 False 5 8.0
2 True 6 9.0

In [364]: dft1.dtypes
Out[364]:
a bool
b int64
c float64
dtype: object

Note: When trying to convert a subset of columns to a specified type using astype() and loc(), upcasting occurs.
loc() tries to fit in what we are assigning to the current dtypes, while [] will overwrite them taking the dtype from
the right hand side. Therefore the following piece of code produces the unintended result.

In [365]: dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})

In [366]: dft.loc[:, ['a', 'b']].astype(np.uint8).dtypes


Out[366]:
a uint8
b uint8
dtype: object

In [367]: dft.loc[:, ['a', 'b']] = dft.loc[:, ['a', 'b']].astype(np.uint8)

In [368]: dft.dtypes
Out[368]:
a int64
b int64
c int64
dtype: object


object conversion

pandas offers various functions to try to force conversion of types from the object dtype to other types. In cases
where the data is already of the correct type, but stored in an object array, the DataFrame.infer_objects()
and Series.infer_objects() methods can be used to soft convert to the correct type.

In [369]: import datetime

In [370]: df = pd.DataFrame([[1, 2],


.....: ['a', 'b'],
.....: [datetime.datetime(2016, 3, 2),
.....: datetime.datetime(2016, 3, 2)]])
.....:

In [371]: df = df.T

In [372]: df
Out[372]:
0 1 2
0 1 a 2016-03-02
1 2 b 2016-03-02

In [373]: df.dtypes
Out[373]:
0 object
1 object
2 datetime64[ns]
dtype: object
Because the data was transposed the original inference stored all columns as object, which infer_objects will
correct.

In [374]: df.infer_objects().dtypes
Out[374]:
0 int64
1 object
2 datetime64[ns]
dtype: object

The following functions are available for one dimensional object arrays or scalars to perform hard conversion of objects
to a specified type:
• to_numeric() (conversion to numeric dtypes)

In [375]: m = ['1.1', 2, 3]

In [376]: pd.to_numeric(m)
Out[376]: array([1.1, 2. , 3. ])

• to_datetime() (conversion to datetime objects)

In [377]: import datetime

In [378]: m = ['2016-07-09', datetime.datetime(2016, 3, 2)]

In [379]: pd.to_datetime(m)
Out[379]: DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]',
˓→freq=None)


• to_timedelta() (conversion to timedelta objects)

In [380]: m = ['5us', pd.Timedelta('1day')]

In [381]: pd.to_timedelta(m)
Out[381]: TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype=
˓→'timedelta64[ns]', freq=None)

To force a conversion, we can pass in an errors argument, which specifies how pandas should deal with elements
that cannot be converted to desired dtype or object. By default, errors='raise', meaning that any errors encoun-
tered will be raised during the conversion process. However, if errors='coerce', these errors will be ignored
and pandas will convert problematic elements to pd.NaT (for datetime and timedelta) or np.nan (for numeric).
This might be useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but
occasionally has non-conforming elements intermixed that you want to represent as missing:

In [382]: import datetime

In [383]: m = ['apple', datetime.datetime(2016, 3, 2)]

In [384]: pd.to_datetime(m, errors='coerce')


Out[384]: DatetimeIndex(['NaT', '2016-03-02'], dtype='datetime64[ns]', freq=None)

In [385]: m = ['apple', 2, 3]

In [386]: pd.to_numeric(m, errors='coerce')


Out[386]: array([nan, 2., 3.])

In [387]: m = ['apple', pd.Timedelta('1day')]


In [388]: pd.to_timedelta(m, errors='coerce')
Out[388]: TimedeltaIndex([NaT, '1 days'], dtype='timedelta64[ns]', freq=None)

The errors parameter has a third option of errors='ignore', which will simply return the passed in data if it
encounters any errors with the conversion to a desired data type:

In [389]: import datetime

In [390]: m = ['apple', datetime.datetime(2016, 3, 2)]

In [391]: pd.to_datetime(m, errors='ignore')


Out[391]: Index(['apple', 2016-03-02 00:00:00], dtype='object')

In [392]: m = ['apple', 2, 3]

In [393]: pd.to_numeric(m, errors='ignore')


Out[393]: array(['apple', 2, 3], dtype=object)

In [394]: m = ['apple', pd.Timedelta('1day')]

In [395]: pd.to_timedelta(m, errors='ignore')


Out[395]: array(['apple', Timedelta('1 days 00:00:00')], dtype=object)

In addition to object conversion, to_numeric() provides another argument downcast, which gives the option of
downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:

In [396]: m = ['1', 2, 3]



In [397]: pd.to_numeric(m, downcast='integer') # smallest signed int dtype
Out[397]: array([1, 2, 3], dtype=int8)

In [398]: pd.to_numeric(m, downcast='signed') # same as 'integer'


Out[398]: array([1, 2, 3], dtype=int8)

In [399]: pd.to_numeric(m, downcast='unsigned') # smallest unsigned int dtype


Out[399]: array([1, 2, 3], dtype=uint8)

In [400]: pd.to_numeric(m, downcast='float') # smallest float dtype


Out[400]: array([1., 2., 3.], dtype=float32)

As these methods apply only to one-dimensional arrays, lists, or scalars, they cannot be used directly on multi-
dimensional objects such as DataFrames. However, with apply(), we can "apply" the function over each column
efficiently:

In [401]: import datetime

In [402]: df = pd.DataFrame([
.....: ['2016-07-09', datetime.datetime(2016, 3, 2)]] * 2, dtype='O')
.....:

In [403]: df
Out[403]:
0 1
0 2016-07-09 2016-03-02 00:00:00
1 2016-07-09 2016-03-02 00:00:00
In [404]: df.apply(pd.to_datetime)
Out[404]:
0 1
0 2016-07-09 2016-03-02
1 2016-07-09 2016-03-02

In [405]: df = pd.DataFrame([['1.1', 2, 3]] * 2, dtype='O')

In [406]: df
Out[406]:
0 1 2
0 1.1 2 3
1 1.1 2 3

In [407]: df.apply(pd.to_numeric)
Out[407]:
0 1 2
0 1.1 2 3
1 1.1 2 3

In [408]: df = pd.DataFrame([['5us', pd.Timedelta('1day')]] * 2, dtype='O')

In [409]: df
Out[409]:
0 1
0 5us 1 days 00:00:00
1 5us 1 days 00:00:00



In [410]: df.apply(pd.to_timedelta)
Out[410]:
0 1
0 00:00:00.000005 1 days
1 00:00:00.000005 1 days

gotchas

Performing selection operations on integer type data can easily upcast the data to floating point. The dtype of the
input data will be preserved in cases where NaNs are not introduced. See also Support for integer NA.
In [411]: dfi = df3.astype('int32')

In [412]: dfi['E'] = 1

In [413]: dfi
Out[413]:
A B C E
0 1 0 0 1
1 3 1 0 1
2 0 0 255 1
3 0 1 0 1
4 -1 -1 0 1
5 1 0 0 1
6 0 -1 1 1
7 0 0 0 1
In [414]: dfi.dtypes
Out[414]:
A int32
B int32
C int32
E int64
dtype: object

In [415]: casted = dfi[dfi > 0]

In [416]: casted
Out[416]:
A B C E
0 1.0 NaN NaN 1
1 3.0 1.0 NaN 1
2 NaN NaN 255.0 1
3 NaN 1.0 NaN 1
4 NaN NaN NaN 1
5 1.0 NaN NaN 1
6 NaN NaN 1.0 1
7 NaN NaN NaN 1

In [417]: casted.dtypes
Out[417]:
A float64
B float64
C float64
E int64


dtype: object

While float dtypes are unchanged.

In [418]: dfa = df3.copy()

In [419]: dfa['A'] = dfa['A'].astype('float32')

In [420]: dfa.dtypes
Out[420]:
A float32
B float64
C float64
dtype: object

In [421]: casted = dfa[df2 > 0]

In [422]: casted
Out[422]:
A B C
0 1.047606 0.256090 NaN
1 3.497968 1.426469 NaN
2 NaN NaN 255.0
3 NaN 1.139976 NaN
4 NaN NaN NaN
5 1.346426 0.096706 NaN
6 NaN NaN 1.0
7 NaN NaN NaN
In [423]: casted.dtypes
Out[423]:
A float32
B float64
C float64
dtype: object

Selecting columns based on dtype

The select_dtypes() method implements subsetting of columns based on their dtype.


First, let’s create a DataFrame with a slew of different dtypes:

In [424]: df = pd.DataFrame({'string': list('abc'),


.....: 'int64': list(range(1, 4)),
.....: 'uint8': np.arange(3, 6).astype('u1'),
.....: 'float64': np.arange(4.0, 7.0),
.....: 'bool1': [True, False, True],
.....: 'bool2': [False, True, False],
.....: 'dates': pd.date_range('now', periods=3),
.....: 'category': pd.Series(list("ABC")).astype('category')})
.....:

In [425]: df['tdeltas'] = df.dates.diff()

In [426]: df['uint64'] = np.arange(3, 6).astype('u8')



In [427]: df['other_dates'] = pd.date_range('20130101', periods=3)

In [428]: df['tz_aware_dates'] = pd.date_range('20130101', periods=3, tz='US/Eastern')

In [429]: df
Out[429]:
string int64 uint8 float64 bool1 bool2 dates category
˓→tdeltas uint64 other_dates tz_aware_dates
0 a 1 3 4.0 True False 2020-03-18 15:38:47.007134 A
˓→NaT 3 2013-01-01 2013-01-01 00:00:00-05:00
1 b 2 4 5.0 False True 2020-03-19 15:38:47.007134 B 1
˓→days 4 2013-01-02 2013-01-02 00:00:00-05:00
2 c 3 5 6.0 True False 2020-03-20 15:38:47.007134 C 1
˓→days 5 2013-01-03 2013-01-03 00:00:00-05:00

And the dtypes:

In [430]: df.dtypes
Out[430]:
string object
int64 int64
uint8 uint8
float64 float64
bool1 bool
bool2 bool
dates datetime64[ns]
category category
tdeltas timedelta64[ns]
uint64 uint64
other_dates datetime64[ns]
tz_aware_dates datetime64[ns, US/Eastern]
dtype: object

select_dtypes() has two parameters include and exclude that allow you to say “give me the columns with
these dtypes” (include) and/or “give the columns without these dtypes” (exclude).
For example, to select bool columns:

In [431]: df.select_dtypes(include=[bool])
Out[431]:
bool1 bool2
0 True False
1 False True
2 True False

You can also pass the name of a dtype in the NumPy dtype hierarchy:

In [432]: df.select_dtypes(include=['bool'])
Out[432]:
bool1 bool2
0 True False
1 False True
2 True False

select_dtypes() also works with generic dtypes.


For example, to select all numeric and boolean columns while excluding unsigned integers:


In [433]: df.select_dtypes(include=['number', 'bool'], exclude=['unsignedinteger'])


Out[433]:
int64 float64 bool1 bool2 tdeltas
0 1 4.0 True False NaT
1 2 5.0 False True 1 days
2 3 6.0 True False 1 days

To select string columns you must use the object dtype:

In [434]: df.select_dtypes(include=['object'])
Out[434]:
string
0 a
1 b
2 c

To see all the child dtypes of a generic dtype like numpy.number you can define a function that returns a tree of
child dtypes:

In [435]: def subdtypes(dtype):


.....: subs = dtype.__subclasses__()
.....: if not subs:
.....: return dtype
.....: return [dtype, [subdtypes(dt) for dt in subs]]
.....:

All NumPy dtypes are subclasses of numpy.generic:


In [436]: subdtypes(np.generic)
Out[436]:
[numpy.generic,
[[numpy.number,
[[numpy.integer,
[[numpy.signedinteger,
[numpy.int8,
numpy.int16,
numpy.int32,
numpy.int64,
numpy.longlong,
numpy.timedelta64]],
[numpy.unsignedinteger,
[numpy.uint8,
numpy.uint16,
numpy.uint32,
numpy.uint64,
numpy.ulonglong]]]],
[numpy.inexact,
[[numpy.floating,
[numpy.float16, numpy.float32, numpy.float64, numpy.float128]],
[numpy.complexfloating,
[numpy.complex64, numpy.complex128, numpy.complex256]]]]]],
[numpy.flexible,
[[numpy.character, [numpy.bytes_, numpy.str_]],
[numpy.void, [numpy.record]]]],
numpy.bool_,
numpy.datetime64,
numpy.object_]]


Note: Pandas also defines the types category, and datetime64[ns, tz], which are not integrated into the
normal NumPy hierarchy and won’t show up with the above function.

2.4.6 Intro to data structures

We’ll start with a quick, non-comprehensive overview of the fundamental data structures in pandas to get you started.
The fundamental behavior about data types, indexing, and axis labeling / alignment apply across all of the objects. To
get started, import NumPy and load pandas into your namespace:

In [1]: import numpy as np

In [2]: import pandas as pd

Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken
unless done so explicitly by you.
We’ll give a brief intro to the data structures, then consider all of the broad categories of functionality and methods in
separate sections.

Series

Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers,
Python objects, etc.). The axis labels are collectively referred to as the index. The basic method to create a Series is
to call:
>>> s = pd.Series(data, index=index)

Here, data can be many different things:


• a Python dict
• an ndarray
• a scalar value (like 5)
The passed index is a list of axis labels. Thus, this separates into a few cases depending on what data is:
From ndarray
If data is an ndarray, index must be the same length as data. If no index is passed, one will be created having values
[0, ..., len(data) - 1].

In [3]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [4]: s
Out[4]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 1.212112
dtype: float64

In [5]: s.index
Out[5]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')

In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.173215
1 0.119209
2 -1.044236
3 -0.861849
4 -2.104569
dtype: float64

Note: pandas supports non-unique index values. If an operation that does not support duplicate index values is
attempted, an exception will be raised at that time. The reason for being lazy is nearly all performance-based (there
are many instances in computations, like parts of GroupBy, where the index is not used).
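A brief sketch of that laziness (the exact error text may vary): duplicate labels are fine for plain selection, but an operation that needs unique labels, such as reindexing, raises only when it is attempted.

>>> dup = pd.Series([1, 2, 3], index=['a', 'a', 'b'])
>>> dup['a']
a    1
a    2
dtype: int64
>>> dup.reindex(['a', 'b'])   # raises, reindexing requires a unique index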

From dict
Series can be instantiated from dicts:

In [7]: d = {'b': 1, 'a': 0, 'c': 2}

In [8]: pd.Series(d)
Out[8]:
b 1
a 0
c 2
dtype: int64
Note: When the data is a dict, and an index is not passed, the Series index will be ordered by the dict’s insertion
order, if you’re using Python version >= 3.6 and Pandas version >= 0.23.
If you’re using Python < 3.6 or Pandas < 0.23, and an index is not passed, the Series index will be the lexically
ordered list of dict keys.

In the example above, if you were on a Python version lower than 3.6 or a Pandas version lower than 0.23, the Series
would be ordered by the lexical order of the dict keys (i.e. ['a', 'b', 'c'] rather than ['b', 'a', 'c']).
If an index is passed, the values in data corresponding to the labels in the index will be pulled out.

In [9]: d = {'a': 0., 'b': 1., 'c': 2.}

In [10]: pd.Series(d)
Out[10]:
a 0.0
b 1.0
c 2.0
dtype: float64

In [11]: pd.Series(d, index=['b', 'c', 'd', 'a'])


Out[11]:
b 1.0
c 2.0
d NaN
a 0.0
dtype: float64


Note: NaN (not a number) is the standard missing data marker used in pandas.

From scalar value


If data is a scalar value, an index must be provided. The value will be repeated to match the length of index.

In [12]: pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])


Out[12]:
a 5.0
b 5.0
c 5.0
d 5.0
e 5.0
dtype: float64

Series is ndarray-like

Series acts very similarly to a ndarray, and is a valid argument to most NumPy functions. However, operations
such as slicing will also slice the index.

In [13]: s[0]
Out[13]: 0.4691122999071863

In [14]: s[:3]
Out[14]:
a 0.469112
b -0.282863
c -1.509059
dtype: float64

In [15]: s[s > s.median()]


Out[15]:
a 0.469112
e 1.212112
dtype: float64

In [16]: s[[4, 3, 1]]


Out[16]:
e 1.212112
d -1.135632
b -0.282863
dtype: float64

In [17]: np.exp(s)
Out[17]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 3.360575
dtype: float64

Note: We will address array-based indexing like s[[4, 3, 1]] in the section on indexing.


Like a NumPy array, a pandas Series has a dtype.

In [18]: s.dtype
Out[18]: dtype('float64')

This is often a NumPy dtype. However, pandas and 3rd-party libraries extend NumPy’s type system in a few places,
in which case the dtype would be an ExtensionDtype. Some examples within pandas are Categorical data and
Nullable integer data type. See dtypes for more.
If you need the actual array backing a Series, use Series.array.

In [19]: s.array
Out[19]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64

Accessing the array can be useful when you need to do some operation without the index (to disable automatic
alignment, for example).
Series.array will always be an ExtensionArray. Briefly, an ExtensionArray is a thin wrapper around one
or more concrete arrays like a numpy.ndarray. Pandas knows how to take an ExtensionArray and store it in
a Series or a column of a DataFrame. See dtypes for more.
While Series is ndarray-like, if you need an actual ndarray, then use Series.to_numpy().

In [20]: s.to_numpy()
Out[20]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
Even if the Series is backed by an ExtensionArray, Series.to_numpy() will return a NumPy ndarray.

Series is dict-like

A Series is like a fixed-size dict in that you can get and set values by index label:

In [21]: s['a']
Out[21]: 0.4691122999071863

In [22]: s['e'] = 12.

In [23]: s
Out[23]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 12.000000
dtype: float64

In [24]: 'e' in s
Out[24]: True

In [25]: 'f' in s
Out[25]: False

If a label is not contained, an exception is raised:


>>> s['f']
KeyError: 'f'

Using the get method, a missing label will return None or specified default:

In [26]: s.get('f')

In [27]: s.get('f', np.nan)


Out[27]: nan

See also the section on attribute access.

Vectorized operations and label alignment with Series

When working with raw NumPy arrays, looping through value-by-value is usually not necessary. The same is true
when working with Series in pandas. Series can also be passed into most NumPy methods expecting an ndarray.

In [28]: s + s
Out[28]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64

In [29]: s * 2
Out[29]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64

In [30]: np.exp(s)
Out[30]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 162754.791419
dtype: float64

A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same
labels.

In [31]: s[1:] + s[:-1]


Out[31]:
a NaN
b -0.565727
c -3.018117
d -2.271265
e NaN
dtype: float64


The result of an operation between unaligned Series will have the union of the indexes involved. If a label is not found
in one Series or the other, the result will be marked as missing (NaN). Being able to write code without doing any explicit
data alignment grants immense freedom and flexibility in interactive data analysis and research. The integrated data
alignment features of the pandas data structures set pandas apart from the majority of related tools for working with
labeled data.

Note: In general, we chose to make the default result of operations between differently indexed objects yield the
union of the indexes in order to avoid loss of information. Having an index label, though the data is missing, is
typically important information as part of a computation. You of course have the option of dropping labels with
missing data via the dropna function.
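For example, dropping the unaligned labels from the result above:

>>> (s[1:] + s[:-1]).dropna()
b   -0.565727
c   -3.018117
d   -2.271265
dtype: float64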

Name attribute

Series can also have a name attribute:


In [32]: s = pd.Series(np.random.randn(5), name='something')

In [33]: s
Out[33]:
0 -0.494929
1 1.071804
2 0.721555
3 -0.706771
4 -1.039575
Name: something, dtype: float64
In [34]: s.name
Out[34]: 'something'

The Series name will be assigned automatically in many cases, in particular when taking 1D slices of DataFrame as
you will see below.
You can rename a Series with the pandas.Series.rename() method.
In [35]: s2 = s.rename("different")

In [36]: s2.name
Out[36]: 'different'

Note that s and s2 refer to different objects.

DataFrame

DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it
like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object.
Like Series, DataFrame accepts many different kinds of input:
• Dict of 1D ndarrays, lists, dicts, or Series
• 2-D numpy.ndarray
• Structured or record ndarray
• A Series
• Another DataFrame


Along with the data, you can optionally pass index (row labels) and columns (column labels) arguments. If you pass
an index and / or columns, you are guaranteeing the index and / or columns of the resulting DataFrame. Thus, a dict
of Series plus a specific index will discard all data not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data based on common sense rules.

Note: When the data is a dict, and columns is not specified, the DataFrame columns will be ordered by the dict’s
insertion order, if you are using Python version >= 3.6 and Pandas >= 0.23.
If you are using Python < 3.6 or Pandas < 0.23, and columns is not specified, the DataFrame columns will be the
lexically ordered list of dict keys.

From dict of Series or dicts

The resulting index will be the union of the indexes of the various Series. If there are any nested dicts, these will first
be converted to Series. If no columns are passed, the columns will be the ordered list of dict keys.

In [37]: d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),


....: 'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
....:

In [38]: df = pd.DataFrame(d)

In [39]: df
Out[39]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0

In [40]: pd.DataFrame(d, index=['d', 'b', 'a'])


Out[40]:
one two
d NaN 4.0
b 2.0 2.0
a 1.0 1.0

In [41]: pd.DataFrame(d, index=['d', 'b', 'a'], columns=['two', 'three'])


Out[41]:
two three
d 4.0 NaN
b 2.0 NaN
a 1.0 NaN

The row and column labels can be accessed respectively by accessing the index and columns attributes:

Note: When a particular set of columns is passed along with a dict of data, the passed columns override the keys in
the dict.

In [42]: df.index
Out[42]: Index(['a', 'b', 'c', 'd'], dtype='object')



In [43]: df.columns
Out[43]: Index(['one', 'two'], dtype='object')

From dict of ndarrays / lists

The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.

In [44]: d = {'one': [1., 2., 3., 4.],


....: 'two': [4., 3., 2., 1.]}
....:

In [45]: pd.DataFrame(d)
Out[45]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0

In [46]: pd.DataFrame(d, index=['a', 'b', 'c', 'd'])


Out[46]:
one two
a 1.0 4.0
b 2.0 3.0
c 3.0 2.0
d 4.0 1.0

From structured or record array

This case is handled identically to a dict of arrays.

In [47]: data = np.zeros((2, ), dtype=[('A', 'i4'), ('B', 'f4'), ('C', 'a10')])

In [48]: data[:] = [(1, 2., 'Hello'), (2, 3., "World")]

In [49]: pd.DataFrame(data)
Out[49]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'

In [50]: pd.DataFrame(data, index=['first', 'second'])


Out[50]:
A B C
first 1 2.0 b'Hello'
second 2 3.0 b'World'

In [51]: pd.DataFrame(data, columns=['C', 'A', 'B'])


Out[51]:
C A B
0 b'Hello' 1 2.0
1 b'World' 2 3.0


Note: DataFrame is not intended to work exactly like a 2-dimensional NumPy ndarray.

From a list of dicts

In [52]: data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]

In [53]: pd.DataFrame(data2)
Out[53]:
a b c
0 1 2 NaN
1 5 10 20.0

In [54]: pd.DataFrame(data2, index=['first', 'second'])


Out[54]:
a b c
first 1 2 NaN
second 5 10 20.0

In [55]: pd.DataFrame(data2, columns=['a', 'b'])


Out[55]:
a b
0 1 2
1 5 10

From a dict of tuples

You can automatically create a MultiIndexed frame by passing a tuples dictionary.

In [56]: pd.DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},


....: ('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},
....: ('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},
....: ('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},
....: ('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})
....:
Out[56]:
a b
b a c a b
A B 1.0 4.0 5.0 8.0 10.0
C 2.0 3.0 6.0 7.0 NaN
D NaN NaN NaN NaN 9.0

From a Series

The result will be a DataFrame with the same index as the input Series, and with one column whose name is the
original name of the Series (only if no other column name provided).
Missing data
Much more will be said on this topic in the Missing data section. To construct a DataFrame with missing data, we use
np.nan to represent missing values. Alternatively, you may pass a numpy.MaskedArray as the data argument to
the DataFrame constructor, and its masked entries will be considered missing.
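
A small sketch of both points, separate from the numbered session (the names below are
illustrative only):

s = pd.Series([1., 2., 3.], index=['a', 'b', 'c'], name='col')
pd.DataFrame(s)                          # one column named 'col', index ['a', 'b', 'c']

masked = np.ma.masked_array([1., 2., 3.], mask=[False, True, False])
pd.DataFrame(masked, columns=['A'])      # the masked entry shows up as NaN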


Alternate constructors

DataFrame.from_dict
DataFrame.from_dict takes a dict of dicts or a dict of array-like sequences and returns a DataFrame. It operates
like the DataFrame constructor except for the orient parameter which is 'columns' by default, but which can
be set to 'index' in order to use the dict keys as row labels.
In [57]: pd.DataFrame.from_dict(dict([('A', [1, 2, 3]), ('B', [4, 5, 6])]))
Out[57]:
A B
0 1 4
1 2 5
2 3 6

If you pass orient='index', the keys will be the row labels. In this case, you can also pass the desired column
names:
In [58]: pd.DataFrame.from_dict(dict([('A', [1, 2, 3]), ('B', [4, 5, 6])]),
....: orient='index', columns=['one', 'two', 'three'])
....:
Out[58]:
one two three
A 1 2 3
B 4 5 6

DataFrame.from_records
DataFrame.from_records takes a list of tuples or an ndarray with structured dtype. It works analogously to the
normal DataFrame constructor, except that the resulting DataFrame index may be a specific field of the structured
dtype. For example:
In [59]: data
Out[59]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])

In [60]: pd.DataFrame.from_records(data, index='C')


Out[60]:
A B
C
b'Hello' 1 2.0
b'World' 2 3.0

Column selection, addition, deletion

You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting
columns works with the same syntax as the analogous dict operations:
In [61]: df['one']
Out[61]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64

In [62]: df['three'] = df['one'] * df['two']

In [63]: df['flag'] = df['one'] > 2

In [64]: df
Out[64]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False

Columns can be deleted or popped like with a dict:

In [65]: del df['two']

In [66]: three = df.pop('three')

In [67]: df
Out[67]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False

When inserting a scalar value, it will naturally be propagated to fill the column:
In [68]: df['foo'] = 'bar'

In [69]: df
Out[69]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar

When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame’s
index:

In [70]: df['one_trunc'] = df['one'][:2]

In [71]: df
Out[71]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN

You can insert raw ndarrays but their length must match the length of the DataFrame’s index.
By default, columns get inserted at the end. The insert function is available to insert at a particular location in the
columns:


In [72]: df.insert(1, 'bar', df['one'])

In [73]: df
Out[73]:
one bar flag foo one_trunc
a 1.0 1.0 False bar 1.0
b 2.0 2.0 False bar 2.0
c 3.0 3.0 True bar NaN
d NaN NaN False bar NaN

Assigning new columns in method chains

Inspired by dplyr’s mutate verb, DataFrame has an assign() method that allows you to easily create new columns
that are potentially derived from existing columns.

In [74]: iris = pd.read_csv('data/iris.data')

In [75]: iris.head()
Out[75]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa

In [76]: (iris.assign(sepal_ratio=iris['SepalWidth'] / iris['SepalLength'])


....:       .head())
....:
Out[76]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000

In the example above, we inserted a precomputed value. We can also pass in a function of one argument to be evaluated
on the DataFrame being assigned to.

In [77]: iris.assign(sepal_ratio=lambda x: (x['SepalWidth'] / x['SepalLength'])).


˓→head()

Out[77]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000

assign always returns a copy of the data, leaving the original DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is useful when you don’t have a reference to the
DataFrame at hand. This is common when using assign in a chain of operations. For example, we can limit the
DataFrame to just those observations with a Sepal Length greater than 5, calculate the ratio, and plot:


In [78]: (iris.query('SepalLength > 5')


....: .assign(SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
....: PetalRatio=lambda x: x.PetalWidth / x.PetalLength)
....: .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))
....:
Out[78]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d510d9950>


Since a function is passed in, the function is computed on the DataFrame being assigned to. Importantly, this is the
DataFrame that’s been filtered to those rows with sepal length greater than 5. The filtering happens first, and then the
ratio calculations. This is an example where we didn’t have a reference to the filtered DataFrame available.
The function signature for assign is simply **kwargs. The keys are the column names for the new fields, and the
values are either a value to be inserted (for example, a Series or NumPy array), or a function of one argument to be
called on the DataFrame. A copy of the original DataFrame is returned, with the new values inserted.
Changed in version 0.23.0.
Starting with Python 3.6 the order of **kwargs is preserved. This allows for dependent assignment, where an
expression later in **kwargs can refer to a column created earlier in the same assign().

In [79]: dfa = pd.DataFrame({"A": [1, 2, 3],


....: "B": [4, 5, 6]})
....:

In [80]: dfa.assign(C=lambda x: x['A'] + x['B'],




....: D=lambda x: x['A'] + x['C'])
....:
Out[80]:
A B C D
0 1 4 5 6
1 2 5 7 9
2 3 6 9 12

In the second expression, x['C'] will refer to the newly created column, that’s equal to dfa['A'] + dfa['B'].

Indexing / selection

The basics of indexing are as follows:

Operation Syntax Result


Select column df[col] Series
Select row by label df.loc[label] Series
Select row by integer location df.iloc[loc] Series
Slice rows df[5:10] DataFrame
Select rows by boolean vector df[bool_vec] DataFrame

Row selection, for example, returns a Series whose index is the columns of the DataFrame:

In [81]: df.loc['b']
Out[81]:
one 2
bar 2
flag False
foo bar
one_trunc 2
Name: b, dtype: object

In [82]: df.iloc[2]
Out[82]:
one 3
bar 3
flag True
foo bar
one_trunc NaN
Name: c, dtype: object

For a more exhaustive treatment of sophisticated label-based indexing and slicing, see the section on indexing. We
will address the fundamentals of reindexing / conforming to new sets of labels in the section on reindexing.


Data alignment and arithmetic

Operations between DataFrame objects automatically align on both the columns and the index (row labels).
Again, the resulting object will have the union of the column and row labels.

In [83]: df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])

In [84]: df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])

In [85]: df + df2
Out[85]:
A B C D
0 0.045691 -0.014138 1.380871 NaN
1 -0.955398 -1.501007 0.037181 NaN
2 -0.662690 1.534833 -0.859691 NaN
3 -2.452949 1.237274 -0.133712 NaN
4 1.414490 1.951676 -2.320422 NaN
5 -0.494922 -1.649727 -1.084601 NaN
6 -1.047551 -0.748572 -0.805479 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN

When doing an operation between DataFrame and Series, the default behavior is to align the Series index on the
DataFrame columns, thus broadcasting row-wise. For example:

In [86]: df - df.iloc[0]
Out[86]:
A B C D
0 0.000000 0.000000 0.000000 0.000000
1 -1.359261 -0.248717 -0.453372 -1.754659
2 0.253128 0.829678 0.010026 -1.991234
3 -1.311128 0.054325 -1.724913 -1.620544
4 0.573025 1.500742 -0.676070 1.367331
5 -1.741248 0.781993 -1.241620 -2.053136
6 -1.240774 -0.869551 -0.153282 0.000430
7 -0.743894 0.411013 -0.929563 -0.282386
8 -1.194921 1.320690 0.238224 -1.482644
9 2.293786 1.856228 0.773289 -1.446531

In the special case of working with time series data, if the DataFrame index contains dates, the broadcasting will be
column-wise:

In [87]: index = pd.date_range('1/1/2000', periods=8)

In [88]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=list('ABC'))

In [89]: df
Out[89]:
A B C
2000-01-01 -1.226825 0.769804 -1.281247
2000-01-02 -0.727707 -0.121306 -0.097883
2000-01-03 0.695775 0.341734 0.959726
2000-01-04 -1.110336 -0.619976 0.149748
2000-01-05 -0.732339 0.687738 0.176444
2000-01-06 0.403310 -0.154951 0.301624
2000-01-07 -2.179861 -1.369849 -0.954208


2000-01-08 1.462696 -1.743161 -0.826591

In [90]: type(df['A'])
Out[90]: pandas.core.series.Series

In [91]: df - df['A']
Out[91]:
2000-01-01 00:00:00 2000-01-02 00:00:00 2000-01-03 00:00:00 2000-01-04
˓→00:00:00 2000-01-05 00:00:00 ... 2000-01-07 00:00:00 2000-01-08 00:00:00 A
˓→B C
2000-01-01 NaN NaN NaN
˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

2000-01-02 NaN NaN NaN


˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

2000-01-03 NaN NaN NaN


˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

2000-01-04 NaN NaN NaN


˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

2000-01-05 NaN NaN NaN


˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

2000-01-06 NaN NaN NaN


˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN
2000-01-07 NaN NaN NaN
˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

2000-01-08 NaN NaN NaN


˓→ NaN NaN ... NaN NaN NaN
˓→NaN NaN

[8 rows x 11 columns]

Warning:
df - df['A']

is now deprecated and will be removed in a future release. The preferred way to replicate this behavior is
df.sub(df['A'], axis=0)

For explicit control over the matching and broadcasting behavior, see the section on flexible binary operations.
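
As a brief sketch of that preferred spelling (an illustrative frame, not part of the numbered
session), axis=0 tells the flexible arithmetic methods to match on the index and broadcast
across the columns:

small = pd.DataFrame({'A': [1., 2.], 'B': [10., 20.]},
                     index=pd.date_range('2000-01-01', periods=2))
small.sub(small['A'], axis=0)   # column 'A' becomes 0.0; column 'B' holds B - A
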
Operations with scalars are just as you would expect:

In [92]: df * 5 + 2
Out[92]:
A B C
2000-01-01 -4.134126 5.849018 -4.406237
2000-01-02 -1.638535 1.393469 1.510587
2000-01-03 5.478873 3.708672 6.798628


2000-01-04 -3.551681 -1.099880 2.748742
2000-01-05 -1.661697 5.438692 2.882222
2000-01-06 4.016548 1.225246 3.508122
2000-01-07 -8.899303 -4.849247 -2.771039
2000-01-08 9.313480 -6.715805 -2.132955

In [93]: 1 / df
Out[93]:
A B C
2000-01-01 -0.815112 1.299033 -0.780489
2000-01-02 -1.374179 -8.243600 -10.216313
2000-01-03 1.437247 2.926250 1.041965
2000-01-04 -0.900628 -1.612966 6.677871
2000-01-05 -1.365487 1.454041 5.667510
2000-01-06 2.479485 -6.453662 3.315381
2000-01-07 -0.458745 -0.730007 -1.047990
2000-01-08 0.683669 -0.573671 -1.209788

In [94]: df ** 4
Out[94]:
A B C
2000-01-01 2.265327 0.351172 2.694833
2000-01-02 0.280431 0.000217 0.000092
2000-01-03 0.234355 0.013638 0.848376
2000-01-04 1.519910 0.147740 0.000503
2000-01-05 0.287640 0.223714 0.000969
2000-01-06 0.026458 0.000576 0.008277
2000-01-07 22.579530 3.521204 0.829033
2000-01-08 4.577374 9.233151 0.466834

Boolean operators work as well:


In [95]: df1 = pd.DataFrame({'a': [1, 0, 1], 'b': [0, 1, 1]}, dtype=bool)

In [96]: df2 = pd.DataFrame({'a': [0, 1, 1], 'b': [1, 1, 0]}, dtype=bool)

In [97]: df1 & df2


Out[97]:
a b
0 False False
1 False True
2 True False

In [98]: df1 | df2


Out[98]:
a b
0 True True
1 True True
2 True True

In [99]: df1 ^ df2


Out[99]:
a b
0 True True
1 True False
2 False True



In [100]: -df1
Out[100]:
a b
0 False True
1 True False
2 False False

Transposing

To transpose, access the T attribute (also the transpose function), similar to an ndarray:

# only show the first 5 rows


In [101]: df[:5].T
Out[101]:
2000-01-01 2000-01-02 2000-01-03 2000-01-04 2000-01-05
A -1.226825 -0.727707 0.695775 -1.110336 -0.732339
B 0.769804 -0.121306 0.341734 -0.619976 0.687738
C -1.281247 -0.097883 0.959726 0.149748 0.176444

DataFrame interoperability with NumPy functions

Elementwise NumPy ufuncs (log, exp, sqrt, . . . ) and various other NumPy functions can be used with no issues on
Series and DataFrame, assuming the data within are numeric:
In [102]: np.exp(df)
Out[102]:
A B C
2000-01-01 0.293222 2.159342 0.277691
2000-01-02 0.483015 0.885763 0.906755
2000-01-03 2.005262 1.407386 2.610980
2000-01-04 0.329448 0.537957 1.161542
2000-01-05 0.480783 1.989212 1.192968
2000-01-06 1.496770 0.856457 1.352053
2000-01-07 0.113057 0.254145 0.385117
2000-01-08 4.317584 0.174966 0.437538

In [103]: np.asarray(df)
Out[103]:
array([[-1.2268, 0.7698, -1.2812],
[-0.7277, -0.1213, -0.0979],
[ 0.6958, 0.3417, 0.9597],
[-1.1103, -0.62 , 0.1497],
[-0.7323, 0.6877, 0.1764],
[ 0.4033, -0.155 , 0.3016],
[-2.1799, -1.3698, -0.9542],
[ 1.4627, -1.7432, -0.8266]])

DataFrame is not intended to be a drop-in replacement for ndarray as its indexing semantics and data model are quite
different in places from an n-dimensional array.
Series implements __array_ufunc__, which allows it to work with NumPy’s universal functions.
The ufunc is applied to the underlying array in a Series.


In [104]: ser = pd.Series([1, 2, 3, 4])

In [105]: np.exp(ser)
Out[105]:
0 2.718282
1 7.389056
2 20.085537
3 54.598150
dtype: float64

Changed in version 0.25.0: When multiple Series are passed to a ufunc, they are aligned before performing the
operation.
Like other parts of the library, pandas will automatically align labeled inputs as part of a ufunc with multiple inputs.
For example, using numpy.remainder() on two Series with differently ordered labels will align before the
operation.

In [106]: ser1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])

In [107]: ser2 = pd.Series([1, 3, 5], index=['b', 'a', 'c'])

In [108]: ser1
Out[108]:
a 1
b 2
c 3
dtype: int64

In [109]: ser2
Out[109]:
b 1
a 3
c 5
dtype: int64

In [110]: np.remainder(ser1, ser2)


Out[110]:
a 1
b 0
c 3
dtype: int64

As usual, the union of the two indices is taken, and non-overlapping values are filled with missing values.

In [111]: ser3 = pd.Series([2, 4, 6], index=['b', 'c', 'd'])

In [112]: ser3
Out[112]:
b 2
c 4
d 6
dtype: int64

In [113]: np.remainder(ser1, ser3)


Out[113]:
a NaN
b 0.0


c 3.0
d NaN
dtype: float64

When a binary ufunc is applied to a Series and Index, the Series implementation takes precedence and a Series is
returned.
In [114]: ser = pd.Series([1, 2, 3])

In [115]: idx = pd.Index([4, 5, 6])

In [116]: np.maximum(ser, idx)


Out[116]:
0 4
1 5
2 6
dtype: int64

NumPy ufuncs are safe to apply to Series backed by non-ndarray arrays, for example arrays.SparseArray
(see Sparse calculation). If possible, the ufunc is applied without converting the underlying data to an ndarray.
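
For instance, a minimal sketch (not part of the numbered session) of a ufunc applied to a
Series backed by a SparseArray:

sparse_ser = pd.Series(pd.arrays.SparseArray([1., 0., 0., -2., 0.]))
np.abs(sparse_ser)   # the result keeps the Sparse[float64] dtype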

Console display

Very large DataFrames will be truncated to display them in the console. You can also get a summary using info().
(Here I am reading a CSV version of the baseball dataset from the plyr R package):
In [117]: baseball = pd.read_csv('data/baseball.csv')
In [118]: print(baseball)
id player year stint team lg g ab r h X2b X3b hr rbi sb
˓→ cs bb so ibb hbp sh sf gidp
0 88641 womacto01 2006 2 CHN NL 19 50 6 14 1 0 1 2.0 1.0
˓→ 1.0 4 4.0 0.0 0.0 3.0 0.0 0.0
1 88643 schilcu01 2006 1 BOS AL 31 2 0 1 0 0 0 0.0 0.0
˓→ 0.0 0 1.0 0.0 0.0 0.0 0.0 0.0
.. ... ... ... ... ... .. .. ... .. ... ... ... .. ... ...
˓→ ... .. ... ... ... ... ... ...
98 89533 aloumo01 2007 1 NYN NL 87 328 51 112 19 1 13 49.0 3.0
˓→ 0.0 27 30.0 5.0 2.0 0.0 3.0 13.0
99 89534 alomasa02 2007 1 NYN NL 8 22 1 3 1 0 0 0.0 0.0
˓→ 0.0 0 3.0 0.0 0.0 0.0 0.0 0.0

[100 rows x 23 columns]

In [119]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100 entries, 0 to 99
Data columns (total 23 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 100 non-null int64
1 player 100 non-null object
2 year 100 non-null int64
3 stint 100 non-null int64
4 team 100 non-null object


5 lg 100 non-null object
6 g 100 non-null int64
7 ab 100 non-null int64
8 r 100 non-null int64
9 h 100 non-null int64
10 X2b 100 non-null int64
11 X3b 100 non-null int64
12 hr 100 non-null int64
13 rbi 100 non-null float64
14 sb 100 non-null float64
15 cs 100 non-null float64
16 bb 100 non-null int64
17 so 100 non-null float64
18 ibb 100 non-null float64
19 hbp 100 non-null float64
20 sh 100 non-null float64
21 sf 100 non-null float64
22 gidp 100 non-null float64
dtypes: float64(9), int64(11), object(3)
memory usage: 18.1+ KB

However, using to_string will return a string representation of the DataFrame in tabular form, though it won’t
always fit the console width:
In [120]: print(baseball.iloc[-20:, :12].to_string())
id player year stint team lg g ab r h X2b X3b
80 89474 finlest01 2007 1 COL NL 43 94 9 17 3 0
81 89480 embreal01 2007 1 OAK AL 4 0 0 0 0 0
82 89481 edmonji01 2007 1 SLN NL 117 365 39 92 15 2
83 89482 easleda01 2007 1 NYN NL 76 193 24 54 6 0
84 89489 delgaca01 2007 1 NYN NL 139 538 71 139 30 0
85 89493 cormirh01 2007 1 CIN NL 6 0 0 0 0 0
86 89494 coninje01 2007 2 NYN NL 21 41 2 8 2 0
87 89495 coninje01 2007 1 CIN NL 80 215 23 57 11 1
88 89497 clemero02 2007 1 NYA AL 2 2 0 1 0 0
89 89498 claytro01 2007 2 BOS AL 8 6 1 0 0 0
90 89499 claytro01 2007 1 TOR AL 69 189 23 48 14 0
91 89501 cirilje01 2007 2 ARI NL 28 40 6 8 4 0
92 89502 cirilje01 2007 1 MIN AL 50 153 18 40 9 2
93 89521 bondsba01 2007 1 SFN NL 126 340 75 94 14 0
94 89523 biggicr01 2007 1 HOU NL 141 517 68 130 31 3
95 89525 benitar01 2007 2 FLO NL 34 0 0 0 0 0
96 89526 benitar01 2007 1 SFN NL 19 0 0 0 0 0
97 89530 ausmubr01 2007 1 HOU NL 117 349 38 82 16 3
98 89533 aloumo01 2007 1 NYN NL 87 328 51 112 19 1
99 89534 alomasa02 2007 1 NYN NL 8 22 1 3 1 0

Wide DataFrames will be printed across multiple rows by default:


In [121]: pd.DataFrame(np.random.randn(3, 12))
Out[121]:
0 1 2 3 4 5 6 7
˓→ 8 9 10 11
0 -0.345352 1.314232 0.690579 0.995761 2.396780 0.014871 3.357427 -0.317441 -1.
˓→236269 0.896171 -0.487602 -0.082240
1 -2.182937 0.380396 0.084844 0.432390 1.519970 -0.493662 0.600178 0.274230 0.
˓→132885 -0.023688 2.410179 1.450520


2 0.206053 -0.251905 -2.213588 1.063327 1.266143 0.299368 -0.863838 0.408204 -1.
˓→048089 -0.025747 -0.988387 0.094055

You can change how much to print on a single row by setting the display.width option:

In [122]: pd.set_option('display.width', 40) # default is 80

In [123]: pd.DataFrame(np.random.randn(3, 12))


Out[123]:
0 1 2 3 4 5 6 7
˓→ 8 9 10 11
0 1.262731 1.289997 0.082423 -0.055758 0.536580 -0.489682 0.369374 -0.034571 -2.
˓→484478 -0.281461 0.030711 0.109121
1 1.126203 -0.977349 1.474071 -0.064034 -1.282782 0.781836 -1.071357 0.441153 2.
˓→353925 0.583787 0.221471 -0.744471
2 0.758527 1.729689 -0.964980 -0.845696 -1.340896 1.846883 -1.328865 1.682706 -1.
˓→717693 0.888782 0.228440 0.901805

You can adjust the max width of the individual columns by setting display.max_colwidth

In [124]: datafile = {'filename': ['filename_01', 'filename_02'],


.....: 'path': ["media/user_name/storage/folder_01/filename_01",
.....: "media/user_name/storage/folder_02/filename_02"]}
.....:

In [125]: pd.set_option('display.max_colwidth', 30)

In [126]: pd.DataFrame(datafile)
Out[126]:
filename path
0 filename_01 media/user_name/storage/fo...
1 filename_02 media/user_name/storage/fo...

In [127]: pd.set_option('display.max_colwidth', 100)

In [128]: pd.DataFrame(datafile)
Out[128]:
filename path
0 filename_01 media/user_name/storage/folder_01/filename_01
1 filename_02 media/user_name/storage/folder_02/filename_02

You can also disable this feature via the expand_frame_repr option. This will print the table in one block.
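
A minimal sketch of that option (the set/reset calls around it are only there to keep the
session tidy):

pd.set_option('expand_frame_repr', False)   # print wide frames as a single block
pd.DataFrame(np.random.randn(3, 12))
pd.reset_option('expand_frame_repr')        # back to the default wrapped display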

DataFrame column attribute access and IPython completion

If a DataFrame column label is a valid Python variable name, the column can be accessed like an attribute:

In [129]: df = pd.DataFrame({'foo1': np.random.randn(5),


.....: 'foo2': np.random.randn(5)})
.....:

In [130]: df
Out[130]:
foo1 foo2
0 1.171216 -0.858447


1 0.520260 0.306996
2 -1.197071 -0.028665
3 -1.066969 0.384316
4 -0.303421 1.574159

In [131]: df.foo1
Out[131]:
0 1.171216
1 0.520260
2 -1.197071
3 -1.066969
4 -0.303421
Name: foo1, dtype: float64

The columns are also connected to the IPython completion mechanism so they can be tab-completed:

In [5]: df.fo<TAB> # noqa: E225, E999


df.foo1 df.foo2

2.4.7 Comparison with other tools

Comparison with R / R libraries

Since pandas aims to provide a lot of the data manipulation and analysis functionality that people use R for, this
page was started to provide a more detailed look at the R language and its many third party libraries as they relate to
pandas. In comparisons with R and CRAN libraries, we care about the following things:
• Functionality / flexibility: what can/cannot be done with each tool
• Performance: how fast are operations. Hard numbers/benchmarks are preferable
• Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code
comparisons)
This page is also here to offer a bit of a translation guide for users of these R packages.
For transfer of DataFrame objects from pandas to R, one option is to use HDF5 files, see External compatibility
for an example.
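
A minimal sketch of that HDF5 hand-off, assuming the optional tables package is installed;
the file name and key are arbitrary, and reading the file on the R side (for example with the
hdf5r package) is left to the R documentation:

df = pd.DataFrame({'x': [1, 2, 3], 'y': ['a', 'b', 'c']})
df.to_hdf('transfer.h5', key='df', mode='w')   # write a file an R HDF5 reader can open
pd.read_hdf('transfer.h5', 'df')               # round-trip check from the Python side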

Quick reference

We’ll start off with a quick reference guide pairing some common R operations using dplyr with pandas equivalents.


Querying, filtering, sampling

R pandas
dim(df) df.shape
head(df) df.head()
slice(df, 1:10) df.iloc[:9]
filter(df, col1 == 1, col2 == 1) df.query('col1 == 1 & col2 == 1')
df[df$col1 == 1 & df$col2 == 1,] df[(df.col1 == 1) & (df.col2 == 1)]
select(df, col1, col2) df[['col1', 'col2']]
select(df, col1:col3) df.loc[:, 'col1':'col3']
select(df, -(col1:col3)) df.drop(cols_to_drop, axis=1) but see1
distinct(select(df, col1)) df[['col1']].drop_duplicates()
distinct(select(df, col1, col2)) df[['col1', 'col2']].drop_duplicates()
sample_n(df, 10) df.sample(n=10)
sample_frac(df, 0.01) df.sample(frac=0.01)

Sorting

R pandas
arrange(df, col1, col2) df.sort_values(['col1', 'col2'])
arrange(df, desc(col1)) df.sort_values('col1', ascending=False)

Transforming

R                           pandas
select(df, col_one = col1)  df.rename(columns={'col1': 'col_one'})['col_one']
rename(df, col_one = col1)  df.rename(columns={'col1': 'col_one'})
mutate(df, c=a-b)           df.assign(c=df['a']-df['b'])

Grouping and summarizing

R                                            pandas
summary(df)                                  df.describe()
gdf <- group_by(df, col1)                    gdf = df.groupby('col1')
summarise(gdf, avg=mean(col1, na.rm=TRUE))   df.groupby('col1').agg({'col1': 'mean'})
summarise(gdf, total=sum(col1))              df.groupby('col1').sum()
1 R’s shorthand for a subrange of columns (select(df, col1:col3)) can be approached cleanly in pandas, if you have the list of columns,

for example df[cols[1:3]] or df.drop(cols[1:3]), but doing this by column name is a bit messy.


Base R

Slicing with R’s c

R makes it easy to access data.frame columns by name


df <- data.frame(a=rnorm(5), b=rnorm(5), c=rnorm(5), d=rnorm(5), e=rnorm(5))
df[, c("a", "c", "e")]

or by integer location
df <- data.frame(matrix(rnorm(1000), ncol=100))
df[, c(1:10, 25:30, 40, 50:100)]

Selecting multiple columns by name in pandas is straightforward


In [1]: df = pd.DataFrame(np.random.randn(10, 3), columns=list('abc'))

In [2]: df[['a', 'c']]


Out[2]:
a c
0 0.469112 -1.509059
1 -1.135632 -0.173215
2 0.119209 -0.861849
3 -2.104569 1.071804
4 0.721555 -1.039575
5 0.271860 0.567020
6 0.276232 -0.673690
7 0.113648 0.524988
8 0.404705 -1.715002
9 -1.039268 -1.157892

In [3]: df.loc[:, ['a', 'c']]


Out[3]:
a c
0 0.469112 -1.509059
1 -1.135632 -0.173215
2 0.119209 -0.861849
3 -2.104569 1.071804
4 0.721555 -1.039575
5 0.271860 0.567020
6 0.276232 -0.673690
7 0.113648 0.524988
8 0.404705 -1.715002
9 -1.039268 -1.157892

Selecting multiple noncontiguous columns by integer location can be achieved with a combination of the iloc indexer
attribute and numpy.r_.
In [4]: named = list('abcdefg')

In [5]: n = 30

In [6]: columns = named + np.arange(len(named), n).tolist()

In [7]: df = pd.DataFrame(np.random.randn(n, n), columns=columns)



In [8]: df.iloc[:, np.r_[:10, 24:30]]
Out[8]:
a b c d e f g 7
˓→ 8 9 24 25 26 27 28 29
0 -1.344312 0.844885 1.075770 -0.109050 1.643563 -1.469388 0.357021 -0.674600 -1.
˓→776904 -0.968914 -1.170299 -0.226169 0.410835 0.813850 0.132003 -0.827317
1 -0.076467 -1.187678 1.130127 -1.436737 -1.413681 1.607920 1.024180 0.569605 0.
˓→875906 -2.211372 0.959726 -1.110336 -0.619976 0.149748 -0.732339 0.687738
2 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849 -0.954208 1.462696 -1.
˓→743161 -0.826591 0.084844 0.432390 1.519970 -0.493662 0.600178 0.274230
3 0.132885 -0.023688 2.410179 1.450520 0.206053 -0.251905 -2.213588 1.063327 1.
˓→266143 0.299368 -2.484478 -0.281461 0.030711 0.109121 1.126203 -0.977349
4 1.474071 -0.064034 -1.282782 0.781836 -1.071357 0.441153 2.353925 0.583787 0.
˓→221471 -0.744471 -1.197071 -1.066969 -0.303421 -0.858447 0.306996 -0.028665
.. ... ... ... ... ... ... ... ...
˓→ ... ... ... ... ... ... ... ...
25 1.492125 -0.068190 0.681456 1.221829 -0.434352 1.204815 -0.195612 1.251683 -1.
˓→040389 -0.796211 1.944517 0.042344 -0.307904 0.428572 0.880609 0.487645
26 0.725238 0.624607 -0.141185 -0.143948 -0.328162 2.095086 -0.608888 -0.926422 1.
˓→872601 -2.513465 -0.846188 1.190624 0.778507 1.008500 1.424017 0.717110
27 1.262419 1.950057 0.301038 -0.933858 0.814946 0.181439 -0.110015 -2.364638 -1.
˓→584814 0.307941 -1.341814 0.334281 -0.162227 1.007824 2.826008 1.458383
28 -1.585746 -0.899734 0.921494 -0.211762 -0.059182 0.058308 0.915377 -0.696321 0.
˓→150664 -3.060395 0.403620 -0.026602 -0.240481 0.577223 -1.088417 0.326687
29 -0.986248 0.169729 -1.158091 1.019673 0.646039 0.917399 -0.010435 0.366366 0.
˓→922729 0.869610 -1.209247 -0.671466 0.332872 -2.013086 -1.602549 0.333109

[30 rows x 16 columns]



aggregate

In R you may want to split data into subsets and compute the mean for each. Using a data.frame called df and splitting
it into groups by the columns by1 and by2:
df <- data.frame(
v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99),
by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12),
by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA))
aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean)

The groupby() method is similar to the base R aggregate function.


In [9]: df = pd.DataFrame(
...: {'v1': [1, 3, 5, 7, 8, 3, 5, np.nan, 4, 5, 7, 9],
...: 'v2': [11, 33, 55, 77, 88, 33, 55, np.nan, 44, 55, 77, 99],
...: 'by1': ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12],
...: 'by2': ["wet", "dry", 99, 95, np.nan, "damp", 95, 99, "red", 99, np.nan,
...: np.nan]})
...:

In [10]: g = df.groupby(['by1', 'by2'])

In [11]: g[['v1', 'v2']].mean()


Out[11]:


v1 v2
by1 by2
1 95 5.0 55.0
99 5.0 55.0
2 95 7.0 77.0
99 NaN NaN
big damp 3.0 33.0
blue dry 3.0 33.0
red red 4.0 44.0
wet 1.0 11.0

For more details and examples see the groupby documentation.

match / %in%

A common way to select data in R is using %in% which is defined using the function match. The operator %in% is
used to return a logical vector indicating if there is a match or not:

s <- 0:4
s %in% c(2,4)

The isin() method is similar to the R %in% operator:

In [12]: s = pd.Series(np.arange(5), dtype=np.float32)

In [13]: s.isin([2, 4])


Out[13]:
0 False
1 False
2 True
3 False
4 True
dtype: bool

The match function returns a vector of the positions of matches of its first argument in its second:

s <- 0:4
match(s, c(2,4))
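
In pandas, a comparable positional lookup can be sketched with Index.get_indexer, which
returns 0-based positions and -1 where there is no match (a sketch, not part of the numbered
session):

s = pd.Series([0, 1, 2, 3, 4])
pd.Index([2, 4]).get_indexer(s)   # array([-1, -1,  0, -1,  1])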

For more details and examples see the reshaping documentation.

tapply

tapply is similar to aggregate, but data can be in a ragged array, since the subclass sizes are possibly irregular.
Using a data.frame called baseball, and retrieving information based on the array team:

baseball <-
data.frame(team = gl(5, 5,
labels = paste("Team", LETTERS[1:5])),
player = sample(letters, 25),
batting.average = runif(25, .200, .400))

tapply(baseball$batting.average, baseball$team,
max)


In pandas we may use the pivot_table() method to handle this:

In [14]: import random

In [15]: import string

In [16]: baseball = pd.DataFrame(


....: {'team': ["team %d" % (x + 1) for x in range(5)] * 5,
....: 'player': random.sample(list(string.ascii_lowercase), 25),
....: 'batting avg': np.random.uniform(.200, .400, 25)})
....:

In [17]: baseball.pivot_table(values='batting avg', columns='team', aggfunc=np.max)


Out[17]:
team team 1 team 2 team 3 team 4 team 5
batting avg 0.352134 0.295327 0.397191 0.394457 0.396194

For more details and examples see the reshaping documentation.

subset

The query() method is similar to the base R subset function. In R you might want to get the rows of a data.
frame where one column’s values are less than another column’s values:

df <- data.frame(a=rnorm(10), b=rnorm(10))


subset(df, a <= b)
df[df$a <= df$b,] # note the comma
In pandas, there are a few ways to perform subsetting. You can use query() or pass an expression as if it were an
index/slice as well as standard boolean indexing:

In [18]: df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})

In [19]: df.query('a <= b')


Out[19]:
a b
1 0.174950 0.552887
2 -0.023167 0.148084
3 -0.495291 -0.300218
4 -0.860736 0.197378
5 -1.134146 1.720780
7 -0.290098 0.083515
8 0.238636 0.946550

In [20]: df[df['a'] <= df['b']]


Out[20]:
a b
1 0.174950 0.552887
2 -0.023167 0.148084
3 -0.495291 -0.300218
4 -0.860736 0.197378
5 -1.134146 1.720780
7 -0.290098 0.083515
8 0.238636 0.946550

In [21]: df.loc[df['a'] <= df['b']]




Out[21]:
a b
1 0.174950 0.552887
2 -0.023167 0.148084
3 -0.495291 -0.300218
4 -0.860736 0.197378
5 -1.134146 1.720780
7 -0.290098 0.083515
8 0.238636 0.946550

For more details and examples see the query documentation.

with

An expression using a data.frame called df in R with the columns a and b would be evaluated using with like so:

df <- data.frame(a=rnorm(10), b=rnorm(10))


with(df, a + b)
df$a + df$b # same as the previous expression

In pandas the equivalent expression, using the eval() method, would be:

In [22]: df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})

In [23]: df.eval('a + b')


Out[23]:
0 -0.091430
1 -2.483890
2 -0.252728
3 -0.626444
4 -0.261740
5 2.149503
6 -0.332214
7 0.799331
8 -2.377245
9 2.104677
dtype: float64

In [24]: df['a'] + df['b'] # same as the previous expression


Out[24]:
0 -0.091430
1 -2.483890
2 -0.252728
3 -0.626444
4 -0.261740
5 2.149503
6 -0.332214
7 0.799331
8 -2.377245
9 2.104677
dtype: float64

In certain cases eval() will be much faster than evaluation in pure Python. For more details and examples see the
eval documentation.


plyr

plyr is an R library for the split-apply-combine strategy for data analysis. The functions revolve around three data
structures in R, a for arrays, l for lists, and d for data.frame. The table below shows how these data
structures could be mapped in Python.

R Python
array list
lists dictionary or list of objects
data.frame dataframe

ddply

An expression using a data.frame called df in R where you want to summarize x by month:

require(plyr)
df <- data.frame(
x = runif(120, 1, 168),
y = runif(120, 7, 334),
z = runif(120, 1.7, 20.7),
month = rep(c(5,6,7,8),30),
week = sample(1:4, 120, TRUE)
)

ddply(df, .(month, week), summarize,


mean = round(mean(x), 2),
sd = round(sd(x), 2))

In pandas the equivalent expression, using the groupby() method, would be:

In [25]: df = pd.DataFrame({'x': np.random.uniform(1., 168., 120),


....: 'y': np.random.uniform(7., 334., 120),
....: 'z': np.random.uniform(1.7, 20.7, 120),
....: 'month': [5, 6, 7, 8] * 30,
....: 'week': np.random.randint(1, 4, 120)})
....:

In [26]: grouped = df.groupby(['month', 'week'])

In [27]: grouped['x'].agg([np.mean, np.std])


Out[27]:
mean std
month week
5 1 63.653367 40.601965
2 78.126605 53.342400
3 92.091886 57.630110
6 1 81.747070 54.339218
2 70.971205 54.687287
3 100.968344 54.010081
7 1 61.576332 38.844274
2 61.733510 48.209013
3 71.688795 37.595638
8 1 62.741922 34.618153
2 91.774627 49.790202
3 73.936856 60.773900


For more details and examples see the groupby documentation.

reshape / reshape2

melt.array

An expression using a 3 dimensional array called a in R where you want to melt it into a data.frame:

a <- array(c(1:23, NA), c(2,3,4))


data.frame(melt(a))

In Python, a is a NumPy array, so you can build the frame with a list comprehension over np.ndenumerate(a).

In [28]: a = np.array(list(range(1, 24)) + [np.NAN]).reshape(2, 3, 4)

In [29]: pd.DataFrame([tuple(list(x) + [val]) for x, val in np.ndenumerate(a)])


Out[29]:
0 1 2 3
0 0 0 0 1.0
1 0 0 1 2.0
2 0 0 2 3.0
3 0 0 3 4.0
4 0 1 0 5.0
.. .. .. .. ...
19 1 1 3 20.0
20 1 2 0 21.0
21 1 2 1 22.0
22 1 2 2 23.0
23 1 2 3 NaN

[24 rows x 4 columns]

melt.list

An expression using a list called a in R where you want to melt it into a data.frame:

a <- as.list(c(1:4, NA))


data.frame(melt(a))

In Python, this data would be a list of tuples, so the DataFrame() constructor converts it to a DataFrame as required.

In [30]: a = list(enumerate(list(range(1, 5)) + [np.NAN]))

In [31]: pd.DataFrame(a)
Out[31]:
0 1
0 0 1.0
1 1 2.0
2 2 3.0
3 3 4.0
4 4 NaN

For more details and examples see the Intro to Data Structures documentation.


melt.data.frame

An expression using a data.frame called cheese in R where you want to reshape the data.frame:

cheese <- data.frame(


first = c('John', 'Mary'),
last = c('Doe', 'Bo'),
height = c(5.5, 6.0),
weight = c(130, 150)
)
melt(cheese, id=c("first", "last"))

In Python, the melt() method is the equivalent of R's melt:

In [32]: cheese = pd.DataFrame({'first': ['John', 'Mary'],


....: 'last': ['Doe', 'Bo'],
....: 'height': [5.5, 6.0],
....: 'weight': [130, 150]})
....:

In [33]: pd.melt(cheese, id_vars=['first', 'last'])


Out[33]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0

In [34]: cheese.set_index(['first', 'last']).stack()  # alternative way


Out[34]:
first last
John Doe height 5.5
weight 130.0
Mary Bo height 6.0
weight 150.0
dtype: float64

For more details and examples see the reshaping documentation.

cast

In R, acast is used to cast a melted data.frame called df into a higher-dimensional array:

df <- data.frame(
x = runif(12, 1, 168),
y = runif(12, 7, 334),
z = runif(12, 1.7, 20.7),
month = rep(c(5,6,7),4),
week = rep(c(1,2), 6)
)

mdf <- melt(df, id=c("month", "week"))


acast(mdf, week ~ month ~ variable, mean)

In Python the best way is to make use of pivot_table():


In [35]: df = pd.DataFrame({'x': np.random.uniform(1., 168., 12),


....: 'y': np.random.uniform(7., 334., 12),
....: 'z': np.random.uniform(1.7, 20.7, 12),
....: 'month': [5, 6, 7] * 4,
....: 'week': [1, 2] * 6})
....:

In [36]: mdf = pd.melt(df, id_vars=['month', 'week'])

In [37]: pd.pivot_table(mdf, values='value', index=['variable', 'week'],


....: columns=['month'], aggfunc=np.mean)
....:
Out[37]:
month 5 6 7
variable week
x 1 93.888747 98.762034 55.219673
2 94.391427 38.112932 83.942781
y 1 94.306912 279.454811 227.840449
2 87.392662 193.028166 173.899260
z 1 11.016009 10.079307 16.170549
2 8.476111 17.638509 19.003494

Similarly for dcast which uses a data.frame called df in R to aggregate information based on Animal and
FeedType:

df <- data.frame(
Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
'Animal2', 'Animal3'),
FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'),
Amount = c(10, 7, 4, 2, 5, 6, 2)
)

dcast(df, Animal ~ FeedType, sum, fill=NaN)


# Alternative method using base R
with(df, tapply(Amount, list(Animal, FeedType), sum))

Python can approach this in two different ways. Firstly, similar to above using pivot_table():

In [38]: df = pd.DataFrame({
....: 'Animal': ['Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
....: 'Animal2', 'Animal3'],
....: 'FeedType': ['A', 'B', 'A', 'A', 'B', 'B', 'A'],
....: 'Amount': [10, 7, 4, 2, 5, 6, 2],
....: })
....:

In [39]: df.pivot_table(values='Amount', index='Animal', columns='FeedType',


....: aggfunc='sum')
....:
Out[39]:
FeedType A B
Animal
Animal1 10.0 5.0
Animal2 2.0 13.0
Animal3 6.0 NaN

The second approach is to use the groupby() method:


In [40]: df.groupby(['Animal', 'FeedType'])['Amount'].sum()


Out[40]:
Animal FeedType
Animal1 A 10
B 5
Animal2 A 2
B 13
Animal3 A 6
Name: Amount, dtype: int64

For more details and examples see the reshaping documentation or the groupby documentation.

factor

pandas has a data type for categorical data.

cut(c(1,2,3,4,5,6), 3)
factor(c(1,2,3,2,2,3))

In pandas this is accomplished with pd.cut and astype("category"):

In [41]: pd.cut(pd.Series([1, 2, 3, 4, 5, 6]), 3)


Out[41]:
0 (0.995, 2.667]
1 (0.995, 2.667]
2 (2.667, 4.333]
3 (2.667, 4.333]
4 (4.333, 6.0]
5 (4.333, 6.0]
dtype: category
Categories (3, interval[float64]): [(0.995, 2.667] < (2.667, 4.333] < (4.333, 6.0]]

In [42]: pd.Series([1, 2, 3, 2, 2, 3]).astype("category")


Out[42]:
0 1
1 2
2 3
3 2
4 2
5 3
dtype: category
Categories (3, int64): [1, 2, 3]

For more details and examples see the categorical introduction and the API documentation. There is also documentation
on the differences from R's factor.


Comparison with SQL

Since many potential pandas users have some familiarity with SQL, this page is meant to provide some examples of
how various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows:

In [1]: import pandas as pd

In [2]: import numpy as np

Most of the examples will utilize the tips dataset found within pandas tests. We’ll read the data into a DataFrame
called tips and assume we have a database table of the same name and structure.

In [3]: url = ('https://raw.github.com/pandas-dev'


...: '/pandas/master/pandas/tests/data/tips.csv')
...:

In [4]: tips = pd.read_csv(url)

In [5]: tips.head()
Out[5]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4

SELECT

In SQL, selection is done using a comma-separated list of columns you’d like to select (or a * to select all columns):

SELECT total_bill, tip, smoker, time


FROM tips
LIMIT 5;

With pandas, column selection is done by passing a list of column names to your DataFrame:

In [6]: tips[['total_bill', 'tip', 'smoker', 'time']].head(5)


Out[6]:
total_bill tip smoker time
0 16.99 1.01 No Dinner
1 10.34 1.66 No Dinner
2 21.01 3.50 No Dinner
3 23.68 3.31 No Dinner
4 24.59 3.61 No Dinner

Calling the DataFrame without the list of column names would display all columns (akin to SQL’s *).
In SQL, you can add a calculated column:


SELECT *, tip/total_bill as tip_rate


FROM tips
LIMIT 5;

With pandas, you can use the DataFrame.assign() method of a DataFrame to append a new column:

In [7]: tips.assign(tip_rate=tips['tip'] / tips['total_bill']).head(5)


Out[7]:
total_bill tip sex smoker day time size tip_rate
0 16.99 1.01 Female No Sun Dinner 2 0.059447
1 10.34 1.66 Male No Sun Dinner 3 0.160542
2 21.01 3.50 Male No Sun Dinner 3 0.166587
3 23.68 3.31 Male No Sun Dinner 2 0.139780
4 24.59 3.61 Female No Sun Dinner 4 0.146808

WHERE

Filtering in SQL is done via a WHERE clause.

SELECT *
FROM tips
WHERE time = 'Dinner'
LIMIT 5;

DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.

In [8]: tips[tips['time'] == 'Dinner'].head(5)


Out[8]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4

The above statement is simply passing a Series of True/False objects to the DataFrame, returning all rows with
True.

In [9]: is_dinner = tips['time'] == 'Dinner'

In [10]: is_dinner.value_counts()
Out[10]:
True 176
False 68
Name: time, dtype: int64

In [11]: tips[is_dinner].head(5)
Out[11]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4

Just like SQL’s OR and AND, multiple conditions can be passed to a DataFrame using | (OR) and & (AND).


-- tips of more than $5.00 at Dinner meals


SELECT *
FROM tips
WHERE time = 'Dinner' AND tip > 5.00;

# tips of more than $5.00 at Dinner meals


In [12]: tips[(tips['time'] == 'Dinner') & (tips['tip'] > 5.00)]
Out[12]:
total_bill tip sex smoker day time size
23 39.42 7.58 Male No Sat Dinner 4
44 30.40 5.60 Male No Sun Dinner 4
47 32.40 6.00 Male No Sun Dinner 4
52 34.81 5.20 Female No Sun Dinner 4
59 48.27 6.73 Male No Sat Dinner 4
116 29.93 5.07 Male No Sun Dinner 4
155 29.85 5.14 Female No Sun Dinner 5
170 50.81 10.00 Male Yes Sat Dinner 3
172 7.25 5.15 Male Yes Sun Dinner 2
181 23.33 5.65 Male Yes Sun Dinner 2
183 23.17 6.50 Male Yes Sun Dinner 4
211 25.89 5.16 Male Yes Sat Dinner 4
212 48.33 9.00 Male No Sat Dinner 4
214 28.17 6.50 Female Yes Sat Dinner 3
239 29.03 5.92 Male No Sat Dinner 3

-- tips by parties of at least 5 diners OR bill total was more than $45
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;

# tips by parties of at least 5 diners OR bill total was more than $45
In [13]: tips[(tips['size'] >= 5) | (tips['total_bill'] > 45)]
Out[13]:
total_bill tip sex smoker day time size
59 48.27 6.73 Male No Sat Dinner 4
125 29.80 4.20 Female No Thur Lunch 6
141 34.30 6.70 Male No Thur Lunch 6
142 41.19 5.00 Male No Thur Lunch 5
143 27.05 5.00 Female No Thur Lunch 6
155 29.85 5.14 Female No Sun Dinner 5
156 48.17 5.00 Male No Sun Dinner 6
170 50.81 10.00 Male Yes Sat Dinner 3
182 45.35 3.50 Male Yes Sun Dinner 3
185 20.69 5.00 Male No Sun Dinner 5
187 30.46 2.00 Male Yes Sun Dinner 5
212 48.33 9.00 Male No Sat Dinner 4
216 28.15 3.00 Male Yes Sat Dinner 5

NULL checking is done using the notna() and isna() methods.

In [14]: frame = pd.DataFrame({'col1': ['A', 'B', np.NaN, 'C', 'D'],


....: 'col2': ['F', np.NaN, 'G', 'H', 'I']})
....:

In [15]: frame
Out[15]:


col1 col2
0 A F
1 B NaN
2 NaN G
3 C H
4 D I

Assume we have a table of the same structure as our DataFrame above. We can see only the records where col2 IS
NULL with the following query:
SELECT *
FROM frame
WHERE col2 IS NULL;

In [16]: frame[frame['col2'].isna()]
Out[16]:
col1 col2
1 B NaN

Getting items where col1 IS NOT NULL can be done with notna().
SELECT *
FROM frame
WHERE col1 IS NOT NULL;

In [17]: frame[frame['col1'].notna()]
Out[17]:
col1 col2
0 A F
1 B NaN
3 C H
4 D I

GROUP BY

In pandas, SQL’s GROUP BY operations are performed using the similarly named groupby() method.
groupby() typically refers to a process where we'd like to split a dataset into groups, apply some function (typically
an aggregation), and then combine the groups together.
A common SQL operation would be getting the count of records in each group throughout a dataset. For instance, a
query getting us the number of tips left by sex:
SELECT sex, count(*)
FROM tips
GROUP BY sex;
/*
Female 87
Male 157
*/

The pandas equivalent would be:


In [18]: tips.groupby('sex').size()
Out[18]:


sex
Female 87
Male 157
dtype: int64

Notice that in the pandas code we used size() and not count(). This is because count() applies the function
to each column, returning the number of not null records within each.
In [19]: tips.groupby('sex').count()
Out[19]:
total_bill tip smoker day time size
sex
Female 87 87 87 87 87 87
Male 157 157 157 157 157 157

Alternatively, we could have applied the count() method to an individual column:


In [20]: tips.groupby('sex')['total_bill'].count()
Out[20]:
sex
Female 87
Male 157
Name: total_bill, dtype: int64

Multiple functions can also be applied at once. For instance, say we’d like to see how tip amount differs by day of
the week - agg() allows you to pass a dictionary to your grouped DataFrame, indicating which functions to apply to
specific columns.
SELECT day, AVG(tip), COUNT(*)
FROM tips
GROUP BY day;
/*
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thur 2.771452 62
*/

In [21]: tips.groupby('day').agg({'tip': np.mean, 'day': np.size})


Out[21]:
tip day
day
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thur 2.771452 62

Grouping by more than one column is done by passing a list of columns to the groupby() method.
SELECT smoker, day, COUNT(*), AVG(tip)
FROM tips
GROUP BY smoker, day;
/*
smoker day
No Fri 4 2.812500
Sat 45 3.102889


Sun 57 3.167895
Thur 45 2.673778
Yes Fri 15 2.714000
Sat 42 2.875476
Sun 19 3.516842
Thur 17 3.030000
*/

In [22]: tips.groupby(['smoker', 'day']).agg({'tip': [np.size, np.mean]})


Out[22]:
tip
size mean
smoker day
No Fri 4.0 2.812500
Sat 45.0 3.102889
Sun 57.0 3.167895
Thur 45.0 2.673778
Yes Fri 15.0 2.714000
Sat 42.0 2.875476
Sun 19.0 3.516842
Thur 17.0 3.030000

JOIN

JOINs can be performed with join() or merge(). By default, join() will join the DataFrames on their indices.
Each method has parameters allowing you to specify the type of join to perform (LEFT, RIGHT, INNER, FULL) or
the columns to join on (column names or indices).

In [23]: df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'],


....: 'value': np.random.randn(4)})
....:

In [24]: df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'],


....: 'value': np.random.randn(4)})
....:

Assume we have two database tables of the same name and structure as our DataFrames.
Now let’s go over the various types of JOINs.

INNER JOIN

SELECT *
FROM df1
INNER JOIN df2
ON df1.key = df2.key;

# merge performs an INNER JOIN by default


In [25]: pd.merge(df1, df2, on='key')
Out[25]:
key value_x value_y
0 B -0.282863 1.212112


1 D -1.135632 -0.173215
2 D -1.135632 0.119209

merge() also offers parameters for cases when you’d like to join one DataFrame’s column with another DataFrame’s
index.

In [26]: indexed_df2 = df2.set_index('key')

In [27]: pd.merge(df1, indexed_df2, left_on='key', right_index=True)


Out[27]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
3 D -1.135632 0.119209
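
The same result can also be obtained with DataFrame.join(), which by default aligns on the other frame's index. Here is a minimal sketch assuming the df1 and indexed_df2 objects created above (the suffix names are arbitrary choices, not part of the original example):

# join df1's 'key' column against indexed_df2's index;
# lsuffix/rsuffix disambiguate the overlapping 'value' column
joined = df1.join(indexed_df2, on='key', how='inner', lsuffix='_x', rsuffix='_y')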

LEFT OUTER JOIN

-- show all records from df1


SELECT *
FROM df1
LEFT OUTER JOIN df2
ON df1.key = df2.key;

# show all records from df1


In [28]: pd.merge(df1, df2, on='key', how='left')
Out[28]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209

RIGHT JOIN

-- show all records from df2


SELECT *
FROM df1
RIGHT OUTER JOIN df2
ON df1.key = df2.key;

# show all records from df2


In [29]: pd.merge(df1, df2, on='key', how='right')
Out[29]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236


FULL JOIN

pandas also allows for FULL JOINs, which display both sides of the dataset, whether or not the joined columns find a
match. As of writing, FULL JOINs are not supported in all RDBMS; MySQL, for example, does not support them.

-- show all records from both tables


SELECT *
FROM df1
FULL OUTER JOIN df2
ON df1.key = df2.key;

# show all records from both frames


In [30]: pd.merge(df1, df2, on='key', how='outer')
Out[30]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236

UNION

UNION ALL can be performed using concat().


In [31]: df1 = pd.DataFrame({'city': ['Chicago', 'San Francisco', 'New York City'],
   ....:                     'rank': range(1, 4)})
....:

In [32]: df2 = pd.DataFrame({'city': ['Chicago', 'Boston', 'Los Angeles'],


....: 'rank': [1, 4, 5]})
....:

SELECT city, rank


FROM df1
UNION ALL
SELECT city, rank
FROM df2;
/*
city rank
Chicago 1
San Francisco 2
New York City 3
Chicago 1
Boston 4
Los Angeles 5
*/

In [33]: pd.concat([df1, df2])


Out[33]:
city rank
0 Chicago 1
1 San Francisco 2
2 New York City 3
0 Chicago 1
1 Boston 4
2 Los Angeles 5

SQL’s UNION is similar to UNION ALL; however, UNION will remove duplicate rows.

SELECT city, rank


FROM df1
UNION
SELECT city, rank
FROM df2;
-- notice that there is only one Chicago record this time
/*
city rank
Chicago 1
San Francisco 2
New York City 3
Boston 4
Los Angeles 5
*/

In pandas, you can use concat() in conjunction with drop_duplicates().

In [34]: pd.concat([df1, df2]).drop_duplicates()


Out[34]:
city rank
0 Chicago 1
1 San Francisco 2
2 New York City 3
1 Boston 4
2 Los Angeles 5

Pandas equivalents for some SQL analytic and aggregate functions

Top N rows with offset

-- MySQL
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;

In [35]: tips.nlargest(10 + 5, columns='tip').tail(10)


Out[35]:
total_bill tip sex smoker day time size
183 23.17 6.50 Male Yes Sun Dinner 4
214 28.17 6.50 Female Yes Sat Dinner 3
47 32.40 6.00 Male No Sun Dinner 4
239 29.03 5.92 Male No Sat Dinner 3
88 24.71 5.85 Male No Thur Lunch 2
181 23.33 5.65 Male Yes Sun Dinner 2
44 30.40 5.60 Male No Sun Dinner 4
52 34.81 5.20 Female No Sun Dinner 4
85 34.83 5.17 Female No Thur Lunch 4
211 25.89 5.16 Male Yes Sat Dinner 4
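
An equivalent sketch of LIMIT 10 OFFSET 5 sorts by tip and slices by position (assuming the tips frame above; tie-breaking may differ slightly from nlargest()):

# sort by tip descending, skip the first 5 rows, then take the next 10
tips.sort_values('tip', ascending=False).iloc[5:15]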

Top N rows per group

-- Oracle's ROW_NUMBER() analytic function


SELECT * FROM (
SELECT
t.*,
ROW_NUMBER() OVER(PARTITION BY day ORDER BY total_bill DESC) AS rn
FROM tips t
)
WHERE rn < 3
ORDER BY day, rn;

In [36]: (tips.assign(rn=tips.sort_values(['total_bill'], ascending=False)


....: .groupby(['day'])
....: .cumcount() + 1)
....: .query('rn < 3')
....: .sort_values(['day', 'rn']))
....:
Out[36]:
total_bill tip sex smoker day time size rn
95 40.17 4.73 Male Yes Fri Dinner 4 1
90 28.97 3.00 Male Yes Fri Dinner 2 2
170 50.81 10.00 Male Yes Sat Dinner 3 1
212 48.33 9.00 Male No Sat Dinner 4 2
156 48.17 5.00 Male No Sun Dinner 6 1
182 45.35 3.50 Male Yes Sun Dinner 3 2
197 43.11 5.00 Female Yes Thur Lunch 4 1
142 41.19 5.00 Male No Thur Lunch 5 2

The same result can be obtained using the rank(method='first') function:


In [37]: (tips.assign(rnk=tips.groupby(['day'])['total_bill']
....: .rank(method='first', ascending=False))
....: .query('rnk < 3')
....: .sort_values(['day', 'rnk']))
....:
Out[37]:
total_bill tip sex smoker day time size rnk
95 40.17 4.73 Male Yes Fri Dinner 4 1.0
90 28.97 3.00 Male Yes Fri Dinner 2 2.0
170 50.81 10.00 Male Yes Sat Dinner 3 1.0
212 48.33 9.00 Male No Sat Dinner 4 2.0
156 48.17 5.00 Male No Sun Dinner 6 1.0
182 45.35 3.50 Male Yes Sun Dinner 3 2.0
197 43.11 5.00 Female Yes Thur Lunch 4 1.0
142 41.19 5.00 Male No Thur Lunch 5 2.0

-- Oracle's RANK() analytic function


SELECT * FROM (
SELECT
t.*,
RANK() OVER(PARTITION BY sex ORDER BY tip) AS rnk
FROM tips t
WHERE tip < 2
)
WHERE rnk < 3
ORDER BY sex, rnk;

Let’s find tips with (rank < 3) per gender group for (tip < 2). Notice that when using the rank(method='min') function,
rnk_min remains the same for tied tip values (as with Oracle’s RANK() function).

In [38]: (tips[tips['tip'] < 2]


....: .assign(rnk_min=tips.groupby(['sex'])['tip']
....: .rank(method='min'))
....: .query('rnk_min < 3')
....: .sort_values(['sex', 'rnk_min']))
....:
Out[38]:
total_bill tip sex smoker day time size rnk_min
67 3.07 1.00 Female Yes Sat Dinner 1 1.0
92 5.75 1.00 Female Yes Fri Dinner 2 1.0
111 7.25 1.00 Female No Sat Dinner 1 1.0
236 12.60 1.00 Male Yes Sat Dinner 2 1.0
237 32.83 1.17 Male Yes Sat Dinner 2 2.0

UPDATE

UPDATE tips
SET tip = tip*2
WHERE tip < 2;

In [39]: tips.loc[tips['tip'] < 2, 'tip'] *= 2
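
As an alternative sketch (not part of the original session), the same conditional update can be expressed with Series.mask(), which replaces values wherever the condition holds:

# double tips under 2; equivalent to the .loc assignment above
tips['tip'] = tips['tip'].mask(tips['tip'] < 2, tips['tip'] * 2)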

DELETE

DELETE FROM tips


WHERE tip > 9;

In pandas we select the rows that should remain instead of deleting them:

In [40]: tips = tips.loc[tips['tip'] <= 9]
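
An equivalent sketch deletes the offending rows by label with DataFrame.drop(), assuming the default integer index:

# drop the rows whose tip exceeds 9, keeping everything else
tips = tips.drop(tips[tips['tip'] > 9].index)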


Comparison with SAS

For potential users coming from SAS this page is meant to demonstrate how different SAS operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows:

In [1]: import pandas as pd

In [2]: import numpy as np

Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) - the equivalent in SAS would be:

proc print data=df(obs=5);


run;

Data structures

General terminology translation


pandas SAS
DataFrame data set
column variable
row observation
groupby BY-group
NaN .

DataFrame / Series

A DataFrame in pandas is analogous to a SAS data set - a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set
using SAS’s DATA step, can also be accomplished in pandas.
A Series is the data structure that represents one column of a DataFrame. SAS doesn’t have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column in the
DATA step.


Index

Every DataFrame and Series has an Index - which are labels on the rows of the data. SAS does not have an
exactly analogous concept. A data set’s rows are essentially unlabeled, other than an implicit integer index that can be
accessed during the DATA step (_N_).
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.

Data input / output

Constructing a DataFrame from values

A SAS data set can be built from specified values by placing the data after a datalines statement and specifying
the column names.

data df;
input x y;
datalines;
1 2
3 4
5 6
;
run;
A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.

In [3]: df = pd.DataFrame({'x': [1, 3, 5], 'y': [2, 4, 6]})

In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6

Reading external data

Like SAS, pandas provides utilities for reading in data from many formats. The tips dataset, found within the pandas
tests (csv) will be used in many of the following examples.
SAS provides PROC IMPORT to read csv data into a data set.

proc import datafile='tips.csv' dbms=csv out=tips replace;


getnames=yes;
run;

The pandas method is read_csv(), which works similarly.


In [5]: url = ('https://raw.github.com/pandas-dev/'


...: 'pandas/master/pandas/tests/data/tips.csv')
...:

In [6]: tips = pd.read_csv(url)

In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4

Like PROC IMPORT, read_csv can take a number of parameters to specify how the data should be parsed. For
example, if the data was instead tab delimited, and did not have column names, the pandas command would be:

tips = pd.read_csv('tips.csv', sep='\t', header=None)

# alternatively, read_table is an alias to read_csv with tab delimiter


tips = pd.read_table('tips.csv', header=None)

In addition to text/csv, pandas supports a variety of other data formats such as Excel, HDF5, and SQL databases. These
are all read via a pd.read_* function. See the IO documentation for more details.

Exporting data
The inverse of PROC IMPORT in SAS is PROC EXPORT

proc export data=tips outfile='tips2.csv' dbms=csv;


run;

Similarly in pandas, the opposite of read_csv is to_csv(), and other data formats follow a similar api.

tips.to_csv('tips2.csv')
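
Note that to_csv() also writes the DataFrame's row index as the first column by default; if the SAS side should not receive that extra column, pass index=False (a minimal sketch, not part of the original example):

tips.to_csv('tips2.csv', index=False)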

Data operations

Operations on columns

In the DATA step, arbitrary math expressions can be used on new or existing columns.

data tips;
set tips;
total_bill = total_bill - 2;
new_bill = total_bill / 2;
run;

pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way.


In [8]: tips['total_bill'] = tips['total_bill'] - 2

In [9]: tips['new_bill'] = tips['total_bill'] / 2.0

In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295

Filtering

Filtering in SAS is done with an if or where statement, on one or more columns.

data tips;
set tips;
if total_bill > 10;
run;

data tips;
set tips;
where total_bill > 10;
/* equivalent in this case - where happens before the
DATA step begins and can also be used in PROC statements */
run;

DataFrames can be filtered in multiple ways; the most intuitive is boolean indexing.

In [11]: tips[tips['total_bill'] > 10].head()


Out[11]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
5 23.29 4.71 Male No Sun Dinner 4

If/then logic

In SAS, if/then logic can be used to create new columns.

data tips;
set tips;
format bucket $4.;

if total_bill < 10 then bucket = 'low';


else bucket = 'high';
run;

The same operation in pandas can be accomplished using the where method from numpy.


In [12]: tips['bucket'] = np.where(tips['total_bill'] < 10, 'low', 'high')

In [13]: tips.head()
Out[13]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
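
For more than two buckets, numpy's select() generalizes the same idea; a minimal sketch in which the bin edges and labels are illustrative assumptions, not part of the original example:

# conditions are checked in order; rows matching none fall back to the default
conditions = [tips['total_bill'] < 10, tips['total_bill'] < 20]
tips['bucket3'] = np.select(conditions, ['low', 'mid'], default='high')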

Date functionality

SAS provides a variety of functions to do operations on date/datetime columns.

data tips;
set tips;
format date1 date2 date1_plusmonth mmddyy10.;
date1 = mdy(1, 15, 2013);
date2 = mdy(2, 15, 2015);
date1_year = year(date1);
date2_month = month(date2);
* shift date to beginning of next interval;
date1_next = intnx('MONTH', date1, 1);
* count intervals between dates;
months_between = intck('MONTH', date1, date2);
run;
The equivalent pandas operations are shown below. In addition to these functions, pandas supports other Time Series
features not available in Base SAS (such as resampling and custom offsets) - see the timeseries documentation for
more details.

In [14]: tips['date1'] = pd.Timestamp('2013-01-15')

In [15]: tips['date2'] = pd.Timestamp('2015-02-15')

In [16]: tips['date1_year'] = tips['date1'].dt.year

In [17]: tips['date2_month'] = tips['date2'].dt.month

In [18]: tips['date1_next'] = tips['date1'] + pd.offsets.MonthBegin()

In [19]: tips['months_between'] = (
....: tips['date2'].dt.to_period('M') - tips['date1'].dt.to_period('M'))
....:

In [20]: tips[['date1', 'date2', 'date1_year', 'date2_month',


....: 'date1_next', 'months_between']].head()
....:
Out[20]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
1 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
2 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
3 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
4 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>


Selection of columns

SAS provides keywords in the DATA step to select, drop, and rename columns.

data tips;
set tips;
keep sex total_bill tip;
run;

data tips;
set tips;
drop sex;
run;

data tips;
set tips;
rename total_bill=total_bill_2;
run;

The same operations are expressed in pandas below.

# keep
In [21]: tips[['sex', 'total_bill', 'tip']].head()
Out[21]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61

# drop
In [22]: tips.drop('sex', axis=1).head()
Out[22]:
total_bill tip smoker day time size
0 14.99 1.01 No Sun Dinner 2
1 8.34 1.66 No Sun Dinner 3
2 19.01 3.50 No Sun Dinner 3
3 21.68 3.31 No Sun Dinner 2
4 22.59 3.61 No Sun Dinner 4

# rename
In [23]: tips.rename(columns={'total_bill': 'total_bill_2'}).head()
Out[23]:
total_bill_2 tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4


Sorting by values

Sorting in SAS is accomplished via PROC SORT

proc sort data=tips;


by sex total_bill;
run;

pandas objects have a sort_values() method, which takes a list of columns to sort by.

In [24]: tips = tips.sort_values(['sex', 'total_bill'])

In [25]: tips.head()
Out[25]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2

String processing

Length

SAS determines the length of a character string with the LENGTHN and LENGTHC functions. LENGTHN excludes
trailing blanks and LENGTHC includes trailing blanks.
data _null_;
set tips;
put(LENGTHN(time));
put(LENGTHC(time));
run;

Python determines the length of a character string with the len function. len includes trailing blanks. Use len and
rstrip to exclude trailing blanks.

In [26]: tips['time'].str.len().head()
Out[26]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64

In [27]: tips['time'].str.rstrip().str.len().head()
Out[27]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64


Find

SAS determines the position of a character in a string with the FINDW function. FINDW takes the string defined by
the first argument and searches for the first position of the substring you supply as the second argument.

data _null_;
set tips;
put(FINDW(sex,'ale'));
run;

Python determines the position of a character in a string with the find function. find searches for the first position
of the substring. If the substring is found, the function returns its position. Keep in mind that Python indexes are
zero-based and the function will return -1 if it fails to find the substring.

In [28]: tips['sex'].str.find("ale").head()
Out[28]:
67 3
92 3
111 3
145 3
135 3
Name: sex, dtype: int64

Substring

SAS extracts a substring from a string based on its position with the SUBSTR function.
data _null_;
set tips;
put(substr(sex,1,1));
run;

With pandas you can use [] notation to extract a substring from a string by position locations. Keep in mind that
Python indexes are zero-based.

In [29]: tips['sex'].str[0:1].head()
Out[29]:
67 F
92 F
111 F
145 F
135 F
Name: sex, dtype: object

Scan

The SAS SCAN function returns the nth word from a string. The first argument is the string you want to parse and the
second argument specifies which word you want to extract.

data firstlast;
input String $60.;
First_Name = scan(string, 1);
Last_Name = scan(string, -1);
datalines2;
John Smith;
Jane Cook;
;;;
run;

Python can extract the nth word by splitting the string on whitespace; regular expressions (see the sketch after this
example) offer a much more powerful approach, but this just shows a simple one.

In [30]: firstlast = pd.DataFrame({'String': ['John Smith', 'Jane Cook']})

In [31]: firstlast['First_Name'] = firstlast['String'].str.split(" ", expand=True)[0]

In [32]: firstlast['Last_Name'] = firstlast['String'].str.rsplit(" ", n=1, expand=True)[1]

In [33]: firstlast
Out[33]:
String First_Name Last_Name
0 John Smith John Smith
1 Jane Cook Jane Cook
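
As noted above, regular expressions offer a more general approach; here is a minimal sketch with Series.str.extract() (the pattern assumes simple two-word names and is illustrative only):

# named capture groups become columns 'First' and 'Last'
name_parts = firstlast['String'].str.extract(r'(?P<First>\S+)\s+(?P<Last>\S+)')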

Upcase, lowcase, and propcase

The SAS UPCASE LOWCASE and PROPCASE functions change the case of the argument.

data firstlast;
input String $60.;
string_up = UPCASE(string);
string_low = LOWCASE(string);
string_prop = PROPCASE(string);
datalines2;
John Smith;
Jane Cook;
;;;
run;

The equivalent Python functions are upper, lower, and title.

In [34]: firstlast = pd.DataFrame({'String': ['John Smith', 'Jane Cook']})

In [35]: firstlast['string_up'] = firstlast['String'].str.upper()

In [36]: firstlast['string_low'] = firstlast['String'].str.lower()

In [37]: firstlast['string_prop'] = firstlast['String'].str.title()

In [38]: firstlast
Out[38]:
String string_up string_low string_prop
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook


Merging

The following tables will be used in the merge examples

In [39]: df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'],


....: 'value': np.random.randn(4)})
....:

In [40]: df1
Out[40]:
key value
0 A 0.469112
1 B -0.282863
2 C -1.509059
3 D -1.135632

In [41]: df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'],


....: 'value': np.random.randn(4)})
....:

In [42]: df2
Out[42]:
key value
0 B 1.212112
1 D -0.173215
2 D 0.119209
3 E -1.044236

In SAS, data must be explicitly sorted before merging. Different types of joins are accomplished using the in= dummy
variables to track whether a match was found in one or both input frames.

proc sort data=df1;


by key;
run;

proc sort data=df2;


by key;
run;

data left_join inner_join right_join outer_join;


merge df1(in=a) df2(in=b);

if a and b then output inner_join;


if a then output left_join;
if b then output right_join;
if a or b then output outer_join;
run;

pandas DataFrames have a merge() method, which provides similar functionality. Note that the data does not have
to be sorted ahead of time, and different join types are accomplished via the how keyword.

In [43]: inner_join = df1.merge(df2, on=['key'], how='inner')

In [44]: inner_join
Out[44]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209

In [45]: left_join = df1.merge(df2, on=['key'], how='left')

In [46]: left_join
Out[46]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209

In [47]: right_join = df1.merge(df2, on=['key'], how='right')

In [48]: right_join
Out[48]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236

In [49]: outer_join = df1.merge(df2, on=['key'], how='outer')

In [50]: outer_join
Out[50]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236

Missing data

Like SAS, pandas has a representation for missing data - which is the special float value NaN (not a number). Many
of the semantics are the same; for example, missing data propagates through numeric operations and is ignored by
default for aggregations.
In [51]: outer_join
Out[51]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236

In [52]: outer_join['value_x'] + outer_join['value_y']


Out[52]:
0 NaN
1 0.929249
2 NaN
3 -1.308847
4 -1.016424
5 NaN
dtype: float64

In [53]: outer_join['value_x'].sum()
Out[53]: -3.5940742896293765

One difference is that missing data cannot be compared to its sentinel value. For example, in SAS you could do this
to filter missing values.

data outer_join_nulls;
set outer_join;
if value_x = .;
run;

data outer_join_no_nulls;
set outer_join;
if value_x ^= .;
run;

This doesn’t work in pandas. Instead, the pd.isna or pd.notna functions should be used for comparisons.

In [54]: outer_join[pd.isna(outer_join['value_x'])]
Out[54]:
key value_x value_y
5 E NaN -1.044236

In [55]: outer_join[pd.notna(outer_join['value_x'])]
Out[55]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209

pandas also provides a variety of methods to work with missing data - some of which would be challenging to express
in SAS. For example, there are methods to drop all rows with any missing values, replacing missing values with a
specified value, like the mean, or forward filling from previous rows. See the missing data documentation for more.

In [56]: outer_join.dropna()
Out[56]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209

In [57]: outer_join.fillna(method='ffill')
Out[57]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E -1.135632 -1.044236

In [58]: outer_join['value_x'].fillna(outer_join['value_x'].mean())
Out[58]:
0 0.469112
1 -0.282863
2 -1.509059
3 -1.135632
4 -1.135632
5 -0.718815
Name: value_x, dtype: float64

GroupBy

Aggregation

SAS’s PROC SUMMARY can be used to group by one or more key variables and compute aggregations on numeric
columns.

proc summary data=tips nway;


class sex smoker;
var total_bill tip;
output out=tips_summed sum=;
run;
pandas provides a flexible groupby mechanism that allows similar aggregations. See the groupby documentation for
more details and examples.

In [59]: tips_summed = tips.groupby(['sex', 'smoker'])[['total_bill', 'tip']].sum()

In [60]: tips_summed.head()
Out[60]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07

Transformation

In SAS, if the group aggregations need to be used with the original frame, it must be merged back together. For
example, to subtract the mean for each observation by smoker group.

proc summary data=tips missing nway;


class smoker;
var total_bill;
output out=smoker_means mean(total_bill)=group_bill;
run;

proc sort data=tips;
by smoker;
run;

data tips;
merge tips(in=a) smoker_means(in=b);
by smoker;
adj_total_bill = total_bill - group_bill;
if a and b;
run;

pandas groupby provides a transform mechanism that allows these type of operations to be succinctly expressed
in one operation.

In [61]: gb = tips.groupby('smoker')['total_bill']

In [62]: tips['adj_total_bill'] = tips['total_bill'] - gb.transform('mean')

In [63]: tips.head()
Out[63]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278

By group processing

In addition to aggregation, pandas groupby can be used to replicate most other by group processing from SAS. For
example, this DATA step reads the data by sex/smoker group and filters to the first entry for each.

proc sort data=tips;


by sex smoker;
run;

data tips_first;
set tips;
by sex smoker;
if FIRST.sex or FIRST.smoker then output;
run;

In pandas this would be written as:

In [64]: tips.groupby(['sex', 'smoker']).first()


Out[64]:
total_bill tip day time size adj_total_bill
sex smoker
Female No 5.25 1.00 Sat Dinner 1 -11.938278
Yes 1.07 1.00 Sat Dinner 1 -17.686344
Male No 5.51 2.00 Thur Lunch 2 -11.678278
Yes 5.25 5.15 Sun Dinner 2 -13.506344


Other Considerations

Disk vs memory

pandas operates exclusively in memory, whereas a SAS data set exists on disk. This means that the size of data able to
be loaded in pandas is limited by your machine’s memory, but also that the operations on that data may be faster.
If out-of-core processing is needed, one possibility is the dask.dataframe library (currently in development), which
provides a subset of pandas functionality for an on-disk DataFrame.
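
A minimal sketch of the dask.dataframe approach, assuming dask is installed and using an illustrative file name and columns (not part of the original guide):

import dask.dataframe as dd

# dask reads lazily and works on the file in partitions instead of loading it all at once
ddf = dd.read_csv('large_tips.csv')
result = ddf.groupby('day')['tip'].mean().compute()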

Data interop

pandas provides a read_sas() method that can read SAS data saved in the XPORT or SAS7BDAT binary format.

libname xportout xport 'transport-file.xpt';


data xportout.tips;
set tips(rename=(total_bill=tbill));
* xport variable names limited to 6 characters;
run;

df = pd.read_sas('transport-file.xpt')
df = pd.read_sas('binary-file.sas7bdat')

You can also specify the file format directly. By default, pandas will try to infer the file format based on its extension.

df = pd.read_sas('transport-file.xpt', format='xport')
df = pd.read_sas('binary-file.sas7bdat', format='sas7bdat')
XPORT is a relatively limited format and the parsing of it is not as optimized as some of the other pandas readers. An
alternative way to interop data between SAS and pandas is to serialize to csv.

# version 0.17, 10M rows

In [8]: %time df = pd.read_sas('big.xpt')


Wall time: 14.6 s

In [9]: %time df = pd.read_csv('big.csv')


Wall time: 4.86 s

Comparison with Stata

For potential users coming from Stata this page is meant to demonstrate how different Stata operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows. This means that we can refer to the libraries as pd and np,
respectively, for the rest of the document.

In [1]: import pandas as pd

In [2]: import numpy as np


Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) – the equivalent in Stata would be:

list in 1/5

Data structures

General terminology translation

pandas Stata
DataFrame data set
column variable
row observation
groupby bysort
NaN .

DataFrame / Series

A DataFrame in pandas is analogous to a Stata data set – a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set in
Stata can also be accomplished in pandas.
A Series is the data structure that represents one column of a DataFrame. Stata doesn’t have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column of a data
set in Stata.

Index

Every DataFrame and Series has an Index – labels on the rows of the data. Stata does not have an exactly
analogous concept. In Stata, a data set’s rows are essentially unlabeled, other than an implicit integer index that can
be accessed with _n.
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.


Data input / output

Constructing a DataFrame from values

A Stata data set can be built from specified values by placing the data after an input statement and specifying the
column names.
input x y
1 2
3 4
5 6
end

A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.
In [3]: df = pd.DataFrame({'x': [1, 3, 5], 'y': [2, 4, 6]})

In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6

Reading external data


Like Stata, pandas provides utilities for reading in data from many formats. The tips data set, found within the
pandas tests (csv) will be used in many of the following examples.
Stata provides import delimited to read csv data into a data set in memory. If the tips.csv file is in the
current working directory, we can import it as follows.
import delimited tips.csv

The pandas method is read_csv(), which works similarly. Additionally, it will automatically download the data
set if presented with a url.
In [5]: url = ('https://raw.github.com/pandas-dev'
...: '/pandas/master/pandas/tests/data/tips.csv')
...:

In [6]: tips = pd.read_csv(url)

In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4

Like import delimited, read_csv() can take a number of parameters to specify how the data should be
parsed. For example, if the data were instead tab delimited, did not have column names, and existed in the current
working directory, the pandas command would be:


tips = pd.read_csv('tips.csv', sep='\t', header=None)

# alternatively, read_table is an alias to read_csv with tab delimiter


tips = pd.read_table('tips.csv', header=None)

Pandas can also read Stata data sets in .dta format with the read_stata() function.
df = pd.read_stata('data.dta')
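
For .dta files that are too large to load at once, read_stata() can also process the file in chunks; a minimal sketch (the chunk size and file name are illustrative):

# iterate over the file 10,000 rows at a time
for chunk in pd.read_stata('data.dta', chunksize=10000):
    print(chunk.shape)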

In addition to text/csv and Stata files, pandas supports a variety of other data formats such as Excel, SAS, HDF5,
Parquet, and SQL databases. These are all read via a pd.read_* function. See the IO documentation for more
details.

Exporting data

The inverse of import delimited in Stata is export delimited


export delimited tips2.csv

Similarly in pandas, the opposite of read_csv is DataFrame.to_csv().


tips.to_csv('tips2.csv')

Pandas can also export to Stata file format with the DataFrame.to_stata() method.
tips.to_stata('tips2.dta')
Data operations

Operations on columns

In Stata, arbitrary math expressions can be used with the generate and replace commands on new or existing
columns. The drop command drops the column from the data set.
replace total_bill = total_bill - 2
generate new_bill = total_bill / 2
drop new_bill

pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way. The DataFrame.drop() method drops a column from the DataFrame.
In [8]: tips['total_bill'] = tips['total_bill'] - 2

In [9]: tips['new_bill'] = tips['total_bill'] / 2

In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
In [11]: tips = tips.drop('new_bill', axis=1)

Filtering

Filtering in Stata is done with an if clause on one or more columns.

list if total_bill > 10

DataFrames can be filtered in multiple ways; the most intuitive is boolean indexing.

In [12]: tips[tips['total_bill'] > 10].head()


Out[12]:
total_bill tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
5 23.29 4.71 Male No Sun Dinner 4

If/then logic

In Stata, an if clause can also be used to create new columns.


generate bucket = "low" if total_bill < 10
replace bucket = "high" if total_bill >= 10

The same operation in pandas can be accomplished using the where method from numpy.

In [13]: tips['bucket'] = np.where(tips['total_bill'] < 10, 'low', 'high')

In [14]: tips.head()
Out[14]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
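
When more than two categories are needed, pandas.cut() bins a numeric column into labeled intervals; a minimal sketch in which the bin edges and labels are illustrative assumptions, not part of the original example:

# values are assigned to the interval they fall into; anything above 20 becomes 'high'
tips['bucket3'] = pd.cut(tips['total_bill'], bins=[0, 10, 20, np.inf],
                         labels=['low', 'mid', 'high'])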

Date functionality

Stata provides a variety of functions to do operations on date/datetime columns.

generate date1 = mdy(1, 15, 2013)


generate date2 = date("Feb152015", "MDY")

generate date1_year = year(date1)


generate date2_month = month(date2)

* shift date to beginning of next month


generate date1_next = mdy(month(date1) + 1, 1, year(date1)) if month(date1) != 12
replace date1_next = mdy(1, 1, year(date1) + 1) if month(date1) == 12
generate months_between = mofd(date2) - mofd(date1)

list date1 date2 date1_year date2_month date1_next months_between

The equivalent pandas operations are shown below. In addition to these functions, pandas supports other Time Series
features not available in Stata (such as time zone handling and custom offsets) – see the timeseries documentation for
more details.

In [15]: tips['date1'] = pd.Timestamp('2013-01-15')

In [16]: tips['date2'] = pd.Timestamp('2015-02-15')

In [17]: tips['date1_year'] = tips['date1'].dt.year

In [18]: tips['date2_month'] = tips['date2'].dt.month

In [19]: tips['date1_next'] = tips['date1'] + pd.offsets.MonthBegin()

In [20]: tips['months_between'] = (tips['date2'].dt.to_period('M')


....: - tips['date1'].dt.to_period('M'))
....:

In [21]: tips[['date1', 'date2', 'date1_year', 'date2_month', 'date1_next',


....: 'months_between']].head()
....:
Out[21]:
date1 date2 date1_year date2_month date1_next months_between
0 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
1 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
2 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
3 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>
4 2013-01-15 2015-02-15 2013 2 2013-02-01 <25 * MonthEnds>

Selection of columns

Stata provides keywords to select, drop, and rename columns.

keep sex total_bill tip

drop sex

rename total_bill total_bill_2

The same operations are expressed in pandas below. Note that in contrast to Stata, these operations do not happen in
place. To make these changes persist, assign the operation back to a variable.

# keep
In [22]: tips[['sex', 'total_bill', 'tip']].head()
Out[22]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61

# drop
In [23]: tips.drop('sex', axis=1).head()
Out[23]:
total_bill tip smoker day time size
0 14.99 1.01 No Sun Dinner 2
1 8.34 1.66 No Sun Dinner 3
2 19.01 3.50 No Sun Dinner 3
3 21.68 3.31 No Sun Dinner 2
4 22.59 3.61 No Sun Dinner 4

# rename
In [24]: tips.rename(columns={'total_bill': 'total_bill_2'}).head()
Out[24]:
total_bill_2 tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
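
As noted above, these methods return new objects rather than modifying tips in place; a minimal sketch of persisting a change by assigning the result to a variable (the name tips_renamed is hypothetical and avoids altering the session above):

# keep the renamed frame around without touching the original tips
tips_renamed = tips.rename(columns={'total_bill': 'total_bill_2'})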

Sorting by values

Sorting in Stata is accomplished via sort
sort sex total_bill

pandas objects have a DataFrame.sort_values() method, which takes a list of columns to sort by.

In [25]: tips = tips.sort_values(['sex', 'total_bill'])

In [26]: tips.head()
Out[26]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2

String processing

Finding length of string

Stata determines the length of a character string with the strlen() and ustrlen() functions for ASCII and
Unicode strings, respectively.

generate strlen_time = strlen(time)


generate ustrlen_time = ustrlen(time)


Python determines the length of a character string with the len function. In Python 3, all strings are Unicode strings.
len includes trailing blanks. Use len and rstrip to exclude trailing blanks.
In [27]: tips['time'].str.len().head()
Out[27]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64

In [28]: tips['time'].str.rstrip().str.len().head()
Out[28]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64

Finding position of substring

Stata determines the position of a character in a string with the strpos() function. This takes the string defined by
the first argument and searches for the first position of the substring you supply as the second argument.
generate str_position = strpos(sex, "ale")
Python determines the position of a character in a string with the find() function. find searches for the first
position of the substring. If the substring is found, the function returns its position. Keep in mind that Python indexes
are zero-based and the function will return -1 if it fails to find the substring.
In [29]: tips['sex'].str.find("ale").head()
Out[29]:
67 3
92 3
111 3
145 3
135 3
Name: sex, dtype: int64

Extracting substring by position

Stata extracts a substring from a string based on its position with the substr() function.
generate short_sex = substr(sex, 1, 1)

With pandas you can use [] notation to extract a substring from a string by position locations. Keep in mind that
Python indexes are zero-based.
In [30]: tips['sex'].str[0:1].head()
Out[30]:
67 F
92 F
111 F
145 F
135 F
Name: sex, dtype: object

Extracting nth word

The Stata word() function returns the nth word from a string. The first argument is the string you want to parse and
the second argument specifies which word you want to extract.

clear
input str20 string
"John Smith"
"Jane Cook"
end

generate first_name = word(string, 1)

generate last_name = word(string, -1)

Python can extract the nth word by splitting the string on whitespace; regular expressions offer a much more powerful
approach, but this just shows a simple one.

In [31]: firstlast = pd.DataFrame({'string': ['John Smith', 'Jane Cook']})

In [32]: firstlast['First_Name'] = firstlast['string'].str.split(" ", expand=True)[0]


In [33]: firstlast['Last_Name'] = firstlast['string'].str.rsplit(" ", n=1, expand=True)[1]

In [34]: firstlast
Out[34]:
string First_Name Last_Name
0 John Smith John Smith
1 Jane Cook Jane Cook

Changing case

The Stata strupper(), strlower(), strproper(), ustrupper(), ustrlower(), and ustrtitle() functions change the case of
ASCII and Unicode strings, respectively.

clear
input str20 string
"John Smith"
"Jane Cook"
end

generate upper = strupper(string)


generate lower = strlower(string)
generate title = strproper(string)
list

The equivalent Python functions are upper, lower, and title.


In [35]: firstlast = pd.DataFrame({'string': ['John Smith', 'Jane Cook']})

In [36]: firstlast['upper'] = firstlast['string'].str.upper()

In [37]: firstlast['lower'] = firstlast['string'].str.lower()

In [38]: firstlast['title'] = firstlast['string'].str.title()

In [39]: firstlast
Out[39]:
string upper lower title
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook

Merging

The following tables will be used in the merge examples


In [40]: df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'],
....: 'value': np.random.randn(4)})
....:

In [41]: df1
Out[41]:
key value
0 A 0.469112
1 B -0.282863
2 C -1.509059
3 D -1.135632

In [42]: df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'],


....: 'value': np.random.randn(4)})
....:

In [43]: df2
Out[43]:
key value
0 B 1.212112
1 D -0.173215
2 D 0.119209
3 E -1.044236

In Stata, to perform a merge, one data set must be in memory and the other must be referenced as a file name on disk.
In contrast, Python must have both DataFrames already in memory.
By default, Stata performs an outer join, where all observations from both data sets are left in memory after the merge.
One can keep only observations from the initial data set, the merged data set, or the intersection of the two by using
the values created in the _merge variable.

* First create df2 and save to disk


clear
input str1 key
B
D
D
E
end
generate value = rnormal()
save df2.dta

* Now create df1 in memory


clear
input str1 key
A
B
C
D
end
generate value = rnormal()

preserve

* Left join
merge 1:n key using df2.dta
keep if _merge == 1

* Right join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 2

* Inner join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 3

* Outer join
restore
merge 1:n key using df2.dta

pandas DataFrames have a DataFrame.merge() method, which provides similar functionality. Note that different
join types are accomplished via the how keyword.

In [44]: inner_join = df1.merge(df2, on=['key'], how='inner')

In [45]: inner_join
Out[45]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209

In [46]: left_join = df1.merge(df2, on=['key'], how='left')

In [47]: left_join
Out[47]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
In [48]: right_join = df1.merge(df2, on=['key'], how='right')

In [49]: right_join
Out[49]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236

In [50]: outer_join = df1.merge(df2, on=['key'], how='outer')

In [51]: outer_join
Out[51]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
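
pandas can also produce a column analogous to Stata's _merge variable through the indicator argument of merge(); a minimal sketch using the frames above (the variable names are illustrative):

# the added _merge column takes the values 'left_only', 'right_only', or 'both'
flagged = df1.merge(df2, on=['key'], how='outer', indicator=True)
left_only = flagged[flagged['_merge'] == 'left_only']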

Missing data

Like Stata, pandas has a representation for missing data – the special float value NaN (not a number). Many of the
semantics are the same; for example missing data propagates through numeric operations, and is ignored by default
for aggregations.

In [52]: outer_join
Out[52]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236

In [53]: outer_join['value_x'] + outer_join['value_y']


Out[53]:
0 NaN
1 0.929249
2 NaN
3 -1.308847
4 -1.016424
5 NaN
dtype: float64

In [54]: outer_join['value_x'].sum()
Out[54]: -3.5940742896293765

One difference is that missing data cannot be compared to its sentinel value. For example, in Stata you could do this
to filter missing values.


* Keep missing values


list if value_x == .
* Keep non-missing values
list if value_x != .

This doesn’t work in pandas. Instead, the pd.isna() or pd.notna() functions should be used for comparisons.

In [55]: outer_join[pd.isna(outer_join['value_x'])]
Out[55]:
key value_x value_y
5 E NaN -1.044236

In [56]: outer_join[pd.notna(outer_join['value_x'])]
Out[56]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209

Pandas also provides a variety of methods to work with missing data – some of which would be challenging to express
in Stata. For example, there are methods to drop all rows with any missing values, replacing missing values with a
specified value, like the mean, or forward filling from previous rows. See the missing data documentation for more.

# Drop rows with any missing value


In [57]: outer_join.dropna()
Out[57]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209

# Fill forwards
In [58]: outer_join.fillna(method='ffill')
Out[58]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E -1.135632 -1.044236

# Impute missing values with the mean


In [59]: outer_join['value_x'].fillna(outer_join['value_x'].mean())
Out[59]:
0 0.469112
1 -0.282863
2 -1.509059
3 -1.135632
4 -1.135632
5 -0.718815
Name: value_x, dtype: float64


GroupBy

Aggregation

Stata’s collapse can be used to group by one or more key variables and compute aggregations on numeric columns.

collapse (sum) total_bill tip, by(sex smoker)

pandas provides a flexible groupby mechanism that allows similar aggregations. See the groupby documentation for
more details and examples.

In [60]: tips_summed = tips.groupby(['sex', 'smoker'])[['total_bill', 'tip']].sum()

In [61]: tips_summed.head()
Out[61]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07

Transformation

In Stata, if the group aggregations need to be used with the original data set, one would usually use bysort with
egen(). For example, to subtract the mean for each observation by smoker group.
bysort sex smoker: egen group_bill = mean(total_bill)
generate adj_total_bill = total_bill - group_bill

pandas groupby provides a transform mechanism that allows these type of operations to be succinctly expressed
in one operation.

In [62]: gb = tips.groupby('smoker')['total_bill']

In [63]: tips['adj_total_bill'] = tips['total_bill'] - gb.transform('mean')

In [64]: tips.head()
Out[64]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278


By group processing

In addition to aggregation, pandas groupby can be used to replicate most other bysort processing from Stata. For
example, the following example lists the first observation in the current sort order by sex/smoker group.

bysort sex smoker: list if _n == 1

In pandas this would be written as:

In [65]: tips.groupby(['sex', 'smoker']).first()


Out[65]:
total_bill tip day time size adj_total_bill
sex smoker
Female No 5.25 1.00 Sat Dinner 1 -11.938278
Yes 1.07 1.00 Sat Dinner 1 -17.686344
Male No 5.51 2.00 Thur Lunch 2 -11.678278
Yes 5.25 5.15 Sun Dinner 2 -13.506344

Other considerations

Disk vs memory

Pandas and Stata both operate exclusively in memory. This means that the size of data able to be loaded in pandas is
limited by your machine’s memory. If out of core processing is needed, one possibility is the dask.dataframe library,
which provides a subset of pandas functionality for an on-disk DataFrame.
2.4.8 Tutorials

This is a guide to many pandas tutorials, geared mainly for new users.

Internal guides

pandas’ own 10 Minutes to pandas.


More complex recipes are in the Cookbook.
A handy pandas cheat sheet.

Community guides

pandas Cookbook by Julia Evans

The goal of this 2015 cookbook (by Julia Evans) is to give you some concrete examples for getting started with pandas.
These are examples with real-world data, and all the bugs and weirdness that entails. For the table of contents, see the
pandas-cookbook GitHub repository.


Learn Pandas by Hernan Rojas

A set of lessons for new pandas users: https://bitbucket.org/hrojas/learn-pandas

Practical data analysis with Python

This guide is an introduction to the data analysis process using the Python data ecosystem and an interesting open
dataset. There are four sections covering selected topics such as munging data, aggregating data, visualizing data, and
time series.

Exercises for new users

Practice your skills with real data sets and exercises. For more resources, please visit the main repository.

Modern pandas

Tutorial series written in 2016 by Tom Augspurger. The source may be found in the GitHub repository
TomAugspurger/effective-pandas.
• Modern Pandas
• Method Chaining
• Indexes
• Performance
• Tidy Data
• Visualization
• Timeseries

Excel charts with pandas, vincent and xlsxwriter

• Using Pandas and XlsxWriter to create Excel charts

Video tutorials

• Pandas From The Ground Up (2015) (2:24) GitHub repo


• Introduction Into Pandas (2016) (1:28) GitHub repo
• Pandas: .head() to .tail() (2016) (1:26) GitHub repo
• Data analysis in Python with pandas (2016-2018) GitHub repo and Jupyter Notebook
• Best practices with pandas (2018) GitHub repo and Jupyter Notebook


Various tutorials

• Wes McKinney’s (pandas BDFL) blog


• Statistical analysis made easy in Python with SciPy and pandas DataFrames, by Randal Olson
• Statistical Data Analysis in Python, tutorial videos, by Christopher Fonnesbeck from SciPy 2013
• Financial analysis in Python, by Thomas Wiecki
• Intro to pandas data structures, by Greg Reda
• Pandas and Python: Top 10, by Manish Amde
• Pandas DataFrames Tutorial, by Karlijn Willems
• A concise tutorial with real life examples


CHAPTER THREE

USER GUIDE

The User Guide covers all of pandas by topic area. Each of the subsections introduces a topic (such as “working with
missing data”), and discusses how pandas approaches the problem, with many examples throughout.
Users brand-new to pandas should start with 10min.
Further information on any specific method can be obtained in the API reference.

3.1 IO tools (text, CSV, HDF5, . . . )

The pandas I/O API is a set of top level reader functions accessed like pandas.read_csv() that generally
return a pandas object. The corresponding writer functions are object methods that are accessed like DataFrame.
to_csv(). Below is a table containing available readers and writers.

Format Type   Data Description        Reader            Writer
text          CSV                     read_csv          to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json         to_json
text          HTML                    read_html         to_html
text          Local clipboard         read_clipboard    to_clipboard
binary        MS Excel                read_excel        to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf          to_hdf
binary        Feather Format          read_feather      to_feather
binary        Parquet Format          read_parquet      to_parquet
binary        ORC Format              read_orc
binary        Msgpack                 read_msgpack      to_msgpack
binary        Stata                   read_stata        to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle       to_pickle
SQL           SQL                     read_sql          to_sql
SQL           Google BigQuery         read_gbq          to_gbq

Here is an informal performance comparison for some of these IO methods.

Note: For examples that use the StringIO class, make sure you import it according to your Python version, i.e.
from StringIO import StringIO for Python 2 and from io import StringIO for Python 3.


3.1.1 CSV & text files

The workhorse function for reading text files (a.k.a. flat files) is read_csv(). See the cookbook for some advanced
strategies.

Parsing options

read_csv() accepts the following common arguments:

Basic

filepath_or_buffer [various] Either a path to a file (a str, pathlib.Path, or py._path.local.LocalPath),
URL (including http, ftp, and S3 locations), or any object with a read() method (such as an open file or
StringIO).
sep [str, defaults to ',' for read_csv(), \t for read_table()] Delimiter to use. If sep is None, the C engine
cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used
and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators
longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force
the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex
example: '\\r\\t'.
delimiter [str, default None] Alternative argument name for sep.
delim_whitespace [boolean, default False] Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as
the delimiter. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for
the delimiter parameter.

Column and index locations and names

header [int or list of ints, default 'infer'] Row number(s) to use as the column names, and the start of the data.
Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0
and column names are inferred from the first line of the file, if column names are passed explicitly then the
behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names.
The header can be a list of ints that specify row locations for a MultiIndex on the columns e.g. [0,1,3].
Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the
first line of data rather than the first line of the file.
names [array-like, default None] List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_col [int, str, sequence of int / str, or False, default None] Column(s) to use as the row labels of the
DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex
is used.
Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you
have a malformed file with delimiters at the end of each line.
usecols [list-like or callable, default None] Return a subset of the columns. If list-like, all elements must either be
positional (i.e. integer indices into the document columns) or strings that correspond to column names provided
either by the user in names or inferred from the document header row(s). For example, a valid list-like usecols
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].


Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a
DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo',
'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data,
usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names, returning names where the callable
function evaluates to True:

In [1]: import pandas as pd

In [2]: from io import StringIO

In [3]: data = ('col1,col2,col3\n'


...: 'a,b,1\n'
...: 'a,b,2\n'
...: 'c,d,3')
...:

In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3

In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['COL1', 'COL3


˓→'])

Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3

Using this parameter results in much faster parsing time and lower memory usage.
squeeze [boolean, default False] If the parsed data only contains one column then return a Series.
prefix [str, default None] Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, . . .
mangle_dupe_cols [boolean, default True] Duplicate columns will be specified as ‘X’, ‘X.1’. . . ’X.N’, rather than
‘X’. . . ’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.

General parsing configuration

dtype [Type name or dict of column -> type, default None] Data type for data or columns. E.g. {'a': np.
float64, 'b': np.int32} (unsupported with engine='python'). Use str or object together with
suitable na_values settings to preserve and not interpret dtype.
engine [{'c', 'python'}] Parser engine to use. The C engine is faster while the Python engine is currently more
feature-complete.
converters [dict, default None] Dict of functions for converting values in certain columns. Keys can either be integers
or column labels.
true_values [list, default None] Values to consider as True.
false_values [list, default None] Values to consider as False.
skipinitialspace [boolean, default False] Skip spaces after delimiter.


skiprows [list-like or integer, default None] Line numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file.
If callable, the callable function will be evaluated against the row indices, returning True if the row should be
skipped and False otherwise:

In [6]: data = ('col1,col2,col3\n'


...: 'a,b,1\n'
...: 'a,b,2\n'
...: 'c,d,3')
...:

In [7]: pd.read_csv(StringIO(data))
Out[7]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3

In [8]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)


Out[8]:
col1 col2 col3
0 a b 2

skipfooter [int, default 0] Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrows [int, default None] Number of rows of file to read. Useful for reading pieces of large files.
low_memory [boolean, default True] Internally process the file in chunks, resulting in lower memory use while
parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with
the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize
or iterator parameter to return the data in chunks. (Only valid with C parser)
memory_map [boolean, default False] If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.

NA and missing data handling

na_values [scalar, str, list-like, or dict, default None] Additional strings to recognize as NA/NaN. If dict passed,
specific per-column NA values. See na values const below for a list of the values interpreted as NaN by default.
keep_default_na [boolean, default True] Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
• If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values
used for parsing.
• If keep_default_na is True, and na_values are not specified, only the default NaN values are used for
parsing.
• If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are
used for parsing.
• If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
na_filter [boolean, default True] Detect missing value markers (empty strings and the value of na_values). In data
without any NAs, passing na_filter=False can improve the performance of reading a large file.


verbose [boolean, default False] Indicate number of NA values placed in non-numeric columns.
skip_blank_lines [boolean, default True] If True, skip over blank lines rather than interpreting as NaN values.

Datetime handling

parse_dates [boolean or list of ints or names or list of lists or dict, default False.]
• If True -> try parsing the index.
• If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
• If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
• If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’. A fast-path exists for iso8601-
formatted dates.
infer_datetime_format [boolean, default False] If True and parse_dates is enabled for a column, attempt to infer
the datetime format to speed up the processing.
keep_date_col [boolean, default False] If True and parse_dates specifies combining multiple columns then keep
the original columns.
date_parser [function, default None] Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the conversion. pandas will try to
call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined
by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst [boolean, default False] DD/MM format dates, international and European format.
cache_dates [boolean, default True] If True, use a cache of unique, converted dates to apply the datetime conversion.
May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets.
New in version 0.25.0.

Iteration

iterator [boolean, default False] Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize [int, default None] Return TextFileReader object for iteration. See iterating and chunking below.
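
As a minimal sketch (the file name large.csv is hypothetical; pandas is assumed to be imported as pd, as in the
examples above), the chunks yielded when chunksize is set can be consumed with a plain loop, each chunk being a
DataFrame:

reader = pd.read_csv('large.csv', chunksize=10000)  # 'large.csv' is a hypothetical file
total_rows = 0
for chunk in reader:          # each chunk is a DataFrame of at most 10000 rows
    total_rows += len(chunk)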

Quoting, compression, and file format

compression [{'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'] For on-the-fly decompres-
sion of on-disk data. If ‘infer’, then use gzip, bz2, zip, or xz if filepath_or_buffer is a string ending in ‘.gz’,
‘.bz2’, ‘.zip’, or ‘.xz’, respectively, and no decompression otherwise. If using ‘zip’, the ZIP file must contain
only one data file to be read in. Set to None for no decompression.
Changed in version 0.24.0: ‘infer’ option added and set to default.
thousands [str, default None] Thousands separator.
decimal [str, default '.'] Character to recognize as decimal point. E.g. use ',' for European data.
float_precision [string, default None] Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the high-precision converter, and round_trip for
the round-trip converter.


lineterminator [str (length 1), default None] Character to break file into lines. Only valid with C parser.
quotechar [str (length 1)] The character used to denote the start and end of a quoted item. Quoted items can include
the delimiter and it will be ignored.
quoting [int or csv.QUOTE_* instance, default 0] Control field quoting behavior per csv.QUOTE_* constants.
Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote [boolean, default True] When quotechar is specified and quoting is not QUOTE_NONE, indi-
cate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar
element.
escapechar [str (length 1), default None] One-character string used to escape delimiter when quoting is
QUOTE_NONE.
comment [str, default None] Indicates remainder of line should not be parsed. If found at the beginning of a line,
the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long
as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with header=0 will result in ‘a,b,c’
being treated as the header.
encoding [str, default None] Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of Python standard
encodings.
dialect [str or csv.Dialect instance, default None] If provided, this parameter will override values (default or
not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for
more details.

Error handling
error_bad_lines [boolean, default True] Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines”
will be dropped from the DataFrame that is returned. See bad lines below.
warn_bad_lines [boolean, default True] If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.

Specifying column data types

You can indicate the data type for the whole DataFrame or individual columns:

In [9]: import numpy as np

In [10]: data = ('a,b,c,d\n'


....: '1,2,3,4\n'
....: '5,6,7,8\n'
....: '9,10,11')
....:

In [11]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11

In [12]: df = pd.read_csv(StringIO(data), dtype=object)



In [13]: df
Out[13]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN

In [14]: df['a'][0]
Out[14]: '1'

In [15]: df = pd.read_csv(StringIO(data),
....: dtype={'b': object, 'c': np.float64, 'd': 'Int64'})
....:

In [16]: df.dtypes
Out[16]:
a int64
b object
c float64
d Int64
dtype: object

Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you’re
unfamiliar with these concepts, you can see here to learn more about dtypes, and here to learn more about object
conversion in pandas.
For instance, you can use the converters argument of read_csv():
In [17]: data = ("col_1\n"
....: "1\n"
....: "2\n"
....: "'A'\n"
....: "4.22")
....:

In [18]: df = pd.read_csv(StringIO(data), converters={'col_1': str})

In [19]: df
Out[19]:
col_1
0 1
1 2
2 'A'
3 4.22

In [20]: df['col_1'].apply(type).value_counts()
Out[20]:
<class 'str'> 4
Name: col_1, dtype: int64

Or you can use the to_numeric() function to coerce the dtypes after reading in the data,
In [21]: df2 = pd.read_csv(StringIO(data))

In [22]: df2['col_1'] = pd.to_numeric(df2['col_1'], errors='coerce')



In [23]: df2
Out[23]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22

In [24]: df2['col_1'].apply(type).value_counts()
Out[24]:
<class 'float'> 4
Name: col_1, dtype: int64

which will convert all valid parsing to floats, leaving the invalid parsing as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case
above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if
you wanted for all the data to be coerced, no matter the type, then using the converters argument of read_csv()
would certainly be worth trying.

Note: In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent
dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with
mixed dtypes. For example,

In [25]: col_1 = list(range(500000)) + ['a', 'b'] + list(range(500000))


In [26]: df = pd.DataFrame({'col_1': col_1})

In [27]: df.to_csv('foo.csv')

In [28]: mixed_df = pd.read_csv('foo.csv')

In [29]: mixed_df['col_1'].apply(type).value_counts()
Out[29]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64

In [30]: mixed_df['col_1'].dtype
Out[30]: dtype('O')

will result with mixed_df containing an int dtype for certain chunks of the column, and str for others due to the
mixed dtypes from the data that was read in. It is important to note that the overall column will be marked with a
dtype of object, which is used for columns with mixed dtypes.


Specifying categorical dtype

Categorical columns can be parsed directly by specifying dtype='category' or


dtype=CategoricalDtype(categories, ordered).

In [31]: data = ('col1,col2,col3\n'


....: 'a,b,1\n'
....: 'a,b,2\n'
....: 'c,d,3')
....:

In [32]: pd.read_csv(StringIO(data))
Out[32]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3

In [33]: pd.read_csv(StringIO(data)).dtypes
Out[33]:
col1 object
col2 object
col3 int64
dtype: object

In [34]: pd.read_csv(StringIO(data), dtype='category').dtypes


Out[34]:
col1 category
col2 category
col3 category
dtype: object

Individual columns can be parsed as a Categorical using a dict specification:

In [35]: pd.read_csv(StringIO(data), dtype={'col1': 'category'}).dtypes


Out[35]:
col1 category
col2 object
col3 int64
dtype: object

New in version 0.21.0.


Specifying dtype='category' will result in an unordered Categorical whose categories are the unique
values observed in the data. For more control on the categories and order, create a CategoricalDtype ahead of
time, and pass that for that column’s dtype.

In [36]: from pandas.api.types import CategoricalDtype

In [37]: dtype = CategoricalDtype(['d', 'c', 'b', 'a'], ordered=True)

In [38]: pd.read_csv(StringIO(data), dtype={'col1': dtype}).dtypes


Out[38]:
col1 category
col2 object
col3 int64
dtype: object


When using dtype=CategoricalDtype, “unexpected” values outside of dtype.categories are treated as


missing values.

In [39]: dtype = CategoricalDtype(['a', 'b', 'd']) # No 'c'

In [40]: pd.read_csv(StringIO(data), dtype={'col1': dtype}).col1


Out[40]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): [a, b, d]

This matches the behavior of Categorical.set_categories().

Note: With dtype='category', the resulting categories will always be parsed as strings (object dtype). If the
categories are numeric they can be converted using the to_numeric() function, or as appropriate, another converter
such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (all numeric, all datetimes, etc.), the
conversion is done automatically.

In [41]: df = pd.read_csv(StringIO(data), dtype='category')

In [42]: df.dtypes
Out[42]:
col1 category
col2 category
col3 category
dtype: object

In [43]: df['col3']
Out[43]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): [1, 2, 3]

In [44]: df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories)

In [45]: df['col3']
Out[45]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]


Naming and using columns

Handling column names

A file may or may not have a header row. pandas assumes the first row should be used as the column names:

In [46]: data = ('a,b,c\n'


....: '1,2,3\n'
....: '4,5,6\n'
....: '7,8,9')
....:

In [47]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9

In [48]: pd.read_csv(StringIO(data))
Out[48]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9

By specifying the names argument in conjunction with header you can indicate other names to use and whether or
not to throw away the header row (if any):

In [49]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9

In [50]: pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=0)


Out[50]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9

In [51]: pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=None)


Out[51]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9

If the header is in a row other than the first, pass the row number to header. This will skip the preceding rows:

In [52]: data = ('skip this skip it\n'


....: 'a,b,c\n'
....: '1,2,3\n'
....: '4,5,6\n'
....: '7,8,9')
....:

In [53]: pd.read_csv(StringIO(data), header=1)


Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9

Note: Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0
and column names are inferred from the first non-blank line of the file, if column names are passed explicitly then the
behavior is identical to header=None.

Duplicate names parsing

If the file or header contains duplicate names, pandas will by default distinguish between them so as to prevent
overwriting data:

In [54]: data = ('a,b,a\n'


....: '0,1,2\n'
....: '3,4,5')
....:

In [55]: pd.read_csv(StringIO(data))
Out[55]:
a b a.1
0 0 1 2
1 3 4 5

There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of dupli-
cate columns ‘X’, . . . , ‘X’ to become ‘X’, ‘X.1’, . . . , ‘X.N’. If mangle_dupe_cols=False, duplicate data can
arise:

In [2]: data = 'a,b,a\n0,1,2\n3,4,5'


In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False)
Out[3]:
a b a
0 2 1 2
1 5 4 5

To prevent users from encountering this problem with duplicate data, a ValueError exception is raised if
mangle_dupe_cols != True:

In [2]: data = 'a,b,a\n0,1,2\n3,4,5'


In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False)
...
ValueError: Setting mangle_dupe_cols=False is not supported yet


Filtering columns (usecols)

The usecols argument allows you to select any subset of the columns in a file, either using the column names,
position numbers or a callable:
In [56]: data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'

In [57]: pd.read_csv(StringIO(data))
Out[57]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz

In [58]: pd.read_csv(StringIO(data), usecols=['b', 'd'])


Out[58]:
b d
0 2 foo
1 5 bar
2 8 baz

In [59]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])


Out[59]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz

In [60]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['A', 'C'])


Out[60]:
a c
0 1 3
1 4 6
2 7 9

The usecols argument can also be used to specify which columns not to use in the final result:
In [61]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ['a', 'c'])
Out[61]:
b d
0 2 foo
1 5 bar
2 8 baz

In this case, the callable is specifying that we exclude the “a” and “c” columns from the output.

Comments and empty lines

Ignoring line comments and empty lines

If the comment parameter is specified, then completely commented lines will be ignored. By default, completely
blank lines will be ignored as well.
In [62]: data = ('\n'
....: 'a,b,c\n'
....: ' \n'


....: '# commented line\n'
....: '1,2,3\n'
....: '\n'
....: '4,5,6')
....:

In [63]: print(data)

a,b,c

# commented line
1,2,3

4,5,6

In [64]: pd.read_csv(StringIO(data), comment='#')


Out[64]:
a b c
0 1 2 3
1 4 5 6

If skip_blank_lines=False, then read_csv will not ignore blank lines:

In [65]: data = ('a,b,c\n'


....: '\n'
....: '1,2,3\n'
....: '\n'
....: '\n'
....: '4,5,6')
....:

In [66]: pd.read_csv(StringIO(data), skip_blank_lines=False)


Out[66]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0

Warning: The presence of ignored lines might create ambiguities involving line numbers; the parameter header
uses row numbers (ignoring commented/empty lines), while skiprows uses line numbers (including com-
mented/empty lines):
In [67]: data = ('#comment\n'
....: 'a,b,c\n'
....: 'A,B,C\n'
....: '1,2,3')
....:

In [68]: pd.read_csv(StringIO(data), comment='#', header=1)


Out[68]:
A B C
0 1 2 3

In [69]: data = ('A,B,C\n'


....: '#comment\n'
....: 'a,b,c\n'
....: '1,2,3')
....:

In [70]: pd.read_csv(StringIO(data), comment='#', skiprows=2)


Out[70]:
a b c
0 1 2 3

If both header and skiprows are specified, header will be relative to the end of skiprows. For example:

In [71]: data = ('# empty\n'


....: '# second empty line\n'
....: '# third emptyline\n'
....: 'X,Y,Z\n'
....: '1,2,3\n'
....: 'A,B,C\n'
....: '1,2.,4.\n'
....: '5.,NaN,10.0\n')
....:

In [72]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0

In [73]: pd.read_csv(StringIO(data), comment='#', skiprows=4, header=1)


Out[73]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0

Comments

Sometimes comments or meta data may be included in a file:

In [74]: print(open('tmp.csv').read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome

By default, the parser includes the comments in the output:

In [75]: df = pd.read_csv('tmp.csv')

In [76]: df


Out[76]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome

We can suppress the comments using the comment keyword:

In [77]: df = pd.read_csv('tmp.csv', comment='#')

In [78]: df
Out[78]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z

Dealing with Unicode data

The encoding argument should be used for encoded unicode data, which will result in byte strings being decoded
to unicode in the result:

In [79]: from io import BytesIO

In [80]: data = (b'word,length\n'


....: b'Tr\xc3\xa4umen,7\n'
....: b'Gr\xc3\xbc\xc3\x9fe,5')
....:

In [81]: data = data.decode('utf8').encode('latin-1')

In [82]: df = pd.read_csv(BytesIO(data), encoding='latin-1')

In [83]: df
Out[83]:
word length
0 Träumen 7
1 Grüße 5

In [84]: df['word'][1]
Out[84]: 'Grüße'

Some formats which encode all characters as multiple bytes, like UTF-16, won’t parse correctly at all without speci-
fying the encoding. Full list of Python standard encodings.


Index columns and trailing delimiters

If a file has one more column of data than the number of column names, the first column will be used as the
DataFrame’s row names:

In [85]: data = ('a,b,c\n'


....: '4,apple,bat,5.7\n'
....: '8,orange,cow,10')
....:

In [86]: pd.read_csv(StringIO(data))
Out[86]:
a b c
4 apple bat 5.7
8 orange cow 10.0

In [87]: data = ('index,a,b,c\n'


....: '4,apple,bat,5.7\n'
....: '8,orange,cow,10')
....:

In [88]: pd.read_csv(StringIO(data), index_col=0)


Out[88]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0

Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing
the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False:

In [89]: data = ('a,b,c\n'


....: '4,apple,bat,\n'
....: '8,orange,cow,')
....:

In [90]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,

In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat NaN
8 orange cow NaN

In [92]: pd.read_csv(StringIO(data), index_col=False)


Out[92]:
a b c
0 4 apple bat
1 8 orange cow

If a subset of data is being parsed using the usecols option, the index_col specification is based on that subset,
not the original data.


In [93]: data = ('a,b,c\n'


....: '4,apple,bat,\n'
....: '8,orange,cow,')
....:

In [94]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,

In [95]: pd.read_csv(StringIO(data), usecols=['b', 'c'])


Out[95]:
b c
4 bat NaN
8 cow NaN

In [96]: pd.read_csv(StringIO(data), usecols=['b', 'c'], index_col=0)


Out[96]:
b c
4 bat NaN
8 cow NaN

Date Handling

Specifying date columns

To better facilitate working with datetime data, read_csv() uses the keyword arguments parse_dates and
date_parser to allow users to specify a variety of columns and date/time formats to turn the input text data into
datetime objects.
The simplest case is to just pass in parse_dates=True:

# Use a column as an index, and parse it as dates.


In [97]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True)

In [98]: df
Out[98]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5

# These are Python datetime objects


In [99]: df.index
Out[99]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype=
˓→'datetime64[ns]', name='date', freq=None)

It is often the case that we may want to store date and time data separately, or store various date fields separately. The
parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date columns will be prepended to the output
(so as to not affect the existing column order) and the new column names will be the concatenation of the component
column names:


In [100]: print(open('tmp.csv').read())
KORD,19990127, 19:00:00, 18:56:00, 0.8100
KORD,19990127, 20:00:00, 19:56:00, 0.0100
KORD,19990127, 21:00:00, 20:56:00, -0.5900
KORD,19990127, 21:00:00, 21:18:00, -0.9900
KORD,19990127, 22:00:00, 21:56:00, -0.5900
KORD,19990127, 23:00:00, 22:56:00, -0.5900

In [101]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]])

In [102]: df
Out[102]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

By default the parser removes the component date columns, but you can choose to retain them via the
keep_date_col keyword:

In [103]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]],


.....: keep_date_col=True)
.....:

In [104]: df
Out[104]:
1_2 1_3 0 1 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 19990127 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 19990127 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD 19990127 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD 19990127 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD 19990127 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD 19990127 23:00:00 22:56:00 -0.59

Note that if you wish to combine multiple columns into a single date column, a nested list must be used. In other
words, parse_dates=[1, 2] indicates that the second and third columns should each be parsed as separate date
columns while parse_dates=[[1, 2]] means the two columns should be parsed into a single column.
You can also use a dict to specify custom column names:

In [105]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}

In [106]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec)

In [107]: df
Out[107]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column
is prepended to the data. The index_col specification is based off of this new set of columns rather than the original
data columns:

In [108]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}

In [109]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,


.....: index_col=0) # index is the nominal column
.....:

In [110]: df
Out[110]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

Note: If a column or index contains an unparsable date, the entire column or index will be returned unaltered as an
object data type. For non-standard datetime parsing, use to_datetime() after pd.read_csv.
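
As a hedged illustration (the file name, the 'date' column, and the %d%b%Y format are assumptions for the example),
such a non-standard column can be converted after reading:

df = pd.read_csv('data_with_dates.csv')                    # hypothetical file; 'date' comes in as object dtype
df['date'] = pd.to_datetime(df['date'], format='%d%b%Y')   # e.g. strings like '27Jan1999'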

Note: read_csv has a fast path for parsing datetime strings in iso8601 format, e.g. “2000-01-01T00:01:02+00:00” and
similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly
faster; speed-ups of roughly 20x have been observed.

Note: When passing a dict as the parse_dates argument, the order of the columns prepended is not guaranteed,
because dict objects do not impose an ordering on their keys. On Python 2.7+ you may use collections.OrderedDict
instead of a regular dict if this matters to you. Because of this, when using a dict for ‘parse_dates’ in conjunction with
the index_col argument, it’s best to specify index_col as a column label rather than as an index on the resulting frame.

Date parsing functions

Finally, the parser allows you to specify a custom date_parser function to take full advantage of the flexibility of
the date parsing API:

In [111]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,


.....: date_parser=pd.io.date_converters.parse_date_time)
.....:

In [112]: df
Out[112]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59


Pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is
tried:
1. date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g.,
date_parser(['2013', '2013'], ['1', '2'])).
2. If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g.,
date_parser(['2013 1', '2013 2'])).
3. If #2 fails, date_parser is called once for every row with one or more string arguments from
the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row,
date_parser('2013', '2') for the second, etc.).
Note that performance-wise, you should try these methods of parsing dates in order:
1. Try to infer the format using infer_datetime_format=True (see section below).
2. If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.
to_datetime(x, format=...).
3. If you have a really non-standard format, use a custom date_parser function. For optimal performance, this
should be vectorized, i.e., it should accept arrays as arguments.
You can explore the date parsing functionality in date_converters.py and add your own. We would love to turn this
module into a community supported set of date/time parsers. To get you started, date_converters.py contains
functions to parse dual date and time columns, year/month/day columns, and year/month/day/hour/minute/second
columns. It also contains a generic_parser function so you can curry it with a function that deals with a single
date rather than the entire array.
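
As a minimal sketch of option 2 above (the file name dates.csv, the 'date' column, and the %Y%m%d format are
assumptions for the example), a known fixed format can be routed through pd.to_datetime():

df = pd.read_csv('dates.csv', parse_dates=['date'],
                 date_parser=lambda x: pd.to_datetime(x, format='%Y%m%d'))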

Parsing a CSV with mixed timezones


Pandas cannot natively represent a column or index with mixed timezones. If your CSV file contains columns with a
mixture of timezones, the default result will be an object-dtype column with strings, even with parse_dates.

In [113]: content = """\


.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:

In [114]: df = pd.read_csv(StringIO(content), parse_dates=['a'])

In [115]: df['a']
Out[115]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object

To parse the mixed-timezone values as a datetime column, pass a partially-applied to_datetime() with
utc=True as the date_parser.

In [116]: df = pd.read_csv(StringIO(content), parse_dates=['a'],


.....: date_parser=lambda col: pd.to_datetime(col, utc=True))
.....:

In [117]: df['a']
Out[117]:
0 1999-12-31 19:00:00+00:00


1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]

Inferring datetime format

If you have parse_dates enabled for some or all of your columns, and your datetime strings are all formatted the
same way, you may get a large speed up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means of parsing the strings. 5-10x parsing speeds
have been observed. pandas will fallback to the usual parsing if either the format cannot be guessed or the format that
was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should
not have any negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All representing December 30th, 2011 at 00:00:00):
• “20111230”
• “2011/12/30”
• “20111230 00:00:00”
• “12/30/2011 00:00:00”
• “30/Dec/2011 00:00:00”
• “30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With dayfirst=True, it will guess
“01/12/2011” to be December 1st. With dayfirst=False (default) it will guess “01/12/2011” to be January
12th.
# Try to infer the format for the index column
In [118]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
.....: infer_datetime_format=True)
.....:

In [119]: df
Out[119]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5

International date formats

While US date formats tend to be MM/DD/YYYY, many international formats use DD/MM/YYYY instead. For
convenience, a dayfirst keyword is provided:

In [120]: print(open('tmp.csv').read())
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c

In [121]: pd.read_csv('tmp.csv', parse_dates=[0])




Out[121]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c

In [122]: pd.read_csv('tmp.csv', dayfirst=True, parse_dates=[0])


Out[122]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c

Specifying method for floating-point conversion

The parameter float_precision can be specified in order to use a specific floating-point converter during parsing
with the C engine. The options are the ordinary converter, the high-precision converter, and the round-trip converter
(which is guaranteed to round-trip values after writing to a file). For example:
In [123]: val = '0.3066101993807095471566981359501369297504425048828125'

In [124]: data = 'a,b,c\n1,2,{0}'.format(val)

In [125]: abs(pd.read_csv(StringIO(data), engine='c',


.....: float_precision=None)['c'][0] - float(val))
.....:
Out[125]: 1.1102230246251565e-16
In [126]: abs(pd.read_csv(StringIO(data), engine='c',
.....: float_precision='high')['c'][0] - float(val))
.....:
Out[126]: 5.551115123125783e-17

In [127]: abs(pd.read_csv(StringIO(data), engine='c',


.....: float_precision='round_trip')['c'][0] - float(val))
.....:
Out[127]: 0.0

Thousand separators

For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string
of length 1 so that integers will be parsed correctly.
By default, numbers with a thousands separator will be parsed as strings:
In [128]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z

In [129]: df = pd.read_csv('tmp.csv', sep='|')

In [130]: df


Out[130]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z

In [131]: df.level.dtype
Out[131]: dtype('O')

The thousands keyword allows integers to be parsed correctly:

In [132]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z

In [133]: df = pd.read_csv('tmp.csv', sep='|', thousands=',')

In [134]: df
Out[134]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z

In [135]: df.level.dtype
Out[135]: dtype('int64')

NA values

To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values.
If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a
float, like 5.0 or an integer like 5), the corresponding equivalent values will also imply a missing value (in this
case effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/
A N/A', '#N/A', 'N/A', 'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN',
'-NaN', 'nan', '-nan', ''].
Let us consider some examples:

pd.read_csv('path_to_file.csv', na_values=[5])

In the example above 5 and 5.0 will be recognized as NaN, in addition to the defaults. A string will first be interpreted
as a numerical 5, then as a NaN.

pd.read_csv('path_to_file.csv', keep_default_na=False, na_values=[""])

Above, only an empty field will be recognized as NaN.

pd.read_csv('path_to_file.csv', keep_default_na=False, na_values=["NA", "0"])

Above, both NA and 0 as strings are NaN.


pd.read_csv('path_to_file.csv', na_values=["Nope"])

The default values, in addition to the string "Nope", are recognized as NaN.

Infinity

inf-like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). Parsing
ignores the case of the value, meaning Inf will also be parsed as np.inf.
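
As a small sketch (reusing StringIO from the earlier examples), this behavior can be seen directly:

data = 'a\ninf\n-Inf\nINF'

pd.read_csv(StringIO(data))['a']  # inf, -Inf and INF all parse to +/- np.inf with float64 dtype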

Returning Series

Using the squeeze keyword, the parser will return output with a single column as a Series:

In [136]: print(open('tmp.csv').read())
level
Patient1,123000
Patient2,23000
Patient3,1234018

In [137]: output = pd.read_csv('tmp.csv', squeeze=True)

In [138]: output
Out[138]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [139]: type(output)
Out[139]: pandas.core.series.Series

Boolean values

The common values True, False, TRUE, and FALSE are all recognized as boolean. Occasionally you might want to
recognize other values as being boolean. To do this, use the true_values and false_values options as follows:

In [140]: data = ('a,b,c\n'


.....: '1,Yes,2\n'
.....: '3,No,4')
.....:

In [141]: print(data)
a,b,c
1,Yes,2
3,No,4

In [142]: pd.read_csv(StringIO(data))
Out[142]:
a b c
0 1 Yes 2
1 3 No 4

In [143]: pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])




Out[143]:
a b c
0 1 True 2
1 3 False 4

Handling “bad” lines

Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values
filled in the trailing fields. Lines with too many fields will raise an error by default:

In [144]: data = ('a,b,c\n'


.....: '1,2,3\n'
.....: '4,5,6,7\n'
.....: '8,9,10')
.....:

In [145]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
<ipython-input-145-6388c394e6b8> in <module>
----> 1 pd.read_csv(StringIO(data))

/pandas/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header,


˓→names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine,

˓→converters, true_values, false_values, skipinitialspace, skiprows, skipfooter,

˓→nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_


˓→dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates,
˓→iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar,
˓→quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_

˓→bad_lines, delim_whitespace, low_memory, memory_map, float_precision)

674 )
675
--> 676 return _read(filepath_or_buffer, kwds)
677
678 parser_f.__name__ = name

/pandas/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)


452
453 try:
--> 454 data = parser.read(nrows)
455 finally:
456 parser.close()

/pandas/pandas/io/parsers.py in read(self, nrows)


1131 def read(self, nrows=None):
1132 nrows = _validate_integer("nrows", nrows)
-> 1133 ret = self._engine.read(nrows)
1134
1135 # May alter columns / col_dict

/pandas/pandas/io/parsers.py in read(self, nrows)


2035 def read(self, nrows=None):
2036 try:
-> 2037 data = self._reader.read(nrows)
2038 except StopIteration:


2039 if self._first_chunk:

/pandas/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

/pandas/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

/pandas/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

/pandas/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

/pandas/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4

You can elect to skip bad lines:

In [29]: pd.read_csv(StringIO(data), error_bad_lines=False)


Skipping line 3: expected 3 fields, saw 4

Out[29]:
a b c
0 1 2 3
1 8 9 10

You can also use the usecols parameter to eliminate extraneous column data that appear in some lines but not others:

In [30]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])


Out[30]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10

Dialect

The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect but
you can specify either the dialect name or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:

In [146]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f

By default, read_csv uses the Excel dialect and treats the double quote as the quote character, which causes it to
fail when it finds a newline before it finds the closing double quote.
We can get around this using dialect:

In [147]: import csv

In [148]: dia = csv.excel()



In [149]: dia.quoting = csv.QUOTE_NONE

In [150]: pd.read_csv(StringIO(data), dialect=dia)


Out[150]:
label1 label2 label3
index1 "a c e
index2 b d f

All of the dialect options can be specified separately by keyword arguments:

In [151]: data = 'a,b,c~1,2,3~4,5,6'

In [152]: pd.read_csv(StringIO(data), lineterminator='~')


Out[152]:
a b c
0 1 2 3
1 4 5 6

Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:

In [153]: data = 'a, b, c\n1, 2, 3\n4, 5, 6'

In [154]: print(data)
a, b, c
1, 2, 3
4, 5, 6

In [155]: pd.read_csv(StringIO(data), skipinitialspace=True)


[email protected]
T56GZSRVAHOut[155]:
a b c
0 1 2 3
1 4 5 6

The parsers make every attempt to “do the right thing” and not be fragile. Type inference is a pretty big deal. If a
column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric columns
will come through as object dtype as with the rest of pandas objects.
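
As a quick, minimal sketch of this inference (not one of the guide's prepared examples), the resulting dtypes can be inspected directly:

from io import StringIO
import pandas as pd

# column 'a' holds only integers, 'b' only floats, 'c' arbitrary strings
df = pd.read_csv(StringIO('a,b,c\n1,2.5,x\n3,4.0,y'))
print(df.dtypes)  # expected: a -> int64, b -> float64, c -> object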

Quoting and Escape Characters

Quotes (and other escape characters) in embedded fields can be handled in any number of ways. One way is to use
backslashes; to properly parse this data, you should pass the escapechar option:

In [156]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'

In [157]: print(data)
a,b
"hello, \"Bob\", nice to see you",5

In [158]: pd.read_csv(StringIO(data), escapechar='\\')


Out[158]:
a b
0 hello, "Bob", nice to see you 5


Files with fixed width columns

While read_csv() reads delimited data, the read_fwf() function works with data files that have known and fixed
column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters,
and a different usage of the delimiter parameter:
• colspecs: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals
(i.e., [from, to[ ). String value ‘infer’ can be used to instruct the parser to try detecting the column specifications
from the first 100 rows of the data. Default behavior, if not specified, is to infer.
• widths: A list of field widths which can be used instead of ‘colspecs’ if the intervals are contiguous.
• delimiter: Characters to consider as filler characters in the fixed-width file. Can be used to specify the filler
character of the fields if it is not spaces (e.g., ‘~’).
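
As a small hedged sketch of the delimiter option (made-up data, not the guide's bar.csv), '~' can be declared as the filler character:

from io import StringIO
import pandas as pd

# two fixed-width fields, padded with '~' instead of spaces
fw_data = 'id001~~~~12.5\nid002~~~~37.0'
df = pd.read_fwf(StringIO(fw_data), widths=[5, 8], header=None, delimiter='~')
print(df)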
Consider a typical fixed-width data file:

In [159]: print(open('bar.csv').read())
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3

In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf
function along with the file name:

# Column specifications are a list of half-intervals


In [160]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
[email protected]
T56GZSRVAHIn [161]: df = pd.read_fwf('bar.csv', colspecs=colspecs, header=None, index_col=0)
In [162]: df
Out[162]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3

Note how the parser automatically assigns integer column names (0, 1, 2, ...) when header=None is specified. Alternatively, you can supply just the column widths for contiguous columns:

# Widths are a list of integers


In [163]: widths = [6, 14, 13, 10]

In [164]: df = pd.read_fwf('bar.csv', widths=widths, header=None)

In [165]: df
Out[165]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3


The parser will take care of extra white spaces around the columns so it’s ok to have extra separation between the
columns in the file.
By default, read_fwf will try to infer the file's colspecs by using the first 100 rows of the file. It can do so only when the columns are aligned and correctly separated by the provided delimiter (the default delimiter is whitespace).

In [166]: df = pd.read_fwf('bar.csv', header=None, index_col=0)

In [167]: df
Out[167]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3

read_fwf supports the dtype parameter for specifying the types of parsed columns to be different from the inferred
type.

In [168]: pd.read_fwf('bar.csv', header=None, index_col=0).dtypes


Out[168]:
1 float64
2 float64
3 float64
dtype: object

[email protected]
In [169]: pd.read_fwf('bar.csv', header=None, dtype={2: 'object'}).dtypes
T56GZSRVAHOut[169]:
0 object
1 float64
2 object
3 float64
dtype: object

Indexes

Files with an “implicit” index column

Consider a file with one less entry in the header than the number of data columns:

In [170]: print(open('foo.csv').read())
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5

In this special case, read_csv assumes that the first column is to be used as the index of the DataFrame:

In [171]: pd.read_csv('foo.csv')
Out[171]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5

Note that the dates weren’t automatically parsed. In that case you would need to do as before:
In [172]: df = pd.read_csv('foo.csv', parse_dates=True)

In [173]: df.index
Out[173]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype=
˓→'datetime64[ns]', freq=None)

Reading an index with a MultiIndex

Suppose you have data indexed by two columns:


In [174]: print(open('data/mindex_ex.csv').read())
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
1977,"C",1.7,.8
1978,"A",.2,.06
1978,"B",.7,.2
1978,"C",.8,.3
1978,"D",.9,.5
1978,"E",1.4,.9
1979,"C",.2,.15
[email protected]
1979,"D",.14,.05
1979,"E",.5,.15
1979,"F",1.2,.5
1979,"G",3.4,1.9
1979,"H",5.4,2.7
1979,"I",6.4,1.2

The index_col argument to read_csv can take a list of column numbers to turn multiple columns into a
MultiIndex for the index of the returned object:
In [175]: df = pd.read_csv("data/mindex_ex.csv", index_col=[0, 1])

In [176]: df
Out[176]:
zit xit
year indiv
1977 A 1.20 0.60
B 1.50 0.50
C 1.70 0.80
1978 A 0.20 0.06
B 0.70 0.20
C 0.80 0.30
D 0.90 0.50
E 1.40 0.90
1979 C 0.20 0.15
D 0.14 0.05
E 0.50 0.15
F 1.20 0.50
G 3.40 1.90
H 5.40 2.70
I 6.40 1.20

In [177]: df.loc[1978]
Out[177]:
zit xit
indiv
A 0.2 0.06
B 0.7 0.20
C 0.8 0.30
D 0.9 0.50
E 1.4 0.90

Reading columns with a MultiIndex

By specifying a list of row locations for the header argument, you can read in a MultiIndex for the columns. Specifying non-consecutive rows will skip the intervening rows.
In [178]: from pandas._testing import makeCustomDataframe as mkdf

In [179]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)

In [180]: df.to_csv('mi.csv')

In [181]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
[email protected]
T56GZSRVAHC1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2

In [182]: pd.read_csv('mi.csv', header=[0, 1, 2, 3], index_col=[0, 1])


Out[182]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2

read_csv is also able to interpret a more common format of multi-columns indices.


In [183]: print(open('mi2.csv').read())
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12

In [184]: pd.read_csv('mi2.csv', header=[0, 1], index_col=0)


Out[184]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12

Note: If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.

Automatically “sniffing” the delimiter

read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the csv.Sniffer class of the csv module. For this, you have to specify sep=None.

In [185]: print(open('tmp2.sv').read())
:0:1:2:3
0:0.4691122999071863:-0.2828633443286633:-1.5090585031735124:-1.1356323710171934
1:1.2121120250208506:-0.17321464905330858:0.11920871129693428:-1.0442359662799567
2:-0.8618489633477999:-2.1045692188948086:-0.4949292740687813:1.071803807037338
3:0.7215551622443669:-0.7067711336300845:-1.0395749851146963:0.27185988554282986
4:-0.42497232978883753:0.567020349793672:0.27623201927771873:-1.0874006912859915
5:-0.6736897080883706:0.1136484096888855:-1.4784265524372235:0.5249876671147047
[email protected]
6:0.4047052186802365:0.5770459859204836:-1.7150020161146375:-1.0392684835147725
7:-0.3706468582364464:-1.1578922506419993:-1.344311812731667:0.8448851414248841
8:1.0757697837155533:-0.10904997528022223:1.6435630703622064:-1.4693879595399115
9:0.35702056413309086:-0.6746001037299882:-1.776903716971867:-0.9689138124473498

In [186]: pd.read_csv('tmp2.sv', sep=None, engine='python')


Out[186]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914


Reading multiple files to create a single DataFrame

It’s best to use concat() to combine multiple files. See the cookbook for an example.

Iterating through files chunk by chunk

Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory,
such as the following:
In [187]: print(open('tmp.sv').read())
|0|1|2|3
0|0.4691122999071863|-0.2828633443286633|-1.5090585031735124|-1.1356323710171934
1|1.2121120250208506|-0.17321464905330858|0.11920871129693428|-1.0442359662799567
2|-0.8618489633477999|-2.1045692188948086|-0.4949292740687813|1.071803807037338
3|0.7215551622443669|-0.7067711336300845|-1.0395749851146963|0.27185988554282986
4|-0.42497232978883753|0.567020349793672|0.27623201927771873|-1.0874006912859915
5|-0.6736897080883706|0.1136484096888855|-1.4784265524372235|0.5249876671147047
6|0.4047052186802365|0.5770459859204836|-1.7150020161146375|-1.0392684835147725
7|-0.3706468582364464|-1.1578922506419993|-1.344311812731667|0.8448851414248841
8|1.0757697837155533|-0.10904997528022223|1.6435630703622064|-1.4693879595399115
9|0.35702056413309086|-0.6746001037299882|-1.776903716971867|-0.9689138124473498

In [188]: table = pd.read_csv('tmp.sv', sep='|')

In [189]: table
Out[189]:
Unnamed: 0 0 1 2 3
[email protected]
T56GZSRVAH 0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914

By specifying a chunksize to read_csv, the return value will be an iterable object of type TextFileReader:
In [190]: reader = pd.read_csv('tmp.sv', sep='|', chunksize=4)

In [191]: reader
Out[191]: <pandas.io.parsers.TextFileReader at 0x7f3d18adb350>

In [192]: for chunk in reader:


.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
Unnamed: 0 0 1 2 3
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
Unnamed: 0 0 1 2 3
8 8 1.075770 -0.10905 1.643563 -1.469388
9 9 0.357021 -0.67460 -1.776904 -0.968914

Specifying iterator=True will also return the TextFileReader object:

In [193]: reader = pd.read_csv('tmp.sv', sep='|', iterator=True)

In [194]: reader.get_chunk(5)
Out[194]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401

Specifying the parser engine

Under the hood pandas uses a fast and efficient parser implemented in C as well as a Python implementation which is
currently more feature-complete. Where possible pandas uses the C parser (specified as engine='c'), but may fall
back to Python if C-unsupported options are specified. Currently, C-unsupported options include:
• sep other than a single character (e.g. regex separators)
[email protected]
T56GZSRVAH • skipfooter
• sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the python engine is selected explicitly
using engine='python'.
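
As a short sketch (not from the guide's examples), selecting the Python engine explicitly avoids the warning when a C-unsupported option such as skipfooter is needed:

from io import StringIO
import pandas as pd

data = 'a,b\n1,2\n3,4\ntotal,ignored'
# skipfooter requires the Python engine; passing it explicitly avoids the ParserWarning
df = pd.read_csv(StringIO(data), skipfooter=1, engine='python')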

Reading remote files

You can pass in a URL to a CSV file:

df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
sep='\t')

S3 URLs are handled as well but require installing the S3Fs library:

df = pd.read_csv('s3://pandas-test/tips.csv')

If your S3 bucket requires credentials you will need to set them as environment variables or in the ~/.aws/
credentials config file, refer to the S3Fs documentation on credentials.
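
One common approach is to export the standard AWS credential environment variables before reading (a sketch; the variable names are AWS/S3Fs conventions, not pandas options, and the values are placeholders):

import os
import pandas as pd

os.environ['AWS_ACCESS_KEY_ID'] = '<your-access-key-id>'          # placeholder
os.environ['AWS_SECRET_ACCESS_KEY'] = '<your-secret-access-key>'  # placeholder

df = pd.read_csv('s3://pandas-test/tips.csv')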


Writing out data

Writing to CSV format

The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the
object as a comma-separated-values file. The function takes a number of arguments. Only the first is required.
• path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''
• sep : Field delimiter for the output file (default “,”)
• na_rep: A string representation of a missing value (default ‘’)
• float_format: Format string for floating point numbers
• columns: Columns to write (default None)
• header: Whether to write out the column names (default True)
• index: whether to write row (index) names (default True)
• index_label: Column label(s) for index column(s) if desired. If None (default), and header and index are
True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex).
• mode : Python write mode, default ‘w’
• encoding: a string representing the encoding to use if the contents are non-ASCII, for Python versions prior
to 3
• line_terminator: Character sequence denoting line end (default os.linesep)
[email protected]
• quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set
T56GZSRVAH a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-
numeric
• quotechar: Character used to quote fields (default ‘”’)
• doublequote: Control quoting of quotechar in fields (default True)
• escapechar: Character used to escape sep and quotechar when appropriate (default None)
• chunksize: Number of rows to write at a time
• date_format: Format string for datetime objects
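
A short sketch combining a few of these options (the output file name is hypothetical):

import pandas as pd

df = pd.DataFrame({'A': [1.5, None, 3.25], 'B': ['x', 'y', 'z']})
# semicolon-delimited, no index column, missing values written as 'NA', two decimals
df.to_csv('out.csv', sep=';', na_rep='NA', index=False, float_format='%.2f')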

Writing a formatted string

The DataFrame object has an instance method to_string which allows control over the string representation of
the object. All arguments are optional:
• buf default None, for example a StringIO object
• columns default None, which columns to write
• col_space default None, minimum width of each column.
• na_rep default NaN, representation of NA value
• formatters default None, a dictionary (by column) of functions each of which takes a single argument and
returns a formatted string
• float_format default None, a function which takes a single (float) argument and returns a formatted string;
to be applied to floats in the DataFrame.


• sparsify default True, set to False for a DataFrame with a hierarchical index to print every MultiIndex key
at each row.
• index_names default True, will print the names of the indices
• index default True, will print the index (ie, row labels)
• header default True, will print the column labels
• justify default left, will print column headers left- or right-justified
The Series object also has a to_string method, but with only the buf, na_rep, float_format arguments.
There is also a length argument which, if set to True, will additionally output the length of the Series.
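
For instance (a minimal sketch using a couple of these arguments):

import pandas as pd

df = pd.DataFrame({'A': [1.23456, 2.34567], 'B': ['x', 'y']})
# format floats to two decimals and omit the index from the rendered string
print(df.to_string(float_format='{:.2f}'.format, index=False))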

3.1.2 JSON

Read and write JSON format files and strings.

Writing JSON

A Series or DataFrame can be converted to a valid JSON string. Use to_json with optional parameters:
• path_or_buf : the pathname or buffer to write the output. This can be None, in which case a JSON string is returned
• orient :
Series:

[email protected]– default is index


T56GZSRVAH
– allowed values are {split, records, index}
DataFrame:
– default is columns
– allowed values are {split, records, index, columns, values, table}
The format of the JSON string

split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, . . . , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array

• date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
• double_precision : The number of decimal places to use when encoding floating point values, default 10.
• force_ascii : force encoded string to be ASCII, default True.
• date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or
‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
• default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for
JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
• lines : If records orient, then will write each record per line as json.


Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the
date_format and date_unit parameters.

In [195]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))

In [196]: json = dfj.to_json()

In [197]: json
Out[197]: '{"A":{"0":-1.2945235903,"1":0.2766617129,"2":-0.0139597524,"3":-0.
˓→0061535699,"4":0.8957173022},"B":{"0":0.4137381054,"1":-0.472034511,"2":-0.

˓→3625429925,"3":-0.923060654,"4":0.8052440254}}'

Orient options

There are a number of different options for the format of the resulting JSON file / string. Consider the following
DataFrame and Series:

In [198]: dfjo = pd.DataFrame(dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),


.....: columns=list('ABC'), index=list('xyz'))
.....:

In [199]: dfjo
Out[199]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
[email protected]
T56GZSRVAHIn [200]: sjo = pd.Series(dict(x=15, y=16, z=17), name='D')

In [201]: sjo
Out[201]:
x 15
y 16
z 17
Name: D, dtype: int64

Column oriented (the default for DataFrame) serializes the data as nested JSON objects with column labels acting
as the primary index:

In [202]: dfjo.to_json(orient="columns")
Out[202]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'

# Not available for Series

Index oriented (the default for Series) similar to column oriented but the index labels are now primary:

In [203]: dfjo.to_json(orient="index")
Out[203]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'

In [204]: sjo.to_json(orient="index")
Out[204]: '{"x":15,"y":16,"z":17}'

Record oriented serializes the data to a JSON array of column -> value records, index labels are not included. This is
useful for passing DataFrame data to plotting libraries, for example the JavaScript library d3.js:


In [205]: dfjo.to_json(orient="records")
Out[205]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'

In [206]: sjo.to_json(orient="records")
Out[206]: '[15,16,17]'

Value oriented is a bare-bones option which serializes to nested JSON arrays of values only, column and index labels
are not included:

In [207]: dfjo.to_json(orient="values")
Out[207]: '[[1,4,7],[2,5,8],[3,6,9]]'

# Not available for Series

Split oriented serializes to a JSON object containing separate entries for values, index and columns. Name is also
included for Series:

In [208]: dfjo.to_json(orient="split")
Out[208]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,
˓→6,9]]}'

In [209]: sjo.to_json(orient="split")
Out[209]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'

Table oriented serializes to the JSON Table Schema, allowing for the preservation of metadata including but not
limited to dtypes and index names.

[email protected]
Note: Any orient option that encodes to a JSON object will not preserve the ordering of index and column labels
T56GZSRVAHduring round-trip serialization. If you wish to preserve label ordering use the split option as it uses ordered containers.

Date handling

Writing in ISO date format:

In [210]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))

In [211]: dfd['date'] = pd.Timestamp('20130101')

In [212]: dfd = dfd.sort_index(1, ascending=False)

In [213]: json = dfd.to_json(date_format='iso')

In [214]: json
Out[214]: '{"date":{"0":"2013-01-01T00:00:00.000Z","1":"2013-01-01T00:00:00.000Z","2":
˓→"2013-01-01T00:00:00.000Z","3":"2013-01-01T00:00:00.000Z","4":"2013-01-01T00:00:00.

˓→000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.8138502857,"4

˓→":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.1702987971,"3":0.

˓→4108345112,"4":0.1320031703}}'

Writing in ISO date format, with microseconds:

In [215]: json = dfd.to_json(date_format='iso', date_unit='us')

In [216]: json
Out[216]: '{"date":{"0":"2013-01-01T00:00:00.000000Z","1":"2013-01-01T00:00:00.000000Z
˓→","2":"2013-01-01T00:00:00.000000Z","3":"2013-01-01T00:00:00.000000Z","4":"2013-01-

˓→01T00:00:00.000000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":

˓→0.8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.

˓→1702987971,"3":0.4108345112,"4":0.1320031703}}'

Epoch timestamps, in seconds:

In [217]: json = dfd.to_json(date_format='epoch', date_unit='s')

In [218]: json
Out[218]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":
˓→1356998400},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.

˓→8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.

˓→1702987971,"3":0.4108345112,"4":0.1320031703}}'

Writing to a file, with a date index and a date column:

In [219]: dfj2 = dfj.copy()

In [220]: dfj2['date'] = pd.Timestamp('20130101')

In [221]: dfj2['ints'] = list(range(5))

In [222]: dfj2['bools'] = True

In [223]: dfj2.index = pd.date_range('20130101', periods=5)


[email protected]
T56GZSRVAH
In [224]: dfj2.to_json('test.json')

In [225]: with open('test.json') as fh:


.....: print(fh.read())
.....:
{"A":{"1356998400000":-1.2945235903,"1357084800000":0.2766617129,"1357171200000":-0.
˓→0139597524,"1357257600000":-0.0061535699,"1357344000000":0.8957173022},"B":{

˓→"1356998400000":0.4137381054,"1357084800000":-0.472034511,"1357171200000":-0.

˓→3625429925,"1357257600000":-0.923060654,"1357344000000":0.8052440254},"date":{

˓→"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":

˓→1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{

˓→"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,

˓→"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000

˓→":true,"1357257600000":true,"1357344000000":true}}

Fallback behavior

If the JSON serializer cannot handle the container contents directly it will fall back in the following manner:
• if the dtype is unsupported (e.g. np.complex) then the default_handler, if provided, will be called for
each value, otherwise an exception is raised.
• if an object is unsupported it will attempt the following:
– check if the object has defined a toDict method and call it. A toDict method should return a dict
which will then be JSON serialized.
– invoke the default_handler if one was provided.


– convert the object to a dict by traversing its contents. However this will often fail with an
OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler. For example:

>>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json() # raises


RuntimeError: Unhandled numpy dtype 15

can be dealt with by specifying a simple default_handler:

In [226]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)


Out[226]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'

Reading JSON

Reading a JSON string to pandas object can take a number of parameters. The parser will try to parse a DataFrame
if typ is not supplied or is None. To explicitly force Series parsing, pass typ=series
• filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.json
• typ : type of object to recover (series or frame), default ‘frame’
• orient :
Series :
– default is index
[email protected]
T56GZSRVAH – allowed values are {split, records, index}
DataFrame
– default is columns
– allowed values are {split, records, index, columns, values, table}
The format of the JSON string

split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, . . . , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
table adhering to the JSON Table Schema

• dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at
all, default is True, apply only to the data.
• convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
• convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default
is True.
• keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
• numpy : direct decoding to NumPy arrays. default is False; Supports numeric data only, although labels may
be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.


• precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when
decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
• date_unit : string, the timestamp unit to detect if converting dates. Default None. By default the timestamp
precision will be detected, if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp
precision to seconds, milliseconds, microseconds or nanoseconds respectively.
• lines : reads file as one json object per line.
• encoding : The encoding to use to decode py3 bytes.
• chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize
lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same option here so that decoding
produces sensible results, see Orient Options for an overview.
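
A small sketch of such a round trip (not taken from the guide's examples):

import pandas as pd

df = pd.DataFrame({'a': [1, 2]}, index=['x', 'y'])
s = df.to_json(orient='split')
# decode with the same orient that was used for encoding
df2 = pd.read_json(s, orient='split')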

Data conversion

The default of convert_axes=True, dtype=True, and convert_dates=True will try to parse the axes, and
all of the data into appropriate types, including dates. If you need to override specific dtypes, pass a dict to dtype.
convert_axes should only be set to False if you need to preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.

Note: Large integer values may be converted to dates if convert_dates=True and the data and / or column labels
appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label
meets one of the following criteria:
[email protected]
T56GZSRVAH • it ends with '_at'
• it ends with '_time'
• it begins with 'timestamp'
• it is 'modified'
• it is 'date'
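
If that coercion is not wanted, a sketch of opting out (assuming a column literally named 'date' that should stay numeric):

import pandas as pd

s = pd.DataFrame({'date': [1356998400000, 1357084800000]}).to_json()
# convert_dates=False keeps the large integers rather than interpreting them as timestamps
df = pd.read_json(s, convert_dates=False)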

Warning: When reading JSON data, automatic coercing into dtypes has some quirks:
• an index can be reconstructed in a different order from serialization, that is, the returned order is not guaran-
teed to be the same as before serialization
• a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
• bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.

Reading from a JSON string:

In [227]: pd.read_json(json)
Out[227]:
date B A
0 2013-01-01 2.565646 -1.206412
1 2013-01-01 1.340309 1.431256
2 2013-01-01 -0.226169 -1.170299
3 2013-01-01 0.813850 0.410835
4 2013-01-01 -0.827317 0.132003

Reading from a file:

In [228]: pd.read_json('test.json')
Out[228]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True

Don’t convert any data (but still convert axes and dates):

In [229]: pd.read_json('test.json', dtype=object).dtypes


Out[229]:
A object
B object
date object
ints object
bools object
dtype: object

Specify dtypes for conversion:


[email protected]
T56GZSRVAHIn [230]: pd.read_json('test.json', dtype={'A': 'float32', 'bools': 'int8'}).dtypes
Out[230]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object

Preserve string indices:

In [231]: si = pd.DataFrame(np.zeros((4, 4)), columns=list(range(4)),


.....: index=[str(i) for i in range(4)])
.....:

In [232]: si
Out[232]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0

In [233]: si.index
Out[233]: Index(['0', '1', '2', '3'], dtype='object')

In [234]: si.columns
Out[234]: Int64Index([0, 1, 2, 3], dtype='int64')
In [235]: json = si.to_json()

In [236]: sij = pd.read_json(json, convert_axes=False)

In [237]: sij
Out[237]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0

In [238]: sij.index
Out[238]: Index(['0', '1', '2', '3'], dtype='object')

In [239]: sij.columns
Out[239]: Index(['0', '1', '2', '3'], dtype='object')

Dates written in nanoseconds need to be read back in nanoseconds:


In [240]: json = dfj2.to_json(date_unit='ns')

# Try to parse timestamps as milliseconds -> Won't Work


In [241]: dfju = pd.read_json(json, date_unit='ms')

In [242]: dfju
Out[242]:
[email protected]
T56GZSRVAH A B date ints bools
1356998400000000000 -1.294524 0.413738 1356998400000000000 0 True
1357084800000000000 0.276662 -0.472035 1356998400000000000 1 True
1357171200000000000 -0.013960 -0.362543 1356998400000000000 2 True
1357257600000000000 -0.006154 -0.923061 1356998400000000000 3 True
1357344000000000000 0.895717 0.805244 1356998400000000000 4 True

# Let pandas detect the correct precision


In [243]: dfju = pd.read_json(json)

In [244]: dfju
Out[244]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True

# Or specify that all timestamps are in nanoseconds


In [245]: dfju = pd.read_json(json, date_unit='ns')

In [246]: dfju
Out[246]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True

The Numpy parameter

Note: This parameter has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates etc.

If numpy=True is passed to read_json an attempt will be made to sniff an appropriate dtype during deserialization
and to subsequently decode directly to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric data:

In [247]: randfloats = np.random.uniform(-100, 1000, 10000)

In [248]: randfloats.shape = (1000, 10)

In [249]: dffloats = pd.DataFrame(randfloats, columns=list('ABCDEFGHIJ'))

In [250]: jsonfloats = dffloats.to_json()

In [251]: %timeit pd.read_json(jsonfloats)


12.6 ms +- 185 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

[email protected]
T56GZSRVAHIn [252]: %timeit pd.read_json(jsonfloats, numpy=True)
9.34 ms +- 88.5 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

The speedup is less noticeable for smaller datasets:

In [253]: jsonfloats = dffloats.head(100).to_json()

In [254]: %timeit pd.read_json(jsonfloats)


8.06 ms +- 190 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

In [255]: %timeit pd.read_json(jsonfloats, numpy=True)


6.69 ms +- 44.6 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

Warning: Direct NumPy decoding makes a number of assumptions and may fail or produce unexpected output if
these assumptions are not satisfied:
• data is numeric.
• data is uniform. The dtype is sniffed from the first value decoded. A ValueError may be raised, or
incorrect output may be produced if this condition is not satisfied.
• labels are ordered. Labels are only read from the first container, it is assumed that each subsequent row /
column has been encoded in the same order. This should be satisfied if the data was encoded using to_json
but may not be the case if the JSON is from another source.


Normalization

pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data into a flat table.

In [256]: data = [{'id': 1, 'name': {'first': 'Coleen', 'last': 'Volk'}},


.....: {'name': {'given': 'Mose', 'family': 'Regner'}},
.....: {'id': 2, 'name': 'Faye Raker'}]
.....:

In [257]: pd.json_normalize(data)
Out[257]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mose Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker

In [258]: data = [{'state': 'Florida',


.....: 'shortname': 'FL',
.....: 'info': {'governor': 'Rick Scott'},
.....: 'county': [{'name': 'Dade', 'population': 12345},
.....: {'name': 'Broward', 'population': 40000},
.....: {'name': 'Palm Beach', 'population': 60000}]},
.....: {'state': 'Ohio',
.....: 'shortname': 'OH',
.....: 'info': {'governor': 'John Kasich'},
.....: 'county': [{'name': 'Summit', 'population': 1234},
.....: {'name': 'Cuyahoga', 'population': 1337}]}]
.....:
[email protected]
T56GZSRVAHIn [259]: pd.json_normalize(data, 'county', ['state', 'shortname', ['info', 'governor
˓→']])

Out[259]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich

The max_level parameter provides more control over which level to end normalization. With max_level=1 the following snippet normalizes until the first nesting level of the provided dict.

In [260]: data = [{'CreatedBy': {'Name': 'User001'},


.....: 'Lookup': {'TextField': 'Some text',
.....: 'UserField': {'Id': 'ID001',
.....: 'Name': 'Name001'}},
.....: 'Image': {'a': 'b'}
.....: }]
.....:

In [261]: pd.json_normalize(data, max_level=1)


Out[261]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b


Line delimited json

pandas is able to read and write line-delimited json files that are common in data processing pipelines using Hadoop
or Spark.
New in version 0.21.0.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can
be useful for large files or to read from a stream.
In [262]: jsonl = '''
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: '''
.....:

In [263]: df = pd.read_json(jsonl, lines=True)

In [264]: df
Out[264]:
a b
0 1 2
1 3 4

In [265]: df.to_json(orient='records', lines=True)


Out[265]: '{"a":1,"b":2}\n{"a":3,"b":4}'

# reader is an iterator that returns `chunksize` lines each iteration


In [266]: reader = pd.read_json(StringIO(jsonl), lines=True, chunksize=1)
[email protected]
T56GZSRVAHIn [267]: reader
Out[267]: <pandas.io.json._json.JsonReader at 0x7f3d189ec910>

In [268]: for chunk in reader:


.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4

Table schema

Table Schema is a spec for describing tabular datasets as a JSON object. The JSON includes information on the field
names, types, and other attributes. You can use the orient table to build a JSON string with two fields, schema and
data.
In [269]: df = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': ['a', 'b', 'c'],
.....: 'C': pd.date_range('2016-01-01', freq='d', periods=3)},
.....: index=pd.Index(range(3), name='idx'))
.....:

In [270]: df
Out[270]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03

In [271]: df.to_json(orient='table', date_format="iso")


Out[271]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":
˓→"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey

˓→":["idx"],"pandas_version":"0.20.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-

˓→01T00:00:00.000Z"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000Z"},{"idx":2,

˓→"A":3,"B":"c","C":"2016-01-03T00:00:00.000Z"}]}'

The schema field contains the fields key, which itself contains a list of column name to type pairs, including the
Index or MultiIndex (see below for a list of types). The schema field also contains a primaryKey field if the
(Multi)index is unique.
The second field, data, contains the serialized data with the records orient. The index is included, and any
datetimes are ISO 8601 formatted, as required by the Table Schema spec.
The full list of types supported are described in the Table Schema spec. This table shows the mapping from pandas
types:

Pandas type Table Schema type


int64 integer
float64 number
[email protected] bool boolean
T56GZSRVAH
datetime64[ns] datetime
timedelta64[ns] duration
categorical any
object str

A few notes on the generated table schema:


• The schema object contains a pandas_version field. This contains the version of pandas’ dialect of the
schema, and will be incremented with each revision.
• All dates are converted to UTC when serializing. Even timezone naive values, which are treated as UTC with
an offset of 0.

In [272]: from pandas.io.json import build_table_schema

In [273]: s = pd.Series(pd.date_range('2016', periods=4))

In [274]: build_table_schema(s)
Out[274]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}

• datetimes with a timezone (before serializing), include an additional field tz with the time zone name (e.g.
'US/Central').


In [275]: s_tz = pd.Series(pd.date_range('2016', periods=12,


.....: tz='US/Central'))
.....:

In [276]: build_table_schema(s_tz)
Out[276]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}

• Periods are converted to timestamps before serialization, and so have the same behavior of being converted to UTC. In addition, periods will contain an additional field freq with the period's frequency, e.g. 'A-DEC'.

In [277]: s_per = pd.Series(1, index=pd.period_range('2016', freq='A-DEC',


.....: periods=4))
.....:

In [278]: build_table_schema(s_per)
Out[278]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}

• Categoricals use the any type and an enum constraint listing the set of possible values. Additionally, an
ordered field is included:
[email protected]
In [279]: s_cat = pd.Series(pd.Categorical(['a', 'b', 'a']))
T56GZSRVAH
In [280]: build_table_schema(s_cat)
Out[280]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}

• A primaryKey field, containing an array of labels, is included if the index is unique:

In [281]: s_dupe = pd.Series([1, 2], index=[1, 1])

In [282]: build_table_schema(s_dupe)
Out[282]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '0.20.0'}

• The primaryKey behavior is the same with MultiIndexes, but in this case the primaryKey is an array:

In [283]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([('a', 'b'),


.....: (0, 1)]))
.....:

In [284]: build_table_schema(s_multi)
Out[284]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '0.20.0'}

• The default naming roughly follows these rules:


– For Series, the object.name is used. If that's None, then the name is values
– For DataFrames, the stringified version of the column name is used
– For Index (not MultiIndex), index.name is used, with a fallback to index if that is None.
– For MultiIndex, mi.names is used. If any level has no name, then level_<i> is used.
New in version 0.23.0.
read_json also accepts orient='table' as an argument. This allows for the preservation of metadata such as
dtypes and index names in a round-trippable manner.
In [285]: df = pd.DataFrame({'foo': [1, 2, 3, 4],
.....: 'bar': ['a', 'b', 'c', 'd'],
.....: 'baz': pd.date_range('2018-01-01', freq='d', periods=4),
.....: 'qux': pd.Categorical(['a', 'b', 'c', 'c'])
.....: }, index=pd.Index(range(4), name='idx'))
.....:

In [286]: df
[email protected]
T56GZSRVAH Out[286]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c

In [287]: df.dtypes
Out[287]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object

In [288]: df.to_json('test.json', orient='table')

In [289]: new_df = pd.read_json('test.json', orient='table')

In [290]: new_df
Out[290]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c

In [291]: new_df.dtypes
Out[291]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object

Please note that the literal string 'index' as the name of an Index is not round-trippable, nor are any names beginning with 'level_' within a MultiIndex. These are used by default in DataFrame.to_json() to indicate missing values and the subsequent read cannot distinguish the intent.

In [292]: df.index.name = 'index'

In [293]: df.to_json('test.json', orient='table')

In [294]: new_df = pd.read_json('test.json', orient='table')

In [295]: print(new_df.index.name)
None

3.1.3 HTML

Reading HTML content

[email protected]
T56GZSRVAH
Warning: We highly encourage you to read the HTML Table Parsing gotchas below regarding the issues sur-
rounding the BeautifulSoup4/html5lib/lxml parsers.

The top-level read_html() function can accept an HTML string/file/URL and will parse HTML tables into list of
pandas DataFrames. Let’s look at a few examples.

Note: read_html returns a list of DataFrame objects, even if there is only a single table contained in the
HTML content.

Read a URL with no options:

In [296]: url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'

In [297]: dfs = pd.read_html(url)

In [298]: dfs
Out[298]:
[ Bank Name City ST CERT
˓→Acquiring Institution Closing Date
0 Ericson State Bank Ericson NE 18265 Farmers and
˓→Merchants Bank February 14, 2020
1 City National Bank of New Jersey Newark NJ 21111
˓→Industrial Bank November 1, 2019
2 Resolute Bank Maumee OH 58317
˓→Buckeye State Bank October 25, 2019
3 Louisa Community Bank Louisa KY 58112 Kentucky Farmers
˓→Bank Corporation October 25, 2019
4 The Enloe State Bank Cooper TX 10716
˓→Legend Bank, N. A. May 31, 2019
.. ... ... .. ...
˓→ ... ...
555 Superior Bank, FSB Hinsdale IL 32646
˓→Superior Federal, FSB July 27, 2001
556 Malta National Bank Malta OH 6629
˓→North Valley Bank May 3, 2001
557 First Alliance Bank & Trust Co. Manchester NH 34264 Southern New
˓→Hampshire Bank & Trust February 2, 2001
558 National State Bank of Metropolis Metropolis IL 3815 Banterra
˓→Bank of Marion December 14, 2000
559 Bank of Honolulu Honolulu HI 21029 Bank
˓→of the Orient October 13, 2000

[560 rows x 6 columns]]

Note: The data from the above URL changes every Monday so the resulting data above and the data below may be
slightly different.

Read in the content of the file from the above URL and pass it to read_html as a string:
In [299]: with open(file_path, 'r') as f:
.....: dfs = pd.read_html(f.read())
[email protected]
T56GZSRVAH .....:

In [300]: dfs
Out[300]:
[ Bank Name City ST CERT
˓→ Acquiring Institution Closing Date Updated Date
0 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386
˓→ North Shore Bank, FSB May 31, 2013 May 31, 2013
1 Central Arizona Bank Scottsdale AZ 34527
˓→ Western State Bank May 14, 2013 May 20, 2013
2 Sunrise Bank Valdosta GA 58185
˓→ Synovus Bank May 10, 2013 May 21, 2013
3 Pisgah Community Bank Asheville NC 58701
˓→ Capital Bank, N.A. May 10, 2013 May 14, 2013
4 Douglas County Bank Douglasville GA 21649
˓→ Hamilton State Bank April 26, 2013 May 16, 2013
.. ... ... .. ...
˓→ ... ... ...
500 Superior Bank, FSB Hinsdale IL 32646
˓→ Superior Federal, FSB July 27, 2001 June 5, 2012
501 Malta National Bank Malta OH 6629
˓→ North Valley Bank May 3, 2001 November 18, 2002
502 First Alliance Bank & Trust Co. Manchester NH 34264 Southern New
˓→Hampshire Bank & Trust February 2, 2001 February 18, 2003
503 National State Bank of Metropolis Metropolis IL 3815
˓→Banterra Bank of Marion December 14, 2000 March 17, 2005
504 Bank of Honolulu Honolulu HI 21029
˓→ Bank of the Orient October 13, 2000 March 17, 2005

[505 rows x 7 columns]]

You can even pass in an instance of StringIO if you so desire:

In [301]: with open(file_path, 'r') as f:


.....: sio = StringIO(f.read())
.....:

In [302]: dfs = pd.read_html(sio)

In [303]: dfs
Out[303]:
[ Bank Name City ST CERT
˓→ Acquiring Institution Closing Date Updated Date
0 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386
˓→ North Shore Bank, FSB May 31, 2013 May 31, 2013
1 Central Arizona Bank Scottsdale AZ 34527
˓→ Western State Bank May 14, 2013 May 20, 2013
2 Sunrise Bank Valdosta GA 58185
˓→ Synovus Bank May 10, 2013 May 21, 2013
3 Pisgah Community Bank Asheville NC 58701
˓→ Capital Bank, N.A. May 10, 2013 May 14, 2013
4 Douglas County Bank Douglasville GA 21649
˓→ Hamilton State Bank April 26, 2013 May 16, 2013
.. ... ... .. ...
˓→ ... ... ...
500 Superior Bank, FSB Hinsdale IL 32646
˓→ Superior Federal, FSB July 27, 2001 June 5, 2012
[email protected]
T56GZSRVAH 501 Malta National Bank Malta OH 6629
˓→ North Valley Bank May 3, 2001 November 18, 2002
502 First Alliance Bank & Trust Co. Manchester NH 34264 Southern New
˓→Hampshire Bank & Trust February 2, 2001 February 18, 2003
503 National State Bank of Metropolis Metropolis IL 3815
˓→Banterra Bank of Marion December 14, 2000 March 17, 2005
504 Bank of Honolulu Honolulu HI 21029
˓→ Bank of the Orient October 13, 2000 March 17, 2005

[505 rows x 7 columns]]

Note: The following examples are not run by the IPython evaluator due to the fact that having so many network-
accessing functions slows down the documentation build. If you spot an error or an example that doesn’t run, please
do not hesitate to report it over on pandas GitHub issues page.

Read a URL and match a table that contains specific text:

match = 'Metcalf Bank'


df_list = pd.read_html(url, match=match)

Specify a header row (by default <th> or <td> elements located within a <thead> are used to form the column
index, if multiple rows are contained within <thead> then a MultiIndex is created); if specified, the header row is
taken from the data minus the parsed header elements (<th> elements).

dfs = pd.read_html(url, header=0)

Specify an index column:


dfs = pd.read_html(url, index_col=0)

Specify a number of rows to skip:

dfs = pd.read_html(url, skiprows=0)

Specify a number of rows to skip using a list (xrange (Python 2 only) works as well):

dfs = pd.read_html(url, skiprows=range(2))

Specify an HTML attribute:

dfs1 = pd.read_html(url, attrs={'id': 'table'})


dfs2 = pd.read_html(url, attrs={'class': 'sortable'})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True

Specify values that should be converted to NaN:

dfs = pd.read_html(url, na_values=['No Acquirer'])

Specify whether to keep the default set of NaN values:

dfs = pd.read_html(url, keep_default_na=False)

Specify converters for columns. This is useful for numerical text data that has leading zeros. By default columns that
are numerical are cast to numeric types and the leading zeros are lost. To avoid this, we can convert these columns to
strings.
url_mcc = 'https://en.wikipedia.org/wiki/Mobile_country_code'
dfs = pd.read_html(url_mcc, match='Telekom Albania', header=0,
converters={'MNC': str})

Use some combination of the above:

dfs = pd.read_html(url, match='Metcalf Bank', index_col=0)

Read in pandas to_html output (with some loss of floating point precision):

df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format='{0:.40g}'.format)
dfin = pd.read_html(s, index_col=0)

The lxml backend will raise an error on a failed parse if that is the only parser you provide. If you only have a single
parser you can provide just a string, but it is considered good practice to pass a list with one string if, for example, the
function expects a sequence of strings. You may use:

dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml'])

Or you could pass flavor='lxml' without a list:

dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor='lxml')

However, if you have bs4 and html5lib installed and pass None or ['lxml', 'bs4'] then the parse will most
likely succeed. Note that as soon as a parse succeeds, the function will return.

dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml', 'bs4'])


Writing to HTML files

DataFrame objects have an instance method to_html which renders the contents of the DataFrame as an HTML
table. The function arguments are as in the method to_string described above.

Note: Not all of the possible options for DataFrame.to_html are shown here for brevity’s sake. See
to_html() for the full set of options.

In [304]: df = pd.DataFrame(np.random.randn(2, 2))

In [305]: df
Out[305]:
0 1
0 -0.184744 0.496971
1 -0.856240 1.857977

In [306]: print(df.to_html()) # raw html


<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>

The columns argument will limit the columns shown:

In [307]: print(df.to_html(columns=[0]))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
</tr>
</tbody>
</table>

float_format takes a Python callable to control the precision of floating point values:

In [308]: print(df.to_html(float_format='{0:.10f}'.format))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.1847438576</td>
<td>0.4969711327</td>
</tr>
<tr>
<th>1</th>
<td>-0.8562396763</td>
<td>1.8579766508</td>
</tr>
</tbody>
</table>

bold_rows will make the row labels bold by default, but you can turn that off:

In [309]: print(df.to_html(bold_rows=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<td>1</td>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>

The classes argument provides the ability to give the resulting HTML table CSS classes. Note that these classes
are appended to the existing 'dataframe' class.
In [310]: print(df.to_html(classes=['awesome_table_class', 'even_more_awesome_class
˓→']))

<table border="1" class="dataframe awesome_table_class even_more_awesome_class">


<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>

The render_links argument provides the ability to add hyperlinks to cells that contain URLs.
New in version 0.24.
In [311]: url_df = pd.DataFrame({
.....: 'name': ['Python', 'Pandas'],
.....: 'url': ['https://www.python.org/', 'https://pandas.pydata.org']})
.....:

In [312]: print(url_df.to_html(render_links=True))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</
˓→a></td>

</tr>
<tr>
<th>1</th>
<td>Pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.
˓→org</a></td>

</tr>
</tbody>
</table>

Finally, the escape argument allows you to control whether the “<”, “>” and “&” characters are escaped in the
resulting HTML (by default it is True). So to get the HTML without escaped characters, pass escape=False:
In [313]: df = pd.DataFrame({'a': list('&<>'), 'b': np.random.randn(3)})

Escaped:
In [314]: print(df.to_html())
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&amp;</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td>&lt;</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>&gt;</td>
<td>-0.400654</td>
</tr>
</tbody>
</table>

Not escaped:
In [315]: print(df.to_html(escape=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-0.400654</td>
</tr>
</tbody>
</table>

Note: Some browsers may not show a difference in the rendering of the previous two HTML tables.

HTML Table Parsing Gotchas

There are some versioning issues surrounding the libraries that are used to parse HTML tables in the top-level pandas
io function read_html.
Issues with lxml
• Benefits
– lxml is very fast.
– lxml requires Cython to install correctly.
• Drawbacks
– lxml does not make any guarantees about the results of its parse unless it is given strictly valid markup.
– In light of the above, we have chosen to allow you, the user, to use the lxml backend, but this backend
will use html5lib if lxml fails to parse.
– It is therefore highly recommended that you install both BeautifulSoup4 and html5lib, so that you will
still get a valid result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
• The above issues hold here as well since BeautifulSoup4 is essentially just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
• Benefits
– html5lib is far more lenient than lxml and consequently deals with real-life markup in a much saner way
rather than just, e.g., dropping an element without notifying you.
– html5lib generates valid HTML5 markup from invalid markup automatically. This is extremely important
for parsing HTML tables, since it guarantees a valid document. However, that does NOT mean that it is
“correct”, since the process of fixing markup does not have a single definition.
– html5lib is pure Python and requires no additional build steps beyond its own installation.


• Drawbacks
– The biggest drawback to using html5lib is that it is slow as molasses. However consider the fact that many
tables on the web are not big enough for the parsing algorithm runtime to matter. It is more likely that the
bottleneck will be in the process of reading the raw text from the URL over the web, i.e., IO (input-output).
For very large tables, this might not be true.

3.1.4 Excel files

The read_excel() method can read Excel 2003 (.xls) files using the xlrd Python module. Excel 2007+
(.xlsx) files can be read using either xlrd or openpyxl. Binary Excel (.xlsb) files can be read using pyxlsb.
The to_excel() instance method is used for saving a DataFrame to Excel. Generally the semantics are similar
to working with csv data. See the cookbook for some advanced strategies.

Reading Excel files

In the most basic use-case, read_excel takes a path to an Excel file, and the sheet_name indicating which sheet
to parse.

# Returns a DataFrame
pd.read_excel('path_to_file.xls', sheet_name='Sheet1')

ExcelFile class

To facilitate working with multiple sheets from the same file, the ExcelFile class can be used to wrap the file and
can be passed into read_excel. There will be a performance benefit for reading multiple sheets as the file is read
into memory only once.

xlsx = pd.ExcelFile('path_to_file.xls')
df = pd.read_excel(xlsx, 'Sheet1')

The ExcelFile class can also be used as a context manager.

with pd.ExcelFile('path_to_file.xls') as xls:


df1 = pd.read_excel(xls, 'Sheet1')
df2 = pd.read_excel(xls, 'Sheet2')

The sheet_names property will generate a list of the sheet names in the file.
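
For instance, a minimal sketch (using the hypothetical 'path_to_file.xls' from the examples above) that lists the
available sheets before deciding what to parse:

with pd.ExcelFile('path_to_file.xls') as xls:
    print(xls.sheet_names)  # e.g. ['Sheet1', 'Sheet2']
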
The primary use-case for an ExcelFile is parsing multiple sheets with different parameters:

data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile('path_to_file.xls') as xls:
data['Sheet1'] = pd.read_excel(xls, 'Sheet1', index_col=None,
na_values=['NA'])
data['Sheet2'] = pd.read_excel(xls, 'Sheet2', index_col=1)

Note that if the same parsing parameters are used for all sheets, a list of sheet names can simply be passed to
read_excel with no loss in performance.


# using the ExcelFile class


data = {}
with pd.ExcelFile('path_to_file.xls') as xls:
data['Sheet1'] = pd.read_excel(xls, 'Sheet1', index_col=None,
na_values=['NA'])
data['Sheet2'] = pd.read_excel(xls, 'Sheet2', index_col=None,
na_values=['NA'])

# equivalent using the read_excel function


data = pd.read_excel('path_to_file.xls', ['Sheet1', 'Sheet2'],
index_col=None, na_values=['NA'])

ExcelFile can also be called with a xlrd.book.Book object as a parameter. This allows the user to control
how the excel file is read. For example, sheets can be loaded on demand by calling xlrd.open_workbook() with
on_demand=True.

import xlrd
xlrd_book = xlrd.open_workbook('path_to_file.xls', on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, 'Sheet1')
df2 = pd.read_excel(xls, 'Sheet2')

Specifying sheets

Note: The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.



Note: An ExcelFile’s attribute sheet_names provides access to a list of sheets.

• The argument sheet_name allows specifying the sheet or sheets to read.


• The default value for sheet_name is 0, indicating to read the first sheet.
• Pass a string to refer to the name of a particular sheet in the workbook.
• Pass an integer to refer to the index of a sheet. Indices follow Python convention, beginning at 0.
• Pass a list of either strings or integers, to return a dictionary of specified sheets.
• Pass a None to return a dictionary of all available sheets.

# Returns a DataFrame
pd.read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])

Using the sheet index:

# Returns a DataFrame
pd.read_excel('path_to_file.xls', 0, index_col=None, na_values=['NA'])

Using all default values:

# Returns a DataFrame
pd.read_excel('path_to_file.xls')

Using None to get all sheets:


# Returns a dictionary of DataFrames


pd.read_excel('path_to_file.xls', sheet_name=None)

Using a list to get multiple sheets:

# Returns the 1st and 4th sheet, as a dictionary of DataFrames.


pd.read_excel('path_to_file.xls', sheet_name=['Sheet1', 3])

read_excel can read more than one sheet, by setting sheet_name to either a list of sheet names, a list of sheet
positions, or None to read all sheets. Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.

Reading a MultiIndex

read_excel can read a MultiIndex index, by passing a list of columns to index_col and a MultiIndex
column by passing a list of rows to header. If either the index or columns have serialized level names those will
be read in as well by specifying the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:

In [316]: df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]},


.....: index=pd.MultiIndex.from_product([['a', 'b'], ['c', 'd
˓→']]))

.....:

In [317]: df.to_excel('path_to_file.xlsx')
In [318]: df = pd.read_excel('path_to_file.xlsx', index_col=[0, 1])

In [319]: df
Out[319]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8

If the index has level names, they will be parsed as well, using the same parameters.

In [320]: df.index = df.index.set_names(['lvl1', 'lvl2'])

In [321]: df.to_excel('path_to_file.xlsx')

In [322]: df = pd.read_excel('path_to_file.xlsx', index_col=[0, 1])

In [323]: df
Out[323]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8

If the source file has both MultiIndex index and columns, lists specifying each should be passed to index_col
and header:


In [324]: df.columns = pd.MultiIndex.from_product([['a'], ['b', 'd']],


.....: names=['c1', 'c2'])
.....:

In [325]: df.to_excel('path_to_file.xlsx')

In [326]: df = pd.read_excel('path_to_file.xlsx', index_col=[0, 1], header=[0, 1])

In [327]: df
Out[327]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8

Parsing specific columns

It is often the case that users will insert columns to do temporary computations in Excel and you may not want to read
in those columns. read_excel takes a usecols keyword to allow you to specify a subset of columns to parse.
Deprecated since version 0.24.0.
Passing in an integer for usecols has been deprecated. Please pass in a list of ints from 0 to usecols inclusive
instead.
If usecols is an integer, then it is assumed to indicate the last column to be parsed.

pd.read_excel('path_to_file.xls', 'Sheet1', usecols=2)

You can also specify a comma-delimited set of Excel columns and ranges as a string:

pd.read_excel('path_to_file.xls', 'Sheet1', usecols='A,C:E')

If usecols is a list of integers, then it is assumed to be the file column indices to be parsed.

pd.read_excel('path_to_file.xls', 'Sheet1', usecols=[0, 2, 3])

Element order is ignored, so usecols=[0, 1] is the same as [1, 0].


New in version 0.24.
If usecols is a list of strings, it is assumed that each string corresponds to a column name provided either by the
user in names or inferred from the document header row(s). Those strings define which columns will be parsed:

pd.read_excel('path_to_file.xls', 'Sheet1', usecols=['foo', 'bar'])

Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].


New in version 0.24.
If usecols is callable, the callable function will be evaluated against the column names, returning names where the
callable function evaluates to True.

pd.read_excel('path_to_file.xls', 'Sheet1', usecols=lambda x: x.isalpha())


Parsing dates

Datetime-like values are normally automatically converted to the appropriate dtype when reading the excel file. But
if you have a column of strings that look like dates (but are not actually formatted as dates in excel), you can use the
parse_dates keyword to parse those strings to datetimes:

pd.read_excel('path_to_file.xls', 'Sheet1', parse_dates=['date_strings'])

Cell converters

It is possible to transform the contents of Excel cells via the converters option. For instance, to convert a column
to boolean:

pd.read_excel('path_to_file.xls', 'Sheet1', converters={'MyBools': bool})

This option handles missing values and treats exceptions in the converters as missing data. Transformations are
applied cell by cell rather than to the column as a whole, so the array dtype is not guaranteed. For instance, a column
of integers with missing values cannot be transformed to an array with integer dtype, because NaN is strictly a float.
You can manually mask missing data to recover integer dtype:

def cfun(x):
return int(x) if x else -1

pd.read_excel('path_to_file.xls', 'Sheet1', converters={'MyInts': cfun})

Dtype specifications

As an alternative to converters, the type for an entire column can be specified using the dtype keyword, which takes a
dictionary mapping column names to types. To interpret data with no type inference, use the type str or object.

pd.read_excel('path_to_file.xls', dtype={'MyInts': 'int64', 'MyText': str})

Writing Excel files

Writing Excel files to disk

To write a DataFrame object to a sheet of an Excel file, you can use the to_excel instance method. The arguments
are largely the same as to_csv described above, the first argument being the name of the excel file, and the optional
second argument the name of the sheet to which the DataFrame should be written. For example:

df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')

Files with a .xls extension will be written using xlwt and those with a .xlsx extension will be written using
xlsxwriter (if available) or openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output. The index_label will be placed
in the second row instead of the first. You can place it in the first row by setting the merge_cells option in
to_excel() to False:


df.to_excel('path_to_file.xlsx', index_label='label', merge_cells=False)

In order to write separate DataFrames to separate sheets in a single Excel file, one can pass an ExcelWriter.

with pd.ExcelWriter('path_to_file.xlsx') as writer:


df1.to_excel(writer, sheet_name='Sheet1')
df2.to_excel(writer, sheet_name='Sheet2')

Note: Wringing a little more performance out of read_excel: Internally, Excel stores all numeric data as floats.
Because this can produce unexpected behavior when reading in data, pandas defaults to trying to convert integers to
floats if it doesn’t lose information (1.0 --> 1). You can pass convert_float=False to disable this behavior,
which may give a slight performance improvement.
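
For example, a minimal sketch (the file path is the hypothetical one used throughout this section):

pd.read_excel('path_to_file.xls', sheet_name='Sheet1', convert_float=False)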

Writing Excel files to memory

Pandas supports writing Excel files to buffer-like objects such as StringIO or BytesIO using ExcelWriter.

# Safe import for either Python 2.x or 3.x


try:
from io import BytesIO
except ImportError:
from cStringIO import StringIO as BytesIO

bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1')

# Save the workbook


writer.save()

# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()

Note: engine is optional but recommended. Setting the engine determines the version of workbook produced.
Setting engine='xlwt' will produce an Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If omitted, an Excel 2007-formatted workbook
is produced.


Excel writer engines

Pandas chooses an Excel writer via two methods:


1. the engine keyword argument
2. the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword argument to to_excel and to
ExcelWriter. The built-in engines are:
• openpyxl: version 2.4 or higher is required
• xlsxwriter
• xlwt

# By setting the 'engine' in the DataFrame 'to_excel()' methods.


df.to_excel('path_to_file.xlsx', sheet_name='Sheet1', engine='xlsxwriter')

# By setting the 'engine' in the ExcelWriter constructor.


writer = pd.ExcelWriter('path_to_file.xlsx', engine='xlsxwriter')

# Or via pandas configuration.


from pandas import options # noqa: E402
options.io.excel.xlsx.writer = 'xlsxwriter'

df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')

Style and formatting

The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the
DataFrame’s to_excel method.
• float_format : Format string for floating point numbers (default None).
• freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze.
Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
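
For example, a minimal sketch combining both parameters (the file name is hypothetical, following the pattern
used above):

df.to_excel('path_to_file.xlsx', sheet_name='Sheet1',
            float_format='%.2f',    # write floats with two decimal places
            freeze_panes=(1, 1))    # keep the header row and first column visible
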
Using the Xlsxwriter engine provides many options for controlling the format of an Excel worksheet created with
the to_excel method. Excellent examples can be found in the Xlsxwriter documentation here: https://xlsxwriter.
readthedocs.io/working_with_pandas.html

3.1.5 OpenDocument Spreadsheets

New in version 0.25.


The read_excel() method can also read OpenDocument spreadsheets using the odfpy module. The semantics
and features for reading OpenDocument spreadsheets match what can be done for Excel files using engine='odf'.

# Returns a DataFrame
pd.read_excel('path_to_file.ods', engine='odf')


Note: Currently pandas only supports reading OpenDocument spreadsheets. Writing is not implemented.

3.1.6 Binary Excel (.xlsb) files

New in version 1.0.0.


The read_excel() method can also read binary Excel files using the pyxlsb module. The semantics and features
for reading binary Excel files mostly match what can be done for Excel files using engine='pyxlsb'. pyxlsb
does not recognize datetime types in files and will return floats instead.
# Returns a DataFrame
pd.read_excel('path_to_file.xlsb', engine='pyxlsb')
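
Since datetimes come back as Excel serial numbers (floats), one possible workaround, sketched here with a
hypothetical column name and assuming the default 1900 date system, is to convert them manually:

df = pd.read_excel('path_to_file.xlsb', engine='pyxlsb')
# 1899-12-30 is the conventional origin for Excel's 1900 date system
df['date_col'] = pd.to_datetime(df['date_col'], unit='D', origin='1899-12-30')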

Note: Currently pandas only supports reading binary Excel files. Writing is not implemented.

3.1.7 Clipboard

A handy way to grab data is to use the read_clipboard() method, which takes the contents of the clipboard
buffer and passes them to the read_csv method. For instance, you can copy the following text to the clipboard
(CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r

And then import the data directly to a DataFrame by calling:


>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r

The to_clipboard method can be used to write the contents of a DataFrame to the clipboard. Following which
you can paste the clipboard contents into other applications (CTRL-V on many operating systems). Here we illustrate
writing a DataFrame into clipboard and reading it back.
>>> df = pd.DataFrame({'A': [1, 2, 3],
... 'B': [4, 5, 6],
... 'C': ['p', 'q', 'r']},
... index=['x', 'y', 'z'])
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r

We can see that we got the same content back, which we had earlier written to the clipboard.

Note: You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.

3.1.8 Pickling

All pandas objects are equipped with to_pickle methods which use Python's pickle module to save data
structures to disk using the pickle format.

In [328]: df
Out[328]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8

In [329]: df.to_pickle('foo.pkl')
The read_pickle function in the pandas namespace can be used to load any pickled pandas object (or any other
pickled object) from file:

In [330]: pd.read_pickle('foo.pkl')
Out[330]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8

Warning: Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html

Warning: read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3


Compressed pickle files

read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read and write compressed


pickle files. The compression types of gzip, bz2, xz are supported for reading and writing. The zip file format
only supports reading and must contain only one data file to be read.
The compression type can be an explicit parameter or be inferred from the file extension. If ‘infer’, then use gzip,
bz2, zip, or xz if filename ends in '.gz', '.bz2', '.zip', or '.xz', respectively.

In [331]: df = pd.DataFrame({
.....: 'A': np.random.randn(1000),
.....: 'B': 'foo',
.....: 'C': pd.date_range('20130101', periods=1000, freq='s')})
.....:

In [332]: df
Out[332]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]

Using an explicit compression type:

In [333]: df.to_pickle("data.pkl.compress", compression="gzip")

In [334]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")

In [335]: rt
Out[335]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39

[1000 rows x 3 columns]

Inferring compression type from the extension:


In [336]: df.to_pickle("data.pkl.xz", compression="infer")

In [337]: rt = pd.read_pickle("data.pkl.xz", compression="infer")

In [338]: rt
Out[338]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39

[1000 rows x 3 columns]

The default is to ‘infer’:

In [339]: df.to_pickle("data.pkl.gz")

In [340]: rt = pd.read_pickle("data.pkl.gz")

In [341]: rt
Out[341]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39

[1000 rows x 3 columns]

In [342]: df["A"].to_pickle("s1.pkl.bz2")

In [343]: rt = pd.read_pickle("s1.pkl.bz2")

In [344]: rt
Out[344]:
0 -0.288267
1 -0.084905
2 0.004772
3 1.382989
4 0.343635
...
995 -0.220893
996 0.492996
997 -0.461625
998 1.361779
999 -1.197988
Name: A, Length: 1000, dtype: float64

3.1.9 msgpack

pandas support for msgpack has been removed in version 1.0.0. It is recommended to use pyarrow for on-the-wire
transmission of pandas objects.
Example pyarrow usage:

>>> import pandas as pd


>>> import pyarrow as pa
>>> df = pd.DataFrame({'A': [1, 2, 3]})
>>> context = pa.default_serialization_context()
>>> df_bytestring = context.serialize(df).to_buffer().to_pybytes()
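
To reconstruct the DataFrame from those bytes on the receiving side, the same context can be used; this is a sketch
rather than an officially documented pandas workflow:

>>> df_restored = context.deserialize(df_bytestring)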

For documentation on pyarrow, see here.

3.1.10 HDF5 (PyTables)

HDFStore is a dict-like object which reads and writes pandas using the high performance HDF5 format using the
excellent PyTables library. See the cookbook for some advanced strategies
Warning: pandas requires PyTables >= 3.0.0. There is an indexing bug in PyTables < 3.2 which may appear
when querying stores using an index. If you see a subset of results being returned, upgrade to PyTables >= 3.2.
Stores created previously will need to be rewritten using the updated version.

In [345]: store = pd.HDFStore('store.h5')

In [346]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

Objects can be written to the file just like adding key-value pairs to a dict:

In [347]: index = pd.date_range('1/1/2000', periods=8)

In [348]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [349]: df = pd.DataFrame(np.random.randn(8, 3), index=index,


.....: columns=['A', 'B', 'C'])
.....:

# store.put('s', s) is an equivalent method


In [350]: store['s'] = s

In [351]: store['df'] = df


In [352]: store
Out[352]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

In a current or later Python session, you can retrieve stored objects:


# store.get('df') is an equivalent method
In [353]: store['df']
Out[353]:
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288

# dotted (attribute) access provides get as well


In [354]: store.df
Out[354]:
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288

Deletion of the object specified by the key:


# store.remove('df') is an equivalent method
In [355]: del store['df']

In [356]: store
Out[356]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

Closing a Store and using a context manager:


In [357]: store.close()

In [358]: store
Out[358]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

In [359]: store.is_open
Out[359]: False

# Working with, and automatically closing the store using a context manager
In [360]: with pd.HDFStore('store.h5') as store:
.....: store.keys()
.....:

Read/write API

HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing, similar to how
read_csv and to_csv work.

In [361]: df_tl = pd.DataFrame({'A': list(range(5)), 'B': list(range(5))})

In [362]: df_tl.to_hdf('store_tl.h5', 'table', append=True)

In [363]: pd.read_hdf('store_tl.h5', 'table', where=['index>2'])


Out[363]:
A B
3 3 3
4 4 4

HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.

In [364]: df_with_missing = pd.DataFrame({'col1': [0, np.nan, 2],


.....: 'col2': [1, np.nan, np.nan]})
.....:

In [365]: df_with_missing
Out[365]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN

In [366]: df_with_missing.to_hdf('file.h5', 'df_with_missing',


.....: format='table', mode='w')
.....:

In [367]: pd.read_hdf('file.h5', 'df_with_missing')


Out[367]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN

In [368]: df_with_missing.to_hdf('file.h5', 'df_with_missing',


.....: format='table', mode='w', dropna=True)
.....:

In [369]: pd.read_hdf('file.h5', 'df_with_missing')


Out[369]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN


Fixed format

The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply remove them and
rewrite). Nor are they queryable; they must be retrieved in their entirety. They also do not support dataframes with
non-unique column names. The fixed format stores offer very fast writing and slightly faster reading than table
stores. This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
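
For example, a minimal sketch making the fixed format explicit (the file name is hypothetical):

df.to_hdf('store_fixed.h5', 'df', format='fixed')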

Warning: A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf('test_fixed.h5', 'df')
>>> pd.read_hdf('test_fixed.h5', 'df', where='index>5')
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety

Table format

HDFStore supports another PyTables format on disk, the table format. Conceptually a table is shaped very
much like a DataFrame, with rows and columns. A table may be appended to in the same or other sessions.
In addition, delete and query type operations are supported. This format is specified by format='table' or
format='t' to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to make
put/append/to_hdf store in the table format by default.
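
For example, a minimal sketch of enabling the option globally (the file name is hypothetical and the setting applies
for the rest of the session):

pd.set_option('io.hdf.default_format', 'table')
df.to_hdf('store_default.h5', 'df')  # stored in the table format without passing format=
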
In [370]: store = pd.HDFStore('store.h5')
In [371]: df1 = df[0:4]

In [372]: df2 = df[4:]

# append data (creates a table automatically)


In [373]: store.append('df', df1)

In [374]: store.append('df', df2)

In [375]: store
Out[375]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

# select the entire object


In [376]: store.select('df')
Out[376]:
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288

# the type of stored data



In [377]: store.root.df._v_attrs.pandas_type
Out[377]: 'frame_table'

Note: You can also create a table by passing format='table' or format='t' to a put operation.
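
A minimal sketch of this, using a hypothetical key:

store.put('df_put_table', df, format='table')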

Hierarchical keys

Keys to a store can be specified as a string. These can be in a hierarchical path-name like format (e.g. foo/bar/
bah), which will generate a hierarchy of sub-stores (or Groups in PyTables parlance). Keys can be specified without
the leading ‘/’ and are always absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove everything in the
sub-store and below, so be careful.

In [378]: store.put('foo/bar/bah', df)

In [379]: store.append('food/orange', df)

In [380]: store.append('food/apple', df)

In [381]: store
Out[381]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

# a list of keys are returned


In [382]: store.keys()
Out[382]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']

# remove all nodes under this level


In [383]: store.remove('food')

In [384]: store
Out[384]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

You can walk through the group hierarchy using the walk method which will yield a tuple for each group key along
with the relative keys of its contents.
New in version 0.24.0.

In [385]: for (path, subgroups, subkeys) in store.walk():


.....: for subgroup in subgroups:
.....: print('GROUP: {}/{}'.format(path, subgroup))
.....: for subkey in subkeys:
.....: key = '/'.join([path, subkey])
.....: print('KEY: {}'.format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288

Warning: Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored
under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'

# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array),
˓→'axis1' (Array)]

Instead, use explicit string based keys:


In [386]: store['foo/bar/bah']
Out[386]:
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288


Storing types

Storing mixed types in a table

Storing mixed-dtype data is supported. Strings are stored as fixed-width using the maximum size of the appended
column. Subsequent attempts at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append will set a larger minimum for the string
columns. Storing floats, strings, ints, bools, and datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default nan representation on disk (which converts
to/from np.nan); this defaults to nan.

In [387]: df_mixed = pd.DataFrame({'A': np.random.randn(8),


.....: 'B': np.random.randn(8),
.....: 'C': np.array(np.random.randn(8), dtype='float32'),
.....: 'string': 'string',
.....: 'int': 1,
.....: 'bool': True,
.....: 'datetime64': pd.Timestamp('20010102')},
.....: index=list(range(8)))
.....:

In [388]: df_mixed.loc[df_mixed.index[3:5],
.....: ['A', 'B', 'string', 'datetime64']] = np.nan
.....:

In [389]: store.append('df_mixed', df_mixed, min_itemsize={'values': 50})

In [390]: df_mixed1 = store.select('df_mixed')

In [391]: df_mixed1
Out[391]:
A B C string int bool datetime64
0 -0.116008 0.743946 -0.398501 string 1 True 2001-01-02
1 0.592375 -0.533097 -0.677311 string 1 True 2001-01-02
2 0.476481 -0.140850 -0.874991 string 1 True 2001-01-02
3 NaN NaN -1.167564 NaN 1 True NaT
4 NaN NaN -0.593353 NaN 1 True NaT
5 0.852727 0.463819 0.146262 string 1 True 2001-01-02
6 -1.177365 0.793644 -0.131959 string 1 True 2001-01-02
7 1.236988 0.221252 0.089012 string 1 True 2001-01-02

In [392]: df_mixed1.dtypes.value_counts()
Out[392]:
float64 2
bool 1
float32 1
object 1
datetime64[ns] 1
int64 1
dtype: int64

# we have provided a minimum string column size


In [393]: store.root.df_mixed.table
Out[393]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": Int64Col(shape=(1,), dflt=0, pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False}

Storing MultiIndex DataFrames

Storing MultiIndex DataFrames as tables is very similar to storing/selecting from homogeneous index
DataFrames.
In [394]: index = pd.MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
.....: ['one', 'two', 'three']],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
.....: [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=['foo', 'bar'])
.....:

In [395]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index,


.....: columns=['A', 'B', 'C'])
.....:

In [396]: df_mi
Out[396]:
A B C
foo bar
foo one 0.667450 0.169405 -1.358046
two -0.105563 0.492195 0.076693
three 0.213685 -0.285283 -1.210529
bar one -1.408386 0.941577 -0.342447
two 0.222031 0.052607 2.093214
baz two 1.064908 1.778161 -0.913867
three -0.030004 -0.399846 -1.234765
qux one 0.081323 -0.268494 0.168016
two -0.898283 -0.218499 1.408028
three -1.267828 -0.689263 0.520995

In [397]: store.append('df_mi', df_mi)

In [398]: store.select('df_mi')
Out[398]:
A B C
foo bar
foo one 0.667450 0.169405 -1.358046
two -0.105563 0.492195 0.076693
three 0.213685 -0.285283 -1.210529
bar one -1.408386 0.941577 -0.342447
two 0.222031 0.052607 2.093214
baz two 1.064908 1.778161 -0.913867
three -0.030004 -0.399846 -1.234765
qux one 0.081323 -0.268494 0.168016
two -0.898283 -0.218499 1.408028
three -1.267828 -0.689263 0.520995

# the levels are automatically included as data columns


In [399]: store.select('df_mi', 'foo=bar')
Out[399]:
A B C
foo bar
bar one -1.408386 0.941577 -0.342447
two 0.222031 0.052607 2.093214

Note: The index keyword is reserved and cannot be used as a level name.

Querying

Querying a table

select and delete operations have an optional criterion that can be specified to select/delete only a subset of the
data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
A query is specified using the Term class under the hood, as a boolean expression.
• index and columns are supported indexers of DataFrames.
• if data_columns are specified, these can be used as additional indexers.
• level name in a MultiIndex, with default name level_0, level_1, . . . if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
• | : or
• & : and
• ( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.

Note:
• = will be automatically expanded to the comparison operator ==
• ~ is the not operator, but can only be used in very limited circumstances
• If a list/tuple of expressions is passed they will be combined via &

The following are valid expressions:


• 'index >= date'
• "columns = ['A', 'D']"


• "columns in ['A', 'D']"


• 'columns = A'
• 'columns == A'
• "~(columns = ['A', 'B'])"
• 'index > df.index[3] & string = "bar"'
• '(index > df.index[3] & index <= df.index[6]) | string = "bar"'
• "ts >= Timestamp('2012-02-01')"
• "major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
• functions that will be evaluated, e.g. Timestamp('2012-02-01')
• strings, e.g. "bar"
• date-like, e.g. 20130101, or "20130101"
• lists, e.g. "['A', 'B']"
• variables that are defined in the local names space, e.g. date

Note: Passing a string to a query by interpolating it into the query expression is not recommended. Simply assign the
string of interest to a variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select('df', 'index == string')

instead of this

string = "HolyMoly'"
store.select('df', 'index == %s' % string)

The latter will not work and will raise a SyntaxError. Note that there's a single quote followed by a double quote
in the string variable.
If you must interpolate, use the '%r' format specifier

store.select('df', 'index == %r' % string)

which will quote string.

Here are some examples:

In [400]: dfq = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'),


.....: index=pd.date_range('20130101', periods=10))
.....:

In [401]: store.append('dfq', dfq, format='table', data_columns=True)

Use boolean expressions, with in-line function evaluation.


In [402]: store.select('dfq', "index>pd.Timestamp('20130104') & columns=['A', 'B']")


Out[402]:
A B
2013-01-05 -1.083889 0.811865
2013-01-06 -0.402227 1.618922
2013-01-07 0.948196 0.183573
2013-01-08 -1.043530 -0.708145
2013-01-09 0.813949 1.508891
2013-01-10 1.176488 -1.246093

Use inline column reference.

In [403]: store.select('dfq', where="A>0 or C>0")


Out[403]:
A B C D
2013-01-01 0.620028 0.159416 -0.263043 -0.639244
2013-01-04 -0.536722 1.005707 0.296917 0.139796
2013-01-05 -1.083889 0.811865 1.648435 -0.164377
2013-01-07 0.948196 0.183573 0.145277 0.308146
2013-01-08 -1.043530 -0.708145 1.430905 -0.850136
2013-01-09 0.813949 1.508891 -1.556154 0.187597
2013-01-10 1.176488 -1.246093 -0.002726 -0.444249

The columns keyword can be supplied to select a list of columns to be returned; this is equivalent to passing
'columns=list_of_columns_to_filter':

In [404]: store.select('df', "columns=['A', 'B']")


Out[404]:
A B
2000-01-01 1.334065 0.521036
2000-01-02 -1.613932 1.088104
2000-01-03 -0.585314 -0.275038
2000-01-04 0.632369 -1.249657
2000-01-05 1.060617 -0.143682
2000-01-06 3.050329 1.317933
2000-01-07 -0.539452 -0.771133
2000-01-08 0.649464 -1.736427

start and stop parameters can be specified to limit the total search space. These are in terms of the total number
of rows in a table.
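
For example, a minimal sketch that retrieves only the first three rows of the 'df' table stored earlier:

store.select('df', start=0, stop=3)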

Note: select will raise a ValueError if the query expression has an unknown variable reference. Usually this
means that you are trying to select on a column that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.


Query timedelta64[ns]

You can store and query using the timedelta64[ns] type. Terms can be specified in the format:
<float>(<unit>), where float may be signed (and fractional), and unit can be D,s,ms,us,ns for the timedelta.
Here’s an example:

In [405]: from datetime import timedelta

In [406]: dftd = pd.DataFrame({'A': pd.Timestamp('20130101'),


.....: 'B': [pd.Timestamp('20130101') + timedelta(days=i,
.....: seconds=10)
.....: for i in range(10)]})
.....:

In [407]: dftd['C'] = dftd['A'] - dftd['B']

In [408]: dftd
Out[408]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50

In [409]: store.append('dftd', dftd, data_columns=True)

In [410]: store.select('dftd', "C<'-3.5D'")


Out[410]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50

Query MultiIndex

Selecting from a MultiIndex can be achieved by using the name of the level.

In [411]: df_mi.index.names
Out[411]: FrozenList(['foo', 'bar'])

In [412]: store.select('df_mi', "foo=baz and bar=two")


Out[412]:
A B C
foo bar
baz two 1.064908 1.778161 -0.913867

If the MultiIndex level names are None, the levels are automatically made available via the level_n keyword


with n the level of the MultiIndex you want to select from.

In [413]: index = pd.MultiIndex(


.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:

In [414]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3),


.....: index=index, columns=["A", "B", "C"])
.....:

In [415]: df_mi_2
Out[415]:
A B C
foo one 0.856838 1.491776 0.001283
two 0.701816 -1.097917 0.102588
three 0.661740 0.443531 0.559313
bar one -0.459055 -1.222598 -0.455304
two -0.781163 0.826204 -0.530057
baz two 0.296135 1.366810 1.073372
three -0.994957 0.755314 2.119746
qux one -2.628174 -0.089460 -0.133636
two 0.337920 -0.634027 0.421107
three 0.604303 1.053434 1.109090

In [416]: store.append("df_mi_2", df_mi_2)

# the levels are automatically included as data columns with keyword level_n
In [417]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[417]:
A B C
foo two 0.701816 -1.097917 0.102588

Indexing

You can create/modify an index for a table with create_table_index after data is already in the table (after an
append/put operation). Creating a table index is highly encouraged. This will speed your queries a great deal
when you use a select with the indexed dimension as the where.

Note: Indexes are automagically created on the indexables and any data columns you specify. This behavior can be
turned off by passing index=False to append.

# we have automagically already created an index (in the first section)


In [418]: i = store.root.df.table.cols.index.index

In [419]: i.optlevel, i.kind


Out[419]: (6, 'medium')

# change an index by passing new parameters


In [420]: store.create_table_index('df', optlevel=9, kind='full')

In [421]: i = store.root.df.table.cols.index.index


In [422]: i.optlevel, i.kind
Out[422]: (9, 'full')

Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append,
then recreate at the end.

In [423]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))

In [424]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))

In [425]: st = pd.HDFStore('appends.h5', mode='w')

In [426]: st.append('df', df_1, data_columns=['B'], index=False)

In [427]: st.append('df', df_2, data_columns=['B'], index=False)

In [428]: st.get_storer('df').table
Out[428]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)

Then create the index when finished appending.


[email protected]
T56GZSRVAHIn [429]: st.create_table_index('df', columns=['B'], optlevel=9, kind='full')

In [430]: st.get_storer('df').table
Out[430]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, full, shuffle, zlib(1)).is_csi=True}

In [431]: st.close()

See here for how to create a completely-sorted-index (CSI) on an existing store.


Query via data columns

You can designate (and index) certain columns on which you want to be able to perform queries (in addition to the indexable columns, which you can always query). For instance, say you want to perform this common operation on-disk, and return just the frame that matches this query. You can specify data_columns=True to force all columns to be data_columns.

In [432]: df_dc = df.copy()

In [433]: df_dc['string'] = 'foo'

In [434]: df_dc.loc[df_dc.index[4:6], 'string'] = np.nan

In [435]: df_dc.loc[df_dc.index[7:9], 'string'] = 'bar'

In [436]: df_dc['string2'] = 'cool'

In [437]: df_dc.loc[df_dc.index[1:3], ['B', 'C']] = 1.0

In [438]: df_dc
Out[438]:
A B C string string2
2000-01-01 1.334065 0.521036 0.930384 foo cool
2000-01-02 -1.613932 1.000000 1.000000 foo cool
2000-01-03 -0.585314 1.000000 1.000000 foo cool
2000-01-04 0.632369 -1.249657 0.975593 foo cool
2000-01-05 1.060617 -0.143682 0.218423 NaN cool
2000-01-06 3.050329 1.317933 -0.963725 NaN cool
[email protected]
2000-01-07 -0.539452 -0.771133 0.023751 foo cool
T56GZSRVAH2000-01-08 0.649464 -1.736427 0.197288 bar cool

# on-disk operations
In [439]: store.append('df_dc', df_dc, data_columns=['B', 'C', 'string', 'string2'])

In [440]: store.select('df_dc', where='B > 0')


Out[440]:
A B C string string2
2000-01-01 1.334065 0.521036 0.930384 foo cool
2000-01-02 -1.613932 1.000000 1.000000 foo cool
2000-01-03 -0.585314 1.000000 1.000000 foo cool
2000-01-06 3.050329 1.317933 -0.963725 NaN cool

# getting creative
In [441]: store.select('df_dc', 'B > 0 & C > 0 & string == foo')
Out[441]:
A B C string string2
2000-01-01 1.334065 0.521036 0.930384 foo cool
2000-01-02 -1.613932 1.000000 1.000000 foo cool
2000-01-03 -0.585314 1.000000 1.000000 foo cool

# this is in-memory version of this type of selection


In [442]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == 'foo')]
Out[442]:
A B C string string2
2000-01-01 1.334065 0.521036 0.930384 foo cool
2000-01-02 -1.613932 1.000000 1.000000 foo cool
2000-01-03 -0.585314 1.000000 1.000000 foo cool

# we have automagically created this index and the B/C/string/string2


# columns are stored separately as ``PyTables`` columns
In [443]: store.root.df_dc.table
Out[443]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"B": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"C": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"string": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"string2": Index(6, medium, shuffle, zlib(1)).is_csi=False}

There is some performance degradation by making lots of columns into data columns, so it is up to the user to designate these. In addition, you cannot change data columns (nor indexables) after the first append/put operation (of course you can simply read in the data and create a new table!).

[email protected]
T56GZSRVAHIterator

You can pass iterator=True or chunksize=number_in_a_chunk to select and select_as_multiple to return an iterator on the results. The default is 50,000 rows returned in a chunk.

In [444]: for df in store.select('df', chunksize=3):


.....: print(df)
.....:
A B C
2000-01-01 1.334065 0.521036 0.930384
2000-01-02 -1.613932 1.088104 -0.632963
2000-01-03 -0.585314 -0.275038 -0.937512
A B C
2000-01-04 0.632369 -1.249657 0.975593
2000-01-05 1.060617 -0.143682 0.218423
2000-01-06 3.050329 1.317933 -0.963725
A B C
2000-01-07 -0.539452 -0.771133 0.023751
2000-01-08 0.649464 -1.736427 0.197288

Note: You can also use the iterator with read_hdf which will open, then automatically close the store when finished
iterating.

for df in pd.read_hdf('store.h5', 'df', chunksize=3):


print(df)


Note that the chunksize keyword applies to the source rows. So if you are doing a query, the chunksize will subdivide the total rows in the table with the query applied to each chunk, returning an iterator of potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return chunks.
In [445]: dfeq = pd.DataFrame({'number': np.arange(1, 11)})

In [446]: dfeq
Out[446]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10

In [447]: store.append('dfeq', dfeq, data_columns=['number'])

In [448]: def chunks(l, n):


.....: return [l[i:i + n] for i in range(0, len(l), n)]
.....:

In [449]: evens = [2, 4, 6, 8, 10]

In [450]: coordinates = store.select_as_coordinates('dfeq', 'number=evens')


[email protected]
T56GZSRVAH
In [451]: for c in chunks(coordinates, 2):
.....: print(store.select('dfeq', where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10

Advanced queries

Select a single column

To retrieve a single indexable or data column, use the method select_column. This will, for example, enable you
to get the index very quickly. These return a Series of the result, indexed by the row number. These do not currently
accept the where selector.
In [452]: store.select_column('df_dc', 'index')
Out[452]:
0 2000-01-01
1 2000-01-02
2 2000-01-03


3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]

In [453]: store.select_column('df_dc', 'string')


Out[453]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object

Selecting coordinates

Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an Int64Index
of the resulting locations. These coordinates can also be passed to subsequent where operations.

In [454]: df_coord = pd.DataFrame(np.random.randn(1000, 2),


.....: index=pd.date_range('20000101', periods=1000))
[email protected]
T56GZSRVAH .....:

In [455]: store.append('df_coord', df_coord)

In [456]: c = store.select_as_coordinates('df_coord', 'index > 20020101')

In [457]: c
Out[457]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)

In [458]: store.select('df_coord', where=c)


Out[458]:
0 1
2002-01-02 -0.165548 0.646989
2002-01-03 0.782753 -0.123409
2002-01-04 -0.391932 -0.740915
2002-01-05 1.211070 -0.668715
2002-01-06 0.341987 -0.685867
... ... ...
2002-09-22 1.788110 -0.405908
2002-09-23 -0.801912 0.768460
2002-09-24 0.466284 -0.457411
2002-09-25 -0.364060 0.785367
2002-09-26 -1.463093 1.187315

[268 rows x 2 columns]


Selecting using a where mask

Sometimes your query can involve creating a list of rows to select. Usually this mask would be a resulting index from an indexing operation. This example selects the rows of a DatetimeIndex whose month is 5.

In [459]: df_mask = pd.DataFrame(np.random.randn(1000, 2),


.....: index=pd.date_range('20000101', periods=1000))
.....:

In [460]: store.append('df_mask', df_mask)

In [461]: c = store.select_column('df_mask', 'index')

In [462]: where = c[pd.DatetimeIndex(c).month == 5].index

In [463]: store.select('df_mask', where=where)


Out[463]:
0 1
2000-05-01 1.735883 -2.615261
2000-05-02 0.422173 2.425154
2000-05-03 0.632453 -0.165640
2000-05-04 -1.017207 -0.005696
2000-05-05 0.299606 0.070606
... ... ...
2002-05-27 0.234503 1.199126
2002-05-28 -3.021833 -1.016828
2002-05-29 0.522794 0.063465
2002-05-30 -1.653736 0.031709
2002-05-31 -0.968402 -0.393583
[email protected]
T56GZSRVAH
[93 rows x 2 columns]

Storer object

If you want to inspect the stored object, retrieve it via get_storer. You could use this programmatically to, say, get the number of rows in an object.

In [464]: store.get_storer('df_dc').nrows
Out[464]: 8

Multiple table queries

The methods append_to_multiple and select_as_multiple can perform appending/selecting from multiple tables at once. The idea is to have one table (call it the selector table) that indexes most/all of the columns and on which you perform your queries. The other table(s) are data tables with an index matching the selector table's index. You can then perform a very fast query on the selector table, yet get lots of data back. This method is similar to having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame into multiple tables according to d, a dictionary that maps the table names to a list of 'columns' you want in that table. If None is used in place of a list, that table will have the remaining unspecified columns of the given DataFrame. The argument selector defines which table is the selector table (which you can make queries from). The argument dropna will drop rows from the input DataFrame to ensure tables are synchronized. This means that if a row for one of the tables being written to is entirely np.nan, that row will be dropped from all tables.


If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES. Remember that entirely np.nan rows are not written to the HDFStore, so if you choose to call dropna=False, some tables may have more rows than others, and therefore select_as_multiple may not work or it may return unexpected results.

In [465]: df_mt = pd.DataFrame(np.random.randn(8, 6),


.....: index=pd.date_range('1/1/2000', periods=8),
.....: columns=['A', 'B', 'C', 'D', 'E', 'F'])
.....:

In [466]: df_mt['foo'] = 'bar'

In [467]: df_mt.loc[df_mt.index[1], ('A', 'B')] = np.nan

# you can also create the tables individually


In [468]: store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
.....: df_mt, selector='df1_mt')
.....:

In [469]: store
Out[469]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5

# individual tables were created


In [470]: store.select('df1_mt')
Out[470]:
A B
2000-01-01 1.251079 -0.362628
[email protected]
T56GZSRVAH2000-01-02 NaN NaN
2000-01-03 0.719421 -0.448886
2000-01-04 1.140998 -0.877922
2000-01-05 1.043605 1.798494
2000-01-06 -0.467812 -0.027965
2000-01-07 0.150568 0.754820
2000-01-08 -0.596306 -0.910022

In [471]: store.select('df2_mt')
Out[471]:
C D E F foo
2000-01-01 1.602451 -0.221229 0.712403 0.465927 bar
2000-01-02 -0.525571 0.851566 -0.681308 -0.549386 bar
2000-01-03 -0.044171 1.396628 1.041242 -1.588171 bar
2000-01-04 0.463351 -0.861042 -2.192841 -1.025263 bar
2000-01-05 -1.954845 -1.712882 -0.204377 -1.608953 bar
2000-01-06 1.601542 -0.417884 -2.757922 -0.307713 bar
2000-01-07 -1.935461 1.007668 0.079529 -1.459471 bar
2000-01-08 -1.057072 -0.864360 -1.124870 1.732966 bar

# as a multiple
In [472]: store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
.....: selector='df1_mt')
.....:
Out[472]:
A B C D E F foo
2000-01-05 1.043605 1.798494 -1.954845 -1.712882 -0.204377 -1.608953 bar
2000-01-07 0.150568 0.754820 -1.935461 1.007668 0.079529 -1.459471 bar


Delete from a table

You can delete from a table selectively by specifying a where. In deleting rows, it is important to understand that PyTables deletes rows by erasing the rows, then moving the following data. Thus deleting can potentially be a very expensive operation depending on the orientation of your data. To get optimal performance, it's worthwhile to have the dimension you are deleting be the first of the indexables.
Data is ordered (on the disk) in terms of the indexables. Here’s a simple use case. You store panel-type data, with
dates in the major_axis and ids in the minor_axis. The data is then interleaved like this:
• date_1
– id_1
– id_2
– .
– id_n
• date_2
– id_1
– .
– id_n
It should be clear that a delete operation on the major_axis will be fairly quick, as one chunk is removed, then the
following data moved. On the other hand a delete operation on the minor_axis will be very expensive. In this case
it would almost certainly be faster to rewrite the table using a where that selects all but the missing data.
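For example, a selective delete might look like the following minimal sketch (not taken from the original examples; the store key and the cutoff value are illustrative):

# a minimal sketch: remove only the rows matching the where condition
# 'df_dc' and the cutoff date are placeholders
store.remove('df_dc', where="index > pd.Timestamp('2000-01-06')")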

[email protected]
T56GZSRVAH Warning: Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files automatically. Thus, repeatedly
deleting (or removing nodes) and adding again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.

Notes & caveats

Compression

PyTables allows the stored data to be compressed. This applies to all kinds of stores, not just tables. Two parameters
are used to control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed. complevel=0 and complevel=None disables compression and 0<complevel<10 enables compression.
complib specifies which compression library to use. If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates or speed and the results will depend
on the type of data. Which type of compression to choose depends on your specific needs and data. The list of
supported compression libraries:
• zlib: The default compression library. A classic in terms of compression, achieves good compression rates but is somewhat slow.
• lzo: Fast compression and decompression.
• bzip2: Good compression rates.
• blosc: Fast compression and decompression.


Support for alternative blosc compressors:


– blosc:blosclz This is the default compressor for blosc
– blosc:lz4: A compact, very popular and fast compressor.
– blosc:lz4hc: A tweaked version of LZ4, produces better compression ratios at the
expense of speed.
– blosc:snappy: A popular compressor used in many places.
– blosc:zlib: A classic; somewhat slower than the previous ones, but achieving better
compression ratios.
– blosc:zstd: An extremely well balanced codec; it provides the best compression ratios
among the others above, and at reasonably fast speed.
If complib is defined as something other than the listed libraries a ValueError exception is
issued.

Note: If the library specified with the complib option is missing on your platform, compression defaults to zlib
without further ado.

Enable compression for all objects within the file:

store_compressed = pd.HDFStore('store_compressed.h5', complevel=9,


complib='blosc:blosclz')

Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
[email protected]
T56GZSRVAHstore.append('df', df, complib='zlib', complevel=5)

ptrepack

PyTables offers better write performance when tables are compressed after they are written, as opposed to turning on
compression at the very beginning. You can use the supplied PyTables utility ptrepack. In addition, ptrepack
can change compression levels after the fact.

ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5

Furthermore ptrepack in.h5 out.h5 will repack the file to allow you to reuse previously deleted space. Alternatively, one can simply remove the file and write again, or use the copy method.

Caveats

Warning: HDFStore is not threadsafe for writing. The underlying PyTables only supports concurrent reads (via threading or processes). If you need reading and writing at the same time, you need to serialize these operations in a single thread in a single process. You will corrupt your data otherwise. See (GH2397) for more information.

• If you use locks to manage write access between multiple processes, you may want to use fsync() before
releasing write locks. For convenience you can use store.flush(fsync=True) to do this for you.
• Once a table is created columns (DataFrame) are fixed; only exactly the same columns can be appended


• Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not necessarily equal across time-
zone versions. So if data is localized to a specific timezone in the HDFStore using one version of a timezone
library and that data is updated with another version, the data will be converted to UTC since these timezones
are not considered equal. Either use the same version of timezone library or use tz_convert with the updated
timezone definition.
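As a sketch of the second approach mentioned in the bullet above (purely illustrative; the store key is a placeholder):

# hypothetical sketch: after reading back, re-apply the timezone using the
# currently installed timezone definitions
df_tz = store.select('df_tz')                       # 'df_tz' is a placeholder key
df_tz.index = df_tz.index.tz_convert('US/Eastern')  # convert using the updated tz definition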

Warning: PyTables will show a NaturalNameWarning if a column name cannot be used as an attribute
selector. Natural identifiers contain only letters, numbers, and underscores, and may not begin with a number.
Other identifiers cannot be used in a where clause and are generally a bad idea.

DataTypes

HDFStore will map an object dtype to the PyTables underlying dtype. This means the following types are known
to work:

Type                                                  Represents missing values
floating : float64, float32, float16                  np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                         NaT
timedelta64[ns]                                        NaT
categorical : see the section below
object : strings                                       np.nan

[email protected]
T56GZSRVAHunicode columns are not supported, and WILL FAIL.

Categorical data

You can write data that contains category dtypes to a HDFStore. Queries work the same as if it was an object
array. However, the category dtyped data is stored in a more efficient manner.

In [473]: dfcat = pd.DataFrame({'A': pd.Series(list('aabbcdba')).astype('category'),


.....: 'B': np.random.randn(8)})
.....:

In [474]: dfcat
Out[474]:
A B
0 a 0.477849
1 a 0.283128
2 b -2.045700
3 b -0.338206
4 c -0.423113
5 d 2.314361
6 b -0.033100
7 a -0.965461

In [475]: dfcat.dtypes
Out[475]:
A category
B float64


dtype: object

In [476]: cstore = pd.HDFStore('cats.h5', mode='w')

In [477]: cstore.append('dfcat', dfcat, format='table', data_columns=['A'])

In [478]: result = cstore.select('dfcat', where="A in ['b', 'c']")

In [479]: result
Out[479]:
A B
2 b -2.045700
3 b -0.338206
4 c -0.423113
6 b -0.033100

In [480]: result.dtypes
Out[480]:
A category
B float64
dtype: object

String columns

min_itemsize
[email protected]
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns. A string
T56GZSRVAH
column itemsize is calculated as the maximum of the length of data (for that column) that is passed to the HDFStore,
in the first append. Subsequent appends, may introduce a string for a column larger than the column can hold, an
Exception will be raised (otherwise you could have a silent truncation of these columns, leading to loss of information).
In the future we may relax this and allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key
to allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.

Note: If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed.

In [481]: dfs = pd.DataFrame({'A': 'foo', 'B': 'bar'}, index=list(range(5)))

In [482]: dfs
Out[482]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar

# A and B have a size of 30




In [483]: store.append('dfs', dfs, min_itemsize=30)

In [484]: store.get_storer('dfs').table
Out[484]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False}

# A is created as a data_column with a size of 30


# B is size is calculated
In [485]: store.append('dfs2', dfs, min_itemsize={'A': 30})

In [486]: store.get_storer('dfs2').table
Out[486]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
[email protected]
T56GZSRVAH colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"A": Index(6, medium, shuffle, zlib(1)).is_csi=False}

nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to
the string value nan. You could inadvertently turn an actual nan value into a missing value.
In [487]: dfss = pd.DataFrame({'A': ['foo', 'bar', 'nan']})

In [488]: dfss
Out[488]:
A
0 foo
1 bar
2 nan

In [489]: store.append('dfss', dfss)

In [490]: store.select('dfss')
Out[490]:
A
0 foo
1 bar
2 NaN

# here you need to specify a different nan rep


In [491]: store.append('dfss2', dfss, nan_rep='_nan_')

In [492]: store.select('dfss2')
Out[492]:
A
0 foo
1 bar
2 nan

External compatibility

HDFStore writes table format objects in specific formats suitable for producing loss-less round trips to pandas
objects. For external compatibility, HDFStore can read native PyTables format tables.
It is possible to write an HDFStore object that can easily be imported into R using the rhdf5 library (Package
website). Create a table format store like this:

In [493]: df_for_r = pd.DataFrame({"first": np.random.rand(100),


.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100, ))},
.....: index=range(100))
.....:

In [494]: df_for_r.head()
Out[494]:
first second class
0 0.864919 0.852910 0
[email protected]
1 0.030579 0.412962 1
T56GZSRVAH2 0.015226 0.978410 0
3 0.498512 0.686761 0
4 0.232163 0.328185 1

In [495]: store_export = pd.HDFStore('export.h5')

In [496]: store_export.append('df_for_r', df_for_r, data_columns=df_dc.columns)

In [497]: store_export
Out[497]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5

In R this file can be read into a data.frame object using the rhdf5 library. The following example function reads
the corresponding column names and data values from the values and assembles them into a data.frame:

# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.

library(rhdf5)

loadhdf5data <- function(h5File) {

listing <- h5ls(h5File)


# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)


data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}

data <- data.frame(columns)

return(data)
}

Now you can import the DataFrame into R:

> data = loadhdf5data("transfer.hdf5")


> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1

Note: The R function lists the entire HDF5 file’s contents and assembles the data.frame object from all matching
nodes, so use this only as a starting point if you have stored multiple DataFrame objects to a single HDF5 file.

Performance

• tables format comes with a writing performance penalty as compared to fixed stores. The benefit is the ability to append/delete and query (potentially very large amounts of data). Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.
• You can pass chunksize=<int> to append, specifying the write chunksize (default is 50000). This will significantly lower your memory usage on writing.
• You can pass expectedrows=<int> to the first append, to set the TOTAL number of rows that PyTables will expect. This will optimize read/write performance (see the sketch after this list).
• Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus
a table is unique on major, minor pairs)
• A PerformanceWarning will be raised if you are attempting to store types that will be pickled by PyTables
(rather than stored as endemic types). See Here for more information and some solutions.
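As a sketch of the chunksize and expectedrows options mentioned in the list above (the key and values are illustrative):

# write in smaller chunks and tell PyTables the total number of rows up front
big = pd.DataFrame(np.random.randn(100000, 2), columns=list('AB'))
store.append('df_big', big, chunksize=10000, expectedrows=100000)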


3.1.11 Feather

Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data frames
efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including
extension dtypes such as categorical and datetime with tz.
Several caveats.
• This is a newer library, and the format, though stable, is not guaranteed to be backward compatible to the earlier
versions.
• The format will NOT write an Index, or MultiIndex for the DataFrame and will raise an error if a non-default one is provided. You can .reset_index() to store the index or .reset_index(drop=True) to ignore it (see the sketch below).
• Duplicate column names and non-string column names are not supported
• Non supported types include Period and actual Python object types. These will raise a helpful error message
on an attempt at serialization.
See the Full Documentation.
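As a sketch of the index caveat above (the file names are illustrative):

# a frame with a non-default index must have its index reset before writing
df_ix = pd.DataFrame({'x': [1, 2, 3]}, index=list('abc'))
df_ix.reset_index().to_feather('example_index.feather')             # keep the index as a column
df_ix.reset_index(drop=True).to_feather('example_noindex.feather')  # discard the index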
In [498]: df = pd.DataFrame({'a': list('abc'),
.....: 'b': list(range(1, 4)),
.....: 'c': np.arange(3, 6).astype('u1'),
.....: 'd': np.arange(4.0, 7.0, dtype='float64'),
.....: 'e': [True, False, True],
.....: 'f': pd.Categorical(list('abc')),
.....: 'g': pd.date_range('20130101', periods=3),
[email protected]
.....: 'h': pd.date_range('20130101', periods=3, tz='US/Eastern
T56GZSRVAH
˓→'),

.....: 'i': pd.date_range('20130101', periods=3, freq='ns')})


.....:

In [499]: df
Out[499]:
   a  b  c    d      e  f          g                         h                              i
0  a  1  3  4.0   True  a 2013-01-01 2013-01-01 00:00:00-05:00  2013-01-01 00:00:00.000000000
1  b  2  4  5.0  False  b 2013-01-02 2013-01-02 00:00:00-05:00  2013-01-01 00:00:00.000000001
2  c  3  5  6.0   True  c 2013-01-03 2013-01-03 00:00:00-05:00  2013-01-01 00:00:00.000000002

In [500]: df.dtypes
Out[500]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object

Write to a feather file.


In [501]: df.to_feather('example.feather')

Read from a feather file.

In [502]: result = pd.read_feather('example.feather')

In [503]: result
Out[503]:
   a  b  c    d      e  f          g                         h                              i
0  a  1  3  4.0   True  a 2013-01-01 2013-01-01 00:00:00-05:00  2013-01-01 00:00:00.000000000
1  b  2  4  5.0  False  b 2013-01-02 2013-01-02 00:00:00-05:00  2013-01-01 00:00:00.000000001
2  c  3  5  6.0   True  c 2013-01-03 2013-01-03 00:00:00-05:00  2013-01-01 00:00:00.000000002

# we preserve dtypes
In [504]: result.dtypes
Out[504]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
[email protected]
i datetime64[ns]
T56GZSRVAHdtype: object

3.1.12 Parquet

New in version 0.21.0.


Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to make reading and
writing data frames efficient, and to make sharing data across data analysis languages easy. Parquet can use a variety
of compression techniques to shrink the file size as much as possible while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including extension dtypes such as datetime with tz.
Several caveats.
• Duplicate column names and non-string column names are not supported.
• The pyarrow engine always writes the index to the output, but fastparquet only writes non-default indexes. This extra column can cause problems for non-Pandas consumers that are not expecting it. You can force including or omitting indexes with the index argument, regardless of the underlying engine.
• Index level names, if specified, must be strings.
• In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize
as their primitive dtype.
• The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet
does not preserve the ordered flag.


• Non supported types include Interval and actual Python object types. These will raise a helpful error message on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
• The pyarrow engine preserves extension data types such as the nullable integer and string data type (requiring
pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols, see the extension types
documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto. If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto, then pyarrow is tried, falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
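As a sketch, the default engine can also be set once via the option mentioned above instead of passing engine= on every call (the file name is illustrative):

# set the default parquet engine globally
pd.set_option('io.parquet.engine', 'pyarrow')    # or 'fastparquet' / 'auto'
df.to_parquet('example_default_engine.parquet')  # uses the configured default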

Note: These engines are very similar and should read/write nearly identical parquet format files. Currently pyarrow
does not support timedelta data, fastparquet>=0.1.4 supports timezone aware datetimes. These libraries differ
by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library).

In [505]: df = pd.DataFrame({'a': list('abc'),


.....: 'b': list(range(1, 4)),
.....: 'c': np.arange(3, 6).astype('u1'),
.....: 'd': np.arange(4.0, 7.0, dtype='float64'),
.....: 'e': [True, False, True],
.....: 'f': pd.date_range('20130101', periods=3),
.....: 'g': pd.date_range('20130101', periods=3, tz='US/Eastern'),

.....: 'h': pd.Categorical(list('abc')),


.....: 'i': pd.Categorical(list('abc'), ordered=True)})
.....:
[email protected]
T56GZSRVAH
In [506]: df
Out[506]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c

In [507]: df.dtypes
Out[507]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object

Write to a parquet file.

In [508]: df.to_parquet('example_pa.parquet', engine='pyarrow')

In [509]: df.to_parquet('example_fp.parquet', engine='fastparquet')

Read from a parquet file.


In [510]: result = pd.read_parquet('example_fp.parquet', engine='fastparquet')

In [511]: result = pd.read_parquet('example_pa.parquet', engine='pyarrow')

In [512]: result.dtypes
Out[512]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object

Read only certain columns of a parquet file.

In [513]: result = pd.read_parquet('example_fp.parquet',


.....: engine='fastparquet', columns=['a', 'b'])
.....:

In [514]: result = pd.read_parquet('example_pa.parquet',


.....: engine='pyarrow', columns=['a', 'b'])
.....:

In [515]: result.dtypes
[email protected]
Out[515]:
T56GZSRVAHa object
b int64
dtype: object

Handling indexes

Serializing a DataFrame to parquet may include the implicit index as one or more columns in the output file. Thus,
this code:

In [516]: df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

In [517]: df.to_parquet('test.parquet', engine='pyarrow')

creates a parquet file with three columns if you use pyarrow for serialization: a, b, and __index_level_0__.
If you’re using fastparquet, the index may or may not be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject the file, because that column
doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to to_parquet():

In [518]: df.to_parquet('test.parquet', index=False)

This creates a parquet file with just the two expected columns, a and b. If your DataFrame has a custom index, you
won’t get it back when you load this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the underlying engine’s default behavior.


Partitioning Parquet files

New in version 0.24.0.


Parquet supports partitioning of data based on the values of one or more columns.

In [519]: df = pd.DataFrame({'a': [0, 0, 1, 1], 'b': [0, 1, 0, 1]})

In [520]: df.to_parquet(path='test', engine='pyarrow',


.....: partition_cols=['a'], compression=None)
.....:

The path specifies the parent directory to which data will be saved. The partition_cols are the column names by which
the dataset will be partitioned. Columns are partitioned in the order they are given. The partition splits are determined
by the unique values in the partition columns. The above example creates a partitioned dataset that may look like:

test
a=0
0bac803e32dc42ae83fddfd029cbdebc.parquet
...
a=1
e6ab24a4f45147b49b54a662f0c412a3.parquet
...
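A partitioned dataset like this can be read back by pointing read_parquet at the parent directory (a sketch; the partition column a is reconstructed from the directory names):

result = pd.read_parquet('test', engine='pyarrow')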

3.1.13 ORC

New in version 1.0.0.


[email protected]
T56GZSRVAHSimilar to the parquet format, the ORC Format is a binary columnar serialization for data frames. It is designed to
make reading data frames efficient. Pandas provides only a reader for the ORC format, read_orc(). This requires
the pyarrow library.
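A minimal sketch of reading an ORC file (the file name is illustrative and assumes an existing ORC file; requires pyarrow):

result = pd.read_orc('example.orc')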

3.1.14 SQL queries

The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce
dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will
need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for
MySQL. For SQLite this is included in Python’s standard library by default. You can find an overview of supported
drivers for each SQL dialect in the SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and for mysql for backwards compatibility, but this is deprecated and will be removed in a future version). This mode requires a Python database adapter which respects the Python DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:

read_sql_table(table_name, con[, schema, ...])    Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])        Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])              Read SQL query or database table into a DataFrame.
DataFrame.to_sql(self, name, con[, schema, ...])  Write records stored in a DataFrame to a SQL database.

Note: The function read_sql() is a convenience wrapper around read_sql_table() and read_sql_query() (and for backward compatibility) and will delegate to the specific function depending on the provided input (database table name or SQL query). Table names do not need to be quoted if they have special characters.

In the following example, we use the SQLite SQL database engine. You can use a temporary SQLite database where data are stored in "memory".
To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For more information on create_engine() and the URI formatting, see the examples below and the SQLAlchemy documentation.

In [521]: from sqlalchemy import create_engine

# Create your engine.


In [522]: engine = create_engine('sqlite:///:memory:')

If you want to manage your own connections you can pass one of those instead:

with engine.connect() as conn, conn.begin():


data = pd.read_sql_table('data', conn)

Writing DataFrames

Assuming the following data is in a DataFrame data, we can insert it into the database using to_sql().

id  Date        Col_1  Col_2  Col_3
26  2012-10-18  X       25.7  True
42  2012-10-19  Y      -12.4  False
63  2012-10-20  Z       5.73  True
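A sketch (not part of the original example) of how such a frame might be constructed; the values here match the data object shown below:

import datetime

data = pd.DataFrame({'id': [26, 42, 63],
                     'Date': [datetime.datetime(2010, 10, 18),
                              datetime.datetime(2010, 10, 19),
                              datetime.datetime(2010, 10, 20)],
                     'Col_1': ['X', 'Y', 'Z'],
                     'Col_2': [27.5, -12.5, 5.73],
                     'Col_3': [True, False, True]})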

In [523]: data
Out[523]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True

In [524]: data.to_sql('data', engine)

With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This
can be avoided by setting the chunksize parameter when calling to_sql. For example, the following writes data
to the database in batches of 1000 rows at a time:


In [525]: data.to_sql('data_chunked', engine, chunksize=1000)

SQL data types

to_sql() will try to map your data to an appropriate SQL data type based on the dtype of the data. When you have
columns of dtype object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of any of the columns by using the
dtype argument. This argument needs a dictionary mapping column names to SQLAlchemy types (or strings for the
sqlite3 fallback mode). For example, specifying to use the sqlalchemy String type instead of the default Text type
for string columns:

In [526]: from sqlalchemy.types import String

In [527]: data.to_sql('data_dtype', engine, dtype={'Col_1': String})

Note: Due to the limited support for timedelta’s in the different database flavors, columns with type timedelta64
will be written as integer values as nanoseconds to the database and a warning will be raised.

Note: Columns of category dtype will be converted to the dense representation as you would get with np.
asarray(categorical) (e.g. for string categories this gives an array of strings). Because of this, reading the
database table back in does not generate a categorical.

[email protected]
T56GZSRVAH
Datetime data types

Using SQLAlchemy, to_sql() is capable of writing datetime data that is timezone naive or timezone aware. However, the resulting data stored in the database ultimately depends on the supported data type for datetime data of the database system being used.
database system being used.
The following table lists supported data types for datetime data for some common databases. Other database dialects
may have different data types for datetime data.

Database SQL Datetime Types Timezone Support


SQLite TEXT No
MySQL TIMESTAMP or DATETIME No
PostgreSQL TIMESTAMP or TIMESTAMP WITH TIME ZONE Yes

When writing timezone aware data to databases that do not support timezones, the data will be written as timezone
naive timestamps that are in local time with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is timezone aware or naive. When reading
TIMESTAMP WITH TIME ZONE types, pandas will convert the data to UTC.


Insertion method

New in version 0.24.0.


The parameter method controls the SQL insertion clause used. Possible values are:
• None: Uses standard SQL INSERT clause (one per row).
• 'multi': Pass multiple values in a single INSERT clause. It uses a special SQL syntax not supported by all backends. This usually provides better performance for analytic databases like Presto and Redshift, but has worse performance for traditional SQL backends if the table contains many columns. For more information check the SQLAlchemy documentation.
• callable with signature (pd_table, conn, keys, data_iter): This can be used to implement a more
performant insertion method based on specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:

# Alternative to_sql() *method* for DBs that support COPY FROM


import csv
from io import StringIO

def psql_insert_copy(table, conn, keys, data_iter):


"""
Execute SQL statement inserting data

Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
[email protected]
keys : list of str
T56GZSRVAH Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)

columns = ', '.join('"{}"'.format(k) for k in keys)


if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name

sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(


table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)


Reading tables

read_sql_table() will read a database table given the table name and optionally a subset of columns to read.

Note: In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed.

In [528]: pd.read_sql_table('data', engine)


Out[528]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True

Note: pandas infers column dtypes from query outputs, and not by looking up data types in the physical database schema. For example, assume userid is an integer column in a table. Then, intuitively, select userid
... will return integer-valued series, while select cast(userid as text) ... will return object-valued
(str) series. Accordingly, if the query output is empty, then all resulting columns will be returned as object-valued
(since they are most general). If you foresee that your query will sometimes generate an empty result, you may want
to explicitly typecast afterwards to ensure dtype integrity.

You can also specify the name of the column as the DataFrame index, and specify a subset of columns to be read.

In [529]: pd.read_sql_table('data', engine, index_col='id')


Out[529]:
index Date Col_1 Col_2 Col_3
[email protected]
T56GZSRVAH id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True

In [530]: pd.read_sql_table('data', engine, columns=['Col_1', 'Col_2'])


Out[530]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73

And you can explicitly force columns to be parsed as dates:

In [531]: pd.read_sql_table('data', engine, parse_dates=['Date'])


Out[531]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True

If needed you can explicitly specify a format string, or a dict of arguments to pass to pandas.to_datetime():

pd.read_sql_table('data', engine, parse_dates={'Date': '%Y-%m-%d'})


pd.read_sql_table('data', engine,
parse_dates={'Date': {'format': '%Y-%m-%d %H:%M:%S'}})

You can check if a table exists using has_table()
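For example (a sketch using the pandas.io.sql module; 'data' is the table written above):

from pandas.io import sql

sql.has_table('data', engine)  # returns True if the table exists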


Schema support

Reading from and writing to different schemas is supported through the schema keyword in the read_sql_table() and to_sql() functions. Note however that this depends on the database flavor (sqlite does not have schemas). For example:

df.to_sql('table', engine, schema='other_schema')


pd.read_sql_table('table', engine, schema='other_schema')

Querying

You can query using raw SQL in the read_sql_query() function. In this case you must use the SQL variant
appropriate for your database. When using SQLAlchemy, you can also pass SQLAlchemy Expression language
constructs, which are database-agnostic.

In [532]: pd.read_sql_query('SELECT * FROM data', engine)


Out[532]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1

Of course, you can specify a more “complex” query.

In [533]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)

Out[533]:
[email protected]
T56GZSRVAH id Col_1 Col_2
0 42 Y -12.5

The read_sql_query() function supports a chunksize argument. Specifying this will return an iterator through
chunks of the query result:

In [534]: df = pd.DataFrame(np.random.randn(20, 3), columns=list('abc'))

In [535]: df.to_sql('data_chunks', engine, index=False)

In [536]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks",


.....: engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.092961 -0.674003 1.104153
1 -0.092732 -0.156246 -0.585167
2 -0.358119 -0.862331 -1.672907
3 0.550313 -1.507513 -0.617232
4 0.650576 1.033221 0.492464
a b c
0 -1.627786 -0.692062 1.039548
1 -1.802313 -0.890905 -0.881794
2 0.630492 0.016739 0.014500
3 -0.438358 0.647275 -0.052075
4 0.673137 1.227539 0.203534
a b c
0 0.861658 0.867852 -0.465016


1 1.547012 -0.947189 -1.241043
2 0.070470 0.901320 0.937577
3 0.295770 1.420548 -0.005283
4 -1.518598 -0.730065 0.226497
a b c
0 -2.061465 0.632115 0.853619
1 2.719155 0.139018 0.214557
2 -1.538924 -0.366973 -0.748801
3 -0.478137 -1.559153 -3.097759
4 -2.320335 -0.221090 0.119763

You can also run a plain query without creating a DataFrame with execute(). This is useful for queries that don’t
return values, such as INSERT. This is functionally equivalent to calling execute on the SQLAlchemy engine or db
connection object. Again, you must use the SQL syntax variant appropriate for your database.

from pandas.io import sql

sql.execute('SELECT * FROM table_name', engine)
sql.execute('INSERT INTO table_name VALUES(?, ?, ?)', engine,
            params=[('id', 1, 12.2, True)])

Engine connection examples

To connect with SQLAlchemy you use the create_engine() function to create an engine object from database
URI. You only need to create the engine once per database you are connecting to.

from sqlalchemy import create_engine


[email protected]
T56GZSRVAH
engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')

engine = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')

engine = create_engine('oracle://scott:[email protected]:1521/sidname')

engine = create_engine('mssql+pyodbc://mydsn')

# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine('sqlite:///foo.db')

# or absolute, starting with a slash:


engine = create_engine('sqlite:////absolute/path/to/foo.db')

For more information see the examples in the SQLAlchemy documentation.


Advanced SQLAlchemy queries

You can use SQLAlchemy constructs to describe your query.

Use sqlalchemy.text() to specify query parameters in a backend-neutral way:

In [537]: import sqlalchemy as sa

In [538]: pd.read_sql(sa.text('SELECT * FROM data where Col_1=:col1'),


.....: engine, params={'col1': 'X'})
.....:
Out[538]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1

If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy
expressions

In [539]: metadata = sa.MetaData()

In [540]: data_table = sa.Table('data', metadata,


.....: sa.Column('index', sa.Integer),
.....: sa.Column('Date', sa.DateTime),
.....: sa.Column('Col_1', sa.String),
.....: sa.Column('Col_2', sa.Float),
.....: sa.Column('Col_3', sa.Boolean),
.....: )
.....:

[email protected]
In [541]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True),
T56GZSRVAH˓→engine)
Out[541]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []

You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.
bindparam()

In [542]: import datetime as dt

In [543]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam('date'))

In [544]: pd.read_sql(expr, engine, params={'date': dt.datetime(2010, 10, 18)})


Out[544]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True


Sqlite fallback

The use of sqlite is supported without using SQLAlchemy. This mode requires a Python database adapter which respects the Python DB-API.
You can create connections like so:

import sqlite3
con = sqlite3.connect(':memory:')

And then issue the following queries:

data.to_sql('data', con)
pd.read_sql_query("SELECT * FROM data", con)

3.1.15 Google BigQuery

Warning: Starting in 0.20.0, pandas has split off Google BigQuery support into the separate package
pandas-gbq. You can pip install pandas-gbq to get it.

The pandas-gbq package provides functionality to read/write from Google BigQuery.


pandas integrates with this external package. If pandas-gbq is installed, you can use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the respective functions from pandas-gbq.
Full documentation can be found here.
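A hypothetical sketch (requires the pandas-gbq package and valid Google Cloud credentials; the project id, dataset and query are placeholders):

# read the result of a query and write a DataFrame back to BigQuery
df = pd.read_gbq("SELECT name FROM `my_dataset.my_table` LIMIT 10",
                 project_id="my-gcp-project")
df.to_gbq("my_dataset.my_table_copy", project_id="my-gcp-project")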
[email protected]
T56GZSRVAH

3.1.16 Stata format

Writing to stata format

The method to_stata() will write a DataFrame into a .dta file. The format version of this file is always 115 (Stata
12).

In [545]: df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))

In [546]: df.to_stata('stata.dta')

Stata data files have limited data type support; only strings with 244 or fewer characters, int8, int16, int32,
float32 and float64 can be stored in .dta files. Additionally, Stata reserves certain values to represent missing
data. Exporting a non-missing value that is outside of the permitted range in Stata for a particular data type will retype
the variable to the next larger size. For example, int8 values are restricted to lie between -127 and 100 in Stata, and
so variables with values above 100 will trigger a conversion to int16. nan values in floating points data types are
stored as the basic missing data type (. in Stata).

Note: It is not possible to export missing data values for integer data types.

The Stata writer gracefully handles other data types including int64, bool, uint8, uint16, uint32 by casting
to the smallest supported type that can represent the data. For example, data with a type of uint8 will be cast to
int8 if all values are less than 100 (the upper bound for non-missing int8 data in Stata), or, if values are outside of
this range, the variable is cast to int16.
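
A minimal sketch of this casting behavior (the file names are made up for illustration, and the dtype comments
state the result expected from the rule described above):

# uint8 values below 100 fit Stata's non-missing int8 range and are stored as int8;
# a value of 100 or more forces the column to int16 instead.
small = pd.DataFrame({'u': np.array([1, 50, 99], dtype='uint8')})
large = pd.DataFrame({'u': np.array([1, 50, 200], dtype='uint8')})
small.to_stata('stata_small.dta')
large.to_stata('stata_large.dta')
pd.read_stata('stata_small.dta')['u'].dtype   # expected: int8
pd.read_stata('stata_large.dta')['u'].dtype   # expected: int16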


Warning: Conversion from int64 to float64 may result in a loss of precision if int64 values are larger than
2**53.

Warning: StataWriter and to_stata() only support fixed width strings containing up to 244 characters,
a limitation imposed by the version 115 dta file format. Attempting to write Stata dta files with strings longer than
244 characters raises a ValueError.

Reading from Stata format

The top-level function read_stata will read a dta file and return either a DataFrame or a StataReader that
can be used to read the file incrementally.

In [547]: pd.read_stata('stata.dta')
Out[547]:
index A B
0 0 0.608228 1.064810
1 1 -0.780506 -2.736887
2 2 0.143539 1.170191
3 3 -1.573076 0.075792
4 4 -1.722223 -0.774650
5 5 0.803627 0.221665
6 6 0.584637 0.147264
7 7 1.057825 -0.284136
8 8 0.912395 1.552808
[email protected]
9 9 0.189376 -0.109830
Specifying a chunksize yields a StataReader instance that can be used to read chunksize lines from the file
at a time. The StataReader object can be used as an iterator.

In [548]: reader = pd.read_stata('stata.dta', chunksize=3)

In [549]: for df in reader:


.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)

For more fine-grained control, use iterator=True and specify chunksize with each call to read().

In [550]: reader = pd.read_stata('stata.dta', iterator=True)

In [551]: chunk1 = reader.read(5)

In [552]: chunk2 = reader.read(5)

Currently the index is retrieved as a column.


The parameter convert_categoricals indicates whether value labels should be read and used to create a
Categorical variable from them. Value labels can also be retrieved by the function value_labels, which
requires read() to be called before use.
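
A minimal sketch of retrieving value labels (for the purely numeric stata.dta written above the result is simply
an empty dict):

reader = pd.read_stata('stata.dta', iterator=True)
data = reader.read()            # read() must be called before value_labels()
labels = reader.value_labels()  # dict mapping variable names to {code: label} dicts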


The parameter convert_missing indicates whether missing value representations in Stata should be preserved.
If False (the default), missing values are represented as np.nan. If True, missing values are represented using
StataMissingValue objects, and columns containing missing values will have object data type.

Note: read_stata() and StataReader support .dta formats 113-115 (Stata 10-12), 117 (Stata 13), and 118
(Stata 14).

Note: Setting preserve_dtypes=False will upcast to the standard pandas data types: int64 for all integer
types and float64 for floating point data. By default, the Stata data types are preserved when importing.

Categorical data

Categorical data can be exported to Stata data files as value labeled data. The exported data consists of the
underlying category codes as integer data values and the categories as value labels. Stata does not have an explicit
equivalent to a Categorical and information about whether the variable is ordered is lost when exporting.

Warning: Stata only supports string value labels, and so str is called on the categories when exporting data.
Exporting Categorical variables with non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.

Labeled data can similarly be imported from Stata data files as Categorical variables using the keyword argument
convert_categoricals (True by default). The keyword argument order_categoricals (True by default)
determines whether imported Categorical variables are ordered.

Note: When importing categorical data, the values of the variables in the Stata data file are not preserved
since Categorical variables always use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required, these can be imported by setting
convert_categoricals=False, which will import original data (but not the variable labels). The original
values can be matched to the imported categorical data since there is a simple mapping between the original Stata
data values and the category codes of imported Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned 1 and so on until the largest original value is
assigned the code n-1.
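
A hedged sketch of recovering the original values (labeled.dta and its column grade are hypothetical names for a
value-labeled Stata file):

raw = pd.read_stata('labeled.dta', convert_categoricals=False)  # original integer values, no labels
cat = pd.read_stata('labeled.dta')                              # Categorical with string labels
codes = cat['grade'].cat.codes   # 0..n-1 in order of the original values; missing values become -1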

Note: Stata supports partially labeled series. These series have value labels for some but not all data values. Importing
a partially labeled series will produce a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.


3.1.17 SAS formats

The top-level function read_sas() can read (but not write) SAS xport (.XPT) and (since v0.18.0) SAS7BDAT
(.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point values (usually 8 bytes but sometimes truncated).
For xport files, there is no automatic type conversion to integers, dates, or categoricals. For SAS7BDAT files, the
format codes may allow date variables to be automatically converted to dates. By default the whole file is read and
returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader objects (XportReader or SAS7BDATReader)
for incrementally reading the file. The reader objects also have attributes that contain additional information about the
file and its variables.
Read a SAS7BDAT file:

df = pd.read_sas('sas_data.sas7bdat')

Obtain an iterator and read an XPORT file 100,000 lines at a time:

def do_something(chunk):
pass

rdr = pd.read_sas('sas_xport.xpt', chunksize=100000)


for chunk in rdr:
do_something(chunk)

The specification for the xport file format is available from the SAS web site.
No official documentation is available for the SAS7BDAT format.
[email protected]
T56GZSRVAH

3.1.18 SPSS formats

New in version 0.25.0.


The top-level function read_spss() can read (but not write) SPSS sav (.sav) and zsav (.zsav) format files.
SPSS files contain column names. By default the whole file is read, categorical columns are converted into pd.
Categorical, and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False to
avoid converting categorical columns into pd.Categorical.
Read an SPSS file:

df = pd.read_spss('spss_data.sav')

Extract a subset of columns contained in usecols from an SPSS file and avoid converting categorical columns into
pd.Categorical:

df = pd.read_spss('spss_data.sav', usecols=['foo', 'bar'],
                  convert_categoricals=False)

More information about the sav and zsav file format is available here.


3.1.19 Other file formats

pandas itself only supports IO with a limited set of file formats that map cleanly to its tabular data model. For reading
and writing other file formats into and from pandas, we recommend these packages from the broader community.

netCDF

xarray provides data structures inspired by the pandas DataFrame for working with multi-dimensional datasets, with
a focus on the netCDF file format and easy conversion to and from pandas.
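
A minimal sketch of the round trip (assuming xarray is installed; example.nc is a hypothetical netCDF file):

import xarray as xr

ds = xr.open_dataset('example.nc')  # hypothetical netCDF file
df = ds.to_dataframe()              # flatten to a pandas DataFrame
ds2 = df.to_xarray()                # and back to an xarray object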

3.1.20 Performance considerations

This is an informal comparison of various IO methods, using pandas 0.24.2. Timings are machine dependent and small
differences should be ignored.

In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})

In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
[email protected]
T56GZSRVAH
Given the next test set:

import numpy as np
import os
import sqlite3  # needed by the SQL test functions below

sz = 1000000
np.random.seed(42)
df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})

def test_sql_write(df):
if os.path.exists('test.sql'):
os.remove('test.sql')
sql_db = sqlite3.connect('test.sql')
df.to_sql(name='test_table', con=sql_db)
sql_db.close()

def test_sql_read():
sql_db = sqlite3.connect('test.sql')
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()

def test_hdf_fixed_write(df):
df.to_hdf('test_fixed.hdf', 'test', mode='w')



def test_hdf_fixed_read():
pd.read_hdf('test_fixed.hdf', 'test')

def test_hdf_fixed_write_compress(df):
df.to_hdf('test_fixed_compress.hdf', 'test', mode='w', complib='blosc')

def test_hdf_fixed_read_compress():
pd.read_hdf('test_fixed_compress.hdf', 'test')

def test_hdf_table_write(df):
df.to_hdf('test_table.hdf', 'test', mode='w', format='table')

def test_hdf_table_read():
pd.read_hdf('test_table.hdf', 'test')

def test_hdf_table_write_compress(df):
df.to_hdf('test_table_compress.hdf', 'test', mode='w',
complib='blosc', format='table')

def test_hdf_table_read_compress():
pd.read_hdf('test_table_compress.hdf', 'test')

def test_csv_write(df):
df.to_csv('test.csv', mode='w')

def test_csv_read():
pd.read_csv('test.csv', index_col=0)
[email protected]
T56GZSRVAHdef test_feather_write(df):
df.to_feather('test.feather')

def test_feather_read():
pd.read_feather('test.feather')

def test_pickle_write(df):
df.to_pickle('test.pkl')

def test_pickle_read():
pd.read_pickle('test.pkl')

def test_pickle_write_compress(df):
df.to_pickle('test.pkl.compress', compression='xz')

def test_pickle_read_compress():
pd.read_pickle('test.pkl.compress', compression='xz')

def test_parquet_write(df):
df.to_parquet('test.parquet')

def test_parquet_read():
pd.read_parquet('test.parquet')

When writing, the top-three functions in terms of speed are test_feather_write, test_hdf_fixed_write
and test_hdf_fixed_write_compress.

In [4]: %timeit test_sql_write(df)




3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [5]: %timeit test_hdf_fixed_write(df)


19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [6]: %timeit test_hdf_fixed_write_compress(df)


19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [7]: %timeit test_hdf_table_write(df)


449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [8]: %timeit test_hdf_table_write_compress(df)


448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [9]: %timeit test_csv_write(df)


3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [10]: %timeit test_feather_write(df)


9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [11]: %timeit test_pickle_write(df)


30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [12]: %timeit test_pickle_write_compress(df)


4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [13]: %timeit test_parquet_write(df)


67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
[email protected]
T56GZSRVAH
When reading, the top three are test_feather_read, test_pickle_read and test_hdf_fixed_read.

In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [15]: %timeit test_hdf_fixed_read()


19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [16]: %timeit test_hdf_fixed_read_compress()


19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [17]: %timeit test_hdf_table_read()


38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [18]: %timeit test_hdf_table_read_compress()


38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [19]: %timeit test_csv_read()


452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [20]: %timeit test_feather_read()


12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [21]: %timeit test_pickle_read()


18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [22]: %timeit test_pickle_read_compress()


915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [23]: %timeit test_parquet_read()


24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

For this test case test.pkl.compress, test.parquet and test.feather took the least space on disk.
Space on disk (in bytes)

29519500 Oct 10 06:45 test.csv


16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf

3.2 Indexing and selecting data

The axis labeling information in pandas objects serves many purposes:


• Identifies data (i.e. provides metadata) using known indicators, important for analysis, visualization, and inter-
active console display.
• Enables automatic and explicit data alignment.
[email protected]
T56GZSRVAH
• Allows intuitive getting and setting of subsets of the data set.
In this section, we will focus on the final point: namely, how to slice, dice, and generally get and set subsets of pandas
objects. The primary focus will be on Series and DataFrame as they have received more development attention in this
area.

Note: The Python and NumPy indexing operators [] and attribute operator . provide quick and easy access to pandas
data structures across a wide range of use cases. This makes interactive work intuitive, as there’s little new to learn if
you already know how to deal with Python dictionaries and NumPy arrays. However, since the type of the data to be
accessed isn’t known in advance, directly using standard operators has some optimization limits. For production code,
we recommend that you take advantage of the optimized pandas data access methods exposed in this chapter.

Warning: Whether a copy or a reference is returned for a setting operation may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.

See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies.


3.2.1 Different choices for indexing

Object selection has had a number of user-requested additions in order to support more explicit location based index-
ing. Pandas now supports three types of multi-axis indexing.
• .loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when
the items are not found. Allowed inputs are:
– A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer
position along the index.).
– A list or array of labels ['a', 'b', 'c'].
– A slice object with labels 'a':'f' (Note that contrary to usual python slices, both the start and the stop
are included, when present in the index! See Slicing with labels and Endpoints are inclusive.)
– A boolean array (any NA values will be treated as False).
– A callable function with one argument (the calling Series or DataFrame) and that returns valid output
for indexing (one of the above).
See more at Selection by Label.
• .iloc is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a
boolean array. .iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers
which allow out-of-bounds indexing. (this conforms with Python/NumPy slice semantics). Allowed inputs are:
– An integer e.g. 5.
– A list or array of integers [4, 3, 0].
– A slice object with ints 1:7.
[email protected]
T56GZSRVAH – A boolean array (any NA values will be treated as False).
– A callable function with one argument (the calling Series or DataFrame) and that returns valid output
for indexing (one of the above).
See more at Selection by Position, Advanced Indexing and Advanced Hierarchical.
• .loc, .iloc, and also [] indexing can accept a callable as indexer. See more at Selection By Callable.
Getting values from an object with multi-axes selection uses the following notation (using .loc as an example, but
the following applies to .iloc as well). Any of the axes accessors may be the null slice :. Axes left out of the
specification are assumed to be :, e.g. p.loc['a'] is equivalent to p.loc['a', :, :].

Object Type Indexers


Series s.loc[indexer]
DataFrame df.loc[row_indexer,column_indexer]

3.2.2 Basics

As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a.
__getitem__ for those familiar with implementing class behavior in Python) is selecting out lower-dimensional
slices. The following table shows return type values when indexing pandas objects with []:

Object Type Selection Return Value Type


Series series[label] scalar value
DataFrame frame[colname] Series corresponding to colname


Here we construct a simple time series data set to use for illustrating the indexing functionality:

In [1]: dates = pd.date_range('1/1/2000', periods=8)

In [2]: df = pd.DataFrame(np.random.randn(8, 4),


...: index=dates, columns=['A', 'B', 'C', 'D'])
...:

In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885

Note: None of the indexing functionality is time series specific unless specifically stated.

Thus, as per above, we have the most basic indexing using []:

In [4]: s = df['A']

In [5]: s[dates[5]]
[email protected]
Out[5]: -0.6736897080883706
You can pass a list of columns to [] to select columns in that order. If a column is not contained in the DataFrame, an
exception will be raised. Multiple columns can also be set in this manner:

In [6]: df
Out[6]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885

In [7]: df[['B', 'A']] = df[['A', 'B']]

In [8]: df
Out[8]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268


2000-01-08 -1.157892 -0.370647 -1.344312 0.844885

You may find this useful for applying a transform (in-place) to a subset of the columns.
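
A minimal sketch of such a transform, using the df defined above:

In this sketch, the two listed columns are simply replaced with their absolute values.

df[['C', 'D']] = df[['C', 'D']].abs()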

Warning: pandas aligns all AXES when setting Series and DataFrame from .loc, and .iloc.
This will not modify df because the column alignment is before value assignment.
In [9]: df[['A', 'B']]
Out[9]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647

In [10]: df.loc[:, ['B', 'A']] = df[['A', 'B']]

In [11]: df[['A', 'B']]


Out[11]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
[email protected]
2000-01-03 -2.104569 -0.861849
T56GZSRVAH 2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647

The correct way to swap column values is by using raw values:


In [12]: df.loc[:, ['B', 'A']] = df[['A', 'B']].to_numpy()

In [13]: df[['A', 'B']]


Out[13]:
A B
2000-01-01 0.469112 -0.282863
2000-01-02 1.212112 -0.173215
2000-01-03 -0.861849 -2.104569
2000-01-04 0.721555 -0.706771
2000-01-05 -0.424972 0.567020
2000-01-06 -0.673690 0.113648
2000-01-07 0.404705 0.577046
2000-01-08 -0.370647 -1.157892


3.2.3 Attribute access

You may access an index on a Series or column on a DataFrame directly as an attribute:

In [14]: sa = pd.Series([1, 2, 3], index=list('abc'))

In [15]: dfa = df.copy()

In [16]: sa.b
Out[16]: 2

In [17]: dfa.A
Out[17]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64

In [18]: sa.a = 5

In [19]: sa
Out[19]:
a 5
[email protected]
b 2
T56GZSRVAHc 3
dtype: int64

In [20]: dfa.A = list(range(len(dfa.index))) # ok if A already exists

In [21]: dfa
Out[21]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885

In [22]: dfa['A'] = list(range(len(dfa.index)))   # use this form to create a new column

In [23]: dfa
Out[23]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401


2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885

Warning:
• You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed. See
here for an explanation of valid identifiers.
• The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed,
but s['min'] is possible.
• Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items.
• In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
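
A minimal sketch of these fallbacks (the index labels below are made up for illustration):

sx = pd.Series([10, 20, 30], index=['1', 'min', 'index'])
sx['1']       # works, whereas attribute-style sx.1 is a SyntaxError
sx['min']     # works; sx.min is the Series method, not this element
sx['index']   # works; sx.index is the axis labels, not this element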

If you are using the IPython environment, you may also use tab-completion to see these accessible attributes.
You can also assign a dict to a row of a DataFrame:

In [24]: x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})

In [25]: x.iloc[1] = {'x': 9, 'y': 99}

In [26]: x
[email protected]
Out[26]:
x y
0 1 3
1 9 99
2 3 5

You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful; if
you try to use attribute access to create a new column, it creates a new attribute rather than a new column. In 0.21.0
and later, this will raise a UserWarning:

In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})


In [2]: df.two = [4, 5, 6]
UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access

In [3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0


3.2.4 Slicing ranges

The most robust and consistent way of slicing ranges along arbitrary axes is described in the Selection by Position
section detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of the values and the corresponding labels:

In [27]: s[:5]
Out[27]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64

In [28]: s[::2]
Out[28]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64

In [29]: s[::-1]
Out[29]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
[email protected]
2000-01-05 -0.424972
T56GZSRVAH2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64

Note that setting works as well:

In [30]: s2 = s.copy()

In [31]: s2[:5] = 0

In [32]: s2
Out[32]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64

With DataFrame, slicing inside of [] slices the rows. This is provided largely as a convenience since it is such a
common operation.


In [33]: df[:3]
Out[33]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804

In [34]: df[::-1]
Out[34]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632

3.2.5 Selection by label

Warning: Whether a copy or a reference is returned for a setting operation may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.

[email protected]
T56GZSRVAH Warning:

.loc is strict when you present slicers that are not compatible (or convertible) with the index type.
For example using integers in a DatetimeIndex. These will raise a TypeError.
In [35]: dfl = pd.DataFrame(np.random.randn(5, 4),
....: columns=list('ABCD'),
....: index=pd.date_range('20130101', periods=5))
....:

In [36]: dfl
Out[36]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646

In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>

String likes in slicing can be convertible to the type of the index and lead to natural slicing.
In [37]: dfl.loc['20130102':'20130104']
Out[37]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061


Warning: Starting in 0.21.0, pandas will show a FutureWarning if indexing with a list with missing labels.
In the future this will raise a KeyError. See Indexing with list with missing labels is deprecated below.

pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based
protocol. Every label asked for must be in the index, or a KeyError will be raised. When slicing, both the start
bound AND the stop bound are included, if present in the index. Integers are valid labels, but they refer to the label
and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
• A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer
position along the index.).
• A list or array of labels ['a', 'b', 'c'].
• A slice object with labels 'a':'f' (Note that contrary to usual python slices, both the start and the stop are
included, when present in the index! See Slicing with labels.)
• A boolean array.
• A callable, see Selection By Callable.
In [38]: s1 = pd.Series(np.random.randn(6), index=list('abcdef'))

In [39]: s1
Out[39]:
[email protected]
T56GZSRVAHa 1.431256
b 1.340309
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64

In [40]: s1.loc['c':]
Out[40]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64

In [41]: s1.loc['b']
Out[41]: 1.3403088497993827

Note that setting works as well:


In [42]: s1.loc['c':] = 0

In [43]: s1
Out[43]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000


e 0.000000
f 0.000000
dtype: float64

With a DataFrame:

In [44]: df1 = pd.DataFrame(np.random.randn(6, 4),


....: index=list('abcdef'),
....: columns=list('ABCD'))
....:

In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883

In [46]: df1.loc[['a', 'b', 'd'], :]


Out[46]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
d 0.974466 -2.006747 -0.410001 -0.078638
[email protected]
T56GZSRVAHAccessing via label slices:

In [47]: df1.loc['d':, 'A':'C']


Out[47]:
A B C
d 0.974466 -2.006747 -0.410001
e 0.545952 -1.219217 -1.226825
f -1.281247 -0.727707 -0.121306

For getting a cross section using a label (equivalent to df.xs('a')):

In [48]: df1.loc['a']
Out[48]:
A 0.132003
B -0.827317
C -0.076467
D -1.187678
Name: a, dtype: float64

For getting values with a boolean array:

In [49]: df1.loc['a'] > 0


Out[49]:
A True
B False
C False
D False
Name: a, dtype: bool

In [50]: df1.loc[:, df1.loc['a'] > 0]


Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247

NA values in a boolean array propagate as False:

Changed in version 1.0.2.

mask = pd.array([True, False, True, False, pd.NA, False], dtype="boolean")
df1[mask]
For getting a value explicitly:

# this is also equivalent to ``df1.at['a','A']``


In [51]: df1.loc['a', 'A']
Out[51]: 0.13200317033032932

Slicing with labels

When using .loc with slices, if both the start and the stop labels are present in the index, then elements located
between the two (including them) are returned:

In [52]: s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])


[email protected]
T56GZSRVAH
In [53]: s.loc[3:5]
Out[53]:
3 b
2 c
5 d
dtype: object

If at least one of the two is absent, but the index is sorted, and can be compared against start and stop labels, then
slicing will still work as expected, by selecting labels which rank between the two:

In [54]: s.sort_index()
Out[54]:
0 a
2 c
3 b
4 e
5 d
dtype: object

In [55]: s.sort_index().loc[1:6]
Out[55]:
2 c
3 b
4 e
5 d
dtype: object


However, if at least one of the two is absent and the index is not sorted, an error will be raised (since doing otherwise
would be computationally expensive, as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, s.loc[1:6] would raise KeyError.
For the rationale behind this behavior, see Endpoints are inclusive.

3.2.6 Selection by position

Warning: Whether a copy or a reference is returned for a setting operation may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.

Pandas provides a suite of methods in order to get purely integer based indexing. The semantics follow closely
Python and NumPy slicing. These are 0-based indexing. When slicing, the start bound is included, while the upper
bound is excluded. Trying to use a non-integer, even a valid label will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
• An integer e.g. 5.
• A list or array of integers [4, 3, 0].
• A slice object with ints 1:7.
• A boolean array.
• A callable, see Selection By Callable.
In [56]: s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
[email protected]
T56GZSRVAH
In [57]: s1
Out[57]:
0 0.695775
2 0.341734
4 0.959726
6 -1.110336
8 -0.619976
dtype: float64

In [58]: s1.iloc[:3]
Out[58]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64

In [59]: s1.iloc[3]
Out[59]: -1.110336102891167

Note that setting works as well:


In [60]: s1.iloc[:3] = 0

In [61]: s1
Out[61]:
0 0.000000
2 0.000000
4 0.000000


6 -1.110336
8 -0.619976
dtype: float64

With a DataFrame:
In [62]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list(range(0, 12, 2)),
....: columns=list(range(0, 8, 2)))
....:

In [63]: df1
Out[63]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602

Select via integer slicing:


In [64]: df1.iloc[:3]
Out[64]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
[email protected]
T56GZSRVAH4 -1.369849 -0.954208 1.462696 -1.743161

In [65]: df1.iloc[1:5, 2:4]


Out[65]:
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427

Select via integer list:


In [66]: df1.iloc[[1, 3, 5], [1, 3]]
Out[66]:
2 6
2 -0.154951 -2.179861
6 -0.345352 0.690579
10 -1.236269 -0.487602

In [67]: df1.iloc[1:3, :]
Out[67]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161

In [68]: df1.iloc[:, 1:3]


Out[68]:
2 4


0 -0.732339 0.687738
2 -0.154951 0.301624
4 -0.954208 1.462696
6 -0.345352 1.314232
8 2.396780 0.014871
10 -1.236269 0.896171

# this is also equivalent to ``df1.iat[1,1]``


In [69]: df1.iloc[1, 1]
Out[69]: -0.1549507744249032

For getting a cross section using an integer position (equiv to df.xs(1)):

In [70]: df1.iloc[1]
Out[70]:
0 0.403310
2 -0.154951
4 0.301624
6 -2.179861
Name: 2, dtype: float64

Out of range slice indexes are handled gracefully just as in Python/Numpy.

# these are allowed in python/numpy.


In [71]: x = list('abcdef')

In [72]: x
[email protected]
Out[72]: ['a', 'b', 'c', 'd', 'e', 'f']
In [73]: x[4:10]
Out[73]: ['e', 'f']

In [74]: x[8:10]
Out[74]: []

In [75]: s = pd.Series(x)

In [76]: s
Out[76]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object

In [77]: s.iloc[4:10]
Out[77]:
4 e
5 f
dtype: object

In [78]: s.iloc[8:10]
Out[78]: Series([], dtype: object)

Note that using slices that go out of bounds can result in an empty axis (e.g. an empty DataFrame being returned).


In [79]: dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))

In [80]: dfl
Out[80]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885

In [81]: dfl.iloc[:, 2:3]


Out[81]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]

In [82]: dfl.iloc[:, 1:3]


Out[82]:
B
0 -2.182937
1 0.084844
2 1.519970
3 0.600178
4 0.132885

In [83]: dfl.iloc[4:6]
Out[83]:
         A         B
4  0.27423  0.132885

A single indexer that is out of bounds will raise an IndexError. A list of indexers where any element is out of
bounds will raise an IndexError.

>>> dfl.iloc[[4, 5, 6]]


IndexError: positional indexers are out-of-bounds

>>> dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds

3.2.7 Selection by callable

.loc, .iloc, and also [] indexing can accept a callable as indexer. The callable must be a function with
one argument (the calling Series or DataFrame) that returns valid output for indexing.

In [84]: df1 = pd.DataFrame(np.random.randn(6, 4),


....: index=list('abcdef'),
....: columns=list('ABCD'))
....:

In [85]: df1
Out[85]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143


c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478

In [86]: df1.loc[lambda df: df['A'] > 0, :]


Out[86]:
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580

In [87]: df1.loc[:, lambda df: ['A', 'B']]


Out[87]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374

In [88]: df1.iloc[:, lambda df: [0, 1]]


Out[88]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
[email protected]
T56GZSRVAHe 1.289997 0.082423
f -0.489682 0.369374

In [89]: df1[lambda df: df.columns[0]]


Out[89]:
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64

You can use callable indexing in Series.


In [90]: df1['A'].loc[lambda s: s > 0]
Out[90]:
c 0.299368
e 1.289997
Name: A, dtype: float64

Using these methods / indexers, you can chain data selection operations without using a temporary variable.
In [91]: bb = pd.read_csv('data/baseball.csv', index_col='id')

In [92]: (bb.groupby(['year', 'team']).sum()


....: .loc[lambda df: df['r'] > 100])
....:
Out[92]:


           stint    g    ab    r    h  X2b  X3b  hr    rbi    sb   cs   bb     so   ibb   hbp    sh    sf  gidp
year team
2007 CIN       6  379   745  101  203   35    2  36  125.0  10.0  1.0  105  127.0  14.0   1.0   1.0  15.0  18.0
     DET       5  301  1062  162  283   54    4  37  144.0  24.0  7.0   97  176.0   3.0  10.0   4.0   8.0  28.0
     HOU       4  311   926  109  218   47    6  14   77.0  10.0  4.0   60  212.0   3.0   9.0  16.0   6.0  17.0
     LAN      11  413  1021  153  293   61    3  36  154.0   7.0  5.0  114  141.0   8.0   9.0   3.0   8.0  29.0
     NYN      13  622  1854  240  509  101    3  61  243.0  22.0  4.0  174  310.0  24.0  23.0  18.0  15.0  48.0
     SFN       5  482  1305  198  337   67    6  40  171.0  26.0  7.0  235  188.0  51.0   8.0  16.0   6.0  41.0
     TEX       2  198   729  115  200   40    4  28  115.0  21.0  4.0   73  140.0   4.0   5.0   2.0   8.0  16.0
     TOR       4  459  1408  187  378   96    2  58  223.0   4.0  2.0  190  265.0  16.0  12.0   4.0  16.0  38.0

3.2.8 IX indexer is deprecated

Warning: Starting in 0.20.0, the .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers.
[email protected]
T56GZSRVAH
.ix offers a lot of magic on the inference of what the user wants to do. To wit, .ix can decide to index positionally
OR via labels depending on the data type of the index. This has caused quite a bit of user confusion over the years.
The recommended methods of indexing are:
• .loc if you want to label index.
• .iloc if you want to positionally index.

In [93]: dfd = pd.DataFrame({'A': [1, 2, 3],


....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:

In [94]: dfd
Out[94]:
A B
a 1 4
b 2 5
c 3 6

Previous behavior, where you wish to get the 0th and the 2nd elements from the index in the ‘A’ column.

In [3]: dfd.ix[[0, 2], 'A']


Out[3]:
a 1
c 3
Name: A, dtype: int64


Using .loc. Here we will select the appropriate indexes from the index, then use label indexing.

In [95]: dfd.loc[dfd.index[[0, 2]], 'A']


Out[95]:
a 1
c 3
Name: A, dtype: int64

This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using positional indexing
to select things.

In [96]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]


Out[96]:
a 1
c 3
Name: A, dtype: int64

For getting multiple indexers, using .get_indexer:

In [97]: dfd.iloc[[0, 2], dfd.columns.get_indexer(['A', 'B'])]


Out[97]:
A B
a 1 4
c 3 6

3.2.9 Indexing with list with missing labels is deprecated


[email protected]
T56GZSRVAH
Warning: Starting in 0.21.0, using .loc or [] with a list with one or more missing labels, is deprecated, in favor
of .reindex.

In prior versions, using .loc[list-of-labels] would work as long as at least 1 of the keys was found (oth-
erwise it would raise a KeyError). This behavior is deprecated and will show a warning message pointing to this
section. The recommended alternative is to use .reindex().
For example.

In [98]: s = pd.Series([1, 2, 3])

In [99]: s
Out[99]:
0 1
1 2
2 3
dtype: int64

Selection with all keys found is unchanged.

In [100]: s.loc[[1, 2]]


Out[100]:
1 2
2 3
dtype: int64

Previous behavior


In [4]: s.loc[[1, 2, 3]]


Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64

Current behavior

In [4]: s.loc[[1, 2, 3]]


Passing list-likes to .loc with any non-matching elements will raise
KeyError in the future, you can use .reindex() as an alternative.

See the documentation here:


https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike

Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64

Reindexing

The idiomatic way to achieve selecting potentially not-found elements is via .reindex(). See also the section on
reindexing.
[email protected]
T56GZSRVAH
In [101]: s.reindex([1, 2, 3])
Out[101]:
1 2.0
2 3.0
3 NaN
dtype: float64

Alternatively, if you want to select only valid keys, the following is idiomatic and efficient; it is guaranteed to preserve
the dtype of the selection.

In [102]: labels = [1, 2, 3]

In [103]: s.loc[s.index.intersection(labels)]
Out[103]:
1 2
2 3
dtype: int64

Having a duplicated index will raise for a .reindex():

In [104]: s = pd.Series(np.arange(4), index=['a', 'a', 'b', 'c'])

In [105]: labels = ['c', 'd']

In [17]: s.reindex(labels)
ValueError: cannot reindex from a duplicate axis

Generally, you can intersect the desired labels with the current axis, and then reindex.


In [106]: s.loc[s.index.intersection(labels)].reindex(labels)
Out[106]:
c 3.0
d NaN
dtype: float64

However, this would still raise if your resulting index is duplicated.

In [41]: labels = ['a', 'd']

In [42]: s.loc[s.index.intersection(labels)].reindex(labels)
ValueError: cannot reindex from a duplicate axis

3.2.10 Selecting random samples

A random selection of rows or columns from a Series or DataFrame can be obtained with the sample() method. The
method will sample rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.

In [107]: s = pd.Series([0, 1, 2, 3, 4, 5])

# When no arguments are passed, returns 1 row.


In [108]: s.sample()
Out[108]:
4 4
dtype: int64

# One may specify either a number of rows:


[email protected]
T56GZSRVAHIn [109]: s.sample(n=3)
Out[109]:
0 0
4 4
1 1
dtype: int64

# Or a fraction of the rows:


In [110]: s.sample(frac=0.5)
Out[110]:
5 5
3 3
1 1
dtype: int64

By default, sample will return each row at most once, but one can also sample with replacement using the replace
option:

In [111]: s = pd.Series([0, 1, 2, 3, 4, 5])

# Without replacement (default):


In [112]: s.sample(n=6, replace=False)
Out[112]:
0 0
1 1
5 5
3 3
2 2


4 4
dtype: int64

# With replacement:
In [113]: s.sample(n=6, replace=True)
Out[113]:
0 0
4 4
3 3
2 2
4 4
4 4
dtype: int64

By default, each row has an equal probability of being selected, but if you want rows to have different probabilities,
you can pass the sample function sampling weights as weights. These weights can be a list, a NumPy array, or a
Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight
of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights
by the sum of the weights. For example:

In [114]: s = pd.Series([0, 1, 2, 3, 4, 5])

In [115]: example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]

In [116]: s.sample(n=3, weights=example_weights)


Out[116]:
5 5
[email protected]
4 4
T56GZSRVAH3 3
dtype: int64

# Weights will be re-normalized automatically


In [117]: example_weights2 = [0.5, 0, 0, 0, 0, 0]

In [118]: s.sample(n=1, weights=example_weights2)


Out[118]:
0 0
dtype: int64

When applied to a DataFrame, you can use a column of the DataFrame as sampling weights (provided you are sampling
rows and not columns) by simply passing the name of the column as a string.

In [119]: df2 = pd.DataFrame({'col1': [9, 8, 7, 6],


.....: 'weight_column': [0.5, 0.4, 0.1, 0]})
.....:

In [120]: df2.sample(n=3, weights='weight_column')


Out[120]:
col1 weight_column
1 8 0.4
0 9 0.5
2 7 0.1

sample also allows users to sample columns instead of rows using the axis argument.

In [121]: df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})



In [122]: df3.sample(n=1, axis=1)


Out[122]:
col1
0 1
1 2
2 3

Finally, one can also set a seed for sample’s random number generator using the random_state argument, which
will accept either an integer (as a seed) or a NumPy RandomState object.

In [123]: df4 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})

# With a given seed, the sample will always draw the same rows.
In [124]: df4.sample(n=2, random_state=2)
Out[124]:
col1 col2
2 3 4
1 2 3

In [125]: df4.sample(n=2, random_state=2)


Out[125]:
col1 col2
2 3 4
1 2 3
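
A NumPy RandomState object can be passed instead of an integer seed, as a minimal sketch:

rng = np.random.RandomState(1234)
df4.sample(n=2, random_state=rng)   # draws are reproducible and controlled by rng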

[email protected]
T56GZSRVAH3.2.11 Setting with enlargement
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis.
In the Series case this is effectively an appending operation.

In [126]: se = pd.Series([1, 2, 3])

In [127]: se
Out[127]:
0 1
1 2
2 3
dtype: int64

In [128]: se[5] = 5.

In [129]: se
Out[129]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64

A DataFrame can be enlarged on either axis via .loc.

In [130]: dfi = pd.DataFrame(np.arange(6).reshape(3, 2),


.....: columns=['A', 'B'])


.....:

In [131]: dfi
Out[131]:
A B
0 0 1
1 2 3
2 4 5

In [132]: dfi.loc[:, 'C'] = dfi.loc[:, 'A']

In [133]: dfi
Out[133]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4

This is like an append operation on the DataFrame.

In [134]: dfi.loc[3] = 5

In [135]: dfi
Out[135]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
[email protected]
T56GZSRVAH3 5 5 5

3.2.12 Fast scalar value getting and setting

Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of
overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to
use the at and iat methods, which are implemented on all of the data structures.
Similarly to loc, at provides label based scalar lookups, while iat provides integer based lookups analogously to
iloc.

In [136]: s.iat[5]
Out[136]: 5

In [137]: df.at[dates[5], 'A']


Out[137]: -0.6736897080883706

In [138]: df.iat[3, 0]
Out[138]: 0.7215551622443669

You can also set using these same indexers.

In [139]: df.at[dates[5], 'E'] = 7

In [140]: df.iat[3, 0] = 7

at may enlarge the object in-place as above if the indexer is missing.


In [141]: df.at[dates[-1] + pd.Timedelta('1 day'), 0] = 7

In [142]: df
Out[142]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-05 -0.424972 0.567020 0.276232 -1.087401 NaN NaN
2000-01-06 -0.673690 0.113648 -1.478427 0.524988 7.0 NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885 NaN NaN
2000-01-09 NaN NaN NaN NaN NaN 7.0

3.2.13 Boolean indexing

Another common operation is the use of boolean vectors to filter the data. The operators are: | for or, & for and, and
~ for not. These must be grouped by using parentheses, since by default Python will evaluate an expression such as
df['A'] > 2 & df['B'] < 3 as df['A'] > (2 & df['B']) < 3, while the desired evaluation order
is (df['A'] > 2) & (df['B'] < 3).
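
For example, a minimal sketch with the df used in this section:

df[(df['A'] > 0) & (df['B'] < 0)]    # parentheses are required
# df[df['A'] > 0 & df['B'] < 0]      # without them, & binds more tightly than > and <, so this does not do what was intended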
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
In [143]: s = pd.Series(range(-3, 4))

In [144]: s
[email protected]
Out[144]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64

In [145]: s[s > 0]


Out[145]:
4 1
5 2
6 3
dtype: int64

In [146]: s[(s < -1) | (s > 0.5)]


Out[146]:
0 -3
1 -2
4 1
5 2
6 3
dtype: int64

In [147]: s[~(s < 0)]


Out[147]:
3 0


4 1
5 2
6 3
dtype: int64

You may select rows from a DataFrame using a boolean vector the same length as the DataFrame’s index (for example,
something derived from one of the columns of the DataFrame):

In [148]: df[df['A'] > 0]


Out[148]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN

List comprehensions and the map method of Series can also be used to produce more complex criteria:

In [149]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
   .....:                     'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
   .....:                     'c': np.random.randn(7)})
   .....:

# only want 'two' or 'three'


In [150]: criterion = df2['a'].map(lambda x: x.startswith('t'))

In [151]: df2[criterion]
[email protected]
T56GZSRVAHOut[151]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075

# equivalent but slower


In [152]: df2[[x.startswith('t') for x in df2['a']]]
Out[152]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075

# Multiple criteria
In [153]: df2[criterion & (df2['b'] == 'x')]
Out[153]:
a b c
3 three x 0.361719

With the choice methods Selection by Label, Selection by Position, and Advanced Indexing you may select along more
than one axis using boolean vectors combined with other indexing expressions.

In [154]: df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']


Out[154]:
b c
3 x 0.361719


3.2.14 Indexing with isin

Consider the isin() method of Series, which returns a boolean vector that is true wherever the Series elements
exist in the passed list. This allows you to select rows where one or more columns have values you want:
In [155]: s = pd.Series(np.arange(5), index=np.arange(5)[::-1], dtype='int64')

In [156]: s
Out[156]:
4 0
3 1
2 2
1 3
0 4
dtype: int64

In [157]: s.isin([2, 4, 6])


Out[157]:
4 False
3 False
2 True
1 False
0 True
dtype: bool

In [158]: s[s.isin([2, 4, 6])]


Out[158]:
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for the cases when you don’t know which of the sought
labels are in fact present:
In [159]: s[s.index.isin([2, 4, 6])]
Out[159]:
4 0
2 2
dtype: int64

# compare it to the following


In [160]: s.reindex([2, 4, 6])
Out[160]:
2 2.0
4 0.0
6 NaN
dtype: float64

In addition to that, MultiIndex allows selecting a separate level to use in the membership check:
In [161]: s_mi = pd.Series(np.arange(6),
   .....:                    index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
   .....:

In [162]: s_mi
Out[162]:
0 a 0
b 1
c 2
1 a 3
b 4
c 5
dtype: int64

In [163]: s_mi.iloc[s_mi.index.isin([(1, 'a'), (2, 'b'), (0, 'c')])]


Out[163]:
0 c 2
1 a 3
dtype: int64

In [164]: s_mi.iloc[s_mi.index.isin(['a', 'c', 'e'], level=1)]


Out[164]:
0 a 0
c 2
1 a 3
c 5
dtype: int64

DataFrame also has an isin() method. When calling isin, pass a set of values as either an array or dict. If values is
an array, isin returns a DataFrame of booleans that is the same shape as the original DataFrame, with True wherever
the element is in the sequence of values.

In [165]: df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],


.....: 'ids2': ['a', 'n', 'c', 'n']})
.....:
In [166]: values = ['a', 'b', 1, 3]

In [167]: df.isin(values)
Out[167]:
vals ids ids2
0 True True True
1 False True False
2 True False False
3 False False False

Oftentimes you’ll want to match certain values with certain columns. Just make values a dict where the key is the
column, and the value is a list of items you want to check for.

In [168]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}

In [169]: df.isin(values)
Out[169]:
vals ids ids2
0 True True False
1 False True False
2 True False False
3 False False False

Combine DataFrame’s isin with the any() and all() methods to quickly select subsets of your data that meet a
given criteria. To select a row where each column meets its own criterion:

In [170]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}

In [171]: row_mask = df.isin(values).all(1)

In [172]: df[row_mask]
Out[172]:
vals ids ids2
0 1 a a

3.2.15 The where() Method and Masking

Selecting values from a Series with a boolean vector generally returns a subset of the data. To guarantee that selection
output has the same shape as the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
In [173]: s[s > 0]
Out[173]:
3 1
2 2
1 3
0 4
dtype: int64

To return a Series of the same shape as the original:


In [174]: s.where(s > 0)
Out[174]:
4 NaN
3 1.0
2 2.0
1 3.0
0 4.0
dtype: float64

Selecting values from a DataFrame with a boolean criterion now also preserves input data shape. where is used under
the hood as the implementation. The code below is equivalent to df.where(df < 0).
In [175]: df[df < 0]
Out[175]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838

In addition, where takes an optional other argument for replacement of values where the condition is False, in the
returned copy.
In [176]: df.where(df < 0, -df)
Out[176]:
A B C D
2000-01-01 -2.104139 -1.309525 -0.485855 -0.245166
2000-01-02 -0.352480 -0.390389 -1.192319 -1.655824
2000-01-03 -0.864883 -0.299674 -0.227870 -0.281059
2000-01-04 -0.846958 -1.222082 -0.600705 -1.233203
2000-01-05 -0.669692 -0.605656 -1.169184 -0.342416
2000-01-06 -0.868584 -0.948458 -2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 -0.168904 -0.048048
2000-01-08 -0.801196 -1.392071 -0.048788 -0.808838

You may wish to set values based on some boolean criteria. This can be done intuitively like so:

In [177]: s2 = s.copy()

In [178]: s2[s2 < 0] = 0

In [179]: s2
Out[179]:
4 0
3 1
2 2
1 3
0 4
dtype: int64

In [180]: df2 = df.copy()

In [181]: df2[df2 < 0] = 0


In [182]: df2
Out[182]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000

By default, where returns a modified copy of the data. There is an optional parameter inplace so that the original
data can be modified without creating a copy:

In [183]: df_orig = df.copy()

In [184]: df_orig.where(df > 0, -df, inplace=True)

In [185]: df_orig
Out[185]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
2000-01-07 2.670153 0.114722 0.168904 0.048048
2000-01-08 0.801196 1.392071 0.048788 0.808838

Note: The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m,
df2) is equivalent to np.where(m, df1, df2).

In [186]: df.where(df < 0, -df) == np.where(df < 0, df, -df)


Out[186]:
A B C D
2000-01-01 True True True True
2000-01-02 True True True True
2000-01-03 True True True True
2000-01-04 True True True True
2000-01-05 True True True True
2000-01-06 True True True True
2000-01-07 True True True True
2000-01-08 True True True True

Alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame), such that partial selection with setting
is possible. This is analogous to partial setting via .loc (but on the contents rather than the axis labels).

In [187]: df2 = df.copy()

In [188]: df2[df2[1:4] > 0] = 3


In [189]: df2
Out[189]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838

Where can also accept axis and level parameters to align the input when performing the where.

In [190]: df2 = df.copy()

In [191]: df2.where(df2 > 0, df2['A'], axis='index')


Out[191]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196


This is equivalent to (but faster than) the following.

In [192]: df2 = df.copy()

In [193]: df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])


Out[193]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196

where can also accept a callable as its condition and other arguments. The callable must take one argument (the calling
Series or DataFrame) and return output that is valid as the condition or other argument.

In [194]: df3 = pd.DataFrame({'A': [1, 2, 3],


.....: 'B': [4, 5, 6],
.....: 'C': [7, 8, 9]})
.....:

In [195]: df3.where(lambda x: x > 4, lambda x: x + 10)


Out[195]:
A B C
0 11 14 7
1 12 5 8
2 13 6 9

Mask

mask() is the inverse boolean operation of where.

In [196]: s.mask(s >= 0)


Out[196]:
4 NaN
3 NaN
2 NaN
1 NaN
0 NaN
dtype: float64

In [197]: df.mask(df >= 0)


Out[197]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838


3.2.16 The query() Method

DataFrame objects have a query() method that allows selection using an expression.
You can get the value of the frame where column b has values between the values of columns a and c. For example:

In [198]: n = 10

In [199]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))

In [200]: df
Out[200]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437

# pure python
In [201]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[201]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716

# query
In [202]: df.query('(a < b) & (b < c)')
Out[202]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716

Do the same thing but fall back on a named index if there is no column with the name a.

In [203]: df = pd.DataFrame(np.random.randint(n / 2, size=(n, 2)), columns=list('bc'))

In [204]: df.index.name = 'a'

In [205]: df
Out[205]:
b c
a
0 0 4
1 0 1
2 3 4
3 4 3
4 1 4
5 0 3
6 0 1
7 3 4
8 2 3
9 1 1

In [206]: df.query('a < b and b < c')


Out[206]:
b c
a
2 3 4

If instead you don’t want to or cannot name your index, you can use the name index in your query expression:

In [207]: df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))

In [208]: df
Out[208]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
In [209]: df.query('index < b < c')
Out[209]:
b c
2 5 6

Note: If the name of your index overlaps with a column name, the column name is given precedence. For example,

In [210]: df = pd.DataFrame({'a': np.random.randint(5, size=5)})

In [211]: df.index.name = 'a'

In [212]: df.query('a > 2') # uses the column 'a', not the index
Out[212]:
a
a
1 3
3 3

You can still use the index in a query expression by using the special identifier ‘index’:

In [213]: df.query('index > 2')


Out[213]:
a
a
3 3
4 2


If for some reason you have a column named index, then you can refer to the index as ilevel_0 as well, but at
this point you should consider renaming your columns to something less ambiguous.
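For illustration, here is a small hypothetical frame (not one of the examples above) whose column is literally named
index; ilevel_0 still refers unambiguously to the 0th level of the row index:

# hypothetical frame with a column named 'index'
df_amb = pd.DataFrame({'index': [10, 20, 30], 'b': [1, 2, 3]})

# 'ilevel_0' refers to the row index, not the column named 'index'
df_amb.query('ilevel_0 > 0')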

MultiIndex query() Syntax

You can also use the levels of a DataFrame with a MultiIndex as if they were columns in the frame:
In [214]: n = 10

In [215]: colors = np.random.choice(['red', 'green'], size=n)

In [216]: foods = np.random.choice(['eggs', 'ham'], size=n)

In [217]: colors
Out[217]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'], dtype='<U5')

In [218]: foods
Out[218]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs',
'eggs'], dtype='<U4')

In [219]: index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])

In [220]: df = pd.DataFrame(np.random.randn(n, 2), index=index)

In [221]: df
Out[221]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418

In [222]: df.query('color == "red"')


Out[222]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255

If the levels of the MultiIndex are unnamed, you can refer to them using special names:
In [223]: df.index.names = [None, None]

In [224]: df
Out[224]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418

In [225]: df.query('ilevel_0 == "red"')


Out[225]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255

The convention is ilevel_0, which means “index level 0” for the 0th level of the index.

query() Use Cases

A use case for query() is when you have a collection of DataFrame objects that have a subset of column names
(or index levels/names) in common. You can pass the same query to both frames without having to specify which
frame you're interested in querying:
In [226]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))

In [227]: df
Out[227]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611

In [228]: df2 = pd.DataFrame(np.random.rand(n + 2, 3), columns=df.columns)

In [229]: df2
Out[229]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
8 0.420223 0.136404 0.141295
9 0.732206 0.419540 0.604675
10 0.604466 0.848974 0.896165
11 0.589168 0.920046 0.732716

In [230]: expr = '0.0 <= a <= c <= 0.5'

In [231]: map(lambda frame: frame.query(expr), [df, df2])


Out[231]: <map at 0x7f3d1d734150>
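Note that map returns a lazy iterator in Python 3, which is why Out[231] shows a map object rather than the filtered
frames; wrapping the call in list (or using a list comprehension) materializes the results:

# materialize the lazy map object so the query actually runs on each frame
list(map(lambda frame: frame.query(expr), [df, df2]))

# equivalently
[frame.query(expr) for frame in [df, df2]]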

query() Python versus pandas Syntax Comparison

Full numpy-like syntax:

In [232]: df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))

In [233]: df
Out[233]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0

In [234]: df.query('(a < b) & (b < c)')


Out[234]:
a b c
0 7 8 9

In [235]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]


Out[235]:
a b c
0 7 8 9

Slightly nicer by removing the parentheses (comparison operators bind tighter than & and |).

In [236]: df.query('a < b & b < c')


Out[236]:
a b c
0 7 8 9

Use English instead of symbols:

In [237]: df.query('a < b and b < c')


Out[237]:
a b c
0 7 8 9

Pretty close to how you might write it on paper:


In [238]: df.query('a < b < c')


Out[238]:
a b c
0 7 8 9

The in and not in operators

query() also supports special use of Python’s in and not in comparison operators, providing a succinct syntax
for calling the isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [239]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:

In [240]: df
Out[240]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2

In [241]: df.query('a in b')


Out[241]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2

# How you'd do it in pure Python


In [242]: df[df['a'].isin(df['b'])]
Out[242]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2

In [243]: df.query('a not in b')


Out[243]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2

# pure Python
In [244]: df[~df['a'].isin(df['b'])]
Out[244]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2

You can combine this with other expressions for very succinct queries:

# rows where cols a and b have overlapping values


# and col c's values are less than col d's
In [245]: df.query('a in b and c < d')
Out[245]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2

# pure Python
In [246]: df[df['b'].isin(df['a']) & (df['c'] < df['d'])]
Out[246]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
10 f c 0 6
11 f c 1 2

Note: in and not in are evaluated in Python, since numexpr has no equivalent of this operation.
However, only the in/not in expression itself is evaluated in vanilla Python. For example, in the expression

df.query('a in b + c + d')

(b + c + d) is evaluated by numexpr and then the in operation is evaluated in plain Python. In general, any
operations that can be evaluated using numexpr will be.
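If you need to control which engine evaluates an expression, query() accepts an engine keyword (forwarded to
pandas.eval); a small sketch using the frame above:

# force the pure-Python engine (useful when numexpr is not installed)
df.query('c < d', engine='python')

# the default is 'numexpr' whenever that library is available
df.query('c < d', engine='numexpr')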


Special use of the == operator with list objects

Comparing a list of values to a column using ==/!= works similarly to in/not in.
In [247]: df.query('b == ["a", "b", "c"]')
Out[247]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2

# pure Python
In [248]: df[df['b'].isin(["a", "b", "c"])]
Out[248]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2

In [249]: df.query('c == [1, 2]')


Out[249]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2

In [250]: df.query('c != [1, 2]')


Out[250]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6

# using in/not in
In [251]: df.query('[1, 2] in c')
Out[251]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2

In [252]: df.query('[1, 2] not in c')


Out[252]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6

# pure Python
In [253]: df[df['c'].isin([1, 2])]
Out[253]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2

Boolean operators

You can negate boolean expressions with the word not or the ~ operator.

In [254]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))

In [255]: df['bools'] = np.random.rand(len(df)) > 0.5

In [256]: df.query('~bools')
Out[256]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False

In [257]: df.query('not bools')


Out[257]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False

In [258]: df.query('not bools') == df[~df['bools']]


Out[258]:
a b c bools
2 True True True True
7 True True True True
8 True True True True

Of course, expressions can be arbitrarily complex too:


# short query syntax
In [259]: shorter = df.query('a < b < c and (not bools) or bools > 2')

# equivalent in pure Python


In [260]: longer = df[(df['a'] < df['b'])
.....: & (df['b'] < df['c'])
.....: & (~df['bools'])
.....: | (df['bools'] > 2)]
.....:

In [261]: shorter
Out[261]:
a b c bools
7 0.275396 0.691034 0.826619 False

In [262]: longer
Out[262]:
a b c bools
7 0.275396 0.691034 0.826619 False

In [263]: shorter == longer


Out[263]:
a b c bools
7 True True True True

Performance of query()

DataFrame.query() using numexpr is slightly faster than Python for large frames.


Note: You will only see the performance benefits of using the numexpr engine with DataFrame.query() if
your frame has more than approximately 200,000 rows.

This plot was created using a DataFrame with 3 columns each containing floating point values generated using
numpy.random.randn().
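As a rough sketch of how you might measure this yourself (the exact crossover point depends on your machine and on
whether numexpr is installed; the frame below is illustrative only):

import numpy as np
import pandas as pd

big = pd.DataFrame(np.random.randn(2_000_000, 3), columns=list('abc'))

# in IPython, compare the two spellings with %timeit, e.g.
# %timeit big[(big['a'] < big['b']) & (big['b'] < big['c'])]
# %timeit big.query('(a < b) & (b < c)')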
3.2.17 Duplicate data

If you want to identify and remove duplicate rows in a DataFrame, there are two methods that will help: duplicated
and drop_duplicates. Each takes as an argument the columns to use to identify duplicated rows.
• duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row
is duplicated.
• drop_duplicates removes duplicate rows.
By default, the first observed row of a duplicate set is considered unique, but each method has a keep parameter to
specify targets to be kept.
• keep='first' (default): mark / drop duplicates except for the first occurrence.
• keep='last': mark / drop duplicates except for the last occurrence.
• keep=False: mark / drop all duplicates.

In [264]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
   .....:                     'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
   .....:                     'c': np.random.randn(7)})
   .....:

In [265]: df2
Out[265]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329

In [266]: df2.duplicated('a')
Out[266]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool

In [267]: df2.duplicated('a', keep='last')


Out[267]:
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
In [268]: df2.duplicated('a', keep=False)
Out[268]:
0 True
1 True
2 True
3 True
4 True
5 False
6 False
dtype: bool

In [269]: df2.drop_duplicates('a')
Out[269]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329

In [270]: df2.drop_duplicates('a', keep='last')


Out[270]:
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329

In [271]: df2.drop_duplicates('a', keep=False)


Out[271]:
a b c
5 three x -1.964475
6 four x 1.298329

Also, you can pass a list of columns to identify duplicated rows.

In [272]: df2.duplicated(['a', 'b'])


Out[272]:
0 False
1 False
2 False
3 False
4 True
5 False
6 False
dtype: bool

In [273]: df2.drop_duplicates(['a', 'b'])


Out[273]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated then perform slicing. The same set of options is
available for the keep parameter.

In [274]: df3 = pd.DataFrame({'a': np.arange(6),


.....: 'b': np.random.randn(6)},
.....: index=['a', 'a', 'b', 'c', 'b', 'a'])
.....:

In [275]: df3
Out[275]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764

In [276]: df3.index.duplicated()
Out[276]: array([False, True, False, False, True, True])

In [277]: df3[~df3.index.duplicated()]
Out[277]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409

In [278]: df3[~df3.index.duplicated(keep='last')]
Out[278]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764

In [279]: df3[~df3.index.duplicated(keep=False)]
Out[279]:
a b
c 3 -0.894409

3.2.18 Dictionary-like get() method

Series and DataFrame each have a get method which can return a default value.

In [280]: s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])

In [281]: s.get('a') # equivalent to s['a']


Out[281]: 1

In [282]: s.get('x', default=-1)


Out[282]: -1

3.2.19 The lookup() method


Sometimes you want to extract a set of values given a sequence of row labels and column labels, and the lookup
method allows for this and returns a NumPy array. For instance:

In [283]: dflookup = pd.DataFrame(np.random.rand(20, 4), columns=['A', 'B', 'C', 'D'])

In [284]: dflookup.lookup(list(range(0, 10, 2)), ['B', 'C', 'A', 'B', 'D'])


Out[284]: array([0.3506, 0.4779, 0.4825, 0.9197, 0.5019])

3.2.20 Index objects

The pandas Index class and its subclasses can be viewed as implementing an ordered multiset. Duplicates are
allowed. However, if you try to convert an Index object with duplicate entries into a set, an exception will be
raised.
Index also provides the infrastructure necessary for lookups, data alignment, and reindexing. The easiest way to
create an Index directly is to pass a list or other sequence to Index:

In [285]: index = pd.Index(['e', 'd', 'a', 'b'])

In [286]: index
Out[286]: Index(['e', 'd', 'a', 'b'], dtype='object')

In [287]: 'd' in index


Out[287]: True


You can also pass a name to be stored in the index:

In [288]: index = pd.Index(['e', 'd', 'a', 'b'], name='something')

In [289]: index.name
Out[289]: 'something'

The name, if set, will be shown in the console display:

In [290]: index = pd.Index(list(range(5)), name='rows')

In [291]: columns = pd.Index(['A', 'B', 'C'], name='cols')

In [292]: df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)

In [293]: df
Out[293]:
cols A B C
rows
0 1.295989 0.185778 0.436259
1 0.678101 0.311369 -0.528378
2 -0.674808 -1.103529 -0.656157
3 1.889957 2.076651 -1.102192
4 -1.211795 -0.791746 0.634724

In [294]: df['A']
Out[294]:
rows
0 1.295989
1 0.678101
2 -0.674808
3 1.889957
4 -1.211795
Name: A, dtype: float64

Setting metadata

Indexes are “mostly immutable”, but it is possible to set and change their metadata, like the index name (or, for
MultiIndex, levels and codes).
You can use the rename, set_names, set_levels, and set_codes methods to set these attributes directly. They default
to returning a copy; however, you can specify inplace=True to have the data change in place.
See Advanced Indexing for usage of MultiIndexes.

In [295]: ind = pd.Index([1, 2, 3])

In [296]: ind.rename("apple")
Out[296]: Int64Index([1, 2, 3], dtype='int64', name='apple')

In [297]: ind
Out[297]: Int64Index([1, 2, 3], dtype='int64')

In [298]: ind.set_names(["apple"], inplace=True)

In [299]: ind.name = "bob"

In [300]: ind
Out[300]: Int64Index([1, 2, 3], dtype='int64', name='bob')

set_names, set_levels, and set_codes also take an optional level argument:


In [301]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']], names=['first', 'second'])

In [302]: index
Out[302]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])

In [303]: index.levels[1]
Out[303]: Index(['one', 'two'], dtype='object', name='second')

In [304]: index.set_levels(["a", "b"], level=1)


Out[304]:
MultiIndex([(0, 'a'),
(0, 'b'),
(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['first', 'second'])

Set operations on Index objects

The two main operations are union (|) and intersection (&). These can be directly called as instance
methods or used via overloaded operators. Difference is provided via the .difference() method.
In [305]: a = pd.Index(['c', 'b', 'a'])

In [306]: b = pd.Index(['c', 'e', 'd'])

In [307]: a | b
Out[307]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')

In [308]: a & b
Out[308]: Index(['c'], dtype='object')

In [309]: a.difference(b)
Out[309]: Index(['a', 'b'], dtype='object')

Also available is the symmetric_difference (^) operation, which returns elements that appear in either idx1
or idx2, but not in both. This is equivalent to the Index created by idx1.difference(idx2).union(idx2.
difference(idx1)), with duplicates dropped.
In [310]: idx1 = pd.Index([1, 2, 3, 4])

In [311]: idx2 = pd.Index([2, 3, 4, 5])

In [312]: idx1.symmetric_difference(idx2)
Out[312]: Int64Index([1, 5], dtype='int64')

In [313]: idx1 ^ idx2


Out[313]: Int64Index([1, 5], dtype='int64')

Note: The resulting index from a set operation will be sorted in ascending order.

When performing Index.union() between indexes with different dtypes, the indexes must be cast to a common
dtype. Typically, though not always, this is object dtype. The exception is when performing a union between integer
and float data. In this case, the integer values are converted to float.

In [314]: idx1 = pd.Index([0, 1, 2])

In [315]: idx2 = pd.Index([0.5, 1.5])

In [316]: idx1 | idx2


Out[316]: Float64Index([0.0, 0.5, 1.0, 1.5, 2.0], dtype='float64')

Missing values

Important: Even though Index can hold missing values (NaN), this should be avoided if you do not want any
unexpected results. For example, some operations exclude missing values implicitly.

Index.fillna fills missing values with specified scalar value.

In [317]: idx1 = pd.Index([1, np.nan, 3, 4])

In [318]: idx1
Out[318]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')

In [319]: idx1.fillna(2)
Out[319]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')

In [320]: idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'),
   .....:                           pd.NaT,
   .....:                           pd.Timestamp('2011-01-03')])
   .....:

In [321]: idx2
Out[321]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]', freq=None)

In [322]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[322]: DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq=None)


3.2.21 Set / reset index

Occasionally you will load or create a data set into a DataFrame and want to add an index after you’ve already done
so. There are a couple of different ways.

Set an index

DataFrame has a set_index() method which takes a column name (for a regular Index) or a list of column names
(for a MultiIndex). To create a new, re-indexed DataFrame:

In [323]: data
Out[323]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0

In [324]: indexed1 = data.set_index('c')

In [325]: indexed1
Out[325]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [326]: indexed2 = data.set_index(['a', 'b'])

In [327]: indexed2
Out[327]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0

The append keyword option allows you to keep the existing index and append the given columns to a MultiIndex:

In [328]: frame = data.set_index('c', drop=False)

In [329]: frame = frame.set_index(['a', 'b'], append=True)

In [330]: frame
Out[330]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0

Other options in set_index allow you to not drop the index columns or to add the index in place (without creating a
new object):


In [331]: data.set_index('c', drop=False)


Out[331]:
a b c d
c
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0

In [332]: data.set_index(['a', 'b'], inplace=True)

In [333]: data
Out[333]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0

Reset the index

As a convenience, there is a function on DataFrame called reset_index() which transfers the index values
into the DataFrame's columns and sets a simple integer index. This is the inverse operation of set_index().

In [334]: data
Out[334]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0

In [335]: data.reset_index()
Out[335]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0

The output is more similar to a SQL table or a record array. The names for the columns derived from the index are the
ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:

In [336]: frame
Out[336]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0

In [337]: frame.reset_index(level=1)
Out[337]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0

reset_index takes an optional parameter drop which if true simply discards the index, instead of putting index
values in the DataFrame’s columns.
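For example, continuing with data from above, passing drop=True discards the ('a', 'b') index entirely instead of
re-inserting it as columns (a quick sketch):

# only the 'c' and 'd' columns remain, with a fresh default integer index
data.reset_index(drop=True)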

Adding an ad hoc index

If you create an index yourself, you can just assign it to the index field:

data.index = index
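A minimal, self-contained sketch (the frame and index names here are illustrative, not from the examples above):

# build an Index by hand and attach it to an existing frame
df_adhoc = pd.DataFrame({'val': [10, 20, 30]})
df_adhoc.index = pd.Index(['x', 'y', 'z'], name='letters')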

3.2.22 Returning a view versus a copy

When setting values in a pandas object, care must be taken to avoid what is called chained indexing. Here is an
example.

In [338]: dfmi = pd.DataFrame([list('abcd'),
   .....:                      list('efgh'),
   .....:                      list('ijkl'),
   .....:                      list('mnop')],
   .....:                     columns=pd.MultiIndex.from_product([['one', 'two'],
   .....:                                                         ['first', 'second']]))
   .....:

In [339]: dfmi
Out[339]:
one two
first second first second
0 a b c d
1 e f g h
2 i j k l
3 m n o p

Compare these two access methods:

In [340]: dfmi['one']['second']
Out[340]:
0 b
1 f
2 j
3 n
Name: second, dtype: object

In [341]: dfmi.loc[:, ('one', 'second')]


Out[341]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object

These both yield the same results, so which should you use? It is instructive to understand the order of operations on
these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed. Then another
Python operation dfmi_with_one['second'] selects the series indexed by 'second'. This is indicated by the
variable dfmi_with_one because pandas sees these operations as separate events, i.e. separate calls to
__getitem__, so it has to treat them as linear operations that happen one after another.
Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one',
'second')) to a single call to __getitem__. This allows pandas to deal with this as a single entity. Furthermore
this order of operations can be significantly faster, and allows one to index both axes if so desired.
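For instance, with dfmi from above, a single .loc call can select on both axes at once (a sketch of the idea):

# one call to __getitem__, selecting rows and the MultiIndex column together
dfmi.loc[dfmi.index[:2], ('one', 'second')]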

Why does assignment fail when using chained indexing?

The problem in the previous section is just a performance issue. What’s up with the SettingWithCopy warning?
We don’t usually throw warnings around when you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has inherently unpredictable results. To see this,
think about how the Python interpreter executes this code:

dfmi.loc[:, ('one', 'second')] = value


# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)

But this code is handled differently:

dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)

See that __getitem__ in there? Outside of simple cases, it’s very hard to predict whether it will return a view or a
copy (it depends on the memory layout of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown out immediately afterward. That’s what
SettingWithCopy is warning you about!

Note: You may be wondering whether we should be concerned about the loc property in the first example. But
dfmi.loc is guaranteed to be dfmi itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course, dfmi.loc.__getitem__(idx) may be
a view or a copy of dfmi.

Sometimes a SettingWithCopy warning will arise at times when there’s no obvious chained indexing going on.
These are the bugs that SettingWithCopy is designed to catch! Pandas is probably trying to warn you that you’ve
done this:

def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
# We don't know whether this will modify df or not!
foo['quux'] = value
return foo

Yikes!

Evaluation order matters

When you use chained indexing, the order and type of the indexing operation partially determine whether the result is
a slice into the original object, or a copy of the slice.
Pandas has the SettingWithCopyWarning because assigning to a copy of a slice is frequently not intentional,
but a mistake caused by chained indexing returning a copy where a slice was expected.
If you would like pandas to be more or less trusting about assignment to a chained indexing expression, you can set
the option mode.chained_assignment to one of these values:
• 'warn', the default, means a SettingWithCopyWarning is printed.
• 'raise' means pandas will raise a SettingWithCopyException you have to deal with.
• None will suppress the warnings entirely.

In [342]: dfb = pd.DataFrame({'a': ['one', 'one', 'two',


.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:

# This will show the SettingWithCopyWarning
# but the frame values will be set
In [343]: dfb['c'][dfb['a'].str.startswith('o')] = 42

This, however, operates on a copy and will not work.

>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb['a'].str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead

A chained assignment can also crop up when setting in a mixed-dtype frame.
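A minimal sketch of that situation (this frame is illustrative, not one of the examples above); the chained form may
operate on a temporary copy, whereas the single .loc call is unambiguous:

# mixed dtypes: an object column next to an integer column
dfd = pd.DataFrame({'a': ['one', 'two', 'three'], 'b': [1, 2, 3]})

# chained assignment: may raise SettingWithCopyWarning, and whether dfd is
# actually modified is not guaranteed
dfd['b'][dfd['a'] == 'two'] = 99

# the recommended, unambiguous spelling
dfd.loc[dfd['a'] == 'two', 'b'] = 99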

Note: These setting rules apply to all of .loc/.iloc.

This is the correct access method:

In [344]: dfc = pd.DataFrame({'A': ['aaa', 'bbb', 'ccc'], 'B': [1, 2, 3]})

In [345]: dfc.loc[0, 'A'] = 11

In [346]: dfc
Out[346]:
A B
0 11 1
1 bbb 2
2 ccc 3

This can work at times, but it is not guaranteed to, and therefore should be avoided:

In [347]: dfc = dfc.copy()

In [348]: dfc['A'][0] = 111

In [349]: dfc
Out[349]:
A B
0 111 1
1 bbb 2
2 ccc 3

This will not work at all, and so should be avoided:

>>> pd.set_option('mode.chained_assignment','raise')
>>> dfc.loc[0]['A'] = 1111
Traceback (most recent call last)
...
SettingWithCopyException:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead

Warning: The chained assignment warnings / exceptions aim to inform the user of a possibly invalid
assignment. There may be false positives: situations where a chained assignment is inadvertently reported.

3.3 MultiIndex / advanced indexing

This section covers indexing with a MultiIndex and other advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.

Warning: Whether a copy or a reference is returned for a setting operation may depend on the context. This is
sometimes called chained assignment and should be avoided. See Returning a View versus Copy.

See the cookbook for some advanced strategies.


3.3.1 Hierarchical indexing (MultiIndex)

Hierarchical / Multi-level indexing is very exciting as it opens the door to some quite sophisticated data analysis and
manipulation, especially for working with higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data structures like Series (1d) and DataFrame
(2d).
In this section, we will show what exactly we mean by “hierarchical” indexing and how it integrates with all of the
pandas indexing functionality described above and in prior sections. Later, when discussing group by and pivoting and
reshaping data, we’ll show non-trivial applications to illustrate how it aids in structuring data for analysis.
See the cookbook for some advanced strategies.
Changed in version 0.24.0: MultiIndex.labels has been renamed to MultiIndex.codes and
MultiIndex.set_labels to MultiIndex.set_codes.

Creating a MultiIndex (hierarchical index) object

The MultiIndex object is the hierarchical analogue of the standard Index object which typically stores the axis
labels in pandas objects. You can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using MultiIndex.from_arrays()), an array of tuples
(using MultiIndex.from_tuples()), a crossed set of iterables (using MultiIndex.from_product()),
or a DataFrame (using MultiIndex.from_frame()). The Index constructor will attempt to return a
MultiIndex when it is passed a list of tuples. The following examples demonstrate different ways to initialize
MultiIndexes.
In [1]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
...: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
...:
In [2]: tuples = list(zip(*arrays))

In [3]: tuples
Out[3]:
[('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')]

In [4]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

In [5]: index
Out[5]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])

In [6]: s = pd.Series(np.random.randn(8), index=index)

In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64

When you want every pairing of the elements in two iterables, it can be easier to use the MultiIndex.
from_product() method:
In [8]: iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]

In [9]: pd.MultiIndex.from_product(iterables, names=['first', 'second'])


Out[9]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])

You can also construct a MultiIndex from a DataFrame directly, using the method MultiIndex.
from_frame(). This is a complementary method to MultiIndex.to_frame().
New in version 0.24.0.
In [10]: df = pd.DataFrame([['bar', 'one'], ['bar', 'two'],
....: ['foo', 'one'], ['foo', 'two']],
....: columns=['first', 'second'])
....:

In [11]: pd.MultiIndex.from_frame(df)
Out[11]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('foo', 'one'),
('foo', 'two')],
names=['first', 'second'])

As a convenience, you can pass a list of arrays directly into Series or DataFrame to construct a MultiIndex
automatically:
In [12]: arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
....: np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
....:

In [13]: s = pd.Series(np.random.randn(8), index=arrays)

In [14]: s
Out[14]:
bar one -0.861849
two -2.104569
baz one -0.494929
two 1.071804
foo one 0.721555
two -0.706771
qux one -1.039575
two 0.271860
dtype: float64

In [15]: df = pd.DataFrame(np.random.randn(8, 4), index=arrays)

In [16]: df
Out[16]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores string names for the levels themselves.
If no names are provided, None will be assigned:

In [17]: df.index.names
Out[17]: FrozenList([None, None])

This index can back any axis of a pandas object, and the number of levels of the index is up to you:

In [18]: df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)

In [19]: df
Out[19]:
first bar baz foo qux
second one two one two one two one two
A 0.895717 0.805244 -1.206412 2.565646 1.431256 1.340309 -1.170299 -0.226169
B 0.410835 0.813850 0.132003 -0.827317 -0.076467 -1.187678 1.130127 -1.436737
C -1.413681 1.607920 1.024180 0.569605 0.875906 -2.211372 0.974466 -2.006747

In [20]: pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])


Out[20]:
first bar baz foo
second one two one two one two
first second
bar one -0.410001 -0.078638 0.545952 -1.219217 -1.226825 0.769804
two -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734
baz one 0.959726 -1.110336 -0.619976 0.149748 -0.732339 0.687738
two 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
foo one -0.954208 1.462696 -1.743161 -0.826591 -0.345352 1.314232
two 0.690579 0.995761 2.396780 0.014871 3.357427 -0.317441

We've "sparsified" the higher levels of the indexes to make the console output a bit easier on the eyes. Note that how
the index is displayed can be controlled using the multi_sparse option in pandas.set_option():

In [21]: with pd.option_context('display.multi_sparse', False):


....: df
....:

It’s worth keeping in mind that there’s nothing preventing you from using tuples as atomic labels on an axis:

In [22]: pd.Series(np.random.randn(8), index=tuples)


Out[22]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64

The reason that the MultiIndex matters is that it can allow you to do grouping, selection, and reshaping operations
as we will describe below and in subsequent areas of the documentation. As you will see in later sections, you can find
yourself working with hierarchically-indexed data without creating a MultiIndex explicitly yourself. However,
when loading data from a file, you may wish to generate your own MultiIndex when preparing the data set.

Reconstructing the level labels

The method get_level_values() will return a vector of the labels for each location at a particular level:

In [23]: index.get_level_values(0)
Out[23]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')

In [24]: index.get_level_values('second')
Out[24]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')

Basic indexing on axis with MultiIndex

One of the important features of hierarchical indexing is that you can select data by a “partial” label identifying a
subgroup in the data. Partial selection “drops” levels of the hierarchical index in the result in a completely analogous
way to selecting a column in a regular DataFrame:

In [25]: df['bar']
Out[25]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920

In [26]: df['bar', 'one']


Out[26]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64

In [27]: df['bar']['one']
Out[27]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64

In [28]: s['qux']
Out[28]:
one -1.039575
two 0.271860
dtype: float64

See Cross-section with hierarchical index for how to select on a deeper level.

Defined levels

The MultiIndex keeps all the defined levels of an index, even if they are not actually used. When slicing an index,
you may notice this. For example:
In [29]: df.columns.levels # original MultiIndex
Out[29]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])

In [30]: df[['foo','qux']].columns.levels # sliced


Out[30]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])

This is done to avoid a recomputation of the levels in order to make slicing highly performant. If you want to see only
the used levels, you can use the get_level_values() method.

In [31]: df[['foo', 'qux']].columns.to_numpy()


Out[31]:
array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')],
dtype=object)

# for a specific level


In [32]: df[['foo', 'qux']].columns.get_level_values(0)
Out[32]: Index(['foo', 'foo', 'qux', 'qux'], dtype='object', name='first')

To reconstruct the MultiIndex with only the used levels, the remove_unused_levels() method may be used.

In [33]: new_mi = df[['foo', 'qux']].columns.remove_unused_levels()

In [34]: new_mi.levels
Out[34]: FrozenList([['foo', 'qux'], ['one', 'two']])


Data alignment and using reindex

Operations between differently-indexed objects having MultiIndex on the axes will work as you expect; data
alignment will work the same as an Index of tuples:

In [35]: s + s[:-2]
Out[35]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64

In [36]: s + s[::2]
Out[36]:
bar one -1.723698
two NaN
baz one -0.989859
two NaN
foo one 1.443110
two NaN
qux one -2.079150
two NaN
dtype: float64
[email protected]
The reindex() method of Series/DataFrames can be called with another MultiIndex, or even a list or array
of tuples:

In [37]: s.reindex(index[:3])
Out[37]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64

In [38]: s.reindex([('foo', 'two'), ('bar', 'one'), ('qux', 'one'), ('baz', 'one')])


Out[38]:
foo two -0.706771
bar one -0.861849
qux one -1.039575
baz one -0.494929
dtype: float64


3.3.2 Advanced indexing with hierarchical index

Syntactically integrating MultiIndex in advanced indexing with .loc is a bit challenging, but we’ve made every
effort to do so. In general, MultiIndex keys take the form of tuples. For example, the following works as you would
expect:
In [39]: df = df.T

In [40]: df
Out[40]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747

In [41]: df.loc[('bar', 'two')]


Out[41]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64

Note that df.loc['bar', 'two'] would also work in this example, but this shorthand notation can lead to
[email protected]
ambiguity in general.
If you also want to index a specific column with .loc, you must use a tuple like this:
In [42]: df.loc[('bar', 'two'), 'A']
Out[42]: 0.8052440253863785

You don’t have to specify all levels of the MultiIndex by passing only the first elements of the tuple. For example,
you can use “partial” indexing to get all elements with bar in the first level as follows:
df.loc['bar']
This is a shortcut for the slightly more verbose notation df.loc[('bar',),] (equivalent to df.loc['bar',]
in this example).
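
A minimal sketch with the df defined above (the values are the ones shown in the frame earlier); note that the first level is dropped from the result:

df.loc['bar']
#                 A         B         C
# second
# one      0.895717  0.410835 -1.413681
# two      0.805244  0.813850  1.607920
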
“Partial” slicing also works quite nicely.
In [43]: df.loc['baz':'foo']
Out[43]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372

You can slice with a ‘range’ of values, by providing a slice of tuples.


In [44]: df.loc[('baz', 'two'):('qux', 'one')]
Out[44]:


A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466

In [45]: df.loc[('baz', 'two'):'foo']


Out[45]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372

Passing a list of labels or tuples works similar to reindexing:

In [46]: df.loc[[('bar', 'two'), ('qux', 'one')]]


Out[46]:
A B C
first second
bar two 0.805244 0.813850 1.607920
qux one -1.170299 1.130127 0.974466

Note: It is important to note that tuples and lists are not treated identically in pandas when it comes to indexing.
Whereas a tuple is interpreted as one multi-level key, a list is used to specify several keys. Or in other words, tuples
[email protected]
go horizontally (traversing levels), lists go vertically (scanning levels).

Importantly, a list of tuples indexes several complete MultiIndex keys, whereas a tuple of lists refers to several
values within a level:

In [47]: s = pd.Series([1, 2, 3, 4, 5, 6],


....: index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e
˓→"]]))

....:

In [48]: s.loc[[("A", "c"), ("B", "d")]] # list of tuples


Out[48]:
A c 1
B d 5
dtype: int64

In [49]: s.loc[(["A", "B"], ["c", "d"])] # tuple of lists


Out[49]:
A c 1
d 2
B c 4
d 5
dtype: int64


Using slicers

You can slice a MultiIndex by providing multiple indexers.


You can provide any of the selectors as if you are indexing by label, see Selection by Label, including slices, lists of
labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the deeper levels,
they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.

Warning: You should specify all axes in the .loc specifier, meaning the indexer for the index and for the
columns. There are some ambiguous cases where the passed indexer could be mis-interpreted as indexing both
axes, rather than into say the MultiIndex for the rows.
You should do this:
df.loc[(slice('A1', 'A3'), ...), :] # noqa: E999

You should not do this:


df.loc[(slice('A1', 'A3'), ...)] # noqa: E999

In [50]: def mklbl(prefix, n):


....: return ["%s%s" % (prefix, i) for i in range(n)]
....:

In [51]: miindex = pd.MultiIndex.from_product([mklbl('A', 4),


[email protected]
....: mklbl('B', 2),
....: mklbl('C', 4),
....: mklbl('D', 2)])
....:

In [52]: micolumns = pd.MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'),


....: ('b', 'foo'), ('b', 'bah')],
....: names=['lvl0', 'lvl1'])
....:

In [53]: dfmi = pd.DataFrame(np.arange(len(miindex) * len(micolumns))


....: .reshape((len(miindex), len(micolumns))),
....: index=miindex,
....: columns=micolumns).sort_index().sort_index(axis=1)
....:

In [54]: dfmi
Out[54]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246


C3 D0 249 248 251 250
D1 253 252 255 254

[64 rows x 4 columns]

Basic MultiIndex slicing using slices, lists, and labels.


In [55]: dfmi.loc[(slice('A1', 'A3'), slice(None), ['C1', 'C3']), :]
Out[55]:
lvl0 a b
lvl1 bar foo bah foo
A1 B0 C1 D0 73 72 75 74
D1 77 76 79 78
C3 D0 89 88 91 90
D1 93 92 95 94
B1 C1 D0 105 104 107 106
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254

[24 rows x 4 columns]

You can use pandas.IndexSlice to facilitate a more natural syntax using :, rather than using slice(None).
In [56]: idx = pd.IndexSlice
[email protected]
T56GZSRVAH
In [57]: dfmi.loc[idx[:, :, ['C1', 'C3']], idx[:, 'foo']]
Out[57]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254

[32 rows x 2 columns]

It is possible to perform quite complicated selections using this method on multiple axes at the same time.
In [58]: dfmi.loc['A1', (slice(None), 'foo')]
Out[58]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78


C2 D0 80 82
... ... ...
B1 C1 D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126

[16 rows x 2 columns]

In [59]: dfmi.loc[idx[:, :, ['C1', 'C3']], idx[:, 'foo']]


Out[59]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254

[32 rows x 2 columns]


[email protected]
Using a boolean indexer you can provide selection related to the values.
In [60]: mask = dfmi[('a', 'foo')] > 200

In [61]: dfmi.loc[idx[mask, :, ['C1', 'C3']], idx[:, 'foo']]


Out[61]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254

You can also specify the axis argument to .loc to interpret the passed slicers on a single axis.
In [62]: dfmi.loc(axis=0)[:, :, ['C1', 'C3']]
Out[62]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222


B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254

[32 rows x 4 columns]

Furthermore, you can set the values using the following methods.

In [63]: df2 = dfmi.copy()

In [64]: df2.loc(axis=0)[:, :, ['C1', 'C3']] = -10

In [65]: df2
Out[65]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
[64 rows x 4 columns]

You can use a right-hand-side of an alignable object as well.

In [66]: df2 = dfmi.copy()

In [67]: df2.loc[idx[:, :, ['C1', 'C3']], :] = df2 * 1000

In [68]: df2
Out[68]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
D1 253000 252000 255000 254000

[64 rows x 4 columns]


Cross-section

The xs() method of DataFrame additionally takes a level argument to make selecting data at a particular level of a
MultiIndex easier.
In [69]: df
Out[69]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747

In [70]: df.xs('one', level='second')


Out[70]:
A B C
first
bar 0.895717 0.410835 -1.413681
baz -1.206412 0.132003 1.024180
foo 1.431256 -0.076467 0.875906
qux -1.170299 1.130127 0.974466

# using the slicers


[email protected]
In [71]: df.loc[(slice(None), 'one'), :]
Out[71]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
baz one -1.206412 0.132003 1.024180
foo one 1.431256 -0.076467 0.875906
qux one -1.170299 1.130127 0.974466

You can also select on the columns with xs, by providing the axis argument.
In [72]: df = df.T

In [73]: df.xs('one', level='second', axis=1)


Out[73]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466

# using the slicers


In [74]: df.loc[:, (slice(None), 'one')]
Out[74]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466

xs also allows selection with multiple keys.


In [75]: df.xs(('one', 'bar'), level=('second', 'first'), axis=1)


Out[75]:
first bar
second one
A 0.895717
B 0.410835
C -1.413681

# using the slicers


In [76]: df.loc[:, ('bar', 'one')]
Out[76]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64

You can pass drop_level=False to xs to retain the level that was selected.

In [77]: df.xs('one', level='second', axis=1, drop_level=False)


Out[77]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466

Compare the above with the result using drop_level=True (the default value).
[email protected]
In [78]: df.xs('one', level='second', axis=1, drop_level=True)
Out[78]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466

Advanced reindexing and alignment

Using the parameter level in the reindex() and align() methods of pandas objects is useful to broadcast
values across a level. For instance:

In [79]: midx = pd.MultiIndex(levels=[['zero', 'one'], ['x', 'y']],


....: codes=[[1, 1, 0, 0], [1, 0, 1, 0]])
....:

In [80]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)

In [81]: df
Out[81]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520

In [82]: df2 = df.mean(level=0)



In [83]: df2
Out[83]:
0 1
one 1.060074 -0.109716
zero 1.271532 0.713416

In [84]: df2.reindex(df.index, level=0)


Out[84]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416

# aligning
In [85]: df_aligned, df2_aligned = df.align(df2, level=0)

In [86]: df_aligned
Out[86]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520

In [87]: df2_aligned
Out[87]:
[email protected]
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416

Swapping levels with swaplevel

The swaplevel() method can switch the order of two levels:

In [88]: df[:5]
Out[88]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520

In [89]: df[:5].swaplevel(0, 1, axis=0)


Out[89]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520


Reordering levels with reorder_levels

The reorder_levels() method generalizes the swaplevel method, allowing you to permute the hierarchical
index levels in one step:

In [90]: df[:5].reorder_levels([1, 0], axis=0)


Out[90]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520

Renaming names of an Index or MultiIndex

The rename() method is used to rename the labels of a MultiIndex, and is typically used to rename the columns
of a DataFrame. The columns argument of rename allows a dictionary to be specified that includes only the
columns you wish to rename.

In [91]: df.rename(columns={0: "col0", 1: "col1"})


Out[91]:
col0 col1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520

[email protected]
This method can also be used to rename specific labels of the main index of the DataFrame.
In [92]: df.rename(index={"one": "two", "y": "z"})
Out[92]:
0 1
two z 1.519970 -0.493662
x 0.600178 0.274230
zero z 0.132885 -0.023688
x 2.410179 1.450520

The rename_axis() method is used to rename the name of a Index or MultiIndex. In particular, the names of
the levels of a MultiIndex can be specified, which is useful if reset_index() is later used to move the values
from the MultiIndex to a column.

In [93]: df.rename_axis(index=['abc', 'def'])


Out[93]:
0 1
abc def
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
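
A minimal sketch of the reset_index() connection mentioned above: once the levels are named, reset_index() moves them into ordinary columns with those names.

df.rename_axis(index=['abc', 'def']).reset_index()
#    abc def         0         1
# 0  one   y  1.519970 -0.493662
# ...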

Note that the columns of a DataFrame are an index, so that using rename_axis with the columns argument
will change the name of that index.

In [94]: df.rename_axis(columns="Cols").columns
Out[94]: RangeIndex(start=0, stop=2, step=1, name='Cols')


Both rename and rename_axis support specifying a dictionary, Series or a mapping function to map la-
bels/names to new values.
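
For example, a minimal sketch of passing a mapping function rather than a dictionary (using the df defined above):

df.rename(index=str.upper)            # apply the function to the labels of every level
df.rename(index=str.upper, level=0)   # restrict the mapping to a single level
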
When working with an Index object directly, rather than via a DataFrame, Index.set_names() can be used
to change the names.
In [95]: mi = pd.MultiIndex.from_product([[1, 2], ['a', 'b']], names=['x', 'y'])

In [96]: mi.names
Out[96]: FrozenList(['x', 'y'])

In [97]: mi2 = mi.rename("new name", level=0)

In [98]: mi2
Out[98]:
MultiIndex([(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['new name', 'y'])

You cannot set the names of the MultiIndex via a level.


In [99]: mi.levels[0].name = "name via level"
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-99-35d32a9a5218> in <module>
----> 1 mi.levels[0].name = "name via level"

/pandas/pandas/core/indexes/base.py in name(self, value)
   1189             # Used in MultiIndex.levels to avoid silently ignoring name updates.
   1190             raise RuntimeError(
-> 1191                 "Cannot set name on a level of a MultiIndex. Use "
   1192                 "'MultiIndex.set_names' instead."
   1193             )

RuntimeError: Cannot set name on a level of a MultiIndex. Use 'MultiIndex.set_names' instead.

Use Index.set_names() instead.
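
For example, a minimal sketch using the mi defined above:

mi.set_names("name via level", level=0)   # returns a new MultiIndex with the first level renamed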

3.3.3 Sorting a MultiIndex

For MultiIndex-ed objects to be indexed and sliced effectively, they need to be sorted. As with any index, you can
use sort_index().
In [100]: import random

In [101]: random.shuffle(tuples)

In [102]: s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))

In [103]: s
Out[103]:
foo one 0.206053
two -0.251905


baz one -2.213588
bar one 1.063327
baz two 1.266143
bar two 0.299368
qux one -0.863838
two 0.408204
dtype: float64

In [104]: s.sort_index()
Out[104]:
bar one 1.063327
two 0.299368
baz one -2.213588
two 1.266143
foo one 0.206053
two -0.251905
qux one -0.863838
two 0.408204
dtype: float64

In [105]: s.sort_index(level=0)
Out[105]:
bar one 1.063327
two 0.299368
baz one -2.213588
two 1.266143
foo one 0.206053
two -0.251905
[email protected]
qux one -0.863838
two 0.408204
dtype: float64

In [106]: s.sort_index(level=1)
Out[106]:
bar one 1.063327
baz one -2.213588
foo one 0.206053
qux one -0.863838
bar two 0.299368
baz two 1.266143
foo two -0.251905
qux two 0.408204
dtype: float64

You may also pass a level name to sort_index if the MultiIndex levels are named.
In [107]: s.index.set_names(['L1', 'L2'], inplace=True)

In [108]: s.sort_index(level='L1')
Out[108]:
L1 L2
bar one 1.063327
two 0.299368
baz one -2.213588
two 1.266143
foo one 0.206053
two -0.251905


qux one -0.863838
two 0.408204
dtype: float64

In [109]: s.sort_index(level='L2')
Out[109]:
L1 L2
bar one 1.063327
baz one -2.213588
foo one 0.206053
qux one -0.863838
bar two 0.299368
baz two 1.266143
foo two -0.251905
qux two 0.408204
dtype: float64

On higher dimensional objects, you can sort any of the other axes by level if they have a MultiIndex:

In [110]: df.T.sort_index(level=1, axis=1)


Out[110]:
one zero one zero
x x y y
0 0.600178 2.410179 1.519970 0.132885
1 0.274230 1.450520 -0.493662 -0.023688

Indexing will work even if the data are not sorted, but will be rather inefficient (and show a PerformanceWarning).
It will also return a copy of the data rather than a view:
[email protected]
T56GZSRVAH
In [111]: dfm = pd.DataFrame({'jim': [0, 0, 1, 1],
.....: 'joe': ['x', 'x', 'z', 'y'],
.....: 'jolie': np.random.rand(4)})
.....:

In [112]: dfm = dfm.set_index(['jim', 'joe'])

In [113]: dfm
Out[113]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 z 0.537020
y 0.110968

In [4]: dfm.loc[(1, 'z')]


PerformanceWarning: indexing past lexsort depth may impact performance.

Out[4]:
jolie
jim joe
1 z 0.64094

Furthermore, if you try to index something that is not fully lexsorted, this can raise:


In [5]: dfm.loc[(0, 'y'):(1, 'z')]


UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'

The is_lexsorted() method on a MultiIndex shows if the index is sorted, and the lexsort_depth prop-
erty returns the sort depth:
In [114]: dfm.index.is_lexsorted()
Out[114]: False

In [115]: dfm.index.lexsort_depth
Out[115]: 1

In [116]: dfm = dfm.sort_index()

In [117]: dfm
Out[117]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020

In [118]: dfm.index.is_lexsorted()
Out[118]: True

In [119]: dfm.index.lexsort_depth
Out[119]: 2
[email protected]
T56GZSRVAH
And now selection works as expected.
In [120]: dfm.loc[(0, 'y'):(1, 'z')]
Out[120]:
jolie
jim joe
1 y 0.110968
z 0.537020

3.3.4 Take methods

Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provide the take() method that
retrieves elements along a given axis at the given indices. The given indices must be either a list or an ndarray of
integer index positions. take will also accept negative integers as relative positions to the end of the object.
In [121]: index = pd.Index(np.random.randint(0, 1000, 10))

In [122]: index
Out[122]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64
˓→')

In [123]: positions = [0, 9, 3]

In [124]: index[positions]
Out[124]: Int64Index([214, 329, 567], dtype='int64')



In [125]: index.take(positions)
Out[125]: Int64Index([214, 329, 567], dtype='int64')

In [126]: ser = pd.Series(np.random.randn(10))

In [127]: ser.iloc[positions]
Out[127]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64

In [128]: ser.take(positions)
Out[128]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64

For DataFrames, the given indices should be a 1d list or ndarray that specifies row or column positions.

In [129]: frm = pd.DataFrame(np.random.randn(5, 3))

In [130]: frm.take([1, 4, 3])


Out[130]:
0 1 2
1 -1.237881 0.106854 -1.276829
4 0.629675 -1.425966 1.857704
[email protected]
3 0.979542 -1.633678 0.615855
In [131]: frm.take([0, 2], axis=1)
Out[131]:
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704

It is important to note that the take method on pandas objects is not intended to work on boolean indices and may
return unexpected results.

In [132]: arr = np.random.randn(10)

In [133]: arr.take([False, False, True, True])


Out[133]: array([-1.1935, -1.1935, 0.6775, 0.6775])

In [134]: arr[[0, 1]]


Out[134]: array([-1.1935, 0.6775])

In [135]: ser = pd.Series(np.random.randn(10))

In [136]: ser.take([False, False, True, True])


Out[136]:
0 0.233141
0 0.233141


1 -0.223540
1 -0.223540
dtype: float64

In [137]: ser.iloc[[0, 1]]


Out[137]:
0 0.233141
1 -0.223540
dtype: float64

Finally, as a small note on performance, because the take method handles a narrower range of inputs, it can offer
performance that is a good deal faster than fancy indexing.

In [138]: arr = np.random.randn(10000, 5)

In [139]: indexer = np.arange(10000)

In [140]: random.shuffle(indexer)

In [141]: %timeit arr[indexer]


.....: %timeit arr.take(indexer, axis=0)
.....:
149 us +- 1.22 us per loop (mean +- std. dev. of 7 runs, 10000 loops each)
50.1 us +- 133 ns per loop (mean +- std. dev. of 7 runs, 10000 loops each)

In [142]: ser = pd.Series(arr[:, 0])

[email protected]
In [143]: %timeit ser.iloc[indexer]
.....: %timeit ser.take(indexer)
.....:
142 us +- 4.95 us per loop (mean +- std. dev. of 7 runs, 10000 loops each)
128 us +- 1.48 us per loop (mean +- std. dev. of 7 runs, 10000 loops each)

3.3.5 Index types

We have discussed MultiIndex in the previous sections pretty extensively. Documentation about
DatetimeIndex and PeriodIndex are shown here, and documentation about TimedeltaIndex is found
here.
In the following sub-sections we will highlight some other index types.

CategoricalIndex

CategoricalIndex is a type of index that is useful for supporting indexing with duplicates. This is a container
around a Categorical and allows efficient indexing and storage of an index with a large number of duplicated
elements.

In [144]: from pandas.api.types import CategoricalDtype

In [145]: df = pd.DataFrame({'A': np.arange(6),


.....: 'B': list('aabbca')})
.....:



In [146]: df['B'] = df['B'].astype(CategoricalDtype(list('cab')))

In [147]: df
Out[147]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a

In [148]: df.dtypes
Out[148]:
A int64
B category
dtype: object

In [149]: df['B'].cat.categories
Out[149]: Index(['c', 'a', 'b'], dtype='object')

Setting the index will create a CategoricalIndex.

In [150]: df2 = df.set_index('B')

In [151]: df2.index
Out[151]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'],
˓→ ordered=False, name='B', dtype='category')
[email protected]
T56GZSRVAH
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates. The indexers must be
in the category or the operation will raise a KeyError.

In [152]: df2.loc['a']
Out[152]:
A
B
a 0
a 1
a 5
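
As noted above, a label that is not among the categories raises a KeyError (a minimal sketch, not one of the numbered examples):

df2.loc['e']   # raises KeyError, because 'e' is not in the categories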

The CategoricalIndex is preserved after indexing:

In [153]: df2.loc['a'].index
Out[153]: CategoricalIndex(['a', 'a', 'a'], categories=['c', 'a', 'b'], ordered=False,
˓→ name='B', dtype='category')

Sorting the index will sort by the order of the categories (recall that we created the index with
CategoricalDtype(list('cab')), so the sorted order is cab).

In [154]: df2.sort_index()
Out[154]:
A
B
c 4
a 0
a 1


a 5
b 2
b 3

Groupby operations on the index will preserve the index nature as well.
In [155]: df2.groupby(level=0).sum()
Out[155]:
A
B
c 4
a 6
b 5

In [156]: df2.groupby(level=0).sum().index
Out[156]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False,
˓→ name='B', dtype='category')

Reindexing operations will return a resulting index based on the type of the passed indexer. Passing a list will return
a plain-old Index; indexing with a Categorical will return a CategoricalIndex, indexed according to the
categories of the passed Categorical dtype. This allows one to arbitrarily index these even with values not in the
categories, similarly to how you can reindex any pandas index.
In [157]: df3 = pd.DataFrame({'A': np.arange(3),
.....: 'B': pd.Series(list('abc')).astype('category')})
.....:

In [158]: df3 = df3.set_index('B')


[email protected]
T56GZSRVAH
In [159]: df3
Out[159]:
A
B
a 0
b 1
c 2

In [160]: df3.reindex(['a', 'e'])


Out[160]:
A
B
a 0.0
e NaN

In [161]: df3.reindex(['a', 'e']).index


Out[161]: Index(['a', 'e'], dtype='object', name='B')

In [162]: df3.reindex(pd.Categorical(['a', 'e'], categories=list('abe')))


Out[162]:
A
B
a 0.0
e NaN

In [163]: df3.reindex(pd.Categorical(['a', 'e'], categories=list('abe'))).index


Out[163]: CategoricalIndex(['a', 'e'], categories=['a', 'b', 'e'], ordered=False, name='B', dtype='category')

Warning: Reshaping and Comparison operations on a CategoricalIndex must have the same categories or
a TypeError will be raised.
In [164]: df4 = pd.DataFrame({'A': np.arange(2),
.....: 'B': list('ba')})
.....:

In [165]: df4['B'] = df4['B'].astype(CategoricalDtype(list('ab')))

In [166]: df4 = df4.set_index('B')

In [167]: df4.index
Out[167]: CategoricalIndex(['b', 'a'], categories=['a', 'b'], ordered=False, name='B
˓→', dtype='category')

In [168]: df5 = pd.DataFrame({'A': np.arange(2),


.....: 'B': list('bc')})
.....:

In [169]: df5['B'] = df5['B'].astype(CategoricalDtype(list('bc')))

In [170]: df5 = df5.set_index('B')

In [171]: df5.index
Out[171]: CategoricalIndex(['b', 'c'], categories=['b', 'c'], ordered=False, name='B', dtype='category')
In [1]: pd.concat([df4, df5])
TypeError: categories must match existing categories when appending

Int64Index and RangeIndex

Int64Index is a fundamental basic index in pandas. This is an immutable array implementing an ordered, sliceable
set.
RangeIndex is a sub-class of Int64Index that provides the default index for all NDFrame objects.
RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered set. These are
analogous to Python range types.
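
A minimal sketch of the two types (the default index of a new object is a RangeIndex):

pd.Series([10, 20, 30]).index   # RangeIndex(start=0, stop=3, step=1)
pd.Index([10, 20, 30])          # Int64Index([10, 20, 30], dtype='int64')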

Float64Index

By default a Float64Index will be automatically created when passing floating, or mixed-integer-floating values
in index creation. This enables a pure label-based slicing paradigm that makes [],ix,loc for scalar indexing and
slicing work exactly the same.
In [172]: indexf = pd.Index([1.5, 2, 3, 4.5, 5])

In [173]: indexf
Out[173]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')

In [174]: sf = pd.Series(range(5), index=indexf)



In [175]: sf
Out[175]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64

Scalar selection for [],.loc will always be label based. An integer will match an equal float index (e.g. 3 is
equivalent to 3.0).

In [176]: sf[3]
Out[176]: 2

In [177]: sf[3.0]
Out[177]: 2

In [178]: sf.loc[3]
Out[178]: 2

In [179]: sf.loc[3.0]
Out[179]: 2

The only positional indexing is via iloc.

In [180]: sf.iloc[3]
[email protected]
Out[180]: 3

A scalar index that is not found will raise a KeyError. Slicing is primarily on the values of the index when using
[],ix,loc, and always positional when using iloc. The exception is when the slice is boolean, in which case it
will always be positional.

In [181]: sf[2:4]
Out[181]:
2.0 1
3.0 2
dtype: int64

In [182]: sf.loc[2:4]
Out[182]:
2.0 1
3.0 2
dtype: int64

In [183]: sf.iloc[2:4]
Out[183]:
3.0 2
4.5 3
dtype: int64

In float indexes, slicing using floats is allowed.

In [184]: sf[2.1:4.6]
Out[184]:


3.0 2
4.5 3
dtype: int64

In [185]: sf.loc[2.1:4.6]
Out[185]:
3.0 2
4.5 3
dtype: int64

In non-float indexes, slicing using floats will raise a TypeError.

In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)

In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type
˓→(Int64Index)

Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat irregular timedelta-like
indexing scheme, but the data is recorded as floats. This could, for example, be millisecond offsets.

In [186]: dfir = pd.concat([pd.DataFrame(np.random.randn(5, 2),


.....: index=np.arange(5) * 250.0,
.....: columns=list('AB')),
.....: pd.DataFrame(np.random.randn(6, 2),
.....: index=np.arange(4, 10) * 250.1,
.....: columns=list('AB'))])
.....:

In [187]: dfir
Out[187]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
1250.5 -0.212673 0.909872
1500.6 -0.733333 -0.349893
1750.7 0.456434 -0.306735
2000.8 0.553396 0.166221
2250.9 -0.101684 -0.734907

Selection operations then will always work on a value basis, for all selection operators.

In [188]: dfir[0:1000.4]
Out[188]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962

In [189]: dfir.loc[0:1001, 'A']


Out[189]:
0.0 -0.435772
250.0 -0.808286
500.0 -1.815703
750.0 -0.243487
1000.0 1.162969
1000.4 -0.179734
Name: A, dtype: float64

In [190]: dfir.loc[1000.4]
Out[190]:
A -0.179734
B 0.993962
Name: 1000.4, dtype: float64

You could retrieve the first 1 second (1000 ms) of data as such:

In [191]: dfir[0:1000]
Out[191]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
[email protected]
If you need integer based selection, you should use iloc:

In [192]: dfir.iloc[0:5]
Out[192]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725

IntervalIndex

IntervalIndex together with its own dtype, IntervalDtype as well as the Interval scalar type, allow
first-class support in pandas for interval notation.
The IntervalIndex allows some unique indexing and is also used as a return type for the categories in cut()
and qcut().


Indexing with an IntervalIndex

An IntervalIndex can be used in Series and in DataFrame as the index.

In [193]: df = pd.DataFrame({'A': [1, 2, 3, 4]},


.....: index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]))
.....:

In [194]: df
Out[194]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4

Label based indexing via .loc along the edges of an interval works as you would expect, selecting that particular
interval.

In [195]: df.loc[2]
Out[195]:
A 2
Name: (1, 2], dtype: int64

In [196]: df.loc[[2, 3]]


Out[196]:
A
(1, 2] 2
[email protected]
(2, 3] 3
If you select a label contained within an interval, this will also select the interval.

In [197]: df.loc[2.5]
Out[197]:
A 3
Name: (2, 3], dtype: int64

In [198]: df.loc[[2.5, 3.5]]


Out[198]:
A
(2, 3] 3
(3, 4] 4

Selecting using an Interval will only return exact matches (starting from pandas 0.25.0).

In [199]: df.loc[pd.Interval(1, 2)]


Out[199]:
A 2
Name: (1, 2], dtype: int64

Trying to select an Interval that is not exactly contained in the IntervalIndex will raise a KeyError.

In [7]: df.loc[pd.Interval(0.5, 2.5)]


---------------------------------------------------------------------------
KeyError: Interval(0.5, 2.5, closed='right')

Selecting all Intervals that overlap a given Interval can be performed using the overlaps() method to
create a boolean indexer.


In [200]: idxr = df.index.overlaps(pd.Interval(0.5, 2.5))

In [201]: idxr
Out[201]: array([ True, True, True, False])

In [202]: df[idxr]
Out[202]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3

Binning data with cut and qcut

cut() and qcut() both return a Categorical object, and the bins they create are stored as an IntervalIndex
in its .categories attribute.

In [203]: c = pd.cut(range(4), bins=2)

In [204]: c
Out[204]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]

In [205]: c.categories
Out[205]:
IntervalIndex([(-0.003, 1.5], (1.5, 3.0]],
[email protected]
closed='right',
dtype='interval[float64]')

cut() also accepts an IntervalIndex for its bins argument, which enables a useful pandas idiom. First, we
call cut() with some data and bins set to a fixed number, to generate the bins. Then, we pass the values of .
categories as the bins argument in subsequent calls to cut(), supplying new data which will be binned into
the same bins.

In [206]: pd.cut([0, 3, 5, 1], bins=c.categories)


Out[206]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64]): [(-0.003, 1.5] < (1.5, 3.0]]

Any value which falls outside all bins will be assigned a NaN value.

Generating ranges of intervals

If we need intervals on a regular frequency, we can use the interval_range() function to create an
IntervalIndex using various combinations of start, end, and periods. The default frequency for
interval_range is a 1 for numeric intervals, and calendar day for datetime-like intervals:

In [207]: pd.interval_range(start=0, end=5)


Out[207]:
IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
closed='right',
dtype='interval[int64]')

In [208]: pd.interval_range(start=pd.Timestamp('2017-01-01'), periods=4)


Out[208]:
IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03], (2017-01-03, 2017-
˓→01-04], (2017-01-04, 2017-01-05]],

closed='right',
dtype='interval[datetime64[ns]]')

In [209]: pd.interval_range(end=pd.Timedelta('3 days'), periods=3)


Out[209]:
IntervalIndex([(0 days 00:00:00, 1 days 00:00:00], (1 days 00:00:00, 2 days 00:00:00],
˓→ (2 days 00:00:00, 3 days 00:00:00]],

closed='right',
dtype='interval[timedelta64[ns]]')

The freq parameter can be used to specify non-default frequencies, and can utilize a variety of frequency aliases with
datetime-like intervals:

In [210]: pd.interval_range(start=0, periods=5, freq=1.5)


Out[210]:
IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0], (6.0, 7.5]],
closed='right',
dtype='interval[float64]')

In [211]: pd.interval_range(start=pd.Timestamp('2017-01-01'), periods=4, freq='W')


Out[211]:
IntervalIndex([(2017-01-01, 2017-01-08], (2017-01-08, 2017-01-15], (2017-01-15, 2017-
˓→01-22], (2017-01-22, 2017-01-29]],
[email protected]
closed='right',
dtype='interval[datetime64[ns]]')

In [212]: pd.interval_range(start=pd.Timedelta('0 days'), periods=3, freq='9H')


Out[212]:
IntervalIndex([(0 days 00:00:00, 0 days 09:00:00], (0 days 09:00:00, 0 days 18:00:00],
˓→ (0 days 18:00:00, 1 days 03:00:00]],

closed='right',
dtype='interval[timedelta64[ns]]')

Additionally, the closed parameter can be used to specify which side(s) the intervals are closed on. Intervals are
closed on the right side by default.

In [213]: pd.interval_range(start=0, end=4, closed='both')


Out[213]:
IntervalIndex([[0, 1], [1, 2], [2, 3], [3, 4]],
closed='both',
dtype='interval[int64]')

In [214]: pd.interval_range(start=0, end=4, closed='neither')


Out[214]:
IntervalIndex([(0, 1), (1, 2), (2, 3), (3, 4)],
closed='neither',
dtype='interval[int64]')

New in version 0.23.0.


Specifying start, end, and periods will generate a range of evenly spaced intervals from start to end inclu-
sively, with periods number of elements in the resulting IntervalIndex:


In [215]: pd.interval_range(start=0, end=6, periods=4)


Out[215]:
IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]],
closed='right',
dtype='interval[float64]')

In [216]: pd.interval_range(pd.Timestamp('2018-01-01'),
.....: pd.Timestamp('2018-02-28'), periods=3)
.....:
Out[216]:
IntervalIndex([(2018-01-01, 2018-01-20 08:00:00], (2018-01-20 08:00:00, 2018-02-08 16:
˓→00:00], (2018-02-08 16:00:00, 2018-02-28]],

closed='right',
dtype='interval[datetime64[ns]]')

3.3.6 Miscellaneous indexing FAQ

Integer indexing

Label-based indexing with integer axis labels is a thorny topic. It has been discussed heavily on mailing lists and
among various members of the scientific Python community. In pandas, our general viewpoint is that labels matter
more than integer locations. Therefore, with an integer axis index only label-based indexing is possible with the
standard tools like .loc. The following code will generate exceptions:

In [217]: s = pd.Series(range(5))

[email protected]
In [218]: s[-1]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-218-76c3dce40054> in <module>
----> 1 s[-1]

/pandas/pandas/core/series.py in __getitem__(self, key)


869 key = com.apply_if_callable(key, self)
870 try:
--> 871 result = self.index.get_value(self, key)
872
873 if not is_scalar(result):

/pandas/pandas/core/indexes/base.py in get_value(self, series, key)


4402 k = self._convert_scalar_indexer(k, kind="getitem")
4403 try:
-> 4404 return self._engine.get_value(s, k, tz=getattr(series.dtype, "tz",
˓→ None))

4405 except KeyError as e1:


4406 if len(self) > 0 and (self.holds_integer() or self.is_boolean()):

/pandas/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value()

/pandas/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value()

/pandas/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

/pandas/pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.
˓→Int64HashTable.get_item()


/pandas/pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.
˓→Int64HashTable.get_item()

KeyError: -1

In [219]: df = pd.DataFrame(np.random.randn(5, 4))

In [220]: df
Out[220]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312

In [221]: df.loc[-2:]
Out[221]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312

This deliberate decision was made to prevent ambiguities and subtle bugs (many users reported finding bugs when the
API change was made to stop “falling back” on position-based indexing).
[email protected]
T56GZSRVAH

Non-monotonic indexes require exact matches

If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds of a label-based
slice can be outside the range of the index, much like slice indexing a normal Python list. Monotonicity of an index
can be tested with the is_monotonic_increasing() and is_monotonic_decreasing() attributes.
In [222]: df = pd.DataFrame(index=[2, 3, 3, 4, 5], columns=['data'],
˓→data=list(range(5)))

In [223]: df.index.is_monotonic_increasing
Out[223]: True

# no rows 0 or 1, but still returns rows 2, 3 (both of them), and 4:


In [224]: df.loc[0:4, :]
Out[224]:
data
2 0
3 1
3 2
4 3

# slice bounds are outside the index, so an empty DataFrame is returned


In [225]: df.loc[13:15, :]
Out[225]:
Empty DataFrame
Columns: [data]
Index: []


On the other hand, if the index is not monotonic, then both slice bounds must be unique members of the index.
In [226]: df = pd.DataFrame(index=[2, 3, 1, 4, 3, 5],
.....: columns=['data'], data=list(range(6)))
.....:

In [227]: df.index.is_monotonic_increasing
Out[227]: False

# OK because 2 and 4 are in the index


In [228]: df.loc[2:4, :]
Out[228]:
data
2 0
3 1
1 2
4 3

# 0 is not in the index


In [9]: df.loc[0:4, :]
KeyError: 0

# 3 is not a unique label


In [11]: df.loc[2:3, :]
KeyError: 'Cannot get right slice bound for non-unique label: 3'

Index.is_monotonic_increasing and Index.is_monotonic_decreasing only check that an index
is weakly monotonic. To check for strict monotonicity, you can combine one of those with the is_unique()
attribute.
In [229]: weakly_monotonic = pd.Index(['a', 'b', 'c', 'c'])

In [230]: weakly_monotonic
Out[230]: Index(['a', 'b', 'c', 'c'], dtype='object')

In [231]: weakly_monotonic.is_monotonic_increasing
Out[231]: True

In [232]: weakly_monotonic.is_monotonic_increasing & weakly_monotonic.is_unique


Out[232]: False

Endpoints are inclusive

Compared with standard Python sequence slicing in which the slice endpoint is not inclusive, label-based slicing in
pandas is inclusive. The primary reason for this is that it is often not possible to easily determine the “successor” or
next element after a particular label in an index. For example, consider the following Series:
In [233]: s = pd.Series(np.random.randn(6), index=list('abcdef'))

In [234]: s
Out[234]:
a 0.301379
b 1.240445
c -0.846068
d -0.043312
e -1.658747


f -0.819549
dtype: float64

Suppose we wished to slice from c to e, using integers this would be accomplished as such:
In [235]: s[2:5]
Out[235]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64

However, if you only had c and e, determining the next element in the index can be somewhat complicated. For
example, the following does not work:
s.loc['c':'e' + 1]

A very common use case is to limit a time series to start and end at two specific dates. To enable this, we made the
design choice to make label-based slicing include both endpoints:
In [236]: s.loc['c':'e']
Out[236]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64

[email protected]
This is most definitely a “practicality beats purity” sort of thing, but it is something to watch out for if you expect
label-based slicing to behave exactly in the way that standard Python integer slicing works.

Indexing potentially changes underlying Series dtype

The different indexing operations can potentially change the dtype of a Series.
In [237]: series1 = pd.Series([1, 2, 3])

In [238]: series1.dtype
Out[238]: dtype('int64')

In [239]: res = series1.reindex([0, 4])

In [240]: res.dtype
Out[240]: dtype('float64')

In [241]: res
Out[241]:
0 1.0
4 NaN
dtype: float64

In [242]: series2 = pd.Series([True])

In [243]: series2.dtype
Out[243]: dtype('bool')



In [244]: res = series2.reindex_like(series1)

In [245]: res.dtype
Out[245]: dtype('O')

In [246]: res
Out[246]:
0 True
1 NaN
2 NaN
dtype: object

This is because the (re)indexing operations above silently insert NaNs and the dtype changes accordingly. This can
cause some issues when using numpy ufuncs such as numpy.logical_and.
See this old issue for a more detailed discussion.

3.4 Merge, join, and concatenate

pandas provides various facilities for easily combining together Series or DataFrame with various kinds of set logic
for the indexes and relational algebra functionality in the case of join / merge-type operations.

3.4.1 Concatenating objects


[email protected]
The concat() function (in the main pandas namespace) does all of the heavy lifting of performing concatenation
operations along an axis while performing optional set logic (union or intersection) of the indexes (if any) on the other
axes. Note that I say “if any” because there is only a single possible axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is a simple example:

In [1]: df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],


...: 'B': ['B0', 'B1', 'B2', 'B3'],
...: 'C': ['C0', 'C1', 'C2', 'C3'],
...: 'D': ['D0', 'D1', 'D2', 'D3']},
...: index=[0, 1, 2, 3])
...:

In [2]: df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],


...: 'B': ['B4', 'B5', 'B6', 'B7'],
...: 'C': ['C4', 'C5', 'C6', 'C7'],
...: 'D': ['D4', 'D5', 'D6', 'D7']},
...: index=[4, 5, 6, 7])
...:

In [3]: df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],


...: 'B': ['B8', 'B9', 'B10', 'B11'],
...: 'C': ['C8', 'C9', 'C10', 'C11'],
...: 'D': ['D8', 'D9', 'D10', 'D11']},
...: index=[8, 9, 10, 11])
...:

In [4]: frames = [df1, df2, df3]

In [5]: result = pd.concat(frames)


[email protected]
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat takes a list or dict of
homogeneously-typed objects and concatenates them with some configurable handling of “what to do with the other
axes”:

pd.concat(objs, axis=0, join='outer', ignore_index=False, keys=None,


levels=None, names=None, verify_integrity=False, copy=True)

• objs : a sequence or mapping of Series or DataFrame objects. If a dict is passed, the sorted keys will be used as
the keys argument, unless it is passed, in which case the values will be selected (see below). Any None objects
will be dropped silently unless they are all None in which case a ValueError will be raised.
• axis : {0, 1, . . . }, default 0. The axis to concatenate along.
• join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on other axis(es). Outer for union and inner
for intersection.
• ignore_index : boolean, default False. If True, do not use the index values on the concatenation axis. The
resulting axis will be labeled 0, . . . , n - 1. This is useful if you are concatenating objects where the concatenation
axis does not have meaningful indexing information. Note the index values on the other axes are still respected
in the join.
• keys : sequence, default None. Construct hierarchical index using the passed keys as the outermost level. If
multiple levels passed, should contain tuples.
• levels : list of sequences, default None. Specific levels (unique values) to use for constructing a MultiIndex.
Otherwise they will be inferred from the keys.
• names : list, default None. Names for the levels in the resulting hierarchical index.
• verify_integrity : boolean, default False. Check whether the new concatenated axis contains duplicates.
This can be very expensive relative to the actual data concatenation (see the sketch after this list).


• copy : boolean, default True. If False, do not copy data unnecessarily.
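
A minimal sketch of the verify_integrity check mentioned in the list above (reusing df1 from the earlier example, so the index labels 0-3 would be duplicated):

pd.concat([df1, df1], verify_integrity=True)
# raises ValueError: Indexes have overlapping values: ...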


Without a little bit of context many of these arguments don’t make much sense. Let’s revisit the above example.
Suppose we wanted to associate specific keys with each of the pieces of the chopped up DataFrame. We can do this
using the keys argument:

In [6]: result = pd.concat(frames, keys=['x', 'y', 'z'])

[email protected]
T56GZSRVAH

As you can see (if you’ve read the rest of the documentation), the resulting object’s index is hierarchical.
This means that we can now select out each chunk by key:

In [7]: result.loc['y']
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7

It’s not a stretch to see how this can be very useful. More detail on this functionality below.

Note: It is worth noting that concat() (and therefore append()) makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need to use the operation over several datasets,
use a list comprehension.

frames = [ process_your_file(f) for f in files ]


result = pd.concat(frames)


Set logic on the other axes

When gluing together multiple DataFrames, you have a choice of how to handle the other axes (other than the one
being concatenated). This can be done in the following two ways:
• Take the union of them all, join='outer'. This is the default option as it results in zero information loss.
• Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer' behavior:

In [8]: df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],


...: 'D': ['D2', 'D3', 'D6', 'D7'],
...: 'F': ['F2', 'F3', 'F6', 'F7']},
...: index=[2, 3, 6, 7])
...:

In [9]: result = pd.concat([df1, df4], axis=1, sort=False)

[email protected]
T56GZSRVAH

Warning: Changed in version 0.23.0.


The default behavior with join='outer' is to sort the other axis (columns in this case). In a future version of
pandas, the default will be to not sort. We specified sort=False to opt in to the new behavior now.

Here is the same thing with join='inner':

In [10]: result = pd.concat([df1, df4], axis=1, join='inner')

Lastly, suppose we just wanted to reuse the exact index from the original DataFrame:


In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)

Similarly, we could index before the concatenation:

In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)


Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3

Concatenating using append

A useful shortcut to concat() is the set of append() instance methods on Series and DataFrame. These methods
actually predated concat. They concatenate along axis=0, namely the index:
In [13]: result = df1.append(df2)

In the case of DataFrame, the indexes must be disjoint but the columns do not need to be:

In [14]: result = df1.append(df4, sort=False)


append may take multiple objects to concatenate:

In [15]: result = df1.append([df2, df3])

[email protected]
T56GZSRVAH

Note: Unlike list.append(), which appends to the original list in place and returns None, append() here does
not modify df1; it returns a copy of df1 with df2 appended.


Ignoring indexes on the concatenation axis

For DataFrame objects which don’t have a meaningful index, you may wish to append them and ignore the fact that
they may have overlapping indexes. To do this, use the ignore_index argument:

In [16]: result = pd.concat([df1, df4], ignore_index=True, sort=False)

This is also a valid argument to DataFrame.append():


[email protected]
In [17]: result = df1.append(df4, ignore_index=True, sort=False)


Concatenating with mixed ndims

You can concatenate a mix of Series and DataFrame objects. The Series will be transformed to DataFrame
with the column name as the name of the Series.

In [18]: s1 = pd.Series(['X0', 'X1', 'X2', 'X3'], name='X')

In [19]: result = pd.concat([df1, s1], axis=1)

Note: Since we’re concatenating a Series to a DataFrame, we could have achieved the same result with
DataFrame.assign(). To concatenate an arbitrary number of pandas objects (DataFrame or Series), use
concat.
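
A minimal sketch of that equivalence (assuming df1 and s1 as defined above; the variable names below are purely illustrative):

# Both calls are expected to produce df1's columns plus a new 'X' column taken from s1.
via_concat = pd.concat([df1, s1], axis=1)
via_assign = df1.assign(X=s1)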

If unnamed Series are passed they will be numbered consecutively.


[email protected]
In [20]: s2 = pd.Series(['_0', '_1', '_2', '_3'])
In [21]: result = pd.concat([df1, s2, s2, s2], axis=1)

Passing ignore_index=True will drop all name references.

In [22]: result = pd.concat([df1, s1], axis=1, ignore_index=True)


More concatenating with group keys

A fairly common use of the keys argument is to override the column names when creating a new DataFrame based
on existing Series. Notice how, by default, the resulting DataFrame inherits the parent Series’ names, where they
exist.

In [23]: s3 = pd.Series([0, 1, 2, 3], name='foo')

In [24]: s4 = pd.Series([0, 1, 2, 3])

In [25]: s5 = pd.Series([0, 1, 4, 5])

In [26]: pd.concat([s3, s4, s5], axis=1)


Out[26]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
[email protected]
3 3 3 5
Through the keys argument we can override the existing column names.

In [27]: pd.concat([s3, s4, s5], axis=1, keys=['red', 'blue', 'yellow'])


Out[27]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5

Let’s consider a variation of the very first example presented:

In [28]: result = pd.concat(frames, keys=['x', 'y', 'z'])


You can also pass a dict to concat in which case the dict keys will be used for the keys argument (unless other keys
are specified):
[email protected]
In [29]: pieces = {'x': df1, 'y': df2, 'z': df3}
In [30]: result = pd.concat(pieces)


In [31]: result = pd.concat(pieces, keys=['z', 'y'])

[email protected]
T56GZSRVAH

The MultiIndex created has levels that are constructed from the passed keys and the index of the DataFrame pieces:


In [32]: result.index.levels
Out[32]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])

If you wish to specify other levels (as will occasionally be the case), you can do so using the levels argument:

In [33]: result = pd.concat(pieces, keys=['x', 'y', 'z'],


....: levels=[['z', 'y', 'x', 'w']],
....: names=['group_key'])
....:

[email protected]
T56GZSRVAH

In [34]: result.index.levels
Out[34]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])

This is fairly esoteric, but it is actually necessary for implementing things like GroupBy where the order of a categorical
variable is meaningful.

Appending rows to a DataFrame

While not especially efficient (since a new object must be created), you can append a single row to a DataFrame by
passing a Series or dict to append, which returns a new DataFrame as above.

In [35]: s2 = pd.Series(['X0', 'X1', 'X2', 'X3'], index=['A', 'B', 'C', 'D'])

In [36]: result = df1.append(s2, ignore_index=True)


You should use ignore_index with this method to instruct DataFrame to discard its index. If you wish to preserve
the index, you should construct an appropriately-indexed DataFrame and append or concatenate those objects.
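
A minimal sketch of the index-preserving alternative (assuming df1 and s2 as defined above; the row label 'new_row' is hypothetical):

# Wrap the row in a one-row DataFrame with the desired index label, then concatenate;
# the original index of df1 is kept rather than discarded.
row = pd.DataFrame([s2], index=['new_row'])
preserved = pd.concat([df1, row])
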
You can also pass a list of dicts or Series:
In [37]: dicts = [{'A': 1, 'B': 2, 'C': 3, 'X': 4},
....: {'A': 5, 'B': 6, 'C': 7, 'Y': 8}]
....:

In [38]: result = df1.append(dicts, ignore_index=True, sort=False)

[email protected]
T56GZSRVAH

3.4.2 Database-style DataFrame or named Series joining/merging

pandas has full-featured, high performance in-memory join operations idiomatically very similar to relational
databases like SQL. These methods perform significantly better (in some cases well over an order of magnitude better)
than other open source implementations (like base::merge.data.frame in R). The reason for this is careful
algorithmic design and the internal layout of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a comparison with SQL.
pandas provides a single function, merge(), as the entry point for all standard database join operations between
DataFrame or named Series objects:


pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,


left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None)

• left: A DataFrame or named Series object.


• right: Another DataFrame or named Series object.
• on: Column or index level names to join on. Must be found in both the left and right DataFrame and/or Series
objects. If not passed and left_index and right_index are False, the intersection of the columns in
the DataFrames and/or Series will be inferred to be the join keys.
• left_on: Columns or index levels from the left DataFrame or Series to use as keys. Can either be column
names, index level names, or arrays with length equal to the length of the DataFrame or Series.
• right_on: Columns or index levels from the right DataFrame or Series to use as keys. Can either be column
names, index level names, or arrays with length equal to the length of the DataFrame or Series.
• left_index: If True, use the index (row labels) from the left DataFrame or Series as its join key(s). In the
case of a DataFrame or Series with a MultiIndex (hierarchical), the number of levels must match the number of
join keys from the right DataFrame or Series.
• right_index: Same usage as left_index for the right DataFrame or Series
• how: One of 'left', 'right', 'outer', 'inner'. Defaults to inner. See below for more detailed
description of each method.
• sort: Sort the result DataFrame by the join keys in lexicographical order. Defaults to True, setting to False
will improve performance substantially in many cases.
[email protected]
• suffixes: A tuple of string suffixes to apply to overlapping columns. Defaults to ('_x', '_y').
• copy: Always copy data (default True) from the passed DataFrame or named Series objects, even when
reindexing is not necessary. Cannot be avoided in many cases but may improve performance / memory usage.
The cases where copying can be avoided are somewhat pathological but this option is provided nonetheless.
• indicator: Add a column to the output DataFrame called _merge with information on the source of each
row. _merge is Categorical-type and takes on a value of left_only for observations whose merge key only
appears in 'left' DataFrame or Series, right_only for observations whose merge key only appears in
'right' DataFrame or Series, and both if the observation’s merge key is found in both.
• validate : string, default None. If specified, checks if merge is of specified type.
– “one_to_one” or “1:1”: checks if merge keys are unique in both left and right datasets.
– “one_to_many” or “1:m”: checks if merge keys are unique in left dataset.
– “many_to_one” or “m:1”: checks if merge keys are unique in right dataset.
– “many_to_many” or “m:m”: allowed, but does not result in checks.
New in version 0.21.0.

Note: Support for specifying index levels as the on, left_on, and right_on parameters was added in version
0.23.0. Support for merging named Series objects was added in version 0.24.0.

The return type will be the same as left. If left is a DataFrame or named Series and right is a subclass of
DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a DataFrame instance method merge(),
with the calling DataFrame being implicitly considered the left object in the join.


The related join() method uses merge internally for the index-on-index (by default) and column(s)-on-index join.
If you are joining on index only, you may wish to use DataFrame.join to save yourself some typing.
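
A rough sketch of that equivalence (hypothetical frames, not from the examples in this section; pandas is assumed to be imported as pd, as elsewhere in this guide):

# For an index-on-index join, DataFrame.join is shorthand for merge with the index flags set.
a = pd.DataFrame({'x': [1, 2]}, index=['i', 'j'])
b = pd.DataFrame({'y': [3, 4]}, index=['j', 'k'])

joined = a.join(b)
merged = pd.merge(a, b, left_index=True, right_index=True, how='left', sort=False)
# joined and merged are expected to be identical here.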

Brief primer on merge methods (relational algebra)

Experienced users of relational databases like SQL will be familiar with the terminology used to describe join oper-
ations between two SQL-table like structures (DataFrame objects). There are several cases to consider which are
very important to understand:
• one-to-one joins: for example when joining two DataFrame objects on their indexes (which must contain
unique values).
• many-to-one joins: for example when joining an index (unique) to one or more columns in a different
DataFrame.
• many-to-many joins: joining columns on columns.

Note: When joining columns on columns (potentially a many-to-many join), any indexes on the passed DataFrame
objects will be discarded.

It is worth spending some time understanding the result of the many-to-many join case. In SQL / standard relational
algebra, if a key combination appears more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique key combination:

In [39]: left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],


....: 'A': ['A0', 'A1', 'A2', 'A3'],
....: 'B': ['B0', 'B1', 'B2', 'B3']})
[email protected]
....:
In [40]: right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
....: 'C': ['C0', 'C1', 'C2', 'C3'],
....: 'D': ['D0', 'D1', 'D2', 'D3']})
....:

In [41]: result = pd.merge(left, right, on='key')

Here is a more complicated example with multiple join keys. Only the keys appearing in left and right are present
(the intersection), since how='inner' by default.

In [42]: left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],


....: 'key2': ['K0', 'K1', 'K0', 'K1'],
....: 'A': ['A0', 'A1', 'A2', 'A3'],
....: 'B': ['B0', 'B1', 'B2', 'B3']})
....:

In [43]: right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],


....: 'key2': ['K0', 'K0', 'K0', 'K0'],
....: 'C': ['C0', 'C1', 'C2', 'C3'],
....: 'D': ['D0', 'D1', 'D2', 'D3']})
....:

In [44]: result = pd.merge(left, right, on=['key1', 'key2'])

The how argument to merge specifies how to determine which keys are to be included in the resulting table. If a
key combination does not appear in either the left or right tables, the values in the joined table will be NA. Here is a
summary of the how options and their SQL equivalent names:

Merge method    SQL Join Name       Description
left            LEFT OUTER JOIN     Use keys from left frame only
right           RIGHT OUTER JOIN    Use keys from right frame only
outer           FULL OUTER JOIN     Use union of keys from both frames
inner           INNER JOIN          Use intersection of keys from both frames

In [45]: result = pd.merge(left, right, how='left', on=['key1', 'key2'])

In [46]: result = pd.merge(left, right, how='right', on=['key1', 'key2'])


In [47]: result = pd.merge(left, right, how='outer', on=['key1', 'key2'])

In [48]: result = pd.merge(left, right, how='inner', on=['key1', 'key2'])

[email protected]
T56GZSRVAH

Here is another example with duplicate join keys in DataFrames:

In [49]: left = pd.DataFrame({'A': [1, 2], 'B': [2, 2]})

In [50]: right = pd.DataFrame({'A': [4, 5, 6], 'B': [2, 2, 2]})

In [51]: result = pd.merge(left, right, on='B', how='outer')


Warning: Joining / merging on duplicate keys can produce a frame whose row count is the product of the duplicated
row counts, which may result in memory overflow. It is the user's responsibility to manage duplicate values in
keys before joining large DataFrames.
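
A quick way to see the blow-up with the small frames just above (a sketch, not part of the original example): the key B == 2 appears twice in left and three times in right, so the outer merge has 2 * 3 == 6 rows.

# The row count of a merge on duplicated keys is the product of the duplicate counts.
blown_up = pd.merge(left, right, on='B', how='outer')
assert len(blown_up) == 6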

Checking for duplicate keys

New in version 0.21.0.


Users can use the validate argument to automatically check whether there are unexpected duplicates in their merge
keys. Key uniqueness is checked before merge operations and so should protect against memory overflows. Checking
key uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right DataFrame. As this is not a one-to-one merge
[email protected]
– as specified in the validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({'A' : [1,2], 'B' : [1, 2]})

In [53]: right = pd.DataFrame({'A' : [4,5,6], 'B': [2, 2, 2]})

In [53]: result = pd.merge(left, right, on='B', how='outer', validate="one_to_one")


...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge

If the user is aware of the duplicates in the right DataFrame but wants to ensure there are no duplicates in the left
DataFrame, one can use the validate='one_to_many' argument instead, which will not raise an exception.

In [54]: pd.merge(left, right, on='B', how='outer', validate="one_to_many")


Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0


The merge indicator

merge() accepts the argument indicator. If True, a Categorical-type column called _merge will be added to
the output object that takes on values:

Observation Origin                  _merge value
Merge key only in 'left' frame      left_only
Merge key only in 'right' frame     right_only
Merge key in both frames            both

In [55]: df1 = pd.DataFrame({'col1': [0, 1], 'col_left': ['a', 'b']})

In [56]: df2 = pd.DataFrame({'col1': [1, 2, 2], 'col_right': [2, 2, 2]})

In [57]: pd.merge(df1, df2, on='col1', how='outer', indicator=True)


Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only

The indicator argument will also accept string arguments, in which case the indicator function will use the value
of the passed string as the name for the indicator column.

In [58]: pd.merge(df1, df2, on='col1', how='outer', indicator='indicator_column')


Out[58]:
[email protected]
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only

Merge dtypes

Merging will preserve the dtype of the join keys.

In [59]: left = pd.DataFrame({'key': [1], 'v1': [10]})

In [60]: left
Out[60]:
key v1
0 1 10

In [61]: right = pd.DataFrame({'key': [1, 2], 'v1': [20, 30]})

In [62]: right
Out[62]:
key v1
0 1 20
1 2 30

We are able to preserve the join keys:


In [63]: pd.merge(left, right, how='outer')


Out[63]:
key v1
0 1 10
1 1 20
2 2 30

In [64]: pd.merge(left, right, how='outer').dtypes


Out[64]:
key int64
v1 int64
dtype: object

Of course if you have missing values that are introduced, then the resulting dtype will be upcast.
In [65]: pd.merge(left, right, how='outer', on='key')
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30

In [66]: pd.merge(left, right, how='outer', on='key').dtypes


Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object

[email protected]
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype

In [68]: X = pd.Series(np.random.choice(['foo', 'bar'], size=(10,)))

In [69]: X = X.astype(CategoricalDtype(categories=['foo', 'bar']))

In [70]: left = pd.DataFrame({'X': X,


....: 'Y': np.random.choice(['one', 'two', 'three'],
....: size=(10,))})
....:

In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three

In [72]: left.dtypes


Out[72]:
X category
Y object
dtype: object

The right frame.

In [73]: right = pd.DataFrame({'X': pd.Series(['foo', 'bar'],


....: dtype=CategoricalDtype(['foo', 'bar'])),
....: 'Z': [1, 2]})
....:

In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2

In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object

The merged result:

In [76]: result = pd.merge(left, right, how='outer')


[email protected]
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1

In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object

Note: The category dtypes must be exactly the same, meaning the same categories and the ordered attribute. Other-
wise the result will coerce to the categories’ dtype.

Note: Merging on category dtypes that are the same can be quite performant compared to object dtype merging.


Joining on index

DataFrame.join() is a convenient method for combining the columns of two potentially differently-indexed
DataFrames into a single result DataFrame. Here is a very basic example:

In [79]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],


....: 'B': ['B0', 'B1', 'B2']},
....: index=['K0', 'K1', 'K2'])
....:

In [80]: right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],


....: 'D': ['D0', 'D2', 'D3']},
....: index=['K0', 'K2', 'K3'])
....:

In [81]: result = left.join(right)

In [82]: result = left.join(right, how='outer')


[email protected]
T56GZSRVAH

The same as above, but with how='inner'.

In [83]: result = left.join(right, how='inner')

The data alignment here is on the indexes (row labels). This same behavior can be achieved using merge plus
additional arguments instructing it to use the indexes:


In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how='outer')

In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how='inner');

[email protected]
T56GZSRVAH
Joining key columns on an index

join() takes an optional on argument which may be a column or multiple column names, which specifies that the
passed DataFrame is to be aligned on that column in the DataFrame. These two function calls are completely
equivalent:

left.join(right, on=key_or_keys)
pd.merge(left, right, left_on=key_or_keys, right_index=True,
how='left', sort=False)

Obviously you can choose whichever form you find more convenient. For many-to-one joins (where one of the
DataFrames is already indexed by the join key), using join may be more convenient. Here is a simple example:

In [86]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],


....: 'B': ['B0', 'B1', 'B2', 'B3'],
....: 'key': ['K0', 'K1', 'K0', 'K1']})
....:

In [87]: right = pd.DataFrame({'C': ['C0', 'C1'],


....: 'D': ['D0', 'D1']},
....: index=['K0', 'K1'])
....:

In [88]: result = left.join(right, on='key')


In [89]: result = pd.merge(left, right, left_on='key', right_index=True,


....: how='left', sort=False);
....:

To join on multiple keys, the passed DataFrame must have a MultiIndex:

In [90]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],


....: 'B': ['B0', 'B1', 'B2', 'B3'],
[email protected]
....: 'key1': ['K0', 'K0', 'K1', 'K2'],
....: 'key2': ['K0', 'K1', 'K0', 'K1']})
....:

In [91]: index = pd.MultiIndex.from_tuples([('K0', 'K0'), ('K1', 'K0'),


....: ('K2', 'K0'), ('K2', 'K1')])
....:

In [92]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],


....: 'D': ['D0', 'D1', 'D2', 'D3']},
....: index=index)
....:

Now this can be joined by passing the two key column names:

In [93]: result = left.join(right, on=['key1', 'key2'])

The default for DataFrame.join is to perform a left join (essentially a “VLOOKUP” operation, for Excel users),
which uses only the keys found in the calling DataFrame. Other join types, for example inner join, can be just as
easily performed:

In [94]: result = left.join(right, on=['key1', 'key2'], how='inner')

As you can see, this drops any rows where there was no match.

Joining a single Index to a MultiIndex

You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame. The level will match on the
name of the index of the singly-indexed frame against a level name of the MultiIndexed frame.

In [95]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],


....: 'B': ['B0', 'B1', 'B2']},
....: index=pd.Index(['K0', 'K1', 'K2'], name='key'))
....:

[email protected]
In [96]: index = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
....: ('K2', 'Y2'), ('K2', 'Y3')],
....: names=['key', 'Y'])
....:

In [97]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],


....: 'D': ['D0', 'D1', 'D2', 'D3']},
....: index=index)
....:

In [98]: result = left.join(right, how='inner')

This is equivalent to, but less verbose and more memory efficient / faster than, the following approach.

In [99]: result = pd.merge(left.reset_index(), right.reset_index(),


....: on=['key'], how='inner').set_index(['key','Y'])
....:


Joining with two MultiIndexes

This is supported in a limited way, provided that the index for the right argument is completely used in the join, and is
a subset of the indices in the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product([list('abc'), list('xy'), [1, 2]],
.....: names=['abc', 'xy', 'num'])
.....:

In [101]: left = pd.DataFrame({'v1': range(12)}, index=leftindex)

In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
[email protected]
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11

In [103]: rightindex = pd.MultiIndex.from_product([list('abc'), list('xy')],


.....: names=['abc', 'xy'])
.....:

In [104]: right = pd.DataFrame({'v2': [100 * i for i in range(1, 7)]}, index=rightindex)

In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600

In [106]: left.join(right, on=['abc', 'xy'], how='inner')


Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600

If that condition is not satisfied, a join with two multi-indexes can be done using the following code.

In [107]: leftindex = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),


.....: ('K1', 'X2')],
.....: names=['key', 'X'])
.....:

In [108]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],


.....: 'B': ['B0', 'B1', 'B2']},
.....: index=leftindex)
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
.....: ('K2', 'Y2'), ('K2', 'Y3')],
.....: names=['key', 'Y'])
.....:

In [110]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],


.....: 'D': ['D0', 'D1', 'D2', 'D3']},
.....: index=rightindex)
.....:

In [111]: result = pd.merge(left.reset_index(), right.reset_index(),


.....: on=['key'], how='inner').set_index(['key', 'X', 'Y'])
.....:


Merging on a combination of columns and index levels

New in version 0.23.


Strings passed as the on, left_on, and right_on parameters may refer to either column names or index level
names. This enables merging DataFrame instances on a combination of index levels and columns without resetting
indexes.

In [112]: left_index = pd.Index(['K0', 'K0', 'K1', 'K2'], name='key1')

In [113]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],


.....: 'B': ['B0', 'B1', 'B2', 'B3'],
.....: 'key2': ['K0', 'K1', 'K0', 'K1']},
.....: index=left_index)
.....:

In [114]: right_index = pd.Index(['K0', 'K1', 'K2', 'K2'], name='key1')

In [115]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],


.....: 'D': ['D0', 'D1', 'D2', 'D3'],
.....: 'key2': ['K0', 'K0', 'K0', 'K1']},
.....: index=right_index)
.....:

In [116]: result = left.merge(right, on=['key1', 'key2'])

[email protected]
T56GZSRVAH

Note: When DataFrames are merged on a string that matches an index level in both frames, the index level is
preserved as an index level in the resulting DataFrame.

Note: When DataFrames are merged using only some of the levels of a MultiIndex, the extra levels will be dropped
from the resulting merge. In order to preserve those levels, use reset_index on those level names to move those
levels to columns prior to doing the merge.
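
A hedged sketch of that workaround (the frames, the level name 'batch', and the column names are hypothetical):

# 'batch' is an index level that the merge keys do not use; resetting it first
# keeps it available as an ordinary column instead of being dropped by the merge.
idx = pd.MultiIndex.from_tuples([('K0', 1), ('K1', 2)], names=['key', 'batch'])
left_ml = pd.DataFrame({'A': ['A0', 'A1']}, index=idx)
right_ml = pd.DataFrame({'key': ['K0', 'K1'], 'B': ['B0', 'B1']})

merged = left_ml.reset_index('batch').merge(right_ml, on='key')
# 'batch' survives as a column of merged.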

Note: If a string matches both a column name and an index level name, then a warning is issued and the column takes
precedence. This will result in an ambiguity error in a future version.


Overlapping value columns

The merge suffixes argument takes a tuple or list of strings to append to overlapping column names in the input
DataFrames to disambiguate the result columns:

In [117]: left = pd.DataFrame({'k': ['K0', 'K1', 'K2'], 'v': [1, 2, 3]})

In [118]: right = pd.DataFrame({'k': ['K0', 'K0', 'K3'], 'v': [4, 5, 6]})

In [119]: result = pd.merge(left, right, on='k')

In [120]: result = pd.merge(left, right, on='k', suffixes=['_l', '_r'])

[email protected]
T56GZSRVAH

DataFrame.join() has lsuffix and rsuffix arguments which behave similarly.

In [121]: left = left.set_index('k')

In [122]: right = right.set_index('k')

In [123]: result = left.join(right, lsuffix='_l', rsuffix='_r')


Joining multiple DataFrames

A list or tuple of DataFrames can also be passed to join() to join them together on their indexes.

In [124]: right2 = pd.DataFrame({'v': [7, 8, 9]}, index=['K1', 'K1', 'K2'])

In [125]: result = left.join([right, right2])

Merging together values within Series or DataFrame columns

Another fairly common situation is to have two like-indexed (or similarly indexed) Series or DataFrame objects
and to want to “patch” values in one object with values for matching indices from the other. Here is an example:

In [126]: df1 = pd.DataFrame([[np.nan, 3., 5.], [-4.6, np.nan, np.nan],


[email protected] [np.nan, 7., np.nan]])
T56GZSRVAH .....:
.....:

In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5., 1.6, 4]],


.....: index=[1, 2])
.....:

For this, use the combine_first() method:

In [128]: result = df1.combine_first(df2)

Note that this method only takes values from the right DataFrame if they are missing in the left DataFrame. A
related method, update(), alters non-NA values in place:

In [129]: df1.update(df2)
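
A tiny self-contained sketch of the difference between the two methods (hypothetical values, not the frames above; numpy is assumed to be imported as np):

a = pd.DataFrame({'v': [1.0, np.nan]})
b = pd.DataFrame({'v': [9.0, 2.0]})

# combine_first only fills the holes in `a`, so 'v' becomes [1.0, 2.0] ...
filled = a.combine_first(b)

# ... while update overwrites `a` in place wherever `b` is non-NA, giving [9.0, 2.0].
a.update(b)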


3.4.3 Timeseries friendly merging

Merging ordered data

A merge_ordered() function allows combining time series and other ordered data. In particular it has an optional
fill_method keyword to fill/interpolate missing data:

In [130]: left = pd.DataFrame({'k': ['K0', 'K1', 'K1', 'K2'],


.....: 'lv': [1, 2, 3, 4],
.....: 's': ['a', 'b', 'c', 'd']})
.....:

In [131]: right = pd.DataFrame({'k': ['K1', 'K2', 'K4'],


.....: 'rv': [1, 2, 3]})
.....:

In [132]: pd.merge_ordered(left, right, fill_method='ffill', left_by='s')


Out[132]:
k lv s rv
[email protected]
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0

Merging asof

A merge_asof() is similar to an ordered left-join except that we match on nearest key rather than equal keys. For
each row in the left DataFrame, we select the last row in the right DataFrame whose on key is less than the
left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the by key equally, in addition to the nearest
match on the on key.
For example; we might have trades and quotes and we want to asof merge them.

In [133]: trades = pd.DataFrame({


.....: 'time': pd.to_datetime(['20160525 13:30:00.023',
.....: '20160525 13:30:00.038',


.....: '20160525 13:30:00.048',
.....: '20160525 13:30:00.048',
.....: '20160525 13:30:00.048']),
.....: 'ticker': ['MSFT', 'MSFT',
.....: 'GOOG', 'GOOG', 'AAPL'],
.....: 'price': [51.95, 51.95,
.....: 720.77, 720.92, 98.00],
.....: 'quantity': [75, 155,
.....: 100, 100, 100]},
.....: columns=['time', 'ticker', 'price', 'quantity'])
.....:

In [134]: quotes = pd.DataFrame({


.....: 'time': pd.to_datetime(['20160525 13:30:00.023',
.....: '20160525 13:30:00.023',
.....: '20160525 13:30:00.030',
.....: '20160525 13:30:00.041',
.....: '20160525 13:30:00.048',
.....: '20160525 13:30:00.049',
.....: '20160525 13:30:00.072',
.....: '20160525 13:30:00.075']),
.....: 'ticker': ['GOOG', 'MSFT', 'MSFT',
.....: 'MSFT', 'GOOG', 'AAPL', 'GOOG',
.....: 'MSFT'],
.....: 'bid': [720.50, 51.95, 51.97, 51.99,
.....: 720.50, 97.99, 720.50, 52.01],
.....: 'ask': [720.93, 51.96, 51.98, 52.00,
.....: 720.93, 98.01, 720.88, 52.03]},
[email protected]
.....: columns=['time', 'ticker', 'bid', 'ask'])
.....:

In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100

In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03

By default we are taking the asof of the quotes.

In [137]: pd.merge_asof(trades, quotes,


.....: on='time',


.....: by='ticker')
.....:
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN

We only asof within 2ms between the quote time and the trade time.

In [138]: pd.merge_asof(trades, quotes,


.....: on='time',
.....: by='ticker',
.....: tolerance=pd.Timedelta('2ms'))
.....:
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN

We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time. Note
that though we exclude the exact matches (of the quotes), prior quotes do propagate to that point in time.
[email protected]
In [139]: pd.merge_asof(trades, quotes,
.....: on='time',
.....: by='ticker',
.....: tolerance=pd.Timedelta('10ms'),
.....: allow_exact_matches=False)
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN


3.5 Reshaping and pivot tables

3.5.1 Reshaping by pivoting DataFrame objects

Data is often stored in so-called “stacked” or “record” format:


[email protected]
T56GZSRVAH
In [1]: df
Out[1]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804

For the curious here is how the above DataFrame was created:
import pandas._testing as tm

tm.N = 3


def unpivot(frame):
    N, K = frame.shape
    data = {'value': frame.to_numpy().ravel('F'),
            'variable': np.asarray(frame.columns).repeat(N),
            'date': np.tile(np.asarray(frame.index), K)}
    return pd.DataFrame(data, columns=['date', 'variable', 'value'])


df = unpivot(tm.makeTimeDataFrame())

To select out everything for variable A we could do:

In [2]: df[df['variable'] == 'A']


Out[2]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059

But suppose we wish to do time series operations with the variables. A better representation would be where the
columns are the unique variables and an index of dates identifies individual observations. To reshape the data into
this form, we use the DataFrame.pivot() method (also implemented as a top level function pivot()):

In [3]: df.pivot(index='date', columns='variable', values='value')


Out[3]:
variable A B C D
date
2000-01-03 0.469112 -1.135632 0.119209 -2.104569
2000-01-04 -0.282863 1.212112 -1.044236 -0.494929
2000-01-05 -1.509059 -0.173215 -0.861849 1.071804

If the values argument is omitted, and the input DataFrame has more than one column of values which are not
[email protected]
used as column or index inputs to pivot, then the resulting “pivoted” DataFrame will have hierarchical columns
whose topmost level indicates the respective value column:

In [4]: df['value2'] = df['value'] * 2

In [5]: pivoted = df.pivot(index='date', columns='variable')

In [6]: pivoted
Out[6]:
               value                                   value2
variable           A         B         C         D         A         B         C         D
date
2000-01-03  0.469112 -1.135632  0.119209 -2.104569  0.938225 -2.271265  0.238417 -4.209138
2000-01-04 -0.282863  1.212112 -1.044236 -0.494929 -0.565727  2.424224 -2.088472 -0.989859
2000-01-05 -1.509059 -0.173215 -0.861849  1.071804 -3.018117 -0.346429 -1.723698  2.143608

You can then select subsets from the pivoted DataFrame:

In [7]: pivoted['value2']
Out[7]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138


2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608

Note that this returns a view on the underlying data in the case where the data are homogeneously-typed.

Note: pivot() will error with a ValueError: Index contains duplicate entries, cannot
reshape if the index/column pair is not unique. In this case, consider using pivot_table() which is a gen-
eralization of pivot that can handle duplicate values for one index/column pair.
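
A minimal sketch of that fallback (hypothetical data, not taken from the examples above):

# pivot would raise here because the (date, variable) pair is duplicated;
# pivot_table aggregates the duplicates instead (mean by default), giving 2.0 for A.
dup = pd.DataFrame({'date': ['2000-01-03', '2000-01-03'],
                    'variable': ['A', 'A'],
                    'value': [1.0, 3.0]})
dup.pivot_table(index='date', columns='variable', values='value')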

3.5.2 Reshaping by stacking and unstacking

[email protected]
T56GZSRVAH

Closely related to the pivot() method are the related stack() and unstack() methods available on Series
and DataFrame. These methods are designed to work together with MultiIndex objects (see the section on
hierarchical indexing). Here are essentially what these methods do:
• stack: “pivot” a level of the (possibly hierarchical) column labels, returning a DataFrame with an index
with a new inner-most level of row labels.
• unstack: (inverse operation of stack) “pivot” a level of the (possibly hierarchical) row index to the column
axis, producing a reshaped DataFrame with a new inner-most level of column labels.


The clearest way to explain is by example. Let’s take a prior example data set from the hierarchical indexing section:

In [8]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',


[email protected]
...: 'foo', 'foo', 'qux', 'qux'],
...: ['one', 'two', 'one', 'two',
...: 'one', 'two', 'one', 'two']]))
...:

In [9]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

In [10]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])

In [11]: df2 = df[:4]

In [12]: df2
Out[12]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401

The stack function “compresses” a level in the DataFrame’s columns to produce either:
• A Series, in the case of a simple column Index.
• A DataFrame, in the case of a MultiIndex in the columns.
If the columns have a MultiIndex, you can choose which level to stack. The stacked level becomes the new lowest
level in a MultiIndex on the columns:


In [13]: stacked = df2.stack()

In [14]: stacked
Out[14]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack
is unstack, which by default unstacks the last level:

In [15]: stacked.unstack()
Out[15]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401

In [16]: stacked.unstack(1)
[email protected]
Out[16]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401

In [17]: stacked.unstack(0)
Out[17]:
first bar baz
second
one A 0.721555 -0.424972
B -0.706771 0.567020
two A -1.039575 0.276232
B 0.271860 -1.087401


If the indexes have names, you can use the level names instead of specifying the level numbers:

In [18]: stacked.unstack('second')
Out[18]:
[email protected]
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401


Notice that the stack and unstack methods implicitly sort the index levels involved. Hence a call to stack and
then unstack, or vice versa, will result in a sorted copy of the original DataFrame or Series:
In [19]: index = pd.MultiIndex.from_product([[2, 1], ['a', 'b']])
[email protected]
In [20]: df = pd.DataFrame(np.random.randn(4), index=index, columns=['A'])

In [21]: df
Out[21]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885

In [22]: all(df.unstack().stack() == df.sort_index())


Out[22]: True

The above code will raise a TypeError if the call to sort_index is removed.

Multiple levels

You may also stack or unstack more than one level at a time by passing a list of levels, in which case the end result is
as if each level in the list were processed individually.
In [23]: columns = pd.MultiIndex.from_tuples([
....: ('A', 'cat', 'long'), ('B', 'cat', 'long'),
....: ('A', 'dog', 'short'), ('B', 'dog', 'short')],
....: names=['exp', 'animal', 'hair_length']
....: )
....:



In [24]: df = pd.DataFrame(np.random.randn(4, 4), columns=columns)

In [25]: df
Out[25]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061

In [26]: df.stack(level=['animal', 'hair_length'])


Out[26]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061

The list of levels can contain either level names or level numbers (but not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
[email protected]
# from above is equivalent to:
In [27]: df.stack(level=[1, 2])
Out[27]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061

Missing data

These functions are intelligent about handling missing data and do not expect each subgroup within the hierarchical
index to have the same set of labels. They also can handle the index being unsorted (but you can make it sorted by
calling sort_index, of course). Here is a more complex example:
In [28]: columns = pd.MultiIndex.from_tuples([('A', 'cat'), ('B', 'dog'),
....: ('B', 'cat'), ('A', 'dog')],
....: names=['exp', 'animal'])
....:

In [29]: index = pd.MultiIndex.from_product([('bar', 'baz', 'foo', 'qux'),


....: ('one', 'two')],


....: names=['first', 'second'])
....:

In [30]: df = pd.DataFrame(np.random.randn(8, 4), index=index, columns=columns)

In [31]: df2 = df.iloc[[0, 1, 2, 4, 5, 7]]

In [32]: df2
Out[32]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707

As mentioned above, stack can be called with a level argument to select which level in the columns to stack:

In [33]: df2.stack('exp')
Out[33]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804

In [34]: df2.stack('animal')
Out[34]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
two cat 0.875906 0.974466
dog -2.006747 -2.211372
qux two cat -1.226825 -1.281247
dog -0.727707 0.769804

Unstacking can result in missing values if subgroups do not have the same set of labels. By default, missing values
will be replaced with the default fill value for that data type, NaN for float, NaT for datetimelike, etc. For integer types,


by default, data will be converted to float and missing values will be set to NaN.

In [35]: df3 = df.iloc[[0, 1, 4, 7], [1, 2]]

In [36]: df3
Out[36]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247

In [37]: df3.unstack()
Out[37]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247

Alternatively, unstack takes an optional fill_value argument, for specifying the value of missing data.

In [38]: df3.unstack(fill_value=-1e9)
Out[38]:
exp B
[email protected]
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00

With a MultiIndex

Unstacking when the columns are a MultiIndex is also careful about doing the right thing:

In [39]: df[:3].unstack(0)
Out[39]:
exp A B A
animal cat dog cat dog
first bar baz bar baz bar baz bar baz
second
one 0.895717 0.410835 0.805244 0.81385 -1.206412 0.132003 2.565646 -0.827317
two 1.431256 NaN 1.340309 NaN -1.170299 NaN -0.226169 NaN

In [40]: df2.unstack(1)
Out[40]:
exp A B A
animal cat dog cat dog
second one two one two one two one two
first
bar 0.895717 1.431256 0.805244 1.340309 -1.206412 -1.170299 2.565646 -0.226169


baz 0.410835 NaN 0.813850 NaN 0.132003 NaN -0.827317 NaN
foo -1.413681 0.875906 1.607920 -2.211372 1.024180 0.974466 0.569605 -2.006747
qux NaN -1.226825 NaN 0.769804 NaN -1.281247 NaN -0.727707

3.5.3 Reshaping by Melt

[email protected]
T56GZSRVAH

The top-level melt() function and the corresponding DataFrame.melt() are useful to massage a DataFrame
into a format where one or more columns are identifier variables, while all other columns, considered measured
variables, are “unpivoted” to the row axis, leaving just two non-identifier columns, “variable” and “value”. The names
of those columns can be customized by supplying the var_name and value_name parameters.
For instance,
In [41]: cheese = pd.DataFrame({'first': ['John', 'Mary'],
....: 'last': ['Doe', 'Bo'],
....: 'height': [5.5, 6.0],
....: 'weight': [130, 150]})
....:

In [42]: cheese
Out[42]:
first last height weight
0 John Doe 5.5 130
1 Mary Bo 6.0 150

In [43]: cheese.melt(id_vars=['first', 'last'])


Out[43]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0

In [44]: cheese.melt(id_vars=['first', 'last'], var_name='quantity')


Out[44]:
first last quantity value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0

Another way to transform is to use the wide_to_long() panel data convenience function. It is less flexible than
melt(), but more user-friendly.
In [45]: dft = pd.DataFrame({"A1970": {0: "a", 1: "b", 2: "c"},
....: "A1980": {0: "d", 1: "e", 2: "f"},
....: "B1970": {0: 2.5, 1: 1.2, 2: .7},
....: "B1980": {0: 3.2, 1: 1.3, 2: .1},
....: "X": dict(zip(range(3), np.random.randn(3)))
....: })
....:

In [46]: dft["id"] = dft.index

In [47]: dft
Out[47]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
[email protected] f
2 c 0.7 0.1 0.695775 2
T56GZSRVAH
In [48]: pd.wide_to_long(dft, ["A", "B"], i="id", j="year")
Out[48]:
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1

3.5.4 Combining with stats and GroupBy

It should be no shock that combining pivot / stack / unstack with GroupBy and the basic Series and DataFrame
statistical functions can produce some very expressive and fast data manipulations.
In [49]: df
Out[49]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605


two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707

In [50]: df.stack().mean(1).unstack()
Out[50]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048

# same result, another way


In [51]: df.groupby(level=1, axis=1).mean()
Out[51]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048

In [52]: df.stack().groupby(level=1).mean()
Out[52]:
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486

In [53]: df.mean().unstack(0)
Out[53]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430

3.5.5 Pivot tables

While pivot() provides general purpose pivoting with various data types (strings, numerics, etc.), pandas also
provides pivot_table() for pivoting with aggregation of numeric data.
The function pivot_table() can be used to create spreadsheet-style pivot tables. See the cookbook for some
advanced strategies.
It takes a number of arguments:
• data: a DataFrame object.


• values: a column or a list of columns to aggregate.


• index: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on the
pivot table index. If an array is passed, it is used in the same manner as column values.
• columns: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on
the pivot table column. If an array is passed, it is used in the same manner as column values.
• aggfunc: function to use for aggregation, defaulting to numpy.mean.
Consider a data set like this:

In [54]: import datetime

In [55]: df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 6,


....: 'B': ['A', 'B', 'C'] * 8,
....: 'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 4,
....: 'D': np.random.randn(24),
....: 'E': np.random.randn(24),
....: 'F': [datetime.datetime(2013, i, 1) for i in range(1, 13)]
....: + [datetime.datetime(2013, i, 15) for i in range(1, 13)]})
....:

In [56]: df
Out[56]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
.. ... .. ... ... ... ...
19 three B foo 0.690579 -2.213588 2013-08-15
20 one C foo 0.995761 1.063327 2013-09-15
21 one A bar 2.396780 1.266143 2013-10-15
22 two B bar 0.014871 0.299368 2013-11-15
23 three C bar 3.357427 -0.863838 2013-12-15

[24 rows x 6 columns]

We can produce pivot tables from this data very easily:

In [57]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])


Out[57]:
C bar foo
A B
one A 1.120915 -0.514058
B -0.338421 0.002759
C -0.538846 0.699535
three A -1.181568 NaN
B NaN 0.433512
C 0.588783 NaN
two A NaN 1.000985
B 0.158248 NaN
C NaN 0.176180

In [58]: pd.pivot_table(df, values='D', index=['B'], columns=['A', 'C'], aggfunc=np.sum)

Out[58]:


A one three two
C bar foo bar foo bar foo
B
A 2.241830 -1.028115 -2.363137 NaN NaN 2.001971
B -0.676843 0.005518 NaN 0.867024 0.316495 NaN
C -1.077692 1.399070 1.177566 NaN NaN 0.352360

In [59]: pd.pivot_table(df, values=['D', 'E'], index=['B'], columns=['A', 'C'],


....: aggfunc=np.sum)
....:
Out[59]:
           D                                                             E
A        one               three             two                       one               three             two
C        bar       foo       bar       foo       bar       foo         bar       foo       bar       foo       bar       foo
B
A   2.241830 -1.028115 -2.363137       NaN       NaN  2.001971    2.786113 -0.043211  1.922577       NaN       NaN  0.128491
B  -0.676843  0.005518       NaN  0.867024  0.316495       NaN    1.368280 -1.103384       NaN -2.128743 -0.194294       NaN
C  -1.077692  1.399070  1.177566       NaN       NaN  0.352360   -1.976883  1.495717 -0.263660       NaN       NaN  0.872482

The result object is a DataFrame having potentially hierarchical indexes on the rows and columns. If the values
column name is not given, the pivot table will include all of the data that can be aggregated in an additional level of
hierarchy in the columns:

In [60]: pd.pivot_table(df, index=['A', 'B'], columns=['C'])


Out[60]:
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 NaN 0.961289 NaN
B NaN 0.433512 NaN -1.064372
C 0.588783 NaN -0.131830 NaN
two A NaN 1.000985 NaN 0.064245
B 0.158248 NaN -0.097147 NaN
C NaN 0.176180 NaN 0.436241

Also, you can use Grouper for index and columns keywords. For detail of Grouper, see Grouping with a
Grouper specification.

In [61]: pd.pivot_table(df, values='D', index=pd.Grouper(freq='M', key='F'),


....: columns='C')
....:
Out[61]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759


2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN

You can render a nice output of the table omitting the missing values by calling to_string if you wish:

In [62]: table = pd.pivot_table(df, index=['A', 'B'], columns=['C'])

In [63]: print(table.to_string(na_rep=''))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
Note that pivot_table is also available as an instance method on DataFrame, i.e. DataFrame.
pivot_table().

Adding margins

If you pass margins=True to pivot_table, special All columns and rows will be added with partial group
aggregates across the categories on the rows and columns:

In [64]: df.pivot_table(index=['A', 'B'], columns='C', margins=True, aggfunc=np.std)


Out[64]:
D E
C bar foo All bar foo All
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389


3.5.6 Cross tabulations

Use crosstab() to compute a cross-tabulation of two (or more) factors. By default crosstab computes a fre-
quency table of the factors unless an array of values and an aggregation function are passed.
It takes a number of arguments
• index: array-like, values to group by in the rows.
• columns: array-like, values to group by in the columns.
• values: array-like, optional, array of values to aggregate according to the factors.
• aggfunc: function, optional, If no values array is passed, computes a frequency table.
• rownames: sequence, default None, must match number of row arrays passed.
• colnames: sequence, default None, if passed, must match number of column arrays passed.
• margins: boolean, default False, Add row/column margins (subtotals)
• normalize: boolean, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False. Normalize by dividing all values
by the sum of values.
Any Series passed will have their name attributes used unless row or column names for the cross-tabulation are
specified.
For example:

In [65]: foo, bar, dull, shiny, one, two = 'foo', 'bar', 'dull', 'shiny', 'one', 'two'

In [66]: a = np.array([foo, foo, bar, bar, foo, foo], dtype=object)


In [67]: b = np.array([one, one, two, one, two, one], dtype=object)
In [68]: c = np.array([dull, dull, shiny, dull, dull, shiny], dtype=object)

In [69]: pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])


Out[69]:
b one two
c dull shiny dull shiny
a
bar 1 0 0 1
foo 2 1 1 0

If crosstab receives only two Series, it will provide a frequency table.

In [70]: df = pd.DataFrame({'A': [1, 2, 2, 2, 2], 'B': [3, 3, 4, 4, 4],


....: 'C': [1, 1, np.nan, 1, 1]})
....:

In [71]: df
Out[71]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0

In [72]: pd.crosstab(df['A'], df['B'])




Out[72]:
B 3 4
A
1 1 0
2 1 3

Any input passed containing Categorical data will have all of its categories included in the cross-tabulation, even
if the actual data does not contain any instances of a particular category.

In [73]: foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])

In [74]: bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f'])

In [75]: pd.crosstab(foo, bar)


Out[75]:
col_0 d e
row_0
a 1 0
b 0 1

Normalization

Frequency tables can also be normalized to show percentages rather than counts using the normalize argument:

In [76]: pd.crosstab(df['A'], df['B'], normalize=True)


Out[76]:
B 3 4
A
1 0.2 0.0
2 0.2 0.6

normalize can also normalize values within each row or within each column:

In [77]: pd.crosstab(df['A'], df['B'], normalize='columns')


Out[77]:
B 3 4
A
1 0.5 0.0
2 0.5 1.0
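
Normalizing within each row works the same way with normalize='index'. A short sketch using the same df
(the output shown is what we would expect, for illustration):

pd.crosstab(df['A'], df['B'], normalize='index')
# B       3     4
# A
# 1    1.00  0.00
# 2    0.25  0.75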

crosstab can also be passed a third Series and an aggregation function (aggfunc) that will be applied to the
values of the third Series within each group defined by the first two Series:

In [78]: pd.crosstab(df['A'], df['B'], values=df['C'], aggfunc=np.sum)


Out[78]:
B 3 4
A
1 1.0 NaN
2 1.0 2.0


Adding margins

Finally, one can also add margins or normalize this output.

In [79]: pd.crosstab(df['A'], df['B'], values=df['C'], aggfunc=np.sum, normalize=True,


....: margins=True)
....:
Out[79]:
B 3 4 All
A
1 0.25 0.0 0.25
2 0.25 0.5 0.75
All 0.50 0.5 1.00

3.5.7 Tiling

The cut() function computes groupings for the values of the input array and is often used to transform continuous
variables to discrete or categorical variables:

In [80]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])

In [81]: pd.cut(ages, bins=3)


Out[81]:
[(9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (26.667, 43.333], (43.333, 60.0], (43.333, 60.0]]
Categories (3, interval[float64]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
If the bins keyword is an integer, then equal-width bins are formed. Alternatively we can specify custom bin-edges:

In [82]: c = pd.cut(ages, bins=[0, 18, 35, 70])

In [83]: c
Out[83]:
[(0, 18], (0, 18], (0, 18], (0, 18], (18, 35], (18, 35], (18, 35], (35, 70], (35, 70]]
Categories (3, interval[int64]): [(0, 18] < (18, 35] < (35, 70]]

If the bins keyword is an IntervalIndex, then these will be used to bin the passed data:

pd.cut([25, 20, 50], bins=c.categories)
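
An IntervalIndex can also be constructed directly rather than reused from an earlier cut. A brief sketch,
assuming the same breakpoints as the custom bin-edges above:

bins = pd.IntervalIndex.from_breaks([0, 18, 35, 70])
pd.cut([25, 20, 50], bins=bins)
# expected: [(18, 35], (18, 35], (35, 70]]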

3.5.8 Computing indicator / dummy variables

To convert a categorical variable into a "dummy" or "indicator" DataFrame, for example a column in a DataFrame
(a Series) which has k distinct values, you can derive a DataFrame containing k columns of 1s and 0s using
get_dummies():

In [84]: df = pd.DataFrame({'key': list('bbacab'), 'data1': range(6)})

In [85]: pd.get_dummies(df['key'])
Out[85]:
a b c
0 0 1 0
1 0 1 0


2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0

Sometimes it’s useful to prefix the column names, for example when merging the result with the original DataFrame:
In [86]: dummies = pd.get_dummies(df['key'], prefix='key')

In [87]: dummies
Out[87]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0

In [88]: df[['data1']].join(dummies)
Out[88]:
data1 key_a key_b key_c
0 0 0 1 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 1 0 0
5 5 0 1 0
This function is often used along with discretization functions like cut:
In [89]: values = np.random.randn(10)

In [90]: values
Out[90]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])

In [91]: bins = [0, 0.2, 0.4, 0.6, 0.8, 1]

In [92]: pd.get_dummies(pd.cut(values, bins))


Out[92]:
(0.0, 0.2] (0.2, 0.4] (0.4, 0.6] (0.6, 0.8] (0.8, 1.0]
0 0 0 1 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 1 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 1 0 0 0 0
8 0 0 0 0 0
9 0 0 1 0 0

See also Series.str.get_dummies.


get_dummies() also accepts a DataFrame. By default all categorical variables (categorical in the statistical
sense, those with object or categorical dtype) are encoded as dummy variables.


In [93]: df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'],


....: 'C': [1, 2, 3]})
....:

In [94]: pd.get_dummies(df)
Out[94]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0

All non-object columns are included untouched in the output. You can control the columns that are encoded with the
columns keyword.

In [95]: pd.get_dummies(df, columns=['A'])


Out[95]:
B C A_a A_b
0 c 1 1 0
1 c 2 0 1
2 b 3 1 0

Notice that the B column is still included in the output; it just hasn’t been encoded. You can drop B before calling
get_dummies if you don’t want to include it in the output.
As with the Series version, you can pass values for the prefix and prefix_sep. By default the column name
is used as the prefix, and ‘_’ as the prefix separator. You can specify prefix and prefix_sep in 3 ways:
• string: Use the same value for prefix or prefix_sep for each column to be encoded.
• list: Must be the same length as the number of columns being encoded.
• dict: Mapping column name to prefix.

In [96]: simple = pd.get_dummies(df, prefix='new_prefix')

In [97]: simple
Out[97]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0

In [98]: from_list = pd.get_dummies(df, prefix=['from_A', 'from_B'])

In [99]: from_list
Out[99]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0

In [100]: from_dict = pd.get_dummies(df, prefix={'B': 'from_B', 'A': 'from_A'})

In [101]: from_dict
Out[101]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
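
The prefix_sep parameter is handled analogously. A minimal sketch (the separator '__' is arbitrary):

pd.get_dummies(df, prefix_sep='__')
# expected columns: C, A__a, A__b, B__b, B__c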


Sometimes it will be useful to only keep k-1 levels of a categorical variable to avoid collinearity when feeding the
result to statistical models. You can switch to this mode by turning on drop_first.

In [102]: s = pd.Series(list('abcaa'))

In [103]: pd.get_dummies(s)
Out[103]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0

In [104]: pd.get_dummies(s, drop_first=True)


Out[104]:
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0

When a column contains only one level, it will be omitted in the result.

In [105]: df = pd.DataFrame({'A': list('aaaaa'), 'B': list('ababc')})

In [106]: pd.get_dummies(df)
Out[106]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1

In [107]: pd.get_dummies(df, drop_first=True)


Out[107]:
B_b B_c
0 0 0
1 1 0
2 0 0
3 1 0
4 0 1

By default new columns will have np.uint8 dtype. To choose another dtype, use the dtype argument:

In [108]: df = pd.DataFrame({'A': list('abc'), 'B': [1.1, 2.2, 3.3]})

In [109]: pd.get_dummies(df, dtype=bool).dtypes


Out[109]:
B float64
A_a bool
A_b bool
A_c bool
dtype: object

New in version 0.23.0.


3.5.9 Factorizing values

To encode 1-d values as an enumerated type use factorize():

In [110]: x = pd.Series(['A', 'A', np.nan, 'B', 3.14, np.inf])

In [111]: x
Out[111]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object

In [112]: labels, uniques = pd.factorize(x)

In [113]: labels
Out[113]: array([ 0, 0, -1, 1, 2, 3])

In [114]: uniques
Out[114]: Index(['A', 'B', 3.14, inf], dtype='object')

Note that factorize is similar to numpy.unique, but differs in its handling of NaN:

Note: The following numpy.unique will fail under Python 3 with a TypeError because of an ordering bug. See
also here.
In [1]: x = pd.Series(['A', 'A', np.nan, 'B', 3.14, np.inf])
In [2]: pd.factorize(x, sort=True)
Out[2]:
(array([ 2, 2, -1, 3, 0, 1]),
Index([3.14, inf, 'A', 'B'], dtype='object'))

In [3]: np.unique(x, return_inverse=True)[::-1]


Out[3]: (array([3, 3, 0, 4, 1, 2]), array([nan, 3.14, inf, 'A', 'B'], dtype=object))

Note: If you just want to handle one column as a categorical variable (like R’s factor), you can use df["cat_col"]
= pd.Categorical(df["col"]) or df["cat_col"] = df["col"].astype("category"). For
full docs on Categorical, see the Categorical introduction and the API documentation.

3.5.10 Examples

In this section, we will review frequently asked questions and examples. The column names and relevant column
values are named to correspond with how this DataFrame will be pivoted in the answers below.

In [115]: np.random.seed([3, 1415])

In [116]: n = 20

In [117]: cols = np.array(['key', 'row', 'item', 'col'])



In [118]: df = cols + pd.DataFrame((np.random.randint(5, size=(n, 4))


.....: // [2, 1, 2, 1]).astype(str))
.....:

In [119]: df.columns = cols

In [120]: df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val'))

In [121]: df
Out[121]:
key row item col val0 val1
0 key0 row3 item1 col3 0.81 0.04
1 key1 row2 item1 col2 0.44 0.07
2 key1 row0 item1 col0 0.77 0.01
3 key0 row4 item0 col2 0.15 0.59
4 key1 row0 item2 col1 0.81 0.64
.. ... ... ... ... ... ...
15 key0 row3 item1 col1 0.31 0.23
16 key0 row0 item2 col3 0.86 0.01
17 key0 row4 item0 col3 0.64 0.21
18 key2 row2 item2 col0 0.13 0.45
19 key0 row2 item0 col4 0.37 0.70

[20 rows x 6 columns]

Pivoting with single aggregations
Suppose we want to pivot df such that the col values are columns, row values are the index, and the mean of
val0 fills the values. In particular, the resulting DataFrame should look like:

col col0 col1 col2 col3 col4


row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24

This solution uses pivot_table(). Also note that aggfunc='mean' is the default. It is included here to be
explicit.

In [122]: df.pivot_table(
.....: values='val0', index='row', columns='col', aggfunc='mean')
.....:
Out[122]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24

Note that we can also replace the missing values by using the fill_value parameter.


In [123]: df.pivot_table(
.....: values='val0', index='row', columns='col', aggfunc='mean', fill_value=0)
.....:
Out[123]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24

Note that we can pass in other aggregation functions as well. For example, we can also pass in sum.

In [124]: df.pivot_table(
.....: values='val0', index='row', columns='col', aggfunc='sum', fill_value=0)
.....:
Out[124]:
col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24

Another aggregation we can do is to calculate the frequency with which the columns and rows occur together, a.k.a. a
"cross tabulation". To do this, we can pass size to the aggfunc parameter.

In [125]: df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size')


Out[125]:
col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1

Pivoting with multiple aggregations

We can also perform multiple aggregations. For example, to perform both a sum and mean, we can pass in a list to
the aggfunc argument.

In [126]: df.pivot_table(
.....: values='val0', index='row', columns='col', aggfunc=['mean', 'sum'])
.....:
Out[126]:
mean sum
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.77 1.21 NaN 0.86 0.65
row2 0.13 NaN 0.395 0.500 0.25 0.13 NaN 0.79 0.50 0.50
row3 NaN 0.310 NaN 0.545 NaN NaN 0.31 NaN 1.09 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.10 0.79 1.52 0.24
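
aggfunc can also be a dict mapping each value column to its own aggregation. A hedged sketch, not taken from
the original examples:

df.pivot_table(index='row', columns='col',
               aggfunc={'val0': 'mean', 'val1': 'sum'})
# val0 is averaged and val1 is summed, each in its own block of columns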

Note that to aggregate over multiple value columns, we can pass in a list to the values parameter.


In [127]: df.pivot_table(
.....: values=['val0', 'val1'], index='row', columns='col', aggfunc=['mean'])
.....:
Out[127]:
mean
val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.01 0.745 NaN 0.010 0.02
row2 0.13 NaN 0.395 0.500 0.25 0.45 NaN 0.34 0.440 0.79
row3 NaN 0.310 NaN 0.545 NaN NaN 0.230 NaN 0.075 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.070 0.42 0.300 0.46

Note that to subdivide over multiple columns, we can pass in a list to the columns parameter.
In [128]: df.pivot_table(
.....: values=['val0'], index='row', columns=['item', 'col'], aggfunc=['mean'])
.....:
Out[128]:
mean
val0
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 NaN NaN NaN 0.77 NaN NaN NaN NaN NaN 0.605 0.86 0.65
row2 0.35 NaN 0.37 NaN NaN 0.44 NaN NaN 0.13 NaN 0.50 0.13
row3 NaN NaN NaN NaN 0.31 NaN 0.81 NaN NaN NaN 0.28 NaN
row4 0.15 0.64 NaN NaN 0.10 0.64 0.88 0.24 NaN NaN NaN NaN
3.5.11 Exploding a list-like column

New in version 0.25.0.


Sometimes the values in a column are list-like.
In [129]: keys = ['panda1', 'panda2', 'panda3']

In [130]: values = [['eats', 'shoots'], ['shoots', 'leaves'], ['eats', 'leaves']]

In [131]: df = pd.DataFrame({'keys': keys, 'values': values})

In [132]: df
Out[132]:
keys values
0 panda1 [eats, shoots]
1 panda2 [shoots, leaves]
2 panda3 [eats, leaves]

We can ‘explode’ the values column, transforming each list-like to a separate row, by using explode(). This will
replicate the index values from the original row:
In [133]: df['values'].explode()
Out[133]:
0 eats
0 shoots
1 shoots


1 leaves
2 eats
2 leaves
Name: values, dtype: object

You can also explode the column in the DataFrame.

In [134]: df.explode('values')
Out[134]:
keys values
0 panda1 eats
0 panda1 shoots
1 panda2 shoots
1 panda2 leaves
2 panda3 eats
2 panda3 leaves

Series.explode() will replace empty lists with np.nan and preserve scalar entries. The dtype of the resulting
Series is always object.

In [135]: s = pd.Series([[1, 2, 3], 'foo', [], ['a', 'b']])

In [136]: s
Out[136]:
0 [1, 2, 3]
1 foo
2 []
3 [a, b]
dtype: object

In [137]: s.explode()
Out[137]:
0 1
0 2
0 3
1 foo
2 NaN
3 a
3 b
dtype: object

Here is a typical use case: you have comma-separated strings in a column and want to expand them.

In [138]: df = pd.DataFrame([{'var1': 'a,b,c', 'var2': 1},


.....: {'var1': 'd,e,f', 'var2': 2}])
.....:

In [139]: df
Out[139]:
var1 var2
0 a,b,c 1
1 d,e,f 2

Creating a long-form DataFrame is now straightforward using explode and chained operations:


In [140]: df.assign(var1=df.var1.str.split(',')).explode('var1')
Out[140]:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2

3.6 Working with text data

3.6.1 Text Data Types

New in version 1.0.0.


There are two ways to store text data in pandas:
1. object -dtype NumPy array.
2. StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate for many reasons:
1. You can accidentally store a mixture of strings and non-strings in an object dtype array. It’s better to have a
dedicated dtype.
2. object dtype breaks dtype-specific operations like DataFrame.select_dtypes(). There isn’t a clear
way to select just text while excluding non-text but still object-dtype columns.
3. When reading code, the contents of an object dtype array is less clear than 'string'.
Currently, the performance of object dtype arrays of strings and arrays.StringArray are about the same.
We expect future enhancements to significantly increase the performance and lower the memory overhead of
StringArray.

Warning: StringArray is currently considered experimental. The implementation and parts of the API may
change without warning.

For backwards-compatibility, object dtype remains the default type we infer a list of strings to

In [1]: pd.Series(['a', 'b', 'c'])


Out[1]:
0 a
1 b
2 c
dtype: object

To explicitly request string dtype, specify the dtype

In [2]: pd.Series(['a', 'b', 'c'], dtype="string")


Out[2]:
0 a


1 b
2 c
dtype: string

In [3]: pd.Series(['a', 'b', 'c'], dtype=pd.StringDtype())


Out[3]:
0 a
1 b
2 c
dtype: string

Or astype after the Series or DataFrame is created

In [4]: s = pd.Series(['a', 'b', 'c'])

In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object

In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string

Behavior differences

These are places where the behavior of StringDtype objects differs from object dtype
1. For StringDtype, string accessor methods that return numeric output will always return a nullable integer
dtype, rather than either int or float dtype, depending on the presence of NA values. Methods returning boolean
output will return a nullable boolean dtype.

In [7]: s = pd.Series(["a", None, "b"], dtype="string")

In [8]: s
Out[8]:
0 a
1 <NA>
2 b
dtype: string

In [9]: s.str.count("a")
Out[9]:
0 1
1 <NA>
2 0
dtype: Int64

In [10]: s.dropna().str.count("a")
Out[10]:


0 1
2 0
dtype: Int64

Both outputs are Int64 dtype. Compare that with object-dtype

In [11]: s2 = pd.Series(["a", None, "b"], dtype="object")

In [12]: s2.str.count("a")
Out[12]:
0 1.0
1 NaN
2 0.0
dtype: float64

In [13]: s2.dropna().str.count("a")
Out[13]:
0 1
2 0
dtype: int64

When NA values are present, the output dtype is float64. Similarly for methods returning boolean values.

In [14]: s.str.isdigit()
Out[14]:
0 False
1 <NA>
2 False
dtype: boolean

In [15]: s.str.match("a")
Out[15]:
0 True
1 <NA>
2 False
dtype: boolean

2. Some string methods, like Series.str.decode() are not available on StringArray because
StringArray only holds strings, not bytes.
3. In comparison operations, arrays.StringArray and Series backed by a StringArray will return
an object with BooleanDtype, rather than a bool dtype object. Missing values in a StringArray will
propagate in comparison operations, rather than always comparing unequal like numpy.nan.
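
A minimal sketch of the third point (the output shown is the expected result, for illustration; the variable name is arbitrary):

s_cmp = pd.Series(["a", None, "b"], dtype="string")
s_cmp == "a"
# 0     True
# 1     <NA>
# 2    False
# dtype: boolean
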
Everything else that follows in the rest of this document applies equally to string and object dtype.

3.6.2 String Methods

Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of
the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via
the str attribute and generally have names matching the equivalent (scalar) built-in string methods:

In [16]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'],
....: dtype="string")
....:

In [17]: s.str.lower()
Out[17]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string

In [18]: s.str.upper()
Out[18]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string

In [19]: s.str.len()
Out[19]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64

In [20]: idx = pd.Index([' jack', 'jill ', ' jesse ', 'frank'])

In [21]: idx.str.strip()
Out[21]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')

In [22]: idx.str.lstrip()
Out[22]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')

In [23]: idx.str.rstrip()
Out[23]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')

The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance,
you may have columns with leading or trailing whitespace:

In [24]: df = pd.DataFrame(np.random.randn(3, 2),


....: columns=[' Column A ', ' Column B '], index=range(3))


....:

In [25]: df
Out[25]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215

Since df.columns is an Index object, we can use the .str accessor

In [26]: df.columns.str.strip()
Out[26]: Index(['Column A', 'Column B'], dtype='object')

In [27]: df.columns.str.lower()
Out[27]: Index([' column a ', ' column b '], dtype='object')

These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing
whitespaces, lower casing all names, and replacing any remaining whitespaces with underscores:

In [28]: df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_')

In [29]: df
Out[29]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215

Note: If you have a Series where lots of elements are repeated (i.e. the number of unique elements in the
Series is a lot smaller than the length of the Series), it can be faster to convert the original Series to one of
type category and then use .str.<method> or .dt.<property> on that. The performance difference comes
from the fact that, for Series of type category, the string operations are done on the .categories and not on
each element of the Series.
Please note that a Series of type category with string .categories has some limitations in comparison to
Series of type string (e.g. you can’t add strings to each other: s + " " + s won’t work if s is a Series of type
category). Also, .str methods which operate on elements of type list are not available on such a Series.

Warning: Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with v.0.25.0,
the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few exceptions, other uses
are not supported, and may be disabled at a later point.


3.6.3 Splitting and replacing strings

Methods like split return a Series of lists:

In [30]: s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'], dtype="string")

In [31]: s2.str.split('_')
Out[31]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object

Elements in the split lists can be accessed using get or [] notation:

In [32]: s2.str.split('_').str.get(1)
Out[32]:
0 b
1 d
2 <NA>
3 g
dtype: object

In [33]: s2.str.split('_').str[1]
Out[33]:
0 b
1 d
2 <NA>
3 g
dtype: object

It is easy to expand this to return a DataFrame using expand.

In [34]: s2.str.split('_', expand=True)


Out[34]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h

When original Series has StringDtype, the output columns will all be StringDtype as well.
It is also possible to limit the number of splits:

In [35]: s2.str.split('_', expand=True, n=1)


Out[35]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h

rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning
of the string:


In [36]: s2.str.rsplit('_', expand=True, n=1)


Out[36]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h

replace by default replaces regular expressions:

In [37]: s3 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca',


....: '', np.nan, 'CABA', 'dog', 'cat'],
....: dtype="string")
....:

In [38]: s3
Out[38]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string

In [39]: s3.str.replace('^.a|dog', 'XX-XX ', case=False)
Out[39]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string

Some caution must be taken to keep regular expressions in mind! For example, the following code will cause trouble
because of the regular expression meaning of $:

# Consider the following badly formatted financial data


In [40]: dollars = pd.Series(['12', '-$10', '$10,000'], dtype="string")

# This does what you'd naively expect:


In [41]: dollars.str.replace('$', '')
Out[41]:
0 12
1 -10
2 10,000
dtype: string



# But this doesn't:
In [42]: dollars.str.replace('-$', '-')
Out[42]:
0 12
1 -$10
2 $10,000
dtype: string

# We need to escape the special character (for >1 len patterns)


In [43]: dollars.str.replace(r'-\$', '-')
Out[43]:
0 12
1 -10
2 $10,000
dtype: string

New in version 0.23.0.


If you do want literal replacement of a string (equivalent to str.replace()), you can set the optional regex
parameter to False, rather than escaping each character. In this case both pat and repl must be strings:
# These lines are equivalent
In [44]: dollars.str.replace(r'-\$', '-')
Out[44]:
0 12
1 -10
2 $10,000
dtype: string

In [45]: dollars.str.replace('-$', '-', regex=False)
Out[45]:
0 12
1 -10
2 $10,000
dtype: string

The replace method can also take a callable as replacement. It is called on every pat using re.sub(). The
callable should expect one positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [46]: pat = r'[a-z]+'

In [47]: def repl(m):


....: return m.group(0)[::-1]
....:

In [48]: pd.Series(['foo 123', 'bar baz', np.nan],


....: dtype="string").str.replace(pat, repl)
....:
Out[48]:
0 oof 123
1 rab zab
2 <NA>
dtype: string

# Using regex groups


In [49]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"

In [50]: def repl(m):


....: return m.group('two').swapcase()
....:

In [51]: pd.Series(['Foo Bar Baz', np.nan],


....: dtype="string").str.replace(pat, repl)
....:
Out[51]:
0 bAR
1 <NA>
dtype: string

The replace method also accepts a compiled regular expression object from re.compile() as a pattern. All
flags should be included in the compiled regular expression object.

In [52]: import re

In [53]: regex_pat = re.compile(r'^.a|dog', flags=re.IGNORECASE)

In [54]: s3.str.replace(regex_pat, 'XX-XX ')


Out[54]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string

Including a flags argument when calling replace with a compiled regular expression object will raise a
ValueError.

In [55]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)


---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex


3.6.4 Concatenation

There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(), resp.
Index.str.cat.

Concatenating a single Series into a string

The content of a Series (or Index) can be concatenated:


In [56]: s = pd.Series(['a', 'b', 'c', 'd'], dtype="string")

In [57]: s.str.cat(sep=',')
Out[57]: 'a,b,c,d'

If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [58]: s.str.cat()
Out[58]: 'abcd'

By default, missing values are ignored. Using na_rep, they can be given a representation:
In [59]: t = pd.Series(['a', 'b', np.nan, 'd'], dtype="string")

In [60]: t.str.cat(sep=',')
Out[60]: 'a,b,d'

In [61]: t.str.cat(sep=',', na_rep='-')


Out[61]: 'a,b,-,d'

Concatenating a Series and something list-like into a Series

The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or
Index).
In [62]: s.str.cat(['A', 'B', 'C', 'D'])
Out[62]:
0 aA
1 bB
2 cC
3 dD
dtype: string

Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [63]: s.str.cat(t)
Out[63]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string

In [64]: s.str.cat(t, na_rep='-')


Out[64]:
0 aa


1 bb
2 c-
3 dd
dtype: string

Concatenating a Series and something array-like into a Series

New in version 0.23.0.


The parameter others can also be two-dimensional. In this case, the number of rows must match the length of the
calling Series (or Index).

In [65]: d = pd.concat([t, s], axis=1)

In [66]: s
Out[66]:
0 a
1 b
2 c
3 d
dtype: string

In [67]: d
Out[67]:
0 1
0 a a
1 b b
2 <NA> c
3 d d

In [68]: s.str.cat(d, na_rep='-')


Out[68]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string

Concatenating a Series and an indexed object into a Series, with alignment

New in version 0.23.0.


For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.

In [69]: u = pd.Series(['b', 'd', 'a', 'c'], index=[1, 3, 0, 2],


....: dtype="string")
....:

In [70]: s
Out[70]:
0 a
1 b
2 c


3 d
dtype: string

In [71]: u
Out[71]:
1 b
3 d
0 a
2 c
dtype: string

In [72]: s.str.cat(u)
Out[72]:
0 aa
1 bb
2 cc
3 dd
dtype: string

In [73]: s.str.cat(u, join='left')


Out[73]:
0 aa
1 bb
2 cc
3 dd
dtype: string

Warning: If the join keyword is not passed, the method cat() will currently fall back to the behavior before
version 0.23.0 (i.e. no alignment), but a FutureWarning will be raised if any of the involved indexes differ,
since this default will change to join='left' in a future version.

The usual options are available for join (one of 'left', 'outer', 'inner', 'right'). In particular,
alignment also means that the different lengths do not need to coincide anymore.

In [74]: v = pd.Series(['z', 'a', 'b', 'd', 'e'], index=[-1, 0, 1, 3, 4],


....: dtype="string")
....:

In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string

In [76]: v
Out[76]:
-1 z
0 a
1 b
3 d
4 e
dtype: string

In [77]: s.str.cat(v, join='left', na_rep='-')


Out[77]:
0 aa
1 bb
2 c-
3 dd
dtype: string

In [78]: s.str.cat(v, join='outer', na_rep='-')


Out[78]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string

The same alignment can be used when others is a DataFrame:

In [79]: f = d.loc[[3, 2, 1, 0], :]

In [80]: s
Out[80]:
0 a
1 b
2 c
3 d
dtype: string

In [81]: f
Out[81]:
0 1
3 d d
2 <NA> c
1 b b
0 a a

In [82]: s.str.cat(f, join='left', na_rep='-')


Out[82]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string


Concatenating a Series and many objects into a Series

Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray) can be com-
bined in a list-like container (including iterators, dict-views, etc.).

In [83]: s
Out[83]:
0 a
1 b
2 c
3 d
dtype: string

In [84]: u
Out[84]:
1 b
3 d
0 a
2 c
dtype: string

In [85]: s.str.cat([u, u.to_numpy()], join='left')


Out[85]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling
Series (or Index), but Series and Index may have arbitrary length (as long as alignment is not disabled with
join=None):

In [86]: v
Out[86]:
-1 z
0 a
1 b
3 d
4 e
dtype: string

In [87]: s.str.cat([v, u, u.to_numpy()], join='outer', na_rep='-')


Out[87]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string

If using join='right' on a list-like of others that contains different indexes, the union of these indexes will be
used as the basis for the final concatenation:

In [88]: u.loc[[3]]
Out[88]:


3 d
dtype: string

In [89]: v.loc[[-1, 0]]


Out[89]:
-1 z
0 a
dtype: string

In [90]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join='right', na_rep='-')


Out[90]:
-1 --z
0 a-a
3 dd-
dtype: string

3.6.5 Indexing with .str

You can use [] notation to directly index by position locations. If you index past the end of the string, the result will
be a NaN.

In [91]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan,


....: 'CABA', 'dog', 'cat'],
....: dtype="string")
....:
In [92]: s.str[0]
Out[92]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string

In [93]: s.str[1]
Out[93]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string


3.6.6 Extracting substrings

Extract first match in each subject (extract)

Warning: Before version 0.23, argument expand of the extract method defaulted to False. When
expand=False, expand returns a Series, Index, or DataFrame, depending on the subject and regu-
lar expression pattern. When expand=True, it always returns a DataFrame, which is more consistent and less
confusing from the perspective of a user. expand=True has been the default since version 0.23.0.

The extract method accepts a regular expression with at least one capture group.
Extracting a regular expression with more than one group returns a DataFrame with one column per group.

In [94]: pd.Series(['a1', 'b2', 'c3'],


....: dtype="string").str.extract(r'([ab])(\d)', expand=False)
....:
Out[94]:
0 1
0 a 1
1 b 2
2 <NA> <NA>

Elements that do not match return a row filled with NaN. Thus, a Series of messy strings can be “converted” into a
like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples
or re.match objects. The dtype of the result is always object, even if no match is found and the result only contains
NaN.
Named groups like

In [95]: pd.Series(['a1', 'b2', 'c3'],


....: dtype="string").str.extract(r'(?P<letter>[ab])(?P<digit>\d)',
....: expand=False)
....:
Out[95]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>

and optional groups like

In [96]: pd.Series(['a1', 'b2', '3'],


....: dtype="string").str.extract(r'([ab])?(\d)', expand=False)
....:
Out[96]:
0 1
0 a 1
1 b 2
2 <NA> 3

can also be used. Note that any capture group names in the regular expression will be used for column names;
otherwise capture group numbers will be used.
Extracting a regular expression with one group returns a DataFrame with one column if expand=True.


In [97]: pd.Series(['a1', 'b2', 'c3'],


....: dtype="string").str.extract(r'[ab](\d)', expand=True)
....:
Out[97]:
0
0 1
1 2
2 <NA>

It returns a Series if expand=False.

In [98]: pd.Series(['a1', 'b2', 'c3'],


....: dtype="string").str.extract(r'[ab](\d)', expand=False)
....:
Out[98]:
0 1
1 2
2 <NA>
dtype: string

Calling on an Index with a regex with exactly one capture group returns a DataFrame with one column if
expand=True.

In [99]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"],


....: dtype="string")
....:

In [100]: s
Out[100]:
A11 a1
B22 b2
C33 c3
dtype: string

In [101]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)


Out[101]:
letter
0 A
1 B
2 C

It returns an Index if expand=False.

In [102]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)


Out[102]: Index(['A', 'B', 'C'], dtype='object', name='letter')

Calling on an Index with a regex with more than one capture group returns a DataFrame if expand=True.

In [103]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)


Out[103]:
letter 1
0 A 11
1 B 22
2 C 33

It raises ValueError if expand=False.


>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)


ValueError: only one regex group is supported with Index

The table below summarizes the behavior of extract(expand=False) (input subject in first column, number of
groups in regex in first row)

           1 group     >1 group
Index      Index       ValueError
Series     Series      DataFrame

Extract all matches in each subject (extractall)

Unlike extract (which returns only the first match),


In [104]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"],
.....: dtype="string")
.....:

In [105]: s
Out[105]:
A a1a2
B b1
C c1
dtype: string

In [106]: two_groups = '(?P<letter>[a-z])(?P<digit>[0-9])'


In [107]: s.str.extract(two_groups, expand=True)
Out[107]:
letter digit
A a 1
B b 1
C c 1

the extractall method returns every match. The result of extractall is always a DataFrame with a
MultiIndex on its rows. The last level of the MultiIndex is named match and indicates the order in the
subject.
In [108]: s.str.extractall(two_groups)
Out[108]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1

When each subject string in the Series has exactly one match,
In [109]: s = pd.Series(['a3', 'b3', 'c2'], dtype="string")

In [110]: s
Out[110]:
0 a3
1 b3


2 c2
dtype: string

then extractall(pat).xs(0, level='match') gives the same result as extract(pat).

In [111]: extract_result = s.str.extract(two_groups, expand=True)

In [112]: extract_result
Out[112]:
letter digit
0 a 3
1 b 3
2 c 2

In [113]: extractall_result = s.str.extractall(two_groups)

In [114]: extractall_result
Out[114]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2

In [115]: extractall_result.xs(0, level="match")


Out[115]:
letter digit
0 a 3
1 b 3
2 c 2

Index also supports .str.extractall. It returns a DataFrame which has the same result as a Series.str.
extractall with a default index (starts from 0).

In [116]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)


Out[116]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1

In [117]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)


Out[117]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1


3.6.7 Testing for Strings that match or contain a pattern

You can check whether elements contain a pattern:

In [118]: pattern = r'[0-9][a-z]'

In [119]: pd.Series(['1', '2', '3a', '3b', '03c'],


.....: dtype="string").str.contains(pattern)
.....:
Out[119]:
0 False
1 False
2 True
3 True
4 True
dtype: boolean

Or whether elements match a pattern:

In [120]: pd.Series(['1', '2', '3a', '3b', '03c'],


.....: dtype="string").str.match(pattern)
.....:
Out[120]:
0 False
1 False
2 True
3 True
4 False
dtype: boolean
The distinction between match and contains is strictness: match relies on strict re.match, while contains
relies on re.search.
Methods like match, contains, startswith, and endswith take an extra na argument so missing values can
be considered True or False:

In [121]: s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'],

.....: dtype="string")
.....:

In [122]: s4.str.contains('A', na=False)


Out[122]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
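
startswith and endswith accept the same na keyword. A quick sketch with the same s4 (the behavior described
in the comments is what we would expect, for illustration):

s4.str.startswith('A', na=False)
# positions 0 and 3 ('A' and 'Aaba') are True; the missing value at position 5
# is reported as False rather than <NA>
# dtype: boolean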


3.6.8 Creating indicator variables

You can extract dummy variables from string columns. For example, if they are separated by a '|':

In [123]: s = pd.Series(['a', 'a|b', np.nan, 'a|c'], dtype="string")

In [124]: s.str.get_dummies(sep='|')
Out[124]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1

String Index also supports get_dummies which returns a MultiIndex.

In [125]: idx = pd.Index(['a', 'a|b', np.nan, 'a|c'])

In [126]: idx.str.get_dummies(sep='|')
Out[126]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])

See also get_dummies().

3.6.9 Method summary

Method Description
cat() Concatenate strings
split() Split strings on delimiter
rsplit() Split strings on delimiter working from the end of the string
get() Index into each element (retrieve i-th element)
join() Join strings in each element of the Series with passed separator
get_dummies() Split strings on the delimiter returning DataFrame of dummy variables
contains() Return boolean array if each string contains pattern/regex
replace() Replace occurrences of pattern/regex/string with some other string or the return value of a
callable given the occurrence
repeat() Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad() Add whitespace to left, right, or both sides of strings
center() Equivalent to str.center
ljust() Equivalent to str.ljust
rjust() Equivalent to str.rjust
zfill() Equivalent to str.zfill
wrap() Split long strings into lines with length less than a given width
slice() Slice each string in the Series
slice_replace() Replace slice in each string with passed value
count() Count occurrences of pattern
startswith() Equivalent to str.startswith(pat) for each element
endswith() Equivalent to str.endswith(pat) for each element
findall() Compute list of all occurrences of pattern/regex for each string
match() Call re.match on each element, returning matched groups as list
extract() Call re.search on each element, returning DataFrame with one row for each element
and one column for each regex capture group
extractall() Call re.findall on each element, returning DataFrame with one row for each match
and one column for each regex capture group
len() Compute string lengths
strip() Equivalent to str.strip
rstrip() Equivalent to str.rstrip
lstrip() Equivalent to str.lstrip
partition() Equivalent to str.partition
rpartition() Equivalent to str.rpartition
lower() Equivalent to str.lower
casefold() Equivalent to str.casefold
upper() Equivalent to str.upper
find() Equivalent to str.find
rfind() Equivalent to str.rfind
index() Equivalent to str.index
rindex() Equivalent to str.rindex
capitalize() Equivalent to str.capitalize
swapcase() Equivalent to str.swapcase
normalize() Return Unicode normal form. Equivalent to unicodedata.normalize
translate() Equivalent to str.translate
isalnum() Equivalent to str.isalnum
[email protected] Equivalent to str.isalpha
T56GZSRVAH isalpha()
isdigit() Equivalent to str.isdigit
isspace() Equivalent to str.isspace
islower() Equivalent to str.islower
isupper() Equivalent to str.isupper
istitle() Equivalent to str.istitle
isnumeric() Equivalent to str.isnumeric
isdecimal() Equivalent to str.isdecimal
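
A brief, hedged illustration of a few of the padding and wrapping methods listed above (the Series and values here are made up for demonstration):

s = pd.Series(["cat", "dog"], dtype="string")

s.str.pad(7)              # pad on the left with spaces (default side='left')
s.str.center(7, '-')      # '--cat--', '--dog--'
s.str.zfill(5)            # '00cat', '00dog'
s.str.wrap(2)             # wrap long strings: 'ca\nt', 'do\ng'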

3.7 Working with missing data

In this section, we will discuss missing (also referred to as NA) values in pandas.

Note: The choice of using NaN internally to denote missing data was largely for simplicity and performance reasons.
Starting from pandas 1.0, some optional data types start experimenting with a native NA scalar using a mask-based
approach. See here for more.

See the cookbook for some advanced strategies.


3.7.1 Values considered “missing”

As data comes in many shapes and forms, pandas aims to be flexible with regard to handling missing data. While
NaN is the default missing value marker for reasons of computational speed and convenience, we need to be able to
easily detect this value with data of different types: floating point, integer, boolean, and general object. In many cases,
however, the Python None will arise and we wish to also consider that “missing” or “not available” or “NA”.

Note: If you want to consider inf and -inf to be “NA” in computations, you can set pandas.options.mode.
use_inf_as_na = True.
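
As a minimal, hedged sketch (not part of the original examples), the effect of that option can be seen with option_context:

with pd.option_context('mode.use_inf_as_na', True):
    print(pd.isna(pd.Series([1.0, np.inf, -np.inf, np.nan])))
# 0    False
# 1     True
# 2     True
# 3     True
# dtype: bool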

In [1]: df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],


...: columns=['one', 'two', 'three'])
...:

In [2]: df['four'] = 'bar'

In [3]: df['five'] = df['one'] > 0

In [4]: df
Out[4]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
c -1.135632 1.212112 -0.173215 bar False
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
h 0.721555 -0.706771 -1.039575 bar True
[email protected]
T56GZSRVAHIn [5]: df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
In [6]: df2
Out[6]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
b NaN NaN NaN NaN NaN
c -1.135632 1.212112 -0.173215 bar False
d NaN NaN NaN NaN NaN
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
g NaN NaN NaN NaN NaN
h 0.721555 -0.706771 -1.039575 bar True

To make detecting missing values easier (and across different array dtypes), pandas provides the isna() and
notna() functions, which are also methods on Series and DataFrame objects:
In [7]: df2['one']
Out[7]:
a 0.469112
b NaN
c -1.135632
d NaN
e 0.119209
f -2.104569
g NaN
h 0.721555
Name: one, dtype: float64

In [8]: pd.isna(df2['one'])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool

In [9]: df2['four'].notna()
Out[9]:
a True
b False
c True
d False
e True
f True
g False
h True
Name: four, dtype: bool

In [10]: df2.isna()
Out[10]:
one two three four five
a False False False False False
[email protected]
T56GZSRVAHb True True True True True
c False False False False False
d True True True True True
e False False False False False
f False False False False False
g True True True True True
h False False False False False

Warning: One has to be mindful that in Python (and NumPy), the nan's don’t compare equal, but None's do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None # noqa: E711
Out[11]: True

In [12]: np.nan == np.nan


Out[12]: False

So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful information.
In [13]: df2['one'] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool

Integer dtypes and missing data

Because NaN is a float, a column of integers with even one missing value is cast to floating-point dtype (see Support
for integer NA for more). Pandas provides a nullable integer array, which can be used by explicitly requesting the
dtype:
In [14]: pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
Out[14]:
0 1
1 2
2 <NA>
3 4
dtype: Int64

Alternatively, the string alias dtype='Int64' (note the capital "I") can be used.
See Nullable integer data type for more.

Datetimes

For datetime64[ns] types, NaT represents missing values. This is a pseudo-native sentinel value that can be represented
by NumPy in a singular dtype (datetime64[ns]). pandas objects provide compatibility between NaT and NaN.
In [15]: df2 = df.copy()
[email protected]
T56GZSRVAHIn [16]: df2['timestamp'] = pd.Timestamp('20120101')

In [17]: df2
Out[17]:
one two three four five timestamp
a 0.469112 -0.282863 -1.509059 bar True 2012-01-01
c -1.135632 1.212112 -0.173215 bar False 2012-01-01
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h 0.721555 -0.706771 -1.039575 bar True 2012-01-01

In [18]: df2.loc[['a', 'c', 'h'], ['one', 'timestamp']] = np.nan

In [19]: df2
Out[19]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT

In [20]: df2.dtypes.value_counts()
Out[20]:
float64 3
bool 1
object 1
datetime64[ns] 1
dtype: int64


3.7.2 Inserting missing data

You can insert missing values by simply assigning to containers. The actual missing value used will be chosen based
on the dtype.
For example, numeric containers will always use NaN regardless of the missing value type chosen:
In [21]: s = pd.Series([1, 2, 3])

In [22]: s.loc[0] = None

In [23]: s
Out[23]:
0 NaN
1 2.0
2 3.0
dtype: float64

Likewise, datetime containers will always use NaT.
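
A small, hedged sketch (the series here is made up): assigning None into a datetime64 container produces NaT rather than NaN.

dts = pd.Series(pd.date_range('2012-01-01', periods=3))

dts.loc[0] = None

dts
# 0          NaT
# 1   2012-01-02
# 2   2012-01-03
# dtype: datetime64[ns]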


For object containers, pandas will use the value given:
In [24]: s = pd.Series(["a", "b", "c"])

In [25]: s.loc[0] = None

In [26]: s.loc[1] = np.nan

In [27]: s
Out[27]:
[email protected]
T56GZSRVAH0 None
1 NaN
2 c
dtype: object

3.7.3 Calculations with missing data

Missing values propagate naturally through arithmetic operations between pandas objects.
In [28]: a
Out[28]:
one two
a NaN -0.282863
c NaN 1.212112
e 0.119209 -1.044236
f -2.104569 -0.494929
h -2.104569 -0.706771

In [29]: b
Out[29]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575

In [30]: a + b
Out[30]:
one three two
a NaN NaN -0.565727
c NaN NaN 2.424224
e 0.238417 NaN -2.088472
f -4.209138 NaN -0.989859
h NaN NaN -1.413542

The descriptive statistics and computational methods discussed in the data structure overview (and listed here and
here) are all written to account for missing data. For example:
• When summing data, NA (missing) values will be treated as zero.
• If the data are all NA, the result will be 0.
• Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the
resulting arrays. To override this behaviour and include NA values, use skipna=False.

In [31]: df
Out[31]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575

In [32]: df['one'].sum()
[email protected]
T56GZSRVAHOut[32]: -1.9853605075978744

In [33]: df.mean(1)
Out[33]:
a -0.895961
c 0.519449
e -0.595625
f -0.509232
h -0.873173
dtype: float64

In [34]: df.cumsum()
Out[34]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e 0.119209 -0.114987 -2.544122
f -1.985361 -0.609917 -1.472318
h NaN -1.316688 -2.511893

In [35]: df.cumsum(skipna=False)
Out[35]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e NaN -0.114987 -2.544122
f NaN -0.609917 -1.472318
h NaN -1.316688 -2.511893


3.7.4 Sum/prod of empties/nans

Warning: This behavior is now standard as of v0.22.0 and is consistent with the default in numpy; previously
sum/prod of all-NA or empty Series/DataFrames would return NaN. See v0.22.0 whatsnew for more.

The sum of an empty or all-NA Series or column of a DataFrame is 0.

In [36]: pd.Series([np.nan]).sum()
Out[36]: 0.0

In [37]: pd.Series([], dtype="float64").sum()


Out[37]: 0.0

The product of an empty or all-NA Series or column of a DataFrame is 1.

In [38]: pd.Series([np.nan]).prod()
Out[38]: 1.0

In [39]: pd.Series([], dtype="float64").prod()


Out[39]: 1.0

3.7.5 NA values in GroupBy

NA groups in GroupBy are automatically excluded. This behavior is consistent with R, for example:
[email protected]
T56GZSRVAHIn [40]: df
Out[40]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575

In [41]: df.groupby('one').mean()
Out[41]:
two three
one
-2.104569 -0.494929 1.071804
0.119209 -1.044236 -0.861849

See the groupby section here for more information.


Cleaning / filling missing data

pandas objects are equipped with various data manipulation methods for dealing with missing data.

3.7.6 Filling missing values: fillna

fillna() can “fill in” NA values with non-NA data in a couple of ways, which we illustrate:
Replace NA with a scalar value
In [42]: df2
Out[42]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT

In [43]: df2.fillna(0)
Out[43]:
one two three four five timestamp
a 0.000000 -0.282863 -1.509059 bar True 0
c 0.000000 1.212112 -0.173215 bar False 0
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01 00:00:00
f -2.104569 -0.494929 1.071804 bar False 2012-01-01 00:00:00
h 0.000000 -0.706771 -1.039575 bar True 0

[email protected]
In [44]: df2['one'].fillna('missing')
T56GZSRVAHOut[44]:
a missing
c missing
e 0.119209
f -2.10457
h missing
Name: one, dtype: object

Fill gaps forward or backward


Using the same filling arguments as reindexing, we can propagate non-NA values forward or backward:
In [45]: df
Out[45]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575

In [46]: df.fillna(method='pad')
Out[46]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h -2.104569 -0.706771 -1.039575


Limit the amount of filling


If we only want consecutive gaps filled up to a certain number of data points, we can use the limit keyword:

In [47]: df
Out[47]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN NaN NaN
f NaN NaN NaN
h NaN -0.706771 -1.039575

In [48]: df.fillna(method='pad', limit=1)


Out[48]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 1.212112 -0.173215
f NaN NaN NaN
h NaN -0.706771 -1.039575

To remind you, these are the available filling methods:

Method Action
pad / ffill Fill values forward
bfill / backfill Fill values backward

[email protected]
With time series data, using pad/ffill is extremely common so that the “last known value” is available at every time
T56GZSRVAH
point.
ffill() is equivalent to fillna(method='ffill') and bfill() is equivalent to fillna(method='bfill').
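
As a quick, hedged sketch using the df above, the two spellings produce the same result:

df.ffill()    # same as df.fillna(method='ffill')
df.bfill()    # same as df.fillna(method='bfill')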

3.7.7 Filling with a PandasObject

You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series must match the
columns of the frame you wish to fill. The use case of this is to fill a DataFrame with the mean of that column.

In [49]: dff = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))

In [50]: dff.iloc[3:5, 0] = np.nan

In [51]: dff.iloc[4:6, 1] = np.nan

In [52]: dff.iloc[5:8, 2] = np.nan

In [53]: dff
Out[53]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN NaN -1.157892
5 -1.344312 NaN NaN
6 -0.109050 1.643563 NaN
7 0.357021 -0.674600 NaN
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960

In [54]: dff.fillna(dff.mean())
Out[54]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960

In [55]: dff.fillna(dff.mean()['B':'C'])
Out[55]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
[email protected]
T56GZSRVAH6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960

Same result as above, but aligning the ‘fill’ value, which is a Series in this case.

In [56]: dff.where(pd.notna(dff), dff.mean(), axis='columns')


Out[56]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960


3.7.8 Dropping axis labels with missing data: dropna

You may wish to simply exclude labels from a data set which refer to missing data. To do this, use dropna():
In [57]: df
Out[57]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -0.706771 -1.039575

In [58]: df.dropna(axis=0)
Out[58]:
Empty DataFrame
Columns: [one, two, three]
Index: []

In [59]: df.dropna(axis=1)
Out[59]:
two three
a -0.282863 -1.509059
c 1.212112 -0.173215
e 0.000000 0.000000
f 0.000000 0.000000
h -0.706771 -1.039575

In [60]: df['one'].dropna()
[email protected]
Out[60]: Series([], Name: one, dtype: float64)
An equivalent dropna() is available for Series. DataFrame.dropna has considerably more options than Series.dropna, which can be examined in the API.

3.7.9 Interpolation

New in version 0.23.0: The limit_area keyword argument was added.


Both Series and DataFrame objects have interpolate() that, by default, performs linear interpolation at missing
data points.
In [61]: ts
Out[61]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64

In [62]: ts.count()
Out[62]: 66

In [63]: ts.plot()
Out[63]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d082ad750>

[email protected]
T56GZSRVAH

In [64]: ts.interpolate()
Out[64]:
2000-01-31 0.469112
2000-02-29 0.434469
2000-03-31 0.399826
2000-04-28 0.365184
2000-05-31 0.330541
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64

In [65]: ts.interpolate().count()
Out[65]: 100

In [66]: ts.interpolate().plot()
Out[66]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d08228c90>

[email protected]
T56GZSRVAH

Index aware interpolation is available via the method keyword:

In [67]: ts2
Out[67]:
2000-01-31 0.469112
2000-02-29 NaN
2002-07-31 -5.785037
2005-01-31 NaN
2008-04-30 -9.011531
dtype: float64

In [68]: ts2.interpolate()
Out[68]:
2000-01-31 0.469112
2000-02-29 -2.657962
2002-07-31 -5.785037
2005-01-31 -7.398284
2008-04-30 -9.011531
dtype: float64

In [69]: ts2.interpolate(method='time')
Out[69]:
2000-01-31 0.469112
2000-02-29 0.270241
2002-07-31 -5.785037
2005-01-31 -7.190866
2008-04-30 -9.011531
dtype: float64

For a floating-point index, use method='values':

In [70]: ser
Out[70]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64

In [71]: ser.interpolate()
Out[71]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64

In [72]: ser.interpolate(method='values')
Out[72]:
0.0 0.0
1.0 1.0
10.0 10.0
[email protected]
T56GZSRVAHdtype: float64

You can also interpolate with a DataFrame:

In [73]: df = pd.DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],


....: 'B': [.25, np.nan, np.nan, 4, 12.2, 14.4]})
....:

In [74]: df
Out[74]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40

In [75]: df.interpolate()
Out[75]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40

The method argument gives access to fancier interpolation methods. If you have scipy installed, you can pass the name of a 1-d interpolation routine to method. You’ll want to consult the full scipy interpolation documentation and
reference guide for details. The appropriate interpolation method will depend on the type of data you are working
with.
• If you are dealing with a time series that is growing at an increasing rate, method='quadratic' may be
appropriate.
• If you have values approximating a cumulative distribution function, then method='pchip' should work
well.
• To fill missing values with goal of smooth plotting, consider method='akima'.

Warning: These methods require scipy.

In [76]: df.interpolate(method='barycentric')
Out[76]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400

In [77]: df.interpolate(method='pchip')
Out[77]:
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000

In [78]: df.interpolate(method='akima')
Out[78]:
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000

When interpolating via a polynomial or spline approximation, you must also specify the degree or order of the approximation:

In [79]: df.interpolate(method='spline', order=2)


Out[79]:
A B
0 1.000000 0.250000
1 2.100000 -0.428598
2 3.404545 1.206900
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [80]: df.interpolate(method='polynomial', order=2)


Out[80]:
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000

Compare several methods:

In [81]: np.random.seed(2)

In [82]: ser = pd.Series(np.arange(1, 10.1, .25) ** 2 + np.random.randn(37))

In [83]: missing = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])

In [84]: ser[missing] = np.nan

In [85]: methods = ['linear', 'quadratic', 'cubic']

In [86]: df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})

In [87]: df.plot()
Out[87]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d0817d490>
[email protected]
T56GZSRVAH

528 Chapter 3. User Guide


This file is meant for personal use by [email protected] only.
Sharing or publishing the contents in part or full is liable for legal action.
pandas: powerful Python data analysis toolkit, Release 1.0.3

[email protected]
T56GZSRVAH

Another use case is interpolation at new values. Suppose you have 100 observations from some distribution. And let’s
suppose that you’re particularly interested in what’s happening around the middle. You can mix pandas’ reindex
and interpolate methods to interpolate at the new values.

In [88]: ser = pd.Series(np.sort(np.random.uniform(size=100)))

# interpolate at new_index
In [89]: new_index = ser.index | pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75])

In [90]: interp_s = ser.reindex(new_index).interpolate(method='pchip')

In [91]: interp_s[49:51]
Out[91]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64


Interpolation limits

Like other pandas fill methods, interpolate() accepts a limit keyword argument. Use this argument to limit
the number of consecutive NaN values filled since the last valid observation:
In [92]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan,
....: np.nan, 13, np.nan, np.nan])
....:

In [93]: ser
Out[93]:
0 NaN
1 NaN
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64

# fill all consecutive values in a forward direction


In [94]: ser.interpolate()
Out[94]:
0 NaN
1 NaN
2 5.0
3 7.0
[email protected]
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64

# fill one consecutive value in a forward direction


In [95]: ser.interpolate(limit=1)
Out[95]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 NaN
6 13.0
7 13.0
8 NaN
dtype: float64

By default, NaN values are filled in a forward direction. Use limit_direction parameter to fill backward or
from both directions.
# fill one consecutive value backwards
In [96]: ser.interpolate(limit=1, limit_direction='backward')
Out[96]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64

# fill one consecutive value in both directions


In [97]: ser.interpolate(limit=1, limit_direction='both')
Out[97]:
0 NaN
1 5.0
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 13.0
8 NaN
dtype: float64

# fill all consecutive values in both directions


In [98]: ser.interpolate(limit_direction='both')
Out[98]:
0 5.0
1 5.0
[email protected]
T56GZSRVAH 2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64

By default, NaN values are filled whether they are inside (surrounded by) existing valid values, or outside existing
valid values. Introduced in v0.23 the limit_area parameter restricts filling to either inside or outside values.

# fill one consecutive inside value in both directions


In [99]: ser.interpolate(limit_direction='both', limit_area='inside', limit=1)
Out[99]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64

# fill all consecutive outside values backward


In [100]: ser.interpolate(limit_direction='backward', limit_area='outside')
Out[100]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64

# fill all consecutive outside values in both directions


In [101]: ser.interpolate(limit_direction='both', limit_area='outside')
Out[101]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 13.0
8 13.0
dtype: float64

[email protected]
3.7.10 Replacing generic values
T56GZSRVAH
Oftentimes we want to replace arbitrary values with other values.
replace() in Series and replace() in DataFrame provides an efficient yet flexible way to perform such replace-
ments.
For a Series, you can replace a single value or a list of values by another value:

In [102]: ser = pd.Series([0., 1., 2., 3., 4.])

In [103]: ser.replace(0, 5)
Out[103]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64

You can replace a list of values by a list of other values:

In [104]: ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])


Out[104]:
0 4.0
1 3.0
2 2.0
3 1.0
4 0.0
dtype: float64


You can also specify a mapping dict:

In [105]: ser.replace({0: 10, 1: 100})


Out[105]:
0 10.0
1 100.0
2 2.0
3 3.0
4 4.0
dtype: float64

For a DataFrame, you can specify individual values by column:

In [106]: df = pd.DataFrame({'a': [0, 1, 2, 3, 4], 'b': [5, 6, 7, 8, 9]})

In [107]: df.replace({'a': 0, 'b': 5}, 100)


Out[107]:
a b
0 100 100
1 1 6
2 2 7
3 3 8
4 4 9

Instead of replacing with specified values, you can treat all given values as missing and interpolate over them:

In [108]: ser.replace([1, 2, 3], method='pad')


Out[108]:
0 0.0
[email protected]
1 0.0
2 0.0
3 0.0
4 4.0
dtype: float64

3.7.11 String/regular expression replacement

Note: Python strings prefixed with the r character such as r'hello world' are so-called “raw” strings. They
have different semantics regarding backslashes than strings without this prefix. Backslashes in raw strings will be
interpreted as an escaped backslash, e.g., r'\' == '\\'. You should read about them if this is unclear.
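
A short, hedged aside (not from the original examples) showing the difference the r prefix makes:

len(r'\n')        # 2 -- a backslash followed by 'n'
len('\n')         # 1 -- a single newline character
r'\d' == '\\d'    # True -- the raw form spells the two-character sequence directly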

Replace the ‘.’ with NaN (str -> str):

In [109]: d = {'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', np.nan, 'd']}

In [110]: df = pd.DataFrame(d)

In [111]: df.replace('.', np.nan)


Out[111]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d


Now do it with a regular expression that removes surrounding whitespace (regex -> regex):

In [112]: df.replace(r'\s*\.\s*', np.nan, regex=True)


Out[112]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d

Replace a few different values (list -> list):

In [113]: df.replace(['a', '.'], ['b', np.nan])


Out[113]:
a b c
0 0 b b
1 1 b b
2 2 NaN NaN
3 3 NaN d

list of regex -> list of regex:

In [114]: df.replace([r'\.', r'(a)'], ['dot', r'\1stuff'], regex=True)


Out[114]:
a b c
0 0 astuff astuff
1 1 b b
2 2 dot NaN
3 3 dot d
[email protected]
T56GZSRVAH
Only search in column 'b' (dict -> dict):

In [115]: df.replace({'b': '.'}, {'b': np.nan})


Out[115]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d

Same as the previous example, but use a regular expression for searching instead (dict of regex -> dict):

In [116]: df.replace({'b': r'\s*\.\s*'}, {'b': np.nan}, regex=True)


Out[116]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d

You can pass nested dictionaries of regular expressions that use regex=True:

In [117]: df.replace({'b': {'b': r''}}, regex=True)


Out[117]:
a b c
0 0 a a
1 1 b
2 2 . NaN
3 3 . d

Alternatively, you can pass the nested dictionary like so:

In [118]: df.replace(regex={'b': {r'\s*\.\s*': np.nan}})


Out[118]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d

You can also use the group of a regular expression match when replacing (dict of regex -> dict of regex), this works
for lists as well.

In [119]: df.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True)


Out[119]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d

You can pass a list of regular expressions, of which those that match will be replaced with a scalar (list of regex ->
regex).

[email protected]
In [120]: df.replace([r'\s*\.\s*', r'a|b'], np.nan, regex=True)
T56GZSRVAHOut[120]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d

All of the regular expression examples can also be passed with the to_replace argument as the regex argument.
In this case the value argument must be passed explicitly by name or regex must be a nested dictionary. The
previous example, in this case, would then be:

In [121]: df.replace(regex=[r'\s*\.\s*', r'a|b'], value=np.nan)


Out[121]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d

This can be convenient if you do not want to pass regex=True every time you want to use a regular expression.

Note: Anywhere in the above replace examples where you see a regular expression, a compiled regular expression is valid as well.
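
For instance, a minimal sketch (assuming the df from the examples above): a pre-compiled pattern can stand in for the string pattern.

import re

pat = re.compile(r'\s*\.\s*')
df.replace({'b': pat}, {'b': np.nan}, regex=True)   # same effect as passing the pattern as a string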


3.7.12 Numeric replacement

replace() is similar to fillna().

In [122]: df = pd.DataFrame(np.random.randn(10, 2))

In [123]: df[np.random.rand(df.shape[0]) > 0.5] = 1.5

In [124]: df.replace(1.5, np.nan)


Out[124]:
0 1
0 -0.844214 -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN

Replacing more than one value is possible by passing a list.

In [125]: df00 = df.iloc[0, 0]

In [126]: df.replace([1.5, df00], [np.nan, 'a'])


Out[126]:
0 1
[email protected]
T56GZSRVAH 0 a -1.02141
1 0.432396 -0.32358
2 0.423825 0.79918
3 1.26261 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.0608
7 0.591667 -0.183257
8 1.01985 -1.48247
9 NaN NaN

In [127]: df[1].dtype
Out[127]: dtype('float64')

You can also operate on the DataFrame in place:

In [128]: df.replace(1.5, np.nan, inplace=True)

Warning: When replacing multiple bool or datetime64 objects, the first argument to replace
(to_replace) must match the type of the value being replaced. For example,
>>> s = pd.Series([True, False, True])
>>> s.replace({'a string': 'new value', True: False}) # raises
TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'

will raise a TypeError because one of the dict keys is not of the correct type for replacement.
However, when replacing a single object such as,


In [129]: s = pd.Series([True, False, True])

In [130]: s.replace('a string', 'another string')


Out[130]:
0 True
1 False
2 True
dtype: bool

the original NDFrame object will be returned untouched. We’re working on unifying this API, but for backwards
compatibility reasons we cannot break the latter behavior. See GH6354 for more details.

Missing data casting rules and indexing

While pandas supports storing arrays of integer and boolean type, these types are not capable of storing missing data.
Until we can switch to using a native NA type in NumPy, we’ve established some “casting rules”. When a reindexing
operation introduces missing data, the Series will be cast according to the rules introduced in the table below.

data type Cast to
integer float
boolean object
float no cast
object no cast

For example:
[email protected]
T56GZSRVAH
In [131]: s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])

In [132]: s > 0
Out[132]:
0 True
2 True
4 True
6 True
7 True
dtype: bool

In [133]: (s > 0).dtype


Out[133]: dtype('bool')

In [134]: crit = (s > 0).reindex(list(range(8)))

In [135]: crit
Out[135]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
7 True
dtype: object

In [136]: crit.dtype
Out[136]: dtype('O')

Ordinarily NumPy will complain if you try to use an object array (even if it contains boolean values) instead of a
boolean array to get or set values from an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:

In [137]: reindexed = s.reindex(list(range(8))).fillna(0)

In [138]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-138-0dac417a4890> in <module>
----> 1 reindexed[crit]

/pandas/pandas/core/series.py in __getitem__(self, key)


905 key = list(key)
906
--> 907 if com.is_bool_indexer(key):
908 key = check_bool_indexer(self.index, key)
909

/pandas/pandas/core/common.py in is_bool_indexer(key)
134 na_msg = "Cannot mask with non-boolean array containing NA / NaN values"
135 if isna(key).any():
--> 136 raise ValueError(na_msg)
137 return False
138 return True

ValueError: Cannot mask with non-boolean array containing NA / NaN values

However, these can be filled in using fillna() and it will work fine:

In [139]: reindexed[crit.fillna(False)]
Out[139]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64

In [140]: reindexed[crit.fillna(True)]
Out[140]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
7 0.003659
dtype: float64

Pandas provides a nullable integer dtype, but you must explicitly request it when creating the series or column. Notice
that we use a capital “I” in the dtype="Int64".


In [141]: s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")

In [142]: s
Out[142]:
0 0
1 1
2 <NA>
3 3
4 4
dtype: Int64

See Nullable integer data type for more.

3.7.13 Experimental NA scalar to denote missing values

Warning: Experimental: the behaviour of pd.NA can still change without warning.

New in version 1.0.0.


Starting from pandas 1.0, an experimental pd.NA value (singleton) is available to represent scalar missing values. At
this moment, it is used in the nullable integer, boolean and dedicated string data types as the missing value indicator.
The goal of pd.NA is to provide a “missing” indicator that can be used consistently across data types (instead of np.nan, None or pd.NaT depending on the data type).
For example, when having missing values in a Series with the nullable integer dtype, it will use pd.NA:
[email protected]
T56GZSRVAH
In [143]: s = pd.Series([1, 2, None], dtype="Int64")

In [144]: s
Out[144]:
0 1
1 2
2 <NA>
dtype: Int64

In [145]: s[2]
Out[145]: <NA>

In [146]: s[2] is pd.NA


Out[146]: True

Currently, pandas does not yet use those data types by default (when creating a DataFrame or Series, or when reading
in data), so you need to specify the dtype explicitly. An easy way to convert to those dtypes is explained here.


Propagation in arithmetic and comparison operations

In general, missing values propagate in operations involving pd.NA. When one of the operands is unknown, the
outcome of the operation is also unknown.
For example, pd.NA propagates in arithmetic operations, similarly to np.nan:

In [147]: pd.NA + 1
Out[147]: <NA>

In [148]: "a" * pd.NA


Out[148]: <NA>

There are a few special cases when the result is known, even when one of the operands is NA.

In [149]: pd.NA ** 0
Out[149]: 1

In [150]: 1 ** pd.NA
Out[150]: 1

In equality and comparison operations, pd.NA also propagates. This deviates from the behaviour of np.nan, where
comparisons with np.nan always return False.

In [151]: pd.NA == 1
Out[151]: <NA>

In [152]: pd.NA == pd.NA


Out[152]: <NA>
[email protected]
T56GZSRVAH
In [153]: pd.NA < 2.5
Out[153]: <NA>

To check if a value is equal to pd.NA, the isna() function can be used:

In [154]: pd.isna(pd.NA)
Out[154]: True

An exception to this basic propagation rule is reductions (such as the mean or the minimum), where pandas defaults to skipping missing values. See above for more.
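
A brief, hedged sketch of that default:

s = pd.Series([1, 2, None], dtype="Int64")

s.mean()               # 1.5 -- the missing value is skipped by default
s.mean(skipna=False)   # the missing value is not skipped, so the result is missing as well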

Logical operations

For logical operations, pd.NA follows the rules of the three-valued logic (or Kleene logic, similarly to R, SQL and
Julia). This logic means to only propagate missing values when it is logically required.
For example, for the logical “or” operation (|), if one of the operands is True, we already know the result will be
True, regardless of the other value (so regardless of whether the missing value would be True or False). In this case, pd.NA
does not propagate:

In [155]: True | False


Out[155]: True

In [156]: True | pd.NA


Out[156]: True

In [157]: pd.NA | True
Out[157]: True

On the other hand, if one of the operands is False, the result depends on the value of the other operand. Therefore,
in this case pd.NA propagates:

In [158]: False | True


Out[158]: True

In [159]: False | False


Out[159]: False

In [160]: False | pd.NA


Out[160]: <NA>

The behaviour of the logical “and” operation (&) can be derived using similar logic (where now pd.NA will not
propagate if one of the operands is already False):

In [161]: False & True


Out[161]: False

In [162]: False & False


Out[162]: False

In [163]: False & pd.NA


Out[163]: False

[email protected]
In [164]: True & True
T56GZSRVAH
Out[164]: True

In [165]: True & False


Out[165]: False

In [166]: True & pd.NA


Out[166]: <NA>

NA in a boolean context

Since the actual value of an NA is unknown, it is ambiguous to convert NA to a boolean value. The following raises
an error:

In [167]: bool(pd.NA)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-167-5477a57d5abb> in <module>
----> 1 bool(pd.NA)

/pandas/pandas/_libs/missing.pyx in pandas._libs.missing.NAType.__bool__()

TypeError: boolean value of NA is ambiguous

This also means that pd.NA cannot be used in a context where it is evaluated to a boolean, such as if condition:
... where condition can potentially be pd.NA. In such cases, isna() can be used to check for pd.NA or
condition being pd.NA can be avoided, for example by filling missing values beforehand.
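
A minimal, hedged sketch of such a guard (the variable here is hypothetical):

value = pd.NA

if pd.isna(value):
    value = 0          # fill before branching so the if-test below is unambiguous

if value > 0:
    print("positive")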


A similar situation occurs when using Series or DataFrame objects in if statements, see Using if/truth statements with
pandas.

NumPy ufuncs

pandas.NA implements NumPy’s __array_ufunc__ protocol. Most ufuncs work with NA, and generally return
NA:
In [168]: np.log(pd.NA)
Out[168]: <NA>

In [169]: np.add(pd.NA, 1)
Out[169]: <NA>

Warning: Currently, ufuncs involving an ndarray and NA will return an object-dtype filled with NA values.
In [170]: a = np.array([1, 2, 3])

In [171]: np.greater(a, pd.NA)


Out[171]: array([<NA>, <NA>, <NA>], dtype=object)

The return type here may change to return a different array type in the future.

See DataFrame interoperability with NumPy functions for more on ufuncs.

[email protected]
Conversion
T56GZSRVAH
If you have a DataFrame or Series using traditional types that have missing data represented using np.nan, there are
convenience methods convert_dtypes() in Series and convert_dtypes() in DataFrame that can convert
data to use the newer dtypes for integers, strings and booleans listed here. This is especially helpful after reading in
data sets when letting the readers such as read_csv() and read_excel() infer default dtypes.
In this example, while the dtypes of all columns are changed, we show the results for the first 10 columns.
In [172]: bb = pd.read_csv('data/baseball.csv', index_col='id')

In [173]: bb[bb.columns[:10]].dtypes
Out[173]:
player object
year int64
stint int64
team object
lg object
g int64
ab int64
r int64
h int64
X2b int64
dtype: object

In [174]: bbn = bb.convert_dtypes()

In [175]: bbn[bbn.columns[:10]].dtypes
Out[175]:
player string
year Int64
stint Int64
team string
lg string
g Int64
ab Int64
r Int64
h Int64
X2b Int64
dtype: object

3.8 Categorical data

This is an introduction to pandas categorical data type, including a short comparison with R’s factor.
Categoricals are a pandas data type corresponding to categorical variables in statistics. A categorical variable takes
on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class,
blood type, country affiliation, observation time or rating via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or
‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, . . . ) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical
order of the values. Internally, the data structure consists of a categories array and an integer array of codes which
point to the real value in the categories array.
[email protected]
T56GZSRVAH
The categorical data type is useful in the following cases:
• A string variable consisting of only a few different values. Converting such a string variable to a categorical
variable will save some memory, see here.
• The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”). By converting to a
categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of
the lexical order, see here.
• As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use
suitable statistical methods or plot types).
See also the API docs on categoricals.
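
A rough, hedged illustration of the memory saving mentioned above (exact numbers vary by platform and pandas version):

s_obj = pd.Series(['low', 'medium', 'high'] * 1000)
s_cat = s_obj.astype('category')

s_obj.memory_usage(deep=True)   # object dtype stores one Python string per element
s_cat.memory_usage(deep=True)   # category stores small integer codes plus the 3 category values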

3.8.1 Object creation

Series creation

Categorical Series or columns in a DataFrame can be created in several ways:


By specifying dtype="category" when constructing a Series:
In [1]: s = pd.Series(["a", "b", "c", "a"], dtype="category")

In [2]: s
Out[2]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]

By converting an existing Series or column to a category dtype:

In [3]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})

In [4]: df["B"] = df["A"].astype('category')

In [5]: df
Out[5]:
A B
0 a a
1 b b
2 c c
3 a a

By using special functions, such as cut(), which groups data into discrete bins. See the example on tiling in the
docs.

In [6]: df = pd.DataFrame({'value': np.random.randint(0, 100, 20)})

In [7]: labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]

In [8]: df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)


[email protected]
T56GZSRVAHIn [9]: df.head(10)
Out[9]:
value group
0 65 60 - 69
1 49 40 - 49
2 56 50 - 59
3 43 40 - 49
4 43 40 - 49
5 91 90 - 99
6 32 30 - 39
7 87 80 - 89
8 36 30 - 39
9 8 0 - 9

By passing a pandas.Categorical object to a Series or assigning it to a DataFrame.

In [10]: raw_cat = pd.Categorical(["a", "b", "c", "a"], categories=["b", "c", "d"],


....: ordered=False)
....:

In [11]: s = pd.Series(raw_cat)

In [12]: s
Out[12]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b, c, d]

In [13]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})

In [14]: df["B"] = raw_cat

In [15]: df
Out[15]:
A B
0 a NaN
1 b b
2 c c
3 a NaN

Categorical data has a specific category dtype:

In [16]: df.dtypes
Out[16]:
A object
B category
dtype: object

DataFrame creation

Similar to the previous section where a single column was converted to categorical, all columns in a DataFrame can
[email protected]
T56GZSRVAHbe batch converted to categorical either during or after construction.
This can be done during construction by specifying dtype="category" in the DataFrame constructor:

In [17]: df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')}, dtype="category")

In [18]: df.dtypes
Out[18]:
A category
B category
dtype: object

Note that the categories present in each column differ; the conversion is done column by column, so only labels present
in a given column are categories:

In [19]: df['A']
Out[19]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): [a, b, c]

In [20]: df['B']
Out[20]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): [b, c, d]

New in version 0.23.0.


Analogously, all columns in an existing DataFrame can be batch converted using DataFrame.astype():

In [21]: df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')})

In [22]: df_cat = df.astype('category')

In [23]: df_cat.dtypes
Out[23]:
A category
B category
dtype: object

This conversion is likewise done column by column:

In [24]: df_cat['A']
Out[24]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): [a, b, c]
[email protected]
T56GZSRVAH
In [25]: df_cat['B']
Out[25]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): [b, c, d]

Controlling behavior

In the examples above where we passed dtype='category', we used the default behavior:
1. Categories are inferred from the data.
2. Categories are unordered.
To control those behaviors, instead of passing 'category', use an instance of CategoricalDtype.

In [26]: from pandas.api.types import CategoricalDtype

In [27]: s = pd.Series(["a", "b", "c", "a"])

In [28]: cat_type = CategoricalDtype(categories=["b", "c", "d"],


....: ordered=True)
....:

In [29]: s_cat = s.astype(cat_type)

In [30]: s_cat
Out[30]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b < c < d]

Similarly, a CategoricalDtype can be used with a DataFrame to ensure that categories are consistent among
all columns.

In [31]: from pandas.api.types import CategoricalDtype

In [32]: df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')})

In [33]: cat_type = CategoricalDtype(categories=list('abcd'),


....: ordered=True)
....:

In [34]: df_cat = df.astype(cat_type)

In [35]: df_cat['A']
Out[35]:
0 a
1 b
[email protected]
T56GZSRVAH2 c
3 a
Name: A, dtype: category
Categories (4, object): [a < b < c < d]

In [36]: df_cat['B']
Out[36]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (4, object): [a < b < c < d]

Note: To perform table-wise conversion, where all labels in the entire DataFrame are used as categories for each
column, the categories parameter can be determined programmatically by categories = pd.unique(df.
to_numpy().ravel()).
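
A hedged sketch of that table-wise conversion (assuming CategoricalDtype has been imported as above):

df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')})

all_labels = pd.unique(df.to_numpy().ravel())
df_cat = df.astype(CategoricalDtype(categories=all_labels))

df_cat['A'].cat.categories   # Index(['a', 'b', 'c', 'd'], dtype='object')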

If you already have codes and categories, you can use the from_codes() constructor to save the factorize
step during normal constructor mode:

In [37]: splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])

In [38]: s = pd.Series(pd.Categorical.from_codes(splitter,
....: categories=["train", "test"]))
....:


Regaining original data

To get back to the original Series or NumPy array, use Series.astype(original_dtype) or np.
asarray(categorical):

In [39]: s = pd.Series(["a", "b", "c", "a"])

In [40]: s
Out[40]:
0 a
1 b
2 c
3 a
dtype: object

In [41]: s2 = s.astype('category')

In [42]: s2
Out[42]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]

In [43]: s2.astype(str)
Out[43]:
0 a
[email protected]
T56GZSRVAH 1 b
2 c
3 a
dtype: object

In [44]: np.asarray(s2)
Out[44]: array(['a', 'b', 'c', 'a'], dtype=object)

Note: In contrast to R’s factor function, categorical data does not convert input values to strings; categories will end up the same data type as the original values.

Note: In contrast to R’s factor function, there is currently no way to assign/change labels at creation time. Use
categories to change the categories after creation time.


3.8.2 CategoricalDtype

Changed in version 0.21.0.


A categorical’s type is fully described by
1. categories: a sequence of unique values and no missing values
2. ordered: a boolean
This information can be stored in a CategoricalDtype. The categories argument is optional, which implies
that the actual categories should be inferred from whatever is present in the data when the pandas.Categorical
is created. The categories are assumed to be unordered by default.

In [45]: from pandas.api.types import CategoricalDtype

In [46]: CategoricalDtype(['a', 'b', 'c'])


Out[46]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=False)

In [47]: CategoricalDtype(['a', 'b', 'c'], ordered=True)


Out[47]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=True)

In [48]: CategoricalDtype()
Out[48]: CategoricalDtype(categories=None, ordered=False)

A CategoricalDtype can be used in any place pandas expects a dtype. For example pandas.read_csv(),
pandas.DataFrame.astype(), or in the Series constructor.
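
As a hedged sketch (the CSV content here is invented), a CategoricalDtype can be passed to a reader through the dtype argument:

from io import StringIO
from pandas.api.types import CategoricalDtype

csv_data = StringIO("grade\nb\na\nc\na\n")
dtype = CategoricalDtype(['a', 'b', 'c'], ordered=True)

pd.read_csv(csv_data, dtype={'grade': dtype})['grade']
# dtype: category
# Categories (3, object): [a < b < c]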

Note: As a convenience, you can use the string 'category' in place of a CategoricalDtype when you want
[email protected]
the default behavior of the categories being unordered, and equal to the set values present in the array. In other words,
dtype='category' is equivalent to dtype=CategoricalDtype().

Equality semantics

Two instances of CategoricalDtype compare equal whenever they have the same categories and order. When
comparing two unordered categoricals, the order of the categories is not considered.

In [49]: c1 = CategoricalDtype(['a', 'b', 'c'], ordered=False)

# Equal, since order is not considered when ordered=False


In [50]: c1 == CategoricalDtype(['b', 'c', 'a'], ordered=False)
Out[50]: True

# Unequal, since the second CategoricalDtype is ordered


In [51]: c1 == CategoricalDtype(['a', 'b', 'c'], ordered=True)
Out[51]: False

All instances of CategoricalDtype compare equal to the string 'category'.

In [52]: c1 == 'category'
Out[52]: True


Warning: Since dtype='category' is essentially CategoricalDtype(None, False), and since all instances of CategoricalDtype compare equal to 'category', all instances of CategoricalDtype compare equal to a CategoricalDtype(None, False), regardless of categories or ordered.

3.8.3 Description

Using describe() on categorical data will produce similar output to a Series or DataFrame of type string.

In [53]: cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])

In [54]: df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})

In [55]: df.describe()
Out[55]:
cat s
count 3 3
unique 2 2
top c c
freq 2 2

In [56]: df["cat"].describe()
Out[56]:
count 3
unique 2
top c
freq 2
[email protected]
Name: cat, dtype: object

3.8.4 Working with categories

Categorical data has a categories and an ordered property, which list the possible values and whether the ordering
matters or not. These properties are exposed as s.cat.categories and s.cat.ordered. If you don't manually
specify categories and ordering, they are inferred from the passed arguments.

In [57]: s = pd.Series(["a", "b", "c", "a"], dtype="category")

In [58]: s.cat.categories
Out[58]: Index(['a', 'b', 'c'], dtype='object')

In [59]: s.cat.ordered
Out[59]: False

It’s also possible to pass in the categories in a specific order:

In [60]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"],


....: categories=["c", "b", "a"]))
....:

In [61]: s.cat.categories
Out[61]: Index(['c', 'b', 'a'], dtype='object')

In [62]: s.cat.ordered
Out[62]: False


Note: New categorical data are not automatically ordered. You must explicitly pass ordered=True to indicate an
ordered Categorical.
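As a small sketch of the difference (values are arbitrary):

import pandas as pd

pd.Categorical(["a", "b", "a"]).ordered                 # False: unordered by default
pd.Categorical(["a", "b", "a"], ordered=True).ordered   # True: explicitly ordered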

Note: The result of unique() is not always the same as Series.cat.categories, because Series.
unique() has a couple of guarantees, namely that it returns categories in the order of appearance, and it only
includes values that are actually present.

In [63]: s = pd.Series(list('babc')).astype(CategoricalDtype(list('abcd')))

In [64]: s
Out[64]:
0 b
1 a
2 b
3 c
dtype: category
Categories (4, object): [a, b, c, d]

# categories
In [65]: s.cat.categories
Out[65]: Index(['a', 'b', 'c', 'd'], dtype='object')

# uniques
In [66]: s.unique()
Out[66]:
[b, a, c]
Categories (3, object): [b, a, c]

Renaming categories

Renaming categories is done by assigning new values to the Series.cat.categories property or by using the
rename_categories() method:

In [67]: s = pd.Series(["a", "b", "c", "a"], dtype="category")

In [68]: s
Out[68]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]

In [69]: s.cat.categories = ["Group %s" % g for g in s.cat.categories]

In [70]: s
Out[70]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]

In [71]: s = s.cat.rename_categories([1, 2, 3])

In [72]: s
Out[72]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [1, 2, 3]

# You can also pass a dict-like object to map the renaming


In [73]: s = s.cat.rename_categories({1: 'x', 2: 'y', 3: 'z'})

In [74]: s
Out[74]:
0 x
1 y
2 z
3 x
dtype: category
Categories (3, object): [x, y, z]

Note: In contrast to R’s factor, categorical data can have categories of other types than string.

Note: Be aware that assigning new categories is an in-place operation, while most other operations under Series.cat
by default return a new Series of dtype category.
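For illustration, a small sketch of the contrast (variable names are arbitrary):

import pandas as pd

s = pd.Series(["a", "b", "a"], dtype="category")

s2 = s.cat.rename_categories(["x", "y"])   # returns a new Series; s is unchanged
s.cat.categories = ["x", "y"]              # assignment modifies s in place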

Categories must be unique or a ValueError is raised:

In [75]: try:
....: s.cat.categories = [1, 1, 1]
....: except ValueError as e:
....: print("ValueError:", str(e))
....:
ValueError: Categorical categories must be unique

Categories must also not be NaN or a ValueError is raised:

In [76]: try:
....: s.cat.categories = [1, 2, np.nan]
....: except ValueError as e:
....: print("ValueError:", str(e))
....:
ValueError: Categorial categories cannot be null


Appending new categories

Appending categories can be done by using the add_categories() method:


In [77]: s = s.cat.add_categories([4])

In [78]: s.cat.categories
Out[78]: Index(['x', 'y', 'z', 4], dtype='object')

In [79]: s
Out[79]:
0 x
1 y
2 z
3 x
dtype: category
Categories (4, object): [x, y, z, 4]

Removing categories

Removing categories can be done by using the remove_categories() method. Values which are removed are
replaced by np.nan:
In [80]: s = s.cat.remove_categories([4])

In [81]: s
Out[81]:
0 x
1 y
2 z
3 x
dtype: category
Categories (3, object): [x, y, z]

Removing unused categories

Removing unused categories can also be done:


In [82]: s = pd.Series(pd.Categorical(["a", "b", "a"],
....: categories=["a", "b", "c", "d"]))
....:

In [83]: s
Out[83]:
0 a
1 b
2 a
dtype: category
Categories (4, object): [a, b, c, d]

In [84]: s.cat.remove_unused_categories()
Out[84]:
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]

Setting categories

If you want to remove and add new categories in one step (which has some speed advantage), or simply set the
categories to a predefined scale, use set_categories().

In [85]: s = pd.Series(["one", "two", "four", "-"], dtype="category")

In [86]: s
Out[86]:
0 one
1 two
2 four
3 -
dtype: category
Categories (4, object): [-, four, one, two]

In [87]: s = s.cat.set_categories(["one", "two", "three", "four"])

In [88]: s
Out[88]:
0 one
1 two
2 four
3 NaN
dtype: category
Categories (4, object): [one, two, three, four]

Note: Be aware that Categorical.set_categories() cannot know whether some category is omitted intentionally
or because it is misspelled or (under Python3) due to a type difference (e.g., NumPy S1 dtype and Python
strings). This can result in surprising behaviour!
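For example, a sketch of how a (hypothetical) misspelling silently turns values into NaN:

import pandas as pd

s = pd.Series(["apple", "banana", "apple"], dtype="category")

# "banana" is misspelled in the new categories, so those values become NaN
s.cat.set_categories(["apple", "bananna"])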

3.8.5 Sorting and order

If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and
certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.

In [89]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], ordered=False))

In [90]: s.sort_values(inplace=True)

In [91]: s = pd.Series(["a", "b", "c", "a"]).astype(


....: CategoricalDtype(ordered=True)
....: )
....:

In [92]: s.sort_values(inplace=True)

In [93]: s
Out[93]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]

In [94]: s.min(), s.max()


Out[94]: ('a', 'c')

You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered().
These will by default return a new object.

In [95]: s.cat.as_ordered()
Out[95]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]

In [96]: s.cat.as_unordered()
Out[96]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a, b, c]

Sorting will use the order defined by categories, not any lexical order present on the data type. This is even true for
strings and numeric data:

In [97]: s = pd.Series([1, 2, 3, 1], dtype="category")

In [98]: s = s.cat.set_categories([2, 3, 1], ordered=True)

In [99]: s
Out[99]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [100]: s.sort_values(inplace=True)

In [101]: s
Out[101]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [102]: s.min(), s.max()


Out[102]: (2, 1)

Reordering

Reordering the categories is possible via the Categorical.reorder_categories() and the
Categorical.set_categories() methods. For Categorical.reorder_categories(), all old
categories must be included in the new categories and no new categories are allowed. This will necessarily make the
sort order the same as the categories order.

In [103]: s = pd.Series([1, 2, 3, 1], dtype="category")

In [104]: s = s.cat.reorder_categories([2, 3, 1], ordered=True)

In [105]: s
Out[105]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [106]: s.sort_values(inplace=True)

In [107]: s
Out[107]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [108]: s.min(), s.max()


Out[108]: (2, 1)

Note: Note the difference between assigning new categories and reordering the categories: the first renames categories
and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still
be sorted last. Reordering means that the way values are sorted is different afterwards, but not that individual values
in the Series are changed.
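A small sketch of that distinction (values arbitrary):

import pandas as pd

s = pd.Series([1, 2, 3, 1], dtype="category")

# renaming changes the values themselves, not how they sort
s.cat.rename_categories(["x", "y", "z"])

# reordering keeps the values but changes how they sort
s.cat.reorder_categories([2, 3, 1], ordered=True)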

Note: If the Categorical is not ordered, Series.min() and Series.max() will raise TypeError. Nu-
meric operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to
compute the mean between two values if the length of an array is even) do not work and raise a TypeError.


Multi column sorting

A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns. The ordering
of the categorical is determined by the categories of that column.

In [109]: dfs = pd.DataFrame({'A': pd.Categorical(list('bbeebbaa'),


.....: categories=['e', 'a', 'b'],
.....: ordered=True),
.....: 'B': [1, 2, 1, 2, 2, 1, 2, 1]})
.....:

In [110]: dfs.sort_values(by=['A', 'B'])


Out[110]:
A B
2 e 1
3 e 2
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2

Reordering the categories changes a future sort.

In [111]: dfs['A'] = dfs['A'].cat.reorder_categories(['a', 'b', 'e'])

In [112]: dfs.sort_values(by=['A', 'B'])


Out[112]:
  A  B
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
2 e 1
3 e 2

3.8.6 Comparisons

Comparing categorical data with other objects is possible in three cases:


• Comparing equality (== and !=) to a list-like object (list, Series, array, ...) of the same length as the categorical data.
• All comparisons (==, !=, >, >=, <, and <=) of categorical data to another categorical Series, when
ordered==True and the categories are the same.
• All comparisons of a categorical data to a scalar.
All other comparisons, especially "non-equality" comparisons of two categoricals with different categories or a categorical with any list-like object, will raise a TypeError.

Note: Any "non-equality" comparisons of categorical data with a Series, np.array, list or categorical data
with different categories or ordering will raise a TypeError because custom categories ordering could be interpreted
in two ways: one that takes the ordering into account and one that does not.

In [113]: cat = pd.Series([1, 2, 3]).astype(


.....: CategoricalDtype([3, 2, 1], ordered=True)
.....: )
.....:

In [114]: cat_base = pd.Series([2, 2, 2]).astype(


.....: CategoricalDtype([3, 2, 1], ordered=True)
.....: )
.....:

In [115]: cat_base2 = pd.Series([2, 2, 2]).astype(


.....: CategoricalDtype(ordered=True)
.....: )
.....:

In [116]: cat
Out[116]:
0 1
1 2
2 3
dtype: category
Categories (3, int64): [3 < 2 < 1]

In [117]: cat_base
Out[117]:
0 2
1 2
2 2
dtype: category
Categories (3, int64): [3 < 2 < 1]

In [118]: cat_base2
Out[118]:
0 2
1 2
2 2
dtype: category
Categories (1, int64): [2]

Comparing to a categorical with the same categories and ordering or to a scalar works:

In [119]: cat > cat_base


Out[119]:
0 True
1 False
2 False
dtype: bool

In [120]: cat > 2


Out[120]:
0 True
1 False
2 False
dtype: bool

Equality comparisons work with any list-like object of same length and scalars:


In [121]: cat == cat_base


Out[121]:
0 False
1 True
2 False
dtype: bool

In [122]: cat == np.array([1, 2, 3])


Out[122]:
0 True
1 True
2 True
dtype: bool

In [123]: cat == 2
Out[123]:
0 False
1 True
2 False
dtype: bool

This doesn’t work because the categories are not the same:

In [124]: try:
.....: cat > cat_base2
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Categoricals can only be compared if 'categories' are the same. Categories are different lengths

If you want to do a “non-equality” comparison of a categorical series with a list-like object which is not categorical
data, you need to be explicit and convert the categorical data back to the original values:

In [125]: base = np.array([1, 2, 3])

In [126]: try:
.....: cat > base
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.

If you want to compare values, use 'np.asarray(cat) <op> other'.

In [127]: np.asarray(cat) > base


Out[127]: array([False, False, False])

When you compare two unordered categoricals with the same categories, the order is not considered:

In [128]: c1 = pd.Categorical(['a', 'b'], categories=['a', 'b'], ordered=False)

In [129]: c2 = pd.Categorical(['a', 'b'], categories=['b', 'a'], ordered=False)

In [130]: c1 == c2
Out[130]: array([ True, True])


3.8.7 Operations

Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with
categorical data:
Series methods like Series.value_counts() will use all categories, even if some categories are not present
in the data:

In [131]: s = pd.Series(pd.Categorical(["a", "b", "c", "c"],


.....: categories=["c", "a", "b", "d"]))
.....:

In [132]: s.value_counts()
Out[132]:
c 2
b 1
a 1
d 0
dtype: int64

Groupby will also show “unused” categories:

In [133]: cats = pd.Categorical(["a", "b", "b", "b", "c", "c", "c"],


.....: categories=["a", "b", "c", "d"])
.....:

In [134]: df = pd.DataFrame({"cats": cats, "values": [1, 2, 2, 2, 3, 4, 5]})

In [135]: df.groupby("cats").mean()
Out[135]:
      values
cats
a 1.0
b 2.0
c 4.0
d NaN

In [136]: cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])

In [137]: df2 = pd.DataFrame({"cats": cats2,


.....: "B": ["c", "d", "c", "d"],
.....: "values": [1, 2, 3, 4]})
.....:

In [138]: df2.groupby(["cats", "B"]).mean()


Out[138]:
values
cats B
a c 1.0
d 2.0
b c 3.0
d 4.0
c c NaN
d NaN

Pivot tables:


In [139]: raw_cat = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])

In [140]: df = pd.DataFrame({"A": raw_cat,


.....: "B": ["c", "d", "c", "d"],
.....: "values": [1, 2, 3, 4]})
.....:

In [141]: pd.pivot_table(df, values='values', index=['A', 'B'])


Out[141]:
values
A B
a c 1
d 2
b c 3
d 4

3.8.8 Data munging

The optimized pandas data access methods .loc, .iloc, .at, and .iat work as normal. The only difference is
the return type (for getting) and that only values already in categories can be assigned.

Getting

If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.

In [142]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])


In [143]: cats = pd.Series(["a", "b", "b", "b", "c", "c", "c"],
.....: dtype="category", index=idx)
.....:

In [144]: values = [1, 2, 2, 2, 3, 4, 5]

In [145]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)

In [146]: df.iloc[2:4, :]
Out[146]:
cats values
j b 2
k b 2

In [147]: df.iloc[2:4, :].dtypes


Out[147]:
cats category
values int64
dtype: object

In [148]: df.loc["h":"j", "cats"]


Out[148]:
h a
i b
j b
Name: cats, dtype: category
Categories (3, object): [a, b, c]

In [149]: df[df["cats"] == "b"]


Out[149]:
cats values
i b 2
j b 2
k b 2

An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype
object:

# get the complete "h" row as a Series


In [150]: df.loc["h", :]
Out[150]:
cats a
values 1
Name: h, dtype: object

Returning a single item from categorical data will also return the value, not a categorical of length “1”.

In [151]: df.iat[0, 0]
Out[151]: 'a'

In [152]: df["cats"].cat.categories = ["x", "y", "z"]

In [153]: df.at["h", "cats"] # returns a string


Out[153]: 'x'
Note: This is in contrast to R's factor function, where factor(c(1,2,3))[1] returns a single value factor.

To get a single value Series of type category, you pass in a list with a single value:

In [154]: df.loc[["h"], "cats"]


Out[154]:
h x
Name: cats, dtype: category
Categories (3, object): [x, y, z]

String and datetime accessors

The accessors .dt and .str will work if the s.cat.categories are of an appropriate type:

In [155]: str_s = pd.Series(list('aabb'))

In [156]: str_cat = str_s.astype('category')

In [157]: str_cat
Out[157]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): [a, b]
Categories (2, object): [a, b]

In [158]: str_cat.str.contains("a")
Out[158]:
0 True
1 True
2 False
3 False
dtype: bool

In [159]: date_s = pd.Series(pd.date_range('1/1/2015', periods=5))

In [160]: date_cat = date_s.astype('category')

In [161]: date_cat
Out[161]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]

In [162]: date_cat.dt.day
Out[162]:
0 1
1 2
2 3
3 4
4 5
dtype: int64

Note: The returned Series (or DataFrame) is of the same type as if you used the .str.<method> / .dt.<method>
on a Series of that type (and not of type category!).

That means that the returned values from methods and properties on the accessors of a Series and the returned
values from methods and properties on the accessors of this Series transformed to one of type category will be
equal:

In [163]: ret_s = str_s.str.contains("a")

In [164]: ret_cat = str_cat.str.contains("a")

In [165]: ret_s.dtype == ret_cat.dtype


Out[165]: True

In [166]: ret_s == ret_cat


Out[166]:
0 True
1 True
2 True
3 True
dtype: bool


Note: The work is done on the categories and then a new Series is constructed. This has some performance
implication if you have a Series of type string, where lots of elements are repeated (i.e. the number of unique
elements in the Series is a lot smaller than the length of the Series). In this case it can be faster to convert the
original Series to one of type category and use .str.<method> or .dt.<property> on that.
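A sketch of that conversion (the data and sizes here are made up for illustration):

import pandas as pd

s = pd.Series(["spam", "eggs"] * 100000)    # many repeated strings

s_cat = s.astype("category")

# both give the same boolean result, but the categorical version only
# runs .str.contains over the (two) unique categories
s.str.contains("sp")
s_cat.str.contains("sp")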

Setting

Setting values in a categorical column (or Series) works as long as the value is included in the categories:

In [167]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])

In [168]: cats = pd.Categorical(["a", "a", "a", "a", "a", "a", "a"],


.....: categories=["a", "b"])
.....:

In [169]: values = [1, 1, 1, 1, 1, 1, 1]

In [170]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)

In [171]: df.iloc[2:4, :] = [["b", 2], ["b", 2]]

In [172]: df
Out[172]:
cats values
h a 1
i a 1
j b 2
k b 2
l a 1
m a 1
n a 1

In [173]: try:
.....: df.iloc[2:4, :] = [["c", 3], ["c", 3]]
.....: except ValueError as e:
.....: print("ValueError:", str(e))
.....:
ValueError: Cannot setitem on a Categorical with a new category, set the categories
˓→first

Setting values by assigning categorical data will also check that the categories match:

In [174]: df.loc["j":"k", "cats"] = pd.Categorical(["a", "a"], categories=["a", "b"])

In [175]: df
Out[175]:
cats values
h a 1
i a 1
j a 2
k a 2
l a 1
m a 1
n a 1

In [176]: try:
.....: df.loc["j":"k", "cats"] = pd.Categorical(["b", "b"],
.....: categories=["a", "b", "c"])
.....: except ValueError as e:
.....: print("ValueError:", str(e))
.....:
ValueError: Cannot set a Categorical with another, without identical categories

Assigning a Categorical to parts of a column of other types will use the values:
In [177]: df = pd.DataFrame({"a": [1, 1, 1, 1, 1], "b": ["a", "a", "a", "a", "a"]})

In [178]: df.loc[1:2, "a"] = pd.Categorical(["b", "b"], categories=["a", "b"])

In [179]: df.loc[2:3, "b"] = pd.Categorical(["b", "b"], categories=["a", "b"])

In [180]: df
Out[180]:
a b
0 1 a
1 b a
2 b b
3 1 b
4 1 a

In [181]: df.dtypes
Out[181]:
a object
b object
dtype: object

Merging / Concatenation

By default, combining Series or DataFrames which contain the same categories results in category dtype,
otherwise results will depend on the dtype of the underlying categories. Merges that result in non-categorical dtypes
will likely have higher memory usage. Use .astype or union_categoricals to ensure category results.
In [182]: from pandas.api.types import union_categoricals

# same categories
In [183]: s1 = pd.Series(['a', 'b'], dtype='category')

In [184]: s2 = pd.Series(['a', 'b', 'a'], dtype='category')

In [185]: pd.concat([s1, s2])


Out[185]:
0 a
1 b
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]



# different categories
In [186]: s3 = pd.Series(['b', 'c'], dtype='category')

In [187]: pd.concat([s1, s3])


Out[187]:
0 a
1 b
0 b
1 c
dtype: object

# Output dtype is inferred based on categories values


In [188]: int_cats = pd.Series([1, 2], dtype="category")

In [189]: float_cats = pd.Series([3.0, 4.0], dtype="category")

In [190]: pd.concat([int_cats, float_cats])


Out[190]:
0 1.0
1 2.0
0 3.0
1 4.0
dtype: float64

In [191]: pd.concat([s1, s3]).astype('category')


Out[191]:
0 a
1 b
0 b
1 c
dtype: category
Categories (3, object): [a, b, c]

In [192]: union_categoricals([s1.array, s3.array])


Out[192]:
[a, b, b, c]
Categories (3, object): [a, b, c]

The following table summarizes the results of merging Categoricals:

arg1                 arg2                 identical   result
category             category             True        category
category (object)    category (object)    False       object (dtype is inferred)
category (int)       category (float)     False       float (dtype is inferred)

See also the section on merge dtypes for notes about preserving merge dtypes and performance.


Unioning

If you want to combine categoricals that do not necessarily have the same categories, the union_categoricals()
function will combine a list-like of categoricals. The new categories will be the union of the categories being combined.

In [193]: from pandas.api.types import union_categoricals

In [194]: a = pd.Categorical(["b", "c"])

In [195]: b = pd.Categorical(["a", "b"])

In [196]: union_categoricals([a, b])


Out[196]:
[b, c, a, b]
Categories (3, object): [b, c, a]

By default, the resulting categories will be ordered as they appear in the data. If you want the categories to be lexsorted,
use the sort_categories=True argument.

In [197]: union_categoricals([a, b], sort_categories=True)


Out[197]:
[b, c, a, b]
Categories (3, object): [a, b, c]

union_categoricals also works with the "easy" case of combining two categoricals with the same categories and
ordering (a case where you could also just use append).

In [198]: a = pd.Categorical(["a", "b"], ordered=True)


In [199]: b = pd.Categorical(["a", "b", "a"], ordered=True)

In [200]: union_categoricals([a, b])


Out[200]:
[a, b, a, b, a]
Categories (2, object): [a < b]

The following raises TypeError because the categories are ordered and not identical.

In [1]: a = pd.Categorical(["a", "b"], ordered=True)


In [2]: b = pd.Categorical(["a", "b", "c"], ordered=True)
In [3]: union_categoricals([a, b])
Out[3]:
TypeError: to union ordered Categoricals, all categories must be the same

Ordered categoricals with different categories or orderings can be combined by using the ignore_order=True
argument.

In [201]: a = pd.Categorical(["a", "b", "c"], ordered=True)

In [202]: b = pd.Categorical(["c", "b", "a"], ordered=True)

In [203]: union_categoricals([a, b], ignore_order=True)


Out[203]:
[a, b, c, c, b, a]
Categories (3, object): [a, b, c]

union_categoricals() also works with a CategoricalIndex, or Series containing categorical data, but
note that the resulting array will always be a plain Categorical:


In [204]: a = pd.Series(["b", "c"], dtype='category')

In [205]: b = pd.Series(["a", "b"], dtype='category')

In [206]: union_categoricals([a, b])


Out[206]:
[b, c, a, b]
Categories (3, object): [b, c, a]

Note: union_categoricals may recode the integer codes for categories when combining categoricals. This is
likely what you want, but if you are relying on the exact numbering of the categories, be aware.

In [207]: c1 = pd.Categorical(["b", "c"])

In [208]: c2 = pd.Categorical(["a", "b"])

In [209]: c1
Out[209]:
[b, c]
Categories (2, object): [b, c]

# "b" is coded to 0
In [210]: c1.codes
Out[210]: array([0, 1], dtype=int8)

In [211]: c2
Out[211]:
[a, b]
Categories (2, object): [a, b]

# "b" is coded to 1
In [212]: c2.codes
Out[212]: array([0, 1], dtype=int8)

In [213]: c = union_categoricals([c1, c2])

In [214]: c
Out[214]:
[b, c, a, b]
Categories (3, object): [b, c, a]

# "b" is coded to 0 throughout, same as c1, different from c2


In [215]: c.codes
Out[215]: array([0, 1, 2, 0], dtype=int8)


3.8.9 Getting data in/out

You can write data that contains category dtypes to a HDFStore. See here for an example and caveats.
It is also possible to write data to and read data from Stata format files. See here for an example and caveats.
Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and
ordering). So if you read back the CSV file you have to convert the relevant columns back to category and assign the
right categories and category ordering.

In [216]: import io

In [217]: s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))

# rename the categories


In [218]: s.cat.categories = ["very good", "good", "bad"]

# reorder the categories and add missing categories


In [219]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])

In [220]: df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})

In [221]: csv = io.StringIO()

In [222]: df.to_csv(csv)

In [223]: df2 = pd.read_csv(io.StringIO(csv.getvalue()))

In [224]: df2.dtypes
Out[224]:
Unnamed: 0 int64
cats object
vals int64
dtype: object

In [225]: df2["cats"]
Out[225]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: object

# Redo the category


In [226]: df2["cats"] = df2["cats"].astype("category")

In [227]: df2["cats"].cat.set_categories(["very bad", "bad", "medium",


.....: "good", "very good"],
.....: inplace=True)
.....:

In [228]: df2.dtypes
Out[228]:
Unnamed: 0 int64
cats category
vals int64
dtype: object

In [229]: df2["cats"]
Out[229]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]

The same holds for writing to a SQL database with to_sql.

3.8.10 Missing data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See
the Missing Data section.
Missing values should not be included in the Categorical's categories, only in the values. Instead, it is understood
that NaN is different, and is always a possibility. When working with the Categorical's codes, missing values
will always have a code of -1.

In [230]: s = pd.Series(["a", "b", np.nan, "a"], dtype="category")

# only two categories


In [231]: s
Out[231]:
0 a
1 b
2 NaN
3 a
dtype: category
Categories (2, object): [a, b]

In [232]: s.cat.codes
Out[232]:
0 0
1 1
2 -1
3 0
dtype: int8

Methods for working with missing data, e.g. isna(), fillna(), dropna(), all work normally:

In [233]: s = pd.Series(["a", "b", np.nan], dtype="category")

In [234]: s
Out[234]:
0 a
1 b
2 NaN
dtype: category
Categories (2, object): [a, b]

In [235]: pd.isna(s)
Out[235]:
0 False
1 False
2 True
dtype: bool

In [236]: s.fillna("a")
Out[236]:
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]

3.8.11 Differences to R’s factor

The following differences to R’s factor functions can be observed:


• R’s levels are named categories.
• R’s levels are always of type string, while categories in pandas can be of any dtype.
• It’s not possible to specify labels at creation time. Use s.cat.rename_categories(new_labels)
afterwards.
• In contrast to R's factor function, using categorical data as the sole input to create a new categorical series will not remove unused categories but create a new categorical series which is equal to the passed in one!
• R allows for missing values to be included in its levels (pandas’ categories). Pandas does not allow NaN
categories, but missing values can still be in the values.

3.8.12 Gotchas

Memory usage

The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In
contrast, an object dtype is a constant times the length of the data.

In [237]: s = pd.Series(['foo', 'bar'] * 1000)

# object dtype
In [238]: s.nbytes
Out[238]: 16000

# category dtype
In [239]: s.astype('category').nbytes
Out[239]: 2016

Note: If the number of categories approaches the length of the data, the Categorical will use nearly the same or
more memory than an equivalent object dtype representation.


In [240]: s = pd.Series(['foo%04d' % i for i in range(2000)])

# object dtype
In [241]: s.nbytes
Out[241]: 16000

# category dtype
In [242]: s.astype('category').nbytes
Out[242]: 20000

Categorical is not a numpy array

Currently, categorical data and the underlying Categorical is implemented as a Python object and not as a low-level
NumPy array dtype. This leads to some problems.
NumPy itself doesn’t know about the new dtype:
In [243]: try:
.....: np.dtype("category")
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: data type "category" not understood

In [244]: dtype = pd.Categorical(["a"]).dtype

In [244]: try:
.....: np.dtype(dtype)
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: data type not understood

Dtype comparisons work:


In [246]: dtype == np.str_
Out[246]: False

In [247]: np.str_ == dtype


Out[247]: False

To check if a Series contains Categorical data, use hasattr(s, 'cat'):


In [248]: hasattr(pd.Series(['a'], dtype='category'), 'cat')
Out[248]: True

In [249]: hasattr(pd.Series(['a']), 'cat')


Out[249]: False

Using NumPy functions on a Series of type category should not work as Categoricals are not numeric data (even
in the case that .categories is numeric).
In [250]: s = pd.Series(pd.Categorical([1, 2, 3, 4]))

In [251]: try:
.....: np.sum(s)
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Categorical cannot perform the operation sum

Note: If such a function works, please file a bug at https://github.com/pandas-dev/pandas!

dtype in apply

Pandas currently does not preserve the dtype in apply functions: If you apply along rows you get a Series of object
dtype (same as getting a row -> getting one element will return a basic type) and applying along columns will also
convert to object. NaN values are unaffected. You can use fillna to handle missing values before applying a
function.
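For instance, a small sketch of filling a categorical column before applying a row-wise function (made-up data; the fill value must already be one of the categories):

import pandas as pd

df = pd.DataFrame({"cats": pd.Categorical([1, 2, None, 2]),
                   "vals": [1, 2, 3, 4]})

df["cats"] = df["cats"].fillna(1)                        # 1 is an existing category
df.apply(lambda row: row["cats"] * row["vals"], axis=1)  # object dtype result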
In [252]: df = pd.DataFrame({"a": [1, 2, 3, 4],
.....: "b": ["a", "b", "c", "d"],
.....: "cats": pd.Categorical([1, 2, 3, 2])})
.....:

In [253]: df.apply(lambda row: type(row["cats"]), axis=1)


Out[253]:
0 <class 'int'>
1 <class 'int'>
2 <class 'int'>
3 <class 'int'>
dtype: object

In [254]: df.apply(lambda col: col.dtype, axis=0)


Out[254]:
a int64
b object
cats category
dtype: object

Categorical index

CategoricalIndex is a type of index that is useful for supporting indexing with duplicates. This is a container
around a Categorical and allows efficient indexing and storage of an index with a large number of duplicated
elements. See the advanced indexing docs for a more detailed explanation.
Setting the index will create a CategoricalIndex:
In [255]: cats = pd.Categorical([1, 2, 3, 4], categories=[4, 2, 3, 1])

In [256]: strings = ["a", "b", "c", "d"]

In [257]: values = [4, 2, 3, 1]

In [258]: df = pd.DataFrame({"strings": strings, "values": values}, index=cats)

In [259]: df.index
Out[259]: CategoricalIndex([1, 2, 3, 4], categories=[4, 2, 3, 1], ordered=False, dtype='category')

# This now sorts by the categories order


In [260]: df.sort_index()
Out[260]:
strings values
4 d 1
2 b 2
3 c 3
1 a 4

Side effects

Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to
the Series will in most cases change the original Categorical:

In [261]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])

In [262]: s = pd.Series(cat, name="cat")

In [263]: cat
Out[263]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]

In [264]: s.iloc[0:2] = 10
In [265]: cat
Out[265]:
[10, 10, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]

In [266]: df = pd.DataFrame(s)

In [267]: df["cat"].cat.categories = [1, 2, 3, 4, 5]

In [268]: cat
Out[268]:
[5, 5, 3, 5]
Categories (5, int64): [1, 2, 3, 4, 5]

Use copy=True to prevent such a behaviour or simply don’t reuse Categoricals:

In [269]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])

In [270]: s = pd.Series(cat, name="cat", copy=True)

In [271]: cat
Out[271]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]

In [272]: s.iloc[0:2] = 10

In [273]: cat
Out[273]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]

Note: This also happens in some cases when you supply a NumPy array instead of a Categorical: using an int
array (e.g. np.array([1,2,3,4])) will exhibit the same behavior, while using a string array (e.g.
np.array(["a","b","c","a"])) will not.

3.9 Nullable integer data type

New in version 0.24.0.

Note: IntegerArray is currently experimental. Its API or implementation may change without warning.

Changed in version 1.0.0: Now uses pandas.NA as the missing value rather than numpy.nan.
In Working with missing data, we saw that pandas primarily uses NaN to represent missing data. Because NaN is a
float, this forces an array of integers with any missing values to become floating point. In some cases, this may not
matter much. But if your integer column is, say, an identifier, casting to float can be problematic. Some integers cannot
even be represented as floating point numbers.
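For example, with the default NumPy-backed dtypes:

import numpy as np
import pandas as pd

pd.Series([1, 2, 3]).dtype          # dtype('int64')
pd.Series([1, 2, np.nan]).dtype     # dtype('float64'): the integers were cast to float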
3.9.1 Construction

Pandas can represent integer data with possibly missing values using arrays.IntegerArray. This is an extension
type implemented within pandas.
In [1]: arr = pd.array([1, 2, None], dtype=pd.Int64Dtype())

In [2]: arr
Out[2]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64

Or the string alias "Int64" (note the capital "I", to differentiate from NumPy's 'int64' dtype):
In [3]: pd.array([1, 2, np.nan], dtype="Int64")
Out[3]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64

All NA-like values are replaced with pandas.NA.


In [4]: pd.array([1, 2, np.nan, None, pd.NA], dtype="Int64")
Out[4]:
<IntegerArray>
[1, 2, <NA>, <NA>, <NA>]
Length: 5, dtype: Int64


This array can be stored in a DataFrame or Series like any NumPy array.

In [5]: pd.Series(arr)
Out[5]:
0 1
1 2
2 <NA>
dtype: Int64

You can also pass the list-like object to the Series constructor with the dtype.

Warning: Currently pandas.array() and pandas.Series() use different rules for dtype inference.
pandas.array() will infer a nullable-integer dtype
In [6]: pd.array([1, None])
Out[6]:
<IntegerArray>
[1, <NA>]
Length: 2, dtype: Int64

In [7]: pd.array([1, 2])


Out[7]:
<IntegerArray>
[1, 2]
Length: 2, dtype: Int64

For backwards-compatibility, Series infers these as either integer or float dtype


In [8]: pd.Series([1, None])
Out[8]:
0 1.0
1 NaN
dtype: float64

In [9]: pd.Series([1, 2])


Out[9]:
0 1
1 2
dtype: int64

We recommend explicitly providing the dtype to avoid confusion.


In [10]: pd.array([1, None], dtype="Int64")
Out[10]:
<IntegerArray>
[1, <NA>]
Length: 2, dtype: Int64

In [11]: pd.Series([1, None], dtype="Int64")


Out[11]:
0 1
1 <NA>
dtype: Int64

In the future, we may provide an option for Series to infer a nullable-integer dtype.


3.9.2 Operations

Operations involving an integer array will behave similarly to NumPy arrays. Missing values will be propagated, and
the data will be coerced to another dtype if needed.
In [12]: s = pd.Series([1, 2, None], dtype="Int64")

# arithmetic
In [13]: s + 1
Out[13]:
0 2
1 3
2 <NA>
dtype: Int64

# comparison
In [14]: s == 1
Out[14]:
0 True
1 False
2 <NA>
dtype: boolean

# indexing
In [15]: s.iloc[1:3]
Out[15]:
1 2
2 <NA>
dtype: Int64
# operate with other dtypes
In [16]: s + s.iloc[1:3].astype('Int8')
Out[16]:
0 <NA>
1 4
2 <NA>
dtype: Int64

# coerce when needed


In [17]: s + 0.01
Out[17]:
0 1.01
1 2.01
2 NaN
dtype: float64

These dtypes can operate as part of a DataFrame.


In [18]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})

In [19]: df
Out[19]:
A B C
0 1 1 a
1 2 1 a
2 <NA> 3 b

In [20]: df.dtypes
Out[20]:
A Int64
B int64
C object
dtype: object

These dtypes can be merged, reshaped, and cast.

In [21]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes


Out[21]:
A Int64
B int64
C object
dtype: object

In [22]: df['A'].astype(float)
Out[22]:
0 1.0
1 2.0
2 NaN
Name: A, dtype: float64

Reduction and groupby operations such as ‘sum’ work as well.

In [23]: df.sum()
Out[23]:
A 3
B 5
C aab
dtype: object

In [24]: df.groupby('B').A.sum()
Out[24]:
B
1 3
3 0
Name: A, dtype: Int64

3.9.3 Scalar NA Value

arrays.IntegerArray uses pandas.NA as its scalar missing value. Slicing a single element that’s missing will
return pandas.NA

In [25]: a = pd.array([1, None], dtype="Int64")

In [26]: a[1]
Out[26]: <NA>


3.10 Nullable Boolean Data Type

New in version 1.0.0.

3.10.1 Indexing with NA values

pandas allows indexing with NA values in a boolean array, which are treated as False.
Changed in version 1.0.2.

In [1]: s = pd.Series([1, 2, 3])

In [2]: mask = pd.array([True, False, pd.NA], dtype="boolean")

In [3]: s[mask]
Out[3]:
0 1
dtype: int64

If you would prefer to keep the NA values you can manually fill them with fillna(True).

In [4]: s[mask.fillna(True)]
Out[4]:
0 1
2 3
dtype: int64

3.10.2 Kleene Logical Operations

arrays.BooleanArray implements Kleene Logic (sometimes called three-value logic) for logical operations like
& (and), | (or) and ^ (exclusive-or).
This table demonstrates the results for every combination. These operations are symmetrical, so flipping the left- and
right-hand side makes no difference in the result.


Expression Result
True & True True
True & False False
True & NA NA
False & False False
False & NA False
NA & NA NA
True | True True
True | False True
True | NA True
False | False False
False | NA NA
NA | NA NA
True ^ True False
True ^ False True
True ^ NA NA
False ^ False False
False ^ NA NA
NA ^ NA NA

When an NA is present in an operation, the output value is NA only if the result cannot be determined solely based on
the other input. For example, True | NA is True, because both True | True and True | False are True.
In that case, we don’t actually need to consider the value of the NA.
On the other hand, True & NA is NA. The result depends on whether the NA really is True or False, since True
& True is True, but True & False is False, so we can’t determine the output.
This differs from how np.nan behaves in logical operations: pandas treats np.nan as always false in the output.
In or

In [5]: pd.Series([True, False, np.nan], dtype="object") | True


Out[5]:
0 True
1 True
2 False
dtype: bool

In [6]: pd.Series([True, False, np.nan], dtype="boolean") | True


Out[6]:
0 True
1 True
2 True
dtype: boolean

In and

In [7]: pd.Series([True, False, np.nan], dtype="object") & True


Out[7]:
0 True
1 False
2 False
dtype: bool

In [8]: pd.Series([True, False, np.nan], dtype="boolean") & True


Out[8]:
0 True
1 False
2 <NA>
dtype: boolean


3.11 Visualization

We use the standard convention for referencing the matplotlib API:

In [1]: import matplotlib.pyplot as plt

In [2]: plt.close('all')

We provide the basics in pandas to easily create decent looking plots. See the ecosystem section for visualization
libraries that go beyond the basics documented here.

Note: All calls to np.random are seeded with 123456.

3.11.1 Basic plotting: plot


We will demonstrate the basics, see the cookbook for some advanced strategies.

The plot method on Series and DataFrame is just a simple wrapper around plt.plot():

In [3]: ts = pd.Series(np.random.randn(1000),
   ...:                index=pd.date_range('1/1/2000', periods=1000))

In [4]: ts = ts.cumsum()

In [5]: ts.plot()


If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:

In [6]: df = pd.DataFrame(np.random.randn(1000, 4),
   ...:                   index=ts.index, columns=list('ABCD'))

In [7]: df = df.cumsum()

In [8]: plt.figure();

In [9]: df.plot();


You can plot one column versus another using the x and y keywords in plot():

In [10]: df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()

In [11]: df3['A'] = pd.Series(list(range(len(df))))

In [12]: df3.plot(x='A', y='B')


Note: For more formatting and styling options, see formatting below.

3.11.2 Other plots

Plotting methods allow for a handful of plot styles other than the default line plot. These methods can be provided as
the kind keyword argument to plot(), and include:
• ‘bar’ or ‘barh’ for bar plots
• ‘hist’ for histogram
• ‘box’ for boxplot
• ‘kde’ or ‘density’ for density plots

• 'area' for area plots
• ‘scatter’ for scatter plots
• ‘hexbin’ for hexagonal bin plots
• ‘pie’ for pie plots
For example, a bar plot can be created the following way:

In [13]: plt.figure();

In [14]: df.iloc[5].plot(kind='bar');


You can also create these other plots using the methods DataFrame.plot.<kind> instead of providing the kind
keyword argument. This makes it easier to discover plot methods and the specific arguments they use:

In [15]: df = pd.DataFrame()

In [16]: df.plot.<TAB> # noqa: E225, E999


df.plot.area     df.plot.barh     df.plot.density     df.plot.hist     df.plot.line     df.plot.scatter
df.plot.bar      df.plot.box      df.plot.hexbin      df.plot.kde      df.plot.pie

In addition to these kinds, there are the DataFrame.hist() and DataFrame.boxplot() methods, which use a separate
interface.
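For instance, a minimal sketch of those two interfaces (random data, for illustration only):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(100, 3), columns=['a', 'b', 'c'])

df.hist(bins=20)    # one histogram subplot per column
df.boxplot()        # all columns as boxes on a single Axes
plt.show()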


Finally, there are several plotting functions in pandas.plotting that take a Series or DataFrame as an argument.
These include:
• Scatter Matrix
• Andrews Curves
• Parallel Coordinates
• Lag Plot
• Autocorrelation Plot
• Bootstrap Plot
• RadViz
Plots may also be adorned with errorbars or tables.

Bar plots

For labeled, non-time series data, you may wish to produce a bar plot:

In [17]: plt.figure();

In [18]: df.iloc[5].plot.bar()

In [19]: plt.axhline(0, color='k');


Calling a DataFrame’s plot.bar() method produces a multiple bar plot:

In [20]: df2 = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])

In [21]: df2.plot.bar();


To produce a stacked bar plot, pass stacked=True:

In [22]: df2.plot.bar(stacked=True);


To get horizontal bar plots, use the barh method:

In [23]: df2.plot.barh(stacked=True);


Histograms

Histograms can be drawn by using the DataFrame.plot.hist() and Series.plot.hist() methods.


In [24]: df4 = pd.DataFrame({'a': np.random.randn(1000) + 1, 'b': np.random.randn(1000),
   ....:                     'c': np.random.randn(1000) - 1}, columns=['a', 'b', 'c'])

In [25]: plt.figure();

In [26]: df4.plot.hist(alpha=0.5)


A histogram can be stacked using stacked=True. Bin size can be changed using the bins keyword.

In [27]: plt.figure();

In [28]: df4.plot.hist(stacked=True, bins=20)


You can pass other keywords supported by matplotlib hist. For example, horizontal and cumulative histograms can
be drawn by orientation='horizontal' and cumulative=True.

In [29]: plt.figure();

In [30]: df4['a'].plot.hist(orientation='horizontal', cumulative=True)


See the hist method and the matplotlib hist documentation for more.
The existing interface DataFrame.hist to plot histograms can still be used.

In [31]: plt.figure();

In [32]: df['A'].diff().hist()


DataFrame.hist() plots the histograms of the columns on multiple subplots:

In [33]: plt.figure()
Out[33]: <Figure size 640x480 with 0 Axes>

In [34]: df.diff().hist(color='k', alpha=0.5, bins=50)


The by keyword can be specified to plot grouped histograms:

In [35]: data = pd.Series(np.random.randn(1000))

In [36]: data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4))


Box plots

Boxplots can be drawn by calling Series.plot.box() and DataFrame.plot.box(), or DataFrame.boxplot() to visualize the distribution of values within each column.
For instance, here is a boxplot representing five trials of 10 observations of a uniform random variable on [0,1).

In [37]: df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])

In [38]: df.plot.box()


Boxplots can be colorized by passing the color keyword. You can pass a dict whose keys are boxes, whiskers, medians and caps. If some keys are missing from the dict, default colors are used for the corresponding artists. Also, boxplot has a sym keyword to specify the flier (outlier) style.
If you pass any other type of argument via the color keyword, it will be passed directly to matplotlib and used to colorize all of the boxes, whiskers, medians and caps.
The colors are applied to every box drawn. If you want more complicated colorization, you can get each drawn artist by passing return_type.

In [39]: color = {'boxes': 'DarkGreen', 'whiskers': 'DarkOrange',
   ....:          'medians': 'DarkBlue', 'caps': 'Gray'}
   ....:

In [40]: df.plot.box(color=color, sym='r+')


Also, you can pass other keywords supported by matplotlib boxplot. For example, horizontal and custom-positioned boxplots can be drawn with the vert=False and positions keywords.

In [41]: df.plot.box(vert=False, positions=[1, 4, 5, 6, 8])


See the boxplot method and the matplotlib boxplot documentation for more.
The existing interface DataFrame.boxplot to plot boxplots can still be used.

In [42]: df = pd.DataFrame(np.random.rand(10, 5))

In [43]: plt.figure();

In [44]: bp = df.boxplot()


You can create a stratified boxplot using the by keyword argument to create groupings. For instance,

In [45]: df = pd.DataFrame(np.random.rand(10, 2), columns=['Col1', 'Col2'])

In [46]: df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'])

In [47]: plt.figure();

In [48]: bp = df.boxplot(by='X')

You can also pass a subset of columns to plot, as well as group by multiple columns:

In [49]: df = pd.DataFrame(np.random.rand(10, 3), columns=['Col1', 'Col2', 'Col3'])

In [50]: df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'])

In [51]: df['Y'] = pd.Series(['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'A', 'B'])

In [52]: plt.figure();

In [53]: bp = df.boxplot(column=['Col1', 'Col2'], by=['X', 'Y'])

In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes", "dict", "both", None}. Faceting, created by DataFrame.boxplot with the by keyword, will affect the output type as well:


return_type=   Faceted   Output type
None           No        axes
None           Yes       2-D ndarray of axes
'axes'         No        axes
'axes'         Yes       Series of axes
'dict'         No        dict of artists
'dict'         Yes       Series of dicts of artists
'both'         No        namedtuple
'both'         Yes       Series of namedtuples

Groupby.boxplot always returns a Series of return_type.

In [54]: np.random.seed(1234)

In [55]: df_box = pd.DataFrame(np.random.randn(50, 2))

In [56]: df_box['g'] = np.random.choice(['A', 'B'], size=50)

In [57]: df_box.loc[df_box['g'] == 'B', 1] += 3

In [58]: bp = df_box.boxplot(by='g')


The subplots above are split by the numeric columns first, then the value of the g column. Below the subplots are first
split by the value of g, then by the numeric columns.

In [59]: bp = df_box.groupby('g').boxplot()


Area plot

You can create area plots with Series.plot.area() and DataFrame.plot.area(). Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values.
When input data contains NaN, it will be automatically filled with 0. If you want to drop or fill with different values, use dataframe.dropna() or dataframe.fillna() before calling plot.

In [60]: df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])

In [61]: df.plot.area();


To produce an unstacked plot, pass stacked=False. The alpha value is set to 0.5 unless otherwise specified:

In [62]: df.plot.area(stacked=False);


Scatter plot

Scatter plots can be drawn using the DataFrame.plot.scatter() method. A scatter plot requires numeric columns for the x and y axes. These can be specified by the x and y keywords.

In [63]: df = pd.DataFrame(np.random.rand(50, 4), columns=['a', 'b', 'c', 'd'])

In [64]: df.plot.scatter(x='a', y='b');


To plot multiple column groups on a single axes, repeat the plot method specifying the target ax. It is recommended to specify the color and label keywords to distinguish each group.

In [65]: ax = df.plot.scatter(x='a', y='b', color='DarkBlue', label='Group 1');

In [66]: df.plot.scatter(x='c', y='d', color='DarkGreen', label='Group 2', ax=ax);


The keyword c may be given as the name of a column to provide colors for each point:

In [67]: df.plot.scatter(x='a', y='b', c='c', s=50);


You can pass other keywords supported by matplotlib scatter. The example below shows a bubble chart using a
column of the DataFrame as the bubble size.

In [68]: df.plot.scatter(x='a', y='b', s=df['c'] * 200);


See the scatter method and the matplotlib scatter documentation for more.

Hexagonal bin plot

You can create hexagonal bin plots with DataFrame.plot.hexbin(). Hexbin plots can be a useful alternative
to scatter plots if your data are too dense to plot each point individually.

In [69]: df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])

In [70]: df['b'] = df['b'] + np.arange(1000)

In [71]: df.plot.hexbin(x='a', y='b', gridsize=25)

A useful keyword argument is gridsize; it controls the number of hexagons in the x-direction, and defaults to 100.
A larger gridsize means more, smaller bins.
By default, a histogram of the counts around each (x, y) point is computed. You can specify alternative aggregations
by passing values to the C and reduce_C_function arguments. C specifies the value at each (x, y) point and
reduce_C_function is a function of one argument that reduces all the values in a bin to a single number (e.g.
mean, max, sum, std). In this example the positions are given by columns a and b, while the value is given by
column z. The bins are aggregated with NumPy’s max function.
In [72]: df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])

In [73]: df['b'] = df['b'] + np.arange(1000)

In [74]: df['z'] = np.random.uniform(0, 3, 1000)

In [75]: df.plot.hexbin(x='a', y='b', C='z', reduce_C_function=np.max, gridsize=25)

See the hexbin method and the matplotlib hexbin documentation for more.


Pie plot

You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie(). If your data includes any
NaN, they will be automatically filled with 0. A ValueError will be raised if there are any negative values in your
data.

In [76]: series = pd.Series(3 * np.random.rand(4),
   ....:                    index=['a', 'b', 'c', 'd'], name='series')
   ....:

In [77]: series.plot.pie(figsize=(6, 6))

For pie plots it's best to use square figures, i.e. a figure aspect ratio of 1. You can create the figure with equal width and height, or force the aspect ratio to be equal after plotting by calling ax.set_aspect('equal') on the returned axes object.
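A small sketch of the second approach follows (the series and its values are illustrative only, not part of the original example):

# A minimal sketch: force an equal aspect ratio after plotting so the pie is
# drawn as a circle rather than an ellipse. The data is illustrative only.
import numpy as np
import pandas as pd

ax = pd.Series(3 * np.random.rand(4), index=['a', 'b', 'c', 'd']).plot.pie()
ax.set_aspect('equal')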
Note that a pie plot with a DataFrame requires that you either specify a target column with the y argument or subplots=True. When y is specified, a pie plot of the selected column will be drawn. If subplots=True is specified, pie plots for each column are drawn as subplots. A legend will be drawn in each pie plot by default; specify legend=False to hide it.

In [78]: df = pd.DataFrame(3 * np.random.rand(4, 2),
   ....:                   index=['a', 'b', 'c', 'd'], columns=['x', 'y'])
   ....:

In [79]: df.plot.pie(subplots=True, figsize=(8, 4))


You can use the labels and colors keywords to specify the labels and colors of each wedge.

Warning: Most pandas plots use the label and color arguments (note the lack of “s” on those). To be
consistent with matplotlib.pyplot.pie() you must use labels and colors.

If you want to hide wedge labels, specify labels=None. If fontsize is specified, the value will be applied to
wedge labels. Also, other keywords supported by matplotlib.pyplot.pie() can be used.

In [80]: series.plot.pie(labels=['AA', 'BB', 'CC', 'DD'], colors=['r', 'g', 'b', 'c'],
   ....:                 autopct='%.2f', fontsize=20, figsize=(6, 6))
   ....:

If you pass values whose sum total is less than 1.0, matplotlib draws a semicircle.

In [81]: series = pd.Series([0.1] * 4, index=['a', 'b', 'c', 'd'], name='series2')

In [82]: series.plot.pie(figsize=(6, 6))

See the matplotlib pie documentation for more.


3.11.3 Plotting with missing data

Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are
dropped, left out, or filled depending on the plot type.

Plot Type        NaN Handling
Line             Leave gaps at NaNs
Line (stacked)   Fill 0's
Bar              Fill 0's
Scatter          Drop NaNs
Histogram        Drop NaNs (column-wise)
Box              Drop NaNs (column-wise)
Area             Fill 0's
KDE              Drop NaNs (column-wise)
Hexbin           Drop NaNs
Pie              Fill 0's

If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled,
consider using fillna() or dropna() before plotting.
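For instance, a minimal sketch of handling missing values explicitly before plotting (the series and the chosen fill value are illustrative only):

# A minimal sketch: fill or drop missing values explicitly instead of relying
# on the per-plot-type defaults in the table above. The data is illustrative.
import numpy as np
import pandas as pd

ser = pd.Series(np.random.randn(20))
ser.iloc[[3, 7, 12]] = np.nan        # introduce a few missing values
ser.fillna(0).plot()                 # filled: the line has no gaps
ser.dropna().plot(style='k--')       # dropped: the NaN positions are skipped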

3.11.4 Plotting Tools

These functions can be imported from pandas.plotting and take a Series or DataFrame as an argument.

Scatter matrix plot
You can create a scatter plot matrix using the scatter_matrix method in pandas.plotting:

In [83]: from pandas.plotting import scatter_matrix

In [84]: df = pd.DataFrame(np.random.randn(1000, 4), columns=['a', 'b', 'c', 'd'])

In [85]: scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal='kde')


Density plot

You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods.

In [86]: ser = pd.Series(np.random.randn(1000))

In [87]: ser.plot.kde()


Andrews curves

Andrews curves allow one to plot multivariate data as a large number of curves that are created using the attributes
of samples as coefficients for Fourier series, see the Wikipedia entry for more information. By coloring these curves
differently for each class it is possible to visualize data clustering. Curves belonging to samples of the same class will
usually be closer together and form larger structures.
Note: The “Iris” dataset is available here.
In [88]: from pandas.plotting import andrews_curves

In [89]: data = pd.read_csv('data/iris.data')

In [90]: plt.figure()
Out[90]: <Figure size 640x480 with 0 Axes>

In [91]: andrews_curves(data, 'Name')

Parallel coordinates

Parallel coordinates is a plotting technique for plotting multivariate data; see the Wikipedia entry for an introduction. Parallel coordinates allows one to see clusters in data and to estimate other statistics visually. Using parallel coordinates, points are represented as connected line segments. Each vertical line represents one attribute. One set of connected line segments represents one data point. Points that tend to cluster will appear closer together.

In [92]: from pandas.plotting import parallel_coordinates

In [93]: data = pd.read_csv('data/iris.data')

In [94]: plt.figure()
Out[94]: <Figure size 640x480 with 0 Axes>

In [95]: parallel_coordinates(data, 'Name')

Lag plot

Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the
lag plot. Non-random structure implies that the underlying data are not random. The lag argument may be passed,
and when lag=1 the plot is essentially data[:-1] vs. data[1:].

In [96]: from pandas.plotting import lag_plot

In [97]: plt.figure()
Out[97]: <Figure size 640x480 with 0 Axes>

In [98]: spacing = np.linspace(-99 * np.pi, 99 * np.pi, num=1000)

In [99]: data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(spacing))

In [100]: lag_plot(data)


Autocorrelation plot

Autocorrelation plots are often used for checking randomness in time series. This is done by computing autocorrelations for data values at varying time lags. If the time series is random, such autocorrelations should be near zero for any and all time-lag separations. If the time series is non-random then one or more of the autocorrelations will be significantly non-zero. The horizontal lines displayed in the plot correspond to 95% and 99% confidence bands. The dashed line is the 99% confidence band. See the Wikipedia entry for more about autocorrelation plots.

In [101]: from pandas.plotting import autocorrelation_plot

In [102]: plt.figure()
Out[102]: <Figure size 640x480 with 0 Axes>

In [103]: spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)

In [104]: data = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))

In [105]: autocorrelation_plot(data)


Bootstrap plot

Bootstrap plots are used to visually assess the uncertainty of a statistic, such as mean, median, midrange, etc. A
random subset of a specified size is selected from a data set, the statistic in question is computed for this subset and
the process is repeated a specified number of times. Resulting plots and histograms are what constitutes the bootstrap
plot.

In [106]: from pandas.plotting import bootstrap_plot

In [107]: data = pd.Series(np.random.rand(1000))

In [108]: bootstrap_plot(data, size=50, samples=500, color='grey')


RadViz

RadViz is a way of visualizing multi-variate data. It is based on a simple spring tension minimization algorithm.
Basically you set up a bunch of points in a plane. In our case they are equally spaced on a unit circle. Each point
represents a single attribute. You then pretend that each sample in the data set is attached to each of these points
by a spring, the stiffness of which is proportional to the numerical value of that attribute (they are normalized to
unit interval). The point in the plane, where our sample settles to (where the forces acting on our sample are at an
equilibrium) is where a dot representing our sample will be drawn. Depending on which class that sample belongs to, it
will be colored differently. See the R package Radviz for more information.
Note: The “Iris” dataset is available here.
In [109]: from pandas.plotting import radviz

In [110]: data = pd.read_csv('data/iris.data')

In [111]: plt.figure()
Out[111]: <Figure size 640x480 with 0 Axes>

In [112]: radviz(data, 'Name')

3.11.5 Plot Formatting

Setting the plot style

From version 1.5 and up, matplotlib offers a range of pre-configured plotting styles. Setting the style can be used to easily give plots the general look that you want. Setting the style is as easy as calling matplotlib.style.use(my_plot_style) before creating your plot. For example you could write matplotlib.style.use('ggplot') for ggplot-style plots.
You can see the various available style names at matplotlib.style.available and it's very easy to try them out.
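For instance, a minimal sketch (the chosen style and the data are illustrative only):

# A minimal sketch: select a pre-configured matplotlib style before plotting.
# The style name and the example data are illustrative only.
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

matplotlib.style.use('ggplot')     # any name from matplotlib.style.available
pd.Series(np.random.randn(100)).cumsum().plot()
plt.show()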


General plot style arguments

Most plotting methods have a set of keyword arguments that control the layout and formatting of the returned plot:
In [113]: plt.figure();

In [114]: ts.plot(style='k--', label='Series');


For each kind of plot (e.g. line, bar, scatter) any additional keyword arguments are passed along to the corresponding matplotlib function (ax.plot(), ax.bar(), ax.scatter()). These can be used to control additional styling, beyond what pandas provides.
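For example, a small sketch passing matplotlib-only keywords through a pandas plot call (the data and keyword values are illustrative only):

# A minimal sketch: keywords pandas does not handle itself (here linewidth
# and alpha) are forwarded to the underlying matplotlib call (ax.plot()).
# The data and values are illustrative only.
import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(100)).cumsum()
s.plot(linewidth=3, alpha=0.7)     # linewidth/alpha are consumed by matplotlib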

Controlling the legend

You may set the legend argument to False to hide the legend, which is shown by default.
In [115]: df = pd.DataFrame(np.random.randn(1000, 4),
   .....:                   index=ts.index, columns=list('ABCD'))
   .....:

In [116]: df = df.cumsum()

In [117]: df.plot(legend=False)


Scales

You may pass logy to get a log-scale Y axis.

In [118]: ts = pd.Series(np.random.randn(1000),
   .....:                index=pd.date_range('1/1/2000', periods=1000))
   .....:

In [119]: ts = np.exp(ts.cumsum())

In [120]: ts.plot(logy=True)


See also the logx and loglog keyword arguments.

Plotting on a secondary y-axis

To plot data on a secondary y-axis, use the secondary_y keyword:

In [121]: df['A'].plot()

In [122]: df['B'].plot(secondary_y=True, style='g')


To plot some columns in a DataFrame, give the column names to the secondary_y keyword:

In [123]: plt.figure()
Out[123]: <Figure size 640x480 with 0 Axes>

In [124]: ax = df.plot(secondary_y=['A', 'B'])

In [125]: ax.set_ylabel('CD scale')

In [126]: ax.right_ax.set_ylabel('AB scale')

Note that the columns plotted on the secondary y-axis are automatically marked with "(right)" in the legend. To turn off the automatic marking, use the mark_right=False keyword:

In [127]: plt.figure()
Out[127]: <Figure size 640x480 with 0 Axes>

In [128]: df.plot(secondary_y=['A', 'B'], mark_right=False)


Custom formatters for timeseries plots

Changed in version 1.0.0.


Pandas provides custom formatters for timeseries plots. These change the formatting of the axis labels for dates and times. By default, the custom formatters are applied only to plots created by pandas with DataFrame.plot() or Series.plot(). To have them apply to all plots, including those made by matplotlib, set the option pd.options.plotting.matplotlib.register_converters = True or use pandas.plotting.register_matplotlib_converters().
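A minimal sketch of the second approach (the date range and values are illustrative only):

# A minimal sketch: register pandas' date/time converters so that plots made
# directly with matplotlib also get the custom tick formatting.
# The data is illustrative only.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import register_matplotlib_converters

register_matplotlib_converters()
idx = pd.date_range('2020-01-01', periods=100, freq='D')
plt.plot(idx, np.random.randn(100).cumsum())   # a plain matplotlib plot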

Suppressing tick resolution adjustment

pandas includes automatic tick resolution adjustment for regular frequency time-series data. For limited cases where
pandas cannot infer the frequency information (e.g., in an externally created twinx), you can choose to suppress this
behavior for alignment purposes.
Here is the default behavior, notice how the x-axis tick labeling is performed:
In [129]: plt.figure()
Out[129]: <Figure size 640x480 with 0 Axes>

In [130]: df['A'].plot()

Using the x_compat parameter, you can suppress this behavior:

In [131]: plt.figure()
Out[131]: <Figure size 640x480 with 0 Axes>

In [132]: df['A'].plot(x_compat=True)


If you have more than one plot that needs to be suppressed, the use method in pandas.plotting.plot_params can be used in a with statement:

In [133]: plt.figure()
Out[133]: <Figure size 640x480 with 0 Axes>

In [134]: with pd.plotting.plot_params.use('x_compat', True):
   .....:     df['A'].plot(color='r')
   .....:     df['B'].plot(color='g')
   .....:     df['C'].plot(color='b')
   .....:


Automatic date tick adjustment

TimedeltaIndex now uses the native matplotlib tick locator methods. It is useful to call the automatic date tick adjustment from matplotlib for figures whose tick labels overlap.
See the autofmt_xdate method and the matplotlib documentation for more.
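A small sketch of calling matplotlib's automatic adjustment on a pandas-produced figure (the data is illustrative only):

# A minimal sketch: rotate and space overlapping date tick labels on the
# figure behind a pandas plot. The data is illustrative only.
import numpy as np
import pandas as pd

ts = pd.Series(np.random.randn(500),
               index=pd.date_range('2020-01-01', periods=500, freq='H'))
ax = ts.plot()
ax.get_figure().autofmt_xdate()    # matplotlib's automatic date tick adjustment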


Subplots

Each Series in a DataFrame can be plotted on a different axis with the subplots keyword:

In [135]: df.plot(subplots=True, figsize=(6, 6));


Using layout and targeting multiple axes

The layout of subplots can be specified by the layout keyword. It can accept (rows, columns). The layout
keyword can be used in hist and boxplot also. If the input is invalid, a ValueError will be raised.
The number of axes which can be contained by rows x columns specified by layout must be larger than the number
of required subplots. If layout can contain more axes than required, blank axes are not drawn. Similar to a NumPy
array’s reshape method, you can use -1 for one dimension to automatically calculate the number of rows or columns
needed, given the other.


In [136]: df.plot(subplots=True, layout=(2, 3), figsize=(6, 6), sharex=False);


The above example is identical to using:

In [137]: df.plot(subplots=True, layout=(2, -1), figsize=(6, 6), sharex=False);

The required number of columns (3) is inferred from the number of series to plot and the given number of rows (2).
You can pass multiple axes created beforehand as list-like via ax keyword. This allows more complicated layouts.
The passed axes must be the same number as the subplots being drawn.
When multiple axes are passed via the ax keyword, the layout, sharex and sharey keywords don't affect the output. You should explicitly pass sharex=False and sharey=False, otherwise you will see a warning.

In [138]: fig, axes = plt.subplots(4, 4, figsize=(6, 6))

In [139]: plt.subplots_adjust(wspace=0.5, hspace=0.5)

In [140]: target1 = [axes[0][0], axes[1][1], axes[2][2], axes[3][3]]

In [141]: target2 = [axes[3][0], axes[2][1], axes[1][2], axes[0][3]]

In [142]: df.plot(subplots=True, ax=target1, legend=False, sharex=False, sharey=False);


In [143]: (-df).plot(subplots=True, ax=target2, legend=False,
   .....:            sharex=False, sharey=False);
   .....:

Another option is passing an ax argument to Series.plot() to plot on a particular axis:


In [144]: fig, axes = plt.subplots(nrows=2, ncols=2)



In [145]: df['A'].plot(ax=axes[0, 0]);

In [146]: axes[0, 0].set_title('A');

In [147]: df['B'].plot(ax=axes[0, 1]);

In [148]: axes[0, 1].set_title('B');

In [149]: df['C'].plot(ax=axes[1, 0]);

In [150]: axes[1, 0].set_title('C');

In [151]: df['D'].plot(ax=axes[1, 1]);

In [152]: axes[1, 1].set_title('D');



Plotting with error bars

Plotting with error bars is supported in DataFrame.plot() and Series.plot().


Horizontal and vertical error bars can be supplied to the xerr and yerr keyword arguments to plot(). The error
values can be specified using a variety of formats:
• As a DataFrame or dict of errors with column names matching the columns attribute of the plotting
DataFrame or matching the name attribute of the Series.
• As a str indicating which of the columns of plotting DataFrame contain the error values.
• As raw values (list, tuple, or np.ndarray). Must be the same length as the plotting
DataFrame/Series.
Asymmetrical error bars are also supported, however raw error values must be provided in this case. For a M
length Series, a Mx2 array should be provided indicating lower and upper (or left and right) errors. For a MxN
DataFrame, asymmetrical errors should be in a Mx2xN array.
Here is an example of one way to easily plot group means with standard deviations from the raw data.

# Generate the data


In [153]: ix3 = pd.MultiIndex.from_arrays([
   .....:     ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
   .....:     ['foo', 'foo', 'bar', 'bar', 'foo', 'foo', 'bar', 'bar']],
   .....:     names=['letter', 'word'])
   .....:

In [154]: df3 = pd.DataFrame({'data1': [3, 2, 4, 3, 2, 4, 3, 2],
   .....:                     'data2': [6, 5, 7, 5, 4, 5, 6, 5]}, index=ix3)
   .....:

# Group by index labels and take the means and standard deviations
# for each group
In [155]: gp3 = df3.groupby(level=('letter', 'word'))

In [156]: means = gp3.mean()

In [157]: errors = gp3.std()

In [158]: means

In [159]: errors

# Plot
In [160]: fig, ax = plt.subplots()

In [161]: means.plot.bar(yerr=errors, ax=ax, capsize=4)


---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-161-60abb17bbd0c> in <module>
----> 1 means.plot.bar(yerr=errors, ax=ax, capsize=4)

NameError: name 'means' is not defined


Plotting tables

Plotting with matplotlib table is now supported in DataFrame.plot() and Series.plot() with a table
keyword. The table keyword can accept bool, DataFrame or Series. The simple way to draw a table is to
specify table=True. Data will be transposed to meet matplotlib’s default layout.

In [162]: fig, ax = plt.subplots(1, 1)

In [163]: df = pd.DataFrame(np.random.rand(5, 3), columns=['a', 'b', 'c'])

In [164]: ax.get_xaxis().set_visible(False)   # Hide Ticks

In [165]: df.plot(table=True, ax=ax)


Also, you can pass a different DataFrame or Series to the table keyword. The data will then be drawn as it is displayed by the print method (it is not transposed automatically). If required, it should be transposed manually, as in the example below.

In [166]: fig, ax = plt.subplots(1, 1)

In [167]: ax.get_xaxis().set_visible(False)   # Hide Ticks

In [168]: df.plot(table=np.round(df.T, 2), ax=ax)


There also exists a helper function pandas.plotting.table, which creates a table from a DataFrame or Series and adds it to a matplotlib.Axes instance. This function accepts the keyword arguments that the matplotlib table has.

In [169]: from pandas.plotting import table

In [170]: fig, ax = plt.subplots(1, 1)

In [171]: table(ax, np.round(df.describe(), 2),
   .....:       loc='upper right', colWidths=[0.2, 0.2, 0.2])
   .....:

In [172]: df.plot(ax=ax, ylim=(0, 2), legend=None)


Note: You can get table instances on the axes using the axes.tables property for further decorations. See the matplotlib table documentation for more.

Colormaps

A potential issue when plotting a large number of columns is that it can be difficult to distinguish some series due to
repetition in the default colors. To remedy this, DataFrame plotting supports the use of the colormap argument,
which accepts either a Matplotlib colormap or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available here.
As matplotlib does not directly support colormaps for line-based plots, the colors are selected based on an even spacing
determined by the number of columns in the DataFrame. There is no consideration made for background color, so
some colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass colormap='cubehelix'.

In [173]: df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)

In [174]: df = df.cumsum()

In [175]: plt.figure()
Out[175]: <Figure size 640x480 with 0 Axes>

In [176]: df.plot(colormap='cubehelix')


Alternatively, we can pass the colormap itself:


In [177]: from matplotlib import cm

In [178]: plt.figure()
Out[178]: <Figure size 640x480 with 0 Axes>

In [179]: df.plot(colormap=cm.cubehelix)


Colormaps can also be used in other plot types, like bar charts:

In [180]: dd = pd.DataFrame(np.random.randn(10, 10)).applymap(abs)

In [181]: dd = dd.cumsum()

In [182]: plt.figure()
Out[182]: <Figure size 640x480 with 0 Axes>

In [183]: dd.plot.bar(colormap='Greens')


Parallel coordinates charts:

In [184]: plt.figure()
Out[184]: <Figure size 640x480 with 0 Axes>

In [185]: parallel_coordinates(data, 'Name', colormap='gist_rainbow')


Andrews curves charts:

In [186]: plt.figure()
Out[186]: <Figure size 640x480 with 0 Axes>

In [187]: andrews_curves(data, 'Name', colormap='winter')


3.11.6 Plotting directly with matplotlib

In some situations it may still be preferable or necessary to prepare plots directly with matplotlib, for instance when a
certain type of plot or customization is not (yet) supported by pandas. Series and DataFrame objects behave like
arrays and can therefore be passed directly to matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date indices, thereby extending date and
time support to practically all plot types available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster when plotting a large number of points.
In [188]: price = pd.Series(np.random.randn(150).cumsum(),
   .....:                   index=pd.date_range('2000-1-1', periods=150, freq='B'))
   .....:

In [189]: ma = price.rolling(20).mean()

In [190]: mstd = price.rolling(20).std()

In [191]: plt.figure()
Out[191]: <Figure size 640x480 with 0 Axes>

In [192]: plt.plot(price.index, price, 'k')

In [193]: plt.plot(ma.index, ma, 'b')

In [194]: plt.fill_between(mstd.index, ma - 2 * mstd, ma + 2 * mstd,
   .....:                  color='b', alpha=0.2)
   .....:

3.11. Visualization 653


This file is meant for personal use by [email protected] only.
Sharing or publishing the contents in part or full is liable for legal action.
pandas: powerful Python data analysis toolkit, Release 1.0.3

[email protected]
T56GZSRVAH

3.12 Computational tools

3.12.1 Statistical functions

Percent change

Series and DataFrame have a method pct_change() to compute the percent change over a given number of
periods (using fill_method to fill NA/null values before computing the percent change).

In [1]: ser = pd.Series(np.random.randn(8))

In [2]: ser.pct_change()
Out[2]:
0 NaN
1 -1.602976
2 4.334938
3 -0.247456
4 -2.067345
5 -1.142903
6 -1.688214
7 -9.759729
dtype: float64
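
The fill_method handling matters when the input contains missing values. A small illustrative sketch (the values below are assumed, not part of the guide's dataset):

import numpy as np
import pandas as pd

s_missing = pd.Series([1.0, np.nan, 2.0])

# Default fill_method='pad' forward-fills the NaN before computing the change,
# so the middle entry shows a 0% change: [NaN, 0.0, 1.0]
s_missing.pct_change()

# With fill_method=None the NaN propagates into the result: [NaN, NaN, NaN]
s_missing.pct_change(fill_method=None)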


In [3]: df = pd.DataFrame(np.random.randn(10, 4))

In [4]: df.pct_change(periods=3)
Out[4]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 -0.218320 -1.054001 1.987147 -0.510183
4 -0.439121 -1.816454 0.649715 -4.822809
5 -0.127833 -3.042065 -5.866604 -1.776977
6 -2.596833 -1.959538 -2.111697 -3.798900
7 -0.117826 -2.169058 0.036094 -0.067696
8 2.492606 -1.357320 -1.205802 -1.558697
9 -1.012977 2.324558 -1.003744 -0.371806

Covariance

Series.cov() can be used to compute covariance between series (excluding missing values).

In [5]: s1 = pd.Series(np.random.randn(1000))

In [6]: s2 = pd.Series(np.random.randn(1000))

In [7]: s1.cov(s2)
Out[7]: 0.0006801088174310803
Analogously, DataFrame.cov() can be used to compute pairwise covariances among the series in the DataFrame, also excluding NA/null values.

Note: Assuming the missing data are missing at random this results in an estimate for the covariance matrix which
is unbiased. However, for many applications this estimate may not be acceptable because the estimated covariance
matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values
which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more
details.

In [8]: frame = pd.DataFrame(np.random.randn(1000, 5),


...: columns=['a', 'b', 'c', 'd', 'e'])
...:

In [9]: frame.cov()
Out[9]:
a b c d e
a 1.000882 -0.003177 -0.002698 -0.006889 0.031912
b -0.003177 1.024721 0.000191 0.009212 0.000857
c -0.002698 0.000191 0.950735 -0.031743 -0.005087
d -0.006889 0.009212 -0.031743 1.002983 -0.047952
e 0.031912 0.000857 -0.005087 -0.047952 1.042487

DataFrame.cov also supports an optional min_periods keyword that specifies the required minimum number
of observations for each column pair in order to have a valid result.


In [10]: frame = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])

In [11]: frame.loc[frame.index[:5], 'a'] = np.nan

In [12]: frame.loc[frame.index[5:10], 'b'] = np.nan

In [13]: frame.cov()
Out[13]:
a b c
a 1.123670 -0.412851 0.018169
b -0.412851 1.154141 0.305260
c 0.018169 0.305260 1.301149

In [14]: frame.cov(min_periods=12)
Out[14]:
a b c
a 1.123670 NaN 0.018169
b NaN 1.154141 0.305260
c 0.018169 0.305260 1.301149

Correlation

Correlation may be computed using the corr() method. Using the method parameter, several methods for com-
puting correlations are provided:

Method name   Description
pearson       Standard correlation coefficient (default)
kendall       Kendall Tau correlation coefficient
spearman      Spearman rank correlation coefficient

All of these are currently computed using pairwise complete observations. Wikipedia has articles covering the above
correlation coefficients:
• Pearson correlation coefficient
• Kendall rank correlation coefficient
• Spearman’s rank correlation coefficient

Note: Please see the caveats associated with this method of calculating correlation matrices in the covariance section.

In [15]: frame = pd.DataFrame(np.random.randn(1000, 5),


....: columns=['a', 'b', 'c', 'd', 'e'])
....:

In [16]: frame.iloc[::2] = np.nan

# Series with Series


In [17]: frame['a'].corr(frame['b'])
Out[17]: 0.013479040400098787

In [18]: frame['a'].corr(frame['b'], method='spearman')


Out[18]: -0.007289885159540637

# Pairwise correlation of DataFrame columns


In [19]: frame.corr()
Out[19]:
a b c d e
a 1.000000 0.013479 -0.049269 -0.042239 -0.028525
b 0.013479 1.000000 -0.020433 -0.011139 0.005654
c -0.049269 -0.020433 1.000000 0.018587 -0.054269
d -0.042239 -0.011139 0.018587 1.000000 -0.017060
e -0.028525 0.005654 -0.054269 -0.017060 1.000000

Note that non-numeric columns will be automatically excluded from the correlation calculation.
Like cov, corr also supports the optional min_periods keyword:

In [20]: frame = pd.DataFrame(np.random.randn(20, 3), columns=['a', 'b', 'c'])

In [21]: frame.loc[frame.index[:5], 'a'] = np.nan

In [22]: frame.loc[frame.index[5:10], 'b'] = np.nan

In [23]: frame.corr()
Out[23]:
a b c
a 1.000000 -0.121111 0.069544
b -0.121111 1.000000 0.051742
c 0.069544 0.051742 1.000000

In [24]: frame.corr(min_periods=12)
Out[24]:
a b c
a 1.000000 NaN 0.069544
b NaN 1.000000 0.051742
c 0.069544 0.051742 1.000000

New in version 0.24.0.


The method argument can also be a callable for a generic correlation calculation. In this case, it should be a single
function that produces a single value from two ndarray inputs. Suppose we wanted to compute the correlation based
on histogram intersection:

# histogram intersection
In [25]: def histogram_intersection(a, b):
   ....:     return np.minimum(np.true_divide(a, a.sum()),
   ....:                       np.true_divide(b, b.sum())).sum()
   ....:

In [26]: frame.corr(method=histogram_intersection)
Out[26]:
a b c
a 1.000000 -6.404882 -2.058431
b -6.404882 1.000000 -19.255743
c -2.058431 -19.255743 1.000000

A related method corrwith() is implemented on DataFrame to compute the correlation between like-labeled Series
contained in different DataFrame objects.


In [27]: index = ['a', 'b', 'c', 'd', 'e']

In [28]: columns = ['one', 'two', 'three', 'four']

In [29]: df1 = pd.DataFrame(np.random.randn(5, 4), index=index, columns=columns)

In [30]: df2 = pd.DataFrame(np.random.randn(4, 4), index=index[:4], columns=columns)

In [31]: df1.corrwith(df2)
Out[31]:
one -0.125501
two -0.493244
three 0.344056
four 0.004183
dtype: float64

In [32]: df2.corrwith(df1, axis=1)


Out[32]:
a -0.675817
b 0.458296
c 0.190809
d -0.186275
e NaN
dtype: float64

Data ranking

The rank() method produces a data ranking with ties being assigned the mean of the ranks (by default) for the group:
In [33]: s = pd.Series(np.random.randn(5), index=list('abcde'))

In [34]: s['d'] = s['b'] # so there's a tie

In [35]: s.rank()
Out[35]:
a 5.0
b 2.5
c 1.0
d 2.5
e 4.0
dtype: float64

rank() is also a DataFrame method and can rank either the rows (axis=0) or the columns (axis=1). NaN values
are excluded from the ranking.
In [36]: df = pd.DataFrame(np.random.randn(10, 6))

In [37]: df[4] = df[2][:5] # some ties

In [38]: df
Out[38]:
0 1 2 3 4 5
0 -0.904948 -1.163537 -1.457187 0.135463 -1.457187 0.294650
1 -0.976288 -0.244652 -0.748406 -0.999601 -0.748406 -0.800809
2 0.401965 1.460840 1.256057 1.308127 1.256057 0.876004
3 0.205954 0.369552 -0.669304 0.038378 -0.669304 1.140296
4 -0.477586 -0.730705 -1.129149 -0.601463 -1.129149 -0.211196
5 -1.092970 -0.689246 0.908114 0.204848 NaN 0.463347
6 0.376892 0.959292 0.095572 -0.593740 NaN -0.069180
7 -1.002601 1.957794 -0.120708 0.094214 NaN -1.467422
8 -0.547231 0.664402 -0.519424 -0.073254 NaN -1.263544
9 -0.250277 -0.237428 -1.056443 0.419477 NaN 1.375064

In [39]: df.rank(1)
Out[39]:
0 1 2 3 4 5
0 4.0 3.0 1.5 5.0 1.5 6.0
1 2.0 6.0 4.5 1.0 4.5 3.0
2 1.0 6.0 3.5 5.0 3.5 2.0
3 4.0 5.0 1.5 3.0 1.5 6.0
4 5.0 3.0 1.5 4.0 1.5 6.0
5 1.0 2.0 5.0 3.0 NaN 4.0
6 4.0 5.0 3.0 1.0 NaN 2.0
7 2.0 5.0 3.0 4.0 NaN 1.0
8 2.0 5.0 3.0 4.0 NaN 1.0
9 2.0 3.0 1.0 4.0 NaN 5.0

rank optionally takes a parameter ascending, which is True by default; when False, data is reverse-ranked, with larger values assigned a smaller rank.
rank supports different tie-breaking methods, specified with the method parameter:
• average : average rank of tied group
• min : lowest rank in the group
• max : highest rank in the group
• first : ranks assigned in the order they appear in the array
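
A brief illustrative sketch of how the method parameter changes tie handling (the values are assumed for illustration):

import pandas as pd

s_ties = pd.Series([7, 7, 5, 9])

s_ties.rank()                # 'average' (default): the tied 7s share rank 2.5
s_ties.rank(method='min')    # both 7s receive the lowest rank of the tie, 2.0
s_ties.rank(method='first')  # ties broken by order of appearance: 2.0, then 3.0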

3.12.2 Window Functions

For working with data, a number of window functions are provided for computing common window or rolling statistics.
Among these are count, sum, mean, median, correlation, variance, covariance, standard deviation, skewness, and
kurtosis.
The rolling() and expanding() functions can be used directly from DataFrameGroupBy objects; see the groupby docs and the short sketch below.
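
A minimal sketch of the grouped usage (group labels and values assumed for illustration):

import pandas as pd

df_groups = pd.DataFrame({'key': ['a', 'a', 'a', 'b', 'b', 'b'],
                          'value': [1, 2, 3, 4, 5, 6]})

# A rolling sum computed separately within each group; the result is indexed
# by (key, original row label), e.g. group 'a' gives [NaN, 3.0, 5.0].
df_groups.groupby('key')['value'].rolling(window=2).sum()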

Note: The API for window statistics is quite similar to the way one works with GroupBy objects, see the documen-
tation here.

We work with rolling, expanding and exponentially weighted data through the corresponding objects,
Rolling, Expanding and EWM.

In [40]: s = pd.Series(np.random.randn(1000),
....: index=pd.date_range('1/1/2000', periods=1000))
....:

In [41]: s = s.cumsum()


In [42]: s
Out[42]:
2000-01-01 -0.268824
2000-01-02 -1.771855
2000-01-03 -0.818003
2000-01-04 -0.659244
2000-01-05 -1.942133
...
2002-09-22 -67.457323
2002-09-23 -69.253182
2002-09-24 -70.296818
2002-09-25 -70.844674
2002-09-26 -72.475016
Freq: D, Length: 1000, dtype: float64

These are created from methods on Series and DataFrame.


In [43]: r = s.rolling(window=60)

In [44]: r
Out[44]: Rolling [window=60,center=False,axis=0]

These objects provide tab-completion of the available methods and properties.

In [14]: r.<TAB>  # noqa: E225, E999
r.agg          r.apply        r.count        r.exclusions   r.max          r.median       r.name         r.skew         r.sum
r.aggregate    r.corr         r.cov          r.kurt         r.mean         r.min          r.quantile     r.std          r.var

Generally these methods all have the same interface. They all accept the following arguments:
• window: size of moving window
• min_periods: threshold of non-null data points to require (otherwise result is NA)
• center: boolean, whether to set the labels at the center (default is False)
We can then call methods on these rolling objects. These return like-indexed objects:
In [45]: r.mean()
Out[45]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 NaN
...
2002-09-22 -62.914971
2002-09-23 -63.061867
2002-09-24 -63.213876
2002-09-25 -63.375074
2002-09-26 -63.539734
Freq: D, Length: 1000, dtype: float64

In [46]: s.plot(style='k--')
Out[46]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1fda4810>


In [47]: r.mean().plot(style='k')
Out[47]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1fda4810>


They can also be applied to DataFrame objects. This is really just syntactic sugar for applying the moving window
operator to all of the DataFrame’s columns:

In [48]: df = pd.DataFrame(np.random.randn(1000, 4),


....: index=pd.date_range('1/1/2000', periods=1000),
....: columns=['A', 'B', 'C', 'D'])
....:

In [49]: df = df.cumsum()

In [50]: df.rolling(window=60).sum().plot(subplots=True)
Out[50]:
array([<matplotlib.axes._subplots.AxesSubplot object at 0x7f3d1f5fd450>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f3d1f616450>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f3d1f5ac810>,
<matplotlib.axes._subplots.AxesSubplot object at 0x7f3d1f5c2b90>],
dtype=object)


Method summary

We provide a number of common statistical functions:

Method Description
count() Number of non-null observations
sum() Sum of values
mean() Mean of values
median() Arithmetic median of values
min() Minimum
max() Maximum
std() Bessel-corrected sample standard deviation
var() Unbiased variance
skew() Sample skewness (3rd moment)
kurt() Sample kurtosis (4th moment)
quantile() Sample quantile (value at %)
apply() Generic apply
cov() Unbiased covariance (binary)
corr() Correlation (binary)


Rolling Apply

The apply() function takes an extra func argument and performs generic rolling computations. The func argu-
ment should be a single function that produces a single value from an ndarray input. Suppose we wanted to compute
the mean absolute deviation on a rolling basis:

In [51]: def mad(x):
   ....:     return np.fabs(x - x.mean()).mean()
   ....:

In [52]: s.rolling(window=60).apply(mad, raw=True).plot(style='k')
Out[52]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1f45ea90>


New in version 1.0.


Additionally, apply() can leverage Numba if installed as an optional dependency. The apply aggregation can be executed using Numba by specifying the engine='numba' and engine_kwargs arguments (raw must also be set to True). Numba will be applied in potentially two routines:
1. If func is a standard Python function, the engine will JIT the passed function. func can also be a JITed function, in which case the engine will not JIT the function again.
2. The engine will JIT the for loop where the apply function is applied to each window.
The engine_kwargs argument is a dictionary of keyword arguments that will be passed into the numba.jit decorator. These keyword arguments will be applied to both the passed function (if a standard Python function) and the apply for loop over each window. Currently only nogil, nopython, and parallel are supported, and their default values are set to False, True and False respectively.

Note: In terms of performance, the first time a function is run using the Numba engine will be slow as Numba
will have some function compilation overhead. However, rolling objects will cache the function and subsequent
calls will be fast. In general, the Numba engine is performant with a larger amount of data points (e.g. 1+ million).

In [1]: data = pd.Series(range(1_000_000))

In [2]: roll = data.rolling(10)

In [3]: def f(x):
   ...:     return np.sum(x) + 5
   ...:

# Run the first time, compilation time will affect performance
In [4]: %timeit -r 1 -n 1 roll.apply(f, engine='numba', raw=True)  # noqa: E225
1.23 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

# Function is cached and performance will improve
In [5]: %timeit roll.apply(f, engine='numba', raw=True)
188 ms ± 1.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [6]: %timeit roll.apply(f, engine='cython', raw=True)
3.92 s ± 59 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Rolling windows

Passing win_type to .rolling generates a generic rolling window computation that is weighted according to the win_type. The following methods are available:
Method Description
sum() Sum of values
mean() Mean of values

The weights used in the window are specified by the win_type keyword. The list of recognized types are the
scipy.signal window functions:
• boxcar
• triang
• blackman
• hamming
• bartlett
• parzen
• bohman
• blackmanharris
• nuttall
• barthann
• kaiser (needs beta)
• gaussian (needs std)
• general_gaussian (needs power, width)
• slepian (needs width)
• exponential (needs tau).

In [53]: ser = pd.Series(np.random.randn(10),


....: index=pd.date_range('1/1/2000', periods=10))
....:

In [54]: ser.rolling(window=5, win_type='triang').mean()


Out[54]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -1.037870
2000-01-06 -0.767705
2000-01-07 -0.383197
2000-01-08 -0.395513
2000-01-09 -0.558440
2000-01-10 -0.672416
Freq: D, dtype: float64

Note that the boxcar window is equivalent to mean().

In [55]: ser.rolling(window=5, win_type='boxcar').mean()


Out[55]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -0.841164
2000-01-06 -0.779948
2000-01-07 -0.565487
2000-01-08 -0.502815
2000-01-09 -0.553755
2000-01-10 -0.472211
Freq: D, dtype: float64

In [56]: ser.rolling(window=5).mean()
Out[56]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -0.841164
2000-01-06 -0.779948
2000-01-07 -0.565487
2000-01-08 -0.502815
2000-01-09 -0.553755
2000-01-10 -0.472211
Freq: D, dtype: float64

For some windowing functions, additional parameters must be specified:

In [57]: ser.rolling(window=5, win_type='gaussian').mean(std=0.1)


Out[57]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -1.309989
2000-01-06 -1.153000
2000-01-07 0.606382
2000-01-08 -0.681101
2000-01-09 -0.289724
2000-01-10 -0.996632
Freq: D, dtype: float64

Note: For .sum() with a win_type, there is no normalization done to the weights for the window. Passing custom
weights of [1, 1, 1] will yield a different result than passing weights of [2, 2, 2], for example. When passing
a win_type instead of explicitly specifying the weights, the weights are already normalized so that the largest weight
is 1.
In contrast, the nature of the .mean() calculation is such that the weights are normalized with respect to each other.
Weights of [1, 1, 1] and [2, 2, 2] yield the same result.
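
A small sketch of this difference, assuming scipy's triang weights for a window of 3 are [0.5, 1.0, 0.5] (the series values are chosen only for illustration; win_type requires scipy to be installed):

import pandas as pd

ser2 = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# Weighted sum: the weights are applied as-is, e.g. 0.5*1 + 1.0*2 + 0.5*3 = 4.0
ser2.rolling(window=3, win_type='triang').sum()

# Weighted mean: divided by the sum of the weights (2.0), e.g. 4.0 / 2.0 = 2.0
ser2.rolling(window=3, win_type='triang').mean()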

Time-aware rolling

It is possible to pass an offset (or convertible) to a .rolling() method and have it produce variable sized windows
based on the passed time window. For each time point, this includes all preceding values occurring within the indicated
time delta.
This can be particularly useful for a non-regular time frequency index.
In [58]: dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
....: index=pd.date_range('20130101 09:00:00',
....: periods=5,
....: freq='s'))
....:

In [59]: dft
Out[59]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 2.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 4.0

This is a regular frequency index. Using an integer window parameter works to roll along the window frequency.

In [60]: dft.rolling(2).sum()
Out[60]:
B
2013-01-01 09:00:00 NaN
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 NaN

In [61]: dft.rolling(2, min_periods=1).sum()


Out[61]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0

Specifying an offset allows a more intuitive specification of the rolling frequency.


In [62]: dft.rolling('2s').sum()
Out[62]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0

Using a non-regular, but still monotonic index, rolling with an integer window does not impart any special calculation.
In [63]: dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
....: index=pd.Index([pd.Timestamp('20130101 09:00:00'),
....: pd.Timestamp('20130101 09:00:02'),
....: pd.Timestamp('20130101 09:00:03'),
....: pd.Timestamp('20130101 09:00:05'),
....: pd.Timestamp('20130101 09:00:06')],
....: name='foo'))
....:
In [64]: dft
Out[64]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0

In [65]: dft.rolling(2).sum()
Out[65]:
B
foo
2013-01-01 09:00:00 NaN
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 NaN

Using the time-specification generates variable windows for this sparse data.
In [66]: dft.rolling('2s').sum()
Out[66]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0

Furthermore, we now allow an optional on parameter to specify a column (rather than the default of the index) in a
DataFrame.
In [67]: dft = dft.reset_index()

In [68]: dft
Out[68]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 2.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0

In [69]: dft.rolling('2s', on='foo').sum()


Out[69]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 3.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
Custom window rolling

New in version 1.0.


In addition to accepting an integer or offset as a window argument, rolling also accepts a BaseIndexer subclass that allows a user to define a custom method for calculating window bounds. The BaseIndexer subclass will need to define a get_window_bounds method that returns a tuple of two arrays, the first being the starting indices of the windows and the second being the ending indices of the windows. Additionally, num_values, min_periods, center, and closed will automatically be passed to get_window_bounds, and the defined method must always accept these arguments.
For example, if we have the following DataFrame:
In [70]: use_expanding = [True, False, True, False, True]

In [71]: use_expanding
Out[71]: [True, False, True, False, True]

In [72]: df = pd.DataFrame({'values': range(5)})

In [73]: df
Out[73]:
values
0 0
1 1
2 2
3 3
4 4


and we want to use an expanding window where use_expanding is True and otherwise a window of size 1, we can create the following BaseIndexer subclass:
In [2]: from pandas.api.indexers import BaseIndexer
   ...:
   ...: class CustomIndexer(BaseIndexer):
   ...:
   ...:     def get_window_bounds(self, num_values, min_periods, center, closed):
   ...:         start = np.empty(num_values, dtype=np.int64)
   ...:         end = np.empty(num_values, dtype=np.int64)
   ...:         for i in range(num_values):
   ...:             if self.use_expanding[i]:
   ...:                 start[i] = 0
   ...:                 end[i] = i + 1
   ...:             else:
   ...:                 start[i] = i
   ...:                 end[i] = i + self.window_size
   ...:         return start, end
   ...:

In [3]: indexer = CustomIndexer(window_size=1, use_expanding=use_expanding)

In [4]: df.rolling(indexer).sum()
Out[4]:
values
0 0.0
1 1.0
2 3.0
3 3.0
4 10.0

Rolling window endpoints

The inclusion of the interval endpoints in rolling window calculations can be specified with the closed parameter:

closed    Description             Default for
right     close right endpoint    time-based windows
left      close left endpoint
both      close both endpoints    fixed windows
neither   open endpoints

For example, having the right endpoint open is useful in many problems that require that there is no contamination
from present information back to past information. This allows the rolling window to compute statistics “up to that
point in time”, but not including that point in time.
In [74]: df = pd.DataFrame({'x': 1},
....: index=[pd.Timestamp('20130101 09:00:01'),
....: pd.Timestamp('20130101 09:00:02'),
....: pd.Timestamp('20130101 09:00:03'),
....: pd.Timestamp('20130101 09:00:04'),
....: pd.Timestamp('20130101 09:00:06')])
....:

In [75]: df["right"] = df.rolling('2s', closed='right').x.sum() # default


In [76]: df["both"] = df.rolling('2s', closed='both').x.sum()

In [77]: df["left"] = df.rolling('2s', closed='left').x.sum()

In [78]: df["neither"] = df.rolling('2s', closed='neither').x.sum()

In [79]: df
Out[79]:
x right both left neither
2013-01-01 09:00:01 1 1.0 1.0 NaN NaN
2013-01-01 09:00:02 1 2.0 2.0 1.0 1.0
2013-01-01 09:00:03 1 2.0 3.0 2.0 1.0
2013-01-01 09:00:04 1 2.0 3.0 2.0 1.0
2013-01-01 09:00:06 1 1.0 2.0 1.0 NaN

Currently, this feature is only implemented for time-based windows. For fixed windows, the closed parameter cannot
be set and the rolling window will always have both endpoints closed.

Time-aware rolling vs. resampling

Using .rolling() with a time-based index is quite similar to resampling. Both operate and perform reductive operations on time-indexed pandas objects.
When using .rolling() with an offset, the offset is a time-delta. Take a backwards-in-time looking window and aggregate all of the values in that window (including the end-point, but not the start-point). This is the new value at that point in the result. These are variable sized windows in time-space for each point of the input, and you get a result of the same size as the input.
When using .resample() with an offset, construct a new index that is the frequency of the offset. For each frequency bin, aggregate points from the input within a backwards-in-time looking window that fall in that bin. The result of this aggregation is the output for that frequency point. The windows are fixed size in the frequency space, and your result will have the shape of a regular frequency between the min and the max of the original input object.
To summarize, .rolling() is a time-based window operation, while .resample() is a frequency-based window operation.
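
A compact sketch of the difference on the same Series (timestamps and values assumed for illustration):

import pandas as pd

s_irregular = pd.Series([1, 2, 3, 4],
                        index=pd.to_datetime(['2020-01-01 09:00:00',
                                              '2020-01-01 09:00:01',
                                              '2020-01-01 09:00:03',
                                              '2020-01-01 09:00:04']))

# One output row per input row; each value aggregates a trailing 2-second
# window ending at that row: [1.0, 3.0, 3.0, 7.0]
s_irregular.rolling('2s').sum()

# One output row per 2-second bin of a new regular index:
# 09:00:00 -> 3, 09:00:02 -> 3, 09:00:04 -> 4
s_irregular.resample('2s').sum()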

Centering windows

By default the labels are set to the right edge of the window, but a center keyword is available so the labels can be
set at the center.

In [80]: ser.rolling(window=5).mean()
Out[80]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 NaN
2000-01-04 NaN
2000-01-05 -0.841164
2000-01-06 -0.779948
2000-01-07 -0.565487
2000-01-08 -0.502815
2000-01-09 -0.553755
2000-01-10 -0.472211
Freq: D, dtype: float64

In [81]: ser.rolling(window=5, center=True).mean()


Out[81]:
2000-01-01 NaN
2000-01-02 NaN
2000-01-03 -0.841164
2000-01-04 -0.779948
2000-01-05 -0.565487
2000-01-06 -0.502815
2000-01-07 -0.553755
2000-01-08 -0.472211
2000-01-09 NaN
2000-01-10 NaN
Freq: D, dtype: float64

Binary window functions

cov() and corr() can compute moving window statistics about two Series or any combination of DataFrame/
Series or DataFrame/DataFrame. Here is the behavior in each case:
• two Series: compute the statistic for the pairing.
• DataFrame/Series: compute the statistics for each column of the DataFrame with the passed Series, thus
returning a DataFrame.
• DataFrame/DataFrame: by default compute the statistic for matching column names, returning a DataFrame. If the keyword argument pairwise=True is passed, then the statistic is computed for each pair of columns, returning a MultiIndexed DataFrame whose index contains the dates in question (see the next section).
For example:

In [82]: df = pd.DataFrame(np.random.randn(1000, 4),


....: index=pd.date_range('1/1/2000', periods=1000),
....: columns=['A', 'B', 'C', 'D'])
....:

In [83]: df = df.cumsum()

In [84]: df2 = df[:20]

In [85]: df2.rolling(window=5).corr(df2['B'])
Out[85]:
A B C D
2000-01-01 NaN NaN NaN NaN
2000-01-02 NaN NaN NaN NaN
2000-01-03 NaN NaN NaN NaN
2000-01-04 NaN NaN NaN NaN
2000-01-05 0.768775 1.0 -0.977990 0.800252
... ... ... ... ...
2000-01-16 0.691078 1.0 0.807450 -0.939302
2000-01-17 0.274506 1.0 0.582601 -0.902954
2000-01-18 0.330459 1.0 0.515707 -0.545268
2000-01-19 0.046756 1.0 -0.104334 -0.419799
2000-01-20 -0.328241 1.0 -0.650974 -0.777777

[20 rows x 4 columns]

Computing rolling pairwise covariances and correlations

In financial data analysis and other fields it’s common to compute covariance and correlation matrices for a collection
of time series. Often one is also interested in moving-window covariance and correlation matrices. This can be done
by passing the pairwise keyword argument, which in the case of DataFrame inputs will yield a MultiIndexed
DataFrame whose index are the dates in question. In the case of a single DataFrame argument the pairwise
argument can even be omitted:

Note: Missing values are ignored and each entry is computed using the pairwise complete observations. Please see
the covariance section for caveats associated with this method of calculating covariance and correlation matrices.

In [86]: covs = (df[['B', 'C', 'D']].rolling(window=50)
   ....:         .cov(df[['A', 'B', 'C']], pairwise=True))
   ....:

In [87]: covs.loc['2002-09-22':]
Out[87]:
B C D
2002-09-22 A 1.367467 8.676734 -8.047366
B 3.067315 0.865946 -1.052533
C 0.865946 7.739761 -4.943924
2002-09-23 A 0.910343 8.669065 -8.443062
B 2.625456 0.565152 -0.907654
C 0.565152 7.825521 -5.367526
2002-09-24 A 0.463332 8.514509 -8.776514
B 2.306695 0.267746 -0.732186
C 0.267746 7.771425 -5.696962
2002-09-25 A 0.467976 8.198236 -9.162599
B 2.307129 0.267287 -0.754080
C 0.267287 7.466559 -5.822650
2002-09-26 A 0.545781 7.899084 -9.326238
B 2.311058 0.322295 -0.844451
C 0.322295 7.038237 -5.684445

In [88]: correls = df.rolling(window=50).corr()

In [89]: correls.loc['2002-09-22':]
Out[89]:
A B C D
2002-09-22 A 1.000000 0.186397 0.744551 -0.769767
B 0.186397 1.000000 0.177725 -0.240802
C 0.744551 0.177725 1.000000 -0.712051
D -0.769767 -0.240802 -0.712051 1.000000
2002-09-23 A 1.000000 0.134723 0.743113 -0.758758
... ... ... ... ...
2002-09-25 D -0.739160 -0.164179 -0.704686 1.000000
2002-09-26 A 1.000000 0.087756 0.727792 -0.736562
B 0.087756 1.000000 0.079913 -0.179477
C 0.727792 0.079913 1.000000 -0.692303
D -0.736562 -0.179477 -0.692303 1.000000

[20 rows x 4 columns]

You can efficiently retrieve the time series of correlations between two columns by reshaping and indexing:
In [90]: correls.unstack(1)[('A', 'C')].plot()
Out[90]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1f56be90>


3.12.3 Aggregation

Once the Rolling, Expanding or EWM objects have been created, several methods are available to perform multiple
computations on the data. These operations are similar to the aggregating API, groupby API, and resample API.
In [91]: dfa = pd.DataFrame(np.random.randn(1000, 3),
....: index=pd.date_range('1/1/2000', periods=1000),
....: columns=['A', 'B', 'C'])
....:

In [92]: r = dfa.rolling(window=60, min_periods=1)

In [93]: r
Out[93]: Rolling [window=60,min_periods=1,center=False,axis=0]


We can aggregate by passing a function to the entire DataFrame, or select a Series (or multiple Series) via standard
__getitem__.

In [94]: r.aggregate(np.sum)
Out[94]:
A B C
2000-01-01 -0.289838 -0.370545 -1.284206
2000-01-02 -0.216612 -1.675528 -1.169415
2000-01-03 1.154661 -1.634017 -1.566620
2000-01-04 2.969393 -4.003274 -1.816179
2000-01-05 4.690630 -4.682017 -2.717209
... ... ... ...
2002-09-22 2.860036 -9.270337 6.415245
2002-09-23 3.510163 -8.151439 5.177219
2002-09-24 6.524983 -10.168078 5.792639
2002-09-25 6.409626 -9.956226 5.704050
2002-09-26 5.093787 -7.074515 6.905823

[1000 rows x 3 columns]

In [95]: r['A'].aggregate(np.sum)
Out[95]:
2000-01-01 -0.289838
2000-01-02 -0.216612
2000-01-03 1.154661
2000-01-04 2.969393
2000-01-05 4.690630
...
2002-09-22 2.860036
2002-09-23 3.510163
2002-09-24 6.524983
2002-09-25 6.409626
2002-09-26 5.093787
Freq: D, Name: A, Length: 1000, dtype: float64

In [96]: r[['A', 'B']].aggregate(np.sum)


Out[96]:
A B
2000-01-01 -0.289838 -0.370545
2000-01-02 -0.216612 -1.675528
2000-01-03 1.154661 -1.634017
2000-01-04 2.969393 -4.003274
2000-01-05 4.690630 -4.682017
... ... ...
2002-09-22 2.860036 -9.270337
2002-09-23 3.510163 -8.151439
2002-09-24 6.524983 -10.168078
2002-09-25 6.409626 -9.956226
2002-09-26 5.093787 -7.074515

[1000 rows x 2 columns]

As you can see, the result of the aggregation will have the selected columns, or all columns if none are selected.


Applying multiple functions

With windowed Series you can also pass a list of functions to do aggregation with, outputting a DataFrame:

In [97]: r['A'].agg([np.sum, np.mean, np.std])


Out[97]:
sum mean std
2000-01-01 -0.289838 -0.289838 NaN
2000-01-02 -0.216612 -0.108306 0.256725
2000-01-03 1.154661 0.384887 0.873311
2000-01-04 2.969393 0.742348 1.009734
2000-01-05 4.690630 0.938126 0.977914
... ... ... ...
2002-09-22 2.860036 0.047667 1.132051
2002-09-23 3.510163 0.058503 1.134296
2002-09-24 6.524983 0.108750 1.144204
2002-09-25 6.409626 0.106827 1.142913
2002-09-26 5.093787 0.084896 1.151416

[1000 rows x 3 columns]

On a windowed DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:

In [98]: r.agg([np.sum, np.mean])


Out[98]:
A B C
sum mean sum mean sum mean
2000-01-01 -0.289838 -0.289838 -0.370545 -0.370545 -1.284206 -1.284206
2000-01-02 -0.216612 -0.108306 -1.675528 -0.837764 -1.169415 -0.584708
2000-01-03 1.154661 0.384887 -1.634017 -0.544672 -1.566620 -0.522207
2000-01-04 2.969393 0.742348 -4.003274 -1.000819 -1.816179 -0.454045
2000-01-05 4.690630 0.938126 -4.682017 -0.936403 -2.717209 -0.543442
... ... ... ... ... ... ...
2002-09-22 2.860036 0.047667 -9.270337 -0.154506 6.415245 0.106921
2002-09-23 3.510163 0.058503 -8.151439 -0.135857 5.177219 0.086287
2002-09-24 6.524983 0.108750 -10.168078 -0.169468 5.792639 0.096544
2002-09-25 6.409626 0.106827 -9.956226 -0.165937 5.704050 0.095068
2002-09-26 5.093787 0.084896 -7.074515 -0.117909 6.905823 0.115097

[1000 rows x 6 columns]

Passing a dict of functions has different behavior by default, see the next section.

Applying different functions to DataFrame columns

By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:

In [99]: r.agg({'A': np.sum, 'B': lambda x: np.std(x, ddof=1)})


Out[99]:
A B
2000-01-01 -0.289838 NaN
2000-01-02 -0.216612 0.660747
2000-01-03 1.154661 0.689929
2000-01-04 2.969393 1.072199
2000-01-05 4.690630 0.939657
... ... ...
2002-09-22 2.860036 1.113208
2002-09-23 3.510163 1.132381
2002-09-24 6.524983 1.080963
2002-09-25 6.409626 1.082911
2002-09-26 5.093787 1.136199

[1000 rows x 2 columns]

The function names can also be strings. In order for a string to be valid, it must be implemented on the windowed object:

In [100]: r.agg({'A': 'sum', 'B': 'std'})


Out[100]:
A B
2000-01-01 -0.289838 NaN
2000-01-02 -0.216612 0.660747
2000-01-03 1.154661 0.689929
2000-01-04 2.969393 1.072199
2000-01-05 4.690630 0.939657
... ... ...
2002-09-22 2.860036 1.113208
2002-09-23 3.510163 1.132381
2002-09-24 6.524983 1.080963
2002-09-25 6.409626 1.082911
2002-09-26 5.093787 1.136199

[1000 rows x 2 columns]


Furthermore you can pass a nested dict to indicate different aggregations on different columns.

In [101]: r.agg({'A': ['sum', 'std'], 'B': ['mean', 'std']})


Out[101]:
A B
sum std mean std
2000-01-01 -0.289838 NaN -0.370545 NaN
2000-01-02 -0.216612 0.256725 -0.837764 0.660747
2000-01-03 1.154661 0.873311 -0.544672 0.689929
2000-01-04 2.969393 1.009734 -1.000819 1.072199
2000-01-05 4.690630 0.977914 -0.936403 0.939657
... ... ... ... ...
2002-09-22 2.860036 1.132051 -0.154506 1.113208
2002-09-23 3.510163 1.134296 -0.135857 1.132381
2002-09-24 6.524983 1.144204 -0.169468 1.080963
2002-09-25 6.409626 1.142913 -0.165937 1.082911
2002-09-26 5.093787 1.151416 -0.117909 1.136199

[1000 rows x 4 columns]


3.12.4 Expanding windows

A common alternative to rolling statistics is to use an expanding window, which yields the value of the statistic with
all the data available up to that point in time.
These follow a similar interface to .rolling, with the .expanding method returning an Expanding object.
As these calculations are a special case of rolling statistics, they are implemented in pandas such that the following
two calls are equivalent:

In [102]: df.rolling(window=len(df), min_periods=1).mean()[:5]


Out[102]:
A B C D
2000-01-01 0.314226 -0.001675 0.071823 0.892566
2000-01-02 0.654522 -0.171495 0.179278 0.853361
2000-01-03 0.708733 -0.064489 -0.238271 1.371111
2000-01-04 0.987613 0.163472 -0.919693 1.566485
2000-01-05 1.426971 0.288267 -1.358877 1.808650

In [103]: df.expanding(min_periods=1).mean()[:5]
Out[103]:
A B C D
2000-01-01 0.314226 -0.001675 0.071823 0.892566
2000-01-02 0.654522 -0.171495 0.179278 0.853361
2000-01-03 0.708733 -0.064489 -0.238271 1.371111
2000-01-04 0.987613 0.163472 -0.919693 1.566485
2000-01-05 1.426971 0.288267 -1.358877 1.808650

These have a similar set of methods to .rolling methods.


Method summary

Function Description
count() Number of non-null observations
sum() Sum of values
mean() Mean of values
median() Arithmetic median of values
min() Minimum
max() Maximum
std() Unbiased standard deviation
var() Unbiased variance
skew() Unbiased skewness (3rd moment)
kurt() Unbiased kurtosis (4th moment)
quantile() Sample quantile (value at %)
apply() Generic apply
cov() Unbiased covariance (binary)
corr() Correlation (binary)

Aside from not having a window parameter, these functions have the same interfaces as their .rolling counter-
parts. Like above, the parameters they all accept are:
• min_periods: threshold of non-null data points to require. Defaults to minimum needed to compute statistic.
No NaNs will be output once min_periods non-null data points have been seen.
• center: boolean, whether to set the labels at the center (default is False).


Note: The output of the .rolling and .expanding methods does not return a NaN if there are at least min_periods non-null values in the current window. For example:
In [104]: sn = pd.Series([1, 2, np.nan, 3, np.nan, 4])

In [105]: sn
Out[105]:
0 1.0
1 2.0
2 NaN
3 3.0
4 NaN
5 4.0
dtype: float64

In [106]: sn.rolling(2).max()
Out[106]:
0 NaN
1 2.0
2 NaN
3 NaN
4 NaN
5 NaN
dtype: float64

In [107]: sn.rolling(2, min_periods=1).max()


Out[107]:
0 1.0
1 2.0
2 2.0
3 3.0
4 3.0
5 4.0
dtype: float64

In case of expanding functions, this differs from cumsum(), cumprod(), cummax(), and cummin(), which
return NaN in the output wherever a NaN is encountered in the input. In order to match the output of cumsum with
expanding, use fillna():
In [108]: sn.expanding().sum()
Out[108]:
0 1.0
1 3.0
2 3.0
3 6.0
4 6.0
5 10.0
dtype: float64

In [109]: sn.cumsum()
Out[109]:
0 1.0
1 3.0
2 NaN
3 6.0
4 NaN
5 10.0
dtype: float64

In [110]: sn.cumsum().fillna(method='ffill')
Out[110]:
0 1.0
1 3.0
2 3.0
3 6.0
4 6.0
5 10.0
dtype: float64

An expanding window statistic will be more stable (and less responsive) than its rolling window counterpart as the
increasing window size decreases the relative impact of an individual data point. As an example, here is the mean()
output for the previous time series dataset:

In [111]: s.plot(style='k--')
Out[111]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1f1f8d10>

In [112]: s.expanding().mean().plot(style='k')
Out[112]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1f1f8d10>

[email protected]
T56GZSRVAH

3.12. Computational tools 679


This file is meant for personal use by [email protected] only.
Sharing or publishing the contents in part or full is liable for legal action.
pandas: powerful Python data analysis toolkit, Release 1.0.3

3.12.5 Exponentially weighted windows

A related set of functions are exponentially weighted versions of several of the above statistics. A similar interface
to .rolling and .expanding is accessed through the .ewm method to receive an EWM object. A number of
expanding EW (exponentially weighted) methods are provided:

Function Description
mean() EW moving average
var() EW moving variance
std() EW moving standard deviation
corr() EW moving correlation
cov() EW moving covariance

In general, a weighted moving average is calculated as


y_t = \frac{\sum_{i=0}^{t} w_i x_{t-i}}{\sum_{i=0}^{t} w_i},

where 𝑥𝑡 is the input, 𝑦𝑡 is the result and the 𝑤𝑖 are the weights.
The EW functions support two variants of exponential weights. The default, adjust=True, uses the weights w_i = (1 - \alpha)^i, which gives

y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots + (1 - \alpha)^t x_0}{1 + (1 - \alpha) + (1 - \alpha)^2 + \dots + (1 - \alpha)^t}
When adjust=False is specified, moving averages are calculated as

y_0 = x_0
y_t = (1 - \alpha) y_{t-1} + \alpha x_t,

which is equivalent to using weights

w_i =
\begin{cases}
    \alpha (1 - \alpha)^i & \text{if } i < t \\
    (1 - \alpha)^i        & \text{if } i = t
\end{cases}

Note: These equations are sometimes written in terms of \alpha' = 1 - \alpha, e.g.

y_t = \alpha' y_{t-1} + (1 - \alpha') x_t.

The difference between the above two variants arises because we are dealing with series which have finite history.
Consider a series of infinite history, with adjust=True:
y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots}{1 + (1 - \alpha) + (1 - \alpha)^2 + \dots}

Noting that the denominator is a geometric series with initial term equal to 1 and a ratio of 1 - \alpha, we have

y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots}{\frac{1}{1 - (1 - \alpha)}}
    = [x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots] \, \alpha
    = \alpha x_t + [(1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \dots] \, \alpha
    = \alpha x_t + (1 - \alpha) [x_{t-1} + (1 - \alpha) x_{t-2} + \dots] \, \alpha
    = \alpha x_t + (1 - \alpha) y_{t-1}


which is the same expression as adjust=False above and therefore shows the equivalence of the two variants for
infinite series. When adjust=False, we have 𝑦0 = 𝑥0 and 𝑦𝑡 = 𝛼𝑥𝑡 + (1 − 𝛼)𝑦𝑡−1 . Therefore, there is an
assumption that 𝑥0 is not an ordinary value but rather an exponentially weighted moment of the infinite series up to
that point.
One must have 0 < 𝛼 ≤ 1, and while it is possible to pass 𝛼 directly, it’s often easier to think about either the span,
center of mass (com) or half-life of an EW moment:

\alpha =
\begin{cases}
    \frac{2}{s + 1},                          & \text{for span } s \geq 1 \\
    \frac{1}{1 + c},                          & \text{for center of mass } c \geq 0 \\
    1 - \exp\left(\frac{\log 0.5}{h}\right),  & \text{for half-life } h > 0
\end{cases}

One must specify precisely one of span, center of mass, half-life and alpha to the EW functions:
• Span corresponds to what is commonly called an “N-day EW moving average”.
• Center of mass has a more physical interpretation and can be thought of in terms of span: 𝑐 = (𝑠 − 1)/2.
• Half-life is the period of time for the exponential weight to reduce to one half.
• Alpha specifies the smoothing factor directly.
Here is an example for a univariate time series:

In [113]: s.plot(style='k--')
Out[113]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1f02c210>

In [114]: s.ewm(span=20).mean().plot(style='k')
Out[114]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1f02c210>
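
As a quick cross-check (a sketch on made-up data, not one of the guide's examples), the parameterizations agree whenever they imply the same alpha: span=19 corresponds to alpha = 2/(19+1) = 0.1 and com = (19-1)/2 = 9.

import numpy as np
import pandas as pd

u = pd.Series(np.random.randn(50))

by_span = u.ewm(span=19).mean()
by_com = u.ewm(com=9).mean()
by_alpha = u.ewm(alpha=0.1).mean()

# All three parameterizations produce the same exponentially weighted mean.
assert np.allclose(by_span, by_com) and np.allclose(by_span, by_alpha)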

[email protected]
T56GZSRVAH

3.12. Computational tools 681


This file is meant for personal use by [email protected] only.
Sharing or publishing the contents in part or full is liable for legal action.
pandas: powerful Python data analysis toolkit, Release 1.0.3

[email protected]
T56GZSRVAH

EWM has a min_periods argument, which has the same meaning it does for all the .expanding and .rolling
methods: no output values will be set until at least min_periods non-null values are encountered in the (expanding)
window.
EWM also has an ignore_na argument, which determines how intermediate null values affect the calculation of
the weights. When ignore_na=False (the default), weights are calculated based on absolute positions, so that
intermediate null values affect the result. When ignore_na=True, weights are calculated by ignoring intermediate
null values. For example, assuming adjust=True, if ignore_na=False, the weighted average of 3, NaN, 5
would be calculated as
\frac{(1 - \alpha)^2 \cdot 3 + 1 \cdot 5}{(1 - \alpha)^2 + 1}.

Whereas if ignore_na=True, the weighted average would be calculated as

\frac{(1 - \alpha) \cdot 3 + 1 \cdot 5}{(1 - \alpha) + 1}.
The var(), std(), and cov() functions have a bias argument, specifying whether the result should contain biased or unbiased statistics. For example, if bias=True, ewmvar(x) is calculated as ewmvar(x) = ewma(x**2) - ewma(x)**2; whereas if bias=False (the default), the biased variance statistics are scaled by debiasing factors

\frac{\left(\sum_{i=0}^{t} w_i\right)^2}{\left(\sum_{i=0}^{t} w_i\right)^2 - \sum_{i=0}^{t} w_i^2}.


(For 𝑤𝑖 = 1, this reduces to the usual 𝑁/(𝑁 − 1) factor, with 𝑁 = 𝑡 + 1.) See Weighted Sample Variance on
Wikipedia for further details.
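
As an illustrative sketch (random data, not one of the guide's examples), the two variants can be compared directly:

import numpy as np
import pandas as pd

y = pd.Series(np.random.randn(20))

biased = y.ewm(span=5).var(bias=True)
unbiased = y.ewm(span=5).var(bias=False)   # the default

# Wherever the biased estimate is nonzero, the pointwise ratio of the two
# is the debiasing factor described above.
debias_factor = unbiased / biased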

3.13 Group By: split-apply-combine

By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria.
• Applying a function to each group independently.
• Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many situations we may wish to split the data set into
groups and do something with those groups. In the apply step, we might wish to do one of the following:
• Aggregation: compute a summary statistic (or statistics) for each group. Some examples:
– Compute group sums or means.
– Compute group sizes / counts.
• Transformation: perform some group-specific computations and return a like-indexed object. Some examples:
– Standardize data (zscore) within a group.
– Filling NAs within groups with a value derived from each group.
• Filtration: discard some groups, according to a group-wise computation that evaluates True or False. Some
examples:
[email protected]
– Discard data that belongs to groups with only a few members.
T56GZSRVAH
– Filter out data based on the group sum or mean.
• Some combination of the above: GroupBy will examine the results of the apply step and try to return a sensibly
combined result if it doesn’t fit into either of the above two categories.
Since the set of object instance methods on pandas data structures is generally rich and expressive, we often simply
want to invoke, say, a DataFrame function on each group. The name GroupBy should be quite familiar to those who
have used a SQL-based tool (or itertools), in which you can write code like:

SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
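
A rough pandas translation of that query might look like the sketch below (some_table and its contents are made-up stand-ins mirroring the SQL):

import pandas as pd

# Hypothetical stand-in for SomeTable.
some_table = pd.DataFrame({'Column1': ['a', 'a', 'b', 'b'],
                           'Column2': ['x', 'y', 'x', 'y'],
                           'Column3': [1.0, 2.0, 3.0, 4.0],
                           'Column4': [10, 20, 30, 40]})

# SELECT Column1, Column2, mean(Column3), sum(Column4)
# FROM SomeTable GROUP BY Column1, Column2
some_table.groupby(['Column1', 'Column2']).agg({'Column3': 'mean',
                                                'Column4': 'sum'})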

We aim to make operations like this natural and easy to express using pandas. We’ll address each area of GroupBy
functionality then provide some non-trivial examples / use cases.
See the cookbook for some advanced strategies.


3.13.1 Splitting an object into groups

pandas objects can be split on any of their axes. The abstract definition of grouping is to provide a mapping of labels
to group names. To create a GroupBy object (more on what the GroupBy object is later), you may do the following:

In [1]: df = pd.DataFrame([('bird', 'Falconiformes', 389.0),


...: ('bird', 'Psittaciformes', 24.0),
...: ('mammal', 'Carnivora', 80.2),
...: ('mammal', 'Primates', np.nan),
...: ('mammal', 'Carnivora', 58)],
...: index=['falcon', 'parrot', 'lion', 'monkey', 'leopard'],
...: columns=('class', 'order', 'max_speed'))
...:

In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0

# default is axis=0
In [3]: grouped = df.groupby('class')

In [4]: grouped = df.groupby('order', axis='columns')

In [5]: grouped = df.groupby(['class', 'order'])


[email protected]
T56GZSRVAH
The mapping can be specified many different ways:
• A Python function, to be called on each of the axis labels.
• A list or NumPy array of the same length as the selected axis.
• A dict or Series, providing a label -> group name mapping.
• For DataFrame objects, a string indicating a column to be used to group. Of course df.groupby('A') is
just syntactic sugar for df.groupby(df['A']), but it makes life simpler.
• For DataFrame objects, a string indicating an index level to be used to group.
• A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example, consider the following DataFrame:

Note: A string passed to groupby may refer to either a column or an index level. If a string matches both a column
name and an index level name, a ValueError will be raised.

In [6]: df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',


...: 'foo', 'bar', 'foo', 'foo'],
...: 'B': ['one', 'one', 'two', 'three',
...: 'two', 'two', 'one', 'three'],
...: 'C': np.random.randn(8),
...: 'D': np.random.randn(8)})
...:



In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860

On a DataFrame, we obtain a GroupBy object by calling groupby(). We could naturally group by either the A or B
columns, or both:
In [8]: grouped = df.groupby('A')

In [9]: grouped = df.groupby(['A', 'B'])

New in version 0.24.


If we also have a MultiIndex on columns A and B, we can group by all but the specified columns:
In [10]: df2 = df.set_index(['A', 'B'])

In [11]: grouped = df2.groupby(level=df2.index.names.difference(['B']))

In [12]: grouped.sum()
[email protected]
Out[12]:
T56GZSRVAH C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938

These will split the DataFrame on its index (rows). We could also split by the columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:

In [14]: grouped = df.groupby(get_letter_type, axis=1)

pandas Index objects support duplicate values. If a non-unique index is used as the group key in a groupby operation,
all values for the same index value will be considered to be in one group and thus the output of aggregation functions
will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]

In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)

In [17]: grouped = s.groupby(level=0)

In [18]: grouped.first()
Out[18]:


1 1
2 2
3 3
dtype: int64

In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64

In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64

Note that no splitting occurs until it’s needed. Creating the GroupBy object only verifies that you’ve passed a valid
mapping.

Note: Many kinds of complicated data manipulations can be expressed in terms of GroupBy operations (though they can't be guaranteed to be the most efficient). You can get quite creative with the label mapping functions.
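
For instance, a small sketch (made-up data) of a label-mapping function that groups a date index by weekday:

import numpy as np
import pandas as pd

values = pd.DataFrame({'value': np.arange(10)},
                      index=pd.date_range('2020-01-01', periods=10))

# The function is called on each index label (a Timestamp here) and
# returns the name of the group that row belongs to.
values.groupby(lambda ts: ts.day_name()).sum()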

[email protected]
T56GZSRVAHGroupBy sorting

By default the group keys are sorted during the groupby operation. You may however pass sort=False for
potential speedups:

In [21]: df2 = pd.DataFrame({'X': ['B', 'B', 'A', 'A'], 'Y': [1, 2, 3, 4]})

In [22]: df2.groupby(['X']).sum()
Out[22]:
Y
X
A 7
B 3

In [23]: df2.groupby(['X'], sort=False).sum()


Out[23]:
Y
X
B 3
A 7

Note that groupby will preserve the order in which observations are sorted within each group. For example, the
groups created by groupby() below are in the order they appeared in the original DataFrame:

In [24]: df3 = pd.DataFrame({'X': ['A', 'B', 'A', 'B'], 'Y': [1, 4, 3, 2]})

In [25]: df3.groupby(['X']).get_group('A')
Out[25]:


X Y
0 A 1
2 A 3

In [26]: df3.groupby(['X']).get_group('B')
Out[26]:
X Y
1 B 4
3 B 2

GroupBy object attributes

The groups attribute is a dict whose keys are the computed unique groups and whose corresponding values are the axis
labels belonging to each group. In the above example we have:
In [27]: df.groupby('A').groups
Out[27]:
{'bar': Int64Index([1, 3, 5], dtype='int64'),
'foo': Int64Index([0, 2, 4, 6, 7], dtype='int64')}

In [28]: df.groupby(get_letter_type, axis=1).groups


Out[28]:
{'consonant': Index(['B', 'C', 'D'], dtype='object'),
'vowel': Index(['A'], dtype='object')}

Calling the standard Python len function on the GroupBy object just returns the length of the groups dict, so it is
[email protected]
T56GZSRVAHlargely just a convenience:
In [29]: grouped = df.groupby(['A', 'B'])

In [30]: grouped.groups
Out[30]:
{('bar', 'one'): Int64Index([1], dtype='int64'),
('bar', 'three'): Int64Index([3], dtype='int64'),
('bar', 'two'): Int64Index([5], dtype='int64'),
('foo', 'one'): Int64Index([0, 6], dtype='int64'),
('foo', 'three'): Int64Index([7], dtype='int64'),
('foo', 'two'): Int64Index([2, 4], dtype='int64')}

In [31]: len(grouped)
Out[31]: 6

GroupBy will tab complete column names (and other attributes):


In [32]: df
Out[32]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female


2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male

In [33]: gb = df.groupby('gender')

In [34]: gb.<TAB>  # noqa: E225, E999
gb.agg        gb.boxplot  gb.cummin   gb.describe  gb.filter  gb.get_group  gb.height   gb.last  gb.median  gb.ngroups  gb.plot      gb.rank      gb.std  gb.transform
gb.aggregate  gb.count    gb.cumprod  gb.dtype     gb.first   gb.groups     gb.hist     gb.max   gb.min     gb.nth      gb.prod      gb.resample  gb.sum  gb.var
gb.apply      gb.cummax   gb.cumsum   gb.fillna    gb.gender  gb.head       gb.indices  gb.mean  gb.name    gb.ohlc     gb.quantile  gb.size      gb.tail gb.weight

GroupBy with MultiIndex

With hierarchically-indexed data, it’s quite natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.

In [35]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:

[email protected]
In [36]: index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])
T56GZSRVAH
In [37]: s = pd.Series(np.random.randn(8), index=index)

In [38]: s
Out[38]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64

We can then group by one of the levels in s.

In [39]: grouped = s.groupby(level=0)

In [40]: grouped.sum()
Out[40]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64


If the MultiIndex has names specified, these can be passed instead of the level number:

In [41]: s.groupby(level='second').sum()
Out[41]:
second
one 0.980950
two 1.991575
dtype: float64

The aggregation functions such as sum will take the level parameter directly. Additionally, the resulting index will be
named according to the chosen level:

In [42]: s.sum(level='second')
Out[42]:
second
one 0.980950
two 1.991575
dtype: float64

Grouping with multiple levels is supported.

In [43]: s
Out[43]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
[email protected] two 1.956030
T56GZSRVAH
qux bop one 0.017587
two -0.016692
dtype: float64

In [44]: s.groupby(level=['first', 'second']).sum()


Out[44]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64

Index level names may be supplied as keys.

In [45]: s.groupby(['first', 'second']).sum()


Out[45]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64

More on the sum function and aggregation later.


Grouping DataFrame with Index levels and columns

A DataFrame may be grouped by a combination of columns and index levels by specifying the column names as
strings and the index levels as pd.Grouper objects.

In [46]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:

In [47]: index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])

In [48]: df = pd.DataFrame({'A': [1, 1, 1, 1, 2, 2, 3, 3],


....: 'B': np.arange(8)},
....: index=index)
....:

In [49]: df
Out[49]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
[email protected]
T56GZSRVAHThe following example groups df by the second index level and the A column.
In [50]: df.groupby([pd.Grouper(level=1), 'A']).sum()
Out[50]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7

Index levels may also be specified by name.

In [51]: df.groupby([pd.Grouper(level='second'), 'A']).sum()


Out[51]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7

Index level names may be specified as keys directly to groupby.


In [52]: df.groupby(['second', 'A']).sum()


Out[52]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7

DataFrame column selection in GroupBy

Once you have created the GroupBy object from a DataFrame, you might want to do something different for each of
the columns. Thus, using [] similar to getting a column from a DataFrame, you can do:

In [53]: grouped = df.groupby(['A'])

In [54]: grouped_C = grouped['C']

In [55]: grouped_D = grouped['D']

This is mainly syntactic sugar for the alternative and much more verbose:

In [56]: df['C'].groupby(df['A'])
Out[56]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f3d27c6cf10>
[email protected]
T56GZSRVAHAdditionally this method avoids recomputing the internal grouping information derived from the passed key.

3.13.2 Iterating through groups

With the GroupBy object in hand, iterating through the grouped data is very natural and functions similarly to
itertools.groupby():

In [57]: grouped = df.groupby('A')

In [58]: for name, group in grouped:


....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580

In the case of grouping by multiple keys, the group name will be a tuple:


In [59]: for name, group in df.groupby(['A', 'B']):


....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652

See Iterating through groups.

[email protected]
3.13.3 Selecting a group
T56GZSRVAH
A single group can be selected using get_group():

In [60]: grouped.get_group('bar')
Out[60]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526

Or for an object grouped on multiple columns:

In [61]: df.groupby(['A', 'B']).get_group(('bar', 'one'))


Out[61]:
A B C D
1 bar one 0.254161 1.511763


3.13.4 Aggregation

Once the GroupBy object has been created, several methods are available to perform a computation on the grouped
data. These operations are similar to the aggregating API, window functions API, and resample API.
An obvious one is aggregation via the aggregate() or equivalently agg() method:
In [62]: grouped = df.groupby('A')

In [63]: grouped.aggregate(np.sum)
Out[63]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590

In [64]: grouped = df.groupby(['A', 'B'])

In [65]: grouped.aggregate(np.sum)
Out[65]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429

[email protected]
As you can see, the result of the aggregation will have the group names as the new index along the grouped axis. In
T56GZSRVAH
the case of multiple keys, the result is a MultiIndex by default, though this can be changed by using the as_index
option:
In [66]: grouped = df.groupby(['A', 'B'], as_index=False)

In [67]: grouped.aggregate(np.sum)
Out[67]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429

In [68]: df.groupby('A', as_index=False).sum()


Out[68]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590

Note that you could use the reset_index DataFrame function to achieve the same result as the column names are
stored in the resulting MultiIndex:
In [69]: df.groupby(['A', 'B']).sum().reset_index()
Out[69]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429

Another simple aggregation example is to compute the size of each group. This is included in GroupBy as the size
method. It returns a Series whose index are the group names and whose values are the sizes of each group.

In [70]: grouped.size()
Out[70]:
A B
bar one 1
three 1
two 1
foo one 2
three 1
two 2
dtype: int64

In [71]: grouped.describe()
Out[71]:
       C
   count      mean       std       min       25%       50%       75%       max
0    1.0  0.254161       NaN  0.254161  0.254161  0.254161  0.254161  0.254161
1    1.0  0.215897       NaN  0.215897  0.215897  0.215897  0.215897  0.215897
2    1.0 -0.077118       NaN -0.077118 -0.077118 -0.077118 -0.077118 -0.077118
3    2.0 -0.491888  0.117887 -0.575247 -0.533567 -0.491888 -0.450209 -0.408530
4    1.0 -0.862495       NaN -0.862495 -0.862495 -0.862495 -0.862495 -0.862495
5    2.0  0.024925  1.652692 -1.143704 -0.559389  0.024925  0.609240  1.193555

       D
   count      mean       std       min       25%       50%       75%       max
0    1.0  1.511763       NaN  1.511763  1.511763  1.511763  1.511763  1.511763
1    1.0 -0.990582       NaN -0.990582 -0.990582 -0.990582 -0.990582 -0.990582
2    1.0  1.211526       NaN  1.211526  1.211526  1.211526  1.211526  1.211526
3    2.0  0.807291  0.761937  0.268520  0.537905  0.807291  1.076676  1.346061
4    1.0  0.024580       NaN  0.024580  0.024580  0.024580  0.024580  0.024580
5    2.0  0.592714  1.462816 -0.441652  0.075531  0.592714  1.109898  1.627081

Note: Aggregation functions will not return the groups that you are aggregating over if they are named columns,
when as_index=True, the default. The grouped columns will be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are named columns.

Aggregating functions are the ones that reduce the dimension of the returned objects. Some common aggregating
functions are tabulated below:


Function Description
mean() Compute mean of groups
sum() Compute sum of group values
size() Compute group sizes
count() Compute count of group
std() Standard deviation of groups
var() Compute variance of groups
sem() Standard error of the mean of groups
describe() Generates descriptive statistics
first() Compute first of group values
last() Compute last of group values
nth() Take nth value, or a subset if n is a list
min() Compute min of group values
max() Compute max of group values

The aggregating functions above will exclude NA values. Any function which reduces a Series to a scalar value is
an aggregation function and will work; a trivial example is df.groupby('A').agg(lambda ser: 1). Note
that nth() can act as a reducer or a filter, see here.
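
For example, a minimal sketch (made-up frame) of an arbitrary scalar-reducing callable used as an aggregation:

import pandas as pd

small = pd.DataFrame({'A': ['x', 'x', 'y'], 'B': [1, 2, 3]})

# Any callable mapping a Series to a scalar acts as an aggregation.
small.groupby('A').agg(lambda ser: ser.max() - ser.min())
#    B
# A
# x  1
# y  0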

Applying multiple functions at once

With grouped Series you can also pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [72]: grouped = df.groupby('A')

[email protected]
In [73]: grouped['C'].agg([np.sum, np.mean, np.std])
T56GZSRVAHOut[73]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265

On a grouped DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
In [74]: grouped.agg([np.sum, np.mean, np.std])
Out[74]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785

The resulting aggregations are named for the functions themselves. If you need to rename, then you can add in a
chained operation for a Series like this:
In [75]: (grouped['C'].agg([np.sum, np.mean, np.std])
....: .rename(columns={'sum': 'foo',
....: 'mean': 'bar',
....: 'std': 'baz'}))
....:
Out[75]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265

For a grouped DataFrame, you can rename in a similar manner:

In [76]: (grouped.agg([np.sum, np.mean, np.std])


....: .rename(columns={'sum': 'foo',
....: 'mean': 'bar',
....: 'std': 'baz'}))
....:
Out[76]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785

Note: In general, the output column names should be unique. You can’t apply the same function (or two functions
with the same name) to the same column.

In [77]: grouped['C'].agg(['sum', 'sum'])


Out[77]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
[email protected]
T56GZSRVAHPandas does allow you to provide multiple lambdas. In this case, pandas will mangle the name of the (nameless)
lambda functions, appending _<i> to each subsequent lambda.

In [78]: grouped['C'].agg([lambda x: x.max() - x.min(),


....: lambda x: x.median() - x.mean()])
....:
Out[78]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962

Named aggregation

New in version 0.25.0.


To support column-specific aggregation with control over the output column names, pandas accepts the special syntax
in GroupBy.agg(), known as “named aggregation”, where
• The keywords are the output column names
• The values are tuples whose first element is the column to select and the second element is the aggregation to
apply to that column. Pandas provides the pandas.NamedAgg namedtuple with the fields ['column',
'aggfunc'] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string
alias.


In [79]: animals = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'],


....: 'height': [9.1, 6.0, 9.5, 34.0],
....: 'weight': [7.9, 7.5, 9.9, 198.0]})
....:

In [80]: animals
Out[80]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0

In [81]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column='height', aggfunc='min'),
....: max_height=pd.NamedAgg(column='height', aggfunc='max'),
....: average_weight=pd.NamedAgg(column='weight', aggfunc=np.mean),
....: )
....:
Out[81]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75

pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.

In [82]: animals.groupby("kind").agg(
[email protected]
....: min_height=('height', 'min'),
T56GZSRVAH ....: max_height=('height', 'max'),
....: average_weight=('weight', np.mean),
....: )
....:
Out[82]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75

If your desired output column names are not valid Python keywords, construct a dictionary and unpack the keyword
arguments:

In [83]: animals.groupby("kind").agg(**{
....: 'total weight': pd.NamedAgg(column='weight', aggfunc=sum),
....: })
....:
Out[83]:
total weight
kind
cat 17.8
dog 205.5

Additional keyword arguments are not passed through to the aggregation functions. Only pairs of (column,
aggfunc) should be passed as **kwargs. If your aggregation function requires additional arguments, partially
apply them with functools.partial().
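
A hedged sketch of that pattern (the animals frame from above is re-created so the snippet stands alone; the 0.9 quantile is an arbitrary illustration):

import functools
import pandas as pd

animals = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'],
                        'height': [9.1, 6.0, 9.5, 34.0],
                        'weight': [7.9, 7.5, 9.9, 198.0]})

# Bind the extra argument first, then use the partial as the aggfunc.
q90 = functools.partial(pd.Series.quantile, q=0.9)

animals.groupby('kind').agg(height_q90=('height', q90))
#       height_q90
# kind
# cat         9.46
# dog        31.20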

Note: For Python 3.5 and earlier, the order of **kwargs in a function was not preserved. This means that the
output column ordering would not be consistent. To ensure consistent ordering, the keys (and so output columns) will
always be sorted for Python 3.5.

Named aggregation is also valid for Series groupby aggregations. In this case there’s no column selection, so the
values are just the functions.
In [84]: animals.groupby("kind").height.agg(
....: min_height='min',
....: max_height='max',
....: )
....:
Out[84]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0

Applying different functions to DataFrame columns

By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
In [85]: grouped.agg({'C': np.sum,
....: 'D': lambda x: np.std(x, ddof=1)})
....:
Out[85]:
C D
A
[email protected]
T56GZSRVAHbar 0.392940 1.366330
foo -1.796421 0.884785

The function names can also be strings. In order for a string to be valid it must be either implemented on GroupBy or
available via dispatching:
In [86]: grouped.agg({'C': 'sum', 'D': 'std'})
Out[86]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785

Cython-optimized aggregation functions

Some common aggregations, currently only sum, mean, std, and sem, have optimized Cython implementations:
In [87]: df.groupby('A').sum()
Out[87]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590

In [88]: df.groupby(['A', 'B']).mean()


Out[88]:
C D


A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714

Of course sum and mean are implemented on pandas objects, so the above code would work even without the special
versions via dispatching (see below).

3.13.5 Transformation

The transform method returns an object that is indexed the same (same size) as the one being grouped. The
transform function must:
• Return a result that is either the same size as the group chunk or broadcastable to the size of the group chunk
(e.g., a scalar, grouped.transform(lambda x: x.iloc[-1])).
• Operate column-by-column on the group chunk. The transform is applied to the first group chunk using
chunk.apply.
• Not perform in-place operations on the group chunk. Group chunks should be treated as immutable, and changes
to a group chunk may produce unexpected results. For example, when using fillna, inplace must be
False (grouped.transform(lambda x: x.fillna(inplace=False))).
• (Optionally) operates on the entire group chunk. If this is supported, a fast path is used starting from the second
[email protected]
chunk.
T56GZSRVAH
For example, suppose we wished to standardize the data within each group:

In [89]: index = pd.date_range('10/1/1999', periods=1100)

In [90]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)

In [91]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()

In [92]: ts.head()
Out[92]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64

In [93]: ts.tail()
Out[93]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64

In [94]: transformed = (ts.groupby(lambda x: x.year)




....: .transform(lambda x: (x - x.mean()) / x.std()))
....:

We would expect the result to now have mean 0 and standard deviation 1 within each group, which we can easily
check:

# Original Data
In [95]: grouped = ts.groupby(lambda x: x.year)

In [96]: grouped.mean()
Out[96]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64

In [97]: grouped.std()
Out[97]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64

# Transformed Data
In [98]: grouped_trans = transformed.groupby(lambda x: x.year)

In [99]: grouped_trans.mean()
Out[99]:
[email protected]
T56GZSRVAH2000 1.168208e-15
2001 1.454544e-15
2002 1.726657e-15
dtype: float64

In [100]: grouped_trans.std()
Out[100]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64

We can also visually compare the original and transformed data sets.

In [101]: compare = pd.DataFrame({'Original': ts, 'Transformed': transformed})

In [102]: compare.plot()
Out[102]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d1df046d0>


Transformation functions that have lower dimension outputs are broadcast to match the shape of the input array.

In [103]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())


Out[103]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64

Alternatively, the built-in methods could be used to produce the same outputs.

In [104]: max = ts.groupby(lambda x: x.year).transform('max')

In [105]: min = ts.groupby(lambda x: x.year).transform('min')

In [106]: max - min


Out[106]:


2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64

Another common data transform is to replace missing data with the group mean.

In [107]: data_df
Out[107]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126  1.857148       NaN
999  0.234564  0.517098  0.393534

[1000 rows x 3 columns]

In [108]: countries = np.array(['US', 'UK', 'GR', 'JP'])

In [109]: key = countries[np.random.randint(0, 4, 1000)]

In [110]: grouped = data_df.groupby(key)

# Non-NA count in each group


In [111]: grouped.count()
Out[111]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217

In [112]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))

We can verify that the group means have not changed in the transformed data and that the transformed data contains
no NAs.

In [113]: grouped_trans = transformed.groupby(key)

In [114]: grouped.mean() # original group means


Out[114]:


A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603

In [115]: grouped_trans.mean() # transformation did not change group means


Out[115]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603

In [116]: grouped.count() # original has some missing data points


Out[116]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217

In [117]: grouped_trans.count() # counts after transformation


Out[117]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
[email protected]
T56GZSRVAHUS 258 258 258
In [118]: grouped_trans.size() # Verify non-NA count equals group size
Out[118]:
GR 228
JP 267
UK 247
US 258
dtype: int64

Note: Some functions will automatically transform the input when applied to a GroupBy object, returning an
object of the same shape as the original. Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [119]: grouped.ffill()
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753


999 0.234564 0.517098 0.393534

[1000 rows x 3 columns]

Window and resample operations

It is possible to use resample(), expanding() and rolling() as methods on groupbys.


The example below will apply the rolling() method on the samples of the column B based on the groups of
column A.

In [120]: df_re = pd.DataFrame({'A': [1] * 10 + [5] * 10,


.....: 'B': np.arange(20)})
.....:

In [121]: df_re
Out[121]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
[email protected]
16 5 16
T56GZSRVAH17 5 17
18 5 18
19 5 19

[20 rows x 2 columns]

In [122]: df_re.groupby('A').rolling(4).B.mean()
Out[122]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64

The expanding() method will accumulate a given operation (sum() in the example) for all the members of each
particular group.

In [123]: df_re.groupby('A').expanding().sum()
Out[123]:
A B


A
1 0 1.0 0.0
1 2.0 1.0
2 3.0 3.0
3 4.0 6.0
4 5.0 10.0
... ... ...
5 15 30.0 75.0
16 35.0 91.0
17 40.0 108.0
18 45.0 126.0
19 50.0 145.0

[20 rows x 2 columns]

Suppose you want to use the resample() method to get a daily frequency in each group of your dataframe and wish
to complete the missing values with the ffill() method.

In [124]: df_re = pd.DataFrame({'date': pd.date_range(start='2016-01-01', periods=4,


.....: freq='W'),
.....: 'group': [1, 1, 2, 2],
.....: 'val': [5, 6, 7, 8]}).set_index('date')
.....:

In [125]: df_re
Out[125]:
group val
date
[email protected]
T56GZSRVAH2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8

In [126]: df_re.groupby('group').resample('1D').ffill()
Out[126]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8

[16 rows x 2 columns]


3.13.6 Filtration

The filter method returns a subset of the original object. Suppose we want to take only elements that belong to
groups with a group sum greater than 2.

In [127]: sf = pd.Series([1, 1, 2, 3, 3, 3])

In [128]: sf.groupby(sf).filter(lambda x: x.sum() > 2)


Out[128]:
3 3
4 3
5 3
dtype: int64

The argument of filter must be a function that, applied to the group as a whole, returns True or False.
Another useful operation is filtering out elements that belong to groups with only a couple members.

In [129]: dff = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})

In [130]: dff.groupby('B').filter(lambda x: len(x) > 2)


Out[130]:
A B
2 2 b
3 3 b
4 4 b
5 5 b

Alternatively, instead of dropping the offending groups, we can return like-indexed objects where the groups that do
not pass the filter are filled with NaNs.
In [131]: dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
Out[131]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN

For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.

In [132]: dff['C'] = np.arange(8)

In [133]: dff.groupby('B').filter(lambda x: len(x['C']) > 2)


Out[133]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5

Note: Some functions when applied to a groupby object will act as a filter on the input, returning a reduced shape of
the original (and potentially eliminating groups), but with the index unchanged. Passing as_index=False will not
affect these transformation methods.


For example: head, tail.

In [134]: dff.groupby('B').head(2)
Out[134]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7

3.13.7 Dispatching to instance methods

When doing an aggregation or transformation, you might just want to call an instance method on each data group.
This is pretty easy to do by passing lambda functions:

In [135]: grouped = df.groupby('A')

In [136]: grouped.agg(lambda x: x.std())


Out[136]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
[email protected]
T56GZSRVAH
But, it’s rather verbose and can be untidy if you need to pass additional arguments. Using a bit of metaprogramming
cleverness, GroupBy now has the ability to “dispatch” method calls to the groups:

In [137]: grouped.std()
Out[137]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785

What is actually happening here is that a function wrapper is being generated. When invoked, it takes any passed
arguments and invokes the function with any arguments on each group (in the above example, the std function). The
results are then combined together much in the style of agg and transform (it actually uses apply to infer the
gluing, documented next). This enables some operations to be carried out rather succinctly:

In [138]: tsdf = pd.DataFrame(np.random.randn(1000, 3),


.....: index=pd.date_range('1/1/2000', periods=1000),
.....: columns=['A', 'B', 'C'])
.....:

In [139]: tsdf.iloc[::2] = np.nan

In [140]: grouped = tsdf.groupby(lambda x: x.year)

In [141]: grouped.fillna(method='pad')
Out[141]:
A B C


2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135

[1000 rows x 3 columns]

In this example, we chopped the collection of time series into yearly chunks then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [142]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])

In [143]: g = pd.Series(list('abababab'))

In [144]: gb = s.groupby(g)

In [145]: gb.nlargest(3)
Out[145]:
a 4 19.0
[email protected]
0 9.0
T56GZSRVAH
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64

In [146]: gb.nsmallest(3)
Out[146]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64

3.13.8 Flexible apply

Some operations on the grouped data might not fit into either the aggregate or transform categories. Or, you may simply
want GroupBy to infer how to combine the results. For these, use the apply function, which can be substituted for
both aggregate and transform in many standard use cases. However, apply can handle some exceptional use
cases, for example:
In [147]: df
Out[147]:
A B C D


0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580

In [148]: grouped = df.groupby('A')

# could also just call .describe()


In [149]: grouped['C'].apply(lambda x: x.describe())
Out[149]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
[email protected]
T56GZSRVAHThe dimension of the returned result can also change:
In [150]: grouped = df.groupby('A')['C']

In [151]: def f(group):


.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:

In [152]: grouped.apply(f)
Out[152]:
original demeaned
0 -0.575247 -0.215962
1 0.254161 0.123181
2 -1.143704 -0.784420
3 0.215897 0.084917
4 1.193555 1.552839
5 -0.077118 -0.208098
6 -0.408530 -0.049245
7 -0.862495 -0.503211

apply on a Series can operate on a returned value from the applied function, that is itself a series, and possibly upcast
the result to a DataFrame:

In [153]: def f(x):


.....: return pd.Series([x, x ** 2], index=['x', 'x^2'])
.....:



In [154]: s = pd.Series(np.random.rand(5))

In [155]: s
Out[155]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64

In [156]: s.apply(f)
Out[156]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697

Note: apply can act as a reducer, transformer, or filter function, depending on exactly what is passed to it and
on what you are grouping. Depending on the path taken, the grouped column(s) may be included in the output, and
the result's index may be set accordingly.
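
A brief sketch (made-up frame, not one of the guide's examples) of how the applied function's return type changes the shape of the result:

import pandas as pd

d = pd.DataFrame({'A': ['x', 'x', 'y'], 'B': [1.0, 2.0, 3.0]})

# A scalar per group behaves like an aggregation: one row per group.
d.groupby('A')['B'].apply(lambda s: s.sum())

# A like-indexed Series per group behaves like a transformation:
# the result keeps the original row index.
d.groupby('A')['B'].apply(lambda s: s - s.mean())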

[email protected]
3.13.9 Other useful features
T56GZSRVAH
Automatic exclusion of “nuisance” columns

Again consider the example DataFrame we’ve been looking at:


In [157]: df
Out[157]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580

Suppose we wish to compute the standard deviation grouped by the A column. There is a slight problem, namely that
we don’t care about the data in column B. We refer to this as a “nuisance” column. If the passed aggregation function
can’t be applied to some columns, the troublesome columns will be (silently) dropped. Thus, this does not pose any
problems:
In [158]: df.groupby('A').std()
Out[158]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785


Note that df.groupby('A').colname.std() is more efficient than df.groupby('A').std().colname,
so if the result of an aggregation function is only interesting over one column (here colname), it may be
filtered before applying the aggregation function.

Note: Any object column, even if it contains numerical values such as Decimal objects, is considered a "nuisance"
column. They are excluded from aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with other non-nuisance data types, you must
do so explicitly.

In [159]: from decimal import Decimal

In [160]: df_dec = pd.DataFrame(


.....: {'id': [1, 2, 1, 2],
.....: 'int_column': [1, 2, 3, 4],
.....: 'dec_column': [Decimal('0.50'), Decimal('0.15'),
.....: Decimal('0.25'), Decimal('0.40')]
.....: }
.....: )
.....:

# Decimal columns can be sum'd explicitly by themselves...


In [161]: df_dec.groupby(['id'])[['dec_column']].sum()
Out[161]:
dec_column
id
1 0.75
[email protected]
2 0.55
T56GZSRVAH
# ...but cannot be combined with standard data types or they will be excluded
In [162]: df_dec.groupby(['id'])[['int_column', 'dec_column']].sum()
Out[162]:
int_column
id
1 4
2 6

# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [163]: df_dec.groupby(['id']).agg({'int_column': 'sum', 'dec_column': 'sum'})
Out[163]:
int_column dec_column
id
1 4 0.75
2 6 0.55


Handling of (un)observed Categorical values

When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those that
are observed groupers (observed=True).
Show all values:

In [164]: pd.Series([1, 1, 1]).groupby(pd.Categorical(['a', 'a', 'a'],


.....: categories=['a', 'b']),
.....: observed=False).count()
.....:
Out[164]:
a 3
b 0
dtype: int64

Show only the observed values:

In [165]: pd.Series([1, 1, 1]).groupby(pd.Categorical(['a', 'a', 'a'],


.....: categories=['a', 'b']),
.....: observed=True).count()
.....:
Out[165]:
a 3
dtype: int64

The returned dtype of the grouped will always include all of the categories that were grouped.
[email protected]
In [166]: s = pd.Series([1, 1, 1]).groupby(pd.Categorical(['a', 'a', 'a'],
T56GZSRVAH
.....: categories=['a', 'b']),
.....: observed=False).count()
.....:

In [167]: s.index.dtype
Out[167]: CategoricalDtype(categories=['a', 'b'], ordered=False)

NA and NaT group handling

If there are any NaN or NaT values in the grouping key, these will be automatically excluded. In other words, there will
never be an “NA group” or “NaT group”. This was not the case in older versions of pandas, but users were generally
discarding the NA group anyway (and supporting it was an implementation headache).
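
A short sketch (made-up data) of this behaviour:

import numpy as np
import pandas as pd

df_na = pd.DataFrame({'key': ['a', 'a', np.nan, 'b'],
                      'value': [1, 2, 3, 4]})

# The row whose key is NaN is dropped; there is no NaN group in the result.
df_na.groupby('key').sum()
#      value
# key
# a        3
# b        4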

Grouping with ordered factors

Categorical variables represented as instances of pandas's Categorical class can be used as group keys. If so, the
order of the levels will be preserved:

In [168]: data = pd.Series(np.random.randn(100))

In [169]: factor = pd.qcut(data, [0, .25, .5, .75, 1.])

In [170]: data.groupby(factor).mean()
Out[170]:
(-2.645, -0.523] -1.362896


(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64

Grouping with a grouper specification

You may need to specify a bit more data to properly group. You can use the pd.Grouper to provide this local
control.

In [171]: import datetime

In [172]: df = pd.DataFrame({'Branch': 'A A A A A A A B'.split(),


.....: 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(),
.....: 'Quantity': [1, 3, 5, 1, 8, 1, 9, 3],
.....: 'Date': [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0)]
.....: })
.....:
[email protected]
T56GZSRVAHIn [173]: df
Out[173]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00

Groupby a specific column with the desired frequency. This is like resampling.

In [174]: df.groupby([pd.Grouper(freq='1M', key='Date'), 'Buyer']).sum()


Out[174]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9

When you have both a named index and a column that could be potential groupers, the specification is ambiguous; the key or level argument of pd.Grouper lets you say which one to use:


In [175]: df = df.set_index('Date')

In [176]: df['Date'] = df.index + pd.offsets.MonthEnd(2)

In [177]: df.groupby([pd.Grouper(freq='6M', key='Date'), 'Buyer']).sum()


Out[177]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18

In [178]: df.groupby([pd.Grouper(freq='6M', level='Date'), 'Buyer']).sum()


Out[178]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18

Taking the first rows of each group

Just like for a DataFrame or Series you can call head and tail on a groupby:

In [179]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])


[email protected]
T56GZSRVAH
In [180]: df
Out[180]:
A B
0 1 2
1 1 4
2 5 6

In [181]: g = df.groupby('A')

In [182]: g.head(1)
Out[182]:
A B
0 1 2
2 5 6

In [183]: g.tail(1)
Out[183]:
A B
1 1 4
2 5 6

This shows the first or last n rows from each group.


Taking the nth row of each group

To select from a DataFrame or Series the nth item, use nth(). This is a reduction method, and will return a single
row (or no row) per group if you pass an int for n:
In [184]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])

In [185]: g = df.groupby('A')

In [186]: g.nth(0)
Out[186]:
B
A
1 NaN
5 6.0

In [187]: g.nth(-1)
Out[187]:
B
A
1 4.0
5 6.0

In [188]: g.nth(1)
Out[188]:
B
A
1 4.0
[email protected]
T56GZSRVAHIf you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or
'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [189]: g.nth(0, dropna='any')
Out[189]:
B
A
1 4.0
5 6.0

In [190]: g.first()
Out[190]:
B
A
1 4.0
5 6.0

# nth(-1) is the same as g.last()


In [191]: g.nth(-1, dropna='any') # NaNs denote group exhausted when using dropna
Out[191]:
B
A
1 4.0
5 6.0

In [192]: g.last()
Out[192]:
B
A
1 4.0
5 6.0

In [193]: g.B.nth(0, dropna='all')


Out[193]:
A
1 4.0
5 6.0
Name: B, dtype: float64

As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.

In [194]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])

In [195]: g = df.groupby('A', as_index=False)

In [196]: g.nth(0)
Out[196]:
A B
0 1 NaN
2 5 6.0

In [197]: g.nth(-1)
Out[197]:
A B
1 1 4.0
2 5 6.0
[email protected]
T56GZSRVAH
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.

In [198]: business_dates = pd.date_range(start='4/1/2014', end='6/30/2014', freq='B')

In [199]: df = pd.DataFrame(1, index=business_dates, columns=['a', 'b'])

# get the first, 4th, and last date index for each month
In [200]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[200]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1


Enumerate group items

To see the order in which each row appears within its group, use the cumcount method:

In [201]: dfg = pd.DataFrame(list('aaabba'), columns=['A'])

In [202]: dfg
Out[202]:
A
0 a
1 a
2 a
3 b
4 b
5 a

In [203]: dfg.groupby('A').cumcount()
Out[203]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64

In [204]: dfg.groupby('A').cumcount(ascending=False)
Out[204]:
0 3
[email protected]
1 2
T56GZSRVAH
2 1
3 1
4 0
5 0
dtype: int64

Enumerate groups

To see the ordering of the groups (as opposed to the order of rows within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the groups would be seen when iterating over the
groupby object, not the order they are first observed.

In [205]: dfg = pd.DataFrame(list('aaabba'), columns=['A'])

In [206]: dfg
Out[206]:
A
0 a
1 a
2 a
3 b
4 b
5 a

In [207]: dfg.groupby('A').ngroup()
Out[207]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64

In [208]: dfg.groupby('A').ngroup(ascending=False)
Out[208]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64

Plotting

Groupby also works with some plotting methods. For example, suppose we suspect that some features in a DataFrame
may differ by group; in this case, the values in column 1 where the group is "B" are 3 higher on average.

In [209]: np.random.seed(1234)
[email protected]
T56GZSRVAHIn [210]: df = pd.DataFrame(np.random.randn(50, 2))

In [211]: df['g'] = np.random.choice(['A', 'B'], size=50)

In [212]: df.loc[df['g'] == 'B', 1] += 3

We can easily visualize this with a boxplot:

In [213]: df.groupby('g').boxplot()
Out[213]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object


The result of calling boxplot is a dictionary whose keys are the values of our grouping column g (“A” and “B”).
The values of the resulting dictionary can be controlled by the return_type keyword of boxplot. See the
visualization documentation for more.

Warning: For historical reasons, df.groupby("g").boxplot() is not equivalent to df.boxplot(by="g"). See here for an explanation.

Piping function calls

New in version 0.21.0.


Similar to the functionality provided by DataFrame and Series, functions that take GroupBy objects can be
chained together using a pipe method to allow for a cleaner, more readable syntax. To read about .pipe in general
terms, see here.
Combining .groupby and .pipe is often useful when you need to reuse GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products, revenue and quantity sold. We’d
like to do a groupwise calculation of prices (i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the code more readable. First we set the data:
In [214]: n = 1000

In [215]: df = pd.DataFrame({'Store': np.random.choice(['Store_1', 'Store_2'], n),
.....: 'Product': np.random.choice(['Product_1',
.....: 'Product_2'], n),
.....: 'Revenue': (np.random.random(n) * 50 + 10).round(2),
.....: 'Quantity': np.random.randint(1, 10, size=n)})
.....:

In [216]: df.head(2)
Out[216]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1

Now, to find prices per store/product, we can simply do:


In [217]: (df.groupby(['Store', 'Product'])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack().round(2))
.....:
Out[217]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64

Piping can also be expressive when you want to deliver a grouped object to some arbitrary function, for example:
In [218]: def mean(groupby):
[email protected]
T56GZSRVAH .....: return groupby.mean()
.....:

In [219]: df.groupby(['Store', 'Product']).pipe(mean)


Out[219]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000

where mean takes a GroupBy object and finds the mean of the Revenue and Quantity columns respectively for each
Store-Product combination. The mean function can be any function that takes in a GroupBy object; the .pipe will
pass the GroupBy object as a parameter into the function you specify.

3.13.10 Examples

Regrouping by factor

Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [220]: df = pd.DataFrame({'a': [1, 0, 0], 'b': [0, 1, 0],
.....: 'c': [1, 0, 0], 'd': [2, 3, 4]})
.....:

In [221]: df
Out[221]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4

In [222]: df.groupby(df.sum(), axis=1).sum()


Out[222]:
1 9
0 2 2
1 1 3
2 0 4

Multi-column factorization

By using ngroup(), we can extract information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies naturally to multiple columns of mixed type and different sources.
This can be useful as an intermediate categorical-like step in processing, when the relationships between the group
rows are more important than their content, or as input to an algorithm which only accepts the integer encoding.
(For more information about support in pandas for full categorical data, see the Categorical introduction and the API
documentation.)

In [223]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})

In [224]: dfg
Out[224]:
[email protected]
T56GZSRVAH A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a

In [225]: dfg.groupby(["A", "B"]).ngroup()


Out[225]:
0 0
1 0
2 1
3 2
4 1
dtype: int64

In [226]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()


Out[226]:
0 0
1 0
2 1
3 3
4 2
dtype: int64


Groupby by indexer to ‘resample’ data

Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that
generates data. These new samples are similar to the pre-existing samples.
To make 'resampling' work on indices that are not datetime-like, the following procedure can be used.
In the following examples, df.index // 5 returns an array of integer group labels which is used to determine what gets
selected for the groupby operation.

Note: The example below shows how we can downsample by consolidating samples into fewer samples. By using
df.index // 5, we aggregate the samples into bins. Applying the std() function then condenses the information contained
in many samples into a small subset of values, their standard deviation, thereby reducing the number of samples.

In [227]: df = pd.DataFrame(np.random.randn(10, 2))

In [228]: df
Out[228]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
[email protected]
7 -0.257759 -1.081009
T56GZSRVAH8 0.505895 -1.701948
9 -1.006349 0.020208

In [229]: df.index // 5
Out[229]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')

In [230]: df.groupby(df.index // 5).std()


Out[230]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941

Returning a Series to propagate names

Group DataFrame columns, compute a set of metrics and return a named Series. The Series name is used as the name
for the column index. This is especially useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [231]: df = pd.DataFrame({'a': [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: 'b': [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: 'c': [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: 'd': [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]})
.....:

In [232]: def compute_metrics(x):


.....: result = {'b_sum': x['b'].sum(), 'c_mean': x['c'].mean()}
.....: return pd.Series(result, name='metrics')
.....:

In [233]: result = df.groupby('a').apply(compute_metrics)

In [234]: result
Out[234]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5

In [235]: result.stack()
Out[235]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64

3.14 Time series / date functionality


[email protected]
pandas contains extensive capabilities and features for working with time series data for all domains. Using the
T56GZSRVAH
NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of features from other
Python libraries like scikits.timeseries as well as created a tremendous amount of new functionality for
manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats

In [1]: import datetime

In [2]: dti = pd.to_datetime(['1/1/2018', np.datetime64('2018-01-01'),


...: datetime.datetime(2018, 1, 1)])
...:

In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype=
˓→'datetime64[ns]', freq=None)

Generate sequences of fixed-frequency dates and time spans

In [4]: dti = pd.date_range('2018-01-01', periods=3, freq='H')

In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')

Manipulating and converting date times with timezone information


In [6]: dti = dti.tz_localize('UTC')

In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')

In [8]: dti.tz_convert('US/Pacific')
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')

Resampling or converting a time series to a particular frequency

In [9]: idx = pd.date_range('2018-01-01', periods=5, freq='H')

In [10]: ts = pd.Series(range(len(idx)), index=idx)

In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
[email protected]
T56GZSRVAHIn [12]: ts.resample('2H').mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64

Performing date and time arithmetic with absolute or relative time increments

In [13]: friday = pd.Timestamp('2018-01-05')

In [14]: friday.day_name()
Out[14]: 'Friday'

# Add 1 day
In [15]: saturday = friday + pd.Timedelta('1 day')

In [16]: saturday.day_name()
Out[16]: 'Saturday'

# Add 1 business day (Friday --> Monday)


In [17]: monday = friday + pd.offsets.BDay()

In [18]: monday.day_name()
Out[18]: 'Monday'

pandas provides a relatively compact and self-contained set of tools for performing the above tasks and more.


3.14.1 Overview

pandas captures 4 general time related concepts:


1. Date times: A specific date and time with timezone support. Similar to datetime.datetime from the
standard library.
2. Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
3. Time spans: A span of time defined by a point in time and its associated frequency.
4. Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.
relativedelta.relativedelta from the dateutil package.

Concept        Scalar Class   Array Class      pandas Data Type                        Primary Creation Method
Date times     Timestamp      DatetimeIndex    datetime64[ns] or datetime64[ns, tz]    to_datetime or date_range
Time deltas    Timedelta      TimedeltaIndex   timedelta64[ns]                         to_timedelta or timedelta_range
Time spans     Period         PeriodIndex      period[freq]                            Period or period_range
Date offsets   DateOffset     None             None                                    DateOffset
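As a quick illustrative sketch (not part of the guide's numbered session), each concept's scalar class can be constructed directly:

import pandas as pd

pd.Timestamp('2018-01-05 12:00')   # date time: a specific point in time
pd.Timedelta('1 day 2 hours')      # time delta: an absolute duration
pd.Period('2018-01', freq='M')     # time span: a period with an associated frequency
pd.DateOffset(months=1)            # date offset: a calendar-aware relative duration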

For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame so
manipulations can be performed with respect to the time element.
[email protected]
In [19]: pd.Series(range(3), index=pd.date_range('2000', freq='D', periods=3))
T56GZSRVAHOut[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64

However, Series and DataFrame can also directly support the time component as data itself.
In [20]: pd.Series(pd.date_range('2000', freq='D', periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]

Series and DataFrame have extended data type support and functionality for datetime, timedelta and
Period data when passed into those constructors. DateOffset data however will be stored as object data.
In [21]: pd.Series(pd.period_range('1/1/2011', freq='M', periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]

In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])


Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object

In [23]: pd.Series(pd.date_range('1/1/2011', freq='M', periods=3))


Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]

Lastly, pandas represents null date times, time deltas, and time spans as NaT, which is useful for representing missing
or null date-like values and behaves similarly to the way np.nan does for float data.

In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT

In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT

In [26]: pd.Period(pd.NaT)
Out[26]: NaT

# Equality acts as np.nan would


In [27]: pd.NaT == pd.NaT
Out[27]: False

[email protected]
T56GZSRVAH3.14.2 Timestamps vs. Time Spans
Timestamped data is the most basic type of time series data that associates values with points in time. For pandas
objects it means using the points in time.

In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))


Out[28]: Timestamp('2012-05-01 00:00:00')

In [29]: pd.Timestamp('2012-05-01')
Out[29]: Timestamp('2012-05-01 00:00:00')

In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')

However, in many cases it is more natural to associate things like change variables with a time span instead. The span
represented by Period can be specified explicitly, or inferred from datetime string format.
For example:

In [31]: pd.Period('2011-01')
Out[31]: Period('2011-01', 'M')

In [32]: pd.Period('2012-05', freq='D')


Out[32]: Period('2012-05-01', 'D')

Timestamp and Period can serve as an index. Lists of Timestamp and Period are automatically coerced to
DatetimeIndex and PeriodIndex respectively.


In [33]: dates = [pd.Timestamp('2012-05-01'),


....: pd.Timestamp('2012-05-02'),
....: pd.Timestamp('2012-05-03')]
....:

In [34]: ts = pd.Series(np.random.randn(3), dates)

In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex

In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype=
˓→'datetime64[ns]', freq=None)

In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64

In [38]: periods = [pd.Period('2012-01'), pd.Period('2012-02'), pd.Period('2012-03')]

In [39]: ts = pd.Series(np.random.randn(3), periods)

In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex

In [41]: ts.index
[email protected]
T56GZSRVAHOut[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]', freq='M')

In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64

pandas allows you to capture both representations and convert between them. Under the hood, pandas represents
timestamps using instances of Timestamp and sequences of timestamps using instances of DatetimeIndex. For
regular time spans, pandas uses Period objects for scalar values and PeriodIndex for sequences of spans. Better
support for irregular intervals with arbitrary start and end points is forthcoming in future releases.

3.14.3 Converting to timestamps

To convert a Series or list-like object of date-like objects e.g. strings, epochs, or a mixture, you can use the
to_datetime function. When passed a Series, this returns a Series (with the same index), while a list-like is
converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(['Jul 31, 2009', '2010-01-10', None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]

In [44]: pd.to_datetime(['2005/11/23', '2010.12.31'])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]',
˓→freq=None)

If you use dates which start with the day first (i.e. European style), you can pass the dayfirst flag:

In [45]: pd.to_datetime(['04-01-2012 10:00'], dayfirst=True)


Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)

In [46]: pd.to_datetime(['14-01-2012', '01-14-2012'], dayfirst=True)


Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]',
˓→freq=None)

Warning: You see in the above example that dayfirst isn’t strict, so if a date can’t be parsed with the day
being first it will be parsed as if dayfirst were False.

If you pass a single string to to_datetime, it returns a single Timestamp. Timestamp can also accept string
input, but it doesn’t accept string parsing options like dayfirst or format, so use to_datetime if these are
required.

In [47]: pd.to_datetime('2010/11/12')
Out[47]: Timestamp('2010-11-12 00:00:00')

In [48]: pd.Timestamp('2010/11/12')
Out[48]: Timestamp('2010-11-12 00:00:00')
[email protected]
T56GZSRVAH
You can also use the DatetimeIndex constructor directly:

In [49]: pd.DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'])


Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype=
˓→'datetime64[ns]', freq=None)

The string ‘infer’ can be passed in order to set the frequency of the index as the inferred frequency upon creation:

In [50]: pd.DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], freq='infer')


Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype=
˓→'datetime64[ns]', freq='2D')

Providing a format argument

In addition to the required datetime string, a format argument can be passed to ensure specific parsing. This could
also potentially speed up the conversion considerably.

In [51]: pd.to_datetime('2010/11/12', format='%Y/%m/%d')


Out[51]: Timestamp('2010-11-12 00:00:00')

In [52]: pd.to_datetime('12-11-2010 00:00', format='%d-%m-%Y %H:%M')


Out[52]: Timestamp('2010-11-12 00:00:00')

For more information on the choices available when specifying the format option, see the Python datetime docu-
mentation.


Assembling datetime from multiple DataFrame columns

You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.

In [53]: df = pd.DataFrame({'year': [2015, 2016],


....: 'month': [2, 3],
....: 'day': [4, 5],
....: 'hour': [2, 3]})
....:

In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]

You can pass only the columns that you need to assemble.

In [55]: pd.to_datetime(df[['year', 'month', 'day']])


Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]

pd.to_datetime looks for standard designations of the datetime component in the column names, including:
• required: year, month, day
• optional: hour, minute, second, millisecond, microsecond, nanosecond
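For instance, here is a brief sketch (the column names follow the designations above; the data values are made up) that includes one of the optional components:

import pandas as pd

df_parts = pd.DataFrame({'year': [2015, 2016],
                         'month': [2, 3],
                         'day': [4, 5],
                         'minute': [10, 30]})   # 'minute' is an optional component

# Unspecified optional components default to 0, so this yields
# 2015-02-04 00:10:00 and 2016-03-05 00:30:00.
pd.to_datetime(df_parts)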
[email protected]
T56GZSRVAH
Invalid data

The default behavior, errors='raise', is to raise when unparseable:

In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')


ValueError: Unknown string format

Pass errors='ignore' to return the original input when unparseable:

In [56]: pd.to_datetime(['2009/07/31', 'asd'], errors='ignore')


Out[56]: Index(['2009/07/31', 'asd'], dtype='object')

Pass errors='coerce' to convert unparseable data to NaT (not a time):

In [57]: pd.to_datetime(['2009/07/31', 'asd'], errors='coerce')


Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)


Epoch timestamps

pandas supports converting integer or float epoch times to Timestamp and DatetimeIndex. The default unit is
nanoseconds, since that is how Timestamp objects are stored internally. However, epochs are often stored in another
unit which can be specified. These are computed from the starting point specified by the origin parameter.

In [58]: pd.to_datetime([1349720105, 1349806505, 1349892905,


....: 1349979305, 1350065705], unit='s')
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)

In [59]: pd.to_datetime([1349720105100, 1349720105200, 1349720105300,


....: 1349720105400, 1349720105500], unit='ms')
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)

Constructing a Timestamp or DatetimeIndex with an epoch timestamp with the tz argument specified will
currently localize the epoch timestamps to UTC first then convert the result to the specified time zone. However, this
behavior is deprecated, and if you have epochs in wall time in another timezone, it is recommended to read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
[email protected]
T56GZSRVAHIn [60]: pd.Timestamp(1262347200000000000).tz_localize('US/Pacific')
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')

In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize('US/Pacific')
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/
˓→Pacific]', freq=None)

Note: Epoch times will be rounded to the nearest nanosecond.

Warning: Conversion of float epoch times can lead to inaccurate and unexpected results. Python floats have
about 15 decimal digits of precision. Rounding during conversion from float to a high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use fixed-width types (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit='s')
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.
˓→433502913'], dtype='datetime64[ns]', freq=None)

In [63]: pd.to_datetime(1490195805433502912, unit='ns')


Out[63]: Timestamp('2017-03-22 15:16:45.433502912')

See also:
Using the origin Parameter


From timestamps to epoch

To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:

In [64]: stamps = pd.date_range('2012-10-08 18:15:05', periods=4, freq='D')

In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')

We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the “unit” (1 second).

In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')


Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')

Using the origin Parameter

Using the origin parameter, one can specify an alternative starting point for creation of a DatetimeIndex. For
example, to use 1960-01-01 as the starting date:

In [67]: pd.to_datetime([1, 2, 3], unit='D', origin=pd.Timestamp('1960-01-01'))


Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype=
˓→'datetime64[ns]', freq=None)

The default is set at origin='unix', which defaults to 1970-01-01 00:00:00. Commonly called ‘unix
[email protected]
T56GZSRVAHepoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit='D')
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype=
˓→'datetime64[ns]', freq=None)

3.14.4 Generating ranges of timestamps

To generate an index with timestamps, you can use either the DatetimeIndex or Index constructor and pass in a
list of datetime objects:

In [69]: dates = [datetime.datetime(2012, 5, 1),


....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3)]
....:

# Note the frequency information


In [70]: index = pd.DatetimeIndex(dates)

In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype=
˓→'datetime64[ns]', freq=None)

# Automatically converted to DatetimeIndex


In [72]: index = pd.Index(dates)

In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype=
˓→'datetime64[ns]', freq=None)

In practice this becomes very cumbersome because we often need a very long index with a large number of timestamps.
If we need timestamps on a regular frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a calendar day while the default for
bdate_range is a business day:

In [74]: start = datetime.datetime(2011, 1, 1)

In [75]: end = datetime.datetime(2012, 1, 1)

In [76]: index = pd.date_range(start, end)

In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')

In [78]: index = pd.bdate_range(start, end)


[email protected]
T56GZSRVAHIn [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')

Convenience functions like date_range and bdate_range can utilize a variety of frequency aliases:

In [80]: pd.date_range(start, periods=1000, freq='M')


Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')

In [81]: pd.bdate_range(start, periods=250, freq='BQS')


Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')

date_range and bdate_range make it easy to generate a range of dates using various combinations of parame-
ters like start, end, periods, and freq. The start and end dates are strictly inclusive, so dates outside of those
specified will not be generated:

In [82]: pd.date_range(start, end, freq='BM')


Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')

In [83]: pd.date_range(start, end, freq='W')


Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
[email protected] '2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
T56GZSRVAH '2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')

In [84]: pd.bdate_range(end=end, periods=20)


Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')

In [85]: pd.bdate_range(start=start, periods=20)


Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')

New in version 0.23.0.


Specifying start, end, and periods will generate a range of evenly spaced dates from start to end inclusively,


with periods number of elements in the resulting DatetimeIndex:

In [86]: pd.date_range('2018-01-01', '2018-01-05', periods=5)


Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)

In [87]: pd.date_range('2018-01-01', '2018-01-05', periods=10)


Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)

Custom frequency ranges

bdate_range can also generate a range of custom frequency dates by using the weekmask and holidays pa-
rameters. These parameters will only be used if a custom frequency string is passed.

In [88]: weekmask = 'Mon Wed Fri'

In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]

In [90]: pd.bdate_range(start, end, freq='C', weekmask=weekmask, holidays=holidays)


Out[90]:
[email protected]
T56GZSRVAHDatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')

In [91]: pd.bdate_range(start, end, freq='CBMS', weekmask=weekmask)


Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')

See also:
Custom business days


3.14.5 Timestamp limitations

Since pandas represents timestamps in nanosecond resolution, the time span that can be represented using a 64-bit
integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145225')

In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')

See also:
Representing out-of-bounds spans

3.14.6 Indexing

One of the main uses for DatetimeIndex is as an index for pandas objects. The DatetimeIndex class contains
many time series related optimizations:
• A large range of dates for various offsets are pre-computed and cached under the hood in order to make gener-
ating subsequent date ranges very fast (just have to grab a slice).
• Fast shifting using the shift and tshift method on pandas objects.
• Unioning of overlapping DatetimeIndex objects with the same frequency is very fast (important for fast
data alignment).
• Quick access to date fields via properties such as year, month, etc.
[email protected]
T56GZSRVAH • Regularization functions like snap and very fast asof logic.
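A minimal sketch of the last two points (the timestamps here are illustrative): snap regularizes slightly-off timestamps to a frequency, and asof returns the most recent value at or before a given time.

import pandas as pd

idx = pd.DatetimeIndex(['2020-01-01 00:00:01', '2020-01-01 01:00:02'])
idx.snap(freq='H')   # each timestamp is snapped to the nearest hour

ts = pd.Series([1.0, 2.0], index=pd.date_range('2020-01-01', periods=2, freq='H'))
ts.asof(pd.Timestamp('2020-01-01 00:30'))   # latest value at or before 00:30 -> 1.0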
DatetimeIndex objects have all the basic functionality of regular Index objects, and a smorgasbord of advanced
time series specific methods for easy frequency processing.
See also:
Reindexing methods

Note: While pandas does not force you to have a sorted date index, some of these methods may have unexpected or
incorrect behavior if the dates are unsorted.

DatetimeIndex can be used like a regular index and offers all of its intelligent functionality like selection, slicing,
etc.
In [94]: rng = pd.date_range(start, end, freq='BM')

In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)

In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')

In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')

In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')

Partial string indexing

Dates and strings that parse to timestamps can be passed as indexing parameters:

In [99]: ts['1/31/2011']
Out[99]: 0.11920871129693428

In [100]: ts[datetime.datetime(2011, 12, 25):]


Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64

In [101]: ts['10/31/2011':'12/31/2011']
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
[email protected]
2011-12-30 0.567020
T56GZSRVAHFreq: BM, dtype: float64

To provide convenience for accessing longer time series, you can also pass in the year or year and month as strings:

In [102]: ts['2011']
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64

In [103]: ts['2011-6']
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64

This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the partial string selection
is a form of label slicing, the endpoints will be included. This would include matching times on an included date:


In [104]: dft = pd.DataFrame(np.random.randn(100000, 1), columns=['A'],
   .....:                     index=pd.date_range('20130101', periods=100000, freq='T'))
   .....:

In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043

[100000 rows x 1 columns]

In [106]: dft['2013']
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043

[100000 rows x 1 columns]

This starts on the very first time in the month, and includes the last date and time for the month:

In [107]: dft['2013-1':'2013-2']
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517

[84960 rows x 1 columns]


This specifies a stop time that includes all of the times on the last day:

In [108]: dft['2013-1':'2013-2-28']
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517

[84960 rows x 1 columns]

This specifies an exact stop time (and is not the same as the above):

In [109]: dft['2013-1':'2013-2-28 00:00:00']


Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
[email protected]
... ...
T56GZSRVAH2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501

[83521 rows x 1 columns]

We are stopping on the included end-point as it is part of the index:

In [110]: dft['2013-1-15':'2013-1-15 12:30:00']


Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945

[751 rows x 1 columns]

DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:


In [111]: dft2 = pd.DataFrame(np.random.randn(20, 1),


.....: columns=['A'],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range('20130101', periods=10, freq='12H'),
.....: ['a', 'b']]))
.....:

In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331

[20 rows x 1 columns]

In [113]: dft2.loc['2013-01-05']
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
[email protected] b 1.422060
T56GZSRVAH2013-01-05 12:00:00 a 0.370079
b 1.016331

In [114]: idx = pd.IndexSlice

In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()

In [116]: dft2.loc[idx[:, '2013-01-05'], :]


Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331

New in version 0.25.0.


Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(['2019-01-01'], tz='US/Pacific'))

In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0

In [119]: df['2019-01-01 12:00:00+04:00':'2019-01-01 13:00:00+04:00']


Out[119]:
0
2019-01-01 00:00:00-08:00 0

Slice vs. exact match

Changed in version 0.20.0.


The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the
resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact
match.
Consider a Series object with a minute resolution index:

In [120]: series_minute = pd.Series([1, 2, 3],


.....: pd.DatetimeIndex(['2011-12-31 23:59:00',
.....: '2012-01-01 00:00:00',
.....: '2012-01-01 00:02:00']))
.....:

In [121]: series_minute.index.resolution
Out[121]: 'minute'

A timestamp string less accurate than a minute gives a Series object.

In [122]: series_minute['2011-12-31 23']


Out[122]:
2011-12-31 23:59:00    1
dtype: int64

A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast as a slice.

In [123]: series_minute['2011-12-31 23:59']


Out[123]: 1

In [124]: series_minute['2011-12-31 23:59:00']


Out[124]: 1

If index resolution is second, then the minute-accurate timestamp gives a Series.

In [125]: series_second = pd.Series([1, 2, 3],


.....: pd.DatetimeIndex(['2011-12-31 23:59:59',
.....: '2012-01-01 00:00:00',
.....: '2012-01-01 00:00:01']))
.....:

In [126]: series_second.index.resolution
Out[126]: 'second'

In [127]: series_second['2011-12-31 23:59']


Out[127]:
2011-12-31 23:59:59 1
dtype: int64

If the timestamp string is treated as a slice, it can be used to index DataFrame with [] as well.


In [128]: dft_minute = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]},


.....: index=series_minute.index)
.....:

In [129]: dft_minute['2011-12-31 23']


Out[129]:
a b
2011-12-31 23:59:00 1 4

Warning: However, if the string is treated as an exact match, the selection in DataFrame's [] will be column-
wise and not row-wise, see Indexing Basics. For example dft_minute['2011-12-31 23:59'] will raise
KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such a
name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc['2011-12-31 23:59']
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64

Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series([1, 2, 3],
   .....:                             pd.DatetimeIndex(['2011-12', '2012-01', '2012-02']))
   .....:

In [132]: series_monthly.index.resolution
Out[132]: 'day'

In [133]: series_monthly['2011-12'] # returns Series


Out[133]:
2011-12-01 1
dtype: int64

Exact indexing

As discussed in previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the
period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with
Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics
of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were
not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1):datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
(continues on next page)


(continued from previous page)


2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501

[83521 rows x 1 columns]

With no defaults.
In [135]: dft[datetime.datetime(2013, 1, 1, 10, 12, 0):
.....: datetime.datetime(2013, 2, 28, 10, 12, 0)]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[email protected]
T56GZSRVAH
[83521 rows x 1 columns]

Truncating & fancy indexing

A truncate() convenience function is provided that is similar to slicing. Note that truncate assumes a 0 value
for any unspecified date component in a DatetimeIndex in contrast to slicing which returns any partially matching
dates:
In [136]: rng2 = pd.date_range('2011-01-01', '2012-01-01', freq='W')

In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)

In [138]: ts2.truncate(before='2011-11', after='2011-12')


Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64

In [139]: ts2['2011-11':'2011-12']
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64

Even complicated fancy indexing that breaks the DatetimeIndex frequency regularity will result in a
DatetimeIndex, although frequency is lost:

In [140]: ts2[[0, 2, 6]].index


Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype=
˓→'datetime64[ns]', freq=None)

3.14.7 Time/date components

There are several time/date properties that one can access from Timestamp or a collection of timestamps like a
DatetimeIndex.

Property Description
year The year of the datetime
month The month of the datetime
day The days of the datetime
hour The hour of the datetime
minute The minutes of the datetime
second The seconds of the datetime
microsecond The microseconds of the datetime
nanosecond The nanoseconds of the datetime
date Returns datetime.date (does not contain timezone information)
time Returns datetime.time (does not contain timezone information)
timetz Returns datetime.time as local time with timezone information
dayofyear The ordinal day of year
weekofyear The week ordinal of the year
week The week ordinal of the year
dayofweek The number of the day of the week with Monday=0, Sunday=6
weekday The number of the day of the week with Monday=0, Sunday=6
quarter Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month The number of days in the month of the datetime
is_month_start Logical indicating if first day of month (defined by frequency)
is_month_end Logical indicating if last day of month (defined by frequency)
is_quarter_start Logical indicating if first day of quarter (defined by frequency)
is_quarter_end Logical indicating if last day of quarter (defined by frequency)
is_year_start Logical indicating if first day of year (defined by frequency)
is_year_end Logical indicating if last day of year (defined by frequency)
is_leap_year Logical indicating if the date belongs to a leap year

Furthermore, if you have a Series with datetimelike values, then you can access these properties via the .dt
accessor, as detailed in the section on .dt accessors.
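As a brief sketch of the .dt accessor (the dates here are arbitrary):

import pandas as pd

s = pd.Series(pd.date_range('2000-01-01', periods=3, freq='D'))
s.dt.year             # the year of each element (2000 for all three)
s.dt.dayofweek        # Monday=0 ... Sunday=6
s.dt.is_month_start   # True only for the first day of a month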


3.14.8 DateOffset objects

In the preceding examples, frequency strings (e.g. 'D') were used to specify a frequency that defined:
• how the date times in DatetimeIndex were spaced when using date_range()
• the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset is similar to a
Timedelta in that it represents a duration of time, but it follows specific calendar duration rules. For example, a
Timedelta day will always increment datetimes by 24 hours, while a DateOffset day will increment
datetimes to the same time the next day, whether a day represents 23, 24 or 25 hours due to daylight saving
time. However, all DateOffset subclasses that are an hour or smaller (Hour, Minute, Second, Milli, Micro,
Nano) behave like Timedelta and respect absolute time.
The basic DateOffset acts similarly to dateutil.relativedelta (relativedelta documentation) in that it shifts a
date time by the corresponding calendar duration specified. The arithmetic operator (+) or the apply method can be
used to perform the shift.

# This particular day contains a daylight saving time transition


In [141]: ts = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')

# Respects absolute time


In [142]: ts + pd.Timedelta(days=1)
Out[142]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')

# Respects calendar time


In [143]: ts + pd.DateOffset(days=1)
Out[143]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')

In [144]: friday = pd.Timestamp('2018-01-05')
In [145]: friday.day_name()
Out[145]: 'Friday'

# Add 2 business days (Friday --> Tuesday)


In [146]: two_business_days = 2 * pd.offsets.BDay()

In [147]: two_business_days.apply(friday)
Out[147]: Timestamp('2018-01-09 00:00:00')

In [148]: friday + two_business_days


Out[148]: Timestamp('2018-01-09 00:00:00')

In [149]: (friday + two_business_days).day_name()


Out[149]: 'Tuesday'

Most DateOffsets have associated frequency strings, or offset aliases, that can be passed into freq keyword
arguments. The available date offsets and associated frequency strings can be found below:

Date Offset                                  Frequency String  Description
DateOffset                                   None              Generic offset class, defaults to 1 calendar day
BDay or BusinessDay                          'B'               business day (weekday)
CDay or CustomBusinessDay                    'C'               custom business day
Week                                         'W'               one week, optionally anchored on a day of the week
WeekOfMonth                                  'WOM'             the x-th day of the y-th week of each month
LastWeekOfMonth                              'LWOM'            the x-th day of the last week of each month
MonthEnd                                     'M'               calendar month end
MonthBegin                                   'MS'              calendar month begin
BMonthEnd or BusinessMonthEnd                'BM'              business month end
BMonthBegin or BusinessMonthBegin            'BMS'             business month begin
CBMonthEnd or CustomBusinessMonthEnd         'CBM'             custom business month end
CBMonthBegin or CustomBusinessMonthBegin     'CBMS'            custom business month begin
SemiMonthEnd                                 'SM'              15th (or other day_of_month) and calendar month end
SemiMonthBegin                               'SMS'             15th (or other day_of_month) and calendar month begin
QuarterEnd                                   'Q'               calendar quarter end
QuarterBegin                                 'QS'              calendar quarter begin
BQuarterEnd                                  'BQ'              business quarter end
BQuarterBegin                                'BQS'             business quarter begin
FY5253Quarter                                'REQ'             retail (aka 52-53 week) quarter
YearEnd                                      'A'               calendar year end
YearBegin                                    'AS' or 'BYS'     calendar year begin
BYearEnd                                     'BA'              business year end
BYearBegin                                   'BAS'             business year begin
FY5253                                       'RE'              retail (aka 52-53 week) year
Easter                                       None              Easter holiday
BusinessHour                                 'BH'              business hour
CustomBusinessHour                           'CBH'             custom business hour
Day                                          'D'               one absolute day
Hour                                         'H'               one hour
Minute                                       'T' or 'min'      one minute
Second                                       'S'               one second
Milli                                        'L' or 'ms'       one millisecond
Micro                                        'U' or 'us'       one microsecond
Nano                                         'N'               one nanosecond

DateOffsets additionally have rollforward() and rollback() methods for moving a date forward or backward
respectively to a valid offset date relative to the offset. For example, business offsets will roll dates that land on
the weekends (Saturday and Sunday) forward to Monday since business offsets operate on the weekdays.
In [150]: ts = pd.Timestamp('2018-01-06 00:00:00')

In [151]: ts.day_name()
Out[151]: 'Saturday'

# BusinessHour's valid offset dates are Monday through Friday
In [152]: offset = pd.offsets.BusinessHour(start='09:00')

# Bring the date to the closest offset date (Monday)


In [153]: offset.rollforward(ts)
Out[153]: Timestamp('2018-01-08 09:00:00')

# Date is brought to the closest offset date first and then the hour is added
In [154]: ts + offset
Out[154]: Timestamp('2018-01-08 10:00:00')

These operations preserve time (hour, minute, etc) information by default. To reset time to midnight, use
normalize() before or after applying the operation (depending on whether you want the time information included
in the operation).
In [155]: ts = pd.Timestamp('2014-01-01 09:00')

In [156]: day = pd.offsets.Day()

In [157]: day.apply(ts)
Out[157]: Timestamp('2014-01-02 09:00:00')

In [158]: day.apply(ts).normalize()
Out[158]: Timestamp('2014-01-02 00:00:00')

In [159]: ts = pd.Timestamp('2014-01-01 22:00')

In [160]: hour = pd.offsets.Hour()


In [161]: hour.apply(ts)
Out[161]: Timestamp('2014-01-01 23:00:00')

In [162]: hour.apply(ts).normalize()
Out[162]: Timestamp('2014-01-01 00:00:00')

In [163]: hour.apply(pd.Timestamp("2014-01-01 23:30")).normalize()


Out[163]: Timestamp('2014-01-02 00:00:00')

Parametric offsets

Some of the offsets can be “parameterized” when created to result in different behaviors. For example, the Week
offset for generating weekly data accepts a weekday parameter which results in the generated dates always lying on
a particular day of the week:
In [164]: d = datetime.datetime(2008, 8, 18, 9, 0)

In [165]: d
Out[165]: datetime.datetime(2008, 8, 18, 9, 0)

In [166]: d + pd.offsets.Week()
Out[166]: Timestamp('2008-08-25 09:00:00')

In [167]: d + pd.offsets.Week(weekday=4)
Out[167]: Timestamp('2008-08-22 09:00:00')

In [168]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[168]: 4

In [169]: d - pd.offsets.Week()
Out[169]: Timestamp('2008-08-11 09:00:00')

The normalize option will be effective for addition and subtraction.

In [170]: d + pd.offsets.Week(normalize=True)
Out[170]: Timestamp('2008-08-25 00:00:00')

In [171]: d - pd.offsets.Week(normalize=True)
Out[171]: Timestamp('2008-08-11 00:00:00')

Another example is parameterizing YearEnd with the specific ending month:

In [172]: d + pd.offsets.YearEnd()
Out[172]: Timestamp('2008-12-31 09:00:00')

In [173]: d + pd.offsets.YearEnd(month=6)
Out[173]: Timestamp('2009-06-30 09:00:00')

Using offsets with Series / DatetimeIndex

Offsets can be used with either a Series or DatetimeIndex to apply the offset to each element.
In [174]: rng = pd.date_range('2012-01-01', '2012-01-03')
In [175]: s = pd.Series(rng)

In [176]: rng
Out[176]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype=
˓→'datetime64[ns]', freq='D')

In [177]: rng + pd.DateOffset(months=2)


Out[177]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype=
˓→'datetime64[ns]', freq='D')

In [178]: s + pd.DateOffset(months=2)
Out[178]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]

In [179]: s - pd.DateOffset(months=2)
Out[179]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]

If the offset class maps directly to a Timedelta (Day, Hour, Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the Timedelta section for more examples.


In [180]: s - pd.offsets.Day(2)
Out[180]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]

In [181]: td = s - pd.Series(pd.date_range('2011-12-29', '2011-12-31'))

In [182]: td
Out[182]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]

In [183]: td + pd.offsets.Minute(15)
Out[183]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]

Note that some offsets (such as BQuarterEnd) do not have a vectorized implementation. They can still be used but
may calculate significantly slower and will show a PerformanceWarning

In [184]: rng + pd.offsets.BQuarterEnd()


Out[184]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype=
˓→'datetime64[ns]', freq='D')

Custom business days

The CDay or CustomBusinessDay class provides a parametric BusinessDay class which can be used to create
customized business day calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.

In [185]: weekmask_egypt = 'Sun Mon Tue Wed Thu'

# They also observe International Workers' Day so let's


# add that for a couple of years
In [186]: holidays = ['2012-05-01',
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64('2014-05-01')]
.....:

In [187]: bday_egypt = pd.offsets.CustomBusinessDay(holidays=holidays,


.....: weekmask=weekmask_egypt)
.....:

In [188]: dt = datetime.datetime(2013, 4, 30)

In [189]: dt + 2 * bday_egypt
Out[189]: Timestamp('2013-05-05 00:00:00')

Let’s map to the weekday names:


In [190]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)

In [191]: pd.Series(dts.weekday, dts).map(


.....: pd.Series('Mon Tue Wed Thu Fri Sat Sun'.split()))
.....:
Out[191]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object

Holiday calendars can be used to provide the list of holidays. See the holiday calendar section for more information.

In [192]: from pandas.tseries.holiday import USFederalHolidayCalendar

In [193]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())

# Friday before MLK Day


In [194]: dt = datetime.datetime(2014, 1, 17)

# Tuesday after MLK Day (Monday is skipped because it's a holiday)


In [195]: dt + bday_us
Out[195]: Timestamp('2014-01-21 00:00:00')

Monthly offsets that respect a certain holiday calendar can be defined in the usual way.

In [196]: bmth_us = pd.offsets.CustomBusinessMonthBegin(
   .....:     calendar=USFederalHolidayCalendar())
.....:

# Skip new years


In [197]: dt = datetime.datetime(2013, 12, 17)

In [198]: dt + bmth_us
Out[198]: Timestamp('2014-01-02 00:00:00')

# Define date index with custom offset


In [199]: pd.date_range(start='20100101', end='20120101', freq=bmth_us)
Out[199]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')

Note: The frequency string ‘C’ is used to indicate that a CustomBusinessDay DateOffset is used. It is important to
note that, since CustomBusinessDay is a parameterised type, instances of CustomBusinessDay may differ and this is
not detectable from the ‘C’ frequency string. The user therefore needs to ensure that the ‘C’ frequency string is used
consistently within the user’s application.
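
As a rough illustration of this caveat (a sketch, not output captured from a session), two differently parameterized instances both report the same 'C' alias:

import pandas as pd

# Two CustomBusinessDay offsets with different parameters ...
bday_default = pd.offsets.CustomBusinessDay()
bday_egypt = pd.offsets.CustomBusinessDay(weekmask='Sun Mon Tue Wed Thu')

# ... both report the same frequency string, so 'C' alone cannot tell them apart
bday_default.freqstr  # 'C'
bday_egypt.freqstr    # 'C'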


Business hour

The BusinessHour class provides a business hour representation on BusinessDay, allowing you to use specific start
and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours. Adding BusinessHour will increment
Timestamp by hourly frequency. If the target Timestamp is outside business hours, it is first moved to the next business
hour and then incremented. If the result exceeds the business hours end, the remaining hours are added to the next business day.

In [200]: bh = pd.offsets.BusinessHour()

In [201]: bh
Out[201]: <BusinessHour: BH=09:00-17:00>

# 2014-08-01 is Friday
In [202]: pd.Timestamp('2014-08-01 10:00').weekday()
Out[202]: 4

In [203]: pd.Timestamp('2014-08-01 10:00') + bh


Out[203]: Timestamp('2014-08-01 11:00:00')

# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh


In [204]: pd.Timestamp('2014-08-01 08:00') + bh
Out[204]: Timestamp('2014-08-01 10:00:00')

# If the result is on the end time, move to the next business day
In [205]: pd.Timestamp('2014-08-01 16:00') + bh
Out[205]: Timestamp('2014-08-04 09:00:00')

# Remaining hours are added to the next day
In [206]: pd.Timestamp('2014-08-01 16:30') + bh
Out[206]: Timestamp('2014-08-04 09:30:00')

# Adding 2 business hours


In [207]: pd.Timestamp('2014-08-01 10:00') + pd.offsets.BusinessHour(2)
Out[207]: Timestamp('2014-08-01 12:00:00')

# Subtracting 3 business hours


In [208]: pd.Timestamp('2014-08-01 10:00') + pd.offsets.BusinessHour(-3)
Out[208]: Timestamp('2014-07-31 15:00:00')

You can also specify start and end time by keywords. The argument must be a str with an hour:minute
representation or a datetime.time instance. Specifying seconds, microseconds and nanoseconds as business hour
results in ValueError.

In [209]: bh = pd.offsets.BusinessHour(start='11:00', end=datetime.time(20, 0))

In [210]: bh
Out[210]: <BusinessHour: BH=11:00-20:00>

In [211]: pd.Timestamp('2014-08-01 13:00') + bh


Out[211]: Timestamp('2014-08-01 14:00:00')

In [212]: pd.Timestamp('2014-08-01 09:00') + bh


Out[212]: Timestamp('2014-08-01 12:00:00')

In [213]: pd.Timestamp('2014-08-01 18:00') + bh


Out[213]: Timestamp('2014-08-01 19:00:00')


Passing a start time later than the end time represents a midnight business hour. In this case, business hours exceed
midnight and overlap to the next day. Valid business hours are distinguished by whether they started from a valid BusinessDay.

In [214]: bh = pd.offsets.BusinessHour(start='17:00', end='09:00')

In [215]: bh
Out[215]: <BusinessHour: BH=17:00-09:00>

In [216]: pd.Timestamp('2014-08-01 17:00') + bh


Out[216]: Timestamp('2014-08-01 18:00:00')

In [217]: pd.Timestamp('2014-08-01 23:00') + bh


Out[217]: Timestamp('2014-08-02 00:00:00')

# Although 2014-08-02 is Saturday,


# it is valid because it starts from 08-01 (Friday).
In [218]: pd.Timestamp('2014-08-02 04:00') + bh
Out[218]: Timestamp('2014-08-02 05:00:00')

# Although 2014-08-04 is Monday,


# it is out of business hours because it starts from 08-03 (Sunday).
In [219]: pd.Timestamp('2014-08-04 04:00') + bh
Out[219]: Timestamp('2014-08-04 18:00:00')

Applying BusinessHour.rollforward and rollback to a timestamp outside business hours results in the next business
hour start or the previous day’s end, respectively. Unlike other offsets, BusinessHour.rollforward may output different
results from apply by definition.
This is because one day’s business hour end is equal to the next day’s business hour start. For example, under the default
business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and 2014-08-04 09:00.

# This adjusts a Timestamp to business hour edge


In [220]: pd.offsets.BusinessHour().rollback(pd.Timestamp('2014-08-02 15:00'))
Out[220]: Timestamp('2014-08-01 17:00:00')

In [221]: pd.offsets.BusinessHour().rollforward(pd.Timestamp('2014-08-02 15:00'))


Out[221]: Timestamp('2014-08-04 09:00:00')

# It is the same as BusinessHour().apply(pd.Timestamp('2014-08-01 17:00')).


# And it is the same as BusinessHour().apply(pd.Timestamp('2014-08-04 09:00'))
In [222]: pd.offsets.BusinessHour().apply(pd.Timestamp('2014-08-02 15:00'))
Out[222]: Timestamp('2014-08-04 10:00:00')

# BusinessDay results (for reference)


In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp('2014-08-02'))
Out[223]: Timestamp('2014-08-04 09:00:00')

# It is the same as BusinessDay().apply(pd.Timestamp('2014-08-01'))


# The result is the same as rollforward because BusinessDay never overlaps.
In [224]: pd.offsets.BusinessHour().apply(pd.Timestamp('2014-08-02'))
Out[224]: Timestamp('2014-08-04 10:00:00')

BusinessHour regards Saturday and Sunday as holidays. To use arbitrary holidays, you can use
CustomBusinessHour offset, as explained in the following subsection.


Custom business hour

The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which allows you to
specify arbitrary holidays. CustomBusinessHour works the same as BusinessHour except that it skips
specified custom holidays.

In [225]: from pandas.tseries.holiday import USFederalHolidayCalendar

In [226]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())

# Friday before MLK Day


In [227]: dt = datetime.datetime(2014, 1, 17, 15)

In [228]: dt + bhour_us
Out[228]: Timestamp('2014-01-17 16:00:00')

# Tuesday after MLK Day (Monday is skipped because it's a holiday)


In [229]: dt + bhour_us * 2
Out[229]: Timestamp('2014-01-21 09:00:00')

You can use keyword arguments supported by either BusinessHour or CustomBusinessDay.

In [230]: bhour_mon = pd.offsets.CustomBusinessHour(start='10:00',


.....: weekmask='Tue Wed Thu Fri')
.....:

# Monday is skipped because it's a holiday, business hour starts from 10:00
In [231]: dt + bhour_mon * 2
Out[231]: Timestamp('2014-01-21 10:00:00')

Offset aliases

A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset
aliases.


Alias Description
B business day frequency
C custom business day frequency
D calendar day frequency
W weekly frequency
M month end frequency
SM semi-month end frequency (15th and end of month)
BM business month end frequency
CBM custom business month end frequency
MS month start frequency
SMS semi-month start frequency (1st and 15th)
BMS business month start frequency
CBMS custom business month start frequency
Q quarter end frequency
BQ business quarter end frequency
QS quarter start frequency
BQS business quarter start frequency
A, Y year end frequency
BA, BY business year end frequency
AS, YS year start frequency
BAS, BYS business year start frequency
BH business hour frequency
H hourly frequency
T, min minutely frequency
S secondly frequency
L, ms milliseconds
U, us microseconds
N nanoseconds

Combining aliases

As we have seen previously, the alias and the offset instance are fungible in most functions:

In [232]: pd.date_range(start, periods=5, freq='B')


Out[232]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')

In [233]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())


Out[233]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')

You can combine together day and intraday offsets:

In [234]: pd.date_range(start, periods=10, freq='2h20min')


Out[234]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')

In [235]: pd.date_range(start, periods=10, freq='1D10U')


Out[235]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')

Anchored offsets

For some frequencies you can specify an anchoring suffix:

Alias        Description
W-SUN        weekly frequency (Sundays). Same as 'W'
W-MON        weekly frequency (Mondays)
W-TUE        weekly frequency (Tuesdays)
W-WED        weekly frequency (Wednesdays)
W-THU        weekly frequency (Thursdays)
W-FRI        weekly frequency (Fridays)
W-SAT        weekly frequency (Saturdays)
(B)Q(S)-DEC  quarterly frequency, year ends in December. Same as 'Q'
(B)Q(S)-JAN  quarterly frequency, year ends in January
(B)Q(S)-FEB  quarterly frequency, year ends in February
(B)Q(S)-MAR  quarterly frequency, year ends in March
(B)Q(S)-APR  quarterly frequency, year ends in April
(B)Q(S)-MAY  quarterly frequency, year ends in May
(B)Q(S)-JUN  quarterly frequency, year ends in June
(B)Q(S)-JUL  quarterly frequency, year ends in July
(B)Q(S)-AUG  quarterly frequency, year ends in August
(B)Q(S)-SEP  quarterly frequency, year ends in September
(B)Q(S)-OCT  quarterly frequency, year ends in October
(B)Q(S)-NOV  quarterly frequency, year ends in November
(B)A(S)-DEC  annual frequency, anchored end of December. Same as 'A'
(B)A(S)-JAN  annual frequency, anchored end of January
(B)A(S)-FEB  annual frequency, anchored end of February
(B)A(S)-MAR  annual frequency, anchored end of March
(B)A(S)-APR  annual frequency, anchored end of April
(B)A(S)-MAY  annual frequency, anchored end of May
(B)A(S)-JUN  annual frequency, anchored end of June
(B)A(S)-JUL  annual frequency, anchored end of July
(B)A(S)-AUG  annual frequency, anchored end of August
(B)A(S)-SEP  annual frequency, anchored end of September
(B)A(S)-OCT  annual frequency, anchored end of October
(B)A(S)-NOV  annual frequency, anchored end of November

These can be used as arguments to date_range, bdate_range, constructors for DatetimeIndex, as well as
various other timeseries-related functions in pandas.
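
For example, a minimal sketch of passing an anchored alias straight to date_range:

import pandas as pd

# Weekly frequency anchored on Wednesdays
pd.date_range('2011-01-01', periods=3, freq='W-WED')
# DatetimeIndex(['2011-01-05', '2011-01-12', '2011-01-19'],
#               dtype='datetime64[ns]', freq='W-WED')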

Anchored offset semantics

For those offsets that are anchored to the start or end of a specific frequency (MonthEnd, MonthBegin, WeekEnd,
etc), the following rules apply to rolling forward and backward.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous) anchor point, and moved
|n|-1 additional steps forwards or backwards.

In [236]: pd.Timestamp('2014-01-02') + pd.offsets.MonthBegin(n=1)


Out[236]: Timestamp('2014-02-01 00:00:00')

In [237]: pd.Timestamp('2014-01-02') + pd.offsets.MonthEnd(n=1)


Out[237]: Timestamp('2014-01-31 00:00:00')

In [238]: pd.Timestamp('2014-01-02') - pd.offsets.MonthBegin(n=1)


Out[238]: Timestamp('2014-01-01 00:00:00')

In [239]: pd.Timestamp('2014-01-02') - pd.offsets.MonthEnd(n=1)


Out[239]: Timestamp('2013-12-31 00:00:00')

In [240]: pd.Timestamp('2014-01-02') + pd.offsets.MonthBegin(n=4)


Out[240]: Timestamp('2014-05-01 00:00:00')

In [241]: pd.Timestamp('2014-01-02') - pd.offsets.MonthBegin(n=4)


Out[241]: Timestamp('2013-10-01 00:00:00')

If the given date is on an anchor point, it is moved |n| points forwards or backwards.


In [242]: pd.Timestamp('2014-01-01') + pd.offsets.MonthBegin(n=1)


Out[242]: Timestamp('2014-02-01 00:00:00')

In [243]: pd.Timestamp('2014-01-31') + pd.offsets.MonthEnd(n=1)


Out[243]: Timestamp('2014-02-28 00:00:00')

In [244]: pd.Timestamp('2014-01-01') - pd.offsets.MonthBegin(n=1)


Out[244]: Timestamp('2013-12-01 00:00:00')

In [245]: pd.Timestamp('2014-01-31') - pd.offsets.MonthEnd(n=1)


Out[245]: Timestamp('2013-12-31 00:00:00')

In [246]: pd.Timestamp('2014-01-01') + pd.offsets.MonthBegin(n=4)


Out[246]: Timestamp('2014-05-01 00:00:00')

In [247]: pd.Timestamp('2014-01-31') - pd.offsets.MonthBegin(n=4)


Out[247]: Timestamp('2013-10-01 00:00:00')

For the case when n=0, the date is not moved if on an anchor point, otherwise it is rolled forward to the next anchor
point.

In [248]: pd.Timestamp('2014-01-02') + pd.offsets.MonthBegin(n=0)


Out[248]: Timestamp('2014-02-01 00:00:00')

In [249]: pd.Timestamp('2014-01-02') + pd.offsets.MonthEnd(n=0)


Out[249]: Timestamp('2014-01-31 00:00:00')

In [250]: pd.Timestamp('2014-01-01') + pd.offsets.MonthBegin(n=0)


Out[250]: Timestamp('2014-01-01 00:00:00')
In [251]: pd.Timestamp('2014-01-31') + pd.offsets.MonthEnd(n=0)
Out[251]: Timestamp('2014-01-31 00:00:00')

Holidays / holiday calendars

Holidays and calendars provide a simple way to define holiday rules to be used with CustomBusinessDay or
in other analysis that requires a predefined set of holidays. The AbstractHolidayCalendar class provides all
the necessary methods to return a list of holidays and only rules need to be defined in a specific holiday calendar
class. Furthermore, the start_date and end_date class attributes determine over what date range holidays are
generated. These should be overwritten on the AbstractHolidayCalendar class to have the range apply to all
calendar subclasses. USFederalHolidayCalendar is the only calendar that exists and primarily serves as an
example for developing other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an observance rule determines when that
holiday is observed if it falls on a weekend or some other non-observed day. Defined observance rules are:

Rule Description
nearest_workday move Saturday to Friday and Sunday to Monday
sunday_to_monday move Sunday to following Monday
next_monday_or_tuesday
move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday move Saturday and Sunday to previous Friday
next_monday move Saturday and Sunday to following Monday

An example of how holidays and holiday calendars are defined:


In [252]: from pandas.tseries.holiday import Holiday, USMemorialDay,\


.....: AbstractHolidayCalendar, nearest_workday, MO
.....:

In [253]: class ExampleCalendar(AbstractHolidayCalendar):


.....: rules = [
.....: USMemorialDay,
.....: Holiday('July 4th', month=7, day=4, observance=nearest_workday),
.....: Holiday('Columbus Day', month=10, day=1,
.....: offset=pd.DateOffset(weekday=MO(2)))]
.....:

In [254]: cal = ExampleCalendar()

In [255]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))


Out[255]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype=
˓→'datetime64[ns]', freq=None)

Hint: weekday=MO(2) is the same as 2 * Week(weekday=2)


Using this calendar, creating an index or doing offset arithmetic skips weekends and holidays (i.e., Memorial Day/July
4th). For example, the below defines a custom business day offset using the ExampleCalendar. Like any other
offset, it can be used to create a DatetimeIndex or added to datetime or Timestamp objects.

In [256]: pd.date_range(start='7/1/2012', end='7/10/2012',


.....: freq=pd.offsets.CDay(calendar=cal)).to_pydatetime()
.....:
Out[256]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)

In [257]: offset = pd.offsets.CustomBusinessDay(calendar=cal)

In [258]: datetime.datetime(2012, 5, 25) + offset


Out[258]: Timestamp('2012-05-29 00:00:00')

In [259]: datetime.datetime(2012, 7, 3) + offset


Out[259]: Timestamp('2012-07-05 00:00:00')

In [260]: datetime.datetime(2012, 7, 3) + 2 * offset


Out[260]: Timestamp('2012-07-06 00:00:00')

In [261]: datetime.datetime(2012, 7, 6) + offset


Out[261]: Timestamp('2012-07-09 00:00:00')

Ranges are defined by the start_date and end_date class attributes of AbstractHolidayCalendar. The
defaults are shown below.

In [262]: AbstractHolidayCalendar.start_date
Out[262]: Timestamp('1970-01-01 00:00:00')

In [263]: AbstractHolidayCalendar.end_date
Out[263]: Timestamp('2200-12-31 00:00:00')


These dates can be overwritten by setting the attributes as datetime/Timestamp/string.

In [264]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)

In [265]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)

In [266]: cal.holidays()
Out[266]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype=
˓→'datetime64[ns]', freq=None)

Every calendar class is accessible by name using the get_calendar function which returns a holiday class instance.
Any imported calendar class will automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars or calendars with additional rules.

In [267]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory,\


.....: USLaborDay
.....:

In [268]: cal = get_calendar('ExampleCalendar')

In [269]: cal.rules
Out[269]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at
˓→0x7f3d08414950>),

Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]

In [270]: new_cal = HolidayCalendarFactory('NewExampleCalendar', cal, USLaborDay)


In [271]: new_cal.rules
Out[271]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at
˓→0x7f3d08414950>),

Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]

3.14.9 Time series-related instance methods

Shifting / lagging

One may want to shift or lag the values in a time series back and forward in time. The method for this is shift(),
which is available on all of the pandas objects.

In [272]: ts = pd.Series(range(len(rng)), index=rng)

In [273]: ts = ts[:5]

In [274]: ts.shift(1)
Out[274]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64


The shift method accepts a freq argument which can accept a DateOffset class or other timedelta-like
object, or an offset alias:

In [275]: ts.shift(5, freq=pd.offsets.BDay())


Out[275]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
Freq: B, dtype: int64

In [276]: ts.shift(5, freq='BM')


Out[276]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
Freq: D, dtype: int64

Rather than changing the alignment of the data and the index, DataFrame and Series objects also have a
tshift() convenience method that changes all the dates in the index by a specified number of offsets:

In [277]: ts.tshift(5, freq='D')


Out[277]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64

Note that with tshift, the leading entry is no longer NaN because the data is not being realigned.
Frequency conversion

The primary function for changing frequencies is the asfreq() method. For a DatetimeIndex, this is basically
just a thin, but convenient wrapper around reindex() which generates a date_range and calls reindex.

In [278]: dr = pd.date_range('1/1/2010', periods=3, freq=3 * pd.offsets.BDay())

In [279]: ts = pd.Series(np.random.randn(3), index=dr)

In [280]: ts
Out[280]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64

In [281]: ts.asfreq(pd.offsets.BDay())
Out[281]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64


asfreq provides a further convenience so you can specify an interpolation method for any gaps that may appear after
the frequency conversion.

In [282]: ts.asfreq(pd.offsets.BDay(), method='pad')


Out[282]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64

Filling forward / backward

Related to asfreq and reindex is fillna(), which is documented in the missing data section.
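
As a small sketch (reusing the ts from the asfreq examples above), gaps left by the frequency conversion could also be filled afterwards with fillna:

# Backward-fill the NaNs introduced by the conversion to business-day frequency
ts.asfreq(pd.offsets.BDay()).fillna(method='bfill')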

Converting to Python datetimes

DatetimeIndex can be converted to an array of Python native datetime.datetime objects using the
to_pydatetime method.
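
A minimal sketch:

import pandas as pd

idx = pd.date_range('2012-01-01', periods=2, freq='D')

# An object-dtype numpy array of datetime.datetime instances
idx.to_pydatetime()
# array([datetime.datetime(2012, 1, 1, 0, 0),
#        datetime.datetime(2012, 1, 2, 0, 0)], dtype=object)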

3.14.10 Resampling

Pandas has a simple, powerful, and efficient functionality for performing resampling operations during frequency
conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method on each of its groups. See some cookbook
examples for some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects, see the groupby docs.

Note: .resample() is similar to using a rolling() operation with a time-based offset, see a discussion here.

Basics

In [283]: rng = pd.date_range('1/1/2012', periods=100, freq='S')

In [284]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)

In [285]: ts.resample('5Min').sum()
Out[285]:
2012-01-01 25103
Freq: 5T, dtype: int64

The resample function is very flexible and allows you to specify many different parameters to control the frequency
conversion and resampling operation.
Any function available via dispatching is available as a method of the returned object, including sum, mean, std,
sem, max, min, median, first, last, ohlc:


In [286]: ts.resample('5Min').mean()
Out[286]:
2012-01-01 251.03
Freq: 5T, dtype: float64

In [287]: ts.resample('5Min').ohlc()
Out[287]:
open high low close
2012-01-01 308 460 9 205

In [288]: ts.resample('5Min').max()
Out[288]:
2012-01-01 460
Freq: 5T, dtype: int64

For downsampling, closed can be set to ‘left’ or ‘right’ to specify which end of the interval is closed:

In [289]: ts.resample('5Min', closed='right').mean()


Out[289]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64

In [290]: ts.resample('5Min', closed='left').mean()


Out[290]:
2012-01-01 251.03
Freq: 5T, dtype: float64

Parameters like label and loffset are used to manipulate the resulting labels. label specifies whether the result
is labeled with the beginning or the end of the interval. loffset performs a time adjustment on the output labels.

In [291]: ts.resample('5Min').mean() # by default label='left'


Out[291]:
2012-01-01 251.03
Freq: 5T, dtype: float64

In [292]: ts.resample('5Min', label='left').mean()


Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64

In [293]: ts.resample('5Min', label='left', loffset='1s').mean()


Out[293]:
2012-01-01 00:00:01 251.03
dtype: float64

Warning: The default values for label and closed are ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’,
‘BM’, ‘BA’, ‘BQ’, and ‘W’, which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later time is pulled back to a previous time,
as in the following example with the BusinessDay frequency:
In [294]: s = pd.date_range('2000-01-01', '2000-01-05').to_series()

In [295]: s.iloc[2] = pd.NaT

In [296]: s.dt.day_name()


Out[296]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object

# default: label='left', closed='left'


In [297]: s.resample('B').last().dt.day_name()
Out[297]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object

Notice how the value for Sunday got pulled back to the previous Friday. To get the behavior where the value for
Sunday is pushed to Monday, use instead
In [298]: s.resample('B', label='right', closed='right').last().dt.day_name()
Out[298]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object

The axis parameter can be set to 0 or 1 and allows you to resample the specified axis for a DataFrame.
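
A rough sketch (the wide frame here is illustrative): resampling along axis=1 requires the column labels to be datetimelike.

import numpy as np
import pandas as pd

# The columns, rather than the index, hold the dates
wide = pd.DataFrame(np.random.randn(2, 4),
                    columns=pd.date_range('2020-01-01', periods=4, freq='D'))

# Aggregate every two days across the columns
wide.resample('2D', axis=1).mean()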
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index to/from timestamp and time span
representations. By default resample retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data (detail below). It specifies how low frequency
periods are converted to higher frequency periods.
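
For instance, a small sketch (reusing the secondly ts from the Basics example above): with kind='period' the resulting bins are labeled with Period objects rather than Timestamps.

# The index of the result is a PeriodIndex instead of a DatetimeIndex
ts.resample('5Min', kind='period').mean()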

Upsampling

For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are
created:
# from secondly to every 250 milliseconds
In [299]: ts[:2].resample('250L').asfreq()
Out[299]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64

In [300]: ts[:2].resample('250L').ffill()
Out[300]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64

In [301]: ts[:2].resample('250L').ffill(limit=2)
Out[301]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64

Sparse resampling

Sparse timeseries are the ones where you have a lot fewer points relative to the amount of time you are looking to
resample. Naively upsampling a sparse series can potentially generate lots of intermediate values. When you don’t
want to use a method to fill these values, e.g. fill_method is None, then intermediate values will be filled with
NaN.
Since resample is a time-based groupby, the following is a method to efficiently resample only the groups that are
not all NaN.

In [302]: rng = pd.date_range('2014-1-1', periods=100, freq='D') + pd.Timedelta('1s')

In [303]: ts = pd.Series(range(100), index=rng)

If we want to resample to the full range of the series:
In [304]: ts.resample('3T').sum()
Out[304]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64

We can instead only resample those groups where we have points as follows:

In [305]: from functools import partial

In [306]: from pandas.tseries.frequencies import to_offset

In [307]: def round(t, freq):


.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:

In [308]: ts.groupby(partial(round, freq='3T')).sum()


Out[308]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64

Aggregation

Similar to the aggregating API, groupby API, and the window functions API, a Resampler can be selectively
resampled.
When resampling a DataFrame, the default is to act on all columns with the same function.

In [309]: df = pd.DataFrame(np.random.randn(1000, 3),


.....: index=pd.date_range('1/1/2012', freq='S', periods=1000),
.....: columns=['A', 'B', 'C'])
.....:

In [310]: r = df.resample('3T')
In [311]: r.mean()
Out[311]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046

We can select a specific column or columns using standard getitem.

In [312]: r['A'].mean()
Out[312]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64

In [313]: r[['A', 'B']].mean()


Out[313]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287

You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [314]: r['A'].agg([np.sum, np.mean, np.std])
Out[314]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476

On a resampled DataFrame, you can pass a list of functions to apply to each column, which produces an aggregated
result with a hierarchical index:
In [315]: r.agg([np.sum, np.mean])
Out[315]:
A B C
sum mean sum mean sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 -21.872530 -0.121514 -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 26.411633 0.146731 -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 8.468289 0.047046 -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 -4.708526 -0.026158 -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 -0.565895 -0.003144 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 -1.628689 -0.016287 -5.004580 -0.050046

By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
In [316]: r.agg({'A': np.sum,
.....: 'B': lambda x: np.std(x, ddof=1)})
.....:
Out[316]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312

The function names can also be strings. In order for a string to be valid it must be implemented on the resampled
object:
In [317]: r.agg({'A': 'sum', 'B': 'std'})
Out[317]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312


Furthermore, you can also specify multiple aggregation functions for each column separately.

In [318]: r.agg({'A': ['sum', 'std'], 'B': ['mean', 'std']})


Out[318]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312

If a DataFrame does not have a datetimelike index, but instead you want to resample based on a datetimelike column
in the frame, it can be passed to the on keyword.

In [319]: df = pd.DataFrame({'date': pd.date_range('2015-01-01', freq='W', periods=5),


.....: 'a': np.arange(5)},
.....: index=pd.MultiIndex.from_arrays([
.....: [1, 2, 3, 4, 5],
.....: pd.date_range('2015-01-01', freq='W', periods=5)],
.....: names=['v', 'd']))
.....:

In [320]: df
Out[320]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4

In [321]: df.resample('M', on='date').sum()


Out[321]:
a
date
2015-01-31 6
2015-02-28 4

Similarly, if you instead want to resample by a datetimelike level of MultiIndex, its name or location can be passed
to the level keyword.

In [322]: df.resample('M', level='d').sum()


Out[322]:
a
d
2015-01-31 6
2015-02-28 4


Iterating through groups

With the Resampler object in hand, iterating through the grouped data is very natural and functions similarly to
itertools.groupby():

In [323]: small = pd.Series(


.....: range(6),
.....: index=pd.to_datetime(['2017-01-01T00:00:00',
.....: '2017-01-01T00:30:00',
.....: '2017-01-01T00:31:00',
.....: '2017-01-01T01:00:00',
.....: '2017-01-01T03:00:00',
.....: '2017-01-01T03:05:00'])
.....: )
.....:

In [324]: resampled = small.resample('H')

In [325]: for name, group in resampled:


.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64

Group: 2017-01-01 02:00:00


---------------------------
Series([], dtype: int64)

Group: 2017-01-01 03:00:00


---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64

See Iterating through groups or Resampler.__iter__ for more.


3.14.11 Time span representation

Regular intervals of time are represented by Period objects in pandas while sequences of Period objects are
collected in a PeriodIndex, which can be created with the convenience function period_range.

Period

A Period represents a span of time (e.g., a day, a month, a quarter, etc). You can specify the span via freq keyword
using a frequency alias like below. Because freq represents a span of Period, it cannot be negative like “-3D”.

In [326]: pd.Period('2012', freq='A-DEC')


Out[326]: Period('2012', 'A-DEC')

In [327]: pd.Period('2012-1-1', freq='D')


Out[327]: Period('2012-01-01', 'D')

In [328]: pd.Period('2012-1-1 19:00', freq='H')


Out[328]: Period('2012-01-01 19:00', 'H')

In [329]: pd.Period('2012-1-1 19:00', freq='5H')


Out[329]: Period('2012-01-01 19:00', '5H')

Adding and subtracting integers from periods shifts the period by its own frequency. Arithmetic is not allowed between
Period with different freq (span).

In [330]: p = pd.Period('2012', freq='A-DEC')

In [331]: p + 1
Out[331]: Period('2013', 'A-DEC')

In [332]: p - 3
Out[332]: Period('2009', 'A-DEC')

In [333]: p = pd.Period('2012-01', freq='2M')

In [334]: p + 2
Out[334]: Period('2012-05', '2M')

In [335]: p - 1
Out[335]: Period('2011-11', '2M')

In [336]: p == pd.Period('2012-01', freq='3M')


---------------------------------------------------------------------------
IncompatibleFrequency Traceback (most recent call last)
<ipython-input-336-4b67dc0b596c> in <module>
----> 1 p == pd.Period('2012-01', freq='3M')

/pandas/pandas/_libs/tslibs/period.pyx in pandas._libs.tslibs.period._Period.__
˓→richcmp__()

IncompatibleFrequency: Input has different freq=3M from Period(freq=2M)

If Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like can be added if the result can
have the same freq. Otherwise, ValueError will be raised.


In [337]: p = pd.Period('2014-07-01 09:00', freq='H')

In [338]: p + pd.offsets.Hour(2)
Out[338]: Period('2014-07-01 11:00', 'H')

In [339]: p + datetime.timedelta(minutes=120)
Out[339]: Period('2014-07-01 11:00', 'H')

In [340]: p + np.timedelta64(7200, 's')


Out[340]: Period('2014-07-01 11:00', 'H')

In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)

If Period has other frequencies, only the same offsets can be added. Otherwise, ValueError will be raised.

In [341]: p = pd.Period('2014-07', freq='M')

In [342]: p + pd.offsets.MonthEnd(3)
Out[342]: Period('2014-10', 'M')

In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)

Taking the difference of Period instances with the same frequency will return the number of frequency units between
them:

In [343]: pd.Period('2012', freq='A-DEC') - pd.Period('2002', freq='A-DEC')


Out[343]: <10 * YearEnds: month=12>

PeriodIndex and period_range

Regular sequences of Period objects can be collected in a PeriodIndex, which can be constructed using the
period_range convenience function:

In [344]: prng = pd.period_range('1/1/2011', '1/1/2012', freq='M')

In [345]: prng
Out[345]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]', freq='M')

The PeriodIndex constructor can also be used directly:

In [346]: pd.PeriodIndex(['2011-1', '2011-2', '2011-3'], freq='M')


Out[346]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]', freq='M')

Passing a multiplied frequency outputs a sequence of Period objects which have a multiplied span.


In [347]: pd.period_range(start='2014-01', freq='3M', periods=4)


Out[347]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]
˓→', freq='3M')

If start or end are Period objects, they will be used as anchor endpoints for a PeriodIndex with frequency
matching that of the PeriodIndex constructor.

In [348]: pd.period_range(start=pd.Period('2017Q1', freq='Q'),


.....: end=pd.Period('2017Q2', freq='Q'), freq='M')
.....:
Out[348]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]',
˓→ freq='M')

Just like DatetimeIndex, a PeriodIndex can also be used to index pandas objects:

In [349]: ps = pd.Series(np.random.randn(len(prng)), prng)

In [350]: ps
Out[350]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64

PeriodIndex supports addition and subtraction with the same rule as Period.

In [351]: idx = pd.period_range('2014-07-01 09:00', periods=5, freq='H')

In [352]: idx
Out[352]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]', freq='H')

In [353]: idx + pd.offsets.Hour(2)


Out[353]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]', freq='H')

In [354]: idx = pd.period_range('2014-07', periods=5, freq='M')

In [355]: idx
Out[355]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype=
˓→'period[M]', freq='M')

In [356]: idx + pd.offsets.MonthEnd(3)


Out[356]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype=
˓→'period[M]', freq='M')

PeriodIndex has its own dtype named period, refer to Period Dtypes.

Period dtypes

PeriodIndex has a custom period dtype. This is a pandas extension dtype similar to the timezone aware dtype
(datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with period[freq] like period[D] or
period[M], using frequency strings.
In [357]: pi = pd.period_range('2016-01-01', periods=3, freq='M')

In [358]: pi
Out[358]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]', freq='M')

In [359]: pi.dtype
Out[359]: period[M]

The period dtype can be used in .astype(...). It allows one to change the freq of a PeriodIndex like
.asfreq() and convert a DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [360]: pi.astype('period[D]')
Out[360]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]', freq='D')

# convert to DatetimeIndex
In [361]: pi.astype('datetime64[ns]')
Out[361]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype=
˓→'datetime64[ns]', freq='MS')

# convert to PeriodIndex
In [362]: dti = pd.date_range('2011-01-01', freq='M', periods=3)

In [363]: dti
Out[363]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype=
˓→'datetime64[ns]', freq='M')

In [364]: dti.astype('period[M]')
Out[364]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]', freq='M')

PeriodIndex partial string indexing

You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as
DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [365]: ps['2011-01']
Out[365]: -2.9169013294054507

In [366]: ps[datetime.datetime(2011, 12, 25):]


Out[366]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64

In [367]: ps['10/31/2011':'12/31/2011']
Out[367]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64

Passing a string representing a lower frequency than PeriodIndex returns partial sliced data.
In [368]: ps['2011']
Out[368]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
[email protected]
T56GZSRVAH
In [369]: dfp = pd.DataFrame(np.random.randn(600, 1),
.....: columns=['A'],
.....: index=pd.period_range('2013-01-01 9:00',
.....: periods=600,
.....: freq='T'))
.....:

In [370]: dfp
Out[370]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104

[600 rows x 1 columns]

In [371]: dfp['2013-01-01 10H']


Out[371]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298

[60 rows x 1 columns]

As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from
10:00 to 11:59.

In [372]: dfp['2013-01-01 10H':'2013-01-01 11H']


Out[372]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
[email protected]
T56GZSRVAH2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496

[120 rows x 1 columns]

Frequency conversion and resampling with PeriodIndex

The frequency of Period and PeriodIndex can be converted via the asfreq method. Let’s start with the fiscal
year 2011, ending in December:

In [373]: p = pd.Period('2011', freq='A-DEC')

In [374]: p
Out[374]: Period('2011', 'A-DEC')

We can convert it to a monthly frequency. Using the how parameter, we can specify whether to return the starting or
ending month:

In [375]: p.asfreq('M', how='start')


Out[375]: Period('2011-01', 'M')

In [376]: p.asfreq('M', how='end')


Out[376]: Period('2011-12', 'M')

The shorthands ‘s’ and ‘e’ are provided for convenience:


In [377]: p.asfreq('M', 's')


Out[377]: Period('2011-01', 'M')

In [378]: p.asfreq('M', 'e')


Out[378]: Period('2011-12', 'M')

Converting to a “super-period” (e.g., annual frequency is a super-period of quarterly frequency) automatically returns
the super-period that includes the input period:
In [379]: p = pd.Period('2011-12', freq='M')

In [380]: p.asfreq('A-NOV')
Out[380]: Period('2012', 'A-NOV')

Note that since we converted to an annual frequency that ends the year in November, the monthly period of December
2011 is actually in the 2012 A-NOV period.
Period conversions with anchored frequencies are particularly useful for working with various quarterly data common
to economics, business, and other fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, first quarter of 2011 could start in 2010 or a few months into 2011. Via anchored
frequencies, pandas works for all quarterly frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [381]: p = pd.Period('2012Q1', freq='Q-DEC')

In [382]: p.asfreq('D', 's')


Out[382]: Period('2012-01-01', 'D')
[email protected]
T56GZSRVAHIn [383]: p.asfreq('D', 'e')
Out[383]: Period('2012-03-31', 'D')

Q-MAR defines fiscal year end in March:


In [384]: p = pd.Period('2011Q4', freq='Q-MAR')

In [385]: p.asfreq('D', 's')


Out[385]: Period('2011-01-01', 'D')

In [386]: p.asfreq('D', 'e')


Out[386]: Period('2011-03-31', 'D')

3.14.12 Converting between representations

Timestamped data can be converted to PeriodIndex-ed data using to_period and vice-versa using
to_timestamp:
In [387]: rng = pd.date_range('1/1/2012', periods=5, freq='M')

In [388]: ts = pd.Series(np.random.randn(len(rng)), index=rng)

In [389]: ts
Out[389]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64

In [390]: ps = ts.to_period()

In [391]: ps
Out[391]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64

In [392]: ps.to_timestamp()
Out[392]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64

Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or end of the period:

In [393]: ps.to_timestamp('D', how='s')


Out[393]:
[email protected]
T56GZSRVAH2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:

In [394]: prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')

In [395]: ts = pd.Series(np.random.randn(len(prng)), prng)

In [396]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9

In [397]: ts.head()
Out[397]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64


3.14.13 Representing out-of-bounds spans

If you have data that is outside of the Timestamp bounds (see Timestamp limitations), then you can use a
PeriodIndex and/or Series of Periods to do computations.

In [398]: span = pd.period_range('1215-01-01', '1381-01-01', freq='D')

In [399]: span
Out[399]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632, freq='D')

To convert from an int64-based YYYYMMDD representation:

In [400]: s = pd.Series([20121231, 20141130, 99991231])

In [401]: s
Out[401]:
0 20121231
1 20141130
2 99991231
dtype: int64
[email protected]
T56GZSRVAHIn [402]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100,
.....: day=x % 100, freq='D')
.....:

In [403]: s.apply(conv)
Out[403]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]

In [404]: s.apply(conv)[2]
Out[404]: Period('9999-12-31', 'D')

These can easily be converted to a PeriodIndex:

In [405]: span = pd.PeriodIndex(s.apply(conv))

In [406]: span
Out[406]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]',
˓→freq='D')


3.14.14 Time zone handling

pandas provides rich support for working with timestamps in different time zones using the pytz and dateutil
libraries or datetime.timezone objects from the standard library.

Working with time zones

By default, pandas objects are time zone unaware:


In [407]: rng = pd.date_range('3/6/2012 00:00', periods=15, freq='D')

In [408]: rng.tz is None


Out[408]: True

To localize these dates to a time zone (assign a particular time zone to a naive date), you can use the tz_localize
method or the tz keyword argument in date_range(), Timestamp, or DatetimeIndex. You can either pass
pytz or dateutil time zone objects or Olson time zone database strings. Olson time zone strings will return pytz
time zone objects by default. To return dateutil time zone objects, append dateutil/ before the string.
• In pytz you can find a list of common (and less common) time zones using from pytz import
common_timezones, all_timezones.
• dateutil uses the OS time zones so there isn’t a fixed list available. For common zones, the names are the
same as pytz.
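
As a quick check, the pytz lists mentioned above can be inspected directly (a minimal sketch; the exact contents depend on the installed pytz version):

import pytz

len(pytz.all_timezones)                    # several hundred zone names
'Europe/London' in pytz.common_timezones   # True
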
In [409]: import dateutil

# pytz
[email protected]
In [410]: rng_pytz = pd.date_range('3/6/2012 00:00', periods=3, freq='D',
T56GZSRVAH .....: tz='Europe/London')
.....:

In [411]: rng_pytz.tz
Out[411]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>

# dateutil
In [412]: rng_dateutil = pd.date_range('3/6/2012 00:00', periods=3, freq='D')

In [413]: rng_dateutil = rng_dateutil.tz_localize('dateutil/Europe/London')

In [414]: rng_dateutil.tz
Out[414]: tzfile('/usr/share/zoneinfo/Europe/London')

# dateutil - utc special case


In [415]: rng_utc = pd.date_range('3/6/2012 00:00', periods=3, freq='D',
.....: tz=dateutil.tz.tzutc())
.....:

In [416]: rng_utc.tz
Out[416]: tzutc()

New in version 0.25.0.


# datetime.timezone
In [417]: rng_utc = pd.date_range('3/6/2012 00:00', periods=3, freq='D',
.....: tz=datetime.timezone.utc)
.....:
In [418]: rng_utc.tz
Out[418]: datetime.timezone.utc

Note that the UTC time zone is a special case in dateutil and should be constructed explicitly as an instance of
dateutil.tz.tzutc. You can also construct other time zone objects explicitly first.

In [419]: import pytz

# pytz
In [420]: tz_pytz = pytz.timezone('Europe/London')

In [421]: rng_pytz = pd.date_range('3/6/2012 00:00', periods=3, freq='D')

In [422]: rng_pytz = rng_pytz.tz_localize(tz_pytz)

In [423]: rng_pytz.tz == tz_pytz


Out[423]: True

# dateutil
In [424]: tz_dateutil = dateutil.tz.gettz('Europe/London')

In [425]: rng_dateutil = pd.date_range('3/6/2012 00:00', periods=3, freq='D',


.....: tz=tz_dateutil)
.....:

In [426]: rng_dateutil.tz == tz_dateutil


Out[426]: True
[email protected]
T56GZSRVAH
To convert a time zone aware pandas object from one time zone to another, you can use the tz_convert method.

In [427]: rng_pytz.tz_convert('US/Eastern')
Out[427]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq='D')

Note: When using pytz time zones, DatetimeIndex will construct a different time zone object than a
Timestamp for the same time zone input. A DatetimeIndex can hold a collection of Timestamp objects
that may have different UTC offsets and cannot be succinctly represented by one pytz time zone instance while one
Timestamp represents one point in time with a specific UTC offset.

In [428]: dti = pd.date_range('2019-01-01', periods=3, freq='D', tz='US/Pacific')

In [429]: dti.tz
Out[429]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>

In [430]: ts = pd.Timestamp('2019-01-01', tz='US/Pacific')

In [431]: ts.tz
Out[431]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>


Warning: Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for ‘standard’ zones like
US/Eastern.

Warning: Be aware that a time zone definition across versions of time zone libraries may not be considered equal.
This may cause problems when working with stored data that is localized using one version and operated on with
a different version. See here for how to handle such a situation.

Warning: For pytz time zones, it is incorrect to pass a time zone object directly into the
datetime.datetime constructor (e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern'))).
Instead, the datetime needs to be localized using the localize method on the pytz time zone
object.
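
A minimal sketch of the difference (assuming pytz is installed; the offsets attached are whatever the zone's database entries give):

import datetime
import pytz

tz = pytz.timezone('US/Eastern')

# incorrect: the constructor attaches the zone's raw database entry (an LMT offset)
wrong = datetime.datetime(2011, 1, 1, tzinfo=tz)

# correct: let pytz resolve the proper offset for that instant
right = tz.localize(datetime.datetime(2011, 1, 1))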

Under the hood, all timestamps are stored in UTC. Values from a time zone aware DatetimeIndex or Timestamp
will have their fields (day, hour, minute, etc.) localized to the time zone. However, timestamps with the same UTC
value are still considered to be equal even if they are in different time zones:

In [432]: rng_eastern = rng_utc.tz_convert('US/Eastern')

In [433]: rng_berlin = rng_utc.tz_convert('Europe/Berlin')

In [434]: rng_eastern[2]
Out[434]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
[email protected]
T56GZSRVAH
In [435]: rng_berlin[2]
Out[435]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')

In [436]: rng_eastern[2] == rng_berlin[2]


Out[436]: True

Operations between Series in different time zones will yield UTC Series, aligning the data on the UTC
timestamps:

In [437]: ts_utc = pd.Series(range(3), pd.date_range('20130101', periods=3, tz='UTC'))

In [438]: eastern = ts_utc.tz_convert('US/Eastern')

In [439]: berlin = ts_utc.tz_convert('Europe/Berlin')

In [440]: result = eastern + berlin

In [441]: result
Out[441]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64

In [442]: result.index
Out[442]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')

To remove time zone information, use tz_localize(None) or tz_convert(None). tz_localize(None)
will remove the time zone yielding the local time representation. tz_convert(None) will remove the time zone
after converting to UTC time.

In [443]: didx = pd.date_range(start='2014-08-01 09:00', freq='H',


.....: periods=3, tz='US/Eastern')
.....:

In [444]: didx
Out[444]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')

In [445]: didx.tz_localize(None)
Out[445]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq='H')

In [446]: didx.tz_convert(None)
Out[446]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
[email protected] dtype='datetime64[ns]', freq='H')
T56GZSRVAH
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [447]: didx.tz_convert('UTC').tz_localize(None)
Out[447]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')

Ambiguous times when localizing

tz_localize may not be able to determine the UTC offset of a timestamp because daylight savings time (DST)
in a local time zone causes some times to occur twice within one day (“clocks fall back”). The following options are
available:
• 'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
• 'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
• 'NaT': Replaces ambiguous times with NaT
• bool: True represents a DST time, False represents non-DST time. An array-like of bool values is
supported for a sequence of times.

In [448]: rng_hourly = pd.DatetimeIndex(['11/06/2011 00:00', '11/06/2011 01:00',


.....: '11/06/2011 01:00', '11/06/2011 02:00'])
.....:

This will fail as there are ambiguous times ('11/06/2011 01:00')


In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try
˓→using the 'ambiguous' argument

Handle these ambiguous times by specifying the following.

In [449]: rng_hourly.tz_localize('US/Eastern', ambiguous='infer')


Out[449]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)

In [450]: rng_hourly.tz_localize('US/Eastern', ambiguous='NaT')


Out[450]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)

In [451]: rng_hourly.tz_localize('US/Eastern', ambiguous=[True, True, False, False])


Out[451]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)

Nonexistent times when localizing

A DST transition may also shift the local time ahead by 1 hour creating nonexistent local times (“clocks spring
[email protected]
T56GZSRVAHforward”). The behavior of localizing a timeseries with nonexistent times can be controlled by the nonexistent
argument. The following options are available:
• 'raise': Raises a pytz.NonExistentTimeError (the default behavior)
• 'NaT': Replaces nonexistent times with NaT
• 'shift_forward': Shifts nonexistent times forward to the closest real time
• 'shift_backward': Shifts nonexistent times backward to the closest real time
• timedelta object: Shifts nonexistent times by the timedelta duration

In [452]: dti = pd.date_range(start='2015-03-29 02:30:00', periods=3, freq='H')

# 2:30 is a nonexistent time

Localization of nonexistent times will raise an error by default.

In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00

Transform nonexistent times to NaT or shift the times.

In [453]: dti
Out[453]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')


In [454]: dti.tz_localize('Europe/Warsaw', nonexistent='shift_forward')
Out[454]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq='H')

In [455]: dti.tz_localize('Europe/Warsaw', nonexistent='shift_backward')


Out[455]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq='H')

In [456]: dti.tz_localize('Europe/Warsaw', nonexistent=pd.Timedelta(1, unit='H'))


Out[456]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq='H')

In [457]: dti.tz_localize('Europe/Warsaw', nonexistent='NaT')


Out[457]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq='H')

Time zone series operations


[email protected]
T56GZSRVAHA Series with time zone naive values is represented with a dtype of datetime64[ns].

In [458]: s_naive = pd.Series(pd.date_range('20130101', periods=3))

In [459]: s_naive
Out[459]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]

A Series with time zone aware values is represented with a dtype of datetime64[ns, tz], where tz is the
time zone.

In [460]: s_aware = pd.Series(pd.date_range('20130101', periods=3, tz='US/Eastern'))

In [461]: s_aware
Out[461]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]

The time zone information of both of these Series can be manipulated via the .dt accessor; see the dt accessor section.
For example, to localize and convert a naive stamp to time zone aware:


In [462]: s_naive.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[462]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]

Time zone information can also be manipulated using the astype method. This method can localize and convert
time zone naive timestamps or convert time zone aware timestamps.
# localize and convert a naive time zone
In [463]: s_naive.astype('datetime64[ns, US/Eastern]')
Out[463]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]

# make an aware tz naive


In [464]: s_aware.astype('datetime64[ns]')
Out[464]:
0 2013-01-01 05:00:00
1 2013-01-02 05:00:00
2 2013-01-03 05:00:00
dtype: datetime64[ns]

# convert to a new time zone


In [465]: s_aware.astype('datetime64[ns, CET]')
[email protected]
Out[465]:
T56GZSRVAH0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]

Note: Using Series.to_numpy() on a Series returns a NumPy array of the data. NumPy does not currently
support time zones (even though it is printing in the local time zone!), therefore an object array of Timestamps is
returned for time zone aware data:
In [466]: s_naive.to_numpy()
Out[466]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')

In [467]: s_aware.to_numpy()
Out[467]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern', freq='D'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern', freq='D'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern', freq='D')],
dtype=object)

Converting to an object array of Timestamps preserves the time zone information. For example, when converting
back to a Series:
In [468]: pd.Series(s_aware.to_numpy())
Out[468]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]

However, if you want an actual NumPy datetime64[ns] array (with the values converted to UTC) instead of an
array of objects, you can specify the dtype argument:
In [469]: s_aware.to_numpy(dtype='datetime64[ns]')
Out[469]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')

3.15 Time deltas

Timedeltas are differences in times, expressed in different units, e.g. days, hours, minutes, seconds. They can be
both positive and negative.
Timedelta is a subclass of datetime.timedelta, and behaves in a similar manner, but allows compatibility
with np.timedelta64 types as well as a host of custom representation, parsing, and attributes.
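
Because of that subclass relationship, a Timedelta can be used anywhere a standard library timedelta is accepted (a minimal sketch):

import datetime
import pandas as pd

td = pd.Timedelta('1 days 2 hours')
isinstance(td, datetime.timedelta)   # True
datetime.datetime(2020, 1, 1) + td   # plain datetime arithmetic also works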

3.15.1 Parsing

You can construct a Timedelta scalar through various arguments:


[email protected]
T56GZSRVAH
In [1]: import datetime

# strings
In [2]: pd.Timedelta('1 days')
Out[2]: Timedelta('1 days 00:00:00')

In [3]: pd.Timedelta('1 days 00:00:00')


Out[3]: Timedelta('1 days 00:00:00')

In [4]: pd.Timedelta('1 days 2 hours')


Out[4]: Timedelta('1 days 02:00:00')

In [5]: pd.Timedelta('-1 days 2 min 3us')


Out[5]: Timedelta('-2 days +23:57:59.999997')

# like datetime.timedelta
# note: these MUST be specified as keyword arguments
In [6]: pd.Timedelta(days=1, seconds=1)
Out[6]: Timedelta('1 days 00:00:01')

# integers with a unit


In [7]: pd.Timedelta(1, unit='d')
Out[7]: Timedelta('1 days 00:00:00')

# from a datetime.timedelta/np.timedelta64
In [8]: pd.Timedelta(datetime.timedelta(days=1, seconds=1))
Out[8]: Timedelta('1 days 00:00:01')

In [9]: pd.Timedelta(np.timedelta64(1, 'ms'))
Out[9]: Timedelta('0 days 00:00:00.001000')

# negative Timedeltas have this string repr


# to be more consistent with datetime.timedelta conventions
In [10]: pd.Timedelta('-1us')
Out[10]: Timedelta('-1 days +23:59:59.999999')

# a NaT
In [11]: pd.Timedelta('nan')
Out[11]: NaT

In [12]: pd.Timedelta('nat')
Out[12]: NaT

# ISO 8601 Duration strings


In [13]: pd.Timedelta('P0DT0H1M0S')
Out[13]: Timedelta('0 days 00:01:00')

In [14]: pd.Timedelta('P0DT0H0M0.000000123S')
Out[14]: Timedelta('0 days 00:00:00.000000')

New in version 0.23.0: Added constructor for ISO 8601 Duration strings
DateOffsets (Day, Hour, Minute, Second, Milli, Micro, Nano) can also be used in construction.

In [15]: pd.Timedelta(pd.offsets.Second(2))
Out[15]: Timedelta('0 days 00:00:02')
[email protected]
T56GZSRVAH
Further, operations among the scalars yield another scalar Timedelta.

In [16]: pd.Timedelta(pd.offsets.Day(2)) + pd.Timedelta(pd.offsets.Second(2)) +\


....: pd.Timedelta('00:00:00.000123')
....:
Out[16]: Timedelta('2 days 00:00:02.000123')

to_timedelta

Using the top-level pd.to_timedelta, you can convert a scalar, array, list, or Series from a recognized timedelta
format / value into a Timedelta type. It will construct Series if the input is a Series, a scalar if the input is scalar-like,
otherwise it will output a TimedeltaIndex.
You can parse a single string to a Timedelta:

In [17]: pd.to_timedelta('1 days 06:05:01.00003')


Out[17]: Timedelta('1 days 06:05:01.000030')

In [18]: pd.to_timedelta('15.5us')
Out[18]: Timedelta('0 days 00:00:00.000015')

or a list/array of strings:

In [19]: pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan'])


Out[19]: TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT],
˓→dtype='timedelta64[ns]', freq=None)
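
A Series input likewise returns a Series with dtype timedelta64[ns] (a minimal sketch):

import pandas as pd

raw = pd.Series(['1 days', '2 days 06:00:00', 'nan'])
pd.to_timedelta(raw)   # 1 days, 2 days 06:00:00, NaT (dtype: timedelta64[ns])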


The unit keyword argument specifies the unit of the Timedelta:


In [20]: pd.to_timedelta(np.arange(5), unit='s')
Out[20]: TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04'],
˓→dtype='timedelta64[ns]', freq=None)

In [21]: pd.to_timedelta(np.arange(5), unit='d')


Out[21]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype=
˓→'timedelta64[ns]', freq=None)

Timedelta limitations

Pandas represents Timedeltas in nanosecond resolution using 64 bit integers. As such, the 64 bit integer limits
determine the Timedelta limits.
In [22]: pd.Timedelta.min
Out[22]: Timedelta('-106752 days +00:12:43.145224')

In [23]: pd.Timedelta.max
Out[23]: Timedelta('106751 days 23:47:16.854775')
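
These bounds follow from the signed 64-bit nanosecond representation (a minimal sketch; .value exposes the underlying nanosecond count of a Timedelta):

import numpy as np
import pandas as pd

np.iinfo(np.int64).max   # 9223372036854775807 representable nanoseconds
pd.Timedelta.max.value   # the nanosecond count behind the upper bound shown above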

3.15.2 Operations

You can operate on Series/DataFrames and construct timedelta64[ns] Series through subtraction operations on
datetime64[ns] Series, or Timestamps.
[email protected]
In [24]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
T56GZSRVAH
In [25]: td = pd.Series([pd.Timedelta(days=i) for i in range(3)])

In [26]: df = pd.DataFrame({'A': s, 'B': td})

In [27]: df
Out[27]:
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days

In [28]: df['C'] = df['A'] + df['B']

In [29]: df
Out[29]:
A B C
0 2012-01-01 0 days 2012-01-01
1 2012-01-02 1 days 2012-01-03
2 2012-01-03 2 days 2012-01-05

In [30]: df.dtypes
Out[30]:
A datetime64[ns]
B timedelta64[ns]
C datetime64[ns]
dtype: object

In [31]: s - s.max()
Out[31]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]

In [32]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[32]:
0 364 days 20:55:00
1 365 days 20:55:00
2 366 days 20:55:00
dtype: timedelta64[ns]

In [33]: s + datetime.timedelta(minutes=5)
Out[33]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]

In [34]: s + pd.offsets.Minute(5)
Out[34]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
[email protected]
T56GZSRVAHIn [35]: s + pd.offsets.Minute(5) + pd.offsets.Milli(5)
Out[35]:
0 2012-01-01 00:05:00.005
1 2012-01-02 00:05:00.005
2 2012-01-03 00:05:00.005
dtype: datetime64[ns]

Operations with scalars from a timedelta64[ns] series:

In [36]: y = s - s[0]

In [37]: y
Out[37]:
0 0 days
1 1 days
2 2 days
dtype: timedelta64[ns]

Series of timedeltas with NaT values are supported:

In [38]: y = s - s.shift()

In [39]: y
Out[39]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]


Elements can be set to NaT using np.nan analogously to datetimes:

In [40]: y[1] = np.nan

In [41]: y
Out[41]:
0 NaT
1 NaT
2 1 days
dtype: timedelta64[ns]

Operands can also appear in a reversed order (a singular object operated with a Series):

In [42]: s.max() - s
Out[42]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]

In [43]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[43]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]

In [44]: datetime.timedelta(minutes=5) + s
Out[44]:
[email protected]
0 2012-01-01 00:05:00
T56GZSRVAH1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]

min, max and the corresponding idxmin, idxmax operations are supported on frames:

In [45]: A = s - pd.Timestamp('20120101') - pd.Timedelta('00:05:05')

In [46]: B = s - pd.Series(pd.date_range('2012-1-2', periods=3, freq='D'))

In [47]: df = pd.DataFrame({'A': A, 'B': B})

In [48]: df
Out[48]:
A B
0 -1 days +23:54:55 -1 days
1 0 days 23:54:55 -1 days
2 1 days 23:54:55 -1 days

In [49]: df.min()
Out[49]:
A -1 days +23:54:55
B -1 days +00:00:00
dtype: timedelta64[ns]

In [50]: df.min(axis=1)
Out[50]:
0 -1 days
1 -1 days
2 -1 days
dtype: timedelta64[ns]

In [51]: df.idxmin()
Out[51]:
A 0
B 0
dtype: int64

In [52]: df.idxmax()
Out[52]:
A 2
B 0
dtype: int64

min, max, idxmin, idxmax operations are supported on Series as well. A scalar result will be a Timedelta.
In [53]: df.min().max()
Out[53]: Timedelta('-1 days +23:54:55')

In [54]: df.min(axis=1).min()
Out[54]: Timedelta('-1 days +00:00:00')

In [55]: df.min().idxmax()
Out[55]: 'A'

In [56]: df.min(axis=1).idxmin()
[email protected]
T56GZSRVAHOut[56]: 0

You can fillna on timedeltas, passing a timedelta to get a particular value.


In [57]: y.fillna(pd.Timedelta(0))
Out[57]:
0 0 days
1 0 days
2 1 days
dtype: timedelta64[ns]

In [58]: y.fillna(pd.Timedelta(10, unit='s'))


Out[58]:
0 0 days 00:00:10
1 0 days 00:00:10
2 1 days 00:00:00
dtype: timedelta64[ns]

In [59]: y.fillna(pd.Timedelta('-1 days, 00:00:05'))


Out[59]:
0 -1 days +00:00:05
1 -1 days +00:00:05
2 1 days 00:00:00
dtype: timedelta64[ns]

You can also negate, multiply and use abs on Timedeltas:


In [60]: td1 = pd.Timedelta('-1 days 2 hours 3 seconds')


In [61]: td1
Out[61]: Timedelta('-2 days +21:59:57')

In [62]: -1 * td1
Out[62]: Timedelta('1 days 02:00:03')

In [63]: - td1
Out[63]: Timedelta('1 days 02:00:03')

In [64]: abs(td1)
Out[64]: Timedelta('1 days 02:00:03')

3.15.3 Reductions

Numeric reduction operations for timedelta64[ns] will return Timedelta objects. As usual, NaT values are skipped
during evaluation.

In [65]: y2 = pd.Series(pd.to_timedelta(['-1 days +00:00:05', 'nat',


....: '-1 days +00:00:05', '1 days']))
....:

In [66]: y2
Out[66]:
0 -1 days +00:00:05
1 NaT
2 -1 days +00:00:05
[email protected]
3 1 days 00:00:00
T56GZSRVAH
dtype: timedelta64[ns]

In [67]: y2.mean()
Out[67]: Timedelta('-1 days +16:00:03.333333')

In [68]: y2.median()
Out[68]: Timedelta('-1 days +00:00:05')

In [69]: y2.quantile(.1)
Out[69]: Timedelta('-1 days +00:00:05')

In [70]: y2.sum()
Out[70]: Timedelta('-1 days +00:00:10')

3.15.4 Frequency conversion

Timedelta Series, TimedeltaIndex, and Timedelta scalars can be converted to other ‘frequencies’ by dividing
by another timedelta, or by astyping to a specific timedelta type. These operations yield Series and propagate NaT ->
nan. Note that division by the NumPy scalar is true division, while astyping is equivalent to floor division.

In [71]: december = pd.Series(pd.date_range('20121201', periods=4))

In [72]: january = pd.Series(pd.date_range('20130101', periods=4))

In [73]: td = january - december


In [74]: td[2] += datetime.timedelta(minutes=5, seconds=3)

In [75]: td[3] = np.nan

In [76]: td
Out[76]:
0 31 days 00:00:00
1 31 days 00:00:00
2 31 days 00:05:03
3 NaT
dtype: timedelta64[ns]

# to days
In [77]: td / np.timedelta64(1, 'D')
Out[77]:
0 31.000000
1 31.000000
2 31.003507
3 NaN
dtype: float64

In [78]: td.astype('timedelta64[D]')
Out[78]:
0 31.0
1 31.0
2 31.0
3 NaN
dtype: float64
[email protected]
T56GZSRVAH
# to seconds
In [79]: td / np.timedelta64(1, 's')
Out[79]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
dtype: float64

In [80]: td.astype('timedelta64[s]')
Out[80]:
0 2678400.0
1 2678400.0
2 2678703.0
3 NaN
dtype: float64

# to months (these are constant months)


In [81]: td / np.timedelta64(1, 'M')
Out[81]:
0 1.018501
1 1.018501
2 1.018617
3 NaN
dtype: float64

Dividing or multiplying a timedelta64[ns] Series by an integer or integer Series yields another
timedelta64[ns] dtype Series.


In [82]: td * -1
Out[82]:
0 -31 days +00:00:00
1 -31 days +00:00:00
2 -32 days +23:54:57
3 NaT
dtype: timedelta64[ns]

In [83]: td * pd.Series([1, 2, 3, 4])


Out[83]:
0 31 days 00:00:00
1 62 days 00:00:00
2 93 days 00:15:09
3 NaT
dtype: timedelta64[ns]

Rounded division (floor-division) of a timedelta64[ns] Series by a scalar Timedelta gives a series of integers.

In [84]: td // pd.Timedelta(days=3, hours=4)


Out[84]:
0 9.0
1 9.0
2 9.0
3 NaN
dtype: float64

In [85]: pd.Timedelta(days=3, hours=4) // td


Out[85]:
[email protected]
0 0.0
T56GZSRVAH1 0.0
2 0.0
3 NaN
dtype: float64

The mod (%) and divmod operations are defined for Timedelta when operating with another timedelta-like or with
a numeric argument.

In [86]: pd.Timedelta(hours=37) % datetime.timedelta(hours=2)


Out[86]: Timedelta('0 days 01:00:00')

# divmod against a timedelta-like returns a pair (int, Timedelta)


In [87]: divmod(datetime.timedelta(hours=2), pd.Timedelta(minutes=11))
Out[87]: (10, Timedelta('0 days 00:10:00'))

# divmod against a numeric returns a pair (Timedelta, Timedelta)


In [88]: divmod(pd.Timedelta(hours=25), 86400000000000)
Out[88]: (Timedelta('0 days 00:00:00.000000'), Timedelta('0 days 01:00:00'))


3.15.5 Attributes

You can access various components of the Timedelta or TimedeltaIndex directly using the attributes
days, seconds, microseconds, nanoseconds. These are identical to the values returned by
datetime.timedelta, in that, for example, the .seconds attribute represents the number of seconds >= 0 and < 1 day.
These are signed according to whether the Timedelta is signed.
These components can also be accessed directly via the .dt property of the Series.

Note: Note that the attributes are NOT the displayed values of the Timedelta. Use .components to retrieve the
displayed values.

For a Series:

In [89]: td.dt.days
Out[89]:
0 31.0
1 31.0
2 31.0
3 NaN
dtype: float64

In [90]: td.dt.seconds
Out[90]:
0 0.0
1 0.0
2 303.0
3 NaN
[email protected]
T56GZSRVAHdtype: float64

You can access the value of the fields for a scalar Timedelta directly.

In [91]: tds = pd.Timedelta('31 days 5 min 3 sec')

In [92]: tds.days
Out[92]: 31

In [93]: tds.seconds
Out[93]: 303

In [94]: (-tds).seconds
Out[94]: 86097

You can use the .components property to access a reduced form of the timedelta. This returns a DataFrame
indexed similarly to the Series. These are the displayed values of the Timedelta.

In [95]: td.dt.components
Out[95]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 31.0 0.0 0.0 0.0 0.0 0.0 0.0
1 31.0 0.0 0.0 0.0 0.0 0.0 0.0
2 31.0 0.0 5.0 3.0 0.0 0.0 0.0
3 NaN NaN NaN NaN NaN NaN NaN

In [96]: td.dt.components.seconds
Out[96]:
0 0.0
1 0.0
2 3.0
3 NaN
Name: seconds, dtype: float64

You can convert a Timedelta to an ISO 8601 Duration string with the .isoformat method

In [97]: pd.Timedelta(days=6, minutes=50, seconds=3,


....: milliseconds=10, microseconds=10,
....: nanoseconds=12).isoformat()
....:
Out[97]: 'P6DT0H50M3.010010012S'

3.15.6 TimedeltaIndex

To generate an index with time delta, you can use either the TimedeltaIndex or the timedelta_range()
constructor.
Using TimedeltaIndex you can pass string-like, Timedelta, timedelta, or np.timedelta64 objects.
Passing np.nan/pd.NaT/nat will represent missing values.

In [98]: pd.TimedeltaIndex(['1 days', '1 days, 00:00:05', np.timedelta64(2, 'D'),


....: datetime.timedelta(days=2, seconds=2)])
....:
Out[98]:
[email protected]
TimedeltaIndex(['1 days 00:00:00', '1 days 00:00:05', '2 days 00:00:00',
T56GZSRVAH '2 days 00:00:02'],
dtype='timedelta64[ns]', freq=None)

The string ‘infer’ can be passed in order to set the frequency of the index as the inferred frequency upon creation:

In [99]: pd.TimedeltaIndex(['0 days', '10 days', '20 days'], freq='infer')


Out[99]: TimedeltaIndex(['0 days', '10 days', '20 days'], dtype='timedelta64[ns]',
˓→freq='10D')

Generating ranges of time deltas

Similar to date_range(), you can construct regular ranges of a TimedeltaIndex using
timedelta_range(). The default frequency for timedelta_range is calendar day:

In [100]: pd.timedelta_range(start='1 days', periods=5)


Out[100]: TimedeltaIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], dtype=
˓→'timedelta64[ns]', freq='D')

Various combinations of start, end, and periods can be used with timedelta_range:

In [101]: pd.timedelta_range(start='1 days', end='5 days')


Out[101]: TimedeltaIndex(['1 days', '2 days', '3 days', '4 days', '5 days'], dtype=
˓→'timedelta64[ns]', freq='D')

In [102]: pd.timedelta_range(end='10 days', periods=4)


Out[102]: TimedeltaIndex(['7 days', '8 days', '9 days', '10 days'], dtype=
˓→'timedelta64[ns]', freq='D')


The freq parameter can be passed a variety of frequency aliases:


In [103]: pd.timedelta_range(start='1 days', end='2 days', freq='30T')
Out[103]:
TimedeltaIndex(['1 days 00:00:00', '1 days 00:30:00', '1 days 01:00:00',
'1 days 01:30:00', '1 days 02:00:00', '1 days 02:30:00',
'1 days 03:00:00', '1 days 03:30:00', '1 days 04:00:00',
'1 days 04:30:00', '1 days 05:00:00', '1 days 05:30:00',
'1 days 06:00:00', '1 days 06:30:00', '1 days 07:00:00',
'1 days 07:30:00', '1 days 08:00:00', '1 days 08:30:00',
'1 days 09:00:00', '1 days 09:30:00', '1 days 10:00:00',
'1 days 10:30:00', '1 days 11:00:00', '1 days 11:30:00',
'1 days 12:00:00', '1 days 12:30:00', '1 days 13:00:00',
'1 days 13:30:00', '1 days 14:00:00', '1 days 14:30:00',
'1 days 15:00:00', '1 days 15:30:00', '1 days 16:00:00',
'1 days 16:30:00', '1 days 17:00:00', '1 days 17:30:00',
'1 days 18:00:00', '1 days 18:30:00', '1 days 19:00:00',
'1 days 19:30:00', '1 days 20:00:00', '1 days 20:30:00',
'1 days 21:00:00', '1 days 21:30:00', '1 days 22:00:00',
'1 days 22:30:00', '1 days 23:00:00', '1 days 23:30:00',
'2 days 00:00:00'],
dtype='timedelta64[ns]', freq='30T')

In [104]: pd.timedelta_range(start='1 days', periods=5, freq='2D5H')


Out[104]:
TimedeltaIndex(['1 days 00:00:00', '3 days 05:00:00', '5 days 10:00:00',
'7 days 15:00:00', '9 days 20:00:00'],
dtype='timedelta64[ns]', freq='53H')

[email protected]
New in version 0.23.0.
T56GZSRVAH
Specifying start, end, and periods will generate a range of evenly spaced timedeltas from start to end
inclusively, with periods number of elements in the resulting TimedeltaIndex:
In [105]: pd.timedelta_range('0 days', '4 days', periods=5)
Out[105]: TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype=
˓→'timedelta64[ns]', freq=None)

In [106]: pd.timedelta_range('0 days', '4 days', periods=10)


Out[106]:
TimedeltaIndex(['0 days 00:00:00', '0 days 10:40:00', '0 days 21:20:00',
'1 days 08:00:00', '1 days 18:40:00', '2 days 05:20:00',
'2 days 16:00:00', '3 days 02:40:00', '3 days 13:20:00',
'4 days 00:00:00'],
dtype='timedelta64[ns]', freq=None)

Using the TimedeltaIndex

Similarly to the other datetime-like indices, DatetimeIndex and PeriodIndex, you can use
TimedeltaIndex as the index of pandas objects.
In [107]: s = pd.Series(np.arange(100),
.....: index=pd.timedelta_range('1 days', periods=100, freq='h'))
.....:

In [108]: s
Out[108]:
1 days 00:00:00 0
1 days 01:00:00 1
1 days 02:00:00 2
1 days 03:00:00 3
1 days 04:00:00 4
..
4 days 23:00:00 95
5 days 00:00:00 96
5 days 01:00:00 97
5 days 02:00:00 98
5 days 03:00:00 99
Freq: H, Length: 100, dtype: int64

Selections work similarly, with coercion on string-likes and slices:

In [109]: s['1 day':'2 day']


Out[109]:
1 days 00:00:00 0
1 days 01:00:00 1
1 days 02:00:00 2
1 days 03:00:00 3
1 days 04:00:00 4
..
2 days 19:00:00 43
2 days 20:00:00 44
2 days 21:00:00 45
2 days 22:00:00 46
2 days 23:00:00 47
Freq: H, Length: 48, dtype: int64

In [110]: s['1 day 01:00:00']


Out[110]: 1

In [111]: s[pd.Timedelta('1 day 1h')]


Out[111]: 1

Furthermore you can use partial string selection and the range will be inferred:

In [112]: s['1 day':'1 day 5 hours']


Out[112]:
1 days 00:00:00 0
1 days 01:00:00 1
1 days 02:00:00 2
1 days 03:00:00 3
1 days 04:00:00 4
1 days 05:00:00 5
Freq: H, dtype: int64


Operations

Finally, the combination of TimedeltaIndex with DatetimeIndex allows certain combination operations that
are NaT preserving:

In [113]: tdi = pd.TimedeltaIndex(['1 days', pd.NaT, '2 days'])

In [114]: tdi.to_list()
Out[114]: [Timedelta('1 days 00:00:00'), NaT, Timedelta('2 days 00:00:00')]

In [115]: dti = pd.date_range('20130101', periods=3)

In [116]: dti.to_list()
Out[116]:
[Timestamp('2013-01-01 00:00:00', freq='D'),
Timestamp('2013-01-02 00:00:00', freq='D'),
Timestamp('2013-01-03 00:00:00', freq='D')]

In [117]: (dti + tdi).to_list()


Out[117]: [Timestamp('2013-01-02 00:00:00'), NaT, Timestamp('2013-01-05 00:00:00')]

In [118]: (dti - tdi).to_list()


Out[118]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2013-01-01 00:00:00')]

Conversions

Similarly to frequency conversion on a Series above, you can convert these indices to yield another Index.
[email protected]
T56GZSRVAHIn [119]: tdi / np.timedelta64(1, 's')
Out[119]: Float64Index([86400.0, nan, 172800.0], dtype='float64')

In [120]: tdi.astype('timedelta64[s]')
Out[120]: Float64Index([86400.0, nan, 172800.0], dtype='float64')

Scalar type ops work as well. These can potentially return a different type of index.

# adding a timedelta and a date -> datelike


In [121]: tdi + pd.Timestamp('20130101')
Out[121]: DatetimeIndex(['2013-01-02', 'NaT', '2013-01-03'], dtype='datetime64[ns]',
˓→freq=None)

# subtraction of a date and a timedelta -> datelike


# note that trying to subtract a date from a Timedelta will raise an exception
In [122]: (pd.Timestamp('20130101') - tdi).to_list()
Out[122]: [Timestamp('2012-12-31 00:00:00'), NaT, Timestamp('2012-12-30 00:00:00')]

# timedelta + timedelta -> timedelta


In [123]: tdi + pd.Timedelta('10 days')
Out[123]: TimedeltaIndex(['11 days', NaT, '12 days'], dtype='timedelta64[ns]',
˓→freq=None)

# division can result in a Timedelta if the divisor is an integer


In [124]: tdi / 2
Out[124]: TimedeltaIndex(['0 days 12:00:00', NaT, '1 days 00:00:00'], dtype=
˓→'timedelta64[ns]', freq=None)

# or a Float64Index if the divisor is a Timedelta
In [125]: tdi / tdi[0]
Out[125]: Float64Index([1.0, nan, 2.0], dtype='float64')

3.15.7 Resampling

Similar to timeseries resampling, we can resample with a TimedeltaIndex.

In [126]: s.resample('D').mean()
Out[126]:
1 days 11.5
2 days 35.5
3 days 59.5
4 days 83.5
5 days 97.5
Freq: D, dtype: float64

3.16 Styling

This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
You can apply conditional formatting, the visual styling of a DataFrame depending on the data within, by using
the DataFrame.style property. This is a property that returns a Styler object, which has useful methods for
formatting and displaying DataFrames.
[email protected]
T56GZSRVAH
The styling is accomplished using CSS. You write “style functions” that take scalars, DataFrames or Series, and
return like-indexed DataFrames or Series with CSS "attribute: value" pairs for the values. These functions
can be incrementally passed to the Styler which collects the styles before rendering.

3.16.1 Building styles

Pass your style functions into one of the following methods:


• Styler.applymap: elementwise
• Styler.apply: column-/row-/table-wise
Both of those methods take a function (and some other keyword arguments) and apply your function to the DataFrame
in a certain way. Styler.applymap works through the DataFrame elementwise. Styler.apply passes each
column or row of your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword
argument. For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.
For Styler.applymap your function should take a scalar and return a single string with the CSS attribute-value
pair.
For Styler.apply your function should take a Series or DataFrame (depending on the axis parameter), and return
a Series or DataFrame with an identical shape where each value is a string with a CSS attribute-value pair.
Let’s see some examples.

[2]: import pandas as pd


import numpy as np

np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[3, 3] = np.nan
df.iloc[0, 2] = np.nan

Here’s a boring example of rendering a DataFrame, without any (visible) styles:

[3]: df.style
[3]: <pandas.io.formats.style.Styler at 0x7f4104530f50>

Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_
method defined on it so it is rendered automatically. If you want the actual HTML back for further processing or
for writing to a file, call the .render() method, which returns a string.
The above output looks very similar to the standard DataFrame HTML representation. But we’ve done some work
behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.

[4]: df.style.highlight_null().render().split('\n')[:10]
[4]: ['<style type="text/css" >',
' #T_b3d1fba4_692f_11ea_928a_0242ac110002row0_col2 {',
' background-color: red;',
' } #T_b3d1fba4_692f_11ea_928a_0242ac110002row3_col3 {',
' background-color: red;',
' }</style><table id="T_b3d1fba4_692f_11ea_928a_0242ac110002" ><thead> <tr>
[email protected]
˓→ <th class="blank level0" ></th> <th class="col_heading level0 col0" >
T56GZSRVAH˓→A</th> <th class="col_heading level0 col1" >B</th> <th class="col_
˓→heading level0 col2" >C</th> <th class="col_heading level0 col3" >D</th>
˓→ <th class="col_heading level0 col4" >E</th> </tr></thead><tbody>',
' <tr>',
' <th id="T_b3d1fba4_692f_11ea_928a_0242ac110002level0_row0"
˓→class="row_heading level0 row0" >0</th>',

' <td id="T_b3d1fba4_692f_11ea_928a_0242ac110002row0_col0"


˓→class="data row0 col0" >1.000000</td>',

' <td id="T_b3d1fba4_692f_11ea_928a_0242ac110002row0_col1"


˓→class="data row0 col1" >1.329212</td>']

The row0_col2 is the identifier for that particular cell. We’ve also prepended each row/column identifier with a
UUID unique to each DataFrame so that the style from one doesn’t collide with the styling from another within the
same notebook or page (you can set the uuid if you’d like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches
those up with the CSS classes that identify each cell.
Let’s write a simple style function that will color negative numbers red and positive numbers black.

[5]: def color_negative_red(val):


"""
Takes a scalar and returns a string with
the css property `'color: red'` for negative
strings, black otherwise.
"""
color = 'red' if val < 0 else 'black'
return 'color: %s' % color


In this case, the cell’s style depends only on its own value. That means we should use the Styler.applymap
method which works elementwise.

[6]: s = df.style.applymap(color_negative_red)
s
[6]: <pandas.io.formats.style.Styler at 0x7f40e88afa90>

Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to
be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in
a <style> tag. This will be a common theme.
Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function
returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column. We can’t use .applymap anymore since
that operated elementwise. Instead, we’ll turn to .apply which operates columnwise (or rowwise using the axis
keyword). Later on we’ll see that something like highlight_max is already defined on Styler so you wouldn’t
need to write this yourself.

[7]: def highlight_max(s):


'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]

[8]: df.style.apply(highlight_max)
[email protected]
T56GZSRVAH
[8]: <pandas.io.formats.style.Styler at 0x7f40e88a5090>

In this case the input is a Series, one column at a time. Notice that the output shape of highlight_max matches
the input shape, an array with len(s) items.
We encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.

[9]: df.style.\
applymap(color_negative_red).\
apply(highlight_max)
[9]: <pandas.io.formats.style.Styler at 0x7f40e88a5b50>

Above we used Styler.apply to pass in each column one at a time.


Debugging Tip: If you’re having trouble writing your style function, try just passing it into DataFrame.apply. Inter-
nally, Styler.apply uses DataFrame.apply so the result should be the same.
What if you wanted to highlight just the maximum value in the entire table? Use .apply(function,
axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let’s try that
next.
We'll rewrite our highlight_max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from
.apply(axis=None)). We'll also allow the color to be adjustable, to demonstrate that .apply and .applymap
pass along keyword arguments.

[10]: def highlight_max(data, color='yellow'):
          '''
          highlight the maximum in a Series or DataFrame
          '''
          attr = 'background-color: {}'.format(color)
          if data.ndim == 1:  # Series from .apply(axis=0) or axis=1
              is_max = data == data.max()
              return [attr if v else '' for v in is_max]
          else:  # from .apply(axis=None)
              is_max = data == data.max().max()
              return pd.DataFrame(np.where(is_max, attr, ''),
                                  index=data.index, columns=data.columns)

When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index
and column labels.

[11]: df.style.apply(highlight_max, color='darkorange', axis=None)


[11]: <pandas.io.formats.style.Styler at 0x7f40e88af6d0>

Building Styles Summary

Style functions should return strings with one or more CSS attribute: value pairs, delimited by semicolons. Use
• Styler.applymap(func) for elementwise styles
• Styler.apply(func, axis=0) for columnwise styles
• Styler.apply(func, axis=1) for rowwise styles
• Styler.apply(func, axis=None) for tablewise styles
And crucially the input and output shapes of func must match. If x is the input then func(x).shape == x.shape.

3.16.2 Finer control: slicing

Both Styler.apply and Styler.applymap accept a subset keyword. This allows you to apply styles to
specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similarly to slicing a DataFrame.
• A scalar is treated as a column label
• A list (or Series or NumPy array) is treated as multiple column labels
• A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one.

[12]: df.style.apply(highlight_max, subset=['B', 'C', 'D'])


[12]: <pandas.io.formats.style.Styler at 0x7f40e87d8fd0>

For row and column slicing, any valid indexer to .loc will work.

[13]: df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
[13]: <pandas.io.formats.style.Styler at 0x7f40e87d8f10>


Only label-based slicing is supported right now, not positional.


If your style function uses a subset or axis keyword argument, consider wrapping your function in a
functools.partial, partialing out that keyword.
my_func2 = functools.partial(my_func, subset=42)
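
A minimal sketch of that pattern (the shade_if function below is hypothetical and only used for illustration):

import functools

def shade_if(val, subset=0):
    # this style function's own keyword happens to be called `subset`
    return 'background-color: lightyellow' if val > subset else ''

# freeze the clashing keyword so Styler.applymap never sees it
shade_above_zero = functools.partial(shade_if, subset=0)
df.style.applymap(shade_above_zero)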

3.16.3 Finer Control: Display Values

We distinguish the display value from the actual value in Styler. To control the display value (the text that is printed
in each cell), use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a
single value and returns a string.
[14]: df.style.format("{:.2%}")
[14]: <pandas.io.formats.style.Styler at 0x7f40e8898750>

Use a dictionary to format specific columns.


[15]: df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'})
[15]: <pandas.io.formats.style.Styler at 0x7f40e87f2050>

Or pass in a callable (or dictionary of callables) for more flexible handling.


[16]: df.style.format({"B": lambda x: "±{:.2f}".format(abs(x))})
[16]: <pandas.io.formats.style.Styler at 0x7f40e87f2d90>
You can format the text displayed for missing values by na_rep.
[17]: df.style.format("{:.2%}", na_rep="-")
[17]: <pandas.io.formats.style.Styler at 0x7f40e88afed0>

These formatting techniques can be used in combination with styling.


[18]: df.style.highlight_max().format(None, na_rep="-")
[18]: <pandas.io.formats.style.Styler at 0x7f40e87f2cd0>

3.16.4 Builtin styles

Finally, we expect certain styling functions to be common enough that we’ve included a few “built-in” to the Styler,
so you don’t have to write them yourself.
[19]: df.style.highlight_null(null_color='red')
[19]: <pandas.io.formats.style.Styler at 0x7f40e6791450>

You can create “heatmaps” with the background_gradient method. These require matplotlib, and we’ll use
Seaborn to get a nice colormap.
[20]: import seaborn as sns

cm = sns.light_palette("green", as_cmap=True)

s = df.style.background_gradient(cmap=cm)
s
[20]: <pandas.io.formats.style.Styler at 0x7f40e87f2910>

Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend
the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't
used. This is useful so that you can still actually read the text.

[21]: # Uses the full color range


df.loc[:4].style.background_gradient(cmap='viridis')
[21]: <pandas.io.formats.style.Styler at 0x7f40e4711950>

[22]: # Compress the color range


(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
[22]: <pandas.io.formats.style.Styler at 0x7f40e471b3d0>

There’s also .highlight_min and .highlight_max.

[23]: df.style.highlight_max(axis=0)
[23]: <pandas.io.formats.style.Styler at 0x7f40e47263d0>

Use Styler.set_properties when the style doesn’t actually depend on the values.
[24]: df.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
[24]: <pandas.io.formats.style.Styler at 0x7f40e4726790>

Bar charts

You can include “bar charts” in your DataFrame.

[25]: df.style.bar(subset=['A', 'B'], color='#d65f5f')


[25]: <pandas.io.formats.style.Styler at 0x7f4104591390>

New in version 0.20.0 is the ability to further customize the bar chart: you can now have the df.style.bar be
centered on zero or a midpoint value (in addition to the already existing way of having the min value at the left side of
the cell), and you can pass a list of [color_negative, color_positive].
Here's how you can change the above with the new align='mid' option:

[26]: df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])


[26]: <pandas.io.formats.style.Styler at 0x7f40e8869a90>

The following example aims to highlight the behavior of the new align options:


[27]: import pandas as pd

      from IPython.display import HTML

      # Test series
      test1 = pd.Series([-100, -60, -30, -20], name='All Negative')
      test2 = pd.Series([10, 20, 50, 100], name='All Positive')
      test3 = pd.Series([-10, -5, 0, 90], name='Both Pos and Neg')

      head = """
      <table>
          <thead>
              <th>Align</th>
              <th>All Negative</th>
              <th>All Positive</th>
              <th>Both Neg and Pos</th>
          </thead>
          <tbody>
      """

      aligns = ['left', 'zero', 'mid']
      for align in aligns:
          row = "<tr><th>{}</th>".format(align)
          for serie in [test1, test2, test3]:
              s = serie.copy()
              s.name = ''
              row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
                                                                 color=['#d65f5f', '#5fba7d'],
                                                                 width=100).render())  # testn['width']
          row += '</tr>'
          head += row

      head += """
      </tbody>
      </table>"""

      HTML(head)
[27]: <IPython.core.display.HTML object>

3.16.5 Sharing styles

Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame.
Export the style with df1.style.export, and apply it to the second DataFrame with df2.style.use.

[28]: df2 = -df


style1 = df.style.applymap(color_negative_red)
style1
[28]: <pandas.io.formats.style.Styler at 0x7f40e46ba210>

[29]: style2 = df2.style


style2.use(style1.export())
style2

[29]: <pandas.io.formats.style.Styler at 0x7f40e46d9150>

Notice that you’re able to share the styles even though they’re data aware. The styles are re-evaluated on the new
DataFrame they’ve been used upon.

3.16.6 Other Options

You’ve seen a few methods for data-driven styling. Styler also provides a few other options for styles that don’t
depend on the data.
• precision
• captions
• table-wide styles
• missing values representation
• hiding the index or columns
Each of these can be specified in two ways:
• A keyword argument to Styler.__init__
• A call to one of the .set_ or .hide_ methods, e.g. .set_caption or .hide_columns
The best method to use depends on the context. Use the Styler constructor when building many styled DataFrames
that should all share the same properties. For interactive use, the .set_ and .hide_ methods are more convenient.
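
A minimal sketch of the two routes (the constructor keywords shown follow the Styler signature in this release and should be treated as an assumption):

from pandas.io.formats.style import Styler

# interactive, one-off: chain the .set_ methods
df.style.set_precision(2).set_caption('Interactive style')

# many styled DataFrames sharing the same properties: pass keywords to the constructor
Styler(df, precision=2, caption='Shared style')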

Precision

You can control the precision of floats using pandas’ regular display.precision option.

[30]: with pd.option_context('display.precision', 2):


          html = (df.style
                  .applymap(color_negative_red)
                  .apply(highlight_max))
      html
[30]: <pandas.io.formats.style.Styler at 0x7f40e46e6a50>

Or through a set_precision method.

[31]: df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
[31]: <pandas.io.formats.style.Styler at 0x7f40e46e5d90>

Setting the precision only affects the printed number; the full-precision values are always passed to your style func-
tions. You can always use df.round(2).style if you’d prefer to round from the start.


Captions

Regular table captions can be added in a few ways.

[32]: df.style.set_caption('Colormaps, with a caption.')\


.background_gradient(cmap=cm)
[32]: <pandas.io.formats.style.Styler at 0x7f40e46e6890>

Table styles

The next option you have is “table styles”. These are styles that apply to the table as a whole, but don't look at the
data. Certain stylings, including pseudo-selectors like :hover, can only be used this way.

[33]: from IPython.display import HTML

      def hover(hover_color="#ffff99"):
          return dict(selector="tr:hover",
                      props=[("background-color", "%s" % hover_color)])

      styles = [
          hover(),
          dict(selector="th", props=[("font-size", "150%"),
                                     ("text-align", "center")]),
          dict(selector="caption", props=[("caption-side", "bottom")])
      ]
      html = (df.style.set_table_styles(styles)
                .set_caption("Hover to highlight."))
      html
[33]: <pandas.io.formats.style.Styler at 0x7f40e4677310>

table_styles should be a list of dictionaries. Each dictionary should have the selector and props keys.
The value for selector should be a valid CSS selector. Recall that all the styles are already attached to an id,
unique to each Styler. This selector is in addition to that id. The value for props should be a list of tuples of
('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand. We hope to collect some useful ones
either in pandas, or preferably in a new package that builds on top of the tools here.

Missing values

You can control the default missing values representation for the entire table through the set_na_rep method.

[34]: (df.style
.set_na_rep("FAIL")
.format(None, na_rep="PASS", subset=["D"])
.highlight_null("yellow"))
[34]: <pandas.io.formats.style.Styler at 0x7f40e46e68d0>


Hiding the Index or Columns

The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering
by calling Styler.hide_columns and passing in the name of a column, or a slice of columns.

[35]: df.style.hide_index()
[35]: <pandas.io.formats.style.Styler at 0x7f40e6791190>

[36]: df.style.hide_columns(['C','D'])
[36]: <pandas.io.formats.style.Styler at 0x7f40e46e5950>

CSS classes

Certain CSS classes are attached to cells.


• Index and Column names include index_name and level<k> where k is its level in a MultiIndex
• Index label cells include
– row_heading
– row<n> where n is the numeric position of the row
– level<k> where k is the level in a MultiIndex
• Column label cells include
– col_heading
– col<n> where n is the numeric position of the column
– level<k> where k is the level in a MultiIndex
• Blank cells include blank
• Data cells include data
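
For example, these classes can be targeted from a table style with an ordinary CSS selector (a minimal sketch; the class names come from the list above):

# embolden every row-heading cell and lightly shade every data cell
df.style.set_table_styles([
    dict(selector="th.row_heading", props=[("font-weight", "bold")]),
    dict(selector="td.data", props=[("background-color", "#f5f5f5")]),
])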

Limitations

• DataFrame only (use Series.to_frame().style)


• The index and columns must be unique
• No large repr, and performance isn’t great; this is intended for summary DataFrames
• You can only style the values, not the index or columns
• You can only apply styles, you can’t insert new HTML entities
Some of these will be addressed in the future.


Terms

• Style function: a function that’s passed into Styler.apply or Styler.applymap and returns values like
'css attribute: value'
• Builtin style functions: style functions that are methods on Styler
• table style: a dictionary with the two keys selector and props. selector is the CSS selector that props
will apply to. props is a list of (attribute, value) tuples. A list of these dictionaries is passed to Styler.set_table_styles.

3.16.7 Fun stuff

Here are a few interesting examples.


Styler interacts pretty well with widgets. If you’re viewing this online instead of running the notebook yourself,
you’re missing out on interactively adjusting the color palette.

[37]: from IPython.html import widgets

      @widgets.interact
      def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
          return df.style.background_gradient(
              cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
                                                  as_cmap=True)
          )
<pandas.io.formats.style.Styler at 0x7f40e46e6250>

[38]: def magnify():
          return [dict(selector="th",
                       props=[("font-size", "4pt")]),
                  dict(selector="td",
                       props=[('padding', "0em 0em")]),
                  dict(selector="th:hover",
                       props=[("font-size", "12pt")]),
                  dict(selector="tr:hover td:hover",
                       props=[('max-width', '200px'),
                              ('font-size', '12pt')])
                  ]

[39]: np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()

bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify())
[39]: <pandas.io.formats.style.Styler at 0x7f40e49091d0>


3.16.8 Export to Excel

New in version 0.20.0


Experimental: This is a new feature and still under development. We’ll be adding features and possibly making
breaking changes in future releases. We’d love to hear your feedback.
Some support is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or
XlsxWriter engines. CSS2.2 properties handled include:
• background-color
• border-style, border-width, border-color and their {top, right, bottom, left} variants
• color
• font-family
• font-style
• font-weight
• text-align
• text-decoration
• vertical-align
• white-space: nowrap
• Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
• The following pseudo CSS properties are also available to set excel specific style properties:
– number-format

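
As a hedged sketch of the number-format pseudo property (the format code and file name are illustrative; the property is interpreted by the Excel writer, not by HTML rendering):

# ask the Excel writer to render these cells as percentages
(df.style
   .applymap(lambda v: 'number-format: 0.00%')
   .to_excel('styled_numbers.xlsx', engine='openpyxl'))
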
[40]: df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')

A screenshot of the output:


3.16.9 Extensibility

The core of pandas is, and will remain, its “high-performance, easy-to-use data structures”. With that in mind, we
hope that DataFrame.style accomplishes two goals
• Provide an API that is pleasing to use interactively and is “good enough” for many tasks
• Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we’ll link to it.

Subclassing

If the default template doesn’t quite suit your needs, you can subclass Styler and extend or override the template. We’ll
show an example of extending the default template to insert a custom header before each table.
[41]: from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler

We’ll use the following template:


[42]: with open("templates/myhtml.tpl") as f:
print(f.read())
{% extends "html.tpl" %}
{% block table %}
<h1>{{ table_title|default("My Table") }}</h1>
{{ super() }}
{% endblock table %}

Now that we’ve created a template, we need to set up a subclass of Styler that knows about it.
[43]: class MyStyler(Styler):
          env = Environment(
              loader=ChoiceLoader([
                  FileSystemLoader("templates"),  # contains ours
                  Styler.loader,  # the default
              ])
          )
          template = env.get_template("myhtml.tpl")

Notice that we include the original loader in our environment’s loader. That’s because we extend the original template,
so the Jinja environment needs to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
[44]: MyStyler(df)
[44]: <__main__.MyStyler at 0x7f40e03329d0>

Our custom template accepts a table_title keyword. We can provide the value in the .render method.
[45]: HTML(MyStyler(df).render(table_title="Extending Example"))
[45]: <IPython.core.display.HTML object>

For convenience, we provide the Styler.from_custom_template method that does the same as the custom
subclass.


[46]: EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")


EasyStyler(df)
[46]: <pandas.io.formats.style.Styler.from_custom_template.<locals>.MyStyler at
˓→0x7f40e0312cd0>

Here’s the template structure:


[47]: with open("templates/template_structure.html") as f:
structure = f.read()

HTML(structure)
[47]: <IPython.core.display.HTML object>

See the template in the GitHub repo for more details.

3.17 Options and settings

3.17.1 Overview

pandas has an options system that lets you customize some aspects of its behaviour, display-related options being those
the user is most likely to adjust.
Options have a full “dotted-style”, case-insensitive name (e.g. display.max_rows). You can get/set options
directly as attributes of the top-level options attribute:
In [1]: import pandas as pd

In [2]: pd.options.display.max_rows
Out[2]: 15

In [3]: pd.options.display.max_rows = 999

In [4]: pd.options.display.max_rows
Out[4]: 999

The API is composed of 5 relevant functions, available directly from the pandas namespace:
• get_option() / set_option() - get/set the value of a single option.
• reset_option() - reset one or more options to their default value.
• describe_option() - print the descriptions of one or more options.
• option_context() - execute a codeblock with a set of options that revert to prior settings after execution.
Note: Developers can check out pandas/core/config.py for more information.
All of the functions above accept a regexp pattern (re.search style) as an argument, and so passing in a substring
will work - as long as it is unambiguous:
In [5]: pd.get_option("display.max_rows")
Out[5]: 999

In [6]: pd.set_option("display.max_rows", 101)

In [7]: pd.get_option("display.max_rows")
Out[7]: 101

In [8]: pd.set_option("max_r", 102)

In [9]: pd.get_option("display.max_rows")
Out[9]: 102

The following will not work because it matches multiple option names, e.g. display.max_colwidth,
display.max_rows, display.max_columns:

In [10]: try:
....: pd.get_option("column")
....: except KeyError as e:
....: print(e)
....:
'Pattern matched multiple keys'

Note: Using this form of shorthand may cause your code to break if new options with similar names are added in
future versions.
You can get a list of available options and their descriptions with describe_option. When called with no argu-
ment describe_option will print out the descriptions for all available options.
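
For example (output elided):

pd.describe_option("display.max_rows")   # a single option
pd.describe_option("display.max")        # every option matching the pattern
pd.describe_option()                     # all available options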

3.17.2 Getting and setting options

As described above, get_option() and set_option() are available from the pandas namespace. To change an
option, call set_option('option regex', new_value).
In [11]: pd.get_option('mode.sim_interactive')
Out[11]: False

In [12]: pd.set_option('mode.sim_interactive', True)

In [13]: pd.get_option('mode.sim_interactive')
Out[13]: True

Note: The option ‘mode.sim_interactive’ is mostly used for debugging purposes.


All options also have a default value, and you can use reset_option to do just that:

In [14]: pd.get_option("display.max_rows")
Out[14]: 60

In [15]: pd.set_option("display.max_rows", 999)

In [16]: pd.get_option("display.max_rows")
Out[16]: 999

In [17]: pd.reset_option("display.max_rows")

In [18]: pd.get_option("display.max_rows")
Out[18]: 60

It’s also possible to reset multiple options at once (using a regex):


In [19]: pd.reset_option("^display")

option_context context manager has been exposed through the top-level API, allowing you to execute code with
given option values. Option values are restored automatically when you exit the with block:
In [20]: with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
....: print(pd.get_option("display.max_rows"))
....: print(pd.get_option("display.max_columns"))
....:
10
5

In [21]: print(pd.get_option("display.max_rows"))
60

In [22]: print(pd.get_option("display.max_columns"))
0

3.17.3 Setting startup options in Python/IPython environment

Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas
more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where
the startup folder is in a default ipython profile can be found at:
$IPYTHONDIR/profile_default/startup

More information can be found in the ipython documentation. An example startup script for pandas is displayed
below:

import pandas as pd
pd.set_option('display.max_rows', 999)
pd.set_option('precision', 5)

3.17.4 Frequently Used Options

The following is a walk-through of the more frequently used display options.


display.max_rows and display.max_columns set the maximum number of rows and columns displayed
when a frame is pretty-printed. Truncated lines are replaced by an ellipsis.
In [23]: df = pd.DataFrame(np.random.randn(7, 2))

In [24]: pd.set_option('max_rows', 7)

In [25]: df
Out[25]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
3 0.119209 -1.044236
4 -0.861849 -2.104569
5 -0.494929 1.071804
6 0.721555 -0.706771

In [26]: pd.set_option('max_rows', 5)

In [27]: df
Out[27]:
0 1
0 0.469112 -0.282863
1 -1.509059 -1.135632
.. ... ...
5 -0.494929 1.071804
6 0.721555 -0.706771

[7 rows x 2 columns]

In [28]: pd.reset_option('max_rows')

Once the display.max_rows is exceeded, the display.min_rows option determines how many rows are
shown in the truncated repr.

In [29]: pd.set_option('max_rows', 8)

In [30]: pd.set_option('min_rows', 4)

# below max_rows -> all rows shown


In [31]: df = pd.DataFrame(np.random.randn(7, 2))

In [32]: df
Out[32]:
        0         1
0 -1.039575 0.271860
1 -0.424972 0.567020
2 0.276232 -1.087401
3 -0.673690 0.113648
4 -1.478427 0.524988
5 0.404705 0.577046
6 -1.715002 -1.039268

# above max_rows -> only min_rows (4) rows shown


In [33]: df = pd.DataFrame(np.random.randn(9, 2))

In [34]: df
Out[34]:
0 1
0 -0.370647 -1.157892
1 -1.344312 0.844885
.. ... ...
7 0.276662 -0.472035
8 -0.013960 -0.362543

[9 rows x 2 columns]

In [35]: pd.reset_option('max_rows')

In [36]: pd.reset_option('min_rows')

display.expand_frame_repr controls whether the repr of a wide DataFrame is wrapped around across multiple
"pages" (blocks of columns), rather than printing each row on one long line.


In [37]: df = pd.DataFrame(np.random.randn(5, 10))

In [38]: pd.set_option('expand_frame_repr', True)

In [39]: df
Out[39]:
          0         1         2         3         4         5         6         7         8         9
0 -0.006154 -0.923061  0.895717  0.805244 -1.206412  2.565646  1.431256  1.340309 -1.170299 -0.226169
1  0.410835  0.813850  0.132003 -0.827317 -0.076467 -1.187678  1.130127 -1.436737 -1.413681  1.607920
2  1.024180  0.569605  0.875906 -2.211372  0.974466 -2.006747 -0.410001 -0.078638  0.545952 -1.219217
3 -1.226825  0.769804 -1.281247 -0.727707 -0.121306 -0.097883  0.695775  0.341734  0.959726 -1.110336
4 -0.619976  0.149748 -0.732339  0.687738  0.176444  0.403310 -0.154951  0.301624 -2.179861 -1.369849

In [40]: pd.set_option('expand_frame_repr', False)

In [41]: df
Out[41]:
          0         1         2         3         4         5         6         7         8         9
0 -0.006154 -0.923061  0.895717  0.805244 -1.206412  2.565646  1.431256  1.340309 -1.170299 -0.226169
1  0.410835  0.813850  0.132003 -0.827317 -0.076467 -1.187678  1.130127 -1.436737 -1.413681  1.607920
2  1.024180  0.569605  0.875906 -2.211372  0.974466 -2.006747 -0.410001 -0.078638  0.545952 -1.219217
3 -1.226825  0.769804 -1.281247 -0.727707 -0.121306 -0.097883  0.695775  0.341734  0.959726 -1.110336
4 -0.619976  0.149748 -0.732339  0.687738  0.176444  0.403310 -0.154951  0.301624 -2.179861 -1.369849

In [42]: pd.reset_option('expand_frame_repr')

display.large_repr lets you select whether to display dataframes that exceed max_columns or max_rows
as a truncated frame, or as a summary.

In [43]: df = pd.DataFrame(np.random.randn(10, 10))

In [44]: pd.set_option('max_rows', 5)

In [45]: pd.set_option('large_repr', 'truncate')

In [46]: df
Out[46]:
           0         1         2         3         4         5         6         7         8         9
0  -0.954208  1.462696 -1.743161 -0.826591 -0.345352  1.314232  0.690579  0.995761  2.396780  0.014871
1   3.357427 -0.317441 -1.236269  0.896171 -0.487602 -0.082240 -2.182937  0.380396  0.084844  0.432390
..       ...       ...       ...       ...       ...       ...       ...       ...       ...       ...
8  -0.303421 -0.858447  0.306996 -0.028665  0.384316  1.574159  1.588931  0.476720  0.473424 -0.242861
9  -0.014805 -0.284319  0.650776 -1.461665 -1.137707 -0.891060 -0.693921  1.613616  0.464000  0.227371

[10 rows x 10 columns]

In [47]: pd.set_option('large_repr', 'info')

In [48]: df
Out[48]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [49]: pd.reset_option('large_repr')

In [50]: pd.reset_option('max_rows')

display.max_colwidth sets the maximum width of columns. Cells of this length or longer will be truncated
with an ellipsis.

In [51]: df = pd.DataFrame(np.array([['foo', 'bar', 'bim', 'uncomfortably long string'],
   ....:                             ['horse', 'cow', 'banana', 'apple']]))
   ....:

In [52]: pd.set_option('max_colwidth', 40)

In [53]: df
Out[53]:
0 1 2 3
0 foo bar bim uncomfortably long string
1 horse cow banana apple

In [54]: pd.set_option('max_colwidth', 6)

In [55]: df
Out[55]:
0 1 2 3
0 foo bar bim un...
1 horse cow ba... apple

In [56]: pd.reset_option('max_colwidth')

display.max_info_columns sets a threshold for when by-column info will be given.

In [57]: df = pd.DataFrame(np.random.randn(10, 10))

In [58]: pd.set_option('max_info_columns', 11)

In [59]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes
In [60]: pd.set_option('max_info_columns', 5)

In [61]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Columns: 10 entries, 0 to 9
dtypes: float64(10)
memory usage: 928.0 bytes

In [62]: pd.reset_option('max_info_columns')

display.max_info_rows: df.info() will usually show null-counts for each column. For large frames this
can be quite slow. max_info_rows and max_info_cols limit this null check to frames with dimensions smaller
than specified. Note that you can pass null_counts=True to df.info() to force the null counts for a
particular frame.
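
For example, a one-line sketch of that override:

df.info(null_counts=True)   # force per-column null counts even for a large frame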

In [63]: df = pd.DataFrame(np.random.choice([0, 1, np.nan], size=(10, 10)))

In [64]: df
Out[64]:
0 1 2 3 4 5 6 7 8 9
0 0.0 NaN 1.0 NaN NaN 0.0 NaN 0.0 NaN 1.0
1 1.0 NaN 1.0 1.0 1.0 1.0 NaN 0.0 0.0 NaN
2 0.0 NaN 1.0 0.0 0.0 NaN NaN NaN NaN 0.0
3 NaN NaN NaN 0.0 1.0 1.0 NaN 1.0 NaN 1.0
4 0.0 NaN NaN NaN 0.0 NaN NaN NaN 1.0 0.0
5 0.0 1.0 1.0 1.0 1.0 0.0 NaN NaN 1.0 0.0
6 1.0 1.0 1.0 NaN 1.0 NaN 1.0 0.0 NaN NaN
7 0.0 0.0 1.0 0.0 1.0 0.0 1.0 1.0 0.0 NaN
8 NaN NaN NaN 0.0 NaN NaN NaN NaN 1.0 NaN
9 0.0 NaN 0.0 NaN NaN 0.0 NaN 1.0 1.0 0.0

In [65]: pd.set_option('max_info_rows', 11)

In [66]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 8 non-null float64
1 1 3 non-null float64
2 2 7 non-null float64
3 3 6 non-null float64
4 4 7 non-null float64
5 5 6 non-null float64
6 6 2 non-null float64
7 7 6 non-null float64
8 8 6 non-null float64
9 9 6 non-null float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [67]: pd.set_option('max_info_rows', 5)

In [68]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 10 columns):
# Column Dtype
--- ------ -----
0 0 float64
1 1 float64
2 2 float64
3 3 float64
4 4 float64
5 5 float64
6 6 float64
7 7 float64
8 8 float64
9 9 float64
dtypes: float64(10)
memory usage: 928.0 bytes

In [69]: pd.reset_option('max_info_rows')

display.precision sets the output display precision in terms of decimal places. This is only a suggestion.
In [70]: df = pd.DataFrame(np.random.randn(5, 5))

In [71]: pd.set_option('precision', 7)

In [72]: df
Out[72]:
0 1 2 3 4
0 -1.1506406 -0.7983341 -0.5576966 0.3813531 1.3371217
1 -1.5310949 1.3314582 -0.5713290 -0.0266708 -1.0856630
2 -1.1147378 -0.0582158 -0.4867681 1.6851483 0.1125723
3 -1.4953086 0.8984347 -0.1482168 -1.5960698 0.1596530
4 0.2621358 0.0362196 0.1847350 -0.2550694 -0.2710197

In [73]: pd.set_option('precision', 4)

In [74]: df
Out[74]:
0 1 2 3 4
0 -1.1506 -0.7983 -0.5577 0.3814 1.3371
1 -1.5311 1.3315 -0.5713 -0.0267 -1.0857
2 -1.1147 -0.0582 -0.4868 1.6851 0.1126
3 -1.4953 0.8984 -0.1482 -1.5961 0.1597
4 0.2621 0.0362 0.1847 -0.2551 -0.2710

display.chop_threshold sets at what level pandas rounds to zero when it displays a Series or DataFrame. This
setting does not change the precision at which the number is stored.

In [75]: df = pd.DataFrame(np.random.randn(6, 6))

In [76]: pd.set_option('chop_threshold', 0)

In [77]: df
Out[77]:
0 1 2 3 4 5
0 1.2884 0.2946 -1.1658 0.8470 -0.6856 0.6091
1 -0.3040 0.6256 -0.0593 0.2497 1.1039 -1.0875
2 1.9980 -0.2445 0.1362 0.8863 -1.3507 -0.8863
3 -1.0133 1.9209 -0.3882 -2.3144 0.6655 0.4026
4 0.3996 -1.7660 0.8504 0.3881 0.9923 0.7441
5 -0.7398 -1.0549 -0.1796 0.6396 1.5850 1.9067

In [78]: pd.set_option('chop_threshold', .5)

In [79]: df
Out[79]:
0 1 2 3 4 5
0 1.2884 0.0000 -1.1658 0.8470 -0.6856 0.6091
1 0.0000 0.6256 0.0000 0.0000 1.1039 -1.0875
2 1.9980 0.0000 0.0000 0.8863 -1.3507 -0.8863
3 -1.0133 1.9209 0.0000 -2.3144 0.6655 0.0000
4 0.0000 -1.7660 0.8504 0.0000 0.9923 0.7441
5 -0.7398 -1.0549 0.0000 0.6396 1.5850 1.9067

In [80]: pd.reset_option('chop_threshold')

display.colheader_justify controls the justification of the headers. The options are 'right' and 'left'.

In [81]: df = pd.DataFrame(np.array([np.random.randn(6),
....: np.random.randint(1, 9, 6) * .1,
....: np.zeros(6)]).T,
....: columns=['A', 'B', 'C'], dtype='float')
....:

In [82]: pd.set_option('colheader_justify', 'right')

In [83]: df
Out[83]:
A B C
0 0.1040 0.1 0.0
1 0.1741 0.5 0.0
2 -0.4395 0.4 0.0
3 -0.7413 0.8 0.0
4 -0.0797 0.4 0.0
5 -0.9229 0.3 0.0

In [84]: pd.set_option('colheader_justify', 'left')

In [85]: df
Out[85]:
A B C
0 0.1040 0.1 0.0
1 0.1741 0.5 0.0
2 -0.4395 0.4 0.0
3 -0.7413 0.8 0.0
4 -0.0797 0.4 0.0
5 -0.9229 0.3 0.0

In [86]: pd.reset_option('colheader_justify')

3.17.5 Available options

Option Default Function


display.chop_threshold None If set to a float value, all float values smaller then the given threshold will be dis
display.colheader_justify right Controls the justification of column headers. used by DataFrameFormatter.
display.column_space 12 No description available.
display.date_dayfirst False When True, prints and parses dates with the day first, eg 20/01/2005
display.date_yearfirst False When True, prints and parses dates with the year first, eg 2005/01/20
display.encoding UTF-8 Defaults to the detected encoding of the console. Specifies the encoding to be u
display.expand_frame_repr True Whether to print out the full DataFrame repr for wide DataFrames across multip
display.float_format None The callable should accept a floating point number and return a string with the d
display.large_repr truncate For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can
display.latex.repr False Whether to produce a latex DataFrame representation for jupyter frontends that
display.latex.escape True Escapes special characters in DataFrames, when using the to_latex method.
display.latex.longtable False Specifies if the to_latex method of a DataFrame uses the longtable format.
display.latex.multicolumn True Combines columns when using a MultiIndex
display.latex.multicolumn_format ‘l’ Alignment of multicolumn labels
display.latex.multirow False Combines rows when using a MultiIndex. Centered instead of top-aligned, sepa
display.max_columns 0 or 20 max_rows and max_columns are used in __repr__() methods to decide if to_str
display.max_colwidth 50 The maximum width in characters of a column in the repr of a pandas data struc
display.max_info_columns 100 max_info_columns is used in DataFrame.info method to decide if per column in
display.max_info_rows 1690785 df.info() will usually show null-counts for each column. For large frames this ca
display.max_rows 60 This sets the maximum number of rows pandas should output when printing ou
display.min_rows 10 The numbers of rows to show in a truncated repr (when max_rows is exceeded).
display.max_seq_items 100 when pretty-printing a long sequence, no more then max_seq_items will be prin

display.memory_usage True This specifies if the memory usage of a DataFrame should be displayed when th
display.multi_sparse True “Sparsify” MultiIndex display (don’t display repeated elements in outer levels w
display.notebook_repr_html True When True, IPython notebook will use html representation for pandas objects (i
display.pprint_nest_depth 3 Controls the number of nested levels to process when pretty-printing
display.precision 6 Floating point output precision in terms of number of places after the decimal, f
display.show_dimensions truncate Whether to print out dimensions at the end of DataFrame repr. If ‘truncate’ is sp
display.width 80 Width of the display in characters. In case python/IPython is running in a termin
display.html.table_schema False Whether to publish a Table Schema representation for frontends that support it.
display.html.border 1 A border=value attribute is inserted in the <table> tag for the DataFrame
display.html.use_mathjax True When True, Jupyter notebook will process table contents using MathJax, render
io.excel.xls.writer xlwt The default Excel writer engine for ‘xls’ files.
io.excel.xlsm.writer openpyxl The default Excel writer engine for ‘xlsm’ files. Available options: ‘openpyxl’ (
io.excel.xlsx.writer openpyxl The default Excel writer engine for ‘xlsx’ files.
io.hdf.default_format None default format writing format, if None, then put will default to ‘fixed’ and appen
io.hdf.dropna_table True drop ALL nan rows when appending to a table
io.parquet.engine None The engine to use as a default for parquet reading and writing. If None then try
mode.chained_assignment warn Controls SettingWithCopyWarning: ‘raise’, ‘warn’, or None. Raise an ex
mode.sim_interactive False Whether to simulate interactive mode for purposes of testing.
mode.use_inf_as_na False True means treat None, NaN, -INF, INF as NA (old way), False means None an
compute.use_bottleneck True Use the bottleneck library to accelerate computation if it is installed.
compute.use_numexpr True Use the numexpr library to accelerate computation if it is installed.
plotting.backend matplotlib Change the plotting backend to a different backend than the current matplotlib o
plotting.matplotlib.register_converters True Register custom converters with matplotlib. Set to False to de-register.
3.17.6 Number formatting

pandas also allows you to set how numbers are displayed in the console. This option is not set through the
set_option API.
Use the set_eng_float_format function to alter the floating-point formatting of pandas objects to produce a
particular format.
For instance:

In [87]: import numpy as np

In [88]: pd.set_eng_float_format(accuracy=3, use_eng_prefix=True)

In [89]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [90]: s / 1.e3
Out[90]:
a 303.638u
b -721.084u
c -622.696u
d 648.250u
e -1.945m
dtype: float64

In [91]: s / 1.e6
Out[91]:
a 303.638n
b -721.084n
c -622.696n
d 648.250n
e -1.945u
dtype: float64

To round floats on a case-by-case basis, you can also use Series.round() and DataFrame.round().
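
A minimal sketch (the values are illustrative):

s = pd.Series([1.23456, 2.34567])
s.round(2)                               # rounds the stored values, not just the display

small = pd.DataFrame({'A': [1.23456], 'B': [9.87654]})
small.round({'A': 2, 'B': 1})            # per-column decimal places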

3.17.7 Unicode formatting

Warning: Enabling this option will affect the performance for printing of DataFrame and Series (about 2 times
slower). Use only when it is actually required.

Some East Asian countries use Unicode characters whose width corresponds to two Latin characters. If a DataFrame
or Series contains these characters, the default output mode may not align them properly.

Note: Screen captures are attached for each output to show the actual results.

In [92]: df = pd.DataFrame({'国籍': ['UK', '日本'], '名前': ['Alice', 'しのぶ']})

In [93]: df
Out[93]:
   国籍     名前
0  UK  Alice
1  日本  しのぶ

Enabling display.unicode.east_asian_width allows pandas to check each character’s “East Asian Width”
property. These characters can be aligned properly by setting this option to True. However, this will result in longer
render times than the standard len function.
In [94]: pd.set_option('display.unicode.east_asian_width', True)

In [95]: df
Out[95]:
   国籍    名前
0    UK  Alice
1  日本  しのぶ

In addition, Unicode characters whose width is “Ambiguous” can either be 1 or 2 characters wide depending on the
terminal setting or encoding. The option display.unicode.ambiguous_as_wide can be used to handle the
ambiguity.


By default, an “Ambiguous” character’s width, such as “¡” (inverted exclamation) in the example below, is taken to be
1.

In [96]: df = pd.DataFrame({'a': ['xxx', '¡¡'], 'b': ['yyy', '¡¡']})

In [97]: df
Out[97]:
a b
0 xxx yyy
1 ¡¡ ¡¡

Enabling display.unicode.ambiguous_as_wide makes pandas interpret these characters’ widths to be 2.


(Note that this option will only be effective when display.unicode.east_asian_width is enabled.)
However, setting this option incorrectly for your terminal will cause these characters to be aligned incorrectly:

In [98]: pd.set_option('display.unicode.ambiguous_as_wide', True)

In [99]: df
Out[99]:
a b
0 xxx yyy
1 ¡¡ ¡¡

3.17.8 Table schema display

DataFrame and Series can publish a Table Schema representation. This is disabled by default, and can be enabled
globally with the display.html.table_schema option:

In [100]: pd.set_option('display.html.table_schema', True)

Only 'display.max_rows' is serialized and published.

3.18 Enhancing performance

In this part of the tutorial, we will investigate how to speed up certain functions operating on pandas DataFrames
using three different techniques: Cython, Numba and pandas.eval(). We will see a speed improvement of ~200x
when we use Cython and Numba on a test function operating row-wise on the DataFrame. Using pandas.eval()
we will speed up a sum by a factor of ~2.


3.18.1 Cython (writing C extensions for pandas)

For many use cases writing pandas in pure Python and NumPy is sufficient. In some computationally heavy applica-
tions however, it can be possible to achieve sizable speed-ups by offloading work to cython.
This tutorial assumes you have refactored as much as possible in Python, for example by trying to remove for-loops
and making use of NumPy vectorization. It’s always worth optimising in Python first.
This tutorial walks through a “typical” process of cythonizing a slow computation. We use an example from the
Cython documentation but in the context of pandas. Our final cythonized solution is around 100 times faster than the
pure Python solution.

Pure Python

We have a DataFrame to which we want to apply a function row-wise.

In [1]: df = pd.DataFrame({'a': np.random.randn(1000),


...: 'b': np.random.randn(1000),
...: 'N': np.random.randint(100, 1000, (1000)),
...: 'x': 'x'})
...:

In [2]: df
Out[2]:
a b N x
0 0.469112 -0.218470 585 x
1 -0.282863 -0.061645 841 x
2 -1.509059 -0.723780 251 x
3 -1.135632 0.551225 972 x
4 1.212112 -0.497767 181 x
.. ... ... ... ..
995 -1.512743 0.874737 374 x
996 0.933753 1.120790 246 x
997 -0.308013 0.198768 157 x
998 -0.079915 1.757555 977 x
999 -1.010589 -1.115680 770 x

[1000 rows x 4 columns]

Here’s the function in pure Python:

In [3]: def f(x):
   ...:     return x * (x - 1)
   ...:

In [4]: def integrate_f(a, b, N):
   ...:     s = 0
   ...:     dx = (b - a) / N
   ...:     for i in range(N):
   ...:         s += f(a + i * dx)
   ...:     return s * dx
   ...:

We achieve our result by using apply (row-wise):

In [7]: %timeit df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)


10 loops, best of 3: 174 ms per loop


But clearly this isn’t fast enough for us. Let’s take a look and see where the time is spent during this operation (limited
to the most time consuming four calls) using the prun ipython magic function:

In [5]: %prun -l 4 df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1) #


˓→noqa E999

701222 function calls (698196 primitive calls) in 10.595 seconds

Ordered by: internal time


List reduced from 227 to 4 due to restriction <4>

ncalls tottime percall cumtime percall filename:lineno(function)


1000 4.295 0.004 8.324 0.008 <ipython-input-4-c2a74e076cf0>:
˓→1(integrate_f)

552423 4.029 0.000 4.029 0.000 <ipython-input-3-c138bdd570e3>:1(f)


18363 0.233 0.000 0.499 0.000 {built-in method builtins.isinstance}
3000 0.207 0.000 1.938 0.001 base.py:4372(get_value)

By far the majority of time is spent inside either integrate_f or f, hence we'll concentrate our efforts cythonizing
these two functions.

Note: In Python 2 replacing the range with its generator counterpart (xrange) would mean the range line would
vanish. In Python 3 range is already a generator.

Plain Cython

First we’re going to need to import the Cython magic function to ipython:
In [6]: %load_ext Cython

Now, let’s simply copy our functions over to Cython as is (the suffix is here to distinguish between function versions):

In [7]: %%cython
   ...: def f_plain(x):
   ...:     return x * (x - 1)
   ...: def integrate_f_plain(a, b, N):
   ...:     s = 0
   ...:     dx = (b - a) / N
   ...:     for i in range(N):
   ...:         s += f_plain(a + i * dx)
   ...:     return s * dx
   ...:

Note: If you’re having trouble pasting the above into your ipython, you may need to be using bleeding edge ipython
for paste to play well with cell magics.

In [4]: %timeit df.apply(lambda x: integrate_f_plain(x['a'], x['b'], x['N']), axis=1)


10 loops, best of 3: 85.5 ms per loop

Already this has shaved a third off, not too bad for a simple copy and paste.


Adding type

We get another huge improvement simply by providing type information:


In [8]: %%cython
   ...: cdef double f_typed(double x) except? -2:
   ...:     return x * (x - 1)
   ...: cpdef double integrate_f_typed(double a, double b, int N):
   ...:     cdef int i
   ...:     cdef double s, dx
   ...:     s = 0
   ...:     dx = (b - a) / N
   ...:     for i in range(N):
   ...:         s += f_typed(a + i * dx)
   ...:     return s * dx
   ...:

In [4]: %timeit df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1)


10 loops, best of 3: 20.3 ms per loop

Now, we’re talking! It’s now over ten times faster than the original python implementation, and we haven’t really
modified the code. Let’s have another look at what’s eating up time:
In [9]: %prun -l 4 df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']),
˓→axis=1)

148795 function calls (145769 primitive calls) in 2.265 seconds

Ordered by: internal time


List reduced from 222 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
18363 0.231 0.000 0.497 0.000 {built-in method builtins.isinstance}
3000 0.206 0.000 1.934 0.001 base.py:4372(get_value)
12117 0.177 0.000 0.266 0.000 generic.py:10(_check)
3006 0.115 0.000 0.841 0.000 construction.py:337(extract_array)

Using ndarray

It's calling series... a lot! It's creating a Series from each row, and get-ting from both the index and the series (three
times for each row). Function calls are expensive in Python, so maybe we could minimize these by cythonizing the
apply part.

Note: We are now passing ndarrays into the Cython function, fortunately Cython plays very nicely with NumPy.

In [10]: %%cython
    ....: cimport numpy as np
    ....: import numpy as np
    ....: cdef double f_typed(double x) except? -2:
    ....:     return x * (x - 1)
    ....: cpdef double integrate_f_typed(double a, double b, int N):
    ....:     cdef int i
    ....:     cdef double s, dx
    ....:     s = 0
    ....:     dx = (b - a) / N
    ....:     for i in range(N):
    ....:         s += f_typed(a + i * dx)
    ....:     return s * dx
    ....: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b,
    ....:                                            np.ndarray col_N):
    ....:     assert (col_a.dtype == np.float
    ....:             and col_b.dtype == np.float and col_N.dtype == np.int)
    ....:     cdef Py_ssize_t i, n = len(col_N)
    ....:     assert (len(col_a) == len(col_b) == n)
    ....:     cdef np.ndarray[double] res = np.empty(n)
    ....:     for i in range(len(col_a)):
    ....:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
    ....:     return res
    ....:

The implementation is simple: it allocates an empty result array and loops over the rows, applying our
integrate_f_typed, and putting the result in that array.

Warning: You can not pass a Series directly as a ndarray typed parameter to a Cython function. Instead
pass the actual ndarray using the Series.to_numpy(). The reason is that the Cython definition is specific
to an ndarray and not the passed Series.
So, do not do this:
apply_integrate_f(df['a'], df['b'], df['N'])

But rather, use Series.to_numpy() to get the underlying ndarray:
apply_integrate_f(df['a'].to_numpy(),
df['b'].to_numpy(),
df['N'].to_numpy())

Note: Loops like this would be extremely slow in Python, but in Cython looping over NumPy arrays is fast.

In [4]: %timeit apply_integrate_f(df['a'].to_numpy(),


df['b'].to_numpy(),
df['N'].to_numpy())
1000 loops, best of 3: 1.25 ms per loop

We’ve gotten another big improvement. Let’s check again where the time is spent:
In [11]: %%prun -l 4 apply_integrate_f(df['a'].to_numpy(),
....: df['b'].to_numpy(),
....: df['N'].to_numpy())
....:
260 function calls in 0.006 seconds

Ordered by: internal time


List reduced from 65 to 4 due to restriction <4>

ncalls tottime percall cumtime percall filename:lineno(function)


1 0.001 0.001 0.001 0.001 {built-in method _cython_magic_
˓→8f4b43b70ec22da94a87fdb44df17336.apply_integrate_f}
3 0.000 0.000 0.004 0.001 frame.py:2767(__getitem__)
33 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
3 0.000 0.000 0.001 0.000 managers.py:979(iget)

As one might expect, the majority of the time is now spent in apply_integrate_f, so if we wanted to make
anymore efficiencies we must continue to concentrate our efforts here.

More advanced techniques

There is still hope for improvement. Here’s an example of using some more advanced Cython techniques:

In [12]: %%cython
    ....: cimport cython
    ....: cimport numpy as np
    ....: import numpy as np
    ....: cdef double f_typed(double x) except? -2:
    ....:     return x * (x - 1)
    ....: cpdef double integrate_f_typed(double a, double b, int N):
    ....:     cdef int i
    ....:     cdef double s, dx
    ....:     s = 0
    ....:     dx = (b - a) / N
    ....:     for i in range(N):
    ....:         s += f_typed(a + i * dx)
    ....:     return s * dx
    ....: @cython.boundscheck(False)
    ....: @cython.wraparound(False)
    ....: cpdef np.ndarray[double] apply_integrate_f_wrap(np.ndarray[double] col_a,
    ....:                                                 np.ndarray[double] col_b,
    ....:                                                 np.ndarray[int] col_N):
    ....:     cdef int i, n = len(col_N)
    ....:     assert len(col_a) == len(col_b) == n
    ....:     cdef np.ndarray[double] res = np.empty(n)
    ....:     for i in range(n):
    ....:         res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
    ....:     return res
    ....:

In [4]: %timeit apply_integrate_f_wrap(df['a'].to_numpy(),


df['b'].to_numpy(),
df['N'].to_numpy())
1000 loops, best of 3: 987 us per loop

Even faster, with the caveat that a bug in our Cython code (an off-by-one error, for example) might cause a segfault
because memory access isn’t checked. For more about boundscheck and wraparound, see the Cython docs on
compiler directives.


3.18.2 Using Numba

A recent alternative to statically compiling Cython code, is to use a dynamic jit-compiler, Numba.
Numba gives you the power to speed up your applications with high performance functions written directly in Python.
With a few annotations, array-oriented and math-heavy Python code can be just-in-time compiled to native machine
instructions, similar in performance to C, C++ and Fortran, without having to switch languages or Python interpreters.
Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime,
or statically (using the included pycc tool). Numba supports compilation of Python to run on either CPU or GPU
hardware, and is designed to integrate with the Python scientific software stack.

Note: You will need to install Numba. This is easy with conda: conda install numba (see installing using miniconda).

Note: As of Numba version 0.20, pandas objects cannot be passed directly to Numba-compiled functions. Instead,
one must pass the NumPy array underlying the pandas object to the Numba-compiled function as demonstrated below.

Jit

We demonstrate how to use Numba to just-in-time compile our code. We simply take the plain Python code from
above and annotate with the @jit decorator.

import numba


@numba.jit
def f_plain(x):
    return x * (x - 1)


@numba.jit
def integrate_f_numba(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f_plain(a + i * dx)
    return s * dx


@numba.jit
def apply_integrate_f_numba(col_a, col_b, col_N):
    n = len(col_N)
    result = np.empty(n, dtype='float64')
    assert len(col_a) == len(col_b) == n
    for i in range(n):
        result[i] = integrate_f_numba(col_a[i], col_b[i], col_N[i])
    return result


def compute_numba(df):
    result = apply_integrate_f_numba(df['a'].to_numpy(),
                                     df['b'].to_numpy(),
                                     df['N'].to_numpy())
    return pd.Series(result, index=df.index, name='result')

Note that we directly pass NumPy arrays to the Numba function. compute_numba is just a wrapper that provides a
nicer interface by passing/returning pandas objects.
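
Because the wrapper returns a Series aligned to df.index, its output can be assigned straight back onto the frame. A small usage sketch (the 'result' column name is just an illustration):

# Hypothetical usage: compute_numba returns a Series indexed like df,
# so plain assignment aligns it with the original rows.
df['result'] = compute_numba(df)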

In [4]: %timeit compute_numba(df)
1000 loops, best of 3: 798 us per loop

In this example, using Numba was faster than Cython.

Vectorize

Numba can also be used to write vectorized functions that do not require the user to explicitly loop over the observations of a vector; a vectorized function will be applied to each row automatically. Consider the following toy example of doubling each observation:

import numba


def double_every_value_nonumba(x):
    return x * 2


@numba.vectorize
def double_every_value_withnumba(x):  # noqa E501
    return x * 2
# Custom function without numba
In [5]: %timeit df['col1_doubled'] = df['a'].apply(double_every_value_nonumba)  # noqa E501
1000 loops, best of 3: 797 us per loop

# Standard implementation (faster than a custom function)
In [6]: %timeit df['col1_doubled'] = df['a'] * 2
1000 loops, best of 3: 233 us per loop

# Custom function with numba
In [7]: %timeit df['col1_doubled'] = double_every_value_withnumba(df['a'].to_numpy())
1000 loops, best of 3: 145 us per loop

Caveats

Note: Numba will execute on any function, but can only accelerate certain classes of functions.

Numba is best at accelerating functions that apply numerical functions to NumPy arrays. When passed a function that
only uses operations it knows how to accelerate, it will execute in nopython mode.
If Numba is passed a function that includes something it doesn’t know how to work with – a category that currently
includes sets, lists, dictionaries, or string functions – it will revert to object mode. In object mode, Numba
will execute but your code will not speed up significantly. If you would prefer that Numba throw an error if it cannot compile a function in a way that speeds up your code, pass Numba the argument nopython=True (e.g. @numba.jit(nopython=True)). For more on troubleshooting Numba modes, see the Numba troubleshooting page.
Read more in the Numba docs.
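
As a rough sketch of what this looks like in practice (the function below is a made-up illustration, and the exact exception type and message depend on your Numba version): passing a pandas Series to a nopython-compiled function fails at compile time, because Numba does not understand pandas objects, whereas passing the underlying NumPy array works.

import numba
import numpy as np
import pandas as pd

@numba.jit(nopython=True)
def sum_values(values):
    # A simple reduction over a 1-D float array; supported in nopython mode.
    total = 0.0
    for v in values:
        total += v
    return total

s = pd.Series(np.arange(5, dtype='float64'))

sum_values(s.to_numpy())  # works: NumPy arrays are supported
# sum_values(s)           # raises a typing error: pandas objects are not supported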

3.18.3 Expression evaluation via eval()

The top-level function pandas.eval() implements expression evaluation of Series and DataFrame objects.

Note: To benefit from using eval() you need to install numexpr. See the recommended dependencies section for
more details.

The point of using eval() for expression evaluation rather than plain Python is two-fold: 1) large DataFrame
objects are evaluated more efficiently and 2) large arithmetic and boolean expressions are evaluated all at once by the
underlying engine (by default numexpr is used for evaluation).
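
As a small illustration of the kind of expression that benefits (the frame shapes below are arbitrary), combining several large DataFrames in one string expression lets the engine evaluate the whole thing in a single pass rather than allocating an intermediate result for every operator:

import numpy as np
import pandas as pd

nrows, ncols = 20000, 100
df1, df2, df3, df4 = (pd.DataFrame(np.random.randn(nrows, ncols))
                      for _ in range(4))

# Plain Python: each '+' allocates a full intermediate DataFrame
plain = df1 + df2 + df3 + df4

# pandas.eval: the whole expression is handed to the underlying engine
# (numexpr by default, if installed) and evaluated in one pass
fast = pd.eval('df1 + df2 + df3 + df4')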

Note: You should not use eval() for simple expressions or for expressions involving small DataFrames. In fact,
eval() is many orders of magnitude slower for smaller expressions/objects than pl