Pandas User Guide: Installation & Errors
pandas: powerful Python data analysis toolkit
Release 1.0.3
2 Getting started
2.1 Installation
2.2 Intro to pandas
2.3 Coming from...
2.4 Community tutorials
2.4.1 Installation
2.4.2 Package overview
2.4.3 10 minutes to pandas
2.4.4 Getting started tutorials
2.4.5 Essential basic functionality
2.4.6 Intro to data structures
2.4.7 Comparison with other tools
2.4.8 Tutorials
3.1.19 Other file formats
3.1.20 Performance considerations
3.2 Indexing and selecting data
3.2.1 Different choices for indexing
3.2.2 Basics
3.2.3 Attribute access
3.2.4 Slicing ranges
3.2.5 Selection by label
3.2.6 Selection by position
3.2.7 Selection by callable
3.2.8 IX indexer is deprecated
3.2.9 Indexing with list with missing labels is deprecated
3.2.10 Selecting random samples
3.2.11 Setting with enlargement
3.2.12 Fast scalar value getting and setting
3.2.13 Boolean indexing
3.2.14 Indexing with isin
3.2.15 The where() Method and Masking
3.2.16 The query() Method
3.2.17 Duplicate data
3.2.18 Dictionary-like get() method
3.2.19 The lookup() method
3.2.20 Index objects
3.2.21 Set / reset index
3.2.22 Returning a view versus a copy
3.3 MultiIndex / advanced indexing
3.3.1 Hierarchical indexing (MultiIndex)
3.3.2 Advanced indexing with hierarchical index
3.3.3 Sorting a MultiIndex
3.3.4 Take methods
3.3.5 Index types
3.3.6 Miscellaneous indexing FAQ
3.4 Merge, join, and concatenate
3.4.1 Concatenating objects
3.4.2 Database-style DataFrame or named Series joining/merging
3.4.3 Timeseries friendly merging
3.5 Reshaping and pivot tables
3.5.1 Reshaping by pivoting DataFrame objects
3.5.2 Reshaping by stacking and unstacking
3.5.3 Reshaping by Melt
3.5.4 Combining with stats and GroupBy
3.5.5 Pivot tables
3.5.6 Cross tabulations
3.5.7 Tiling
3.5.8 Computing indicator / dummy variables
3.5.9 Factorizing values
3.5.10 Examples
3.5.11 Exploding a list-like column
3.6 Working with text data
3.6.1 Text Data Types
3.6.2 String Methods
3.6.3 Splitting and replacing strings
3.6.4 Concatenation
3.6.5 Indexing with .str
3.6.6 Extracting substrings
3.6.7 Testing for Strings that match or contain a pattern
3.6.8 Creating indicator variables
3.6.9 Method summary
3.7 Working with missing data
3.7.1 Values considered “missing”
3.7.2 Inserting missing data
3.7.3 Calculations with missing data
3.7.4 Sum/prod of empties/nans
3.7.5 NA values in GroupBy
3.7.6 Filling missing values: fillna
3.7.7 Filling with a PandasObject
3.7.8 Dropping axis labels with missing data: dropna
3.7.9 Interpolation
3.7.10 Replacing generic values
3.7.11 String/regular expression replacement
3.7.12 Numeric replacement
3.7.13 Experimental NA scalar to denote missing values
3.8 Categorical data
3.8.1 Object creation
3.8.2 CategoricalDtype
3.8.3 Description
3.8.4 Working with categories
3.8.5 Sorting and order
3.8.6 Comparisons
3.8.7 Operations
3.8.8 Data munging
3.8.9 Getting data in/out
3.8.10 Missing data
3.8.11 Differences to R’s factor
3.8.12 Gotchas
3.9 Nullable integer data type
3.9.1 Construction
3.9.2 Operations
3.9.3 Scalar NA Value
3.10 Nullable Boolean Data Type
3.10.1 Indexing with NA values
3.10.2 Kleene Logical Operations
3.11 Visualization
3.11.1 Basic plotting: plot
3.11.2 Other plots
3.11.3 Plotting with missing data
3.11.4 Plotting Tools
3.11.5 Plot Formatting
3.11.6 Plotting directly with matplotlib
3.12 Computational tools
3.12.1 Statistical functions
3.12.2 Window Functions
3.12.3 Aggregation
3.12.4 Expanding windows
3.12.5 Exponentially weighted windows
3.13 Group By: split-apply-combine
3.13.1 Splitting an object into groups
3.13.2 Iterating through groups
3.13.3 Selecting a group
3.13.4 Aggregation
3.13.5 Transformation
3.13.6 Filtration
3.13.7 Dispatching to instance methods
3.13.8 Flexible apply
3.13.9 Other useful features
3.13.10 Examples
3.14 Time series / date functionality
3.14.1 Overview
3.14.2 Timestamps vs. Time Spans
3.14.3 Converting to timestamps
3.14.4 Generating ranges of timestamps
3.14.5 Timestamp limitations
3.14.6 Indexing
3.14.7 Time/date components
3.14.8 DateOffset objects
3.14.9 Time Series-Related Instance Methods
3.14.10 Resampling
3.14.11 Time span representation
3.14.12 Converting between representations
3.14.13 Representing out-of-bounds spans
3.14.14 Time zone handling
3.15 Time deltas
3.15.1 Parsing
3.15.2 Operations
3.15.3 Reductions
3.15.4 Frequency conversion
3.15.5 Attributes
3.15.6 TimedeltaIndex
3.15.7 Resampling
3.16 Styling
3.16.1 Building styles
3.16.2 Finer control: slicing
3.16.3 Finer Control: Display Values
3.16.4 Builtin styles
3.16.5 Sharing styles
3.16.6 Other Options
3.16.7 Fun stuff
3.16.8 Export to Excel
3.16.9 Extensibility
3.17 Options and settings
3.17.1 Overview
3.17.2 Getting and setting options
3.17.3 Setting startup options in Python/IPython environment
3.17.4 Frequently Used Options
3.17.5 Available options
3.17.6 Number formatting
3.17.7 Unicode formatting
3.17.8 Table schema display
3.18 Enhancing performance
3.18.1 Cython (writing C extensions for pandas)
3.18.2 Using Numba
3.18.3 Expression evaluation via eval()
3.19 Scaling to large datasets
3.19.1 Load less data
3.19.2 Use efficient datatypes
3.19.3 Use chunking
3.19.4 Use other libraries
3.20 Sparse data structures
3.20.1 SparseArray
3.20.2 SparseDtype
3.20.3 Sparse accessor
3.20.4 Sparse calculation
3.20.5 Migrating
3.20.6 Interaction with scipy.sparse
3.21 Frequently Asked Questions (FAQ)
3.21.1 DataFrame memory usage
3.21.2 Using if/truth statements with pandas
3.21.3 NaN, Integer NA values and NA type promotions
3.21.4 Differences with NumPy
3.21.5 Thread-safety
3.21.6 Byte-Ordering issues
3.22 Cookbook
3.22.1 Idioms
3.22.2 Selection
3.22.3 MultiIndexing
3.22.4 Missing data
3.22.5 Grouping
3.22.6 Timeseries
3.22.7 Merge
3.22.8 Plotting
3.22.9 Data In/Out
3.22.10 Computation
3.22.11 Timedeltas
3.22.12 Aliasing axis names
3.22.13 Creating example data
4.2.3 Top-level conversions
4.2.4 Top-level dealing with datetimelike
4.2.5 Top-level dealing with intervals
4.2.6 Top-level evaluation
4.2.7 Hashing
4.2.8 Testing
4.3 Series
4.3.1 Constructor
4.3.2 Attributes
4.3.3 Conversion
4.3.4 Indexing, iteration
4.3.5 Binary operator functions
4.3.6 Function application, groupby & window
4.3.7 Computations / descriptive stats
4.3.8 Reindexing / selection / label manipulation
4.3.9 Missing data handling
4.3.10 Reshaping, sorting
4.3.11 Combining / joining / merging
4.3.12 Time series-related
4.3.13 Accessors
4.3.14 Plotting
4.3.15 Serialization / IO / conversion
4.4 DataFrame
4.4.1 Constructor
4.4.2 Attributes and underlying data
4.4.3 Conversion
4.4.4 Indexing, iteration
4.4.5 Binary operator functions
4.4.6 Function application, GroupBy & window
4.4.7 Computations / descriptive stats
4.4.8 Reindexing / selection / label manipulation
4.4.9 Missing data handling
4.4.10 Reshaping, sorting, transposing
4.4.11 Combining / joining / merging
4.4.12 Time series-related
4.4.13 Metadata
4.4.14 Plotting
4.4.15 Sparse accessor
4.4.16 Serialization / IO / conversion
4.5 Pandas arrays
4.5.1 pandas.array
4.5.2 Datetime data
4.5.3 Timedelta data
4.5.4 Timespan data
4.5.5 Period
4.5.6 Interval data
4.5.7 Nullable integer
4.5.8 Categorical data
4.5.9 Sparse data
4.5.10 Text data
4.5.11 Boolean data with missing values
4.6 Panel
4.7 Index objects
4.7.1 Index
4.7.2 Numeric Index
4.7.3 CategoricalIndex
4.7.4 IntervalIndex
4.7.5 MultiIndex
4.7.6 DatetimeIndex
4.7.7 TimedeltaIndex
4.7.8 PeriodIndex
4.8 Date offsets
4.8.1 DateOffset
4.8.2 BusinessDay
4.8.3 BusinessHour
4.8.4 CustomBusinessDay
4.8.5 CustomBusinessHour
4.8.6 MonthOffset
4.8.7 MonthEnd
4.8.8 MonthBegin
4.8.9 BusinessMonthEnd
4.8.10 BusinessMonthBegin
4.8.11 CustomBusinessMonthEnd
4.8.12 CustomBusinessMonthBegin
4.8.13 SemiMonthOffset
4.8.14 SemiMonthEnd
4.8.15 SemiMonthBegin
4.8.16 Week
4.8.17 WeekOfMonth
4.8.18 LastWeekOfMonth
4.8.19 QuarterOffset
4.8.20 BQuarterEnd
4.8.21 BQuarterBegin
4.8.22 QuarterEnd
4.8.23 QuarterBegin
4.8.24 YearOffset
4.8.25 BYearEnd
4.8.26 BYearBegin
4.8.27 YearEnd
4.8.28 YearBegin
4.8.29 FY5253
4.8.30 FY5253Quarter
4.8.31 Easter
4.8.32 Tick
4.8.33 Day
4.8.34 Hour
4.8.35 Minute
4.8.36 Second
4.8.37 Milli
4.8.38 Micro
4.8.39 Nano
4.8.40 BDay
4.8.41 BMonthEnd
4.8.42 BMonthBegin
4.8.43 CBMonthEnd
4.8.44 CBMonthBegin
4.8.45 CDay
4.9 Frequencies
4.9.1 pandas.tseries.frequencies.to_offset
4.10 Window
4.10.1 Standard moving window functions
4.10.2 Standard expanding window functions
4.10.3 Exponentially-weighted moving window functions
4.10.4 Window Indexer
4.11 GroupBy
4.11.1 Indexing, iteration
4.11.2 Function application
4.11.3 Computations / descriptive stats
4.12 Resampling
4.12.1 Indexing, iteration
4.12.2 Function application
4.12.3 Upsampling
4.12.4 Computations / descriptive stats
4.13 Style
4.13.1 Styler constructor
4.13.2 Styler properties
4.13.3 Style application
4.13.4 Builtin styles
4.13.5 Style export and import
4.14 Plotting
4.14.1 pandas.plotting.andrews_curves
4.14.2 pandas.plotting.autocorrelation_plot
4.14.3 pandas.plotting.bootstrap_plot
4.14.4 pandas.plotting.boxplot
4.14.5 pandas.plotting.deregister_matplotlib_converters
4.14.6 pandas.plotting.lag_plot
4.14.7 pandas.plotting.parallel_coordinates
4.14.8 pandas.plotting.plot_params
4.14.9 pandas.plotting.radviz
4.14.10 pandas.plotting.register_matplotlib_converters
4.14.11 pandas.plotting.scatter_matrix
4.14.12 pandas.plotting.table
4.15 General utility functions
4.15.1 Working with options
4.15.2 Testing functions
4.15.3 Exceptions and warnings
4.15.4 Data types related functionality
4.16 Extensions
4.16.1 pandas.api.extensions.register_extension_dtype
4.16.2 pandas.api.extensions.register_dataframe_accessor
4.16.3 pandas.api.extensions.register_series_accessor
4.16.4 pandas.api.extensions.register_index_accessor
4.16.5 pandas.api.extensions.ExtensionDtype
4.16.6 pandas.api.extensions.ExtensionArray
4.16.7 pandas.arrays.PandasArray
4.16.8 pandas.api.indexers.check_array_indexer
5 Development
5.1 Contributing to pandas
5.1.1 Where to start?
5.1.2 Bug reports and enhancement requests
5.1.3 Working with the code
5.1.4 Contributing to the documentation
5.1.5 Contributing to the code base
5.1.6 Contributing your changes to pandas
5.2 pandas code style guide
5.2.1 Patterns
5.2.2 String formatting
5.3 Pandas Maintenance
5.3.1 Roles
5.3.2 Tasks
5.3.3 Issue Triage
5.3.4 Closing Issues
5.3.5 Reviewing Pull Requests
5.3.6 Cleaning up old Issues
5.3.7 Cleaning up old Pull Requests
5.3.8 Becoming a pandas maintainer
5.4 Internals
5.4.1 Indexing
5.4.2 Subclassing pandas data structures
5.5 Extending pandas
5.5.1 Registering custom accessors
5.5.2 Extension types
5.5.3 Subclassing pandas data structures
5.5.4 Plotting backends
5.6 Developer
5.6.1 Storing pandas DataFrame objects in Apache Parquet format
5.7 Policies
5.7.1 Version Policy
5.7.2 Python Support
5.8 Roadmap
5.8.1 Extensibility
5.8.2 String data type
5.8.3 Apache Arrow interoperability
5.8.4 Block manager rewrite
5.8.5 Decoupling of indexing and internals
5.8.6 Numba-accelerated operations
5.8.7 Documentation improvements
5.8.8 Package docstring validation
5.8.9 Performance monitoring
5.8.10 Roadmap Evolution
5.9 Developer Meetings
5.9.1 Minutes
5.9.2 Calendar
6.3.1 What's new in 0.24.2 (March 12, 2019)
6.3.2 What's new in 0.24.1 (February 3, 2019)
6.3.3 What's new in 0.24.0 (January 25, 2019)
6.4 Version 0.23
6.4.1 What's new in 0.23.4 (August 3, 2018)
6.4.2 What's new in 0.23.3 (July 7, 2018)
6.4.3 What's new in 0.23.2 (July 5, 2018)
6.4.4 What's new in 0.23.1 (June 12, 2018)
6.4.5 What's new in 0.23.0 (May 15, 2018)
6.5 Version 0.22
6.5.1 v0.22.0 (December 29, 2017)
6.6 Version 0.21
6.6.1 v0.21.1 (December 12, 2017)
6.6.2 v0.21.0 (October 27, 2017)
6.7 Version 0.20
6.7.1 v0.20.3 (July 7, 2017)
6.7.2 v0.20.2 (June 4, 2017)
6.7.3 v0.20.1 (May 5, 2017)
6.8 Version 0.19
6.8.1 v0.19.2 (December 24, 2016)
6.8.2 v0.19.1 (November 3, 2016)
6.8.3 v0.19.0 (October 2, 2016)
6.9 Version 0.18
6.9.1 v0.18.1 (May 3, 2016)
6.9.2 v0.18.0 (March 13, 2016)
6.10 Version 0.17
6.10.1 v0.17.1 (November 21, 2015)
6.10.2 v0.17.0 (October 9, 2015)
6.11 Version 0.16
6.11.1 v0.16.2 (June 12, 2015)
6.11.2 v0.16.1 (May 11, 2015)
6.11.3 v0.16.0 (March 22, 2015)
6.12 Version 0.15
6.12.1 v0.15.2 (December 12, 2014)
6.12.2 v0.15.1 (November 9, 2014)
6.12.3 v0.15.0 (October 18, 2014)
6.13 Version 0.14
6.13.1 v0.14.1 (July 11, 2014)
6.13.2 v0.14.0 (May 31, 2014)
6.14 Version 0.13
6.14.1 v0.13.1 (February 3, 2014)
6.14.2 v0.13.0 (January 3, 2014)
6.15 Version 0.12
6.15.1 v0.12.0 (July 24, 2013)
6.16 Version 0.11
6.16.1 v0.11.0 (April 22, 2013)
6.17 Version 0.10
6.17.1 v0.10.1 (January 22, 2013)
6.17.2 v0.10.0 (December 17, 2012)
6.18 Version 0.9
6.18.1 v0.9.1 (November 14, 2012)
6.18.2 v0.9.0 (October 7, 2012)
6.19 Version 0.8
6.19.1 v0.8.1 (July 22, 2012)
6.19.2 v0.8.0 (June 29, 2012)
6.20 Version 0.7
6.20.1 v.0.7.3 (April 12, 2012)
6.20.2 v.0.7.2 (March 16, 2012)
6.20.3 v.0.7.1 (February 29, 2012)
6.20.4 v.0.7.0 (February 9, 2012)
6.21 Version 0.6
6.21.1 v.0.6.1 (December 13, 2011)
6.21.2 v.0.6.0 (November 25, 2011)
6.22 Version 0.5
6.22.1 v.0.5.0 (October 24, 2011)
6.23 Version 0.4
6.23.1 v.0.4.1 through v0.4.3 (September 25 - October 9, 2011)
Bibliography
[email protected]
166FVD0TPV
xi
This file is meant for personal use by [email protected] only.
Sharing or publishing the contents in part or full is liable for legal action.
[email protected]
166FVD0TPV
xii
This file is meant for personal use by [email protected] only.
Sharing or publishing the contents in part or full is liable for legal action.
CHAPTER
ONE

WHAT'S NEW IN 1.0.1
These are the changes in pandas 1.0.1. See Release Notes for a full changelog including other versions of pandas.

1.1 Fixed regressions
• Fixed regression in DataFrame setting values with a slice (e.g. df[-4:] = 1) indexing by label instead of
position (GH31469)
• Fixed regression when indexing a Series or DataFrame indexed by DatetimeIndex with a slice containing a datetime.date (GH31501)
• Fixed regression in DataFrame.__setitem__ raising an AttributeError with a MultiIndex and
a non-monotonic indexer (GH31449)
• Fixed regression in Series multiplication when multiplying a numeric Series with >10000 elements with a timedelta-like scalar (GH31457)
• Fixed regression in .groupby().agg() raising an AssertionError for some reductions like min on
object-dtype columns (GH31522)
• Fixed regression in .groupby() aggregations with categorical dtype using Cythonized reduction functions
(e.g. first) (GH31450)
• Fixed regression in GroupBy.apply() if called with a function which returned a non-pandas non-scalar
object (e.g. a list or numpy array) (GH31441)
• Fixed regression in DataFrame.groupby() whereby taking the minimum or maximum of a column with
period dtype would raise a TypeError. (GH31471)
• Fixed regression in DataFrame.groupby() with an empty DataFrame grouping by a level of a MultiIndex
(GH31670).
• Fixed regression in DataFrame.apply() with object dtype and non-reducing function (GH31505)
• Fixed regression in to_datetime() when parsing non-nanosecond resolution datetimes (GH31491)
• Fixed regression in to_csv() where specifying an na_rep might truncate the values written (GH31447)
• Fixed regression in Categorical construction with numpy.str_ categories (GH31499)
• Fixed regression in DataFrame.loc() and DataFrame.iloc() when selecting a row containing a single
datetime64 or timedelta64 column (GH31649)
• Fixed regression where setting pd.options.display.max_colwidth was not accepting a negative integer. In addition, this behavior has been deprecated in favor of using None (GH31532)
• Fixed a return-type warning in objToJSON.c (GH31463)
1.2 Deprecations
1.3 Bug fixes
Datetimelike
• Fixed bug in to_datetime() raising when cache=True and out-of-bound values are present (GH31491)
Numeric
• Bug in dtypes being lost in DataFrame.__invert__ (~ operator) with mixed dtypes (GH31183) and for
extension-array backed Series and DataFrame (GH23087)
Plotting
[email protected]
166FVD0TPV • Plotting tz-aware timeseries no longer gives UserWarning (GH31205)
Interval
• Bug in Series.shift() with interval dtype raising a TypeError when shifting an interval array of
integers or datetimes (GH34195)
1.4 Contributors
A total of 7 people contributed patches to this release. People with a “+” by their names contributed a patch for the
first time.
• Guillaume Lemaitre
• Jeff Reback
• Joris Van den Bossche
• Kaiqi Dong
• MeeseeksMachine
• Pandas Development Team
• Tom Augspurger
CHAPTER
TWO
GETTING STARTED
2.1 Installation
Learn more
[email protected]
166FVD0TPV
2.2 Intro to pandas
Straight to tutorial. . .
When working with tabular data, such as data stored in spreadsheets or databases, Pandas is the right tool for you.
Pandas will help you to explore, clean and process your data. In Pandas, a data table is called a DataFrame.
To introduction tutorial
To user guide
Straight to tutorial. . .
Pandas supports integration with many file formats and data sources out of the box (csv, excel, sql, json, parquet, . . . ).
Importing data from each of these data sources is provided by functions with the prefix read_*. Similarly, the to_*
methods are used to store data.
To introduction tutorial
To user guide
Straight to tutorial. . .
Selecting or filtering specific rows and/or columns? Filtering the data on a condition? Methods for slicing, selecting,
and extracting the data you need are available in Pandas.
To introduction tutorial
To user guide
Straight to tutorial. . .
Pandas supports plotting your data out of the box, using the power of Matplotlib. You can pick the plot type (scatter,
bar, boxplot, . . . ) corresponding to your data.
To introduction tutorial
To user guide
Straight to tutorial. . .
There is no need to loop over all rows of your data table to do calculations. Data manipulations on a column work
elementwise. Adding a column to a DataFrame based on existing data in other columns is straightforward.
To introduction tutorial
To user guide
Straight to tutorial. . .
Basic statistics (mean, median, min, max, counts. . . ) are easily calculable. These or custom aggregations can be
applied on the entire data set, a sliding window of the data or grouped by categories. The latter is also known as the
split-apply-combine approach.
[email protected]
To introduction tutorial
166FVD0TPV
To user guide
Straight to tutorial. . .
Change the structure of your data table in multiple ways. You can melt() your data table from wide to long/tidy form
or pivot() from long to wide format. With aggregations built in, a pivot table is created with a single command.
To introduction tutorial
To user guide
Straight to tutorial. . .
Multiple tables can be concatenated both column-wise and row-wise, and database-like join/merge operations are
provided to combine multiple tables of data.
To introduction tutorial
To user guide
Straight to tutorial. . .
Pandas has great support for time series and has an extensive set of tools for working with dates, times, and time-
indexed data.
To introduction tutorial
To user guide
Straight to tutorial. . .
Data sets do not only contain numerical data. Pandas provides a wide range of functions to clean textual data and
extract useful information from it.
To introduction tutorial
To user guide
Currently working with other software for data manipulation in a tabular format? You're probably familiar with typical
data operations and know what to do with your tabular data, but lack the syntax to execute these operations. Get to
know the pandas syntax by looking for equivalents from the software you already know:
Learn more
Learn more
Learn more
Learn more
The community produces a wide variety of tutorials available online. Some of the material is listed in the
community-contributed Tutorials.
[email protected]
166FVD0TPV
2.4.1 Installation
The easiest way to install pandas is to install it as part of the Anaconda distribution, a cross platform distribution for
data analysis and scientific computing. This is the recommended installation method for most users.
Instructions for installing from source, PyPI, ActivePython, various Linux distributions, or a development version are
also provided.
Installing pandas
Installing pandas and the rest of the NumPy and SciPy stack can be a little difficult for inexperienced users.
The simplest way to install not only pandas, but Python and the most popular packages that make up the SciPy
stack (IPython, NumPy, Matplotlib, . . . ) is with Anaconda, a cross-platform (Linux, Mac OS X, Windows) Python
distribution for data analytics and scientific computing.
After running the installer, the user will have access to pandas and the rest of the SciPy stack without needing to install
anything else, and without needing to wait for any software to be compiled.
The previous section outlined how to get pandas installed as part of the Anaconda distribution. However this approach
means you will install well over one hundred packages and involves downloading the installer which is a few hundred
megabytes in size.
If you want more control over which packages are installed, or have limited internet bandwidth, then installing pandas
with Miniconda may be a better solution.
Conda is the package manager that the Anaconda distribution is built upon. It is a package manager that is both
cross-platform and language agnostic (it can play a similar role to a pip and virtualenv combination).
Miniconda allows you to create a minimal self contained Python installation, and then use the Conda command to
install additional packages.
First you will need Conda to be installed; downloading and running the Miniconda installer will do this for you. The
installer can be found here.
The next step is to create a new conda environment. A conda environment is like a virtualenv that allows you to specify
a specific version of Python and set of libraries. Run the following commands from a terminal window:
activate name_of_my_env
The final step required is to install pandas. This can be done with the following command:
If you need packages that are available to pip but not conda, then install pip, and then use pip to install those packages:
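A sketch of the commands for these steps (the environment name name_of_my_env and some-package are placeholders):

conda create -n name_of_my_env python
conda install pandas
conda install pip
pip install some-package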
Installation instructions for ActivePython can be found here. Versions 2.7, 3.5 and 3.6 include pandas.
The commands in this table will install pandas for Python 3 from your distribution. To install pandas for Python 2,
you may need to use the python-pandas package.
However, the packages in the linux package managers are often a few versions behind, so to get the newest version of
pandas, it’s recommended to install using the pip or conda methods described above.
See the contributing guide for complete instructions on building from the git source tree. Further, see creating a
development environment if you wish to create a pandas development environment.
pandas is equipped with an exhaustive set of unit tests, covering about 97% of the code base as of this writing. To
run it on your machine to verify that everything is working (and that you have all of the dependencies, soft and hard,
installed), make sure you have pytest >= 5.0.1 and Hypothesis >= 3.58, then run:
>>> pd.test()
running: pytest --skip-slow --skip-network C:\Users\TP\Anaconda3\envs\py36\lib\site-
˓→packages\pandas
..................................................................S......
........S................................................................
.........................................................................
Dependencies
Recommended dependencies
• numexpr: for accelerating certain numerical operations. numexpr uses multiple cores as well as smart chunk-
ing and caching to achieve large speedups. If installed, must be Version 2.6.2 or higher.
• bottleneck: for accelerating certain types of nan evaluations. bottleneck uses specialized cython routines
to achieve large speedups. If installed, must be Version 1.2.1 or higher.
Note: You are highly encouraged to install these libraries, as they provide speed improvements, especially when
working with large data sets.
Optional dependencies
Pandas has many optional dependencies that are only used for specific methods. For example, pandas.
read_hdf() requires the pytables package, while DataFrame.to_markdown() requires the tabulate
package. If the optional dependency is not installed, pandas will raise an ImportError when the method requiring
that dependency is called.
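A minimal sketch of that behavior, assuming the optional tabulate package is not installed in the environment:

import pandas as pd
df = pd.DataFrame({"a": [1, 2]})
df.to_markdown()  # raises ImportError because the optional "tabulate" dependency is missing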
[email protected]
166FVD0TPV Optional dependencies for parsing HTML
One of the following combinations of libraries is needed to use the top-level read_html() function:
Changed in version 0.23.0.
• BeautifulSoup4 and html5lib
• BeautifulSoup4 and lxml
• BeautifulSoup4 and html5lib and lxml
• Only lxml, although see HTML Table Parsing for reasons as to why you should probably not take this approach.
Warning:
• if you install BeautifulSoup4 you must install either lxml or html5lib or both. read_html() will not work
with only BeautifulSoup4 installed.
• You are highly encouraged to read HTML Table Parsing gotchas. It explains issues surrounding the installa-
tion and usage of the above three libraries.
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with
“relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing
practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful
and flexible open source data analysis / manipulation tool available in any language. It is already well on its way
toward this goal.
pandas is well suited for many different kinds of data:
• Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
• Ordered and unordered (not necessarily fixed-frequency) time series data.
• Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
• Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed
into a pandas data structure
The two primary data structures of pandas, Series (1-dimensional) and DataFrame (2-dimensional), handle the
vast majority of typical use cases in finance, statistics, social science, and many areas of engineering. For R users,
DataFrame provides everything that R’s data.frame provides and much more. pandas is built on top of NumPy
and is intended to integrate well within a scientific computing environment with many other 3rd party libraries.
Here are just a few of the things that pandas does well:
• Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
• Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
• Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can
simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
[email protected]
166FVD0TPV
• Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both ag-
gregating and transforming data
• Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into
DataFrame objects
• Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
• Intuitive merging and joining data sets
• Flexible reshaping and pivoting of data sets
• Hierarchical labeling of axes (possible to have multiple labels per tick)
• Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving / loading
data from the ultrafast HDF5 format
• Time series-specific functionality: date range generation and frequency conversion, moving window statistics,
date shifting and lagging.
Many of these principles are here to address the shortcomings frequently experienced using other languages / scientific
research environments. For data scientists, working with data is typically divided into multiple stages: munging and
cleaning data, analyzing / modeling it, then organizing the results of the analysis into a form suitable for plotting or
tabular display. pandas is the ideal tool for all of these tasks.
Some other notes
• pandas is fast. Many of the low-level algorithmic bits have been extensively tweaked in Cython code. However,
as with anything else generalization usually sacrifices performance. So if you focus on one feature for your
application you may be able to create a faster specialized tool.
• pandas is a dependency of statsmodels, making it an important part of the statistical computing ecosystem in
Python.
• pandas has been used extensively in production in financial applications.
Data structures
The best way to think about the pandas data structures is as flexible containers for lower dimensional data. For
example, DataFrame is a container for Series, and Series is a container for scalars. We would like to be able to insert
and remove objects from these containers in a dictionary-like fashion.
Also, we would like sensible default behaviors for the common API functions which take into account the typical
orientation of time series and cross-sectional data sets. When using ndarrays to store 2- and 3-dimensional data, a
burden is placed on the user to consider the orientation of the data set when writing functions; axes are considered
more or less equivalent (except when C- or Fortran-contiguousness matters for performance). In pandas, the axes are
intended to lend more semantic meaning to the data; i.e., for a particular data set there is likely to be a “right” way to
orient the data. The goal, then, is to reduce the amount of mental effort required to code up data transformations in
downstream functions.
[email protected]
166FVD0TPV For example, with tabular data (DataFrame) it is more semantically helpful to think of the index (the rows) and the
columns rather than axis 0 and axis 1. Iterating through the columns of the DataFrame thus results in more readable
code:
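A sketch of such a loop, assuming df is any DataFrame:

for col in df.columns:
    series = df[col]
    # work with one column (a Series) at a time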
All pandas data structures are value-mutable (the values they contain can be altered) but not always size-mutable. The
length of a Series cannot be changed, but, for example, columns can be inserted into a DataFrame. However, the vast
majority of methods produce new objects and leave the input data untouched. In general we like to favor immutability
where sensible.
Getting support
The first stop for pandas issues and ideas is the Github Issue Tracker. If you have a general question, pandas community
experts can answer through Stack Overflow.
Community
pandas is actively supported today by a community of like-minded individuals around the world who contribute their
valuable time and energy to help make open source pandas possible. Thanks to all of our contributors.
If you’re interested in contributing, please visit the contributing guide.
pandas is a NumFOCUS sponsored project. This will help ensure the success of development of pandas as a world-
class open-source project, and makes it possible to donate to the project.
Project governance
The governance process that pandas project has used informally since its inception in 2008 is formalized in Project
Governance documents. The documents clarify how decisions are made and how the various elements of our commu-
nity interact, including the relationship between open source collaborative development and work that may be funded
by for-profit or non-profit entities.
Wes McKinney is the Benevolent Dictator for Life (BDFL).
Development team
The list of the Core Team members and more detailed information can be found on the people’s page of the governance
repo.
Institutional partners
[email protected]
166FVD0TPV The information about current institutional partners can be found on pandas website page.
License
Copyright (c) 2008-2012, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData
˓→Development Team
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
(continues on next page)
This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.
Customarily, we import as follows:
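The conventional imports are:

import numpy as np
import pandas as pd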
Object creation
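Creating a Series by passing a list of values, letting pandas create a default integer index; a sketch consistent with the output below:

s = pd.Series([1, 3, 5, np.nan, 6, 8])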
In [4]: s
Out[4]:
0 1.0
1 3.0
2 5.0
3 NaN
4 6.0
5 8.0
dtype: float64
Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:
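A sketch of the index construction consistent with the output below:

dates = pd.date_range('20130101', periods=6)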
In [6]: dates
Out[6]:
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
'2013-01-05', '2013-01-06'],
dtype='datetime64[ns]', freq='D')
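A sketch of the frame construction consistent with the (random-valued) output below:

df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))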
In [8]: df
Out[8]:
A B C D
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
(continues on next page)
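The frame df2 shown next is built from a dict of objects that can be converted to series-like; a sketch consistent with the dtypes listed below:

df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(['test', 'train', 'test', 'train']),
                    'F': 'foo'})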
In [10]: df2
Out[10]:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [11]: df2.dtypes
Out[11]:
[email protected]
166FVD0TPV A float64
B datetime64[ns]
C float32
D int32
E category
F object
dtype: object
If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled.
Here’s a subset of the attributes that will be completed:
As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes
have been truncated for brevity.
Viewing data
In [14]: df.tail(3)
Out[14]:
A B C D
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
2013-01-05 0.126389 -0.162619 0.780624 -0.213437
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590
DataFrame.to_numpy() gives a NumPy representation of the underlying data. Note that this can be an expensive
operation when your DataFrame has columns with different data types, which comes down to a fundamental differ-
ence between pandas and NumPy: NumPy arrays have one dtype for the entire array, while pandas DataFrames
have one dtype per column. When you call DataFrame.to_numpy(), pandas will find the NumPy dtype that
can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a
Python object.
For df, our DataFrame of all floating-point values, DataFrame.to_numpy() is fast and doesn’t require copying
data.
In [17]: df.to_numpy()
Out[17]:
array([[ 0.53725033, -0.31500536, -0.93578271, 1.19968629],
[-1.09344303, 1.27962224, -0.08537764, 1.15689587],
[ 0.04582511, -0.27488522, -0.21329122, 1.03342476],
[-0.22181841, -0.53074538, 0.64564452, 2.90949261],
[ 0.12638926, -0.16261927, 0.78062425, -0.21343653],
[ 0.04573531, -0.55419961, -1.40462594, -0.28659015]])
For df2, the DataFrame with multiple dtypes, DataFrame.to_numpy() is relatively expensive.
In [18]: df2.to_numpy()
Out[18]:
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo'],
(continues on next page)
Note: DataFrame.to_numpy() does not include the index or column labels in the output.
In [19]: df.describe()
Out[19]:
A B C D
count 6.000000 6.000000 6.000000 6.000000
mean -0.093344 -0.092972 -0.202135 0.966579
std 0.547968 0.689294 0.858199 1.169000
min -1.093443 -0.554200 -1.404626 -0.286590
25% -0.154930 -0.476810 -0.755160 0.098279
50% 0.045780 -0.294945 -0.149334 1.095160
75% 0.106248 -0.190686 0.462889 1.188989
max 0.537250 1.279622 0.780624 2.909493
In [20]: df.T
Out[20]:
2013-01-01 2013-01-02 2013-01-03 2013-01-04 2013-01-05 2013-01-06
A 0.537250 -1.093443 0.045825 -0.221818 0.126389 0.045735
[email protected]
166FVD0TPV B -0.315005 1.279622 -0.274885 -0.530745 -0.162619 -0.554200
C -0.935783 -0.085378 -0.213291 0.645645 0.780624 -1.404626
D 1.199686 1.156896 1.033425 2.909493 -0.213437 -0.286590
Sorting by an axis:
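A sketch of such a call (sorting the columns in descending order):

df.sort_index(axis=1, ascending=False)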
Sorting by values:
In [22]: df.sort_values(by='B')
Out[22]:
A B C D
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-05 0.126389 -0.162619 0.780624 -0.213437
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
Selection
Note: While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for
interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc
and .iloc.
See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.
Getting
In [23]: df['A']
Out[23]:
2013-01-01 0.537250
2013-01-02 -1.093443
2013-01-03 0.045825
2013-01-04 -0.221818
2013-01-05 0.126389
2013-01-06 0.045735
Freq: D, Name: A, dtype: float64
In [24]: df[0:3]
Out[24]:
A B C D
2013-01-01 0.537250 -0.315005 -0.935783 1.199686
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
In [25]: df['20130102':'20130104']
Out[25]:
A B C D
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
2013-01-04 -0.221818 -0.530745 0.645645 2.909493
Selection by label
In [26]: df.loc[dates[0]]
Out[26]:
A 0.537250
B -0.315005
C -0.935783
D 1.199686
Name: 2013-01-01 00:00:00, dtype: float64
Selection by position
In [35]: df.iloc[1:3, :]
Out[35]:
A B C D
2013-01-02 -1.093443 1.279622 -0.085378 1.156896
2013-01-03 0.045825 -0.274885 -0.213291 1.033425
In [37]: df.iloc[1, 1]
Out[37]: 1.2796222412458425
In [38]: df.iat[1, 1]
Out[38]: 1.2796222412458425
Boolean indexing
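A sketch of boolean selection and of the construction of the df2 shown below (df2 is assumed to be a copy of df with an extra column E):

df[df['A'] > 0]                      # rows where a single column's values are positive
df[df > 0]                           # values where a condition holds; others become NaN
df2 = df.copy()
df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']
df2[df2['E'].isin(['two', 'four'])]  # filtering with isin()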
In [43]: df2
Out[43]:
A B C D E
2013-01-01 0.537250 -0.315005 -0.935783 1.199686 one
2013-01-02 -1.093443 1.279622 -0.085378 1.156896 one
2013-01-03 0.045825 -0.274885 -0.213291 1.033425 two
2013-01-04 -0.221818 -0.530745 0.645645 2.909493 three
2013-01-05 0.126389 -0.162619 0.780624 -0.213437 four
2013-01-06 0.045735 -0.554200 -1.404626 -0.286590 three
Setting
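A sketch of the series s1 used below, consistent with its index and values:

s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130102', periods=6))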
In [46]: s1
Out[46]:
2013-01-02 1
2013-01-03 2
2013-01-04 3
2013-01-05 4
2013-01-06 5
2013-01-07 6
Freq: D, dtype: int64
In [47]: df['F'] = s1
In [49]: df.iat[0, 1] = 0
In [51]: df
Out[51]:
A B C D F
2013-01-01 0.000000 0.000000 -0.935783 5 NaN
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0
2013-01-03 0.045825 -0.274885 -0.213291 5 2.0
2013-01-04 -0.221818 -0.530745 0.645645 5 3.0
2013-01-05 0.126389 -0.162619 0.780624 5 4.0
2013-01-06 0.045735 -0.554200 -1.404626 5 5.0
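The df2 shown next appears to be a copy of df with every positive value negated; a sketch:

df2 = df.copy()
df2[df2 > 0] = -df2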
In [54]: df2
Out[54]:
A B C D F
2013-01-01 0.000000 0.000000 -0.935783 -5 NaN
[email protected]
2013-01-02 -1.093443 -1.279622 -0.085378 -5 -1.0
166FVD0TPV 2013-01-03 -0.045825 -0.274885 -0.213291 -5 -2.0
2013-01-04 -0.221818 -0.530745 -0.645645 -5 -3.0
2013-01-05 -0.126389 -0.162619 -0.780624 -5 -4.0
2013-01-06 -0.045735 -0.554200 -1.404626 -5 -5.0
Missing data
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See
the Missing Data section.
Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
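A sketch of a reindexing step consistent with the df1 shown below (an extra column E, filled for the first two dates):

df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1], 'E'] = 1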
In [57]: df1
Out[57]:
A B C D F E
2013-01-01 0.000000 0.000000 -0.935783 5 NaN 1.0
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0 1.0
2013-01-03 0.045825 -0.274885 -0.213291 5 2.0 NaN
2013-01-04 -0.221818 -0.530745 0.645645 5 3.0 NaN
In [58]: df1.dropna(how='any')
Out[58]:
A B C D F E
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0 1.0
In [59]: df1.fillna(value=5)
Out[59]:
A B C D F E
2013-01-01 0.000000 0.000000 -0.935783 5 5.0 1.0
2013-01-02 -1.093443 1.279622 -0.085378 5 1.0 1.0
2013-01-03 0.045825 -0.274885 -0.213291 5 2.0 5.0
2013-01-04 -0.221818 -0.530745 0.645645 5 3.0 5.0
In [60]: pd.isna(df1)
Out[60]:
A B C D F E
2013-01-01 False False False False True False
2013-01-02 False False False False False False
2013-01-03 False False False False False True
2013-01-04 False False False False False True
Operations
[email protected]
166FVD0TPV See the Basic section on Binary Ops.
Stats
In [61]: df.mean()
Out[61]:
A -0.182885
B -0.040471
C -0.202135
D 5.000000
F 3.000000
dtype: float64
In [62]: df.mean(1)
Out[62]:
2013-01-01 1.016054
2013-01-02 1.220160
2013-01-03 1.311530
2013-01-04 1.578616
2013-01-05 1.948879
2013-01-06 1.617382
Freq: D, dtype: float64
Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically
broadcasts along the specified dimension.
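A sketch of the shifted series s and the aligned subtraction, consistent with the output below:

s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)
df.sub(s, axis='index')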
In [64]: s
Out[64]:
2013-01-01 NaN
2013-01-02 NaN
2013-01-03 1.0
2013-01-04 3.0
2013-01-05 5.0
2013-01-06 NaN
Freq: D, dtype: float64
Apply
[email protected]
Applying functions to the data:
166FVD0TPV
In [66]: df.apply(np.cumsum)
Out[66]:
A B C D F
2013-01-01 0.000000 0.000000 -0.935783 5 NaN
2013-01-02 -1.093443 1.279622 -1.021160 10 1.0
2013-01-03 -1.047618 1.004737 -1.234452 15 3.0
2013-01-04 -1.269436 0.473992 -0.588807 20 6.0
2013-01-05 -1.143047 0.311372 0.191817 25 10.0
2013-01-06 -1.097312 -0.242827 -1.212809 30 15.0
Histogramming
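A sketch of the series s used below (random integers between 0 and 6):

s = pd.Series(np.random.randint(0, 7, size=10))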
In [69]: s
Out[69]:
0 3
1 4
2 4
3 3
4 1
5 3
6 3
7 5
8 4
9 0
dtype: int64
In [70]: s.value_counts()
Out[70]:
3 4
4 3
5 1
1 1
0 1
dtype: int64
[email protected]
166FVD0TPV
String Methods
Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each
element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions
by default (and in some cases always uses them). See more at Vectorized String Methods.
In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [72]: s.str.lower()
Out[72]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
Merge
Concat
pandas provides various facilities for easily combining together Series and DataFrame objects with various kinds of
set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
See the Merging section.
Concatenating pandas objects together with concat():
In [74]: df
Out[74]:
0 1 2 3
0 0.734273 -0.935628 0.902144 0.063131
1 -0.493928 0.905459 -0.736241 0.330944
2 0.101657 -2.083426 0.254902 0.026104
3 0.347046 0.407484 0.130171 -0.146293
4 1.094031 0.941765 -0.698465 1.187225
5 0.781335 -0.858982 -0.051083 -0.894259
6 -1.818150 0.571072 -0.639691 -0.103313
7 -1.528309 0.684885 -0.450234 0.121959
8 -1.545637 -1.075357 -0.377368 0.937646
9 0.960006 1.657349 0.973478 -0.746665
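The frame is then split into pieces and glued back together; a sketch:

pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)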
Note: Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be
expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a
DataFrame by iteratively appending records to it. See Appending to dataframe for more.
Join
In [79]: left
Out[79]:
key lval
0 foo 1
1 foo 2
In [80]: right
Out[80]:
key rval
0 foo 4
1 foo 5
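A sketch of the SQL-style join of these two frames on their common key column:

pd.merge(left, right, on='key')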
[email protected]
Another example that can be given is:
166FVD0TPV
In [82]: left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
In [84]: left
Out[84]:
key lval
0 foo 1
1 bar 2
In [85]: right
Out[85]:
key rval
0 foo 4
1 bar 5
Grouping
By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria
• Applying a function to each group independently
• Combining the results into a data structure
See the Grouping section.
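A sketch of the frame construction consistent with the output below (random values for C and D):

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})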
In [88]: df
Out[88]:
A B C D
0 foo one -1.708850 -0.063695
1 bar one 0.378037 0.550798
2 foo two 0.215822 1.216365
3 bar three 0.753779 0.590228
4 foo two 0.178214 -0.849016
5 bar two 1.623137 1.803818
6 foo one 2.769917 1.362410
[email protected]
7 foo three 0.645515 0.412037
166FVD0TPV
Grouping and then applying the sum() function to the resulting groups.
In [89]: df.groupby('A').sum()
Out[89]:
C D
A
bar 2.754954 2.944844
foo 2.100618 2.078100
Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.
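A sketch of that call:

df.groupby(['A', 'B']).sum()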
Reshaping
Stack
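A sketch of the MultiIndex frame df2 used below, consistent with its index and columns:

tuples = list(zip(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                  ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]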
In [95]: df2
Out[95]:
A B
first second
bar one 1.294317 1.636713
two 0.986587 -0.877156
baz one -0.649757 1.186025
two -2.445453 -0.108421
[email protected]
166FVD0TPV The stack() method “compresses” a level in the DataFrame’s columns.
In [96]: stacked = df2.stack()
In [97]: stacked
Out[97]:
first second
bar one A 1.294317
B 1.636713
two A 0.986587
B -0.877156
baz one A -0.649757
B 1.186025
two A -2.445453
B -0.108421
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is
unstack(), which by default unstacks the last level:
In [98]: stacked.unstack()
Out[98]:
A B
first second
bar one 1.294317 1.636713
two 0.986587 -0.877156
baz one -0.649757 1.186025
two -2.445453 -0.108421
In [100]: stacked.unstack(0)
Out[100]:
first bar baz
second
one A 1.294317 -0.649757
B 1.636713 1.186025
two A 0.986587 -2.445453
B -0.877156 -0.108421
Pivot tables
In [102]: df
Out[102]:
A B C D E
0 one A foo -0.618069 -0.814689
1 one B foo 0.846151 1.033482
2 two C foo -0.494035 -0.541444
3 three A bar -1.118823 0.254531
4 one B bar -0.340439 -0.604735
5 one C bar 0.945814 -0.955822
6 two A foo 0.823720 0.544094
7 three B foo 0.812442 1.461520
8 one C foo 2.212842 -1.555660
9 one A bar 0.632421 0.290112
10 two B bar 0.387412 -0.880864
11 three C bar 1.778351 -0.353401
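From such a frame a pivot table can be produced; a sketch (D values, indexed by A and B, with C spread across the columns):

pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])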
Time series
pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency con-
version (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial
applications. See the Time Series section.
In [104]: rng = pd.date_range('1/1/2012', periods=100, freq='S')
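A sketch of the series being resampled (random integer "counts" for each second of rng):

ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)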
In [106]: ts.resample('5Min').sum()
Out[106]:
2012-01-01 23247
Freq: 5T, dtype: int64
In [111]: ts_utc
Out[111]:
2012-03-06 00:00:00+00:00 1.215326
2012-03-07 00:00:00+00:00 0.265352
2012-03-08 00:00:00+00:00 -0.142587
2012-03-09 00:00:00+00:00 0.134160
2012-03-10 00:00:00+00:00 -0.842578
Freq: D, dtype: float64
In [115]: ts
Out[115]:
2012-01-31 2.872280
2012-02-29 -0.138958
2012-03-31 -0.006695
2012-04-30 0.114531
2012-05-31 0.061088
Freq: M, dtype: float64
In [116]: ps = ts.to_period()
In [117]: ps
Out[117]:
2012-01 2.872280
2012-02 -0.138958
2012-03 -0.006695
2012-04 0.114531
2012-05 0.061088
Freq: M, dtype: float64
In [118]: ps.to_timestamp()
Out[118]:
2012-01-01 2.872280
2012-02-01 -0.138958
2012-03-01 -0.006695
2012-04-01 0.114531
2012-05-01 0.061088
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following
example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following
the quarter end:
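A sketch of that conversion, consistent with the ts.head() output below:

prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9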
In [122]: ts.head()
Out[122]:
1990-03-01 09:00 -0.047052
1990-06-01 09:00 2.133754
1990-09-01 09:00 0.694554
1990-12-01 09:00 1.031604
1991-03-01 09:00 -0.477875
Freq: H, dtype: float64
Categoricals
pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API
documentation.
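A sketch of the frame and the conversion of the raw grades to a categorical dtype, consistent with the output below:

df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
                   "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
df["grade"] = df["raw_grade"].astype("category")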
In [125]: df["grade"]
Out[125]:
0 a
1 b
2 b
3 a
4 a
5 e
Name: grade, dtype: category
Categories (3, object): [a, b, e]
Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new
Series by default).
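A sketch of the category manipulation (renaming to more meaningful labels, then adding the missing categories):

df["grade"] = df["grade"].cat.rename_categories(["very good", "good", "very bad"])
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium",
                                              "good", "very good"])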
In [128]: df["grade"]
Out[128]:
0 very good
1 good
2 good
3 very good
4 very good
5 very bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [129]: df.sort_values(by="grade")
Out[129]:
id raw_grade grade
5 6 e very bad
1 2 b good
2 3 b good
0 1 a very good
3 4 a very good
4 5 a very good
In [130]: df.groupby("grade").size()
Out[130]:
grade
very bad 1
bad 0
medium 0
good 2
very good 3
dtype: int64
Plotting
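The conventional matplotlib import used for the calls below is:

import matplotlib.pyplot as plt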
In [132]: plt.close('all')
In [133]: ts = pd.Series(np.random.randn(1000),
.....: index=pd.date_range('1/1/2000', periods=1000))
.....:
In [134]: ts = ts.cumsum()
[email protected]
166FVD0TPV In [135]: ts.plot()
Out[135]: <matplotlib.axes._subplots.AxesSubplot at 0x7f69fa500bd0>
[email protected]
166FVD0TPV
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
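A sketch of the frame being plotted (four random-walk columns sharing the ts index):

df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=['A', 'B', 'C', 'D'])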
In [137]: df = df.cumsum()
In [138]: plt.figure()
Out[138]: <Figure size 640x480 with 0 Axes>
In [139]: df.plot()
Out[139]: <matplotlib.axes._subplots.AxesSubplot at 0x7f69f9a960d0>
In [140]: plt.legend(loc='best')
Out[140]: <matplotlib.legend.Legend at 0x7f69f9a96d90>
[email protected]
166FVD0TPV
CSV
In [141]: df.to_csv('foo.csv')
In [142]: pd.read_csv('foo.csv')
Out[142]:
Unnamed: 0 A B C D
0 2000-01-01 -0.024395 -0.459905 0.424974 0.460299
1 2000-01-02 0.441403 0.309116 0.295536 -1.331048
2 2000-01-03 1.412170 0.519094 0.759803 -0.798177
3 2000-01-04 -0.280951 -0.284814 1.419353 -1.598425
4 2000-01-05 -1.961733 0.986828 3.894422 -2.294805
.. ... ... ... ... ...
995 2002-09-22 -19.414236 27.809222 39.064016 20.429488
996 2002-09-23 -20.199321 28.740891 36.143194 20.148467
997 2002-09-24 -21.278959 29.251941 36.579199 20.988765
998 2002-09-25 -21.462526 27.865121 36.807859 19.868755
(continues on next page)
HDF5
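A sketch of reading and writing to an HDF5 store (requires the optional pytables dependency; the file name is a placeholder):

df.to_hdf('foo.h5', 'df')
pd.read_hdf('foo.h5', 'df')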
Excel
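A sketch of reading and writing to an Excel file (requires the optional Excel dependencies; the file name is a placeholder):

df.to_excel('foo.xlsx', sheet_name='Sheet1')
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])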
Gotchas
If you are attempting to perform an operation you might see an exception like:
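The classic example is using a Series in a boolean context, roughly:

if pd.Series([False, True, False]):
    print("I was true")
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().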
I want to store passenger data of the Titanic. For a number of passengers, I know the name (characters), age (integers)
and sex (male/female) data.
In [2]: df = pd.DataFrame({
...: "Name": ["Braund, Mr. Owen Harris",
...: "Allen, Mr. William Henry",
...: "Bonnell, Miss. Elizabeth"],
...: "Age": [22, 35, 58],
...: "Sex": ["male", "male", "female"]}
...: )
...:
In [3]: df
Out[3]:
Name Age Sex
0 Braund, Mr. Owen Harris 22 male
1 Allen, Mr. William Henry 35 male
2 Bonnell, Miss. Elizabeth 58 female
To manually store data in a table, create a DataFrame. When using a Python dictionary of lists, the dictionary keys
will be used as column headers and the values in each list as rows of the DataFrame.
A DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers,
floating point values, categorical data and more) in columns. It is similar to a spreadsheet, a SQL table or the
data.frame in R.
• The table has 3 columns, each of them with a column label. The column labels are respectively Name, Age and
Sex.
• The column Name consists of textual data with each value a string, the column Age consists of numbers and the
column Sex is textual data.
In spreadsheet software, the table representation of our data would look very similar:
[email protected]
166FVD0TPV
I’m just interested in working with the data in the column Age
In [4]: df["Age"]
Out[4]:
0 22
1 35
2 58
Name: Age, dtype: int64
When selecting a single column of a pandas DataFrame, the result is a pandas Series. To select the column, use
the column label in between square brackets [].
Note: If you are familiar with Python dictionaries, the selection of a single column is very similar to the selection of
dictionary values based on the key.
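The selection that produced ages below is presumably:

ages = df["Age"]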
In [6]: ages
Out[6]:
0 22
1 35
2 58
Name: Age, dtype: int64
A pandas Series has no column labels, as it is just a single column of a DataFrame. A Series does have row
labels.
In [7]: df["Age"].max()
Out[7]: 58
Or to the Series:
[email protected]
In [8]: ages.max()
166FVD0TPV Out[8]: 58
As illustrated by the max() method, you can do things with a DataFrame or Series. pandas provides a lot of
functionalities, each of them a method you can apply to a DataFrame or Series. As methods are functions, do not
forget to use parentheses ().
I’m interested in some basic statistics of the numerical data of my data table
In [9]: df.describe()
Out[9]:
Age
count 3.000000
mean 38.333333
std 18.230012
min 22.000000
25% 28.500000
50% 35.000000
75% 46.500000
max 58.000000
The describe() method provides a quick overview of the numerical data in a DataFrame. As the Name and Sex
columns are textual data, these are by default not taken into account by the describe() method.
Many pandas operations return a DataFrame or a Series. The describe() method is an example of a pandas
operation returning a pandas Series.
Check more options on describe in the user guide section about aggregations with describe
Note: This is just a starting point. Similar to spreadsheet software, pandas represents data as a table with columns
and rows. Apart from the representation, also the data manipulations and calculations you would do in spreadsheet
software are supported by pandas. Continue reading the next tutorials to get started!
This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: Indication whether the passenger survived: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
• SibSp: Indication whether the passenger has siblings and/or a spouse aboard.
• Parch: Whether the passenger is alone or has family aboard.
• Ticket: Ticket number of the passenger.
• Fare: Indicating the fare.
• Cabin: The cabin of the passenger.
• Embarked: Port of embarkation.
pandas provides the read_csv() function to read data stored as a csv file into a pandas DataFrame. pandas
supports many different file formats or data sources out of the box (csv, excel, sql, json, parquet, . . . ), each of them
with the prefix read_*.
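A sketch of the read call that loads the data set (the CSV path is a placeholder):

titanic = pd.read_csv("data/titanic.csv")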
Make sure to always have a check on the data after reading it in. When displaying a DataFrame, the first and
last 5 rows will be shown by default:
In [3]: titanic
Out[3]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
(continues on next page)
To see the first N rows of a DataFrame, use the head() method with the required number of rows (in this case 8)
as argument.
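A sketch of that call:

titanic.head(8)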
Note: Interested in the last N rows instead? pandas also provides a tail() method. For example, titanic.
tail(10) will return the last 10 rows of the DataFrame.
A check on how pandas interpreted each of the column data types can be done by requesting the pandas dtypes
attribute:
In [5]: titanic.dtypes
Out[5]:
PassengerId int64
Survived int64
Pclass int64
Name object
Sex object
Age float64
SibSp int64
Parch int64
Ticket object
Fare float64
Cabin object
Embarked object
dtype: object
For each of the columns, the data type used is listed. The data types in this DataFrame are integers (int64),
floats (float64) and strings (object).
Note: When asking for the dtypes, no brackets are used! dtypes is an attribute of a DataFrame and
Series. Attributes of a DataFrame or Series do not need brackets. Attributes represent a characteristic of a
DataFrame/Series, whereas a method (which requires brackets) does something with the DataFrame/Series, as
introduced in the first tutorial.
In [8]: titanic.head()
Out[8]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S
In [9]: titanic.info()
<class 'pandas.core.frame.DataFrame'>
(continues on next page)
The method info() provides technical information about a DataFrame, so let’s explain the output in more detail:
• It is indeed a DataFrame.
• There are 891 entries, i.e. 891 rows.
• Each row has a row label (aka the index) with values ranging from 0 to 890.
• The table has 12 columns. Most columns have a value for each of the rows (all 891 values are non-null).
Some columns do have missing values and less than 891 non-null values.
[email protected]
166FVD0TPV • The columns Name, Sex, Cabin and Embarked consists of textual data (strings, aka object). The other
columns are numerical data with some of them whole numbers (aka integer) and others are real numbers
(aka float).
• The kind of data (characters, integers,. . . ) in the different columns are summarized by listing the dtypes.
• The approximate amount of RAM used to hold the DataFrame is provided as well.
• Getting data into pandas from many different file formats or data sources is supported by read_* functions.
• Exporting data out of pandas is provided by different to_* methods.
• The head/tail/info methods and the dtypes attribute are convenient for a first check.
For a complete overview of the input and output possibilities from and to pandas, see the user guide section about
reader and writer functions.
This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: Indication whether the passenger survived: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S
In [5]: ages.head()
Out[5]:
0 22.0
1 38.0
2 26.0
3 35.0
4 35.0
Name: Age, dtype: float64
To select a single column, use square brackets [] with the column name of the column of interest.
Each column in a DataFrame is a Series. As a single column is selected, the returned object is a pandas
Series. We can verify this by checking the type of the output:
In [6]: type(titanic["Age"])
Out[6]: pandas.core.series.Series
In [7]: titanic["Age"].shape
Out[7]: (891,)
DataFrame.shape is an attribute (remember the tutorial on reading and writing: do not use parentheses for attributes)
of a pandas Series and DataFrame containing the number of rows and columns: (nrows, ncolumns). A pandas
Series is 1-dimensional and only the number of rows is returned.
I’m interested in the age and sex of the titanic passengers.
In [8]: age_sex = titanic[["Age", "Sex"]]
In [9]: age_sex.head()
Out[9]:
Age Sex
0 22.0 male
1 38.0 female
2 26.0 female
3 35.0 female
4 35.0 male
To select multiple columns, use a list of column names within the selection brackets [].
Note: The inner square brackets define a Python list with column names, whereas the outer brackets are used to select
the data from a pandas DataFrame as seen in the previous example.
The selection returned a DataFrame with 891 rows and 2 columns. Remember, a DataFrame is 2-dimensional
with both a row and column dimension.
For basic information on indexing, see the user guide section on indexing and selecting data.
In [13]: above_35.head()
Out[13]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
6 7 0 1 McCarthy, Mr. Timothy J
˓→ male 54.0 0 0 17463 51.8625 E46 S
11 12 1 1 Bonnell, Miss. Elizabeth
˓→female 58.0 0 0 113783 26.5500 C103 S
(continues on next page)
To select rows based on a conditional expression, use a condition inside the selection brackets [].
The condition inside the selection brackets titanic["Age"] > 35 checks for which rows the Age column has a
value larger than 35:
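A sketch of the filtering statement that produced above_35:

above_35 = titanic[titanic["Age"] > 35]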
The output of the conditional expression (>, but also ==, !=, <, <=,. . . would work) is actually a pandas Series of
boolean values (either True or False) with the same number of rows as the original DataFrame. Such a Series
of boolean values can be used to filter the DataFrame by putting it in between the selection brackets []. Only rows
[email protected]
166FVD0TPV for which the value is True will be selected.
We now from before that the original titanic DataFrame consists of 891 rows. Let’s have a look at the amount of
rows which satisfy the condition by checking the shape attribute of the resulting DataFrame above_35:
In [15]: above_35.shape
Out[15]: (217, 12)
In [17]: class_23.head()
Out[17]:
PassengerId Survived Pclass Name Sex Age SibSp
˓→ Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1
˓→ 0 A/5 21171 7.2500 NaN S
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0
˓→ 0 STON/O2. 3101282 7.9250 NaN S
4 5 0 3 Allen, Mr. William Henry male 35.0 0
˓→ 0 373450 8.0500 NaN S
5 6 0 3 Moran, Mr. James male NaN 0
˓→ 0 330877 8.4583 NaN Q
7 8 0 3 Palsson, Master. Gosta Leonard male 2.0 3
˓→ 1 349909 21.0750 NaN S
Similar to the conditional expression, the isin() conditional function returns a True for each row where the values
are in the provided list. To filter the rows based on such a function, use the conditional function inside the selection
brackets []. In this case, the condition inside the selection brackets titanic["Pclass"].isin([2, 3]) checks for
which rows the Pclass column is either 2 or 3.
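A sketch of the statement that produced class_23:

class_23 = titanic[titanic["Pclass"].isin([2, 3])]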
The above is equivalent to filtering by rows for which the class is either 2 or 3 and combining the two statements with
an | (or) operator:
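A sketch of that equivalent statement:

class_23 = titanic[(titanic["Pclass"] == 2) | (titanic["Pclass"] == 3)]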
In [19]: class_23.head()
Out[19]:
PassengerId Survived Pclass Name Sex Age SibSp
˓→ Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1
˓→ 0 A/5 21171 7.2500 NaN S
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0
˓→ 0 STON/O2. 3101282 7.9250 NaN S
4 5 0 3 Allen, Mr. William Henry male 35.0 0
˓→ 0 373450 8.0500 NaN S
5 6 0 3 Moran, Mr. James male NaN 0
˓→ 0 330877 8.4583 NaN Q
7 8 0 3 Palsson, Master. Gosta Leonard male 2.0 3
˓→ 1 349909 21.0750 NaN S
Note: When combining multiple conditional statements, each condition must be surrounded by parentheses ().
Moreover, you cannot use or/and but need to use the or operator | and the and operator &.
See the dedicated section in the user guide about boolean indexing or about the isin function.
[email protected]
166FVD0TPV I want to work with passenger data for which the age is known.
In [20]: age_no_na = titanic[titanic["Age"].notna()]
In [21]: age_no_na.head()
Out[21]:
PassengerId Survived Pclass Name
˓→ Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris
˓→ male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th...
˓→female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina
˓→female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel)
˓→female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry
˓→ male 35.0 0 0 373450 8.0500 NaN S
The notna() conditional function returns a True for each row where the values are not a null value. As such, this can
be combined with the selection brackets [] to filter the data table.
You might wonder what actually changed, as the first 5 lines are still the same values. One way to verify is to check if
the shape has changed:
In [22]: age_no_na.shape
Out[22]: (714, 12)
For more dedicated functions on missing values, see the user guide section about handling missing data.
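A sketch of the combined row/column selection that produced adult_names below:

adult_names = titanic.loc[titanic["Age"] > 35, "Name"]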
In [24]: adult_names.head()
Out[24]:
1 Cumings, Mrs. John Bradley (Florence Briggs Th...
6 McCarthy, Mr. Timothy J
11 Bonnell, Miss. Elizabeth
13 Andersson, Mr. Anders Johan
15 Hewlett, Mrs. (Mary D Kingcome)
Name: Name, dtype: object
In this case, a subset of both rows and columns is made in one go and just using selection brackets [] is not sufficient
anymore. The loc/iloc operators are required in front of the selection brackets []. When using loc/iloc, the
part before the comma is the rows you want, and the part after the comma is the columns you want to select.
When using the column names, row labels or a condition expression, use the loc operator in front of the selection
brackets []. For both the part before and after the comma, you can use a single label, a list of labels, a slice of labels,
a conditional expression or a colon. Using a colon specifies you want to select all rows or columns.
I’m interested in rows 10 till 25 and columns 3 to 5.
In [25]: titanic.iloc[9:25, 2:5]
Out[25]:
[email protected]
Pclass Name Sex
166FVD0TPV 9 2 Nasser, Mrs. Nicholas (Adele Achem) female
10 3 Sandstrom, Miss. Marguerite Rut female
11 1 Bonnell, Miss. Elizabeth female
12 3 Saundercock, Mr. William Henry male
13 3 Andersson, Mr. Anders Johan male
.. ... ... ...
20 2 Fynney, Mr. Joseph J male
21 2 Beesley, Mr. Lawrence male
22 3 McGowan, Miss. Anna "Annie" female
23 1 Sloper, Mr. William Thompson male
24 3 Palsson, Miss. Torborg Danira female
Again, a subset of both rows and columns is made in one go and just using selection brackets [] is not sufficient
anymore. When specifically interested in certain rows and/or columns based on their position in the table, use the
iloc operator in front of the selection brackets [].
When selecting specific rows and/or columns with loc or iloc, new values can be assigned to the selected data. For
example, to assign the name anonymous to the first 3 elements of the third column:
In [26]: titanic.iloc[0:3, 3] = "anonymous"
In [27]: titanic.head()
Out[27]:
PassengerId Survived Pclass Name
˓→Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 anonymous
˓→male 22.0 1 0 A/5 21171 7.2500 NaN S
(continues on next page)
See the user guide section on different choices for indexing to get more insight in the usage of loc and iloc.
• When selecting subsets of data, square brackets [] are used.
• Inside these brackets, you can use a single column/row label, a list of column/row labels, a slice of labels, a
conditional expression or a colon.
• Select specific rows and/or columns using loc when using the row and column names
• Select specific rows and/or columns using iloc when using the positions in the table
• You can assign new values to a selection based on loc/iloc.
A full overview about indexing is provided in the user guide pages on indexing and selecting data.
For this tutorial, air quality data about NO2 is used, made available by openaq and using the py-openaq package.
The air_quality_no2.csv data set provides NO2 values for the measurement stations FR04014, BETR801 and
London Westminster in respectively Paris, Antwerp and London.
In [4]: air_quality.head()
Out[4]:
station_antwerp station_paris station_london
datetime
2019-05-07 02:00:00 NaN NaN 23.0
2019-05-07 03:00:00 50.5 25.0 19.0
2019-05-07 04:00:00 45.0 27.7 19.0
2019-05-07 05:00:00 NaN 50.4 16.0
2019-05-07 06:00:00 NaN 61.9 NaN
Note: The index_col and parse_dates parameters of the read_csv function are used to define the first
(0th) column as the index of the resulting DataFrame and to convert the dates in that column to Timestamp objects,
respectively.
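A sketch of the read call (the CSV path is a placeholder):

air_quality = pd.read_csv("data/air_quality_no2.csv", index_col=0, parse_dates=True)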
In [5]: air_quality.plot()
Out[5]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d255a9f90>
[email protected]
166FVD0TPV
With a DataFrame, pandas creates by default one line plot for each of the columns with numeric data.
I want to plot only the columns of the data table with the data from Paris.
In [6]: air_quality["station_paris"].plot()
Out[6]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d2561db10>
[email protected]
166FVD0TPV
To plot a specific column, use the selection method of the subset data tutorial in combination with the plot()
method. Hence, the plot() method works on both Series and DataFrame.
I want to visually compare the NO2 values measured in London versus Paris.
In [7]: air_quality.plot.scatter(x="station_london",
...: y="station_paris",
...: alpha=0.5)
...:
Out[7]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d27809c90>
[email protected]
166FVD0TPV
Apart from the default line plot when using the plot function, a number of alternatives are available to plot data.
Let’s use some standard Python to get an overview of the available plot methods:
Note: In many development environments as well as IPython and Jupyter notebooks, use the TAB key to get an
overview of the available methods, for example air_quality.plot. + TAB.
One of the options is DataFrame.plot.box(), which refers to a boxplot. The box method is applicable on the
air quality example data:
In [9]: air_quality.plot.box()
Out[9]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d2553a9d0>
[email protected]
166FVD0TPV
For an introduction to plots other than the default line plot, see the user guide section about supported plot styles.
I want each of the columns in a separate subplot.
Separate subplots for each of the data columns are supported by the subplots argument of the plot functions. Each
of the pandas plot functions has built-in options that are worth exploring.
Some more formatting options are explained in the user guide section on plot formatting.
I want to further customize, extend or save the resulting plot.
In [12]: air_quality.plot.area(ax=axs);
In [14]: fig.savefig("no2_concentrations.png")
Each of the plot objects created by pandas is a matplotlib object. As Matplotlib provides plenty of options to customize
plots, making the link between pandas and Matplotlib explicit enables all the power of Matplotlib for the plot. This
strategy is applied in the previous example:
fig, axs = plt.subplots(figsize=(12, 4))  # Create an empty matplotlib Figure and Axes
For this tutorial, air quality data about NO2 is used, made available by openaq and using the py-openaq package.
The air_quality_no2.csv data set provides NO2 values for the measurement stations FR04014, BETR801 and
London Westminster in respectively Paris, Antwerp and London.
In [3]: air_quality.head()
(continues on next page)
In [5]: air_quality.head()
Out[5]:
station_antwerp station_paris station_london london_mg_per_
˓→cubic
datetime
To create a new column, use the [] brackets with the new column name at the left side of the assignment.
Note: The calculation of the values is done element-wise. This means all values in the given column are multiplied
by the value 1.882 at once. You do not need to use a loop to iterate over each of the rows!
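A sketch of the assignment creating the new column, using the conversion factor 1.882 mentioned above:

air_quality["london_mg_per_cubic"] = air_quality["station_london"] * 1.882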
I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column
In [6]: air_quality["ratio_paris_antwerp"] = \
...: air_quality["station_paris"] / air_quality["station_antwerp"]
...:
In [7]: air_quality.head()
Out[7]:
station_antwerp station_paris station_london london_mg_per_
˓→cubic ratio_paris_antwerp
datetime
The calculation is again element-wise, so the / is applied to the values in each row.
Also other mathematical operators (+, -, *, /) and logical operators (<, >, ==, . . . ) work element-wise. The latter was
already used in the subset data tutorial to filter rows of a table using a conditional expression.
I want to rename the data columns to the corresponding station identifiers used by openAQ
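A sketch of the rename call, mapping the generic column names to the station identifiers mentioned earlier:

air_quality_renamed = air_quality.rename(
    columns={"station_antwerp": "BETR801",
             "station_paris": "FR04014",
             "station_london": "London Westminster"})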
In [9]: air_quality_renamed.head()
Out[9]:
BETR801 FR04014 London Westminster london_mg_per_cubic ratio_
˓→paris_antwerp
[email protected]
datetime
166FVD0TPV
˓→
The rename() function can be used for both row labels and column labels. Provide a dictionary with the current
names as keys and the new names as values to update the corresponding names.
The mapping should not be restricted to fixed names only, but can be a mapping function as well. For example,
converting the column names to lowercase letters can be done using a function as well:
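A sketch of that function-based rename:

air_quality_renamed = air_quality_renamed.rename(columns=str.lower)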
In [11]: air_quality_renamed.head()
Out[11]:
betr801 fr04014 london westminster london_mg_per_cubic ratio_
˓→paris_antwerp
datetime
Details about column or row label renaming are provided in the user guide section on renaming labels.
• Create a new column by assigning the output to the DataFrame with a new column name in between the [].
• Operations are element-wise, no need to loop over rows.
• Use rename with a dictionary or function to rename row labels or column names.
The user guide contains a separate section on column addition and deletion.
This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: Indication whether the passenger survived: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
• SibSp: Indication whether the passenger has siblings and/or a spouse aboard.
• Parch: Whether the passenger is alone or has family aboard.
• Ticket: Ticket number of the passenger.
• Fare: Indicating the fare.
• Cabin: The cabin of the passenger.
• Embarked: Port of embarkation.
In [3]: titanic.head()
Out[3]:
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3         1       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4         1       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5         0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S
Aggregating statistics
In [4]: titanic["Age"].mean()
Out[4]: 29.69911764705882
Different statistics are available and can be applied to columns with numerical data. Operations in general exclude
missing data and operate across rows by default.
What is the median age and ticket fare price of the titanic passengers?
The statistic applied to multiple columns of a DataFrame (the selection of two columns returns a DataFrame, see
the subset data tutorial) is calculated for each numeric column.
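The corresponding call is not shown in this extract; it was presumably the straightforward application of median() on the two-column selection:

titanic[["Age", "Fare"]].median()  # median of each numeric column in the selection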
The aggregating statistic can be calculated for multiple columns at the same time. Remember the describe function
from the first tutorial?
In [6]: titanic[["Age", "Fare"]].describe()
Out[6]:
Age Fare
count 714.000000 891.000000
mean 29.699118 32.204208
std 14.526497 49.693429
min 0.420000 0.000000
25% 20.125000 7.910400
50% 28.000000 14.454200
75% 38.000000 31.000000
max 80.000000 512.329200
Instead of the predefined statistics, specific combinations of aggregating statistics for given columns can be defined
using the DataFrame.agg() method:
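As an illustration of such a custom combination (a sketch only; the exact columns and statistics here are a choice, not taken from this extract):

titanic.agg({"Age": ["min", "max", "median", "skew"],
             "Fare": ["min", "max", "median", "mean"]})  # different statistics per column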
Details about descriptive statistics are provided in the user guide section on descriptive statistics.
What is the average age for male versus female titanic passengers?
As our interest is the average age for each gender, a subselection on these two columns is made first: titanic[[
"Sex", "Age"]]. Next, the groupby() method is applied on the Sex column to make a group per category.
The average age for each gender is calculated and returned.
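The call described here is not reproduced in this extract; it was presumably:

titanic[["Sex", "Age"]].groupby("Sex").mean()  # select the two columns, then average the ages per gender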
Calculating a given statistic (e.g. mean age) for each category in a column (e.g. male/female in the Sex column) is a
common pattern. The groupby method is used to support this type of operation. More generally, this fits the
split-apply-combine pattern:
• Split the data into groups
• Apply a function to each group independently
• Combine the results into a data structure
The apply and combine steps are typically done together in pandas.
In the previous example, we explicitly selected the two columns first. If we do not, the mean method is applied to each
numerical column:
In [9]: titanic.groupby("Sex").mean()
Out[9]:
PassengerId Survived Pclass Age SibSp Parch Fare
Sex
female 431.028662 0.742038 2.159236 27.915709 0.694268 0.649682 44.479818
male 454.147314 0.188908 2.389948 30.726645 0.429809 0.235702 25.523893
It does not make much sense to get the average value of the Pclass. If we are only interested in the average age for
each gender, the selection of columns (rectangular brackets [] as usual) is supported on the grouped data as well:
In [10]: titanic.groupby("Sex")["Age"].mean()
Out[10]:
Sex
female 27.915709
male 30.726645
Name: Age, dtype: float64
Note: The Pclass column contains numerical data but actually represents 3 categories (or factors) with respectively
the labels ‘1’, ‘2’ and ‘3’. Calculating statistics on these does not make much sense. Therefore, pandas provides a
Categorical data type to handle this type of data. More information is provided in the user guide Categorical data
section.
What is the mean ticket fare price for each of the sex and cabin class combinations?
Grouping can be done by multiple columns at the same time. Provide the column names as a list to the groupby()
method.
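The grouping call is not shown in this extract; for the question above it was presumably along these lines:

titanic.groupby(["Sex", "Pclass"])["Fare"].mean()  # mean fare for every sex/cabin-class combination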
A full description on the split-apply-combine approach is provided in the user guide section on groupby operations.
In [12]: titanic["Pclass"].value_counts()
Out[12]:
3 491
1 216
2 184
Name: Pclass, dtype: int64
The value_counts() method counts the number of records for each category in a column.
The function is a shortcut, as it is actually a groupby operation in combination with counting of the number of records
within each group:
In [13]: titanic.groupby("Pclass")["Pclass"].count()
Out[13]:
Pclass
1 216
2 184
3 491
Name: Pclass, dtype: int64
Note: Both size and count can be used in combination with groupby. Whereas size includes NaN values and
just provides the number of rows (size of the table), count excludes the missing values. In the value_counts
method, use the dropna argument to include or exclude the NaN values.
The user guide has a dedicated section on value_counts, see the page on discretization.
• Aggregation statistics can be calculated on entire columns or rows
• groupby provides the power of the split-apply-combine pattern
• value_counts is a convenient shortcut to count the number of entries in each category of a variable
A full description on the split-apply-combine approach is provided in the user guide pages about groupby operations.
This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has value 0 or 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of the passenger.
• Sex: Gender of the passenger.
• Age: Age of the passenger.
• SibSp: Indicates whether the passenger had siblings and/or a spouse aboard.
• Parch: Indicates whether the passenger was alone or had family aboard.
• Ticket: Ticket number of the passenger.
• Fare: The fare paid by the passenger.
• Cabin: The cabin of the passenger.
• Embarked: The embarkation category.
In [3]: titanic.head()
Out[3]:
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3         1       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4         1       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5         0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S
This tutorial uses air quality data about NO2 and Particulate matter less than 2.5 micrometers, made available by
openaq and using the py-openaq package. The air_quality_long.csv data set provides NO2 and PM2.5 values
for the measurement stations FR04014, BETR801 and London Westminster in respectively Paris, Antwerp and London.
The air-quality data set has the following columns:
• city: city where the sensor is used, either Paris, Antwerp or London
• country: country where the sensor is used, either FR, BE or GB
• location: the id of the sensor, either FR04014, BETR801 or London Westminster
• parameter: the parameter measured by the sensor, either NO2 or Particulate matter
• value: the measured value
• unit: the unit of the measured parameter, in this case ‘µg/m3 ’
and the index of the DataFrame is datetime, the datetime of the measurement.
Note: The air-quality data is provided in a so-called long format data representation with each observation on a
separate row and each variable a separate column of the data table. The long/narrow format is also known as the tidy
data format.
In [5]: air_quality.head()
Out[5]:
city country location parameter value unit
date.utc
2019-06-18 06:00:00+00:00 Antwerpen BE BETR801 pm25 18.0 µg/m3
2019-06-17 08:00:00+00:00 Antwerpen BE BETR801 pm25 6.5 µg/m3
2019-06-17 07:00:00+00:00 Antwerpen BE BETR801 pm25 18.5 µg/m3
2019-06-17 06:00:00+00:00 Antwerpen BE BETR801 pm25 16.0 µg/m3
2019-06-17 05:00:00+00:00 Antwerpen BE BETR801 pm25 7.5 µg/m3
I want to sort the titanic data according to the age of the passengers.
In [6]: titanic.sort_values(by="Age").head()
Out[6]:
     PassengerId  Survived  Pclass                             Name     Sex   Age  SibSp  Parch  Ticket     Fare Cabin Embarked
803          804         1       3  Thomas, Master. Assad Alexander    male  0.42      0      1    2625   8.5167   NaN        C
755          756         1       2        Hamalainen, Master. Viljo    male  0.67      1      1  250649  14.5000   NaN        S
644          645         1       3           Baclini, Miss. Eugenie  female  0.75      2      1    2666  19.2583   NaN        C
469          470         1       3    Baclini, Miss. Helene Barbara  female  0.75      2      1    2666  19.2583   NaN        C
78            79         1       2    Caldwell, Master. Alden Gates    male  0.83      0      2  248738  29.0000   NaN        S
I want to sort the titanic data according to the cabin class and age in descending order.
In [7]: titanic.sort_values(by=['Pclass', 'Age'], ascending=False).head()
Out[7]:
     PassengerId  Survived  Pclass                    Name     Sex   Age  SibSp  Parch  Ticket    Fare Cabin Embarked
851          852         0       3     Svensson, Mr. Johan    male  74.0      0      0  347060  7.7750   NaN        S
116          117         0       3    Connors, Mr. Patrick    male  70.5      0      0  370369  7.7500   NaN        Q
280          281         0       3        Duane, Mr. Frank    male  65.0      0      0  336439  7.7500   NaN        Q
483          484         1       3  Turkula, Mrs. (Hedwig)  female  63.0      0      0    4134  9.5875   NaN        S
With DataFrame.sort_values(), the rows in the table are sorted according to the defined column(s). The index
will follow the row order.
More details about sorting of tables are provided in the user guide section on sorting data.
Let's use a small subset of the air quality data set. We focus on NO2 data and only use the first two measurements of
each location (i.e. the head of each group). The subset of data will be called no2_subset.
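The statements creating this subset are not reproduced in this extract; they were presumably along these lines (filter on the parameter column, then take the head of each location group):

no2 = air_quality[air_quality["parameter"] == "no2"]        # keep only the NO2 measurements
no2_subset = no2.sort_index().groupby("location").head(2)   # first two rows of each location group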
In [10]: no2_subset
Out[10]: two rows per station with the columns city, country, location, parameter, value and unit, indexed by date.utc (data rows not reproduced in this extract)
I want the values for the three stations as separate columns next to each other
The pivot() function is purely a reshaping of the data: a single value for each index/column combination is
required.
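The reshaping call itself is not shown here; it was presumably:

no2_subset.pivot(columns="location", values="value")  # one column per measurement station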
As pandas supports plotting of multiple columns (see the plotting tutorial) out of the box, the conversion from long to wide
table format enables the plotting of the different time series at the same time:
In [12]: no2.head()
Out[12]:
city country location parameter value unit
date.utc
2019-06-21 00:00:00+00:00 Paris FR FR04014 no2 20.0 µg/m3
2019-06-20 23:00:00+00:00 Paris FR FR04014 no2 21.8 µg/m3
2019-06-20 22:00:00+00:00 Paris FR FR04014 no2 26.5 µg/m3
2019-06-20 21:00:00+00:00 Paris FR FR04014 no2 24.9 µg/m3
2019-06-20 20:00:00+00:00 Paris FR FR04014 no2 21.4 µg/m3
Note: When the index parameter is not defined, the existing index (row labels) is used.
For more information about pivot(), see the user guide section on pivoting DataFrame objects.
Pivot table
I want the mean concentrations for NO2 and PM2.5 in each of the stations in table form
In the case of pivot(), the data is only rearranged. When multiple values need to be aggregated (in this specific
case, the values on different time steps) pivot_table() can be used, providing an aggregation function (e.g. mean)
on how to combine these values.
Pivot table is a well known concept in spreadsheet software. When interested in summary columns for each variable
separately as well, set the margins parameter to True:
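The pivot_table calls are not reproduced in this extract; they were presumably along these lines:

air_quality.pivot_table(values="value", index="location",
                        columns="parameter", aggfunc="mean")               # mean per station/parameter
air_quality.pivot_table(values="value", index="location",
                        columns="parameter", aggfunc="mean", margins=True) # adds an 'All' summary row/column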
For more information about pivot_table(), see the user guide section on pivot tables.
Note: In case you are wondering, pivot_table() is indeed directly linked to groupby(). The same result can
be derived by grouping on both parameter and location:
air_quality.groupby(["parameter", "location"]).mean()
Have a look at groupby() in combination with unstack() at the user guide section on combining stats and
groupby.
Starting again from the wide format table created in the previous section:
In [17]: no2_pivoted.head()
Out[17]:
location date.utc BETR801 FR04014 London Westminster
0 2019-04-09 01:00:00+00:00 22.5 24.4 NaN
1 2019-04-09 02:00:00+00:00 53.5 27.4 67.0
2 2019-04-09 03:00:00+00:00 54.5 34.2 67.0
3 2019-04-09 04:00:00+00:00 34.5 48.5 41.0
4 2019-04-09 05:00:00+00:00 46.5 59.5 41.0
I want to collect all air quality NO2 measurements in a single column (long format)
In [19]: no_2.head()
Out[19]:
date.utc location value
0 2019-04-09 01:00:00+00:00 BETR801 22.5
1 2019-04-09 02:00:00+00:00 BETR801 53.5
2 2019-04-09 03:00:00+00:00 BETR801 54.5
3 2019-04-09 04:00:00+00:00 BETR801 34.5
4 2019-04-09 05:00:00+00:00 BETR801 46.5
The pandas.melt() method on a DataFrame converts the data table from wide format to long format. The
column headers become the variable names in a newly created column.
The solution above is the short version of how to apply pandas.melt(). The method will melt all columns NOT
mentioned in id_vars together into two columns: a column with the column header names and a column with the
values themselves. The latter column gets the name value by default.
The pandas.melt() method can be defined in more detail:
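The detailed call that produced the output below is not reproduced in this extract; based on the resulting column names it was presumably:

no_2 = no2_pivoted.melt(
    id_vars="date.utc",                                       # column(s) to keep as identifiers
    value_vars=["BETR801", "FR04014", "London Westminster"],  # columns to melt
    value_name="NO_2",                                        # custom name for the values column
    var_name="id_location")                                   # custom name for the header-names column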
In [21]: no_2.head()
Out[21]:
date.utc id_location NO_2
0 2019-04-09 01:00:00+00:00 BETR801 22.5
1 2019-04-09 02:00:00+00:00 BETR801 53.5
2 2019-04-09 03:00:00+00:00 BETR801 54.5
3 2019-04-09 04:00:00+00:00 BETR801 34.5
4 2019-04-09 05:00:00+00:00 BETR801 46.5
• value_name provides a custom column name for the values column instead of the default columns name
value
• var_name provides a custom column name for the columns collecting the column header names. Otherwise it
takes the index name or a default variable
Hence, the arguments value_name and var_name are just user-defined names for the two generated columns. The
columns to melt are defined by id_vars and value_vars.
Conversion from wide to long format with pandas.melt() is explained in the user guide section on reshaping by
melt.
• Sorting by one or more columns is supported by sort_values
• The pivot function is purely restructuring of the data, pivot_table supports aggregations
• The reverse of pivot (long to wide format) is melt (wide to long format)
A full overview is available in the user guide on the pages about reshaping and pivoting.
For this tutorial, air quality data about NO2 is used, made available by openaq and downloaded using the py-openaq
package.
The air_quality_no2_long.csv data set provides NO2 values for the measurement stations FR04014,
BETR801 and London Westminster in respectively Paris, Antwerp and London.
In [4]: air_quality_no2.head()
Out[4]:
date.utc location parameter value
0 2019-06-21 00:00:00+00:00 FR04014 no2 20.0
1 2019-06-20 23:00:00+00:00 FR04014 no2 21.8
2 2019-06-20 22:00:00+00:00 FR04014 no2 26.5
3 2019-06-20 21:00:00+00:00 FR04014 no2 24.9
4 2019-06-20 20:00:00+00:00 FR04014 no2 21.4
For this tutorial, air quality data about Particulate matter less than 2.5 micrometers is used, made available by openaq
and downloaded using the py-openaq package.
The air_quality_pm25_long.csv data set provides PM2.5 values for the measurement stations FR04014,
BETR801 and London Westminster in respectively Paris, Antwerp and London.
In [7]: air_quality_pm25.head()
Concatenating objects
I want to combine the measurements of NO2 and PM2.5, two tables with a similar structure, in a single table
In [9]: air_quality.head()
Out[9]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
The concat() function performs concatenation operations of multiple tables along one of the axes (row-wise or
column-wise).
By default concatenation is along axis 0, so the resulting table combines the rows of the input tables. Let’s check the
shape of the original and the concatenated tables to verify the operation:
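The concatenation and the shape check are not reproduced in this extract; they were presumably along these lines:

air_quality = pd.concat([air_quality_pm25, air_quality_no2], axis=0)  # stack the two tables row-wise
air_quality_pm25.shape, air_quality_no2.shape, air_quality.shape      # the row counts of the inputs add up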
Note: The axis argument will recur in a number of pandas methods that can be applied along an axis. A
DataFrame has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second
running horizontally across columns (axis 1). Most operations like concatenation or summary statistics are by default
across rows (axis 0), but can be applied across columns as well.
Sorting the table on the datetime information illustrates also the combination of both tables, with the parameter
column defining the origin of the table (either no2 from table air_quality_no2 or pm25 from table
air_quality_pm25):
In [14]: air_quality.head()
Out[14]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In this specific example, the parameter column provided by the data ensures that each of the original tables can be
identified. This is not always the case. The concat function provides a convenient solution with the keys argument,
adding an additional (hierarchical) row index. For example:
In [15]: air_quality_ = pd.concat([air_quality_pm25, air_quality_no2],
....: keys=["PM25", "NO2"])
....:
In [16]: air_quality_.head()
Out[16]:
date.utc location parameter value
PM25 0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
Note: The existence of multiple row/column indices at the same time has not been mentioned within these tutorials.
Hierarchical indexing or MultiIndex is an advanced and powerful pandas feature to analyze higher dimensional data.
Multi-indexing is out of scope for this pandas introduction. For the moment, remember that the function
reset_index can be used to convert any level of an index to a column, e.g. air_quality.reset_index(level=0).
Feel free to dive into the world of multi-indexing at the user guide section on advanced indexing.
More options on table concatenation (row and column wise) and how concat can be used to define the logic (union
or intersection) of the indexes on the other axes is provided at the section on object concatenation.
Add the station coordinates, provided by the stations metadata table, to the corresponding rows in the measurements
table.
Warning: The air quality measurement station coordinates are stored in a data file air_quality_stations.csv,
downloaded using the py-openaq package.
Note: The stations used in this example (FR04014, BETR801 and London Westminster) are just three entries listed
in the metadata table. We only want to add the coordinates of these three to the measurements table, each on the
corresponding rows of the air_quality table.
In [19]: air_quality.head()
Out[19]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
Using the merge() function, for each of the rows in the air_quality table, the corresponding coordinates are
added from the air_quality_stations_coord table. Both tables have the column location in common
which is used as a key to combine the information. By choosing the left join, only the locations available in the
air_quality (left) table, i.e. FR04014, BETR801 and London Westminster, end up in the resulting table. The
merge function supports multiple join options similar to database-style operations.
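The merge call described above is not reproduced in this extract; it was presumably along these lines (the name air_quality_stations_coord follows the text above):

air_quality = pd.merge(air_quality, air_quality_stations_coord,
                       how="left", on="location")  # left join on the shared location column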
Add the parameter full description and name, provided by the parameters metadata table, to the measurements table
Warning: The air quality parameters metadata are stored in a data file air_quality_parameters.csv,
downloaded using the py-openaq package.
In [23]: air_quality_parameters.head()
Out[23]:
id description name
0 bc Black Carbon BC
1 co Carbon Monoxide CO
2 no2 Nitrogen Dioxide NO2
3 o3 Ozone O3
4 pm10 Particulate matter less than 10 micrometers in... PM10
In [25]: air_quality.head()
Out[25]:
                    date.utc            location parameter  value  coordinates.latitude  coordinates.longitude    id                                        description   name
0  2019-05-07 01:00:00+00:00  London Westminster       no2   23.0              51.49467               -0.13193   no2                                   Nitrogen Dioxide    NO2
1  2019-05-07 01:00:00+00:00             FR04014       no2   25.0              48.83724                2.39390   no2                                   Nitrogen Dioxide    NO2
2  2019-05-07 01:00:00+00:00             FR04014       no2   25.0              48.83722                2.39390   no2                                   Nitrogen Dioxide    NO2
3  2019-05-07 01:00:00+00:00             BETR801      pm25   12.5              51.20966                4.43182  pm25  Particulate matter less than 2.5 micrometers i...  PM2.5
4  2019-05-07 01:00:00+00:00             BETR801       no2   50.5              51.20966                4.43182   no2                                   Nitrogen Dioxide    NO2
Compared to the previous example, there is no common column name. However, the parameter column in the
air_quality table and the id column in the air_quality_parameters table both provide the measured
variable in a common format. The left_on and right_on arguments are used here (instead of just on) to make
the link between the two tables.
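The corresponding call is not shown in this extract; it was presumably:

air_quality = pd.merge(air_quality, air_quality_parameters,
                       how="left", left_on="parameter", right_on="id")  # different key names in the two tables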
pandas also supports inner, outer, and right joins. More information on join/merge of tables is provided in the user
guide section on database style merging of tables. Or have a look at the comparison with SQL page.
• Multiple tables can be concatenated both column-wise and row-wise using the concat function.
• For database-like merging/joining of tables, use the merge function.
See the user guide for a full description of the various facilities to combine data tables.
For this tutorial, air quality data about NO2 and Particulate matter less than 2.5 micrometers is used, made available
by openaq and downloaded using the py-openaq package. The air_quality_no2_long.csv data set provides
NO2 values for the measurement stations FR04014, BETR801 and London Westminster in respectively Paris, Antwerp
and London.
In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m3
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m3
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m3
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m3
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m3
In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)
I want to work with the dates in the column datetime as datetime objects instead of plain text
In [8]: air_quality["datetime"]
Out[8]:
0      2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]
Initially, the values in datetime are character strings and do not provide any datetime operations (e.g. extract the
year, day of the week, ...). By applying the to_datetime function, pandas interprets the strings and converts these to
datetime (i.e. datetime64[ns, UTC]) objects. In pandas we call these datetime objects, similar to datetime.datetime
from the standard library, a pandas.Timestamp.
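The conversion itself is not reproduced in this extract; it was presumably:

air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])  # parse the strings into Timestamp objects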
Note: As many data sets do contain datetime information in one of the columns, pandas input functions like pandas.read_csv()
and pandas.read_json() can do the transformation to dates when reading the data, using the
parse_dates parameter with a list of the columns to read as Timestamp:
pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])
Why are these pandas.Timestamp objects useful? Let's illustrate the added value with some example cases.
What is the start and end date of the time series data set we are working with?
Using pandas.Timestamp for datetimes enables us to calculate with date information and make them comparable.
Hence, we can use this to get the length of our time series:
The result is a pandas.Timedelta object, similar to datetime.timedelta from the standard Python library
and defining a time duration.
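The calls behind this are not shown in this extract; they were presumably along these lines:

air_quality["datetime"].min(), air_quality["datetime"].max()   # first and last timestamp
air_quality["datetime"].max() - air_quality["datetime"].min()  # length of the series as a Timedelta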
The different time concepts supported by pandas are explained in the user guide section on time related concepts.
I want to add a new column to the DataFrame containing only the month of the measurement
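The assignment is not reproduced in this extract; it was presumably:

air_quality["month"] = air_quality["datetime"].dt.month  # the month is available through the dt accessor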
In [12]: air_quality.head()
Out[12]:
city country datetime location parameter value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m3 6
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m3 6
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m3 6
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m3 6
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m3 6
By using Timestamp objects for dates, a lot of time-related properties are provided by pandas. For example the
month, but also year, weekofyear, quarter, ... All of these properties are accessible by the dt accessor.
An overview of the existing date properties is given in the time and date components overview table. More details
about the dt accessor to return datetime like properties is explained in a dedicated section on the dt accessor.
What is the average NO2 concentration for each day of the week for each of the measurement locations?
In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64
Remember the split-apply-combine pattern provided by groupby from the tutorial on statistics calculation? Here,
we want to calculate a given statistic (e.g. mean NO2) for each weekday and for each measurement location. To
group on weekdays, we use the datetime property weekday (with Monday=0 and Sunday=6) of pandas Timestamp,
which is also accessible by the dt accessor. The grouping on both locations and weekdays can be done to split the
calculation of the mean on each of these combinations.
Danger: As we are working with a very short time series in these examples, the analysis does not provide a
long-term representative result!
Plot the typical NO2 pattern during the day of our time series of all stations together. In other words, what is the
average value for each hour of the day?
In [15]: air_quality.groupby(
....: air_quality["datetime"].dt.hour)["value"].mean().plot(kind='bar',
....: rot=0,
....: ax=axs)
....:
Out[15]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3d253d7bd0>
Similar to the previous case, we want to calculate a given statistic (e.g. mean NO2) for each hour of the day and we
can use the split-apply-combine approach again. For this case, we use the datetime property hour of the pandas
Timestamp, which is also accessible by the dt accessor.
Datetime as index
In the tutorial on reshaping, pivot() was introduced to reshape the data table with each of the measurement
locations as a separate column:
In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
Note: By pivoting the data, the datetime information became the index of the table. In general, setting a column as
an index can be achieved by the set_index function.
Working with a datetime index (i.e. DatetimeIndex) provides powerful functionalities. For example, we do not
need the dt accessor to get the time series properties, but have these properties available on the index directly:
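For example, something along these lines works directly on the index (a sketch, not shown in this extract):

no_2.index.year, no_2.index.weekday  # year and weekday straight from the DatetimeIndex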
Some other advantages are the convenient subsetting of time periods and the adapted time scale on plots. Let's apply this
to our data.
Create a plot of the NO2 values in the different stations from the 20th of May till the end of the 21st of May.
In [21]: no_2["2019-05-20":"2019-05-21"].plot();
By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
More information on the DatetimeIndex and the slicing by using strings is provided in the section on time series
indexing.
Aggregate the current hourly time series values to the monthly maximum value in each of the stations.
In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0
A very powerful method on time series data with a datetime index, is the ability to resample() time series to
another frequency (e.g., converting secondly data into 5-minutely data).
The resample() method is similar to a groupby operation:
• it provides a time-based grouping, by using a string (e.g. M, 5H,. . . ) that defines the target frequency
• it requires an aggregation function such as mean, max,. . .
An overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
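The resampling call that produced monthly_max is not reproduced in this extract; it was presumably:

monthly_max = no_2.resample("M").max()  # group per calendar month ('M') and take the maximum of each group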
When defined, the frequency of the time series is provided by the freq attribute:
In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>
More details on the power of time series resampling are provided in the user guide section on resampling.
• Valid date strings can be converted to datetime objects using the to_datetime function or as part of read functions.
• Datetime objects in pandas support calculations, logical operations and convenient date-related properties using
the dt accessor.
• A DatetimeIndex contains these date-related properties and supports convenient slicing.
• Resample is a powerful method to change the frequency of a time series.
A full overview on time series is given in the pages on time series and date functionality.
This tutorial uses the titanic data set, stored as CSV. The data consists of the following data columns:
• PassengerId: Id of every passenger.
• Survived: This feature has value 0 or 1: 0 for not survived and 1 for survived.
• Pclass: There are 3 classes: Class 1, Class 2 and Class 3.
• Name: Name of passenger.
• Sex: Gender of passenger.
• Age: Age of passenger.
In [3]: titanic.head()
Out[3]:
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3         1       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4         1       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5         0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S
In [4]: titanic["Name"].str.lower()
Out[4]:
0 braund, mr. owen harris
1 cumings, mrs. john bradley (florence briggs th...
2 heikkinen, miss. laina
3 futrelle, mrs. jacques heath (lily may peel)
4 allen, mr. william henry
...
886 montvila, rev. juozas
887 graham, miss. margaret edith
888 johnston, miss. catherine helen "carrie"
889 behr, mr. karl howell
890 dooley, mr. patrick
Name: Name, Length: 891, dtype: object
To make each of the strings in the Name column lowercase, select the Name column (see tutorial on selection of data),
add the str accessor and apply the lower method. As such, each of the strings is converted element wise.
Similar to datetime objects in the time series tutorial having a dt accessor, a number of specialized string methods are
available when using the str accessor. These methods have in general matching names with the equivalent built-in
string methods for single elements, but are applied element-wise (remember element wise calculations?) on each of
the values of the columns.
Create a new column Surname that contains the surname of the Passengers by extracting the part before the comma.
In [5]: titanic["Name"].str.split(",")
Out[5]:
0 [Braund, Mr. Owen Harris]
1 [Cumings, Mrs. John Bradley (Florence Briggs ...
2 [Heikkinen, Miss. Laina]
3 [Futrelle, Mrs. Jacques Heath (Lily May Peel)]
4 [Allen, Mr. William Henry]
...
886 [Montvila, Rev. Juozas]
887 [Graham, Miss. Margaret Edith]
888 [Johnston, Miss. Catherine Helen "Carrie"]
889 [Behr, Mr. Karl Howell]
890 [Dooley, Mr. Patrick]
Name: Name, Length: 891, dtype: object
Using the Series.str.split() method, each of the values is returned as a list of 2 elements. The first element
is the part before the comma and the second element the part after the comma.
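The assignment that creates the Surname column is not reproduced in this extract; it was presumably:

titanic["Surname"] = titanic["Name"].str.split(",").str.get(0)  # keep only the part before the comma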
In [7]: titanic["Surname"]
Out[7]:
0 Braund
1 Cumings
2 Heikkinen
3 Futrelle
4 Allen
...
886     Montvila
887       Graham
888 Johnston
889 Behr
890 Dooley
Name: Surname, Length: 891, dtype: object
As we are only interested in the first part representing the surname (element 0), we can again use the str accessor
and apply Series.str.get() to extract the relevant part. Indeed, these string functions can be concatenated to
combine multiple functions at once!
More information on extracting parts of strings is available in the user guide section on splitting and replacing strings.
Extract the passenger data about the Countess on board the Titanic.
In [8]: titanic["Name"].str.contains("Countess")
Out[8]:
0 False
1 False
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Name, Length: 891, dtype: bool
In [9]: titanic[titanic["Name"].str.contains("Countess")]
Out[9]:
     PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch  Ticket  Fare Cabin Embarked Surname
759          760         1       1  Rothes, the Countess. of (Lucy Noel Martha Dye...  female  33.0      0      0  110152  86.5   B77        S  Rothes
Note: More powerful extractions on strings are supported, as the Series.str.contains() and Series.str.extract()
methods accept regular expressions, but this is out of scope of this tutorial.
More information on extracting parts of strings is available in the user guide section on string matching and extracting.
Which passenger of the titanic has the longest name?
In [10]: titanic["Name"].str.len()
Out[10]:
0 23
1 51
2 22
3 44
4      24
       ..
886 21
887 28
888 40
889 21
890 19
Name: Name, Length: 891, dtype: int64
To get the longest name we first have to get the lengths of each of the names in the Name column. By using pandas
string methods, the Series.str.len() function is applied to each of the names individually (element-wise).
In [11]: titanic["Name"].str.len().idxmax()
Out[11]: 307
Next, we need to get the corresponding location, preferably the index label, in the table for which the name length is
the largest. The idxmax() method does exactly that. It is not a string method and is applied to integers, so no str
is used.
Based on the index name of the row (307) and the column (Name), we can do a selection using the loc operator,
introduced in the tutorial on subsetting.
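Combining both steps, the selection is presumably along these lines:

titanic.loc[titanic["Name"].str.len().idxmax(), "Name"]  # row label 307, column Name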
In the ‘Sex’ columns, replace values of ‘male’ by ‘M’ and all ‘female’ values by ‘F’
In [14]: titanic["Sex_short"]
Out[14]:
0 M
1 F
2 F
3 F
4 M
..
886 M
887 F
888 F
889 M
890 M
Name: Sex_short, Length: 891, dtype: object
Whereas replace() is not a string method, it provides a convenient way to use mappings or vocabularies to translate
certain values. It requires a dictionary to define the mapping {from : to}.
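The replacement described above is not reproduced in this extract; it was presumably:

titanic["Sex_short"] = titanic["Sex"].replace({"male": "M", "female": "F"})  # dictionary mapping {from: to}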
Warning: There is also a replace() method available to replace a specific set of characters. However, when
having a mapping of multiple values, this would become:
titanic["Sex_short"] = titanic["Sex"].str.replace("female", "F")
titanic["Sex_short"] = titanic["Sex_short"].str.replace("male", "M")
This would become cumbersome and easily lead to mistakes. Just think (or try out yourself) what would happen if
those two statements are applied in the opposite order. . .
Here we discuss a lot of the essential functionality common to the pandas data structures. Here’s how to create some
of the objects used in the examples from the previous section:
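The creation code referred to here is missing from this extract; the objects used below were presumably set up along these lines (shapes and labels inferred from the outputs that follow):

index = pd.date_range("1/1/2000", periods=8)
s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
long_series = pd.Series(np.random.randn(1000))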
To view a small sample of a Series or DataFrame object, use the head() and tail() methods. The default number
of elements to display is five, but you may pass a custom number.
In [5]: long_series.head()
Out[5]:
0 -1.157892
1 -1.344312
2 0.844885
3 1.075770
4 -0.109050
dtype: float64
In [6]: long_series.tail(3)
Out[6]:
997 -0.289388
998 -1.020544
999 0.589993
dtype: float64
pandas objects have a number of attributes enabling you to access the metadata
• shape: gives the axis dimensions of the object, consistent with ndarray
• Axis labels
– Series: index (only axis)
– DataFrame: index (rows) and columns
Note, these attributes can be safely assigned to!
In [7]: df[:2]
Out[7]:
A B C
2000-01-01 -0.173215 0.119209 -1.044236
2000-01-02 -0.861849 -2.104569 -0.494929
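The assignment between these two displays is not shown in this extract; given the lowercase columns in the next output it was presumably something like:

df.columns = [x.lower() for x in df.columns]  # axis labels can be assigned to directly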
In [9]: df
Out[9]:
a b c
2000-01-01 -0.173215 0.119209 -1.044236
2000-01-02 -0.861849 -2.104569 -0.494929
2000-01-03 1.071804 0.721555 -0.706771
2000-01-04 -1.039575 0.271860 -0.424972
2000-01-05 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427
2000-01-07 0.524988 0.404705 0.577046
2000-01-08 -1.715002 -1.039268 -0.370647
Pandas objects (Index, Series, DataFrame) can be thought of as containers for arrays, which hold the actual
data and do the actual computation. For many types, the underlying array is a numpy.ndarray. However, pandas
and 3rd party libraries may extend NumPy’s type system to add support for custom arrays (see dtypes).
To get the actual data inside an Index or Series, use the .array property
In [10]: s.array
Out[10]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64
In [11]: s.index.array
Out[11]:
<PandasArray>
['a', 'b', 'c', 'd', 'e']
Length: 5, dtype: object
array will always be an ExtensionArray. The exact details of what an ExtensionArray is and why pandas
uses them are a bit beyond the scope of this introduction. See dtypes for more.
If you know you need a NumPy array, use to_numpy() or numpy.asarray().
In [12]: s.to_numpy()
Out[12]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
In [13]: np.asarray(s)
Out[13]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
When the Series or Index is backed by an ExtensionArray, to_numpy() may involve copying data and coercing
values. See dtypes for more.
to_numpy() gives some control over the dtype of the resulting numpy.ndarray. For example, consider date-
times with timezones. NumPy doesn’t have a dtype to represent timezone-aware datetimes, so there are two possibly
useful representations:
1. An object-dtype numpy.ndarray with Timestamp objects, each with the correct tz
2. A datetime64[ns] -dtype numpy.ndarray, where the values have been converted to UTC and the time-
zone discarded
Timezones may be preserved with dtype=object
In [15]: ser.to_numpy(dtype=object)
Out[15]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
dtype=object)
In [16]: ser.to_numpy(dtype="datetime64[ns]")
Out[16]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')
Getting the “raw data” inside a DataFrame is possibly a bit more complex. When your DataFrame only has a
single data type for all the columns, DataFrame.to_numpy() will return the underlying data:
In [17]: df.to_numpy()
Out[17]:
array([[-0.1732, 0.1192, -1.0442],
[-0.8618, -2.1046, -0.4949],
[ 1.0718, 0.7216, -0.7068],
[-1.0396, 0.2719, -0.425 ],
[ 0.567 , 0.2762, -1.0874],
[-0.6737, 0.1136, -1.4784],
[ 0.525 , 0.4047, 0.577 ],
[-1.715 , -1.0393, -0.3706]])
If a DataFrame contains homogeneously-typed data, the ndarray can actually be modified in-place, and the changes
will be reflected in the data structure. For heterogeneous data (e.g. some of the DataFrame’s columns are not all the
same dtype), this will not be the case. The values attribute itself, unlike the axis labels, cannot be assigned to.
Note: When working with heterogeneous data, the dtype of the resulting ndarray will be chosen to accommodate all
of the data involved. For example, if strings are involved, the result will be of object dtype. If there are only floats and
integers, the resulting array will be of float dtype.
In the past, pandas recommended Series.values or DataFrame.values for extracting the data from a Series
or DataFrame. You’ll still find references to these in old code bases and online. Going forward, we recommend
avoiding .values and using .array or .to_numpy(). .values has the following drawbacks:
1. When your Series contains an extension type, it’s unclear whether Series.values returns a NumPy array
or the extension array. Series.array will always return an ExtensionArray, and will never copy data.
Series.to_numpy() will always return a NumPy array, potentially at the cost of copying / coercing values.
2. When your DataFrame contains a mixture of data types, DataFrame.values may involve copying data and
coercing values to a common dtype, a relatively expensive operation. DataFrame.to_numpy(), being a
method, makes it clearer that the returned NumPy array may not be a view on the same data in the DataFrame.
Accelerated operations
pandas has support for accelerating certain types of binary numerical and boolean operations using the numexpr
and bottleneck libraries.
These libraries are especially useful when dealing with large data sets, and provide large speedups. numexpr uses
smart chunking, caching, and multiple cores. bottleneck is a set of specialized cython routines that are especially
fast when dealing with arrays that have nans.
Here is a sample (using 100 column x 100,000 row DataFrames):
You are highly encouraged to install both libraries. See the section Recommended Dependencies for more installation
info.
These are both enabled for use by default; you can control this by setting the options:
pd.set_option('compute.use_bottleneck', False)
pd.set_option('compute.use_numexpr', False)
With binary operations between pandas data structures, there are two key points of interest:
• Broadcasting behavior between higher- (e.g. DataFrame) and lower-dimensional (e.g. Series) objects.
• Missing data in computations.
We will demonstrate how to manage these issues independently, though they can be handled simultaneously.
DataFrame has the methods add(), sub(), mul(), div() and related functions radd(), rsub(), . . . for
carrying out binary operations. For broadcasting behavior, Series input is of primary interest. Using these functions,
you can either match on the index or columns via the axis keyword:
In [18]: df = pd.DataFrame({
....: 'one': pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
....: 'two': pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
....: 'three': pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
....:
In [19]: df
Out[19]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
Series and Index also support the divmod() builtin. This function takes the floor division and modulo operation at
the same time returning a two-tuple of the same type as the left hand side. For example:
In [29]: s = pd.Series(np.arange(10))
In [30]: s
Out[30]:
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
dtype: int64
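The call that produced the div and rem objects shown next is not reproduced in this extract; it was presumably the scalar form:

div, rem = divmod(s, 3)  # floor division and remainder of every element by 3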
In [32]: div
Out[32]:
0 0
In [33]: rem
Out[33]:
0 0
1 1
2 2
3 0
4 1
5 2
6 0
7 1
8 2
9 0
dtype: int64
In [35]: idx
Out[35]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')
In [36]: div, rem = divmod(idx, 3)
In [37]: div
Out[37]: Int64Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64')
In [38]: rem
Out[38]: Int64Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')
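Elementwise divmod() with a list of divisors is also supported; the input that produced the next two outputs is missing from this extract, but judging from the results it was presumably something like:

div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])  # one divisor per element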
In [40]: div
Out[40]:
0 0
1 0
2 0
3 1
4 1
5 1
6 1
7 1
8 1
9 1
dtype: int64
In [41]: rem
In Series and DataFrame, the arithmetic functions have the option of inputting a fill_value, namely a value to substitute
when at most one of the values at a location are missing. For example, when adding two DataFrame objects, you may
wish to treat NaN as 0 unless both DataFrames are missing that value, in which case the result will be NaN (you can
later replace NaN with some other value using fillna if you wish).
In [42]: df
Out[42]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [43]: df2
Out[43]:
one two three
a 1.394981 1.772517 1.000000
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [44]: df + df2
Out[44]:
one two three
a 2.789963 3.545034 NaN
b 0.686107 3.824246 -0.100780
c 1.390491 2.956737 2.454870
d NaN 0.558688 -1.226343
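The fill_value variant described above is not shown in this extract; it would presumably look like:

df.add(df2, fill_value=0)  # treat a single missing operand as 0; only (NaN, NaN) pairs stay NaN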
Flexible comparisons
Series and DataFrame have the binary comparison methods eq, ne, lt, gt, le, and ge whose behavior is analogous
to the binary arithmetic operations described above:
In [46]: df.gt(df2)
Out[46]:
one two three
a False False False
b False False False
c False False False
d False False False
In [47]: df2.ne(df)
Out[47]:
one two three
a False False True
b False False False
c False False False
d True False False
These operations produce a pandas object of the same type as the left-hand-side input that is of dtype bool. These
boolean objects can be used in indexing operations, see the section on Boolean indexing.
Boolean reductions
You can apply the reductions: empty, any(), all(), and bool() to provide a way to summarize a boolean result.
In [48]: (df > 0).all()
Out[48]:
one False
two True
three False
dtype: bool
You can test if a pandas object is empty, via the empty property.
In [51]: df.empty
Out[51]: False
In [52]: pd.DataFrame(columns=list('ABC')).empty
Out[52]: True
To evaluate single-element pandas objects in a boolean context, use the method bool():
In [53]: pd.Series([True]).bool()
Out[53]: True
In [54]: pd.Series([False]).bool()
Out[54]: False
In [55]: pd.DataFrame([[True]]).bool()
Out[55]: True
In [56]: pd.DataFrame([[False]]).bool()
Out[56]: False
Evaluating a DataFrame in a boolean context (for example in an if statement), or combining DataFrames with Python's boolean operators, as in
>>> df and df2
will raise an error, as you are trying to compare multiple values:
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
Often you may find that there is more than one way to compute the same result. As a simple example, consider df
+ df and df * 2. To test that these two computations produce the same result, given the tools shown above, you
might imagine using (df + df == df * 2).all(). But in fact, this expression is False:
In [57]: df + df == df * 2
Out[57]:
one two three
a True True False
b True True True
c True True True
d False True True
Notice that the boolean DataFrame df + df == df * 2 contains some False values! This is because NaNs do
not compare as equals:
So, NDFrames (such as Series and DataFrames) have an equals() method for testing equality, with NaNs in corre-
sponding locations treated as equal.
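For instance (a sketch of the intended usage):

(df + df).equals(df * 2)  # True: NaNs in matching locations are treated as equal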
Note that the Series or DataFrame index needs to be in the same order for equality to be True:
In [63]: df1.equals(df2)
Out[63]: False
In [64]: df1.equals(df2.sort_index())
Out[64]: True
You can conveniently perform element-wise comparisons when comparing a pandas data structure with a scalar value:
Pandas also handles element-wise comparisons between different array-like objects of the same length:
Trying to compare Index or Series objects of different lengths will raise a ValueError:
Note that this is different from the NumPy behavior where a comparison can be broadcast:
A problem occasionally arising is the combination of two similar data sets where values in one are preferred over the
other. An example would be two data series representing a particular economic indicator where one is considered to
be of “higher quality”. However, the lower quality series might extend further back in history or have more complete
data coverage. As such, we would like to combine two DataFrame objects where missing values in one DataFrame
are conditionally filled with like-labeled values from the other DataFrame. The function implementing this operation
is combine_first(), which we illustrate:
In [73]: df1
Out[73]:
A B
0 1.0 NaN
1 NaN 2.0
2 3.0 3.0
3 5.0 NaN
4 NaN 6.0
In [74]: df2
Out[74]:
A B
0 5.0 NaN
1 2.0 NaN
2 4.0 3.0
3 NaN 4.0
4 3.0 6.0
5 7.0 8.0
In [75]: df1.combine_first(df2)
Out[75]:
A B
0 1.0 NaN
1 2.0 2.0
2 3.0 3.0
3 5.0 4.0
4 3.0 6.0
5 7.0 8.0
The combine_first() method above calls the more general DataFrame.combine(). This method takes
another DataFrame and a combiner function, aligns the input DataFrame and then passes the combiner function pairs
of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
In [76]: def combiner(x, y):
....: return np.where(pd.isna(x), y, x)
....:
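Applying it then reproduces the combine_first() result (the call itself is not shown in this extract):

df1.combine(df2, combiner)  # same output as df1.combine_first(df2)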
Descriptive statistics
There exists a large number of methods for computing descriptive statistics and other related operations on Series,
DataFrame. Most of these are aggregations (hence producing a lower-dimensional result) like sum(), mean(), and
quantile(), but some of them, like cumsum() and cumprod(), produce an object of the same size. Generally
speaking, these methods take an axis argument, just like ndarray.{sum, std, . . . }, but the axis can be specified by name
or integer:
• Series: no axis argument needed
• DataFrame: “index” (axis=0, default), “columns” (axis=1)
For example:
In [77]: df
Out[77]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [78]: df.mean(0)
Out[78]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64
In [79]: df.mean(1)
Out[79]:
a 1.583749
b 0.734929
c 1.133683
d -0.166914
dtype: float64
All such methods have a skipna option signaling whether to exclude missing data (True by default):
In [80]: df.sum(0, skipna=False)
Out[80]:
one NaN
two 5.442353
three NaN
dtype: float64
Combined with the broadcasting / arithmetic behavior, one can describe various statistical procedures, like standard-
ization (rendering data zero mean and standard deviation 1), very concisely:
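The standardization computations whose results are shown below are not reproduced in this extract; they were presumably along these lines:

ts_stand = (df - df.mean()) / df.std()                        # standardize each column
xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0)  # standardize each row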
In [83]: ts_stand.std()
Out[83]:
one 1.0
two 1.0
three 1.0
dtype: float64
In [85]: xs_stand.std(1)
Out[85]:
a 1.0
b 1.0
c 1.0
d 1.0
dtype: float64
Note that methods like cumsum() and cumprod() preserve the location of NaN values. This is somewhat different
from expanding() and rolling(). For more details please see this note.
In [86]: df.cumsum()
Out[86]:
one two three
a 1.394981 1.772517 NaN
b 1.738035 3.684640 -0.050390
c 2.433281 5.163008 1.177045
d NaN 5.442353 0.563873
Here is a quick reference summary table of common functions. Each also takes an optional level parameter which
applies only if the object has a hierarchical index.
Function Description
count Number of non-NA observations
sum Sum of values
mean Mean of values
mad Mean absolute deviation
median Arithmetic median of values
min Minimum
max Maximum
mode Mode
abs Absolute Value
prod Product of values
std Bessel-corrected sample standard deviation
var Unbiased variance
sem Standard error of the mean
skew Sample skewness (3rd moment)
kurt Sample kurtosis (4th moment)
quantile Sample quantile (value at %)
cumsum Cumulative sum
cumprod Cumulative product
cummax Cumulative maximum
cummin Cumulative minimum
Note that by chance some NumPy methods, like mean, std, and sum, will exclude NAs on Series input by default:
In [87]: np.mean(df['one'])
Out[87]: 0.8110935116651192
In [88]: np.mean(df['one'].to_numpy())
Out[88]: nan
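The construction of series is not included in this extract; given the nunique() result below it was presumably along these lines:

series = pd.Series(np.random.randn(500))
series[20:500] = np.nan  # only the first 20 positions keep non-missing values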
In [91]: series[10:20] = 5
In [92]: series.nunique()
Out[92]: 11
There is a convenient describe() function which computes a variety of summary statistics about a Series or the
columns of a DataFrame (excluding NAs of course):
In [95]: series.describe()
Out[95]:
In [98]: frame.describe()
Out[98]:
a b c d e
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean 0.033387 0.030045 -0.043719 -0.051686 0.005979
std 1.017152 0.978743 1.025270 1.015988 1.006695
min -3.000951 -2.637901 -3.303099 -3.159200 -3.188821
25% -0.647623 -0.576449 -0.712369 -0.691338 -0.691115
50% 0.047578 -0.021499 -0.023888 -0.032652 -0.025363
75% 0.729907 0.775880 0.618896 0.670047 0.649748
max 2.740139 2.752332 3.004229 2.728702 3.240991
You can select specific percentiles to include in the output:
In [99]: series.describe(percentiles=[.05, .25, .75, .95])
Out[99]:
count 500.000000
mean -0.021292
std 1.015906
min -2.683763
5% -1.645423
25% -0.699070
50% -0.069718
75% 0.714483
95% 1.711409
max 3.160915
dtype: float64
In [101]: s.describe()
Out[101]:
count 9
unique 4
top a
freq 5
Note that on a mixed-type DataFrame object, describe() will restrict the summary to include only numerical
columns or, if none are, only categorical columns:
In [103]: frame.describe()
Out[103]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
This behavior can be controlled by providing a list of types as include/exclude arguments. The special value
all can also be used:
In [104]: frame.describe(include=['object'])
Out[104]:
a
count 4
unique 2
top Yes
freq 2
In [105]: frame.describe(include=['number'])
Out[105]:
b
count 4.000000
mean 1.500000
std 1.290994
min 0.000000
25% 0.750000
50% 1.500000
75% 2.250000
max 3.000000
In [106]: frame.describe(include='all')
Out[106]:
a b
count 4 4.000000
unique 2 NaN
top Yes NaN
freq 2 NaN
mean NaN 1.500000
std NaN 1.290994
min NaN 0.000000
25% NaN 0.750000
50% NaN 1.500000
75% NaN 2.250000
max NaN 3.000000
That feature relies on select_dtypes. Refer there for details about accepted inputs.
The idxmin() and idxmax() functions on Series and DataFrame compute the index labels with the minimum and
maximum corresponding values:
In [107]: s1 = pd.Series(np.random.randn(5))
In [108]: s1
Out[108]:
0 1.118076
1 -0.352051
2 -1.242883
3 -1.277155
4 -0.641184
dtype: float64
In [111]: df1
Out[111]:
A B C
0 -0.327863 -0.946180 -0.137570
1 -0.186235 -0.257213 -0.486567
2 -0.507027 -0.871259 -0.111110
3 2.000339 -2.430505 0.089759
4 -0.321434 -0.033695 0.096271
In [112]: df1.idxmin(axis=0)
Out[112]:
A 2
B 3
C 1
dtype: int64
In [113]: df1.idxmax(axis=1)
Out[113]:
0 C
1 A
2 C
3 A
4 C
dtype: object
When there are multiple rows (or columns) matching the minimum or maximum value, idxmin() and idxmax()
return the first matching index:
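The construction of df3 is not reproduced in this extract; given the result below (idxmin() returning 'd') it was presumably something like:

df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=["A"], index=list("edcba"))  # minimum value 1 occurs at 'd' and 'c'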
In [115]: df3
Out[115]:
A
In [116]: df3['A'].idxmin()
Out[116]: 'd'
Note: idxmin and idxmax are called argmin and argmax in NumPy.
The value_counts() Series method and top-level function computes a histogram of a 1D array of values. It can
also be used as a function on regular arrays:
In [117]: data = np.random.randint(0, 7, size=50)
In [118]: data
Out[118]:
array([6, 6, 2, 3, 5, 3, 2, 5, 4, 5, 4, 3, 4, 5, 0, 2, 0, 4, 2, 0, 3, 2,
2, 5, 6, 5, 3, 4, 6, 4, 3, 5, 6, 4, 3, 6, 2, 6, 6, 2, 3, 4, 2, 1,
6, 2, 6, 1, 5, 4])
In [119]: s = pd.Series(data)
In [120]: s.value_counts()
Out[120]:
6 10
2 10
4 9
5 8
3 8
0 3
1 2
dtype: int64
In [121]: pd.value_counts(data)
Out[121]:
6 10
2 10
4 9
5 8
3 8
0 3
1 2
dtype: int64
Similarly, you can get the most frequently occurring value(s) (the mode) of the values in a Series or DataFrame:
In [122]: s5 = pd.Series([1, 1, 3, 3, 3, 5, 5, 7, 7, 7])
In [123]: s5.mode()
In [125]: df5.mode()
Out[125]:
A B
0 1.0 -9
1 NaN 10
2 NaN 13
Continuous values can be discretized using the cut() (bins based on values) and qcut() (bins based on sample
quantiles) functions:
In [126]: arr = np.random.randn(20)
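The cut() calls themselves are missing from this extract; judging from the two factor displays that follow, they were presumably:

factor = pd.cut(arr, 4)                  # four equal-width bins (first display)
factor = pd.cut(arr, [-5, -1, 0, 1, 5])  # user-defined bin edges (second display)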
In [128]: factor
Out[128]:
[(-0.251, 0.464], (-0.968, -0.251], (0.464, 1.179], (-0.251, 0.464], (-0.968, -0.251], ..., (-0.251, 0.464], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251]]
Length: 20
Categories (4, interval[float64]): [(-0.968, -0.251] < (-0.251, 0.464] < (0.464, 1.179] < (1.179, 1.893]]
In [130]: factor
Out[130]:
[(0, 1], (-1, 0], (0, 1], (0, 1], (-1, 0], ..., (-1, 0], (-1, 0], (-1, 0], (-1, 0], (-1, 0]]
Length: 20
Categories (4, interval[int64]): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]
qcut() computes sample quantiles. For example, we could slice up some normally distributed data into equal-size
quartiles like so:
In [131]: arr = np.random.randn(30)
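The qcut() call is likewise not shown; for the quartile example it was presumably:

factor = pd.qcut(arr, [0, 0.25, 0.5, 0.75, 1])  # bins containing (roughly) equal numbers of observations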
In [133]: factor
Out[133]:
[(0.569, 1.184], (-2.278, -0.301], (-2.278, -0.301], (0.569, 1.184], (0.569, 1.184], ..., (-0.301, 0.569], (1.184, 2.346], (1.184, 2.346], (-0.301, 0.569], (-2.278, -0.301]]
Length: 30
Categories (4, interval[float64]): [(-2.278, -0.301] < (-0.301, 0.569] < (0.569, 1.184] < (1.184, 2.346]]
In [134]: pd.value_counts(factor)
Out[134]:
(1.184, 2.346] 8
(-2.278, -0.301] 8
(0.569, 1.184] 7
(-0.301, 0.569] 7
dtype: int64
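We can also pass infinite values to define open-ended bins; the factor shown below is assumed to have been built along these lines:

arr = np.random.randn(20)
factor = pd.cut(arr, [-np.inf, 0, np.inf])   # two open-ended bins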
In [137]: factor
Out[137]:
[(-inf, 0.0], (0.0, inf], (0.0, inf], (-inf, 0.0], (-inf, 0.0], ..., (-inf, 0.0], (-
˓→inf, 0.0], (-inf, 0.0], (0.0, inf], (0.0, inf]]
Length: 20
Categories (2, interval[float64]): [(-inf, 0.0] < (0.0, inf]]
Function application
To apply your own or another library’s functions to pandas objects, you should be aware of the methods below.
The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or
Series, row- or column-wise, or elementwise.
1. Tablewise Function Application: pipe()
2. Row or Column-wise Function Application: apply()
3. Aggregation API: agg() and transform()
4. Applying Elementwise Functions: applymap()
DataFrames and Series can be passed into functions. However, if the function needs to be called in a chain,
consider using the pipe() method.
First some setup:
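The setup code did not survive extraction; the following sketch (the function bodies are assumptions consistent with the output further below) reproduces it:

def extract_city_name(df):
    # split 'Chicago, IL' into a separate city_name column
    df['city_name'] = df['city_and_code'].str.split(",").str.get(0)
    return df

def add_country_name(df, country_name=None):
    # append the country to the extracted city name
    df['city_and_country'] = df['city_name'] + country_name
    return df

df_p = pd.DataFrame({'city_and_code': ['Chicago, IL']})

# the nested "first style" call
add_country_name(extract_city_name(df_p), country_name='US')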
Is equivalent to:
In [142]: (df_p.pipe(extract_city_name)
.....: .pipe(add_country_name, country_name="US"))
.....:
Out[142]:
city_and_code city_name city_and_country
0 Chicago, IL Chicago ChicagoUS
Pandas encourages the second style, which is known as method chaining. pipe makes it easy to use your own or
another library’s functions in method chains, alongside pandas’ methods.
In the example above, the functions extract_city_name and add_country_name each expected a
DataFrame as the first positional argument. What if the function you wish to apply takes its data as, say, the
second argument? In this case, provide pipe with a tuple of (callable, data_keyword). .pipe will route
the DataFrame to the argument specified in the tuple.
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the
second argument, data. We pass in the function, keyword pair (sm.ols, 'data') to pipe:
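As a minimal sketch of that pattern (df_reg and the formula 'y ~ x' are hypothetical, not part of the original example):

import statsmodels.formula.api as sm

# sm.ols expects (formula, data); pipe routes the DataFrame to the 'data' keyword
(df_reg.pipe((sm.ols, 'data'), 'y ~ x')
       .fit()
       .summary())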
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly
˓→specified.
[2] The condition number is large, 1.49e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
The pipe method is inspired by unix pipes and, more recently, dplyr and magrittr, which have introduced the popular
(%>%) (read: pipe) operator for R. The implementation of pipe here is quite clean and feels right at home in Python.
We encourage you to view the source code of pipe().
Arbitrary functions can be applied along the axes of a DataFrame using the apply() method, which, like the
descriptive statistics methods, takes an optional axis argument:
In [146]: df.apply(np.mean)
Out[146]:
one 0.811094
two 1.360588
three 0.187958
dtype: float64
In [149]: df.apply(np.cumsum)
Out[149]:
one two three
a 1.394981 1.772517 NaN
b 1.738035 3.684640 -0.050390
c 2.433281 5.163008 1.177045
d NaN 5.442353 0.563873
In [150]: df.apply(np.exp)
Out[150]:
one two three
a 4.034899 5.885648 NaN
b 1.409244 6.767440 0.950858
c 2.004201 4.385785 3.412466
d NaN 1.322262 0.541630
The return type of the function passed to apply() affects the type of the final output from DataFrame.apply for
the default behaviour:
• If the applied function returns a Series, the final output is a DataFrame. The columns match the index of
the Series returned by the applied function.
• If the applied function returns any other type, the final output is a Series.
This default behaviour can be overridden using the result_type argument, which accepts three options: reduce,
broadcast, and expand. These determine how list-like return values expand (or not) to a DataFrame.
apply() combined with some cleverness can be used to answer many questions about a data set. For example,
suppose we wanted to extract the date where the maximum value for each column occurred:
In [153]: tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'],
.....: index=pd.date_range('1/1/2000', periods=1000))
.....:
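A sketch of the idxmax-based answer, using the tsdf defined above:

tsdf.apply(lambda x: x.idxmax())   # one Timestamp per column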
You may also pass additional arguments and keyword arguments to the apply() method. For instance, consider the
following function you would like to apply:
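The function did not survive extraction; a hypothetical helper of this shape illustrates the mechanism (args passes positional arguments, keyword arguments are forwarded as-is):

def subtract_and_divide(x, sub, divide=1):
    # hypothetical helper: subtract sub, then divide by divide
    return (x - sub) / divide

df.apply(subtract_and_divide, args=(5,), divide=3)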
Another useful feature is the ability to pass Series methods to carry out some Series operation on each column or row:
In [155]: tsdf
Out[155]:
A B C
2000-01-01 -0.158131 -0.232466 0.321604
2000-01-02 -1.810340 -3.105758 0.433834
2000-01-03 -1.209847 -1.156793 -0.136794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 -0.653602 0.178875 1.008298
2000-01-09 1.007996 0.462824 0.254472
2000-01-10 0.307473 0.600337 1.643950
In [156]: tsdf.apply(pd.Series.interpolate)
Out[156]:
A B C
2000-01-01 -0.158131 -0.232466 0.321604
2000-01-02 -1.810340 -3.105758 0.433834
2000-01-03 -1.209847 -1.156793 -0.136794
2000-01-04 -1.098598 -0.889659 0.092225
2000-01-05 -0.987349 -0.622526 0.321243
2000-01-06 -0.876100 -0.355392 0.550262
2000-01-07 -0.764851 -0.088259 0.779280
2000-01-08 -0.653602 0.178875 1.008298
2000-01-09 1.007996 0.462824 0.254472
2000-01-10 0.307473 0.600337 1.643950
Finally, apply() takes an argument raw which is False by default; this converts each row or column into a Series
before applying the function. When set to True, the passed function will instead receive an ndarray object, which has
positive performance implications if you do not need the indexing functionality.
Aggregation API
The aggregation API allows one to express possibly multiple aggregation operations in a single concise way. This API
is similar across pandas objects, see groupby API, the window functions API, and the resample API. The entry point
for aggregation is DataFrame.aggregate(), or the alias DataFrame.agg().
We will use a similar starting frame from above:
In [159]: tsdf
Out[159]:
A B C
2000-01-01 1.257606 1.004194 0.167574
2000-01-02 -0.749892 0.288112 -0.757304
2000-01-03 -0.207550 -0.298599 0.116018
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.814347 -0.257623 0.869226
2000-01-09 -0.250663 -1.206601 0.896839
2000-01-10 2.169758 -1.333363 0.283157
Using a single function is equivalent to apply(). You can also pass named methods as strings. These will return a
Series of the aggregated output:
In [160]: tsdf.agg(np.sum)
Out[160]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64
In [161]: tsdf.agg('sum')
Out[161]:
A 3.033606
B -1.803879
C 1.575510
dtype: float64
In [163]: tsdf['A'].agg('sum')
Out[163]: 3.033606102414146
You can pass multiple aggregation arguments as a list. The results of each of the passed functions will be a row in the
resulting DataFrame. These are naturally named from the aggregation function.
In [164]: tsdf.agg(['sum'])
Out[164]:
A B C
sum 3.033606 -1.803879 1.57551
Passing a named function will yield that name for the row:
Passing a dictionary that maps column names to a scalar or a list of scalars to DataFrame.agg allows you to customize
which functions are applied to which columns. Note that the results are not in any particular order; you can use an
OrderedDict instead to guarantee ordering.
Passing a list-like will generate a DataFrame output. You will get a matrix-like output of all of the aggregators. The
output will consist of all unique functions. Those that are not noted for a particular column will be NaN:
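Sketches of both forms, using the tsdf from above (the particular functions are illustrative):

tsdf.agg({'A': 'mean', 'B': 'sum'})             # dict of column -> scalar: returns a Series
tsdf.agg({'A': ['mean', 'min'], 'B': 'sum'})    # dict with list-likes: returns a DataFrame,
                                                # with NaN where an aggregation was not requested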
Mixed dtypes
When presented with mixed dtypes that cannot aggregate, .agg will only take the valid aggregations. This is similar
to how groupby .agg works.
In [173]: mdf.dtypes
Out[173]:
A int64
B float64
C object
D datetime64[ns]
dtype: object
Custom describe
With .agg() it is possible to easily create a custom describe function, similar to the built-in describe function.
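One possible sketch of such a custom describe, assuming the tsdf frame from above:

from functools import partial

q_25 = partial(pd.Series.quantile, q=0.25)
q_25.__name__ = '25%'
q_75 = partial(pd.Series.quantile, q=0.75)
q_75.__name__ = '75%'

tsdf.agg(['count', 'mean', 'std', 'min', q_25, 'median', q_75, 'max'])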
Transform API
The transform() method returns an object that is indexed the same (same size) as the original. This API allows
you to provide multiple operations at the same time rather than one-by-one. Its API is quite similar to the .agg API.
We create a frame similar to the one used in the above sections.
In [183]: tsdf
Out[183]:
A B C
2000-01-01 -0.428759 -0.864890 -0.675341
2000-01-02 -0.168731 1.338144 -1.279321
2000-01-03 -1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 -1.240447 -0.201052
2000-01-09 -0.157795 0.791197 -1.144209
2000-01-10 -0.030876 0.371900 0.061932
Transform the entire frame. .transform() accepts input functions as a NumPy function, a string function name, or
a user-defined function.
In [184]: tsdf.transform(np.abs)
Out[184]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
In [185]: tsdf.transform('abs')
Out[185]:
A B C
2000-01-01 0.428759 0.864890 0.675341
2000-01-02 0.168731 1.338144 1.279321
2000-01-03 1.621034 0.438107 0.903794
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.254374 1.240447 0.201052
2000-01-09 0.157795 0.791197 1.144209
2000-01-10 0.030876 0.371900 0.061932
Passing a single function to .transform() with a Series will yield a single Series in return.
In [188]: tsdf['A'].transform(np.abs)
Out[188]:
2000-01-01 0.428759
2000-01-02 0.168731
2000-01-03 1.621034
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 NaN
2000-01-07 NaN
2000-01-08 0.254374
Passing multiple functions will yield a column MultiIndexed DataFrame. The first level will be the original frame
column names; the second level will be the names of the transforming functions.
In [189]: tsdf.transform([np.abs, lambda x: x + 1])
Out[189]:
A B C
absolute <lambda> absolute <lambda> absolute <lambda>
2000-01-01 0.428759 0.571241 0.864890 0.135110 0.675341 0.324659
2000-01-02 0.168731 0.831269 1.338144 2.338144 1.279321 -0.279321
2000-01-03 1.621034 -0.621034 0.438107 1.438107 0.903794 1.903794
2000-01-04 NaN NaN NaN NaN NaN NaN
2000-01-05 NaN NaN NaN NaN NaN NaN
2000-01-06 NaN NaN NaN NaN NaN NaN
2000-01-07 NaN NaN NaN NaN NaN NaN
2000-01-08 0.254374 1.254374 1.240447 -0.240447 0.201052 0.798948
2000-01-09 0.157795 0.842205 0.791197 1.791197 1.144209 -0.144209
2000-01-10 0.030876 0.969124 0.371900 1.371900 0.061932 1.061932
Passing multiple functions to a Series will yield a DataFrame. The resulting column names will be the transforming
functions.
In [190]: tsdf['A'].transform([np.abs, lambda x: x + 1])
Out[190]:
absolute <lambda>
2000-01-01 0.428759 0.571241
2000-01-02 0.168731 0.831269
2000-01-03 1.621034 -0.621034
2000-01-04 NaN NaN
2000-01-05 NaN NaN
2000-01-06 NaN NaN
2000-01-07 NaN NaN
2000-01-08 0.254374 1.254374
2000-01-09 0.157795 0.842205
2000-01-10 0.030876 0.969124
Passing a dict of lists will generate a MultiIndexed DataFrame with these selective transforms.
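For example (a sketch; the particular functions are illustrative):

tsdf.transform({'A': np.abs, 'B': [lambda x: x + 1, 'sqrt']})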
Since not all functions can be vectorized (accept NumPy arrays and return another array or value), the methods
applymap() on DataFrame and analogously map() on Series accept any Python function taking a single value and
returning a single value. For example:
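The element-wise function f used below is not shown above; a definition consistent with the output is:

def f(x):
    # length of the string representation of each element
    return len(str(x))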
In [193]: df4
Out[193]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
In [195]: df4['one'].map(f)
Out[195]:
a 18
b 19
c 18
d 3
Name: one, dtype: int64
In [196]: df4.applymap(f)
Out[196]:
one two three
a 18 17 3
b 19 18 20
Series.map() has an additional feature; it can be used to easily “link” or “map” values defined by a secondary
series. This is closely related to merging/joining functionality:
In [199]: s
Out[199]:
a six
b seven
c six
d seven
e six
dtype: object
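The secondary Series t used below is not shown above; judging from the result it maps the labels to numbers, for example:

t = pd.Series({'six': 6.0, 'seven': 7.0})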
In [200]: s.map(t)
Out[200]:
a 6.0
b 7.0
c 6.0
d 7.0
e 6.0
dtype: float64
reindex() is the fundamental data alignment method in pandas. It is used to implement nearly all other features
relying on label-alignment functionality. To reindex means to conform the data to match a given set of labels along a
particular axis. This accomplishes several things:
• Reorders the existing data to match a new set of labels
• Inserts missing value (NA) markers in label locations where no data for that label existed
• If specified, fills data for missing labels using logic (highly relevant to working with time series data)
Here is a simple example:
In [202]: s
Out[202]:
a 1.695148
b 1.328614
c 1.234686
d -0.385845
e -1.326508
dtype: float64
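The reindexing call itself did not survive extraction; a call along these lines (the exact label order is an assumption) produces the situation described next:

s.reindex(['e', 'b', 'f', 'd'])   # 'f' is not in s.index, so it appears as NaN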
Here, the f label was not contained in the Series and hence appears as NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:
In [204]: df
Out[204]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
Note that the Index objects containing the actual axis labels can be shared between objects. So if we have a Series
and a DataFrame, the following can be done:
In [207]: rs = s.reindex(df.index)
In [208]: rs
Out[208]:
a 1.695148
b 1.328614
c 1.234686
d -0.385845
dtype: float64
This means that the reindexed Series’s index is the same Python object as the DataFrame’s index.
New in version 0.21.0.
DataFrame.reindex() also supports an “axis-style” calling convention, where you specify a single labels
argument and the axis it applies to.
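As a sketch of the axis-style convention (the particular labels are illustrative):

df.reindex(['c', 'f', 'b'], axis='index')
df.reindex(['three', 'two', 'one'], axis='columns')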
See also:
MultiIndex / Advanced Indexing is an even more concise way of doing reindexing.
Note: When writing performance-sensitive code, there is a good reason to spend some time becoming a reindexing
ninja: many operations are faster on pre-aligned data. Adding two unaligned DataFrames internally triggers a
reindexing step. For exploratory analysis you will hardly notice the difference (because reindex has been heavily
optimized), but when CPU cycles matter sprinkling a few explicit reindex calls here and there can have an impact.
In [212]: df2
Out[212]:
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369
In [213]: df3
Out[213]:
one two
a 0.583888 0.051514
b -0.468040 0.191120
c -0.115848 -0.242634
In [214]: df.reindex_like(df2)
Out[214]:
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369
The align() method is the fastest way to simultaneously align two objects. It supports a join argument (related to
joining and merging):
• join='outer': take the union of the indexes (default)
• join='left': use the calling object’s index
• join='right': use the passed object’s index
• join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:
In [216]: s1 = s[:4]
In [217]: s2 = s[1:]
In [218]: s1.align(s2)
Out[218]:
(a -0.186646
b -1.692424
c -0.303893
d -1.425662
e NaN
dtype: float64,
a NaN
b -1.692424
c -0.303893
d -1.425662
e 1.114285
dtype: float64)
For DataFrames, the join method will be applied to both the index and the columns by default:
In [221]: df.align(df2, join='inner')
Out[221]:
( one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369,
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369)
You can also pass an axis option to only align on the specified axis:
In [222]: df.align(df2, join='inner', axis=0)
Out[222]:
( one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435,
one two
a 1.394981 1.772517
b 0.343054 1.912123
c 0.695246 1.478369)
If you pass a Series to DataFrame.align(), you can choose to align both objects either on the DataFrame’s index
or columns using the axis argument:
In [223]: df.align(df2.iloc[0], axis=1)
Out[223]:
( one three two
a 1.394981 NaN 1.772517
b 0.343054 -0.050390 1.912123
c 0.695246 1.227435 1.478369
d NaN -0.613172 0.279344,
one 1.394981
three NaN
two 1.772517
Name: a, dtype: float64)
reindex() takes an optional parameter method which is a filling method chosen from the following table:
Method Action
pad / ffill Fill values forward
bfill / backfill Fill values backward
nearest Fill from the nearest index value
In [227]: ts
Out[227]:
2000-01-03 0.183051
2000-01-04 0.400528
2000-01-05 -0.015083
2000-01-06 2.395489
2000-01-07 1.414806
2000-01-08 0.118428
2000-01-09 0.733639
2000-01-10 -0.936077
Freq: D, dtype: float64
In [228]: ts2
Out[228]:
2000-01-03 0.183051
2000-01-06 2.395489
2000-01-09 0.733639
dtype: float64
In [229]: ts2.reindex(ts.index)
Out[229]:
2000-01-03 0.183051
2000-01-04 NaN
2000-01-05 NaN
2000-01-06 2.395489
2000-01-07 NaN
2000-01-08 NaN
2000-01-09 0.733639
2000-01-10 NaN
Freq: D, dtype: float64
These methods require that the indexes are ordered increasing or decreasing.
Note that the same result could have been achieved using fillna (except for method='nearest') or interpolate:
In [233]: ts2.reindex(ts.index).fillna(method='ffill')
Out[233]:
2000-01-03 0.183051
2000-01-04 0.183051
2000-01-05 0.183051
2000-01-06 2.395489
2000-01-07 2.395489
2000-01-08 2.395489
2000-01-09 0.733639
2000-01-10 0.733639
Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonically increasing or decreasing. fillna() and
interpolate() will not perform any checks on the order of the index.
The limit and tolerance arguments provide additional control over filling while reindexing: limit specifies the
maximum count of consecutive matches, while tolerance specifies the maximum distance between the index and
indexer values.
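Sketches of both arguments, reusing the ts/ts2 series from above:

ts2.reindex(ts.index, method='ffill', limit=1)            # fill at most one consecutive gap
ts2.reindex(ts.index, method='ffill', tolerance='1 day')  # only fill if within one day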
Notice that when used on a DatetimeIndex, TimedeltaIndex or PeriodIndex, tolerance will be coerced
into a Timedelta if possible. This allows you to specify tolerance with appropriate strings.
A method closely related to reindex is the drop() function. It removes a set of labels from an axis:
In [236]: df
Out[236]:
one two three
a 1.394981 1.772517 NaN
b 0.343054 1.912123 -0.050390
c 0.695246 1.478369 1.227435
d NaN 0.279344 -0.613172
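For example (a sketch using the df shown above):

df.drop(['a', 'd'], axis=0)   # drop rows by label
df.drop(['one'], axis=1)      # drop a column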
Note that dropping the same labels by reindexing with the complementary set of labels also works, but is a bit less
obvious / clean.
The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.
In [240]: s
Out[240]:
a -0.186646
b -1.692424
c -0.303893
d -1.425662
e 1.114285
dtype: float64
In [241]: s.rename(str.upper)
Out[241]:
A -0.186646
B -1.692424
C -0.303893
D -1.425662
E 1.114285
dtype: float64
If you pass a function, it must return a value when called with any of the labels (and must produce a set of unique
values). A dict or Series can also be used:
If the mapping doesn’t include a column/index label, it isn’t renamed. Note that extra labels in the mapping don’t
throw an error.
New in version 0.21.0.
DataFrame.rename() also supports an “axis-style” calling convention, where you specify a single mapper and
the axis to apply that mapping to.
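A sketch of both calling conventions (the particular labels are illustrative):

df.rename(columns={'one': 'foo', 'two': 'bar'},
          index={'a': 'apple', 'b': 'banana', 'd': 'durian'})
df.rename({'one': 'foo', 'two': 'bar'}, axis='columns')   # axis-style equivalent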
The rename() method also provides an inplace named parameter that is by default False and copies the underlying
data. Pass inplace=True to rename the data in place.
Finally, rename() also accepts a scalar or list-like for altering the Series.name attribute.
In [245]: s.rename("scalar-name")
Out[245]:
a -0.186646
b -1.692424
c -0.303893
d -1.425662
e 1.114285
Name: scalar-name, dtype: float64
In [247]: df
Out[247]:
x y
let num
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60
In [249]: df.rename_axis(index=str.upper)
Out[249]:
x y
LET NUM
a 1 1 10
2 2 20
b 1 3 30
2 4 40
c 1 5 50
2 6 60
Iteration
The behavior of basic iteration over pandas objects depends on the type. When iterating over a Series, it is regarded
as array-like, and basic iteration produces the values. DataFrames follow the dict-like convention of iterating over the
“keys” of the objects.
In short, basic iteration (for i in object) produces:
• Series: values
• DataFrame: column labels
Thus, for example, iterating over a DataFrame gives you the column names:
In [250]: df = pd.DataFrame({'col1': np.random.randn(3),
.....: 'col2': np.random.randn(3)}, index=['a', 'b', 'c'])
.....:
Pandas objects also have the dict-like items() method to iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
• iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs. This converts the rows to Series
objects, which can change the dtypes and has some performance implications.
• itertuples(): Iterate over the rows of a DataFrame as namedtuples of the values. This is a lot faster than
iterrows(), and is in most cases preferable for iterating over the values of a DataFrame.
Warning: Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is
not needed and can be avoided with one of the following approaches:
• Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions,
(boolean) indexing, ...
• When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply()
instead of iterating over the values. See the docs on function application.
• If you need to do iterative manipulations on the values but performance is important, consider writing the inner
loop with cython or numba. See the enhancing performance section for some examples of this approach.
Warning: You should never modify something you are iterating over. This is not guaranteed to work in all cases.
Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!
For example, in the following case setting the value has no effect:
In [252]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']})
In [254]: df
Out[254]:
a b
0 1 a
1 2 b
2 3 c
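The loop referred to here did not survive extraction; it would look roughly like this:

for index, row in df.iterrows():
    row['a'] = 10   # modifies a copy of the row, not df itself

# df['a'] still contains 1, 2, 3 afterwards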
items
Consistent with the dict-like interface, items() iterates through key-value pairs:
• Series: (index, scalar value) pairs
• DataFrame: (column, Series) pairs
For example:
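A minimal sketch, using the small df from above:

for label, ser in df.items():
    print(label)
    print(ser)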
iterrows
iterrows() allows you to iterate through the rows of a DataFrame as Series objects. It returns an iterator yielding
each index value along with a Series containing the data in each row:
Note: Because iterrows() returns a Series for each row, it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
In [258]: df_orig.dtypes
Out[258]:
int int64
float float64
dtype: object
In [260]: row
Out[260]:
int 1.0
float 1.5
Name: 0, dtype: float64
All values in row, returned as a Series, are now upcast to floats, including the original integer value in column int:
In [261]: row['int'].dtype
Out[261]: dtype('float64')
In [262]: df_orig['int'].dtype
Out[262]: dtype('int64')
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the
values and which is generally much faster than iterrows().
In [264]: print(df2)
x y
0 1 4
1 2 5
2 3 6
In [265]: print(df2.T)
0 1 2
x 1 2 3
y 4 5 6
In [267]: print(df2_t)
0 1 2
x 1 2 3
y 4 5 6
itertuples
The itertuples() method will return an iterator yielding a namedtuple for each row in the DataFrame. The first
element of the tuple will be the row’s corresponding index value, while the remaining values are the row values.
For instance:
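A minimal sketch, using the small df from above:

for row in df.itertuples():
    print(row)   # e.g. Pandas(Index=0, a=1, b='a')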
This method does not convert the row to a Series object; it merely returns the values inside a namedtuple. Therefore,
itertuples() preserves the data type of the values and is generally faster than iterrows().
Note: The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start
with an underscore. With a large number of columns (>255), regular tuples are returned.
.dt accessor
Series has an accessor to succinctly return datetime-like properties for the values of the Series, if it is a
datetime/period-like Series. This will return a Series, indexed like the existing Series.
# datetime
In [269]: s = pd.Series(pd.date_range('20130101 09:10:12', periods=4))
In [270]: s
Out[270]:
0 2013-01-01 09:10:12
1 2013-01-02 09:10:12
2 2013-01-03 09:10:12
3 2013-01-04 09:10:12
dtype: datetime64[ns]
In [271]: s.dt.hour
Out[271]:
0 9
1 9
2 9
3 9
dtype: int64
In [272]: s.dt.second
Out[272]:
0 12
1 12
2 12
3 12
dtype: int64
In [273]: s.dt.day
Out[273]:
In [274]: s[s.dt.day == 2]
Out[274]:
1 2013-01-02 09:10:12
dtype: datetime64[ns]
In [276]: stz
Out[276]:
0 2013-01-01 09:10:12-05:00
1 2013-01-02 09:10:12-05:00
2 2013-01-03 09:10:12-05:00
3 2013-01-04 09:10:12-05:00
dtype: datetime64[ns, US/Eastern]
In [277]: stz.dt.tz
Out[277]: <DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
You can also chain these types of operations:
In [278]: s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
Out[278]:
0 2013-01-01 04:10:12-05:00
1 2013-01-02 04:10:12-05:00
2 2013-01-03 04:10:12-05:00
3 2013-01-04 04:10:12-05:00
dtype: datetime64[ns, US/Eastern]
You can also format datetime values as strings with Series.dt.strftime() which supports the same format as
the standard strftime().
# DatetimeIndex
In [279]: s = pd.Series(pd.date_range('20130101', periods=4))
In [280]: s
Out[280]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: datetime64[ns]
In [281]: s.dt.strftime('%Y/%m/%d')
Out[281]:
0 2013/01/01
1 2013/01/02
# PeriodIndex
In [282]: s = pd.Series(pd.period_range('20130101', periods=4))
In [283]: s
Out[283]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
3 2013-01-04
dtype: period[D]
In [284]: s.dt.strftime('%Y/%m/%d')
Out[284]:
0 2013/01/01
1 2013/01/02
2 2013/01/03
3 2013/01/04
dtype: object
In [287]: s.dt.year
Out[287]:
0 2013
1 2013
2 2013
3 2013
dtype: int64
In [288]: s.dt.day
Out[288]:
0 1
1 2
2 3
3 4
dtype: int64
# timedelta
In [289]: s = pd.Series(pd.timedelta_range('1 day 00:00:05', periods=4, freq='s'))
In [290]: s
In [291]: s.dt.days
Out[291]:
0 1
1 1
2 1
3 1
dtype: int64
In [292]: s.dt.seconds
Out[292]:
0 5
1 6
2 7
3 8
dtype: int64
In [293]: s.dt.components
Out[293]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 1 0 0 5 0 0 0
1 1 0 0 6 0 0 0
2 1 0 0 7 0 0 0
3 1 0 0 8 0 0 0
Note: Series.dt will raise a TypeError if you access it with non-datetime-like values.
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array.
Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the Series’s
str attribute and generally have names matching the equivalent (scalar) built-in string methods. For example:
In [294]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'],
.....: dtype="string")
.....:
In [295]: s.str.lower()
Out[295]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
Powerful pattern-matching methods are provided as well, but note that pattern-matching generally uses regular expres-
sions by default (and in some cases always uses them).
Note: Prior to pandas 1.0, string methods were only available on object -dtype Series. Pandas 1.0 added the
StringDtype which is dedicated to strings. See Text Data Types for more.
Sorting
Pandas supports three kinds of sorting: sorting by index labels, sorting by column values, and sorting by a combination
of both.
By index
The Series.sort_index() and DataFrame.sort_index() methods are used to sort a pandas object by its
index levels.
In [296]: df = pd.DataFrame({
.....: 'one': pd.Series(np.random.randn(3), index=['a', 'b', 'c']),
.....: 'two': pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']),
.....: 'three': pd.Series(np.random.randn(3), index=['b', 'c', 'd'])})
.....:
In [298]: unsorted_df
Out[298]:
three two one
a NaN -1.152244 0.562973
d -0.252916 -0.109597 NaN
c 1.273388 -0.167123 0.640382
b -0.098217 0.009797 -1.299504
# DataFrame
In [299]: unsorted_df.sort_index()
Out[299]:
three two one
a NaN -1.152244 0.562973
b -0.098217 0.009797 -1.299504
c 1.273388 -0.167123 0.640382
d -0.252916 -0.109597 NaN
In [300]: unsorted_df.sort_index(ascending=False)
Out[300]:
three two one
In [301]: unsorted_df.sort_index(axis=1)
Out[301]:
one three two
a 0.562973 NaN -1.152244
d NaN -0.252916 -0.109597
c 0.640382 1.273388 -0.167123
b -1.299504 -0.098217 0.009797
# Series
In [302]: unsorted_df['three'].sort_index()
Out[302]:
a NaN
b -0.098217
c 1.273388
d -0.252916
Name: three, dtype: float64
By values
The Series.sort_values() method is used to sort a Series by its values. The DataFrame.sort_values()
method is used to sort a DataFrame by its column or row values. The optional by parameter to
DataFrame.sort_values() may be used to specify one or more columns to use to determine the sorted order.
In [303]: df1 = pd.DataFrame({'one': [2, 1, 1, 1],
.....: 'two': [1, 3, 2, 4],
.....: 'three': [5, 4, 3, 2]})
.....:
In [304]: df1.sort_values(by='two')
Out[304]:
one two three
0 2 1 5
2 1 2 3
1 1 3 4
3 1 4 2
These methods have special treatment of NA values via the na_position argument:
In [306]: s[2] = np.nan
In [308]: s.sort_values(na_position='first')
Out[308]:
2 <NA>
5 <NA>
0 A
3 Aaba
1 B
4 Baca
6 CABA
8 cat
7 dog
dtype: string
By indexes and values
New in version 0.23.0.
Strings passed as the by parameter to DataFrame.sort_values() may refer to either columns or index level
names.
# Build MultiIndex
In [309]: idx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('a', 2),
.....: ('b', 2), ('b', 1), ('b', 1)])
.....:
# Build DataFrame
In [311]: df_multi = pd.DataFrame({'A': np.arange(6, 0, -1)},
.....: index=idx)
.....:
In [312]: df_multi
Out[312]:
A
first second
a 1 6
2 5
2 4
b 2 3
1 2
1 1
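Assuming the index levels have been named 'first' and 'second' as shown above, a sort mixing an index level and a column would look like:

df_multi.sort_values(by=['second', 'A'])   # 'second' is an index level, 'A' a column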
Note: If a string matches both a column name and an index level name then a warning is issued and the column takes
precedence. This will result in an ambiguity error in a future version.
searchsorted
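The Series used below is not shown above; a small sorted Series such as the following is consistent with the output:

ser = pd.Series([1, 2, 3])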
In [316]: ser.searchsorted([0, 4])
Out[316]: array([0, 3])
Series has the nsmallest() and nlargest() methods which return the smallest or largest n values. For a
large Series this can be much faster than sorting the entire Series and calling head(n) on the result.
In [321]: s = pd.Series(np.random.permutation(10))
In [322]: s
Out[322]:
0 2
1 0
2 3
3 7
In [323]: s.sort_values()
Out[323]:
1 0
4 1
0 2
2 3
9 4
5 5
7 6
3 7
8 8
6 9
dtype: int64
In [324]: s.nsmallest(3)
Out[324]:
1 0
4 1
0 2
dtype: int64
In [325]: s.nlargest(3)
Out[325]:
6 9
8 8
3 7
dtype: int64
You must be explicit about sorting when the column is a MultiIndex, and fully specify all levels to by.
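A sketch, turning the columns of the df1 defined earlier into a MultiIndex (the tuples are illustrative):

df1.columns = pd.MultiIndex.from_tuples([('a', 'one'),
                                         ('a', 'two'),
                                         ('b', 'three')])
df1.sort_values(by=('a', 'two'))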
Copying
The copy() method on pandas objects copies the underlying data (though not the axis indexes, since they are im-
mutable) and returns a new object. Note that it is seldom necessary to copy objects. For example, there are only a
handful of ways to alter a DataFrame in-place:
• Inserting, deleting, or modifying a column.
• Assigning to the index or columns attributes.
• For homogeneous data, directly modifying the values via the values attribute or advanced indexing.
To be clear, no pandas method has the side effect of modifying your data; almost every method returns a new object,
leaving the original object untouched. If the data is modified, it is because you did so explicitly.
dtypes
For the most part, pandas uses NumPy arrays and dtypes for Series or individual columns of a DataFrame. NumPy
provides support for float, int, bool, timedelta64[ns] and datetime64[ns] (note that NumPy does not
support timezone-aware datetimes).
Pandas and third-party libraries extend NumPy’s type system in a few places. This section describes the extensions
pandas has made internally. See Extension types for how to write your own extension that works with pandas. See
ecosystem.extensions for a list of third-party libraries that have implemented an extension.
The following table lists all of pandas extension types. For methods requiring dtype arguments, strings can be
specified as indicated. See the respective documentation sections for more on each type.
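The construction of dft did not survive extraction; a construction consistent with the output below is:

dft = pd.DataFrame({'A': np.random.rand(3),
                    'B': 1,
                    'C': 'foo',
                    'D': pd.Timestamp('20010102'),
                    'E': pd.Series([1.0] * 3).astype('float32'),
                    'F': False,
                    'G': pd.Series([1] * 3, dtype='int8')})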
In [334]: dft
Out[334]:
A B C D E F G
0 0.035962 1 foo 2001-01-02 1.0 False 1
1 0.701379 1 foo 2001-01-02 1.0 False 1
2 0.281885 1 foo 2001-01-02 1.0 False 1
In [335]: dft.dtypes
Out[335]:
A float64
B int64
C object
D datetime64[ns]
E float32
F bool
G int8
dtype: object
If a pandas object contains data with multiple dtypes in a single column, the dtype of the column will be chosen to
accommodate all of the data types (object is the most general).
The number of columns of each type in a DataFrame can be found by calling DataFrame.dtypes.value_counts().
In [339]: dft.dtypes.value_counts()
Out[339]:
bool 1
datetime64[ns] 1
object 1
int8 1
int64 1
float32 1
float64 1
dtype: int64
Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the dtype
keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations. Furthermore,
different numeric dtypes will NOT be combined. The following example will give you a taste.
In [341]: df1
Out[341]:
A
0 0.224364
1 1.890546
2 0.182879
3 0.787847
4 -0.188449
5 0.667715
6 -0.011736
7 -0.399073
In [342]: df1.dtypes
Out[342]:
A float32
dtype: object
In [344]: df2
Out[344]:
A B C
0 0.823242 0.256090 0
1 1.607422 1.426469 0
2 -0.333740 -0.416203 255
3 -0.063477 1.139976 0
4 -1.014648 -1.193477 0
5 0.678711 0.096706 0
6 -0.040863 -1.956850 1
7 -0.357422 -0.714337 0
In [345]: df2.dtypes
Out[345]:
A float16
B float64
C uint8
defaults
By default integer types are int64 and float types are float64, regardless of platform (32-bit or 64-bit). The
following will all result in int64 dtypes.
Note that NumPy will choose platform-dependent types when creating arrays. The following WILL result in int32
on a 32-bit platform.
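The examples referred to in the two sentences above did not survive extraction; sketches are:

pd.DataFrame([1, 2], columns=['a']).dtypes              # int64
pd.DataFrame({'a': [1, 2]}).dtypes                      # int64
pd.DataFrame({'a': 1}, index=list(range(2))).dtypes     # int64

pd.DataFrame(np.array([1, 2]))   # int32 on a 32-bit platform, since NumPy picks the type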
upcasting
Types can potentially be upcast when combined with other types, meaning they are promoted from the current type
(e.g. int to float).
In [351]: df3
Out[351]:
A B C
0 1.047606 0.256090 0.0
1 3.497968 1.426469 0.0
2 -0.150862 -0.416203 255.0
3 0.724370 1.139976 0.0
4 -1.203098 -1.193477 0.0
5 1.346426 0.096706 0.0
6 -0.052599 -1.956850 1.0
7 -0.756495 -0.714337 0.0
In [352]: df3.dtypes
Out[352]:
A float32
B float64
C float64
dtype: object
DataFrame.to_numpy() will return the lower-common-denominator of the dtypes, meaning the dtype that can
accommodate ALL of the types in the resulting homogeneous dtyped NumPy array. This can force some upcasting.
In [353]: df3.to_numpy().dtype
Out[353]: dtype('float64')
astype
You can use the astype() method to explicitly convert dtypes from one to another. These will by default return a
copy, even if the dtype was unchanged (pass copy=False to change this behavior). In addition, they will raise an
exception if the astype operation is invalid.
Upcasting is always according to the NumPy rules. If two different dtypes are involved in an operation, then the more
general one will be used as the result of the operation.
In [354]: df3
Out[354]:
A B C
0 1.047606 0.256090 0.0
1 3.497968 1.426469 0.0
2 -0.150862 -0.416203 255.0
3 0.724370 1.139976 0.0
4 -1.203098 -1.193477 0.0
5 1.346426 0.096706 0.0
6 -0.052599 -1.956850 1.0
7 -0.756495 -0.714337 0.0
In [355]: df3.dtypes
Out[355]:
A float32
B float64
C float64
dtype: object
# conversion of dtypes
In [356]: df3.astype('float32').dtypes
Out[356]:
A float32
B float32
C float32
dtype: object
In [357]: dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
In [359]: dft
Out[359]:
a b c
0 1 4 7
1 2 5 8
2 3 6 9
In [360]: dft.dtypes
In [361]: dft1 = pd.DataFrame({'a': [1, 0, 1], 'b': [4, 5, 6], 'c': [7, 8, 9]})
In [363]: dft1
Out[363]:
a b c
0 True 4 7.0
1 False 5 8.0
2 True 6 9.0
In [364]: dft1.dtypes
Out[364]:
a bool
b int64
c float64
dtype: object
Note: When trying to convert a subset of columns to a specified type using astype() and loc(), upcasting occurs.
loc() tries to fit in what we are assigning to the current dtypes, while [] will overwrite them, taking the dtype from
the right-hand side. Therefore the following piece of code produces the unintended result.
In [365]: dft = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
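The conversion steps themselves are missing above; they are assumed to be along these lines:

dft.loc[:, ['a', 'b']].astype(np.uint8).dtypes                    # uint8, as expected
dft.loc[:, ['a', 'b']] = dft.loc[:, ['a', 'b']].astype(np.uint8)
# as the dtypes below show, the assignment through loc kept the original int64 dtypes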
In [368]: dft.dtypes
Out[368]:
a int64
b int64
c int64
dtype: object
object conversion
pandas offers various functions to try to force conversion of types from the object dtype to other types. In cases
where the data is already of the correct type, but stored in an object array, the DataFrame.infer_objects()
and Series.infer_objects() methods can be used to soft convert to the correct type.
In [371]: df = df.T
In [372]: df
Out[372]:
0 1 2
0 1 a 2016-03-02
1 2 b 2016-03-02
In [373]: df.dtypes
Out[373]:
0 object
1 object
2 datetime64[ns]
dtype: object
Because the data was transposed the original inference stored all columns as object, which infer_objects will
correct.
In [374]: df.infer_objects().dtypes
Out[374]:
0 int64
1 object
2 datetime64[ns]
dtype: object
The following functions are available for one-dimensional object arrays or scalars to perform hard conversion of objects
to a specified type:
• to_numeric() (conversion to numeric dtypes)
• to_datetime() (conversion to datetime objects)
• to_timedelta() (conversion to timedelta objects)
In [375]: m = ['1.1', 2, 3]
In [376]: pd.to_numeric(m)
Out[376]: array([1.1, 2. , 3. ])
In [379]: pd.to_datetime(m)
Out[379]: DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]',
˓→freq=None)
In [381]: pd.to_timedelta(m)
Out[381]: TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype=
˓→'timedelta64[ns]', freq=None)
To force a conversion, we can pass in an errors argument, which specifies how pandas should deal with elements
that cannot be converted to desired dtype or object. By default, errors='raise', meaning that any errors encoun-
tered will be raised during the conversion process. However, if errors='coerce', these errors will be ignored
and pandas will convert problematic elements to pd.NaT (for datetime and timedelta) or np.nan (for numeric).
This might be useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but
occasionally has non-conforming elements intermixed that you want to represent as missing:
In [385]: m = ['apple', 2, 3]
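Sketches of the coercing conversions (only the handling of the bad element is asserted here):

pd.to_numeric(m, errors='coerce')     # 'apple' becomes np.nan
pd.to_datetime(m, errors='coerce')    # 'apple' becomes NaT
pd.to_timedelta(m, errors='coerce')   # 'apple' becomes NaT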
The errors parameter has a third option of errors='ignore', which will simply return the passed in data if it
encounters any errors with the conversion to a desired data type:
In [392]: m = ['apple', 2, 3]
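For example (a sketch):

pd.to_numeric(m, errors='ignore')    # the original data is returned unchanged
pd.to_datetime(m, errors='ignore')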
In addition to object conversion, to_numeric() provides another argument downcast, which gives the option of
downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
In [396]: m = ['1', 2, 3]
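For example (a sketch):

pd.to_numeric(m, downcast='integer')    # smallest signed int dtype
pd.to_numeric(m, downcast='unsigned')   # smallest unsigned int dtype
pd.to_numeric(m, downcast='float')      # smallest float dtype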
As these methods apply only to one-dimensional arrays, lists, or scalars, they cannot be used directly on
multi-dimensional objects such as DataFrames. However, with apply(), we can “apply” the function over each column
efficiently:
In [402]: df = pd.DataFrame([
.....: ['2016-07-09', datetime.datetime(2016, 3, 2)]] * 2, dtype='O')
.....:
In [403]: df
Out[403]:
0 1
0 2016-07-09 2016-03-02 00:00:00
1 2016-07-09 2016-03-02 00:00:00
In [404]: df.apply(pd.to_datetime)
Out[404]:
0 1
0 2016-07-09 2016-03-02
1 2016-07-09 2016-03-02
In [406]: df
Out[406]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [407]: df.apply(pd.to_numeric)
Out[407]:
0 1 2
0 1.1 2 3
1 1.1 2 3
In [409]: df
Out[409]:
0 1
0 5us 1 days 00:00:00
1 5us 1 days 00:00:00
gotchas
Performing selection operations on integer-type data can easily upcast the data to floating point. The dtype of the
input data will be preserved in cases where nans are not introduced. See also Support for integer NA.
In [411]: dfi = df3.astype('int32')
In [412]: dfi['E'] = 1
In [413]: dfi
Out[413]:
A B C E
0 1 0 0 1
1 3 1 0 1
2 0 0 255 1
3 0 1 0 1
4 -1 -1 0 1
5 1 0 0 1
6 0 -1 1 1
7 0 0 0 1
In [414]: dfi.dtypes
Out[414]:
A int32
B int32
C int32
E int64
dtype: object
In [416]: casted
Out[416]:
A B C E
0 1.0 NaN NaN 1
1 3.0 1.0 NaN 1
2 NaN NaN 255.0 1
3 NaN 1.0 NaN 1
4 NaN NaN NaN 1
5 1.0 NaN NaN 1
6 NaN NaN 1.0 1
7 NaN NaN NaN 1
In [417]: casted.dtypes
Out[417]:
A float64
B float64
C float64
E int64
In [420]: dfa.dtypes
Out[420]:
A float32
B float64
C float64
dtype: object
In [422]: casted
Out[422]:
A B C
0 1.047606 0.256090 NaN
1 3.497968 1.426469 NaN
2 NaN NaN 255.0
3 NaN 1.139976 NaN
4 NaN NaN NaN
5 1.346426 0.096706 NaN
6 NaN NaN 1.0
7 NaN NaN NaN
In [423]: casted.dtypes
Out[423]:
A float32
B float64
C float64
dtype: object
In [429]: df
Out[429]:
string int64 uint8 float64 bool1 bool2 dates category
˓→tdeltas uint64 other_dates tz_aware_dates
0 a 1 3 4.0 True False 2020-03-18 15:38:47.007134 A
˓→NaT 3 2013-01-01 2013-01-01 00:00:00-05:00
1 b 2 4 5.0 False True 2020-03-19 15:38:47.007134 B 1
˓→days 4 2013-01-02 2013-01-02 00:00:00-05:00
2 c 3 5 6.0 True False 2020-03-20 15:38:47.007134 C 1
˓→days 5 2013-01-03 2013-01-03 00:00:00-05:00
In [430]: df.dtypes
Out[430]:
string object
int64 int64
uint8 uint8
float64 float64
bool1 bool
bool2 bool
dates datetime64[ns]
category category
tdeltas timedelta64[ns]
uint64 uint64
other_dates datetime64[ns]
tz_aware_dates datetime64[ns, US/Eastern]
dtype: object
select_dtypes() has two parameters include and exclude that allow you to say “give me the columns with
these dtypes” (include) and/or “give the columns without these dtypes” (exclude).
For example, to select bool columns:
In [431]: df.select_dtypes(include=[bool])
Out[431]:
bool1 bool2
0 True False
1 False True
2 True False
You can also pass the name of a dtype in the NumPy dtype hierarchy:
In [432]: df.select_dtypes(include=['bool'])
Out[432]:
bool1 bool2
0 True False
1 False True
2 True False
In [434]: df.select_dtypes(include=['object'])
Out[434]:
string
0 a
1 b
2 c
To see all the child dtypes of a generic dtype like numpy.number you can define a function that returns a tree of
child dtypes:
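One possible implementation of such a function:

def subdtypes(dtype):
    # recursively collect the subclasses of a NumPy scalar type
    subs = dtype.__subclasses__()
    if not subs:
        return dtype
    return [dtype, [subdtypes(dt) for dt in subs]]

subdtypes(np.generic)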
Note: Pandas also defines the types category, and datetime64[ns, tz], which are not integrated into the
normal NumPy hierarchy and won’t show up with the above function.
We’ll start with a quick, non-comprehensive overview of the fundamental data structures in pandas to get you started.
The fundamental behavior about data types, indexing, and axis labeling / alignment apply across all of the objects. To
get started, import NumPy and load pandas into your namespace:
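The customary imports are:

import numpy as np
import pandas as pd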
Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken
unless done so explicitly by you.
We’ll give a brief intro to the data structures, then consider all of the broad categories of functionality and methods in
separate sections.
Series
Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers,
Python objects, etc.). The axis labels are collectively referred to as the index. The basic method to create a Series is
to call:
>>> s = pd.Series(data, index=index)
In [4]: s
Out[4]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 1.212112
dtype: float64
In [5]: s.index
Out[5]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.173215
1 0.119209
2 -1.044236
3 -0.861849
4 -2.104569
dtype: float64
Note: pandas supports non-unique index values. If an operation that does not support duplicate index values is
attempted, an exception will be raised at that time. The reason for being lazy is nearly all performance-based (there
are many instances in computations, like parts of GroupBy, where the index is not used).
From dict
Series can be instantiated from dicts:
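The dict d used below is not shown above; judging from the output it is simply:

d = {'b': 1, 'a': 0, 'c': 2}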
In [8]: pd.Series(d)
Out[8]:
b 1
a 0
c 2
dtype: int64
Note: When the data is a dict, and an index is not passed, the Series index will be ordered by the dict’s insertion
order, if you’re using Python version >= 3.6 and Pandas version >= 0.23.
If you’re using Python < 3.6 or Pandas < 0.23, and an index is not passed, the Series index will be the lexically
ordered list of dict keys.
In the example above, if you were on a Python version lower than 3.6 or a Pandas version lower than 0.23, the Series
would be ordered by the lexical order of the dict keys (i.e. ['a', 'b', 'c'] rather than ['b', 'a', 'c']).
If an index is passed, the values in data corresponding to the labels in the index will be pulled out.
In [10]: pd.Series(d)
Out[10]:
a 0.0
b 1.0
c 2.0
dtype: float64
Note: NaN (not a number) is the standard missing data marker used in pandas.
Series is ndarray-like
Series acts very similarly to an ndarray and is a valid argument to most NumPy functions. However, operations
such as slicing will also slice the index.
In [13]: s[0]
Out[13]: 0.4691122999071863
In [14]: s[:3]
Out[14]:
a 0.469112
b -0.282863
c -1.509059
dtype: float64
In [17]: np.exp(s)
Out[17]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 3.360575
dtype: float64
In [18]: s.dtype
Out[18]: dtype('float64')
This is often a NumPy dtype. However, pandas and 3rd-party libraries extend NumPy’s type system in a few places,
in which case the dtype would be an ExtensionDtype. Some examples within pandas are Categorical data and
Nullable integer data type. See dtypes for more.
If you need the actual array backing a Series, use Series.array.
In [19]: s.array
Out[19]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64
Accessing the array can be useful when you need to do some operation without the index (to disable automatic
alignment, for example).
Series.array will always be an ExtensionArray. Briefly, an ExtensionArray is a thin wrapper around one
or more concrete arrays like a numpy.ndarray. Pandas knows how to take an ExtensionArray and store it in
a Series or a column of a DataFrame. See dtypes for more.
While Series is ndarray-like, if you need an actual ndarray, then use Series.to_numpy().
In [20]: s.to_numpy()
Out[20]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
Even if the Series is backed by an ExtensionArray, Series.to_numpy() will return a NumPy ndarray.
Series is dict-like
A Series is like a fixed-size dict in that you can get and set values by index label:
In [21]: s['a']
Out[21]: 0.4691122999071863
In [23]: s
Out[23]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 12.000000
dtype: float64
In [24]: 'e' in s
Out[24]: True
In [25]: 'f' in s
Out[25]: False
If a label is not contained in the index, an exception is raised:
>>> s['f']
KeyError: 'f'
Using the get method, a missing label will return None or specified default:
In [26]: s.get('f')
When working with raw NumPy arrays, looping through value-by-value is usually not necessary. The same is true
when working with Series in pandas. Series can also be passed into most NumPy methods expecting an ndarray.
In [28]: s + s
Out[28]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [29]: s * 2
Out[29]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [30]: np.exp(s)
Out[30]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 162754.791419
dtype: float64
A key difference between Series and ndarray is that operations between Series automatically align the data based on
label. Thus, you can write computations without giving consideration to whether the Series involved have the same
labels.
The result of an operation between unaligned Series will have the union of the indexes involved. If a label is not found
in one Series or the other, the result will be marked as missing NaN. Being able to write code without doing any explicit
data alignment grants immense freedom and flexibility in interactive data analysis and research. The integrated data
alignment features of the pandas data structures set pandas apart from the majority of related tools for working with
labeled data.
Note: In general, we chose to make the default result of operations between differently indexed objects yield the
union of the indexes in order to avoid loss of information. Having an index label, though the data is missing, is
typically important information as part of a computation. You of course have the option of dropping labels with
missing data via the dropna function.
Name attribute
In [33]: s
Out[33]:
0 -0.494929
1 1.071804
2 0.721555
3 -0.706771
4 -1.039575
Name: something, dtype: float64
In [34]: s.name
Out[34]: 'something'
The Series name will be assigned automatically in many cases, in particular when taking 1D slices of DataFrame as
you will see below.
You can rename a Series with the pandas.Series.rename() method.
In [35]: s2 = s.rename("different")
In [36]: s2.name
Out[36]: 'different'
DataFrame
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it
like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object.
Like Series, DataFrame accepts many different kinds of input:
• Dict of 1D ndarrays, lists, dicts, or Series
• 2-D numpy.ndarray
• Structured or record ndarray
• A Series
• Another DataFrame
Along with the data, you can optionally pass index (row labels) and columns (column labels) arguments. If you pass
an index and / or columns, you are guaranteeing the index and / or columns of the resulting DataFrame. Thus, a dict
of Series plus a specific index will discard all data not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data based on common sense rules.
Note: When the data is a dict, and columns is not specified, the DataFrame columns will be ordered by the dict’s
insertion order, if you are using Python version >= 3.6 and Pandas >= 0.23.
If you are using Python < 3.6 or Pandas < 0.23, and columns is not specified, the DataFrame columns will be the
lexically ordered list of dict keys.
The resulting index will be the union of the indexes of the various Series. If there are any nested dicts, these will first
be converted to Series. If no columns are passed, the columns will be the ordered list of dict keys.
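The dict d used below did not survive extraction; a definition consistent with the output is:

d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}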
In [38]: df = pd.DataFrame(d)
In [39]: df
Out[39]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
The row and column labels can be accessed respectively by accessing the index and columns attributes:
Note: When a particular set of columns is passed along with a dict of data, the passed columns override the keys in
the dict.
In [42]: df.index
Out[42]: Index(['a', 'b', 'c', 'd'], dtype='object')
The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.
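The dict of lists used below is assumed to be:

d = {'one': [1., 2., 3., 4.],
     'two': [4., 3., 2., 1.]}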
In [45]: pd.DataFrame(d)
Out[45]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0
In [49]: pd.DataFrame(data)
Out[49]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'
Note: DataFrame is not intended to work exactly like a 2-dimensional NumPy ndarray.
In [52]: data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
In [53]: pd.DataFrame(data2)
Out[53]:
a b c
0 1 2 NaN
1 5 10 20.0
From a dict of tuples
From a Series
The result will be a DataFrame with the same index as the input Series, and with one column whose name is the
original name of the Series (only if no other column name provided).
Missing data
Much more will be said on this topic in the Missing data section. To construct a DataFrame with missing data, we use
np.nan to represent missing values. Alternatively, you may pass a numpy.MaskedArray as the data argument to
the DataFrame constructor, and its masked entries will be considered missing.
Alternate constructors
DataFrame.from_dict
DataFrame.from_dict takes a dict of dicts or a dict of array-like sequences and returns a DataFrame. It operates
like the DataFrame constructor except for the orient parameter which is 'columns' by default, but which can
be set to 'index' in order to use the dict keys as row labels.
In [57]: pd.DataFrame.from_dict(dict([('A', [1, 2, 3]), ('B', [4, 5, 6])]))
Out[57]:
A B
0 1 4
1 2 5
2 3 6
If you pass orient='index', the keys will be the row labels. In this case, you can also pass the desired column
names:
In [58]: pd.DataFrame.from_dict(dict([('A', [1, 2, 3]), ('B', [4, 5, 6])]),
....: orient='index', columns=['one', 'two', 'three'])
....:
Out[58]:
one two three
A 1 2 3
B 4 5 6
DataFrame.from_records
DataFrame.from_records takes a list of tuples or an ndarray with structured dtype. It works analogously to the
normal DataFrame constructor, except that the resulting DataFrame index may be a specific field of the structured
dtype. For example:
In [59]: data
Out[59]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])
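A sketch of the corresponding call, using the C field as the index:

pd.DataFrame.from_records(data, index='C')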
You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting
columns works with the same syntax as the analogous dict operations:
In [61]: df['one']
Out[61]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
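The column additions producing the frame below are assumed to be:

df['three'] = df['one'] * df['two']
df['flag'] = df['one'] > 2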
In [64]: df
Out[64]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False
In [67]: df
Out[67]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False
When inserting a scalar value, it will naturally be propagated to fill the column:
In [68]: df['foo'] = 'bar'
In [69]: df
Out[69]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar
When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame’s
index:
In [71]: df
Out[71]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN
You can insert raw ndarrays but their length must match the length of the DataFrame’s index.
By default, columns get inserted at the end. The insert function is available to insert at a particular location in the
columns:
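A sketch of the corresponding call:

df.insert(1, 'bar', df['one'])   # insert a copy of 'one' at position 1, named 'bar'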
In [73]: df
Out[73]:
one bar flag foo one_trunc
a 1.0 1.0 False bar 1.0
b 2.0 2.0 False bar 2.0
c 3.0 3.0 True bar NaN
d NaN NaN False bar NaN
Inspired by dplyr’s mutate verb, DataFrame has an assign() method that allows you to easily create new columns
that are potentially derived from existing columns.
In [75]: iris.head()
Out[75]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
In the example above, we inserted a precomputed value. We can also pass in a function of one argument to be evaluated
on the DataFrame being assigned to.
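Neither assign call survived extraction; sketches consistent with the surrounding text and the output below are:

# precomputed value (the "example above")
iris.assign(sepal_ratio=iris['SepalWidth'] / iris['SepalLength']).head()

# a function of one argument, evaluated on the DataFrame being assigned to
iris.assign(sepal_ratio=lambda x: (x['SepalWidth'] / x['SepalLength'])).head()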
Out[77]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
assign always returns a copy of the data, leaving the original DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is useful when you don’t have a reference to the
DataFrame at hand. This is common when using assign in a chain of operations. For example, we can limit the
DataFrame to just those observations with a Sepal Length greater than 5, calculate the ratio, and plot:
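A sketch of such a chain (the exact new column names are assumptions):

(iris.query('SepalLength > 5')
     .assign(SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
             PetalRatio=lambda x: x.PetalWidth / x.PetalLength)
     .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))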
Since a function is passed in, the function is computed on the DataFrame being assigned to. Importantly, this is the
DataFrame that’s been filtered to those rows with sepal length greater than 5. The filtering happens first, and then the
ratio calculations. This is an example where we didn’t have a reference to the filtered DataFrame available.
The function signature for assign is simply **kwargs. The keys are the column names for the new fields, and the
values are either a value to be inserted (for example, a Series or NumPy array), or a function of one argument to be
called on the DataFrame. A copy of the original DataFrame is returned, with the new values inserted.
Changed in version 0.23.0.
Starting with Python 3.6 the order of **kwargs is preserved. This allows for dependent assignment, where an
expression later in **kwargs can refer to a column created earlier in the same assign().
In the second expression, x['C'] will refer to the newly created column, which is equal to dfa['A'] + dfa['B'].
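Assuming dfa is a small DataFrame with columns A and B, the two-expression assign referred to here would look like:

dfa.assign(C=lambda x: x['A'] + x['B'],
           D=lambda x: x['A'] + x['C'])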
Indexing / selection
Row selection, for example, returns a Series whose index is the columns of the DataFrame:
In [81]: df.loc['b']
Out[81]:
one 2
bar 2
flag False
foo bar
one_trunc 2
Name: b, dtype: object
In [82]: df.iloc[2]
Out[82]:
one 3
bar 3
flag True
foo bar
one_trunc NaN
Name: c, dtype: object
For a more exhaustive treatment of sophisticated label-based indexing and slicing, see the section on indexing. We
will address the fundamentals of reindexing / conforming to new sets of labels in the section on reindexing.
Data alignment between DataFrame objects automatically aligns on both the columns and the index (row labels).
Again, the resulting object will have the union of the column and row labels.
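The two frames being added below are not constructed in this excerpt; frames with shapes consistent with the output
(a 10x4 frame and a 7x3 frame sharing columns A, B, and C) could be built as:

df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])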
In [85]: df + df2
Out[85]:
A B C D
0 0.045691 -0.014138 1.380871 NaN
1 -0.955398 -1.501007 0.037181 NaN
2 -0.662690 1.534833 -0.859691 NaN
3 -2.452949 1.237274 -0.133712 NaN
4 1.414490 1.951676 -2.320422 NaN
5 -0.494922 -1.649727 -1.084601 NaN
6 -1.047551 -0.748572 -0.805479 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
When doing an operation between DataFrame and Series, the default behavior is to align the Series index on the
DataFrame columns, thus broadcasting row-wise. For example:
In [86]: df - df.iloc[0]
Out[86]:
A B C D
0 0.000000 0.000000 0.000000 0.000000
1 -1.359261 -0.248717 -0.453372 -1.754659
2 0.253128 0.829678 0.010026 -1.991234
3 -1.311128 0.054325 -1.724913 -1.620544
4 0.573025 1.500742 -0.676070 1.367331
5 -1.741248 0.781993 -1.241620 -2.053136
6 -1.240774 -0.869551 -0.153282 0.000430
7 -0.743894 0.411013 -0.929563 -0.282386
8 -1.194921 1.320690 0.238224 -1.482644
9 2.293786 1.856228 0.773289 -1.446531
In the special case of working with time series data, if the DataFrame index contains dates, the broadcasting will be
column-wise:
In [89]: df
Out[89]:
A B C
2000-01-01 -1.226825 0.769804 -1.281247
2000-01-02 -0.727707 -0.121306 -0.097883
2000-01-03 0.695775 0.341734 0.959726
2000-01-04 -1.110336 -0.619976 0.149748
2000-01-05 -0.732339 0.687738 0.176444
2000-01-06 0.403310 -0.154951 0.301624
2000-01-07 -2.179861 -1.369849 -0.954208
In [90]: type(df['A'])
Out[90]: pandas.core.series.Series
In [91]: df - df['A']
Out[91]:
            2000-01-01 00:00:00  2000-01-02 00:00:00  ...    A    B    C
2000-01-01                  NaN                  NaN  ...  NaN  NaN  NaN
[8 rows x 11 columns]
Warning: df - df['A'] is now deprecated and will be removed in a future release. The preferred way to replicate
this behavior is df.sub(df['A'], axis=0).
For explicit control over the matching and broadcasting behavior, see the section on flexible binary operations.
Operations with scalars are just as you would expect:
In [92]: df * 5 + 2
Out[92]:
A B C
2000-01-01 -4.134126 5.849018 -4.406237
2000-01-02 -1.638535 1.393469 1.510587
2000-01-03 5.478873 3.708672 6.798628
In [93]: 1 / df
Out[93]:
A B C
2000-01-01 -0.815112 1.299033 -0.780489
2000-01-02 -1.374179 -8.243600 -10.216313
2000-01-03 1.437247 2.926250 1.041965
2000-01-04 -0.900628 -1.612966 6.677871
2000-01-05 -1.365487 1.454041 5.667510
2000-01-06 2.479485 -6.453662 3.315381
2000-01-07 -0.458745 -0.730007 -1.047990
2000-01-08 0.683669 -0.573671 -1.209788
In [94]: df ** 4
Out[94]:
A B C
2000-01-01 2.265327 0.351172 2.694833
2000-01-02 0.280431 0.000217 0.000092
2000-01-03 0.234355 0.013638 0.848376
2000-01-04 1.519910 0.147740 0.000503
2000-01-05 0.287640 0.223714 0.000969
2000-01-06 0.026458 0.000576 0.008277
2000-01-07 22.579530 3.521204 0.829033
2000-01-08 4.577374 9.233151 0.466834
Transposing
To transpose, access the T attribute (also the transpose function), similar to an ndarray:
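The transpose example itself is missing from this excerpt; a minimal sketch would be:

df[:5].T   # only the first five rows, to keep the display narrow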
Elementwise NumPy ufuncs (log, exp, sqrt, . . . ) and various other NumPy functions can be used with no issues on
Series and DataFrame, assuming the data within are numeric:
In [102]: np.exp(df)
Out[102]:
A B C
2000-01-01 0.293222 2.159342 0.277691
2000-01-02 0.483015 0.885763 0.906755
2000-01-03 2.005262 1.407386 2.610980
2000-01-04 0.329448 0.537957 1.161542
2000-01-05 0.480783 1.989212 1.192968
2000-01-06 1.496770 0.856457 1.352053
2000-01-07 0.113057 0.254145 0.385117
2000-01-08 4.317584 0.174966 0.437538
In [103]: np.asarray(df)
Out[103]:
array([[-1.2268, 0.7698, -1.2812],
[-0.7277, -0.1213, -0.0979],
[ 0.6958, 0.3417, 0.9597],
[-1.1103, -0.62 , 0.1497],
[-0.7323, 0.6877, 0.1764],
[ 0.4033, -0.155 , 0.3016],
[-2.1799, -1.3698, -0.9542],
[ 1.4627, -1.7432, -0.8266]])
DataFrame is not intended to be a drop-in replacement for ndarray as its indexing semantics and data model are quite
different in places from an n-dimensional array.
Series implements __array_ufunc__, which allows it to work with NumPy’s universal functions.
The ufunc is applied to the underlying array in a Series.
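The Series ser used below is not defined in this excerpt; a construction consistent with the output would be:

ser = pd.Series([1, 2, 3, 4])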
In [105]: np.exp(ser)
Out[105]:
0 2.718282
1 7.389056
2 20.085537
3 54.598150
dtype: float64
Changed in version 0.25.0: When multiple Series are passed to a ufunc, they are aligned before performing the
operation.
Like other parts of the library, pandas will automatically align labeled inputs as part of a ufunc with multiple inputs.
For example, using numpy.remainder() on two Series with differently ordered labels will align before the
operation.
In [108]: ser1
Out[108]:
a 1
b 2
c 3
dtype: int64
In [109]: ser2
Out[109]:
b 1
a 3
c 5
dtype: int64
As usual, the union of the two indices is taken, and non-overlapping values are filled with missing values.
In [112]: ser3
Out[112]:
b 2
c 4
d 6
dtype: int64
When a binary ufunc is applied to a Series and Index, the Series implementation takes precedence and a Series is
returned.
In [114]: ser = pd.Series([1, 2, 3])
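The call that follows In [114] is not shown; a sketch of applying a binary ufunc to a Series and an Index (the Index
values here are illustrative) would be:

idx = pd.Index([4, 5, 6])
np.maximum(ser, idx)   # the Series implementation takes precedence, so a Series is returned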
NumPy ufuncs are safe to apply to Series backed by non-ndarray arrays, for example arrays.SparseArray
(see Sparse calculation). If possible, the ufunc is applied without converting the underlying data to an ndarray.
Console display
Very large DataFrames will be truncated to display them in the console. You can also get a summary using info().
(Here I am reading a CSV version of the baseball dataset from the plyr R package):
In [117]: baseball = pd.read_csv('data/baseball.csv')
In [118]: print(baseball)
       id     player  year  stint team  lg    g   ab   r    h  X2b  X3b  hr   rbi   sb   cs  bb    so  ibb  hbp   sh   sf  gidp
0   88641  womacto01  2006      2  CHN  NL   19   50   6   14    1    0   1   2.0  1.0  1.0   4   4.0  0.0  0.0  3.0  0.0   0.0
1   88643  schilcu01  2006      1  BOS  AL   31    2   0    1    0    0   0   0.0  0.0  0.0   0   1.0  0.0  0.0  0.0  0.0   0.0
..    ...        ...   ...    ...  ...  ..   ..  ...  ..  ...  ...  ...  ..   ...  ...  ...  ..   ...  ...  ...  ...  ...   ...
98  89533   aloumo01  2007      1  NYN  NL   87  328  51  112   19    1  13  49.0  3.0  0.0  27  30.0  5.0  2.0  0.0  3.0  13.0
99  89534  alomasa02  2007      1  NYN  NL    8   22   1    3    1    0   0   0.0  0.0  0.0   0   3.0  0.0  0.0  0.0  0.0   0.0
In [119]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100 entries, 0 to 99
Data columns (total 23 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 100 non-null int64
1 player 100 non-null object
2 year 100 non-null int64
3 stint 100 non-null int64
4 team 100 non-null object
However, using to_string will return a string representation of the DataFrame in tabular form, though it won’t
always fit the console width:
In [120]: print(baseball.iloc[-20:, :12].to_string())
id player year stint team lg g ab r h X2b X3b
80 89474 finlest01 2007 1 COL NL 43 94 9 17 3 0
81 89480 embreal01 2007 1 OAK AL 4 0 0 0 0 0
82 89481 edmonji01 2007 1 SLN NL 117 365 39 92 15 2
83 89482 easleda01 2007 1 NYN NL 76 193 24 54 6 0
84 89489 delgaca01 2007 1 NYN NL 139 538 71 139 30 0
85 89493 cormirh01 2007 1 CIN NL 6 0 0 0 0 0
86 89494 coninje01 2007 2 NYN NL 21 41 2 8 2 0
87 89495 coninje01 2007 1 CIN NL 80 215 23 57 11 1
88 89497 clemero02 2007 1 NYA AL 2 2 0 1 0 0
89 89498 claytro01 2007 2 BOS AL 8 6 1 0 0 0
90 89499 claytro01 2007 1 TOR AL 69 189 23 48 14 0
91 89501 cirilje01 2007 2 ARI NL 28 40 6 8 4 0
92 89502 cirilje01 2007 1 MIN AL 50 153 18 40 9 2
93 89521 bondsba01 2007 1 SFN NL 126 340 75 94 14 0
94 89523 biggicr01 2007 1 HOU NL 141 517 68 130 31 3
95 89525 benitar01 2007 2 FLO NL 34 0 0 0 0 0
96 89526 benitar01 2007 1 SFN NL 19 0 0 0 0 0
97 89530 ausmubr01 2007 1 HOU NL 117 349 38 82 16 3
98 89533 aloumo01 2007 1 NYN NL 87 328 51 112 19 1
99 89534 alomasa02 2007 1 NYN NL 8 22 1 3 1 0
You can change how much to print on a single row by setting the display.width option. You can adjust the max
width of the individual columns by setting display.max_colwidth:
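The option calls and the datafile dictionary are not shown in this excerpt; a sketch consistent with the two outputs
below (the width values here are illustrative) would be:

datafile = {'filename': ['filename_01', 'filename_02'],
            'path': ['media/user_name/storage/folder_01/filename_01',
                     'media/user_name/storage/folder_02/filename_02']}
pd.set_option('display.width', 40)          # controls how much is printed per row
pd.set_option('display.max_colwidth', 30)   # truncated paths, as in Out[126]
pd.set_option('display.max_colwidth', 100)  # full paths, as in Out[128]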
In [126]: pd.DataFrame(datafile)
Out[126]:
filename path
0 filename_01 media/user_name/storage/fo...
1 filename_02 media/user_name/storage/fo...
In [128]: pd.DataFrame(datafile)
Out[128]:
filename path
0 filename_01 media/user_name/storage/folder_01/filename_01
1 filename_02 media/user_name/storage/folder_02/filename_02
You can also disable this feature via the expand_frame_repr option. This will print the table in one block.
If a DataFrame column label is a valid Python variable name, the column can be accessed like an attribute:
In [130]: df
Out[130]:
foo1 foo2
0 1.171216 -0.858447
In [131]: df.foo1
Out[131]:
0 1.171216
1 0.520260
2 -1.197071
3 -1.066969
4 -0.303421
Name: foo1, dtype: float64
The columns are also connected to the IPython completion mechanism so they can be tab-completed:
Since pandas aims to provide a lot of the data manipulation and analysis functionality that people use R for, this
page was started to provide a more detailed look at the R language and its many third party libraries as they relate to
pandas. In comparisons with R and CRAN libraries, we care about the following things:
• Functionality / flexibility: what can/cannot be done with each tool
• Performance: how fast are operations. Hard numbers/benchmarks are preferable
• Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code
comparisons)
This page is also here to offer a bit of a translation guide for users of these R packages.
For transfer of DataFrame objects from pandas to R, one option is to use HDF5 files, see External compatibility
for an example.
Quick reference
We’ll start off with a quick reference guide pairing some common R operations using dplyr with pandas equivalents.
R pandas
dim(df) df.shape
head(df) df.head()
slice(df, 1:10) df.iloc[:9]
filter(df, col1 == 1, col2 == 1) df.query('col1 == 1 & col2 == 1')
df[df$col1 == 1 & df$col2 == 1,] df[(df.col1 == 1) & (df.col2 == 1)]
select(df, col1, col2) df[['col1', 'col2']]
select(df, col1:col3) df.loc[:, 'col1':'col3']
select(df, -(col1:col3)) df.drop(cols_to_drop, axis=1) but see1
distinct(select(df, col1)) df[['col1']].drop_duplicates()
distinct(select(df, col1, col2)) df[['col1', 'col2']].drop_duplicates()
sample_n(df, 10) df.sample(n=10)
sample_frac(df, 0.01) df.sample(frac=0.01)
Sorting
R pandas
arrange(df, col1, col2) df.sort_values(['col1', 'col2'])
arrange(df, desc(col1)) df.sort_values('col1', ascending=False)
Transforming
R pandas
select(df, col_one = col1)    df.rename(columns={'col1': 'col_one'})['col_one']
rename(df, col_one = col1)    df.rename(columns={'col1': 'col_one'})
mutate(df, c=a-b) df.assign(c=df['a']-df['b'])
R pandas
summary(df) df.describe()
gdf <- group_by(df, col1) gdf = df.groupby('col1')
summarise(gdf, avg=mean(col1, na.rm=TRUE))    df.groupby('col1').agg({'col1': 'mean'})
summarise(gdf, total=sum(col1)) df.groupby('col1').sum()
1 R’s shorthand for a subrange of columns (select(df, col1:col3)) can be approached cleanly in pandas, if you have the list of columns,
for example df[cols[1:3]] or df.drop(cols[1:3]), but doing this by column name is a bit messy.
Base R
or by integer location
df <- data.frame(matrix(rnorm(1000), ncol=100))
df[, c(1:10, 25:30, 40, 50:100)]
Selecting multiple noncontiguous columns by integer location can be achieved with a combination of the iloc indexer
attribute and numpy.r_.
In [4]: named = list('abcdefg')
In [5]: n = 30
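The remainder of the example is missing from this excerpt; a sketch of noncontiguous integer-location selection with
iloc and numpy.r_, where the frame construction is an assumption, would be:

df = pd.DataFrame(np.random.randn(n, n),
                  columns=named + list(range(len(named), n)))
df.iloc[:, np.r_[:10, 24:30]]   # integer positions 0-9 and 24-29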
aggregate
In R you may want to split data into subsets and compute the mean for each. Using a data.frame called df and splitting
it into groups by1 and by2:
df <- data.frame(
v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99),
by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12),
by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA))
aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean)
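The pandas equivalent is not shown here; assuming the same data has been loaded into a pandas DataFrame df, a
sketch using groupby() would be:

g = df.groupby(['by1', 'by2'])
g[['v1', 'v2']].mean()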
match / %in%
A common way to select data in R is using %in% which is defined using the function match. The operator %in% is
used to return a logical vector indicating if there is a match or not:
s <- 0:4
s %in% c(2,4)
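In pandas the analogous check (not shown in this excerpt) uses the isin() method; a sketch:

s = pd.Series(np.arange(5))
s.isin([2, 4])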
The match function returns a vector of the positions of matches of its first argument in its second:
s <- 0:4
match(s, c(2,4))
tapply
tapply is similar to aggregate, but data can be in a ragged array, since the subclass sizes are possibly irregular.
Using a data.frame called baseball, and retrieving information based on the array team:
baseball <-
data.frame(team = gl(5, 5,
labels = paste("Team", LETTERS[1:5])),
player = sample(letters, 25),
batting.average = runif(25, .200, .400))
tapply(baseball$batting.average, baseball$team, max)
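The pandas counterpart is missing from this excerpt; a sketch using pivot_table(), with column names assumed to
mirror the R example, would be:

baseball.pivot_table(values='batting.average', columns='team', aggfunc='max')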
subset
The query() method is similar to the base R subset function. In R you might want to get the rows of a data.
frame where one column’s values are less than another column’s values:
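The corresponding pandas code is not shown; assuming columns a and b, a sketch would be:

df.query('a <= b')        # query() form
df[df['a'] <= df['b']]    # plain boolean indexing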
with
An expression using a data.frame called df in R with the columns a and b would be evaluated using with like so:
In pandas the equivalent expression, using the eval() method, would be:
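The expression itself is missing here; assuming columns a and b, a sketch would be:

df.eval('a + b')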
In certain cases eval() will be much faster than evaluation in pure Python. For more details and examples see the
eval documentation.
plyr
plyr is an R library for the split-apply-combine strategy for data analysis. The functions revolve around three data
structures in R, a for arrays, l for lists, and d for data.frame. The table below shows how these data
structures could be mapped in Python.
R Python
array list
lists dictionary or list of objects
data.frame dataframe
ddply
require(plyr)
df <- data.frame(
x = runif(120, 1, 168),
y = runif(120, 7, 334),
z = runif(120, 1.7, 20.7),
month = rep(c(5,6,7,8),30),
week = sample(1:4, 120, TRUE)
)
In pandas the equivalent expression, using the groupby() method, would be:
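The groupby() expression is not reproduced in this excerpt; assuming a pandas DataFrame df with the same columns
as the R frame above, a sketch aggregating x by month and week would be:

df.groupby(['month', 'week'])['x'].agg(['mean', 'std'])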
reshape / reshape2
melt.array
An expression using a 3 dimensional array called a in R where you want to melt it into a data.frame:
melt.list
An expression using a list called a in R where you want to melt it into a data.frame:
In Python, this list would be a list of tuples, so the DataFrame() constructor would convert it to a DataFrame as required.
In [31]: pd.DataFrame(a)
Out[31]:
0 1
0 0 1.0
1 1 2.0
2 2 3.0
3 3 4.0
4 4 NaN
For more details and examples see the Intro to Data Structures documentation.
melt.data.frame
An expression using a data.frame called cheese in R where you want to reshape the data.frame:
cast
In R, acast is an expression using a data.frame called df to cast it into a higher-dimensional array:
df <- data.frame(
x = runif(12, 1, 168),
y = runif(12, 7, 334),
z = runif(12, 1.7, 20.7),
month = rep(c(5,6,7),4),
week = rep(c(1,2), 6)
)
Similarly for dcast which uses a data.frame called df in R to aggregate information based on Animal and
FeedType:
df <- data.frame(
Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
'Animal2', 'Animal3'),
FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'),
Amount = c(10, 7, 4, 2, 5, 6, 2)
)
Python can approach this in two different ways. Firstly, similar to above using pivot_table():
In [38]: df = pd.DataFrame({
....: 'Animal': ['Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
....: 'Animal2', 'Animal3'],
....: 'FeedType': ['A', 'B', 'A', 'A', 'B', 'B', 'A'],
....: 'Amount': [10, 7, 4, 2, 5, 6, 2],
....: })
....:
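The two pandas approaches themselves are missing from this excerpt; sketches of both would be:

# first approach: pivot_table, aggregating Amount by Animal and FeedType
df.pivot_table(values='Amount', index='Animal', columns='FeedType', aggfunc='sum')
# second approach: groupby followed by unstack
df.groupby(['Animal', 'FeedType'])['Amount'].sum().unstack()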
For more details and examples see the reshaping documentation or the groupby documentation.
factor
cut(c(1,2,3,4,5,6), 3)
factor(c(1,2,3,2,2,3))
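The pandas equivalents are missing here; sketches would be:

pd.cut(pd.Series([1, 2, 3, 4, 5, 6]), 3)           # like R's cut()
pd.Series([1, 2, 3, 2, 2, 3]).astype('category')   # like R's factor()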
For more details and examples see the categorical introduction and the API documentation. There is also
documentation regarding the differences from R's factor.
Since many potential pandas users have some familiarity with SQL, this page is meant to provide some examples of
how various SQL operations would be performed using pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows:
Most of the examples will utilize the tips dataset found within pandas tests. We’ll read the data into a DataFrame
called tips and assume we have a database table of the same name and structure.
In [5]: tips.head()
Out[5]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
SELECT
In SQL, selection is done using a comma-separated list of columns you’d like to select (or a * to select all columns):
With pandas, column selection is done by passing a list of column names to your DataFrame:
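Neither the SQL statement nor the pandas example survives in this excerpt; a sketch of the pandas side, selecting a
few columns from tips, would be:

tips[['total_bill', 'tip', 'smoker', 'time']].head(5)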
Calling the DataFrame without the list of column names would display all columns (akin to SQL’s *).
In SQL, you can add a calculated column:
With pandas, you can use the DataFrame.assign() method of a DataFrame to append a new column:
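The example is missing here; a sketch computing a hypothetical tip_rate column would be:

tips.assign(tip_rate=tips['tip'] / tips['total_bill']).head(5)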
WHERE
SELECT *
FROM tips
WHERE time = 'Dinner'
LIMIT 5;
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.
The above statement is simply passing a Series of True/False objects to the DataFrame, returning all rows with
True.
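The construction of is_dinner is missing above; it is presumably a boolean Series such as:

is_dinner = tips['time'] == 'Dinner'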
In [10]: is_dinner.value_counts()
Out[10]:
True 176
False 68
Name: time, dtype: int64
In [11]: tips[is_dinner].head(5)
Out[11]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Just like SQL’s OR and AND, multiple conditions can be passed to a DataFrame using | (OR) and & (AND).
-- tips by parties of at least 5 diners OR bill total was more than $45
SELECT *
FROM tips
WHERE size >= 5 OR total_bill > 45;
# tips by parties of at least 5 diners OR bill total was more than $45
In [13]: tips[(tips['size'] >= 5) | (tips['total_bill'] > 45)]
Out[13]:
total_bill tip sex smoker day time size
59 48.27 6.73 Male No Sat Dinner 4
125 29.80 4.20 Female No Thur Lunch 6
141 34.30 6.70 Male No Thur Lunch 6
142 41.19 5.00 Male No Thur Lunch 5
143 27.05 5.00 Female No Thur Lunch 6
155 29.85 5.14 Female No Sun Dinner 5
156 48.17 5.00 Male No Sun Dinner 6
170 50.81 10.00 Male Yes Sat Dinner 3
182 45.35 3.50 Male Yes Sun Dinner 3
185 20.69 5.00 Male No Sun Dinner 5
187 30.46 2.00 Male Yes Sun Dinner 5
212 48.33 9.00 Male No Sat Dinner 4
216 28.15 3.00 Male Yes Sat Dinner 5
In [15]: frame
Out[15]:
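The construction and display of frame are cut off here; a definition consistent with the outputs further below would be:

frame = pd.DataFrame({'col1': ['A', 'B', np.nan, 'C', 'D'],
                      'col2': ['F', np.nan, 'G', 'H', 'I']})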
Assume we have a table of the same structure as our DataFrame above. We can see only the records where col2 IS
NULL with the following query:
SELECT *
FROM frame
WHERE col2 IS NULL;
In [16]: frame[frame['col2'].isna()]
Out[16]:
col1 col2
1 B NaN
Getting items where col1 IS NOT NULL can be done with notna().
SELECT *
FROM frame
WHERE col1 IS NOT NULL;
In [17]: frame[frame['col1'].notna()]
Out[17]:
col1 col2
0 A F
1 B NaN
3 C H
4 D I
GROUP BY
In pandas, SQL’s GROUP BY operations are performed using the similarly named groupby() method.
groupby() typically refers to a process where we'd like to split a dataset into groups, apply some function (typically
an aggregation), and then combine the groups together.
A common SQL operation would be getting the count of records in each group throughout a dataset. For instance, a
query getting us the number of tips left by sex:
SELECT sex, count(*)
FROM tips
GROUP BY sex;
/*
Female 87
Male 157
*/
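The pandas equivalent is not shown in this excerpt; a sketch would be:

tips.groupby('sex').size()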
Notice that in the pandas code we used size() and not count(). This is because count() applies the function
to each column, returning the number of not null records within each.
In [19]: tips.groupby('sex').count()
Out[19]:
total_bill tip smoker day time size
sex
Female 87 87 87 87 87 87
Male 157 157 157 157 157 157
Multiple functions can also be applied at once. For instance, say we’d like to see how tip amount differs by day of
the week - agg() allows you to pass a dictionary to your grouped DataFrame, indicating which functions to apply to
specific columns.
SELECT day, AVG(tip), COUNT(*)
FROM tips
GROUP BY day;
/*
Fri 2.734737 19
Sat 2.993103 87
Sun 3.255132 76
Thur 2.771452 62
*/
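The matching pandas call is missing from this excerpt; a sketch would be:

tips.groupby('day').agg({'tip': 'mean', 'day': 'size'})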
Grouping by more than one column is done by passing a list of columns to the groupby() method.
SELECT smoker, day, COUNT(*), AVG(tip)
FROM tips
GROUP BY smoker, day;
/*
smoker day
No Fri 4 2.812500
Sat 45 3.102889
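The matching pandas call does not appear in this excerpt; a sketch would be:

tips.groupby(['smoker', 'day']).agg({'tip': ['size', 'mean']})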
JOIN
JOINs can be performed with join() or merge(). By default, join() will join the DataFrames on their indices.
Each method has parameters allowing you to specify the type of join to perform (LEFT, RIGHT, INNER, FULL) or
the columns to join on (column names or indices).
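The two frames used in the JOIN examples are not constructed in this excerpt; definitions of the same shape (the
values are random and therefore illustrative) would be:

df1 = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)})
df2 = pd.DataFrame({'key': ['B', 'D', 'D', 'E'], 'value': np.random.randn(4)})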
Assume we have two database tables of the same name and structure as our DataFrames.
Now let’s go over the various types of JOINs.
INNER JOIN
SELECT *
FROM df1
INNER JOIN df2
ON df1.key = df2.key;
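The pandas call is missing here; the usual form, with the key column taken from the SQL above, would be:

pd.merge(df1, df2, on='key')   # how='inner' is the default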
merge() also offers parameters for cases when you’d like to join one DataFrame’s column with another DataFrame’s
index.
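A sketch of that variant, where the choice of index is an assumption:

indexed_df2 = df2.set_index('key')
pd.merge(df1, indexed_df2, left_on='key', right_index=True)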
RIGHT JOIN
FULL JOIN
pandas also allows for FULL JOINs, which display both sides of the dataset, whether or not the joined columns find a
match. As of this writing, FULL JOINs are not supported in all RDBMS (for example, MySQL).
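Neither example survives in this excerpt; sketches of both joins with merge() would be:

pd.merge(df1, df2, on='key', how='right')   # RIGHT JOIN
pd.merge(df1, df2, on='key', how='outer')   # FULL JOIN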
UNION
SQL’s UNION is similar to UNION ALL, however UNION will remove duplicate rows.
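The concatenation example is not shown here; in pandas, UNION ALL maps to concat() and UNION to concat()
followed by drop_duplicates(). A sketch reusing the hypothetical df1 and df2 from the JOIN examples:

pd.concat([df1, df2])                     # UNION ALL
pd.concat([df1, df2]).drop_duplicates()   # UNION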
-- MySQL
SELECT * FROM tips
ORDER BY tip DESC
LIMIT 10 OFFSET 5;
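The pandas counterpart of this top-n-rows-with-offset query is missing here; a sketch would be:

tips.nlargest(10 + 5, columns='tip').tail(10)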
Let's find tips with (rank < 3) per gender group for (tips < 2). Notice that when using rank(method='min'), the
rnk_min value remains the same for identical tips (as with Oracle's RANK() function).
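The pandas code for this is not reproduced in this excerpt; a sketch would be:

(tips[tips['tip'] < 2]
    .assign(rnk_min=tips.groupby(['sex'])['tip'].rank(method='min'))
    .query('rnk_min < 3')
    .sort_values(['sex', 'rnk_min']))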
UPDATE
UPDATE tips
SET tip = tip*2
WHERE tip < 2;
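In pandas (the example is not shown here), the same update is an in-place assignment with loc:

tips.loc[tips['tip'] < 2, 'tip'] *= 2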
DELETE
In pandas we select the rows that should remain, instead of deleting them:
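A sketch, where the filtering condition is purely illustrative:

tips = tips.loc[tips['tip'] <= 9]   # keep only the rows that satisfy the (illustrative) condition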
For potential users coming from SAS this page is meant to demonstrate how different SAS operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows:
Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) - the equivalent in SAS would be:
Data structures
DataFrame / Series
A DataFrame in pandas is analogous to a SAS data set - a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set
using SAS’s DATA step, can also be accomplished in pandas.
A Series is the data structure that represents one column of a DataFrame. SAS doesn’t have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column in the
DATA step.
Index
Every DataFrame and Series has an Index - which are labels on the rows of the data. SAS does not have an
exactly analogous concept. A data set’s rows are essentially unlabeled, other than an implicit integer index that can be
accessed during the DATA step (_N_).
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.
A SAS data set can be built from specified values by placing the data after a datalines statement and specifying
the column names.
data df;
input x y;
datalines;
1 2
3 4
5 6
;
run;
A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.
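The construction itself is missing here; a definition consistent with the output below (and with the construction shown
later in the Stata comparison) would be:

df = pd.DataFrame({'x': [1, 3, 5], 'y': [2, 4, 6]})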
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
Like SAS, pandas provides utilities for reading in data from many formats. The tips dataset, found within the pandas
tests (csv) will be used in many of the following examples.
SAS provides PROC IMPORT to read csv data into a data set.
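The pandas read is not shown in this excerpt; a sketch using the tips csv referenced later in the Stata comparison
would be:

url = ('https://raw.github.com/pandas-dev'
       '/pandas/master/pandas/tests/data/tips.csv')
tips = pd.read_csv(url)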
In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Like PROC IMPORT, read_csv can take a number of parameters to specify how the data should be parsed. For
example, if the data was instead tab delimited, and did not have column names, the pandas command would be:
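That command is missing from this excerpt; a sketch (the file name is illustrative) would be:

tips = pd.read_csv('tips.csv', sep='\t', header=None)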
In addition to text/csv, pandas supports a variety of other data formats such as Excel, HDF5, and SQL databases. These
are all read via a pd.read_* function. See the IO documentation for more details.
Exporting data
The inverse of PROC IMPORT in SAS is PROC EXPORT
Similarly in pandas, the opposite of read_csv is to_csv(), and other data formats follow a similar API.
tips.to_csv('tips2.csv')
Data operations
Operations on columns
In the DATA step, arbitrary math expressions can be used on new or existing columns.
data tips;
set tips;
total_bill = total_bill - 2;
new_bill = total_bill / 2;
run;
pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way.
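The assignments themselves are missing here; statements consistent with the output below would be:

tips['total_bill'] = tips['total_bill'] - 2
tips['new_bill'] = tips['total_bill'] / 2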
In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
Filtering
data tips;
set tips;
if total_bill > 10;
run;
data tips;
set tips;
where total_bill > 10;
/* equivalent in this case - where happens before the
DATA step begins and can also be used in PROC statements */
run;
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing:
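The filter itself is not shown; a sketch would be:

tips[tips['total_bill'] > 10].head()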
If/then logic
data tips;
set tips;
format bucket $4.;
The same operation in pandas can be accomplished using the where method from numpy.
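Both the rest of the SAS step and the pandas call are missing here; a sketch of the pandas side, with the cutoff
inferred from the output below (bills under 10 are labelled 'low'), would be:

tips['bucket'] = np.where(tips['total_bill'] < 10, 'low', 'high')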
In [13]: tips.head()
Out[13]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
Date functionality
data tips;
set tips;
format date1 date2 date1_plusmonth mmddyy10.;
date1 = mdy(1, 15, 2013);
date2 = mdy(2, 15, 2015);
date1_year = year(date1);
date2_month = month(date2);
* shift date to beginning of next interval;
date1_next = intnx('MONTH', date1, 1);
* count intervals between dates;
months_between = intck('MONTH', date1, date2);
run;
The equivalent pandas operations are shown below. In addition to these functions pandas supports other Time Series
features not available in Base SAS (such as resampling and custom offsets) - see the timeseries documentation for
more details.
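Most of those pandas statements are missing from this excerpt; a sketch consistent with In [19] below, with the dates
taken from the SAS code, would be:

tips['date1'] = pd.Timestamp('2013-01-15')
tips['date2'] = pd.Timestamp('2015-02-15')
tips['date1_year'] = tips['date1'].dt.year
tips['date2_month'] = tips['date2'].dt.month
tips['date1_next'] = tips['date1'] + pd.offsets.MonthBegin()   # beginning of next month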
In [19]: tips['months_between'] = (
....: tips['date2'].dt.to_period('M') - tips['date1'].dt.to_period('M'))
....:
Selection of columns
SAS provides keywords in the DATA step to select, drop, and rename columns.
data tips;
set tips;
keep sex total_bill tip;
run;
data tips;
set tips;
drop sex;
run;
data tips;
set tips;
rename total_bill=total_bill_2;
run;
# keep
In [21]: tips[['sex', 'total_bill', 'tip']].head()
Out[21]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
2 Male 19.01 3.50
3 Male 21.68 3.31
4 Female 22.59 3.61
# drop
In [22]: tips.drop('sex', axis=1).head()
Out[22]:
total_bill tip smoker day time size
0 14.99 1.01 No Sun Dinner 2
1 8.34 1.66 No Sun Dinner 3
2 19.01 3.50 No Sun Dinner 3
3 21.68 3.31 No Sun Dinner 2
4 22.59 3.61 No Sun Dinner 4
# rename
In [23]: tips.rename(columns={'total_bill': 'total_bill_2'}).head()
Out[23]:
total_bill_2 tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
Sorting by values
pandas objects have a sort_values() method, which takes a list of columns to sort by.
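The call itself is not shown; a sketch consistent with the output below (sorted by sex and then total_bill) would be:

tips = tips.sort_values(['sex', 'total_bill'])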
In [25]: tips.head()
Out[25]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
String processing
Length
SAS determines the length of a character string with the LENGTHN and LENGTHC functions. LENGTHN excludes
trailing blanks and LENGTHC includes trailing blanks.
data _null_;
set tips;
put(LENGTHN(time));
put(LENGTHC(time));
run;
Python determines the length of a character string with the len function. len includes trailing blanks. Use len and
rstrip to exclude trailing blanks.
In [26]: tips['time'].str.len().head()
Out[26]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
In [27]: tips['time'].str.rstrip().str.len().head()
Out[27]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
Find
SAS determines the position of a character in a string with the FINDW function. FINDW takes the string defined by
the first argument and searches for the first position of the substring you supply as the second argument.
data _null_;
set tips;
put(FINDW(sex,'ale'));
run;
Python determines the position of a character in a string with the find function. find searches for the first position
of the substring. If the substring is found, the function returns its position. Keep in mind that Python indexes are
zero-based and the function will return -1 if it fails to find the substring.
In [28]: tips['sex'].str.find("ale").head()
Out[28]:
67 3
92 3
111 3
145 3
135 3
Name: sex, dtype: int64
Substring
SAS extracts a substring from a string based on its position with the SUBSTR function.
data _null_;
set tips;
put(substr(sex,1,1));
run;
With pandas you can use [] notation to extract a substring from a string by position locations. Keep in mind that
Python indexes are zero-based.
In [29]: tips['sex'].str[0:1].head()
Out[29]:
67 F
92 F
111 F
145 F
135 F
Name: sex, dtype: object
Scan
The SAS SCAN function returns the nth word from a string. The first argument is the string you want to parse and the
second argument specifies which word you want to extract.
data firstlast;
input String $60.;
First_Name = scan(string, 1);
Last_Name = scan(string, -1);
Python extracts a substring from a string based on its text by using regular expressions. There are much more powerful
approaches, but this just shows a simple approach.
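The pandas code is missing here; a simple sketch that produces a frame like the output below (note that both derived
columns take the first token, which is why Last_Name repeats the first name in the displayed result) would be:

firstlast = pd.DataFrame({'String': ['John Smith', 'Jane Cook']})
firstlast['First_Name'] = firstlast['String'].str.split(' ', expand=True)[0]
firstlast['Last_Name'] = firstlast['String'].str.rsplit(' ', expand=True)[0]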
In [33]: firstlast
Out[33]:
String First_Name Last_Name
0 John Smith John John
1 Jane Cook Jane Jane
The SAS UPCASE, LOWCASE, and PROPCASE functions change the case of the argument.
data firstlast;
input String $60.;
string_up = UPCASE(string);
string_low = LOWCASE(string);
string_prop = PROPCASE(string);
datalines2;
John Smith;
Jane Cook;
;;;
run;
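The pandas equivalents are not shown in this excerpt; a sketch would be:

firstlast = pd.DataFrame({'String': ['John Smith', 'Jane Cook']})
firstlast['string_up'] = firstlast['String'].str.upper()
firstlast['string_low'] = firstlast['String'].str.lower()
firstlast['string_prop'] = firstlast['String'].str.title()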
In [38]: firstlast
Out[38]:
String string_up string_low string_prop
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook
Merging
In [40]: df1
Out[40]:
key value
0 A 0.469112
1 B -0.282863
2 C -1.509059
3 D -1.135632
In [42]: df2
Out[42]:
key value
0 B 1.212112
1 D -0.173215
2 D 0.119209
3 E -1.044236
In SAS, data must be explicitly sorted before merging. Different types of joins are accomplished using the in= dummy
variables to track whether a match was found in one or both input frames.
pandas DataFrames have a merge() method, which provides similar functionality. Note that the data does not have
to be sorted ahead of time, and different join types are accomplished via the how keyword.
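The individual merge calls are not reproduced here; sketches consistent with the outputs below would be:

inner_join = df1.merge(df2, on=['key'], how='inner')
left_join = df1.merge(df2, on=['key'], how='left')
right_join = df1.merge(df2, on=['key'], how='right')
outer_join = df1.merge(df2, on=['key'], how='outer')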
In [44]: inner_join
Out[44]:
key value_x value_y
0 B -0.282863 1.212112
In [46]: left_join
Out[46]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
In [48]: right_join
Out[48]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236
In [50]: outer_join
Out[50]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
Missing data
Like SAS, pandas has a representation for missing data - which is the special float value NaN (not a number). Many
of the semantics are the same, for example missing data propagates through numeric operations, and is ignored by
default for aggregations.
In [51]: outer_join
Out[51]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
In [53]: outer_join['value_x'].sum()
Out[53]: -3.5940742896293765
One difference is that missing data cannot be compared to its sentinel value. For example, in SAS you could do this
to filter missing values.
data outer_join_nulls;
set outer_join;
if value_x = .;
run;
data outer_join_no_nulls;
set outer_join;
if value_x ^= .;
run;
This doesn't work in pandas. Instead, the pd.isna or pd.notna functions should be used for comparisons.
In [54]: outer_join[pd.isna(outer_join['value_x'])]
Out[54]:
key value_x value_y
5 E NaN -1.044236
In [55]: outer_join[pd.notna(outer_join['value_x'])]
Out[55]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
pandas also provides a variety of methods to work with missing data - some of which would be challenging to express
in SAS. For example, there are methods to drop all rows with any missing values, replacing missing values with a
specified value, like the mean, or forward filling from previous rows. See the missing data documentation for more.
In [56]: outer_join.dropna()
Out[56]:
key value_x value_y
1 B -0.282863 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
In [57]: outer_join.fillna(method='ffill')
Out[57]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 1.212112
In [58]: outer_join['value_x'].fillna(outer_join['value_x'].mean())
Out[58]:
0 0.469112
1 -0.282863
2 -1.509059
3 -1.135632
4 -1.135632
5 -0.718815
Name: value_x, dtype: float64
GroupBy
Aggregation
SAS’s PROC SUMMARY can be used to group by one or more key variables and compute aggregations on numeric
columns.
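The pandas call is missing from this excerpt; a sketch consistent with the output below would be:

tips_summed = tips.groupby(['sex', 'smoker'])[['total_bill', 'tip']].sum()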
In [60]: tips_summed.head()
Out[60]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07
Transformation
In SAS, if the group aggregations need to be used with the original frame, it must be merged back together. For
example, to subtract the mean for each observation by smoker group.
data tips;
merge tips(in=a) smoker_means(in=b);
by smoker;
adj_total_bill = total_bill - group_bill;
if a and b;
run;
pandas groupby provides a transform mechanism that allows these type of operations to be succinctly expressed
in one operation.
In [61]: gb = tips.groupby('smoker')['total_bill']
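The transform step itself (In [62]) is missing; a statement consistent with the output below would be:

tips['adj_total_bill'] = tips['total_bill'] - gb.transform('mean')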
In [63]: tips.head()
Out[63]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278
By group processing
In addition to aggregation, pandas groupby can be used to replicate most other by group processing from SAS. For
example, this DATA step reads the data by sex/smoker group and filters to the first entry for each.
data tips_first;
set tips;
by sex smoker;
if FIRST.sex or FIRST.smoker then output;
run;
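The pandas equivalent is not shown in this excerpt; a sketch would be:

tips.groupby(['sex', 'smoker']).first()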
Other Considerations
Disk vs memory
pandas operates exclusively in memory, where a SAS data set exists on disk. This means that the size of data able to
be loaded in pandas is limited by your machine’s memory, but also that the operations on that data may be faster.
If out-of-core processing is needed, one possibility is the dask.dataframe library (currently in development), which
provides a subset of pandas functionality for an on-disk DataFrame.
Data interop
pandas provides a read_sas() method that can read SAS data saved in the XPORT or SAS7BDAT binary format.
df = pd.read_sas('transport-file.xpt')
df = pd.read_sas('binary-file.sas7bdat')
You can also specify the file format directly. By default, pandas will try to infer the file format based on its extension.
df = pd.read_sas('transport-file.xpt', format='xport')
df = pd.read_sas('binary-file.sas7bdat', format='sas7bdat')
XPORT is a relatively limited format and its parsing is not as optimized as some of the other pandas readers. An
alternative way to exchange data between SAS and pandas is to serialize to csv.
For potential users coming from Stata this page is meant to demonstrate how different Stata operations would be
performed in pandas.
If you’re new to pandas, you might want to first read through 10 Minutes to pandas to familiarize yourself with the
library.
As is customary, we import pandas and NumPy as follows. This means that we can refer to the libraries as pd and np,
respectively, for the rest of the document.
Note: Throughout this tutorial, the pandas DataFrame will be displayed by calling df.head(), which displays
the first N (default 5) rows of the DataFrame. This is often used in interactive work (e.g. Jupyter notebook or
terminal) – the equivalent in Stata would be:
list in 1/5
Data structures
pandas Stata
DataFrame data set
column variable
row observation
groupby bysort
NaN .
DataFrame / Series
A DataFrame in pandas is analogous to a Stata data set – a two-dimensional data source with labeled columns that
can be of different types. As will be shown in this document, almost any operation that can be applied to a data set in
Stata can also be accomplished in pandas.
A Series is the data structure that represents one column of a DataFrame. Stata doesn’t have a separate data
structure for a single column, but in general, working with a Series is analogous to referencing a column of a data
set in Stata.
Index
Every DataFrame and Series has an Index – labels on the rows of the data. Stata does not have an exactly
analogous concept. In Stata, a data set’s rows are essentially unlabeled, other than an implicit integer index that can
be accessed with _n.
In pandas, if no index is specified, an integer index is also used by default (first row = 0, second row = 1, and so on).
While using a labeled Index or MultiIndex can enable sophisticated analyses and is ultimately an important part
of pandas to understand, for this comparison we will essentially ignore the Index and just treat the DataFrame as
a collection of columns. Please see the indexing documentation for much more on how to use an Index effectively.
A Stata data set can be built from specified values by placing the data after an input statement and specifying the
column names.
input x y
1 2
3 4
5 6
end
A pandas DataFrame can be constructed in many different ways, but for a small number of values, it is often
convenient to specify it as a Python dictionary, where the keys are the column names and the values are the data.
In [3]: df = pd.DataFrame({'x': [1, 3, 5], 'y': [2, 4, 6]})
In [4]: df
Out[4]:
x y
0 1 2
1 3 4
2 5 6
The pandas method is read_csv(), which works similarly. Additionally, it will automatically download the data
set if presented with a url.
In [5]: url = ('https://raw.github.com/pandas-dev'
...: '/pandas/master/pandas/tests/data/tips.csv')
...:
In [7]: tips.head()
Out[7]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Like import delimited, read_csv() can take a number of parameters to specify how the data should be
parsed. For example, if the data were instead tab delimited, did not have column names, and existed in the current
working directory, the pandas command would be:
Pandas can also read Stata data sets in .dta format with the read_stata() function.
df = pd.read_stata('data.dta')
In addition to text/csv and Stata files, pandas supports a variety of other data formats such as Excel, SAS, HDF5,
Parquet, and SQL databases. These are all read via a pd.read_* function. See the IO documentation for more
details.
Exporting data
Pandas can also export to Stata file format with the DataFrame.to_stata() method.
tips.to_stata('tips2.dta')
Data operations
Operations on columns
In Stata, arbitrary math expressions can be used with the generate and replace commands on new or existing
columns. The drop command drops the column from the data set.
replace total_bill = total_bill - 2
generate new_bill = total_bill / 2
drop new_bill
pandas provides similar vectorized operations by specifying the individual Series in the DataFrame. New
columns can be assigned in the same way. The DataFrame.drop() method drops a column from the DataFrame.
In [8]: tips['total_bill'] = tips['total_bill'] - 2
In [10]: tips.head()
Out[10]:
total_bill tip sex smoker day time size new_bill
0 14.99 1.01 Female No Sun Dinner 2 7.495
1 8.34 1.66 Male No Sun Dinner 3 4.170
2 19.01 3.50 Male No Sun Dinner 3 9.505
3 21.68 3.31 Male No Sun Dinner 2 10.840
4 22.59 3.61 Female No Sun Dinner 4 11.295
Filtering
DataFrames can be filtered in multiple ways; the most intuitive of which is using boolean indexing.
If/then logic
The same operation in pandas can be accomplished using the where method from numpy.
In [14]: tips.head()
Out[14]:
total_bill tip sex smoker day time size bucket
0 14.99 1.01 Female No Sun Dinner 2 high
1 8.34 1.66 Male No Sun Dinner 3 low
2 19.01 3.50 Male No Sun Dinner 3 high
3 21.68 3.31 Male No Sun Dinner 2 high
4 22.59 3.61 Female No Sun Dinner 4 high
Date functionality
The equivalent pandas operations are shown below. In addition to these functions, pandas supports other Time Series
features not available in Stata (such as time zone handling and custom offsets) – see the timeseries documentation for
more details.
Selection of columns
drop sex
The same operations are expressed in pandas below. Note that in contrast to Stata, these operations do not happen in
place. To make these changes persist, assign the operation back to a variable.
# keep
In [22]: tips[['sex', 'total_bill', 'tip']].head()
Out[22]:
sex total_bill tip
0 Female 14.99 1.01
1 Male 8.34 1.66
# drop
In [23]: tips.drop('sex', axis=1).head()
Out[23]:
total_bill tip smoker day time size
0 14.99 1.01 No Sun Dinner 2
1 8.34 1.66 No Sun Dinner 3
2 19.01 3.50 No Sun Dinner 3
3 21.68 3.31 No Sun Dinner 2
4 22.59 3.61 No Sun Dinner 4
# rename
In [24]: tips.rename(columns={'total_bill': 'total_bill_2'}).head()
Out[24]:
total_bill_2 tip sex smoker day time size
0 14.99 1.01 Female No Sun Dinner 2
1 8.34 1.66 Male No Sun Dinner 3
2 19.01 3.50 Male No Sun Dinner 3
3 21.68 3.31 Male No Sun Dinner 2
4 22.59 3.61 Female No Sun Dinner 4
Sorting by values
Sorting in Stata is accomplished via sort
sort sex total_bill
pandas objects have a DataFrame.sort_values() method, which takes a list of columns to sort by.
In [26]: tips.head()
Out[26]:
total_bill tip sex smoker day time size
67 1.07 1.00 Female Yes Sat Dinner 1
92 3.75 1.00 Female Yes Fri Dinner 2
111 5.25 1.00 Female No Sat Dinner 1
145 6.35 1.50 Female No Thur Lunch 2
135 6.51 1.25 Female No Thur Lunch 2
String processing
Stata determines the length of a character string with the strlen() and ustrlen() functions for ASCII and
Unicode strings, respectively.
Python determines the length of a character string with the len function. In Python 3, all strings are Unicode strings.
len includes trailing blanks. Use len and rstrip to exclude trailing blanks.
In [27]: tips['time'].str.len().head()
Out[27]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
In [28]: tips['time'].str.rstrip().str.len().head()
Out[28]:
67 6
92 6
111 6
145 5
135 5
Name: time, dtype: int64
Stata determines the position of a character in a string with the strpos() function. This takes the string defined by
the first argument and searches for the first position of the substring you supply as the second argument.
generate str_position = strpos(sex, "ale")
Python determines the position of a character in a string with the find() function. find searches for the first
position of the substring. If the substring is found, the function returns its position. Keep in mind that Python indexes
are zero-based and the function will return -1 if it fails to find the substring.
In [29]: tips['sex'].str.find("ale").head()
Out[29]:
67 3
92 3
111 3
145 3
135 3
Name: sex, dtype: int64
Stata extracts a substring from a string based on its position with the substr() function.
generate short_sex = substr(sex, 1, 1)
With pandas you can use [] notation to extract a substring from a string by position locations. Keep in mind that
Python indexes are zero-based.
In [30]: tips['sex'].str[0:1].head()
Out[30]:
67 F
92 F
The Stata word() function returns the nth word from a string. The first argument is the string you want to parse and
the second argument specifies which word you want to extract.
clear
input str20 string
"John Smith"
"Jane Cook"
end
Python extracts a substring from a string based on its text by using regular expressions. There are much more powerful
approaches, but this just shows a simple approach.
In [34]: firstlast
Out[34]:
string First_Name Last_Name
0 John Smith John John
1 Jane Cook Jane Jane
Changing case
clear
input str20 string
"John Smith"
"Jane Cook"
end
In [39]: firstlast
Out[39]:
string upper lower title
0 John Smith JOHN SMITH john smith John Smith
1 Jane Cook JANE COOK jane cook Jane Cook
Merging
In [41]: df1
Out[41]:
key value
0 A 0.469112
1 B -0.282863
2 C -1.509059
3 D -1.135632
In [43]: df2
Out[43]:
key value
0 B 1.212112
1 D -0.173215
2 D 0.119209
3 E -1.044236
In Stata, to perform a merge, one data set must be in memory and the other must be referenced as a file name on disk.
In contrast, Python must have both DataFrames already in memory.
By default, Stata performs an outer join, where all observations from both data sets are left in memory after the merge.
One can keep only observations from the initial data set, the merged data set, or the intersection of the two by using
the values created in the _merge variable.
preserve
* Left join
merge 1:n key using df2.dta
keep if _merge == 1
* Right join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 2
* Inner join
restore, preserve
merge 1:n key using df2.dta
keep if _merge == 3
* Outer join
restore
merge 1:n key using df2.dta
pandas DataFrames have a DataFrame.merge() method, which provides similar functionality. Note that different
join types are accomplished via the how keyword.
In [45]: inner_join
Out[45]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
In [47]: left_join
Out[47]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
In [49]: right_join
Out[49]:
key value_x value_y
0 B -0.282863 1.212112
1 D -1.135632 -0.173215
2 D -1.135632 0.119209
3 E NaN -1.044236
In [51]: outer_join
Out[51]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
Missing data
Like Stata, pandas has a representation for missing data – the special float value NaN (not a number). Many of the
semantics are the same; for example missing data propagates through numeric operations, and is ignored by default
for aggregations.
In [52]: outer_join
Out[52]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E NaN -1.044236
In [54]: outer_join['value_x'].sum()
Out[54]: -3.5940742896293765
One difference is that missing data cannot be compared to its sentinel value. For example, in Stata you could do this
to filter missing values.
This doesn’t work in pandas. Instead, the pd.isna() or pd.notna() functions should be used for comparisons.
In [55]: outer_join[pd.isna(outer_join['value_x'])]
Out[55]:
key value_x value_y
5 E NaN -1.044236
In [56]: outer_join[pd.notna(outer_join['value_x'])]
Out[56]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 NaN
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
Pandas also provides a variety of methods to work with missing data – some of which would be challenging to express
in Stata. For example, there are methods to drop all rows with any missing values, replacing missing values with a
specified value, like the mean, or forward filling from previous rows. See the missing data documentation for more.
# Fill forwards
In [58]: outer_join.fillna(method='ffill')
Out[58]:
key value_x value_y
0 A 0.469112 NaN
1 B -0.282863 1.212112
2 C -1.509059 1.212112
3 D -1.135632 -0.173215
4 D -1.135632 0.119209
5 E -1.135632 -1.044236
GroupBy
Aggregation
Stata’s collapse can be used to group by one or more key variables and compute aggregations on numeric columns.
pandas provides a flexible groupby mechanism that allows similar aggregations. See the groupby documentation for
more details and examples.
In [61]: tips_summed.head()
Out[61]:
total_bill tip
sex smoker
Female No 869.68 149.77
Yes 527.27 96.74
Male No 1725.75 302.00
Yes 1217.07 183.07
Transformation
In Stata, if the group aggregations need to be used with the original data set, one would usually use bysort with
egen(). For example, to subtract the mean for each observation by smoker group.
bysort sex smoker: egen group_bill = mean(total_bill)
generate adj_total_bill = total_bill - group_bill
pandas groupby provides a transform mechanism that allows these type of operations to be succinctly expressed
in one operation.
In [62]: gb = tips.groupby('smoker')['total_bill']
In [64]: tips.head()
Out[64]:
total_bill tip sex smoker day time size adj_total_bill
67 1.07 1.00 Female Yes Sat Dinner 1 -17.686344
92 3.75 1.00 Female Yes Fri Dinner 2 -15.006344
111 5.25 1.00 Female No Sat Dinner 1 -11.938278
145 6.35 1.50 Female No Thur Lunch 2 -10.838278
135 6.51 1.25 Female No Thur Lunch 2 -10.678278
By group processing
In addition to aggregation, pandas groupby can be used to replicate most other bysort processing from Stata. For
example, the following example lists the first observation in the current sort order by sex/smoker group.
Other considerations
Disk vs memory
Pandas and Stata both operate exclusively in memory. This means that the size of data able to be loaded in pandas is
limited by your machine’s memory. If out of core processing is needed, one possibility is the dask.dataframe library,
which provides a subset of pandas functionality for an on-disk DataFrame.
2.4.8 Tutorials
This is a guide to many pandas tutorials, geared mainly for new users.
Internal guides
Community guides
The goal of this 2015 cookbook (by Julia Evans) is to give you some concrete examples for getting started with pandas.
These are examples with real-world data, and all the bugs and weirdness that entails. For the table of contents, see the
pandas-cookbook GitHub repository.
This guide is an introduction to the data analysis process using the Python data ecosystem and an interesting open
dataset. There are four sections covering selected topics such as munging data, aggregating data, visualizing data, and
time series.
Practice your skills with real data sets and exercises. For more resources, please visit the main repository.
Modern pandas
Tutorial series written in 2016 by Tom Augspurger. The source may be found in the GitHub repository
TomAugspurger/effective-pandas.
• Modern Pandas
• Method Chaining
• Indexes
• Performance
• Tidy Data
• Visualization
• Timeseries
Video tutorials
Various tutorials
THREE
USER GUIDE
The User Guide covers all of pandas by topic area. Each of the subsections introduces a topic (such as “working with
missing data”), and discusses how pandas approaches the problem, with many examples throughout.
Users brand-new to pandas should start with 10min.
Further information on any specific method can be obtained in the API reference.
The pandas I/O API is a set of top level reader functions accessed like pandas.read_csv() that generally
return a pandas object. The corresponding writer functions are object methods that are accessed like DataFrame.
to_csv(). Below is a table containing available readers and writers.
Format Type Data Description Reader Writer
text CSV read_csv to_csv
text Fixed-Width Text File read_fwf
text JSON read_json to_json
text HTML read_html to_html
text Local clipboard read_clipboard to_clipboard
binary MS Excel read_excel to_excel
binary OpenDocument read_excel
binary HDF5 Format read_hdf to_hdf
binary Feather Format read_feather to_feather
binary Parquet Format read_parquet to_parquet
binary ORC Format read_orc
binary Msgpack read_msgpack to_msgpack
binary Stata read_stata to_stata
binary SAS read_sas
binary SPSS read_spss
binary Python Pickle Format read_pickle to_pickle
SQL SQL read_sql to_sql
SQL Google BigQuery read_gbq to_gbq
Note: For examples that use the StringIO class, make sure you import it according to your Python version, i.e.
from StringIO import StringIO for Python 2 and from io import StringIO for Python 3.
The workhorse function for reading text files (a.k.a. flat files) is read_csv(). See the cookbook for some advanced
strategies.
Parsing options
Basic
header [int or list of ints, default 'infer'] Row number(s) to use as the column names, and the start of the data.
Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0
and column names are inferred from the first line of the file, if column names are passed explicitly then the
behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names.
The header can be a list of ints that specify row locations for a MultiIndex on the columns e.g. [0,1,3].
Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the
first line of data rather than the first line of the file.
names [array-like, default None] List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_col [int, str, sequence of int / str, or False, default None] Column(s) to use as the row labels of the
DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex
is used.
Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you
have a malformed file with delimiters at the end of each line.
usecols [list-like or callable, default None] Return a subset of the columns. If list-like, all elements must either be
positional (i.e. integer indices into the document columns) or strings that correspond to column names provided
either by the user in names or inferred from the document header row(s). For example, a valid list-like usecols
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
In [3]: data = ('col1,col2,col3\n'
   ...:         'a,b,1\n'
   ...:         'a,b,2\n'
   ...:         'c,d,3')
   ...:

In [4]: pd.read_csv(StringIO(data))
Out[4]:
  col1 col2 col3
0    a    b    1
1    a    b    2
2    c    d    3

In [5]: pd.read_csv(StringIO(data), usecols=['col1', 'col3'])
Out[5]:
  col1 col3
0    a    1
1    a    2
2    c    3
Using this parameter results in much faster parsing time and lower memory usage.
squeeze [boolean, default False] If the parsed data only contains one column then return a Series.
prefix [str, default None] Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, . . .
mangle_dupe_cols [boolean, default True] Duplicate columns will be specified as ‘X’, ‘X.1’. . . ’X.N’, rather than
‘X’. . . ’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
dtype [Type name or dict of column -> type, default None] Data type for data or columns. E.g. {'a': np.
float64, 'b': np.int32} (unsupported with engine='python'). Use str or object together with
suitable na_values settings to preserve and not interpret dtype.
engine [{'c', 'python'}] Parser engine to use. The C engine is faster while the Python engine is currently more
feature-complete.
converters [dict, default None] Dict of functions for converting values in certain columns. Keys can either be integers
or column labels.
true_values [list, default None] Values to consider as True.
false_values [list, default None] Values to consider as False.
skipinitialspace [boolean, default False] Skip spaces after delimiter.
skiprows [list-like or integer, default None] Line numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file.
If callable, the callable function will be evaluated against the row indices, returning True if the row should be
skipped and False otherwise:
In [7]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[7]:
  col1 col2 col3
0    a    b    2
skipfooter [int, default 0] Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrows [int, default None] Number of rows of file to read. Useful for reading pieces of large files.
low_memory [boolean, default True] Internally process the file in chunks, resulting in lower memory use while
parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with
the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize
or iterator parameter to return the data in chunks. (Only valid with C parser)
memory_map [boolean, default False] If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this option can improve performance
because there is no longer any I/O overhead.
na_values [scalar, str, list-like, or dict, default None] Additional strings to recognize as NA/NaN. If dict passed,
specific per-column NA values. See na values const below for a list of the values interpreted as NaN by default.
keep_default_na [boolean, default True] Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
• If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values
used for parsing.
• If keep_default_na is True, and na_values are not specified, only the default NaN values are used for
parsing.
• If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are
used for parsing.
• If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.
na_filter [boolean, default True] Detect missing value markers (empty strings and the value of na_values). In data
without any NAs, passing na_filter=False can improve the performance of reading a large file.
verbose [boolean, default False] Indicate number of NA values placed in non-numeric columns.
skip_blank_lines [boolean, default True] If True, skip over blank lines rather than interpreting as NaN values.
Datetime handling
parse_dates [boolean or list of ints or names or list of lists or dict, default False.]
• If True -> try parsing the index.
• If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
• If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
• If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’. A fast-path exists for iso8601-
formatted dates.
infer_datetime_format [boolean, default False] If True and parse_dates is enabled for a column, attempt to infer
the datetime format to speed up the processing.
keep_date_col [boolean, default False] If True and parse_dates specifies combining multiple columns then keep
the original columns.
date_parser [function, default None] Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the conversion. pandas will try to
call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined
by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more
strings (corresponding to the columns defined by parse_dates) as arguments.
dayfirst [boolean, default False] DD/MM format dates, international and European format.
cache_dates [boolean, default True] If True, use a cache of unique, converted dates to apply the datetime conversion.
May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration
iterator [boolean, default False] Return TextFileReader object for iteration or getting chunks with get_chunk().
chunksize [int, default None] Return TextFileReader object for iteration. See iterating and chunking below.
compression [{'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'] For on-the-fly decompres-
sion of on-disk data. If ‘infer’, then use gzip, bz2, zip, or xz if filepath_or_buffer is a string ending in ‘.gz’,
‘.bz2’, ‘.zip’, or ‘.xz’, respectively, and no decompression otherwise. If using ‘zip’, the ZIP file must contain
only one data file to be read in. Set to None for no decompression.
Changed in version 0.24.0: ‘infer’ option added and set to default.
thousands [str, default None] Thousands separator.
decimal [str, default '.'] Character to recognize as decimal point. E.g. use ',' for European data.
float_precision [string, default None] Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the high-precision converter, and round_trip for
the round-trip converter.
lineterminator [str (length 1), default None] Character to break file into lines. Only valid with C parser.
quotechar [str (length 1)] The character used to denote the start and end of a quoted item. Quoted items can include
the delimiter and it will be ignored.
quoting [int or csv.QUOTE_* instance, default 0] Control field quoting behavior per csv.QUOTE_* constants.
Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequote [boolean, default True] When quotechar is specified and quoting is not QUOTE_NONE, indi-
cate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar
element.
escapechar [str (length 1), default None] One-character string used to escape delimiter when quoting is
QUOTE_NONE.
comment [str, default None] Indicates remainder of line should not be parsed. If found at the beginning of a line,
the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long
as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with header=0 will result in ‘a,b,c’
being treated as the header.
encoding [str, default None] Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of Python standard
encodings.
dialect [str or csv.Dialect instance, default None] If provided, this parameter will override values (default or
not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting.
If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for
more details.
Error handling
error_bad_lines [boolean, default True] Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines”
will be dropped from the DataFrame that is returned. See bad lines below.
warn_bad_lines [boolean, default True] If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
You can indicate the data type for the whole DataFrame or individual columns:
In [11]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [12]: df = pd.read_csv(StringIO(data), dtype=object)

In [13]: df
Out[13]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [14]: df['a'][0]
Out[14]: '1'
In [15]: df = pd.read_csv(StringIO(data),
....: dtype={'b': object, 'c': np.float64, 'd': 'Int64'})
....:
In [16]: df.dtypes
Out[16]:
a int64
b object
c float64
d Int64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you’re
unfamiliar with these concepts, you can see here to learn more about dtypes, and here to learn more about object
conversion in pandas.
For instance, you can use the converters argument of read_csv():
In [17]: data = ("col_1\n"
....: "1\n"
....: "2\n"
....: "'A'\n"
....: "4.22")
....:
In [18]: df = pd.read_csv(StringIO(data), converters={'col_1': str})

In [19]: df
Out[19]:
col_1
0 1
1 2
2 'A'
3 4.22
In [20]: df['col_1'].apply(type).value_counts()
Out[20]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the dtypes after reading in the data,
In [21]: df2 = pd.read_csv(StringIO(data))
In [22]: df2['col_1'] = pd.to_numeric(df2['col_1'], errors='coerce')

In [24]: df2['col_1'].apply(type).value_counts()
Out[24]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case
above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if
you wanted all of the data to be coerced, no matter the type, then using the converters argument of read_csv()
would certainly be worth trying.
Note: In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent
dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with
mixed dtypes. For example, when reading a large CSV file whose col_1 column holds mostly integers plus a handful of strings:
In [29]: mixed_df['col_1'].apply(type).value_counts()
Out[29]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [30]: mixed_df['col_1'].dtype
Out[30]: dtype('O')
will result with mixed_df containing an int dtype for certain chunks of the column, and str for others due to the
mixed dtypes from the data that was read in. It is important to note that the overall column will be marked with a
dtype of object, which is used for columns with mixed dtypes.
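If you would rather avoid this chunk-by-chunk inference entirely, a minimal sketch (the file name 'foo.csv' and column name 'col_1' are illustrative) is to state the dtype up front or to disable low-memory chunking:

import pandas as pd

# Force a single dtype for the column so no per-chunk inference happens.
mixed_df = pd.read_csv('foo.csv', dtype={'col_1': str})

# Or process the whole file at once, at the cost of higher memory use.
mixed_df = pd.read_csv('foo.csv', low_memory=False)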
In [32]: pd.read_csv(StringIO(data))
Out[32]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [33]: pd.read_csv(StringIO(data)).dtypes
Out[33]:
col1 object
col2 object
col3 int64
dtype: object
Note: With dtype='category', the resulting categories will always be parsed as strings (object dtype). If the
categories are numeric they can be converted using the to_numeric() function, or as appropriate, another converter
such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories ( all numeric, all datetimes, etc.), the
conversion is done automatically.
In [41]: df = pd.read_csv(StringIO(data), dtype='category')

In [42]: df.dtypes
Out[42]:
col1 category
col2 category
col3 category
dtype: object
In [43]: df['col3']
Out[43]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): [1, 2, 3]
In [44]: df['col3'].cat.categories = pd.to_numeric(df['col3'].cat.categories)

In [45]: df['col3']
Out[45]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
A file may or may not have a header row. pandas assumes the first row should be used as the column names:
In [47]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [48]: pd.read_csv(StringIO(data))
Out[48]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can indicate other names to use and whether or
not to throw away the header row (if any):
In [49]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
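As a rough sketch of how names and header interact (the replacement names below are illustrative, not from the original example):

import pandas as pd
from io import StringIO

data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'

# Replace the existing header row with new names.
pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=0)

# Keep the original header row as the first row of data instead.
pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=None)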
If the header is in a row other than the first, pass the row number to header. This will skip the preceding rows:
Note: Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0
and column names are inferred from the first non-blank line of the file, if column names are passed explicitly then the
behavior is identical to header=None.
If the file or header contains duplicate names, pandas will by default distinguish between them so as to prevent
overwriting data:
In [54]: data = 'a,b,a\n0,1,2\n3,4,5'

In [55]: pd.read_csv(StringIO(data))
Out[55]:
   a  b  a.1
0  0  1    2
1  3  4    5
There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of dupli-
cate columns ‘X’, . . . , ‘X’ to become ‘X’, ‘X.1’, . . . , ‘X.N’. If mangle_dupe_cols=False, duplicate data can
arise:
To prevent users from encountering this problem with duplicate data, a ValueError exception is raised if
mangle_dupe_cols != True:
The usecols argument allows you to select any subset of the columns in a file, either using the column names,
position numbers or a callable:
In [56]: data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
In [57]: pd.read_csv(StringIO(data))
Out[57]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
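A short sketch of the other two forms (selection by name and by position), using the same data:

import pandas as pd
from io import StringIO

data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'

pd.read_csv(StringIO(data), usecols=['b', 'd'])   # select columns by name
pd.read_csv(StringIO(data), usecols=[0, 2, 3])    # select columns by position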
The usecols argument can also be used to specify which columns not to use in the final result:
In [61]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ['a', 'c'])
Out[61]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c” columns from the output.
If the comment parameter is specified, then completely commented lines will be ignored. By default, completely
blank lines will be ignored as well.
In [62]: data = ('\n'
   ....:         'a,b,c\n'
   ....:         '  \n'
   ....:         '# commented line\n'
   ....:         '1,2,3\n'
   ....:         '\n'
   ....:         '4,5,6')
   ....:
In [63]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
Warning: The presence of ignored lines might create ambiguities involving line numbers; the parameter header
uses row numbers (ignoring commented/empty lines), while skiprows uses line numbers (including com-
mented/empty lines):
In [67]: data = ('#comment\n'
....: 'a,b,c\n'
....: 'A,B,C\n'
....: '1,2,3')
....:
In [69]: data = ('A,B,C\n'
   ....:         '#comment\n'
   ....:         'a,b,c\n'
   ....:         '1,2,3')
   ....:
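As a sketch of the difference (the data strings below are illustrative):

import pandas as pd
from io import StringIO

data = '#comment\na,b,c\nA,B,C\n1,2,3'
# header counts rows after comment lines are removed,
# so header=1 selects 'A,B,C' as the column names.
pd.read_csv(StringIO(data), comment='#', header=1)

data = 'A,B,C\n#comment\na,b,c\n1,2,3'
# skiprows counts raw lines including the comment,
# so two lines must be skipped before 'a,b,c' becomes the header.
pd.read_csv(StringIO(data), comment='#', skiprows=2)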
If both header and skiprows are specified, header will be relative to the end of skiprows. For example:
In [72]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
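A sketch of a matching call for the file shown above (the parameter values are an assumption): skip the four leading raw lines, then count the header from what remains:

import pandas as pd
from io import StringIO

data = ('# empty\n'
        '# second empty line\n'
        '# third emptyline\n'
        'X,Y,Z\n'
        '1,2,3\n'
        'A,B,C\n'
        '1,2.,4.\n'
        '5.,NaN,10.0')

# skiprows drops the first four raw lines; header=1 is then counted
# from the remaining lines, so 'A,B,C' becomes the header row.
pd.read_csv(StringIO(data), comment='#', skiprows=4, header=1)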
Comments
In [74]: print(open('tmp.csv').read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
In [75]: df = pd.read_csv('tmp.csv')

By default the comment text is included in the category column; passing the comment keyword suppresses it:

In [77]: df = pd.read_csv('tmp.csv', comment='#')

In [78]: df
Out[78]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
The encoding argument should be used for encoded unicode data, which will result in byte strings being decoded
to unicode in the result:
In [83]: df
Out[83]:
word length
0 Träumen 7
1 Grüße 5
In [84]: df['word'][1]
Out[84]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t parse correctly at all without speci-
fying the encoding. Full list of Python standard encodings.
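A minimal sketch of reading encoded bytes (the sample words and the latin-1 encoding are illustrative):

import pandas as pd
from io import BytesIO

data = 'word,length\nTräumen,7\nGrüße,5'.encode('latin-1')

df = pd.read_csv(BytesIO(data), encoding='latin-1')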
If a file has one more column of data than the number of column names, the first column will be used as the
DataFrame’s row names:
In [86]: pd.read_csv(StringIO(data))
Out[86]:
a b c
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing
the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False:
In [90]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat NaN
8 orange cow NaN

In [92]: pd.read_csv(StringIO(data), index_col=False)
Out[92]:
   a       b    c
0  4   apple  bat
1  8  orange  cow
If a subset of data is being parsed using the usecols option, the index_col specification is based on that subset,
not the original data.
In [94]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
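A sketch with the data above: once usecols keeps only 'b' and 'c', index_col counts positions within that subset:

import pandas as pd
from io import StringIO

data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'

# index_col=0 here refers to column 'b', the first column of the subset,
# not to column 'a' of the original file.
pd.read_csv(StringIO(data), usecols=['b', 'c'], index_col=0)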
Date Handling
To better facilitate working with datetime data, read_csv() uses the keyword arguments parse_dates and
date_parser to allow users to specify a variety of columns and date/time formats to turn the input text data into
datetime objects.
The simplest case is to just pass in parse_dates=True:
In [97]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True)

In [98]: df
Out[98]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
It is often the case that we may want to store date and time data separately, or store various date fields separately. the
parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date columns will be prepended to the output
(so as to not affect the existing column order) and the new column names will be the concatenation of the component
column names:
In [100]: print(open('tmp.csv').read())
KORD,19990127, 19:00:00, 18:56:00, 0.8100
KORD,19990127, 20:00:00, 19:56:00, 0.0100
KORD,19990127, 21:00:00, 20:56:00, -0.5900
KORD,19990127, 21:00:00, 21:18:00, -0.9900
KORD,19990127, 22:00:00, 21:56:00, -0.5900
KORD,19990127, 23:00:00, 22:56:00, -0.5900
In [101]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]])

In [102]: df
Out[102]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose to retain them via the
keep_date_col keyword:
In [103]: df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]],
   .....:                  keep_date_col=True)
   .....:

In [104]: df
Out[104]:
1_2 1_3 0 1 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 19990127 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 19990127 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD 19990127 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD 19990127 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD 19990127 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD 19990127 23:00:00 22:56:00 -0.59
Note that if you wish to combine multiple columns into a single date column, a nested list must be used. In other
words, parse_dates=[1, 2] indicates that the second and third columns should each be parsed as separate date
columns while parse_dates=[[1, 2]] means the two columns should be parsed into a single column.
You can also use a dict to specify custom name columns:
In [105]: date_spec = {'nominal': [1, 2], 'actual': [1, 3]}

In [106]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec)

In [107]: df
Out[107]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column
is prepended to the data. The index_col specification is based off of this new set of columns rather than the original
data columns:
In [109]: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
   .....:                  index_col=0)  # index is the nominal column
   .....:

In [110]: df
Out[110]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note: If a column or index contains an unparsable date, the entire column or index will be returned unaltered as an
object data type. For non-standard datetime parsing, use to_datetime() after pd.read_csv.
Note: read_csv has a fast_path for parsing datetime strings in iso8601 format, e.g “2000-01-01T00:01:02+00:00” and
similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly
faster, ~20x has been observed.
Note: When passing a dict as the parse_dates argument, the order of the columns prepended is not guaranteed,
because dict objects do not impose an ordering on their keys (from Python 3.7 insertion order is preserved, and
collections.OrderedDict can be used on older versions if this matters to you). Because of this, when using a dict for
parse_dates in conjunction with the index_col argument, it's best to specify index_col as a column label rather than
as an index on the resulting frame.
Finally, the parser allows you to specify a custom date_parser function to take full advantage of the flexibility of
the date parsing API:
In [111]: import pandas.io.date_converters as conv
   .....: df = pd.read_csv('tmp.csv', header=None, parse_dates=date_spec,
   .....:                  date_parser=conv.parse_date_time)
   .....:

In [112]: df
Out[112]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is
tried:
1. date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g.,
date_parser(['2013', '2013'], ['1', '2'])).
2. If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g.,
date_parser(['2013 1', '2013 2'])).
3. If #2 fails, date_parser is called once for every row with one or more string arguments from
the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row,
date_parser('2013', '2') for the second, etc.).
Note that performance-wise, you should try these methods of parsing dates in order:
1. Try to infer the format using infer_datetime_format=True (see section below).
2. If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.to_datetime(x, format=...).
3. If you have a really non-standard format, use a custom date_parser function. For optimal performance, this
should be vectorized, i.e., it should accept arrays as arguments.
You can explore the date parsing functionality in date_converters.py and add your own. We would love to turn this
module into a community supported set of date/time parsers. To get you started, date_converters.py contains
functions to parse dual date and time columns, year/month/day columns, and year/month/day/hour/minute/second
columns. It also contains a generic_parser function so you can curry it with a function that deals with a single
date rather than the entire array.
In [115]: df['a']
Out[115]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied to_datetime() with
utc=True as the date_parser.
In [117]: df['a']
Out[117]:
0 1999-12-31 19:00:00+00:00
1   1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
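A sketch of both calls, using an illustrative two-row column of mixed offsets:

import pandas as pd
from io import StringIO

content = 'a\n2000-01-01T00:00:00+05:00\n2000-01-01T00:00:00+06:00'

# Mixed offsets are left as an object column of datetime objects ...
df = pd.read_csv(StringIO(content), parse_dates=['a'])

# ... unless everything is converted to UTC via a partially-applied to_datetime.
df = pd.read_csv(StringIO(content), parse_dates=['a'],
                 date_parser=lambda col: pd.to_datetime(col, utc=True))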
If you have parse_dates enabled for some or all of your columns, and your datetime strings are all formatted the
same way, you may get a large speed up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means of parsing the strings. 5-10x parsing speeds
have been observed. pandas will fallback to the usual parsing if either the format cannot be guessed or the format that
was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should
not have any negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All representing December 30th, 2011 at 00:00:00):
• “20111230”
• “2011/12/30”
• “20111230 00:00:00”
• “12/30/2011 00:00:00”
• “30/Dec/2011 00:00:00”
• “30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With dayfirst=True, it will guess
“01/12/2011” to be December 1st. With dayfirst=False (default) it will guess “01/12/2011” to be January
12th.
# Try to infer the format for the index column
In [118]: df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
.....: infer_datetime_format=True)
.....:
In [119]: df
Out[119]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
While US date formats tend to be MM/DD/YYYY, many international formats use DD/MM/YYYY instead. For
convenience, a dayfirst keyword is provided:
In [120]: print(open('tmp.csv').read())
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
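A sketch of both readings of a file like the one above:

import pandas as pd
from io import StringIO

data = 'date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c'

pd.read_csv(StringIO(data), parse_dates=[0])                  # 1/6/2000 -> 2000-01-06
pd.read_csv(StringIO(data), dayfirst=True, parse_dates=[0])   # 1/6/2000 -> 2000-06-01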
The parameter float_precision can be specified in order to use a specific floating-point converter during parsing
with the C engine. The options are the ordinary converter, the high-precision converter, and the round-trip converter
(which is guaranteed to round-trip values after writing to a file). For example:
In [123]: val = '0.3066101993807095471566981359501369297504425048828125'
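A sketch comparing the three converters on the value above (the surrounding columns are illustrative):

import pandas as pd
from io import StringIO

val = '0.3066101993807095471566981359501369297504425048828125'
data = 'a,b,c\n1,2,{0}'.format(val)

for precision in (None, 'high', 'round_trip'):
    parsed = pd.read_csv(StringIO(data), engine='c',
                         float_precision=precision)['c'][0]
    print(precision, abs(parsed - float(val)))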
Thousand separators
For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string
of length 1 so that integers will be parsed correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [128]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [129]: df = pd.read_csv('tmp.csv', sep='|')

In [130]: df
Out[130]:
         ID      level category
0  Patient1    123,000        x
1  Patient2     23,000        y
2  Patient3  1,234,018        z
In [131]: df.level.dtype
Out[131]: dtype('O')
Passing thousands=',' allows the values to be parsed as integers:

In [132]: print(open('tmp.csv').read())
ID|level|category
Patient1|123,000|x
Patient2|23,000|y
Patient3|1,234,018|z
In [133]: df = pd.read_csv('tmp.csv', sep='|', thousands=',')

In [134]: df
Out[134]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [135]: df.level.dtype
Out[135]: dtype('int64')
NA values
To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values.
If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a
float, like 5.0 or an integer like 5), the corresponding equivalent values will also imply a missing value (in this
case effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A',
'N/A', 'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv('path_to_file.csv', na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in addition to the defaults. A string will first be interpreted
as a numerical 5, then as a NaN.
pd.read_csv('path_to_file.csv', na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as NaN.
Infinity
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). These will
ignore the case of the value, meaning Inf will also be parsed as np.inf.
Returning Series
Using the squeeze keyword, the parser will return output with a single column as a Series:
In [136]: print(open('tmp.csv').read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [137]: output = pd.read_csv('tmp.csv', squeeze=True)

In [138]: output
Out[138]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [139]: type(output)
Out[139]: pandas.core.series.Series
Boolean values
The common values True, False, TRUE, and FALSE are all recognized as boolean. Occasionally you might want to
recognize other values as being boolean. To do this, use the true_values and false_values options as follows:
In [141]: print(data)
a,b,c
1,Yes,2
3,No,4
In [142]: pd.read_csv(StringIO(data))
Out[142]:
a b c
0 1 Yes 2
1 3 No 4
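A sketch of the corresponding call on the same data:

import pandas as pd
from io import StringIO

data = 'a,b,c\n1,Yes,2\n3,No,4'

# Column 'b' now comes back as boolean True/False instead of the strings.
pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])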
Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values
filled in the trailing fields. Lines with too many fields will raise an error by default:
In [144]: data = 'a,b,c\n1,2,3\n4,5,6,7\n8,9,10'

In [145]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
<ipython-input-145-6388c394e6b8> in <module>
----> 1 pd.read_csv(StringIO(data))
    674 )
    675
--> 676     return _read(filepath_or_buffer, kwds)

/pandas/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:

In [29]: pd.read_csv(StringIO(data), error_bad_lines=False)
Skipping line 3: expected 3 fields, saw 4

Out[29]:
a b c
0 1 2 3
1 8 9 10
You can also use the usecols parameter to eliminate extraneous column data that appear in some lines but not others:
Dialect
The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect but
you can specify either the dialect name or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [146]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as the quote character, which causes it to
fail when it finds a newline before it finds the closing double quote.
We can get around this using dialect:
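One way to do that, sketched here, is a csv.Dialect instance with quoting disabled so the stray quote is treated as an ordinary character:

import csv
import pandas as pd
from io import StringIO

data = 'label1,label2,label3\nindex1,"a,c,e\nindex2,b,d,f'

dia = csv.excel()
dia.quoting = csv.QUOTE_NONE   # leave the unmatched double quote alone

pd.read_csv(StringIO(data), dialect=dia)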
Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:
In [154]: print(data)
a, b, c
1, 2, 3
4, 5, 6
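A sketch on the data above:

import pandas as pd
from io import StringIO

data = 'a, b, c\n1, 2, 3\n4, 5, 6'

# Without skipinitialspace the column names and values keep their leading spaces.
pd.read_csv(StringIO(data), skipinitialspace=True)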
The parsers make every attempt to “do the right thing” and not be fragile. Type inference is a pretty big deal. If a
column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric columns
will come through as object dtype as with the rest of pandas objects.
Quotes (and other escape characters) in embedded fields can be handled in any number of ways. One way is to use
backslashes; to properly parse this data, you should pass the escapechar option:
In [157]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
While read_csv() reads delimited data, the read_fwf() function works with data files that have known and fixed
column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters,
and a different usage of the delimiter parameter:
• colspecs: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals
(i.e., [from, to[ ). String value ‘infer’ can be used to instruct the parser to try detecting the column specifications
from the first 100 rows of the data. Default behavior, if not specified, is to infer.
• widths: A list of field widths which can be used instead of ‘colspecs’ if the intervals are contiguous.
• delimiter: Characters to consider as filler characters in the fixed-width file. Can be used to specify the filler
character of the fields if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [159]: print(open('bar.csv').read())
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf
function along with the file name:
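A sketch of such a call — the column extents below are an assumption about where the fields in bar.csv begin and end:

import pandas as pd

# Half-open [from, to) extents for each fixed-width field.
colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]

df = pd.read_fwf('bar.csv', colspecs=colspecs, header=None)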
Note how the parser automatically picks default integer column names (0, 1, 2, ...) when the header=None argument
is specified. Alternatively, you can supply just the column widths for contiguous columns:
In [163]: widths = [6, 14, 13, 10]

In [164]: df = pd.read_fwf('bar.csv', widths=widths, header=None)

In [165]: df
Out[165]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns so it’s ok to have extra separation between the
columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the first 100 rows of the file. It can do it
only in cases when the columns are aligned and correctly separated by the provided delimiter (default delimiter is
whitespace).
In [166]: df = pd.read_fwf('bar.csv', header=None, index_col=0)

In [167]: df
Out[167]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of parsed columns to be different from the inferred
type.
In [169]: pd.read_fwf('bar.csv', header=None, dtype={2: 'object'}).dtypes
Out[169]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes
Consider a file with one less entry in the header than the number of data columns:
In [170]: print(open('foo.csv').read())
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In this special case, read_csv assumes that the first column is to be used as the index of the DataFrame:
In [171]: pd.read_csv('foo.csv')
Out[171]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need to do as before:
In [172]: df = pd.read_csv('foo.csv', parse_dates=True)
In [173]: df.index
Out[173]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype=
˓→'datetime64[ns]', freq=None)
The index_col argument to read_csv can take a list of column numbers to turn multiple columns into a
MultiIndex for the index of the returned object:
In [175]: df = pd.read_csv("data/mindex_ex.csv", index_col=[0, 1])
In [176]: df
Out[176]:
zit xit
year indiv
1977 A 1.20 0.60
B 1.50 0.50
C 1.70 0.80
1978 A 0.20 0.06
B 0.70 0.20
C 0.80 0.30
D 0.90 0.50
E 1.40 0.90
1979 C 0.20 0.15
D 0.14 0.05
E 0.50 0.15
F 1.20 0.50
G 3.40 1.90
In [177]: df.loc[1978]
Out[177]:
zit xit
indiv
A 0.2 0.06
B 0.7 0.20
C 0.8 0.30
D 0.9 0.50
E 1.4 0.90
By specifying list of row locations for the header argument, you can read in a MultiIndex for the columns.
Specifying non-consecutive rows will skip the intervening rows.
In [178]: from pandas._testing import makeCustomDataframe as mkdf
In [179]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)

In [180]: df.to_csv('mi.csv')
In [181]: print(open('mi.csv').read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
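Reading this file back, a sketch would pass the four header rows and the two index columns explicitly:

import pandas as pd

pd.read_csv('mi.csv', header=[0, 1, 2, 3], index_col=[0, 1])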
Note: If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(...,
index=False)), then any names on the columns index will be lost.
read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the csv.
Sniffer class of the csv module. For this, you have to specify sep=None.
In [185]: print(open('tmp2.sv').read())
:0:1:2:3
0:0.4691122999071863:-0.2828633443286633:-1.5090585031735124:-1.1356323710171934
1:1.2121120250208506:-0.17321464905330858:0.11920871129693428:-1.0442359662799567
2:-0.8618489633477999:-2.1045692188948086:-0.4949292740687813:1.071803807037338
3:0.7215551622443669:-0.7067711336300845:-1.0395749851146963:0.27185988554282986
4:-0.42497232978883753:0.567020349793672:0.27623201927771873:-1.0874006912859915
5:-0.6736897080883706:0.1136484096888855:-1.4784265524372235:0.5249876671147047
6:0.4047052186802365:0.5770459859204836:-1.7150020161146375:-1.0392684835147725
7:-0.3706468582364464:-1.1578922506419993:-1.344311812731667:0.8448851414248841
8:1.0757697837155533:-0.10904997528022223:1.6435630703622064:-1.4693879595399115
9:0.35702056413309086:-0.6746001037299882:-1.776903716971867:-0.9689138124473498
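A sketch of sniffing the delimiter for this file (sep=None requires the python engine):

import pandas as pd

pd.read_csv('tmp2.sv', sep=None, engine='python')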
It’s best to use concat() to combine multiple files. See the cookbook for an example.
Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory,
such as the following:
In [187]: print(open('tmp.sv').read())
|0|1|2|3
0|0.4691122999071863|-0.2828633443286633|-1.5090585031735124|-1.1356323710171934
1|1.2121120250208506|-0.17321464905330858|0.11920871129693428|-1.0442359662799567
2|-0.8618489633477999|-2.1045692188948086|-0.4949292740687813|1.071803807037338
3|0.7215551622443669|-0.7067711336300845|-1.0395749851146963|0.27185988554282986
4|-0.42497232978883753|0.567020349793672|0.27623201927771873|-1.0874006912859915
5|-0.6736897080883706|0.1136484096888855|-1.4784265524372235|0.5249876671147047
6|0.4047052186802365|0.5770459859204836|-1.7150020161146375|-1.0392684835147725
7|-0.3706468582364464|-1.1578922506419993|-1.344311812731667|0.8448851414248841
8|1.0757697837155533|-0.10904997528022223|1.6435630703622064|-1.4693879595399115
9|0.35702056413309086|-0.6746001037299882|-1.776903716971867|-0.9689138124473498
In [188]: table = pd.read_csv('tmp.sv', sep='|')

In [189]: table
Out[189]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
By specifying a chunksize to read_csv, the return value will be an iterable object of type TextFileReader:
In [190]: reader = pd.read_csv('tmp.sv', sep='|', chunksize=4)
In [191]: reader
Out[191]: <pandas.io.parsers.TextFileReader at 0x7f3d18adb350>
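For instance, a sketch of iterating over the chunks:

import pandas as pd

reader = pd.read_csv('tmp.sv', sep='|', chunksize=4)
for chunk in reader:
    print(chunk)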
In [194]: reader.get_chunk(5)
Out[194]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
Under the hood pandas uses a fast and efficient parser implemented in C as well as a Python implementation which is
currently more feature-complete. Where possible pandas uses the C parser (specified as engine='c'), but may fall
back to Python if C-unsupported options are specified. Currently, C-unsupported options include:
• sep other than a single character (e.g. regex separators)
• skipfooter
• sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the python engine is selected explicitly
using engine='python'.
You can also pass a URL to read remote CSV data directly:

df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
                 sep='\t')
S3 URLs are handled as well but require installing the S3Fs library:
df = pd.read_csv('s3://pandas-test/tips.csv')
If your S3 bucket requires credentials you will need to set them as environment variables or in the ~/.aws/
credentials config file, refer to the S3Fs documentation on credentials.
The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the
object as a comma-separated-values file. The function takes a number of arguments. Only the first is required.
• path_or_buf: A string path to the file to write or a file object. If a file object, it must be opened with
newline=''
• sep : Field delimiter for the output file (default “,”)
• na_rep: A string representation of a missing value (default ‘’)
• float_format: Format string for floating point numbers
• columns: Columns to write (default None)
• header: Whether to write out the column names (default True)
• index: whether to write row (index) names (default True)
• index_label: Column label(s) for index column(s) if desired. If None (default), and header and index are
True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex).
• mode : Python write mode, default ‘w’
• encoding: a string representing the encoding to use if the contents are non-ASCII, for Python versions prior
to 3
• line_terminator: Character sequence denoting line end (default os.linesep)
• quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set
a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-
numeric
• quotechar: Character used to quote fields (default ‘”’)
• doublequote: Control quoting of quotechar in fields (default True)
• escapechar: Character used to escape sep and quotechar when appropriate (default None)
• chunksize: Number of rows to write at a time
• date_format: Format string for datetime objects
The DataFrame object has an instance method to_string which allows control over the string representation of
the object. All arguments are optional:
• buf default None, for example a StringIO object
• columns default None, which columns to write
• col_space default None, minimum width of each column.
• na_rep default NaN, representation of NA value
• formatters default None, a dictionary (by column) of functions each of which takes a single argument and
returns a formatted string
• float_format default None, a function which takes a single (float) argument and returns a formatted string;
to be applied to floats in the DataFrame.
• sparsify default True, set to False for a DataFrame with a hierarchical index to print every MultiIndex key
at each row.
• index_names default True, will print the names of the indices
• index default True, will print the index (ie, row labels)
• header default True, will print the column labels
• justify default left, will print column headers left- or right-justified
The Series object also has a to_string method, but with only the buf, na_rep, float_format arguments.
There is also a length argument which, if set to True, will additionally output the length of the Series.
3.1.2 JSON
Writing JSON
A Series or DataFrame can be converted to a valid JSON string. Use to_json with optional parameters:
• path_or_buf : the pathname or buffer to write the output This can be None in which case a JSON string is
returned
• orient :
Series:
split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, . . . , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
• date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
• double_precision : The number of decimal places to use when encoding floating point values, default 10.
• force_ascii : force encoded string to be ASCII, default True.
• date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or
‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
• default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for
JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
• lines : If records orient, then will write each record per line as json.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the
date_format and date_unit parameters.
In [195]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))

In [196]: json = dfj.to_json()

In [197]: json
Out[197]: '{"A":{"0":-1.2945235903,"1":0.2766617129,"2":-0.0139597524,"3":-0.
˓→0061535699,"4":0.8957173022},"B":{"0":0.4137381054,"1":-0.472034511,"2":-0.
˓→3625429925,"3":-0.923060654,"4":0.8052440254}}'
Orient options
There are a number of different options for the format of the resulting JSON file / string. Consider the following
DataFrame and Series:
In [198]: dfjo = pd.DataFrame(dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
   .....:                     columns=list('ABC'), index=list('xyz'))
   .....:

In [199]: dfjo
Out[199]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [200]: sjo = pd.Series(dict(x=15, y=16, z=17), name='D')
In [201]: sjo
Out[201]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as nested JSON objects with column labels acting
as the primary index:
In [202]: dfjo.to_json(orient="columns")
Out[202]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
Index oriented (the default for Series) similar to column oriented but the index labels are now primary:
In [203]: dfjo.to_json(orient="index")
Out[203]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [204]: sjo.to_json(orient="index")
Out[204]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records, index labels are not included. This is
useful for passing DataFrame data to plotting libraries, for example the JavaScript library d3.js:
In [205]: dfjo.to_json(orient="records")
Out[205]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [206]: sjo.to_json(orient="records")
Out[206]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of values only, column and index labels
are not included:
In [207]: dfjo.to_json(orient="values")
Out[207]: '[[1,4,7],[2,5,8],[3,6,9]]'
Split oriented serializes to a JSON object containing separate entries for values, index and columns. Name is also
included for Series:
In [208]: dfjo.to_json(orient="split")
Out[208]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,
˓→6,9]]}'
In [209]: sjo.to_json(orient="split")
Out[209]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the preservation of metadata including but not
limited to dtypes and index names.
Note: Any orient option that encodes to a JSON object will not preserve the ordering of index and column labels
during round-trip serialization. If you wish to preserve label ordering use the split option as it uses ordered containers.
Date handling
Writing in ISO date format (dfd is a DataFrame with float columns A and B plus a date column):

In [213]: json = dfd.to_json(date_format='iso')

In [214]: json
Out[214]: '{"date":{"0":"2013-01-01T00:00:00.000Z","1":"2013-01-01T00:00:00.000Z","2":
˓→"2013-01-01T00:00:00.000Z","3":"2013-01-01T00:00:00.000Z","4":"2013-01-01T00:00:00.
˓→000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.8138502857,"4
˓→":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.1702987971,"3":0.
˓→4108345112,"4":0.1320031703}}'
Writing in ISO date format, with microseconds:

In [215]: json = dfd.to_json(date_format='iso', date_unit='us')

In [216]: json
Out[216]: '{"date":{"0":"2013-01-01T00:00:00.000000Z","1":"2013-01-01T00:00:00.000000Z","2":
˓→"2013-01-01T00:00:00.000000Z","3":"2013-01-01T00:00:00.000000Z","4":"2013-01-
˓→01T00:00:00.000000Z"},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":
˓→0.8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.
˓→1702987971,"3":0.4108345112,"4":0.1320031703}}'
Writing dates as epoch timestamps, in seconds:

In [217]: json = dfd.to_json(date_format='epoch', date_unit='s')

In [218]: json
Out[218]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":
˓→1356998400},"B":{"0":2.5656459463,"1":1.3403088498,"2":-0.2261692849,"3":0.
˓→8138502857,"4":-0.8273169356},"A":{"0":-1.2064117817,"1":1.4312559863,"2":-1.
˓→1702987971,"3":0.4108345112,"4":0.1320031703}}'
˓→"1356998400000":0.4137381054,"1357084800000":-0.472034511,"1357171200000":-0.
˓→3625429925,"1357257600000":-0.923060654,"1357344000000":0.8052440254},"date":{
˓→"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":
˓→1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{
˓→"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,
˓→"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000
˓→":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior
If the JSON serializer cannot handle the container contents directly it will fall back in the following manner:
• if the dtype is unsupported (e.g. np.complex) then the default_handler, if provided, will be called for
each value, otherwise an exception is raised.
• if an object is unsupported it will attempt the following:
– check if the object has defined a toDict method and call it. A toDict method should return a dict
which will then be JSON serialized.
– invoke the default_handler if one was provided.
– convert the object to a dict by traversing its contents. However this will often fail with an
OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler. For example:
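A minimal sketch (the complex-valued frame here is illustrative) is to fall back to each value's string representation:

import pandas as pd

# Complex numbers are not directly JSON serializable, so every value
# is passed through str() instead.
df = pd.DataFrame([1 + 1j, 2, 3 + 3j], columns=['a'])
df.to_json(default_handler=str)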
Reading JSON
Reading a JSON string to pandas object can take a number of parameters. The parser will try to parse a DataFrame
if typ is not supplied or is None. To explicitly force Series parsing, pass typ='series'.
• filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be a URL. Valid
URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be
file://localhost/path/to/table.json
• typ : type of object to recover (series or frame), default ‘frame’
• orient :
Series :
– default is index
– allowed values are {split, records, index}
DataFrame
– default is columns
– allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split dict like {index -> [index], columns -> [columns], data -> [values]}
records list like [{column -> value}, . . . , {column -> value}]
index dict like {index -> {column -> value}}
columns dict like {column -> {index -> value}}
values just the values array
table adhering to the JSON Table Schema
• dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at
all, default is True, apply only to the data.
• convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
• convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default
is True.
• keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
• numpy : direct decoding to NumPy arrays. default is False; Supports numeric data only, although labels may
be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
• precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when
decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
• date_unit : string, the timestamp unit to detect if converting dates. Default None. By default the timestamp
precision will be detected, if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp
precision to seconds, milliseconds, microseconds or nanoseconds respectively.
• lines : reads file as one json object per line.
• encoding : The encoding to use to decode py3 bytes.
• chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize
lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same option here so that decoding
produces sensible results, see Orient Options for an overview.
Data conversion
The default of convert_axes=True, dtype=True, and convert_dates=True will try to parse the axes, and
all of the data into appropriate types, including dates. If you need to override specific dtypes, pass a dict to dtype.
convert_axes should only be set to False if you need to preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.
Note: Large integer values may be converted to dates if convert_dates=True and the data and / or column labels
appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label
meets one of the following criteria:
• it ends with '_at'
• it ends with '_time'
• it begins with 'timestamp'
• it is 'modified'
• it is 'date'
Warning: When reading JSON data, automatic coercing into dtypes has some quirks:
• an index can be reconstructed in a different order from serialization, that is, the returned order is not guaran-
teed to be the same as before serialization
• a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.0 values
• bool columns will be converted to integer on reconstruction
Thus there are times when you may want to specify specific dtypes via the dtype keyword argument.
In [227]: pd.read_json(json)
Out[227]:
date B A
0 2013-01-01 2.565646 -1.206412
1 2013-01-01 1.340309 1.431256
2 2013-01-01 -0.226169 -1.170299
3 2013-01-01  0.813850  0.410835
4 2013-01-01 -0.827317  0.132003
In [228]: pd.read_json('test.json')
Out[228]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True
Don't convert any data (but still convert axes and dates) by passing dtype=object to read_json.

Preserve string indices:

In [232]: si
Out[232]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [233]: si.index
Out[233]: Index(['0', '1', '2', '3'], dtype='object')
In [234]: si.columns
Out[234]: Int64Index([0, 1, 2, 3], dtype='int64')
In [235]: json = si.to_json()

In [236]: sij = pd.read_json(json, convert_axes=False)
In [237]: sij
Out[237]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [238]: sij.index
Out[238]: Index(['0', '1', '2', '3'], dtype='object')
In [239]: sij.columns
Out[239]: Index(['0', '1', '2', '3'], dtype='object')
# json here was written from a datetime frame with date_unit='ns'.

# Try to parse timestamps as milliseconds -> Won't Work
In [241]: dfju = pd.read_json(json, date_unit='ms')

In [242]: dfju
Out[242]:
A B date ints bools
1356998400000000000 -1.294524 0.413738 1356998400000000000 0 True
1357084800000000000 0.276662 -0.472035 1356998400000000000 1 True
1357171200000000000 -0.013960 -0.362543 1356998400000000000 2 True
1357257600000000000 -0.006154 -0.923061 1356998400000000000 3 True
1357344000000000000 0.895717 0.805244 1356998400000000000 4 True
# Let pandas detect the correct precision
In [243]: dfju = pd.read_json(json)

In [244]: dfju
Out[244]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [245]: dfju = pd.read_json(json, date_unit='ns')

In [246]: dfju
Out[246]:
A B date ints bools
2013-01-01 -1.294524 0.413738 2013-01-01 0 True
2013-01-02 0.276662 -0.472035 2013-01-01 1 True
2013-01-03 -0.013960 -0.362543 2013-01-01 2 True
2013-01-04 -0.006154 -0.923061 2013-01-01 3 True
2013-01-05 0.895717 0.805244 2013-01-01 4 True
Note: The numpy parameter has been deprecated as of version 1.0.0 and will raise a FutureWarning.
It supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff an appropriate dtype during deserialization
and to subsequently decode directly to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric data:
In [252]: %timeit pd.read_json(jsonfloats, numpy=True)
9.34 ms +- 88.5 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning: Direct NumPy decoding makes a number of assumptions and may fail or produce unexpected output if
these assumptions are not satisfied:
• data is numeric.
• data is uniform. The dtype is sniffed from the first value decoded. A ValueError may be raised, or
incorrect output may be produced if this condition is not satisfied.
• labels are ordered. Labels are only read from the first container, it is assumed that each subsequent row /
column has been encoded in the same order. This should be satisfied if the data was encoded using to_json
but may not be the case if the JSON is from another source.
Normalization
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data into a flat table.
In [256]: data = [{'id': 1, 'name': {'first': 'Coleen', 'last': 'Volk'}},
   .....:         {'name': {'given': 'Mose', 'family': 'Regner'}},
   .....:         {'id': 2, 'name': 'Faye Raker'}]
   .....:

In [257]: pd.json_normalize(data)
Out[257]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mose Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
Supplying a record path and a list of metadata fields flattens deeply nested lists of records as well; applied to a nested
list of US state/county records (not shown here), the result looks like:

Out[259]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization at. With max_level=1 the follow-
ing snippet normalizes only up to the first nesting level of the provided dict (see the sketch below).
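A sketch with an illustrative nested record:

import pandas as pd

data = [{'CreatedBy': {'Name': 'User001'},
         'Lookup': {'TextField': 'Some text',
                    'UserField': {'Id': 'ID001', 'Name': 'Name001'}},
         'Image': {'a': 'b'}}]

# Only the first nesting level is flattened; 'Lookup.UserField' stays a dict.
pd.json_normalize(data, max_level=1)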
pandas is able to read and write line-delimited json files that are common in data processing pipelines using Hadoop
or Spark.
New in version 0.21.0.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can
be useful for large files or to read from a stream.
In [262]: jsonl = '''
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: '''
.....:
In [264]: df
Out[264]:
a b
0 1 2
1 3 4
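The read calls for the jsonl string above are not shown on this page; they likely resemble the following sketch, where lines=True reads line-delimited JSON and chunksize returns an iterator:
from io import StringIO

# read the whole string at once
df = pd.read_json(jsonl, lines=True)

# or iterate over it, reading chunksize lines at a time
reader = pd.read_json(StringIO(jsonl), lines=True, chunksize=1)
for chunk in reader:
    print(chunk)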
Table schema
Table Schema is a spec for describing tabular datasets as a JSON object. The JSON includes information on the field
names, types, and other attributes. You can use the orient table to build a JSON string with two fields, schema and
data.
In [269]: df = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': ['a', 'b', 'c'],
.....: 'C': pd.date_range('2016-01-01', freq='d', periods=3)},
.....: index=pd.Index(range(3), name='idx'))
.....:
In [270]: df
˓→":["idx"],"pandas_version":"0.20.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-
˓→01T00:00:00.000Z"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000Z"},{"idx":2,
˓→"A":3,"B":"c","C":"2016-01-03T00:00:00.000Z"}]}'
The schema field contains the fields key, which itself contains a list of column name to type pairs, including the
Index or MultiIndex (see below for a list of types). The schema field also contains a primaryKey field if the
(Multi)index is unique.
The second field, data, contains the serialized data with the records orient. The index is included, and any
datetimes are ISO 8601 formatted, as required by the Table Schema spec.
The full list of supported types is described in the Table Schema spec. The following examples show how pandas
types map to Table Schema types:
In [274]: build_table_schema(s)
Out[274]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
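build_table_schema is importable from pandas.io.json; the Series s used above is not shown on this page, but a datetime-valued Series such as the following would produce a schema of this shape:
from pandas.io.json import build_table_schema

s = pd.Series(pd.date_range('2016', periods=4))
build_table_schema(s)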
• datetimes with a timezone (before serializing) include an additional field tz with the time zone name (e.g.
'US/Central').
In [276]: build_table_schema(s_tz)
Out[276]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
• Periods are converted to timestamps before serialization, and so have the same behavior of being converted to
UTC. In addition, periods will contain an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [278]: build_table_schema(s_per)
Out[278]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
• Categoricals use the any type and an enum constraint listing the set of possible values. Additionally, an
ordered field is included:
In [279]: s_cat = pd.Series(pd.Categorical(['a', 'b', 'a']))
In [280]: build_table_schema(s_cat)
Out[280]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '0.20.0'}
• If the index is not unique, the primaryKey field is omitted:
In [282]: build_table_schema(s_dupe)
Out[282]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '0.20.0'}
• The primaryKey behavior is the same with MultiIndexes, but in this case the primaryKey is an array:
In [284]: build_table_schema(s_multi)
In [286]: df
Out[286]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [287]: df.dtypes
Out[287]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
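The round trip that produces new_df below is not shown on this page; it likely resembles:
# serialize with the Table Schema and read it back, preserving dtypes
sj = df.to_json(orient='table')
new_df = pd.read_json(sj, orient='table')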
In [290]: new_df
Out[290]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
Please note that the literal string ‘index’ as the name of an Index is not round-trippable, nor are any names begin-
ning with 'level_' within a MultiIndex. These are used by default in DataFrame.to_json() to indicate
missing values and the subsequent read cannot distinguish the intent.
In [295]: print(new_df.index.name)
None
3.1.3 HTML
Warning: We highly encourage you to read the HTML Table Parsing gotchas below regarding the issues sur-
rounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML string/file/URL and will parse HTML tables into list of
pandas DataFrames. Let’s look at a few examples.
Note: read_html returns a list of DataFrame objects, even if there is only a single table contained in the
HTML content.
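The call that produced the list below is not included on this page; based on the output (the FDIC failed bank list), it likely resembles:
url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'
dfs = pd.read_html(url)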
In [298]: dfs
Out[298]:
[ Bank Name City ST CERT
˓→Acquiring Institution Closing Date
0 Ericson State Bank Ericson NE 18265 Farmers and
˓→Merchants Bank February 14, 2020
1 City National Bank of New Jersey Newark NJ 21111
˓→Industrial Bank November 1, 2019
2 Resolute Bank Maumee OH 58317
˓→Buckeye State Bank October 25, 2019
Note: The data from the above URL changes every Monday so the resulting data above and the data below may be
slightly different.
Read in the content of the file from the above URL and pass it to read_html as a string:
In [299]: with open(file_path, 'r') as f:
.....: dfs = pd.read_html(f.read())
.....:
In [300]: dfs
Out[300]:
[ Bank Name City ST CERT
˓→ Acquiring Institution Closing Date Updated Date
0 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386
˓→ North Shore Bank, FSB May 31, 2013 May 31, 2013
1 Central Arizona Bank Scottsdale AZ 34527
˓→ Western State Bank May 14, 2013 May 20, 2013
2 Sunrise Bank Valdosta GA 58185
˓→ Synovus Bank May 10, 2013 May 21, 2013
3 Pisgah Community Bank Asheville NC 58701
˓→ Capital Bank, N.A. May 10, 2013 May 14, 2013
4 Douglas County Bank Douglasville GA 21649
˓→ Hamilton State Bank April 26, 2013 May 16, 2013
.. ... ... .. ...
˓→ ... ... ...
500 Superior Bank, FSB Hinsdale IL 32646
˓→ Superior Federal, FSB July 27, 2001 June 5, 2012
501 Malta National Bank Malta OH 6629
˓→ North Valley Bank May 3, 2001 November 18, 2002
502 First Alliance Bank & Trust Co. Manchester NH 34264 Southern New
˓→Hampshire Bank & Trust February 2, 2001 February 18, 2003
503 National State Bank of Metropolis Metropolis IL 3815
˓→Banterra Bank of Marion December 14, 2000 March 17, 2005
504 Bank of Honolulu Honolulu HI 21029
˓→ Bank of the Orient October 13, 2000 March 17, 2005
In [303]: dfs
Out[303]:
[ Bank Name City ST CERT
˓→ Acquiring Institution Closing Date Updated Date
0 Banks of Wisconsin d/b/a Bank of Kenosha Kenosha WI 35386
˓→ North Shore Bank, FSB May 31, 2013 May 31, 2013
1 Central Arizona Bank Scottsdale AZ 34527
˓→ Western State Bank May 14, 2013 May 20, 2013
2 Sunrise Bank Valdosta GA 58185
˓→ Synovus Bank May 10, 2013 May 21, 2013
3 Pisgah Community Bank Asheville NC 58701
˓→ Capital Bank, N.A. May 10, 2013 May 14, 2013
4 Douglas County Bank Douglasville GA 21649
˓→ Hamilton State Bank April 26, 2013 May 16, 2013
.. ... ... .. ...
˓→ ... ... ...
500 Superior Bank, FSB Hinsdale IL 32646
˓→ Superior Federal, FSB July 27, 2001 June 5, 2012
501 Malta National Bank Malta OH 6629
˓→ North Valley Bank May 3, 2001 November 18, 2002
502 First Alliance Bank & Trust Co. Manchester NH 34264 Southern New
˓→Hampshire Bank & Trust February 2, 2001 February 18, 2003
503 National State Bank of Metropolis Metropolis IL 3815
˓→Banterra Bank of Marion December 14, 2000 March 17, 2005
504 Bank of Honolulu Honolulu HI 21029
˓→ Bank of the Orient October 13, 2000 March 17, 2005
Note: The following examples are not run by the IPython evaluator because having so many network-accessing
functions slows down the documentation build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it on the pandas GitHub issues page.
Specify a header row (by default <th> or <td> elements located within a <thead> are used to form the column
index, if multiple rows are contained within <thead> then a MultiIndex is created); if specified, the header row is
taken from the data minus the parsed header elements (<th> elements).
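A sketch (url is a placeholder; as noted above, these examples are not executed):
dfs = pd.read_html(url, header=0)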
Specify a number of rows to skip using a list (a range object works as well):
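Again as a sketch with a placeholder url:
dfs = pd.read_html(url, skiprows=[0, 1])

# a range works as well
dfs = pd.read_html(url, skiprows=range(2))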
Specify converters for columns. This is useful for numerical text data that has leading zeros. By default columns that
are numerical are cast to numeric types and the leading zeros are lost. To avoid this, we can convert these columns to
strings.
url_mcc = 'https://en.wikipedia.org/wiki/Mobile_country_code'
dfs = pd.read_html(url_mcc, match='Telekom Albania', header=0,
converters={'MNC': str})
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format='{0:.40g}'.format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only parser you provide. If you only have a single
parser you can provide just a string, but it is considered good practice to pass a list with one string if, for example, the
function expects a sequence of strings. You may use:
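For example, a sketch of passing an explicit parser (the url and match string are placeholders):
dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor='lxml')

# or, equivalently, as a one-element list
dfs = pd.read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml'])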
However, if you have bs4 and html5lib installed and pass None or ['lxml', 'bs4'] then the parse will most
likely succeed. Note that as soon as a parse succeeds, the function will return.
DataFrame objects have an instance method to_html which renders the contents of the DataFrame as an HTML
table. The function arguments are as in the method to_string described above.
Note: Not all of the possible options for DataFrame.to_html are shown here for brevity’s sake. See
to_html() for the full set of options.
In [305]: df
Out[305]:
0 1
0 -0.184744 0.496971
1 -0.856240 1.857977
HTML:
The columns argument will limit the columns shown:
In [307]: print(df.to_html(columns=[0]))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.184744</td>
</tr>
<tr>
<th>1</th>
<td>-0.856240</td>
</tr>
</tbody>
</table>
HTML:
float_format takes a Python callable to control the precision of floating point values:
In [308]: print(df.to_html(float_format='{0:.10f}'.format))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>-0.1847438576</td>
<td>0.4969711327</td>
</tr>
<tr>
<th>1</th>
<td>-0.8562396763</td>
<td>1.8579766508</td>
</tr>
</tbody>
</table>
HTML:
bold_rows will make the row labels bold by default, but you can turn that off:
In [309]: print(df.to_html(bold_rows=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>-0.184744</td>
<td>0.496971</td>
</tr>
<tr>
<td>1</td>
<td>-0.856240</td>
<td>1.857977</td>
</tr>
</tbody>
</table>
The classes argument provides the ability to give the resulting HTML table CSS classes. Note that these classes
are appended to the existing 'dataframe' class.
In [310]: print(df.to_html(classes=['awesome_table_class', 'even_more_awesome_class
˓→']))
The render_links argument provides the ability to add hyperlinks to cells that contain URLs.
New in version 0.24.
In [311]: url_df = pd.DataFrame({
.....: 'name': ['Python', 'Pandas'],
.....: 'url': ['https://www.python.org/', 'https://pandas.pydata.org']})
.....:
In [312]: print(url_df.to_html(render_links=True))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</
˓→a></td>
</tr>
<tr>
<th>1</th>
<td>Pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
HTML:
Finally, the escape argument allows you to control whether the “<”, “>” and “&” characters are escaped in the resulting
HTML (by default it is True). To get HTML without escaped characters, pass escape=False.
In [313]: df = pd.DataFrame({'a': list('&<>'), 'b': np.random.randn(3)})
Escaped:
In [314]: print(df.to_html())
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&amp;</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td>&lt;</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>&gt;</td>
<td>-0.400654</td>
</tr>
</tbody>
</table>
Not escaped:
In [315]: print(df.to_html(escape=False))
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>-0.474063</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>-0.230305</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-0.400654</td>
</tr>
</tbody>
</table>
Note: Some browsers may not show a difference in the rendering of the previous two HTML tables.
There are some versioning issues surrounding the libraries that are used to parse HTML tables in the top-level pandas
io function read_html.
Issues with lxml
• Benefits
– lxml is very fast.
– lxml requires Cython to install correctly.
• Drawbacks
– lxml does not make any guarantees about the results of its parse unless it is given strictly valid markup.
– In light of the above, we have chosen to allow you, the user, to use the lxml backend, but this backend
will use html5lib if lxml fails to parse
– It is therefore highly recommended that you install both BeautifulSoup4 and html5lib, so that you will
still get a valid result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
• The above issues hold here as well since BeautifulSoup4 is essentially just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
• Benefits
– html5lib is far more lenient than lxml and consequently deals with real-life markup in a much saner way
rather than just, e.g., dropping an element without notifying you.
– html5lib generates valid HTML5 markup from invalid markup automatically. This is extremely important
for parsing HTML tables, since it guarantees a valid document. However, that does NOT mean that it is
“correct”, since the process of fixing markup does not have a single definition.
– html5lib is pure Python and requires no additional build steps beyond its own installation.
• Drawbacks
– The biggest drawback to using html5lib is that it is slow as molasses. However consider the fact that many
tables on the web are not big enough for the parsing algorithm runtime to matter. It is more likely that the
bottleneck will be in the process of reading the raw text from the URL over the web, i.e., IO (input-output).
For very large tables, this might not be true.
3.1.4 Excel files
The read_excel() method can read Excel 2003 (.xls) files using the xlrd Python module. Excel 2007+ (.xlsx)
files can be read using either xlrd or openpyxl. Binary Excel (.xlsb) files can be read using pyxlsb.
The to_excel() instance method is used for saving a DataFrame to Excel. Generally the semantics are similar
to working with csv data. See the cookbook for some advanced strategies.
In the most basic use-case, read_excel takes a path to an Excel file, and the sheet_name indicating which sheet
to parse.
# Returns a DataFrame
pd.read_excel('path_to_file.xls', sheet_name='Sheet1')
ExcelFile class
To facilitate working with multiple sheets from the same file, the ExcelFile class can be used to wrap the file and
can be passed into read_excel. There will be a performance benefit for reading multiple sheets as the file is read
into memory only once.
xlsx = pd.ExcelFile('path_to_file.xls')
df = pd.read_excel(xlsx, 'Sheet1')
The sheet_names property will generate a list of the sheet names in the file.
The primary use-case for an ExcelFile is parsing multiple sheets with different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile('path_to_file.xls') as xls:
data['Sheet1'] = pd.read_excel(xls, 'Sheet1', index_col=None,
na_values=['NA'])
data['Sheet2'] = pd.read_excel(xls, 'Sheet2', index_col=1)
Note that if the same parsing parameters are used for all sheets, a list of sheet names can simply be passed to
read_excel with no loss in performance.
ExcelFile can also be called with a xlrd.book.Book object as a parameter. This allows the user to control
how the excel file is read. For example, sheets can be loaded on demand by calling xlrd.open_workbook() with
on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook('path_to_file.xls', on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, 'Sheet1')
df2 = pd.read_excel(xls, 'Sheet2')
Specifying sheets
# Returns a DataFrame
pd.read_excel('path_to_file.xls', 'Sheet1', index_col=None, na_values=['NA'])
# Returns a DataFrame
pd.read_excel('path_to_file.xls', 0, index_col=None, na_values=['NA'])
# Returns a DataFrame
pd.read_excel('path_to_file.xls')
read_excel can read more than one sheet, by setting sheet_name to either a list of sheet names, a list of sheet
positions, or None to read all sheets. Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
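For example, with the same placeholder file as above:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames
pd.read_excel('path_to_file.xls', sheet_name=['Sheet1', 3])

# Returns all sheets as a dictionary of DataFrames
pd.read_excel('path_to_file.xls', sheet_name=None)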
Reading a MultiIndex
read_excel can read a MultiIndex index, by passing a list of columns to index_col and a MultiIndex
column by passing a list of rows to header. If either the index or columns have serialized level names those will
be read in as well by specifying the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [316]: df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([['a', 'b'], ['c', 'd']]))
.....:
In [317]: df.to_excel('path_to_file.xlsx')
In [318]: df = pd.read_excel('path_to_file.xlsx', index_col=[0, 1])
In [319]: df
Out[319]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same parameters.
In [321]: df.to_excel('path_to_file.xlsx')
In [323]: df
Out[323]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each should be passed to index_col
and header:
In [325]: df.to_excel('path_to_file.xlsx')
In [327]: df
Out[327]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
It is often the case that users will insert columns to do temporary computations in Excel and you may not want to read
in those columns. read_excel takes a usecols keyword to allow you to specify a subset of columns to parse.
Deprecated since version 0.24.0.
Passing in an integer for usecols has been deprecated. Please pass in a list of ints from 0 to usecols inclusive
instead.
If usecols is an integer, then it is assumed to indicate the last column to be parsed.
You can also specify a comma-delimited set of Excel columns and ranges as a string:
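For example (the column letters here are illustrative):
# parse column A, column C and the range E through F
pd.read_excel('path_to_file.xls', 'Sheet1', usecols='A,C,E:F')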
If usecols is a list of integers, then it is assumed to be the file column indices to be parsed.
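For example:
# parse only the 1st, 3rd and 4th columns
pd.read_excel('path_to_file.xls', 'Sheet1', usecols=[0, 2, 3])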
Parsing dates
Datetime-like values are normally automatically converted to the appropriate dtype when reading the excel file. But
if you have a column of strings that look like dates (but are not actually formatted as dates in excel), you can use the
parse_dates keyword to parse those strings to datetimes:
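For example (the column name is a placeholder):
pd.read_excel('path_to_file.xls', 'Sheet1', parse_dates=['date_strings'])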
Cell converters
It is possible to transform the contents of Excel cells via the converters option. For instance, to convert a column
to boolean:
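For example (the column name is a placeholder):
pd.read_excel('path_to_file.xls', 'Sheet1', converters={'MyBools': bool})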
This option handles missing values and treats exceptions in the converters as missing data. Transformations are
applied cell by cell rather than to the column as a whole, so the array dtype is not guaranteed. For instance, a column
of integers with missing values cannot be transformed to an array with integer dtype, because NaN is strictly a float.
You can manually mask missing data to recover integer dtype:
def cfun(x):
return int(x) if x else -1
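The converter is then passed to read_excel in the same way (again with a placeholder column name):
pd.read_excel('path_to_file.xls', 'Sheet1', converters={'MyInts': cfun})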
Dtype specifications
As an alternative to converters, the type for an entire column can be specified using the dtype keyword, which takes a
dictionary mapping column names to types. To interpret data with no type inference, use the type str or object.
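For example (the column names are placeholders):
pd.read_excel('path_to_file.xls', dtype={'MyInts': 'int64', 'MyText': str})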
To write a DataFrame object to a sheet of an Excel file, you can use the to_excel instance method. The arguments
are largely the same as to_csv described above, the first argument being the name of the excel file, and the optional
second argument the name of the sheet to which the DataFrame should be written. For example:
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')
Files with a .xls extension will be written using xlwt and those with a .xlsx extension will be written using
xlsxwriter (if available) or openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output. The index_label will be placed
in the second row instead of the first. You can place it in the first row by setting the merge_cells option in
to_excel() to False:
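For example:
df.to_excel('path_to_file.xlsx', index_label='label', merge_cells=False)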
In order to write separate DataFrames to separate sheets in a single Excel file, one can pass an ExcelWriter.
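For example (df1 and df2 are placeholder DataFrames):
with pd.ExcelWriter('path_to_file.xlsx') as writer:
    df1.to_excel(writer, sheet_name='Sheet1')
    df2.to_excel(writer, sheet_name='Sheet2')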
Note: Wringing a little more performance out of read_excel. Internally, Excel stores all numeric data as floats.
Because this can produce unexpected behavior when reading in data, pandas defaults to trying to convert integers to
floats if it doesn’t lose information (1.0 --> 1). You can pass convert_float=False to disable this behavior,
which may give a slight performance improvement.
Pandas supports writing Excel files to buffer-like objects such as StringIO or BytesIO using ExcelWriter.
from io import BytesIO

bio = BytesIO()

# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1')
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note: engine is optional but recommended. Setting the engine determines the version of workbook produced.
Setting engine='xlrd' will produce an Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If omitted, an Excel 2007-formatted workbook
is produced.
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the
DataFrame’s to_excel method.
• float_format : Format string for floating point numbers (default None).
• freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze.
Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
Using the Xlsxwriter engine provides many options for controlling the format of an Excel worksheet created with
the to_excel method. Excellent examples can be found in the Xlsxwriter documentation here:
https://xlsxwriter.readthedocs.io/working_with_pandas.html
3.1.5 OpenDocument Spreadsheets
# Returns a DataFrame
pd.read_excel('path_to_file.ods', engine='odf')
Note: Currently pandas only supports reading OpenDocument spreadsheets. Writing is not implemented.
3.1.6 Binary Excel (.xlsb) files
Note: Currently pandas only supports reading binary Excel files. Writing is not implemented.
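A sketch of reading a binary Excel file with the pyxlsb engine:
# Returns a DataFrame
pd.read_excel('path_to_file.xlsb', engine='pyxlsb')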
3.1.7 Clipboard
A handy way to grab data is to use the read_clipboard() method, which takes the contents of the clipboard
buffer and passes them to the read_csv method. For instance, you can copy the following text to the clipboard
(CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
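With that text on the clipboard, the data can then be imported directly into a DataFrame:
clipdf = pd.read_clipboard()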
The to_clipboard method can be used to write the contents of a DataFrame to the clipboard. Following which
you can paste the clipboard contents into other applications (CTRL-V on many operating systems). Here we illustrate
writing a DataFrame into clipboard and reading it back.
>>> df = pd.DataFrame({'A': [1, 2, 3],
... 'B': [4, 5, 6],
... 'C': ['p', 'q', 'r']},
... index=['x', 'y', 'z'])
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got the same content back, which we had earlier written to the clipboard.
Note: You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
3.1.8 Pickling
All pandas objects are equipped with to_pickle methods which use Python’s cPickle module to save data
structures to disk using the pickle format.
In [328]: df
Out[328]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [329]: df.to_pickle('foo.pkl')
The read_pickle function in the pandas namespace can be used to load any pickled pandas object (or any other
pickled object) from file:
In [330]: pd.read_pickle('foo.pkl')
Out[330]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning: Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning: read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3
In [331]: df = pd.DataFrame({
.....: 'A': np.random.randn(1000),
.....: 'B': 'foo',
.....: 'C': pd.date_range('20130101', periods=1000, freq='s')})
.....:
In [332]: df
Out[332]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
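The compressed round trip that produces rt below is not shown on this page; to_pickle and read_pickle accept a compression keyword (inferred from the file extension by default), so it likely resembles:
df.to_pickle("data.pkl.compress", compression="gzip")
rt = pd.read_pickle("data.pkl.compress", compression="gzip")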
In [335]: rt
Out[335]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39
In [338]: rt
Out[338]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39
In [339]: df.to_pickle("data.pkl.gz")
In [340]: rt = pd.read_pickle("data.pkl.gz")
In [341]: rt
Out[341]:
A B C
0 -0.288267 foo 2013-01-01 00:00:00
1 -0.084905 foo 2013-01-01 00:00:01
2 0.004772 foo 2013-01-01 00:00:02
3 1.382989 foo 2013-01-01 00:00:03
4 0.343635 foo 2013-01-01 00:00:04
.. ... ... ...
995 -0.220893 foo 2013-01-01 00:16:35
996 0.492996 foo 2013-01-01 00:16:36
997 -0.461625 foo 2013-01-01 00:16:37
998 1.361779 foo 2013-01-01 00:16:38
999 -1.197988 foo 2013-01-01 00:16:39
In [342]: df["A"].to_pickle("s1.pkl.bz2")
In [343]: rt = pd.read_pickle("s1.pkl.bz2")
In [344]: rt
Out[344]:
0 -0.288267
1 -0.084905
2 0.004772
3 1.382989
4 0.343635
...
995 -0.220893
996 0.492996
997 -0.461625
998 1.361779
999 -1.197988
Name: A, Length: 1000, dtype: float64
3.1.9 msgpack
pandas support for msgpack has been removed in version 1.0.0. It is recommended to use pyarrow for on-the-wire
transmission of pandas objects.
Example pyarrow usage:
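A minimal sketch of serializing a DataFrame with pyarrow; pyarrow's serialization API has changed across releases, so this reflects the context-based API available around pandas 1.0 and is illustrative only:
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({'A': [1, 2, 3]})

# serialize to bytes and back
context = pa.default_serialization_context()
df_bytes = context.serialize(df).to_buffer().to_pybytes()
result = context.deserialize(df_bytes)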
3.1.10 HDF5 (PyTables)
HDFStore is a dict-like object which reads and writes pandas using the high performance HDF5 format using the
excellent PyTables library. See the cookbook for some advanced strategies.
Warning: pandas requires PyTables >= 3.0.0. There is an indexing bug in PyTables < 3.2 which may appear
when querying stores using an index. If you see a subset of results being returned, upgrade to PyTables >= 3.2.
Stores created previously will need to be rewritten using the updated version.
In [346]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a dict:
In [351]: store['df'] = df
In [356]: store
Out[356]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [358]: store
Out[358]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [359]: store.is_open
Out[359]: False
# Working with, and automatically closing the store using a context manager
In [360]: with pd.HDFStore('store.h5') as store:
.....: do_something(store)
.....:
Read/write API
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing, similar to how
read_csv and to_csv work.
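A minimal sketch of the top-level API:
df_tl = pd.DataFrame({'A': list(range(5)), 'B': list(range(5))})

# write to an HDF5 file and read a selection back
df_tl.to_hdf('store_tl.h5', 'table', append=True)
pd.read_hdf('store_tl.h5', 'table', where=['index > 2'])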
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [365]: df_with_missing
Out[365]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
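The write/read calls around df_with_missing are not shown on this page; they likely resemble:
df_with_missing.to_hdf('file.h5', 'df_with_missing',
                       format='table', mode='w')
pd.read_hdf('file.h5', 'df_with_missing')

# drop rows that are entirely missing when writing
df_with_missing.to_hdf('file.h5', 'df_with_missing',
                       format='table', mode='w', dropna=True)
pd.read_hdf('file.h5', 'df_with_missing')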
Fixed format
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply remove them and
rewrite). Nor are they queryable; they must be retrieved in their entirety. They also do not support dataframes with
non-unique column names. The fixed format stores offer very fast writing and slightly faster reading than table
stores. This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
Warning: A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf('test_fixed.h5', 'df')
>>> pd.read_hdf('test_fixed.h5', 'df', where='index>5')
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format
HDFStore supports another PyTables format on disk, the table format. Conceptually a table is shaped very
much like a DataFrame, with rows and columns. A table may be appended to in the same or other sessions.
In addition, delete and query type operations are supported. This format is specified by format='table' or
format='t' to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
enable put/append/to_hdf to store in the table format by default.
In [370]: store = pd.HDFStore('store.h5')
In [371]: df1 = df[0:4]
In [375]: store
Out[375]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Note: You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys
Keys to a store can be specified as a string. These can be in a hierarchical path-name like format (e.g. foo/bar/
bah), which will generate a hierarchy of sub-stores (or Groups in PyTables parlance). Keys can be specified without
the leading ‘/’ and are always absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove everything in the
sub-store and below, so be careful.
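A sketch of hierarchical keys in practice, using the store opened above and any DataFrame df:
store.put('foo/bar/bah', df)
store.append('food/orange', df)
store.append('food/apple', df)

# remove everything under the 'food' sub-store
store.remove('food')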
In [381]: store
Out[381]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [384]: store
Out[384]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which will yield a tuple for each group key along
with the relative keys of its contents.
New in version 0.24.0.
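A sketch of walking the group hierarchy with the store opened above:
for (path, subgroups, subkeys) in store.walk():
    for subgroup in subgroups:
        print('GROUP: {}/{}'.format(path, subgroup))
    for subkey in subkeys:
        key = '/'.join([path, subkey])
        print('KEY: {}'.format(key))
        print(store.get(key))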
Warning: Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored
under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array),
˓→'axis1' (Array)]
Storing types
Storing mixed-dtype data is supported. Strings are stored as a fixed-width using the maximum size of the appended
column. Subsequent attempts at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append will set a larger minimum for the string
columns. Storing floats, strings, ints, bools, datetime64 are currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default nan representation on disk (which
converts to/from np.nan); this defaults to nan.
In [388]: df_mixed.loc[df_mixed.index[3:5],
.....: ['A', 'B', 'string', 'datetime64']] = np.nan
.....:
In [390]: df_mixed1 = store.select('df_mixed')
In [391]: df_mixed1
Out[391]:
A B C string int bool datetime64
0 -0.116008 0.743946 -0.398501 string 1 True 2001-01-02
1 0.592375 -0.533097 -0.677311 string 1 True 2001-01-02
2 0.476481 -0.140850 -0.874991 string 1 True 2001-01-02
3 NaN NaN -1.167564 NaN 1 True NaT
4 NaN NaN -0.593353 NaN 1 True NaT
5 0.852727 0.463819 0.146262 string 1 True 2001-01-02
6 -1.177365 0.793644 -0.131959 string 1 True 2001-01-02
7 1.236988 0.221252 0.089012 string 1 True 2001-01-02
In [392]: df_mixed1.dtypes.value_counts()
Out[392]:
float64 2
bool 1
float32 1
object 1
datetime64[ns] 1
int64 1
dtype: int64
Storing MultiIndex DataFrames as tables is very similar to storing/selecting from homogeneous index
DataFrames.
In [394]: index = pd.MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
.....: ['one', 'two', 'three']],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
.....: [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=['foo', 'bar'])
.....:
In [396]: df_mi
Out[396]:
A B C
foo bar
foo one 0.667450 0.169405 -1.358046
two -0.105563 0.492195 0.076693
three 0.213685 -0.285283 -1.210529
bar one -1.408386 0.941577 -0.342447
two 0.222031 0.052607 2.093214
baz two 1.064908 1.778161 -0.913867
three -0.030004 -0.399846 -1.234765
qux one 0.081323 -0.268494 0.168016
two -0.898283 -0.218499 1.408028
three -1.267828 -0.689263 0.520995
In [398]: store.select('df_mi')
Out[398]:
A B C
foo bar
foo one 0.667450 0.169405 -1.358046
two -0.105563 0.492195 0.076693
three 0.213685 -0.285283 -1.210529
bar one -1.408386 0.941577 -0.342447
two 0.222031 0.052607 2.093214
baz two 1.064908 1.778161 -0.913867
three -0.030004 -0.399846 -1.234765
qux one 0.081323 -0.268494 0.168016
two -0.898283 -0.218499 1.408028
three -1.267828 -0.689263 0.520995
Note: The index keyword is reserved and cannot be used as a level name.
Querying
Querying a table
select and delete operations have an optional criterion that can be specified to select/delete only a subset of the
data. This allows one to have a very large on-disk table and retrieve only a portion of the data.
A query is specified using the Term class under the hood, as a boolean expression.
• index and columns are supported indexers of DataFrames.
• if data_columns are specified, these can be used as additional indexers.
• level name in a MultiIndex, with default name level_0, level_1, . . . if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
• | : or
• & : and
• ( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note:
• = will be automatically expanded to the comparison operator ==
• ~ is the not operator, but can only be used in very limited circumstances
• If a list/tuple of expressions is passed they will be combined via &
Note: Passing a string to a query by interpolating it into the query expression is not recommended. Simply assign the
string of interest to a variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select('df', 'index == string')
instead of this
string = "HolyMoly'"
store.select('df', 'index == %s' % string)
The latter will not work and will raise a SyntaxError. Note that there’s a single quote followed by a double quote
in the string variable.
If you must interpolate, use the '%r' format specifier
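i.e. something like:
store.select('df', 'index == %r' % string)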
The columns keyword can be supplied to select a list of columns to be returned, this is equivalent to passing a
'columns=list_of_columns_to_filter':
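For example:
store.select('df', "columns=['A', 'B']")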
start and stop parameters can be specified to limit the total search space. These are in terms of the total number
of rows in a table.
Note: select will raise a ValueError if the query expression has an unknown variable reference. Usually this
means that you are trying to select on a column that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]
You can store and query using the timedelta64[ns] type. Terms can be specified in the format:
<float>(<unit>), where float may be signed (and fractional), and unit can be D,s,ms,us,ns for the timedelta.
Here’s an example:
In [408]: dftd
Out[408]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [409]: store.append('dftd', dftd, data_columns=True)
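A query against the stored table then likely looks like:
# rows where column C is more negative than -3.5 days
store.select('dftd', "C<'-3.5D'")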
Query MultiIndex
Selecting from a MultiIndex can be achieved by using the name of the level.
In [411]: df_mi.index.names
Out[411]: FrozenList(['foo', 'bar'])
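For example, the level name foo can be used directly in the where clause:
store.select('df_mi', 'foo=baz and columns=[A, B]')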
If the MultiIndex level names are None, the levels are automatically made available via the level_n keyword:
In [415]: df_mi_2
Out[415]:
A B C
foo one 0.856838 1.491776 0.001283
two 0.701816 -1.097917 0.102588
three 0.661740 0.443531 0.559313
bar one -0.459055 -1.222598 -0.455304
two -0.781163 0.826204 -0.530057
baz two 0.296135 1.366810 1.073372
three -0.994957 0.755314 2.119746
qux one -2.628174 -0.089460 -0.133636
two 0.337920 -0.634027 0.421107
three 0.604303 1.053434 1.109090
# the levels are automatically included as data columns with keyword level_n
In [417]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[417]:
A B C
foo two 0.701816 -1.097917 0.102588
Indexing
You can create/modify an index for a table with create_table_index after data is already in the table (after an
append/put operation). Creating a table index is highly encouraged. This will speed your queries a great deal
when you use a select with the indexed dimension as the where.
Note: Indexes are automagically created on the indexables and any data columns you specify. This behavior can be
turned off by passing index=False to append.
In [421]: i = store.root.df.table.cols.index.index
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append,
then recreate at the end.
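The appends that build the table inspected below are not shown on this page; they likely resemble the following sketch (df_1 and df_2 are placeholder DataFrames with a column B):
st = pd.HDFStore('appends.h5', mode='w')
st.append('df', df_1, data_columns=['B'], index=False)
st.append('df', df_2, data_columns=['B'], index=False)

# create the index only once all of the data has been appended
st.create_table_index('df', columns=['B'], optlevel=9, kind='full')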
In [428]: st.get_storer('df').table
Out[428]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
In [430]: st.get_storer('df').table
Out[430]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, full, shuffle, zlib(1)).is_csi=True}
In [431]: st.close()
You can designate (and index) certain columns that you want to be able to perform queries (other than the indexable
columns, which you can always query). For instance say you want to perform this common operation, on-disk, and
return just the frame that matches this query. You can specify data_columns = True to force all columns to be
data_columns.
In [438]: df_dc
Out[438]:
A B C string string2
2000-01-01 1.334065 0.521036 0.930384 foo cool
2000-01-02 -1.613932 1.000000 1.000000 foo cool
2000-01-03 -0.585314 1.000000 1.000000 foo cool
2000-01-04 0.632369 -1.249657 0.975593 foo cool
2000-01-05 1.060617 -0.143682 0.218423 NaN cool
2000-01-06 3.050329 1.317933 -0.963725 NaN cool
2000-01-07 -0.539452 -0.771133 0.023751 foo cool
2000-01-08 0.649464 -1.736427 0.197288 bar cool
# on-disk operations
In [439]: store.append('df_dc', df_dc, data_columns=['B', 'C', 'string', 'string2'])
# getting creative
In [441]: store.select('df_dc', 'B > 0 & C > 0 & string == foo')
Out[441]:
A B C string string2
2000-01-01 1.334065 0.521036 0.930384 foo cool
2000-01-02 -1.613932 1.000000 1.000000 foo cool
2000-01-03 -0.585314 1.000000 1.000000 foo cool
There is some performance degradation by making lots of columns into data columns, so it is up to the user to designate
these. In addition, you cannot change data columns (nor indexables) after the first append/put operation (Of course
you can simply read in the data and create a new table!).
Iterator
Note: You can also use the iterator with read_hdf which will open, then automatically close the store when finished
iterating.
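For example, a sketch of iterating with read_hdf:
for df in pd.read_hdf('store.h5', 'df', chunksize=3):
    print(df)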
Note that the chunksize keyword applies to the source rows. So if you are doing a query, the chunksize will
subdivide the total rows in the table with the query applied, returning an iterator of potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return chunks.
In [445]: dfeq = pd.DataFrame({'number': np.arange(1, 11)})
In [446]: dfeq
Out[446]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
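The recipe itself is not shown on this page; a sketch along these lines, using select_as_coordinates to materialize the matching row locations and then slicing them into equal-sized pieces, looks like:
store.append('dfeq', dfeq, data_columns=['number'])

def chunks(l, n):
    return [l[i:i + n] for i in range(0, len(l), n)]

evens = [2, 4, 6, 8, 10]
coordinates = store.select_as_coordinates('dfeq', 'number=evens')
for c in chunks(coordinates, 2):
    print(store.select('dfeq', where=c))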
Advanced queries
To retrieve a single indexable or data column, use the method select_column. This will, for example, enable you
to get the index very quickly. These return a Series of the result, indexed by the row number. These do not currently
accept the where selector.
In [452]: store.select_column('df_dc', 'index')
Out[452]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
Selecting coordinates
Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an Int64Index
of the resulting locations. These coordinates can also be passed to subsequent where operations.
In [457]: c
Out[457]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
Sometimes your query can involve creating a list of rows to select. Usually this mask would be a resulting index
from an indexing operation. This example selects the months of a datetimeindex which are 5:
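The example itself is not shown on this page; a sketch (df_mask and the store key are illustrative names):
df_mask = pd.DataFrame(np.random.randn(1000, 2),
                       index=pd.date_range('20000101', periods=1000))
store.append('df_mask', df_mask)

# build a mask from one column, then use its index as the where
c = store.select_column('df_mask', 'index')
where = c[pd.DatetimeIndex(c).month == 5].index
store.select('df_mask', where=where)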
Storer object
If you want to inspect the stored object, retrieve via get_storer. You could use this programmatically to say get
the number of rows in an object.
In [464]: store.get_storer('df_dc').nrows
Out[464]: 8
Multiple table queries
The methods append_to_multiple and select_as_multiple can perform appending/selecting from multiple
tables at once. The idea is to have one table (call it the selector table) that you index most/all of the columns, and
perform your queries. The other table(s) are data tables with an index matching the selector table’s index. You can
then perform a very fast query on the selector table, yet get lots of data back. This method is similar to having a very
wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame into multiple tables according to d, a dictio-
nary that maps the table names to a list of ‘columns’ you want in that table. If None is used in place of a list, that
table will have the remaining unspecified columns of the given DataFrame. The argument selector defines which
table is the selector table (which you can make queries from). The argument dropna will drop rows from the input
DataFrame to ensure tables are synchronized. This means that if a row for one of the tables being written to is
entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES. Remember that
entirely np.nan rows are not written to the HDFStore, so if you choose to call dropna=False, some tables may
have more rows than others, and therefore select_as_multiple may not work or it may return unexpected
results.
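The call that created the df1_mt/df2_mt tables selected below is not shown on this page; it likely resembles the following (df_mt is a placeholder frame with columns A through F plus foo):
# A and B go to the selector table df1_mt; the remaining columns go to df2_mt
store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
                         df_mt, selector='df1_mt')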
In [469]: store
Out[469]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [471]: store.select('df2_mt')
Out[471]:
C D E F foo
2000-01-01 1.602451 -0.221229 0.712403 0.465927 bar
2000-01-02 -0.525571 0.851566 -0.681308 -0.549386 bar
2000-01-03 -0.044171 1.396628 1.041242 -1.588171 bar
2000-01-04 0.463351 -0.861042 -2.192841 -1.025263 bar
2000-01-05 -1.954845 -1.712882 -0.204377 -1.608953 bar
2000-01-06 1.601542 -0.417884 -2.757922 -0.307713 bar
2000-01-07 -1.935461 1.007668 0.079529 -1.459471 bar
2000-01-08 -1.057072 -0.864360 -1.124870 1.732966 bar
# as a multiple
In [472]: store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
.....: selector='df1_mt')
.....:
Out[472]:
A B C D E F foo
2000-01-05 1.043605 1.798494 -1.954845 -1.712882 -0.204377 -1.608953 bar
2000-01-07 0.150568 0.754820 -1.935461 1.007668 0.079529 -1.459471 bar
You can delete from a table selectively by specifying a where. In deleting rows, it is important to understand that
PyTables deletes rows by erasing the rows, then moving the following data. Thus deleting can potentially be a very
expensive operation depending on the orientation of your data. To get optimal performance, it’s worthwhile to have
the dimension you are deleting be the first of the indexables.
Data is ordered (on the disk) in terms of the indexables. Here’s a simple use case. You store panel-type data, with
dates in the major_axis and ids in the minor_axis. The data is then interleaved like this:
• date_1
– id_1
– id_2
– .
– id_n
• date_2
– id_1
– .
– id_n
It should be clear that a delete operation on the major_axis will be fairly quick, as one chunk is removed, then the
following data moved. On the other hand a delete operation on the minor_axis will be very expensive. In this case
it would almost certainly be faster to rewrite the table using a where that selects all but the missing data.
Warning: Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files automatically. Thus, repeatedly
deleting (or removing nodes) and adding again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Compression
PyTables allows the stored data to be compressed. This applies to all kinds of stores, not just tables. Two parameters
are used to control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed. complevel=0 and complevel=None dis-
ables compression and 0<complevel<10 enables compression.
complib specifies which compression library to use. If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates or speed and the results will depend
on the type of data. Which type of compression to choose depends on your specific needs and data. The list of
supported compression libraries:
• zlib: The default compression library. A classic in terms of compression, achieves good com-
pression rates but is somewhat slow.
• lzo: Fast compression and decompression.
• bzip2: Good compression rates.
• blosc: Fast compression and decompression.
Note: If the library specified with the complib option is missing on your platform, compression defaults to zlib
without further ado.
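Compression for every object written to a file can be enabled when the store is opened, for example:
store_compressed = pd.HDFStore('store_compressed.h5',
                               complevel=9, complib='blosc')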
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append('df', df, complib='zlib', complevel=5)
ptrepack
PyTables offers better write performance when tables are compressed after they are written, as opposed to turning on
compression at the very beginning. You can use the supplied PyTables utility ptrepack. In addition, ptrepack
can change compression levels after the fact.
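For example, a typical invocation (this is a shell command, not Python; the exact flags shown are illustrative):
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5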
Furthermore ptrepack in.h5 out.h5 will repack the file to allow you to reuse previously deleted space. Alter-
natively, one can simply remove the file and write again, or use the copy method.
Caveats
Warning: HDFStore is not-threadsafe for writing. The underlying PyTables only supports concurrent
reads (via threading or processes). If you need reading and writing at the same time, you need to serialize these
operations in a single thread in a single process. You will corrupt your data otherwise. See the (GH2397) for more
information.
• If you use locks to manage write access between multiple processes, you may want to use fsync() before
releasing write locks. For convenience you can use store.flush(fsync=True) to do this for you.
• Once a table is created columns (DataFrame) are fixed; only exactly the same columns can be appended
• Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not necessarily equal across time-
zone versions. So if data is localized to a specific timezone in the HDFStore using one version of a timezone
library and that data is updated with another version, the data will be converted to UTC since these timezones
are not considered equal. Either use the same version of timezone library or use tz_convert with the updated
timezone definition.
Warning: PyTables will show a NaturalNameWarning if a column name cannot be used as an attribute
selector. Natural identifiers contain only letters, numbers, and underscores, and may not begin with a number.
Other identifiers cannot be used in a where clause and are generally a bad idea.
DataTypes
HDFStore will map an object dtype to the PyTables underlying dtype. This means the following types are known
to work:
unicode columns are not supported, and WILL FAIL.
Categorical data
You can write data that contains category dtypes to a HDFStore. Queries work the same as if it was an object
array. However, the category dtyped data is stored in a more efficient manner.
In [474]: dfcat
Out[474]:
A B
0 a 0.477849
1 a 0.283128
2 b -2.045700
3 b -0.338206
4 c -0.423113
5 d 2.314361
6 b -0.033100
7 a -0.965461
In [475]: dfcat.dtypes
Out[475]:
A category
B float64
dtype: object
In [479]: result
Out[479]:
A B
2 b -2.045700
3 b -0.338206
4 c -0.423113
6 b -0.033100
In [480]: result.dtypes
Out[480]:
A category
B float64
dtype: object
String columns
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns. A string
column’s itemsize is calculated as the maximum length of the data (for that column) passed to the HDFStore in the
first append. If a subsequent append introduces a string larger than the column can hold, an Exception will be raised
(otherwise you could have a silent truncation of these columns, leading to loss of information).
In the future we may relax this and allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key
to allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note: If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of
any string passed
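The append calls that produce the table layouts shown below are not included on this page; they likely resemble:
dfs = pd.DataFrame({'A': 'foo', 'B': 'bar'}, index=list(range(5)))

# A and B both get an itemsize of 30
store.append('dfs', dfs, min_itemsize=30)

# A becomes a data_column with an itemsize of 30;
# B is sized by the longest string actually passed
store.append('dfs2', dfs, min_itemsize={'A': 30})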
In [482]: dfs
Out[482]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
In [484]: store.get_storer('dfs').table
Out[484]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False}
In [486]: store.get_storer('dfs2').table
Out[486]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"A": Index(6, medium, shuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to
the string value nan. You could inadvertently turn an actual nan value into a missing value.
In [487]: dfss = pd.DataFrame({'A': ['foo', 'bar', 'nan']})
In [488]: dfss
Out[488]:
A
0 foo
1 bar
2 nan
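The append calls behind the two selections below are not shown on this page; a sketch:
# with the default nan_rep the literal string 'nan' round-trips as a missing value
store.append('dfss', dfss)

# choosing a different representation on disk keeps the literal string 'nan'
store.append('dfss2', dfss, nan_rep='_nan_')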
In [490]: store.select('dfss')
Out[490]:
A
0 foo
1 bar
2 NaN
In [492]: store.select('dfss2')
Out[492]:
A
0 foo
1 bar
2 nan
External compatibility
HDFStore writes table format objects in specific formats suitable for producing loss-less round trips to pandas
objects. For external compatibility, HDFStore can read native PyTables format tables.
It is possible to write an HDFStore object that can easily be imported into R using the rhdf5 library (Package
website). Create a table format store like this:
In [494]: df_for_r.head()
Out[494]:
first second class
0 0.864919 0.852910 0
1 0.030579 0.412962 1
2 0.015226 0.978410 0
3 0.498512 0.686761 0
4 0.232163 0.328185 1
In [497]: store_export
Out[497]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5 library. The following example function reads
the corresponding column names and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)