
Npartitions with Groupby and Apply #967

@bhagerman00

Description

This may be my misunderstanding of how dask handles groupby, but changing the number of partitions in the dask data frame also changes the results of apply:

import numpy as np
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({'A': np.arange(100), 'B': np.random.randn(100), 'C': np.random.randn(100),
                   'Grp1': np.repeat([1, 2], 50), 'Grp2': np.repeat([3, 4, 5, 6], 25)})

test_dd1 = dd.from_pandas(df, npartitions=1)
test_dd2 = dd.from_pandas(df, npartitions=2)
test_dd5 = dd.from_pandas(df, npartitions=5)
test_dd10 = dd.from_pandas(df, npartitions=10)
test_dd100 = dd.from_pandas(df, npartitions=100)

def test_func(x):
    # Fraction of rows in the group where B is positive
    x['New_Col'] = len(x[x['B'] > 0.]) / len(x['B'])
    return x

test_dd1.groupby(['Grp1', 'Grp2']).apply(test_func).compute().head()
   A         B         C  Grp1  Grp2  New_Col
0  0 -0.561376 -1.422286     1     3     0.48
1  1 -1.107799  1.075471     1     3     0.48
2  2 -0.719420 -0.574381     1     3     0.48
3  3 -1.287547 -0.749218     1     3     0.48
4  4  0.677617 -0.908667     1     3     0.48

test_dd2.groupby(['Grp1', 'Grp2']).apply(test_func).compute().head()
   A         B         C  Grp1  Grp2  New_Col
0  0 -0.561376 -1.422286     1     3     0.48
1  1 -1.107799  1.075471     1     3     0.48
2  2 -0.719420 -0.574381     1     3     0.48
3  3 -1.287547 -0.749218     1     3     0.48
4  4  0.677617 -0.908667     1     3     0.48

test_dd5.groupby(['Grp1', 'Grp2']).apply(test_func).compute().head()
   A         B         C  Grp1  Grp2  New_Col
0  0 -0.561376 -1.422286     1     3     0.45
1  1 -1.107799  1.075471     1     3     0.45
2  2 -0.719420 -0.574381     1     3     0.45
3  3 -1.287547 -0.749218     1     3     0.45
4  4  0.677617 -0.908667     1     3     0.45

test_dd10.groupby(['Grp1', 'Grp2']).apply(test_func).compute().head()
   A         B         C  Grp1  Grp2  New_Col
0  0 -0.561376 -1.422286     1     3      0.5
1  1 -1.107799  1.075471     1     3      0.5
2  2 -0.719420 -0.574381     1     3      0.5
3  3 -1.287547 -0.749218     1     3      0.5
4  4  0.677617 -0.908667     1     3      0.5

test_dd100.groupby(['Grp1', 'Grp2']).apply(test_func).compute().head()
   A         B         C  Grp1  Grp2  New_Col
0  0 -0.561376 -1.422286     1     3        0
1  1 -1.107799  1.075471     1     3        0
2  2 -0.719420 -0.574381     1     3        0
3  3 -1.287547 -0.749218     1     3        0
4  4  0.677617 -0.908667     1     3        1

df.groupby(['Grp1', 'Grp2']).apply(test_func).head()
   A         B         C  Grp1  Grp2  New_Col
0  0 -0.561376 -1.422286     1     3     0.48
1  1 -1.107799  1.075471     1     3     0.48
2  2 -0.719420 -0.574381     1     3     0.48
3  3 -1.287547 -0.749218     1     3     0.48
4  4  0.677617 -0.908667     1     3     0.48

From a couple of comparisons it seems like npartitions=1 returns the same results as pandas' regular groupby, yet even with a single partition I am able to run the same groupby call that raises a memory error when using pandas alone. However, I'm not sure how to set the partition count with read_csv, only with from_pandas. Any info is appreciated, thanks!
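
For reference, a minimal sketch of two ways the partition count can be controlled when reading a CSV, assuming a recent dask version (the blocksize argument to dd.read_csv and the repartition method; 'data.csv' is just a placeholder file name):

import dask.dataframe as dd

# Partition count from read_csv is driven by the on-disk chunk size:
# a smaller blocksize (in bytes) yields more partitions.
ddf = dd.read_csv('data.csv', blocksize=25_000_000)  # roughly 25 MB per partition
print(ddf.npartitions)

# Alternatively, change the partition count after loading.
ddf = ddf.repartition(npartitions=1)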
