How do I create a new column from the output of pandas groupby().sum()?

How do I create a new column from the output of pandas groupby().sum()?


You want to use transform; this will return a Series with its index aligned to the df, so you can then add it as a new column:

In [74]:
df = pd.DataFrame({'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
                   'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
                   'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
                   'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})

df['Data4'] = df['Data3'].groupby(df['Date']).transform('sum')
df

Out[74]:
   Data2  Data3        Date   Sym  Data4
0     11      5  2015-05-08  aapl     55
1      8      8  2015-05-07  aapl    108
2     10      6  2015-05-06  aapl     66
3     15      1  2015-05-05  aapl    121
4    110     50  2015-05-08  aaww     55
5     60    100  2015-05-07  aaww    108
6    100     60  2015-05-06  aaww     66
7     40    120  2015-05-05  aaww    121


How do I create a new column with Groupby().Sum()?

There are two ways - one straightforward and the other slightly more interesting.


Everybody's Favorite: GroupBy.transform() with 'sum'

@Ed Chum's answer can be simplified a bit: call DataFrame.groupby rather than Series.groupby. This results in simpler syntax.

# The setup.
df[['Date', 'Data3']]

         Date  Data3
0  2015-05-08      5
1  2015-05-07      8
2  2015-05-06      6
3  2015-05-05      1
4  2015-05-08     50
5  2015-05-07    100
6  2015-05-06     60
7  2015-05-05    120

df.groupby('Date')['Data3'].transform('sum')

0     55
1    108
2     66
3    121
4     55
5    108
6     66
7    121
Name: Data3, dtype: int64

It's a tad faster, too:

df2 = pd.concat([df] * 12345)

%timeit df2['Data3'].groupby(df['Date']).transform('sum')
%timeit df2.groupby('Date')['Data3'].transform('sum')

10.4 ms ± 367 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
8.58 ms ± 559 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Unconventional, but Worth your Consideration: GroupBy.sum() + Series.map()

I stumbled upon an interesting idiosyncrasy in the API. From what I can tell, you can reproduce this on any major version over 0.20 (I tested this on 0.23 and 0.24). It seems like you can consistently shave a few milliseconds off the time taken by transform if you instead use a direct function of GroupBy and broadcast it using map:

df.Date.map(df.groupby('Date')['Data3'].sum())

0     55
1    108
2     66
3    121
4     55
5    108
6     66
7    121
Name: Date, dtype: int64

Compare with

df.groupby('Date')['Data3'].transform('sum')

0     55
1    108
2     66
3    121
4     55
5    108
6     66
7    121
Name: Data3, dtype: int64

My tests show that map is a bit faster if you can afford to use a direct GroupBy function (such as mean, min, max, first, etc.). It is more or less faster for most general situations up to around ~200,000 records. After that, the performance really depends on the data.

(Benchmark plots of the two approaches, not reproduced here; left: v0.23, right: v0.24.)
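The same idiom works with any of the direct GroupBy reductions. As a minimal sketch on the same df (the column name 'Data3_mean' is only illustrative), broadcasting the per-date mean instead of the sum:

# broadcast a different direct GroupBy reduction (mean) using map;
# 'Data3_mean' is just an illustrative column name
df['Data3_mean'] = df.Date.map(df.groupby('Date')['Data3'].mean())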

It's a nice alternative to know, and better if you have smaller frames with smaller numbers of groups... but I would recommend transform as a first choice. I thought this was worth sharing anyway.

Benchmarking code, for reference:

import numpy as np
import pandas as pd
import perfplot

perfplot.show(
    setup=lambda n: pd.DataFrame({'A': np.random.choice(n//10, n), 'B': np.ones(n)}),
    kernels=[
        lambda df: df.groupby('A')['B'].transform('sum'),
        lambda df: df.A.map(df.groupby('A')['B'].sum()),
    ],
    labels=['GroupBy.transform', 'GroupBy.sum + map'],
    n_range=[2**k for k in range(5, 20)],
    xlabel='N',
    logy=True,
    logx=True)


In general, I suggest using the more powerful apply, with which you can write your queries in single expressions even for more complicated uses, such as defining a new column whose values are computed as operations on groups, and that can even have different values within the same group!

This is more general than the simple case of defining a column with the same value for every group (like sum in this question, which varies by group but is the same within each group).

Simple case (new column with same value within a group, different across groups):

# I'm assuming the name of your dataframe is something long, like
# `my_data_frame`, to show the power of being able to write your
# data processing in a single expression without multiple statements and
# multiple references to your long name, which is the normal style
# that the pandas API naturally makes you adopt, but which makes the
# code often verbose, sparse, and a pain to generalize or refactor
my_data_frame = pd.DataFrame({
    'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
    'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
    'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
    'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})

(my_data_frame
    # create groups by 'Date'
    .groupby(['Date'])
    # for every small Group DataFrame `gdf` with the same 'Date', do:
    # assign a new column 'Data4' to it, with the value being
    # the sum of 'Data3' for the small dataframe `gdf`
    .apply(lambda gdf: gdf.assign(Data4=lambda gdf: gdf['Data3'].sum()))
    # after groupby operations, the variable(s) you grouped by on
    # are set as indices. In this case, 'Date' was set as an additional
    # level for the (multi)index. But it is still also present as a
    # column. Thus, we drop it from the index:
    .droplevel(0)
)

### OR

# We don't even need to define a variable for our dataframe.
# We can chain everything in one expression
(pd
    .DataFrame({
        'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
        'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
        'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
        'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})
    .groupby(['Date'])
    .apply(lambda gdf: gdf.assign(Data4=lambda gdf: gdf['Data3'].sum()))
    .droplevel(0)
)

Out:

         Date   Sym  Data2  Data3  Data4
3  2015-05-05  aapl     15      1    121
7  2015-05-05  aaww     40    120    121
2  2015-05-06  aapl     10      6     66
6  2015-05-06  aaww    100     60     66
1  2015-05-07  aapl      8      8    108
5  2015-05-07  aaww     60    100    108
0  2015-05-08  aapl     11      5     55
4  2015-05-08  aaww    110     50     55

(Why are the Python expressions within parentheses? So that we don't need to sprinkle our code with backslashes all over the place, and we can put comments within our expression code to describe every step.)
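As a minimal side-by-side sketch (reusing my_data_frame from above): with backslash continuations you cannot put a comment after the backslash, while the wrapping parentheses allow free line breaks and per-step comments.

# backslash continuation: no comment is allowed after the backslash
result = my_data_frame \
    .groupby(['Date']) \
    .apply(lambda gdf: gdf.assign(Data4=lambda gdf: gdf['Data3'].sum()))

# wrapping parentheses: free line breaks and per-step comments
result = (my_data_frame
    .groupby(['Date'])  # split into groups by 'Date'
    .apply(lambda gdf: gdf.assign(Data4=lambda gdf: gdf['Data3'].sum())))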

What is powerful about this? It's that it is harnessing the full power of the "split-apply-combine paradigm". It is allowing you to think in terms of "splitting your dataframe into blocks" and "running arbitrary operations on those blocks" without reducing/aggregating, i.e., without reducing the number of rows. (And without writing explicit, verbose loops and resorting to expensive joins or concatenations to glue the results back.)
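For contrast, here is a rough sketch (assuming the same my_data_frame as above) of the manual version that apply saves you from: splitting by hand, looping over the groups, and gluing the pieces back together with concat.

# manual split-apply-combine, shown only for contrast with the
# single-expression apply version above
pieces = []
for date, gdf in my_data_frame.groupby('Date'):
    gdf = gdf.copy()
    gdf['Data4'] = gdf['Data3'].sum()  # arbitrary per-group computation
    pieces.append(gdf)
result = pd.concat(pieces)             # glue the blocks back together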

Let's consider a more complex example, one in which you have multiple time series of data in your dataframe. You have a column that represents a kind of product, a column that has timestamps, and a column that contains the number of items sold for that product at some time of the year. You would like to group by product and obtain a new column that contains the cumulative total of the items sold for each category. We want a column that, within every "block" with the same product, is still a time series and is monotonically increasing (only within a block).

How can we do this? With groupby + apply!

(pd
    .DataFrame({
        'Date': ['2021-03-11','2021-03-12','2021-03-13','2021-03-11','2021-03-12','2021-03-13'],
        'Product': ['shirt','shirt','shirt','shoes','shoes','shoes'],
        'ItemsSold': [300, 400, 234, 80, 10, 120],
        })
    .groupby(['Product'])
    .apply(lambda gdf: (gdf
        # sort by date within a group
        .sort_values('Date')
        # create new column
        .assign(CumulativeItemsSold=lambda df: df['ItemsSold'].cumsum())))
    .droplevel(0)
)

Out:

         Date Product  ItemsSold  CumulativeItemsSold
0  2021-03-11   shirt        300                  300
1  2021-03-12   shirt        400                  700
2  2021-03-13   shirt        234                  934
3  2021-03-11   shoes         80                   80
4  2021-03-12   shoes         10                   90
5  2021-03-13   shoes        120                  210

Another advantage of this method? It works even if we have to group by multiple fields! For example, if we had a 'Color' field for our products, and we wanted the cumulative series grouped by (Product, Color), we can:

(pd
    .DataFrame({
        'Date': ['2021-03-11','2021-03-12','2021-03-13','2021-03-11','2021-03-12','2021-03-13',
                 '2021-03-11','2021-03-12','2021-03-13','2021-03-11','2021-03-12','2021-03-13'],
        'Product': ['shirt','shirt','shirt','shoes','shoes','shoes',
                    'shirt','shirt','shirt','shoes','shoes','shoes'],
        'Color': ['yellow','yellow','yellow','yellow','yellow','yellow',
                  'blue','blue','blue','blue','blue','blue'],  # new!
        'ItemsSold': [300, 400, 234, 80, 10, 120,
                      123, 84, 923, 0, 220, 94],
        })
    .groupby(['Product', 'Color'])  # We group by 2 fields now
    .apply(lambda gdf: (gdf
        .sort_values('Date')
        .assign(CumulativeItemsSold=lambda df: df['ItemsSold'].cumsum())))
    .droplevel([0, 1])  # We drop 2 levels now
)

Out:

          Date Product   Color  ItemsSold  CumulativeItemsSold
6   2021-03-11   shirt    blue        123                  123
7   2021-03-12   shirt    blue         84                  207
8   2021-03-13   shirt    blue        923                 1130
0   2021-03-11   shirt  yellow        300                  300
1   2021-03-12   shirt  yellow        400                  700
2   2021-03-13   shirt  yellow        234                  934
9   2021-03-11   shoes    blue          0                    0
10  2021-03-12   shoes    blue        220                  220
11  2021-03-13   shoes    blue         94                  314
3   2021-03-11   shoes  yellow         80                   80
4   2021-03-12   shoes  yellow         10                   90
5   2021-03-13   shoes  yellow        120                  210

(This possibility of easily extending to grouping over multiple fields is the reason why I like to always put the arguments of groupby in a list, even if it's a single name, like 'Product' in the previous example.)

And you can do all of this concisely, in a single expression. (Sure, if Python's lambdas were a bit nicer to look at, it would look even nicer.)


Why did I go over a general case? Because this is one of the first SO questions that pop up when googling for things like "pandas new column groupby".


Additional thoughts on the API for this kind of operation

Adding columns based on arbitrary computations made on groups is much like the nice idiom of defining new columns using aggregations over Windows in SparkSQL.

For example, you can think of this (it's Scala code, but the equivalent in PySpark looks practically the same):

val byDepName = Window.partitionBy('depName)

empsalary.withColumn("avg", avg('salary) over byDepName)

as something like (using pandas in the way we have seen above):

empsalary = pd.DataFrame(...some dataframe...)

(empsalary
    # our `Window.partitionBy('depName)`
    .groupby(['depName'])
    # our `withColumn("avg", avg('salary) over byDepName)`
    .apply(lambda gdf: gdf.assign(avg=lambda df: df['salary'].mean()))
    .droplevel(0)
)

(Notice how much more succinct and nicer the Spark example is. The pandas equivalent looks a bit clunky. The pandas API doesn't make writing these kinds of "fluent" operations easy.)

This idiom in turn comes from SQL's Window Functions, of which the PostgreSQL documentation gives a very nice definition: (emphasis mine)

A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row — the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result.

And it gives a beautiful SQL one-liner example (ranking within groups):

SELECT depname, empno, salary, rank() OVER (PARTITION BY depname ORDER BY salary DESC) FROM empsalary;

  depname  | empno | salary | rank
-----------+-------+--------+------
 develop   |     8 |   6000 |    1
 develop   |    10 |   5200 |    2
 develop   |    11 |   5200 |    2
 develop   |     9 |   4500 |    4
 develop   |     7 |   4200 |    5
 personnel |     2 |   3900 |    1
 personnel |     5 |   3500 |    2
 sales     |     1 |   5000 |    1
 sales     |     4 |   4800 |    2
 sales     |     3 |   4800 |    2

Last thing: you might also be interested in pandas' pipe, which is similar to apply but works a bit differently and gives the internal operations a bigger scope to work on. See here for more.
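As a small sketch of the difference (the helper add_group_sum below is just illustrative, not a pandas API): pipe hands the whole dataframe to your function, rather than one group at a time, so it is a convenient way to slot a reusable step into the chained expressions above.

# a hypothetical helper that receives the *whole* dataframe via pipe
def add_group_sum(df, by, src, new):
    return df.assign(**{new: df.groupby(by)[src].transform('sum')})

result = (my_data_frame
    .pipe(add_group_sum, by='Date', src='Data3', new='Data4'))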