Pandas: remove group from the data when a value in the group meets a required condition



Based on what you described in the question, a group should be dropped as long as at least one value within it is below 8. An equivalent statement is that a group should be dropped whenever its minimum value is below 8.
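To make that equivalence concrete, here is a small sketch using the sample data from the question:

```python
import pandas as pd

# Sample data from the question: group 1 contains a value (7) below 8
df = pd.DataFrame({'Groups': [1, 1, 1, 2, 2, 2],
                   'Count': [7, 11, 9, 12, 15, 21]})

# "at least one value below 8" in a group is the same as
# "the group's minimum is below 8"
per_group_min = df.groupby('Groups')['Count'].min()
print(per_group_min < 8)   # group 1 -> True (drop), group 2 -> False (keep)
```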

Using groupby filtration (see the Filtration section of the pandas docs), the core logic reduces to a single line:

```python
dfnew = df.groupby('Groups').filter(lambda x: x['Count'].min() >= 8)
dfnew.reset_index(drop=True, inplace=True)   # reset index
dfnew = dfnew[['Groups', 'Count']]           # rearrange the column sequence
print(dfnew)
```

Output:

```
   Groups  Count
0       2     12
1       2     15
2       2     21
```


You can use isin with loc and unique, selecting the subset with an inverted mask. Finally, reset_index:

```
print(df)
   Groups  Count
0       1      7
1       1     11
2       1      9
3       2     12
4       2     15
5       2     21

print(df.loc[df['Count'] < 8, 'Groups'].unique())
[1]

print(~df['Groups'].isin(df.loc[df['Count'] < 8, 'Groups'].unique()))
0    False
1    False
2    False
3     True
4     True
5     True
Name: Groups, dtype: bool

df1 = df[~df['Groups'].isin(df.loc[df['Count'] < 8, 'Groups'].unique())]
print(df1.reset_index(drop=True))
   Groups  Count
0       2     12
1       2     15
2       2     21
```


Create a Boolean Series with your condition, then use groupby + transform('any') to build a mask aligned with the original DataFrame. This lets you slice the original DataFrame directly.

```python
df[~df.Count.lt(8).groupby(df.Groups).transform('any')]
#   Groups  Count
#3       2     12
#4       2     15
#5       2     21
```

While the syntax of groupby + filter is more straightforward, it performs much worse for a large number of groups, so creating the Boolean mask with transform is preferred. In this example there is over a 1000x improvement. The isin method is extremely fast for a single grouping column, but would require switching to a merge when grouping on multiple columns.

```
import pandas as pd
import numpy as np

np.random.seed(123)
N = 50000
df = pd.DataFrame({'Groups': [*range(N//2)]*2,
                   'Count': np.random.randint(0, 1000, N)})

# Double check both are equivalent
(df.groupby('Groups').filter(lambda x: x['Count'].min() >= 8)
  == df[~df.Count.lt(8).groupby(df.Groups).transform('any')]).all().all()
#True

%timeit df.groupby('Groups').filter(lambda x: x['Count'].min() >= 8)
#8.15 s ± 80.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit df[~df.Count.lt(8).groupby(df.Groups).transform('any')]
#6.54 ms ± 143 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit df[~df['Groups'].isin(df.loc[df['Count'] < 8, 'Groups'].unique())]
#2.88 ms ± 24 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
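The merge alternative for multiple grouping columns can be sketched as follows; the two grouping columns ('A', 'B') and the data here are hypothetical, not from the question:

```python
import pandas as pd

# Hypothetical frame with two grouping columns
df = pd.DataFrame({'A': [1, 1, 2, 2],
                   'B': ['x', 'x', 'y', 'y'],
                   'Count': [7, 11, 12, 15]})

# Unique (A, B) pairs that contain an offending value below 8
bad = df.loc[df['Count'] < 8, ['A', 'B']].drop_duplicates()

# Left merge with an indicator, then keep only rows that did not
# match any offending pair
out = (df.merge(bad, on=['A', 'B'], how='left', indicator=True)
         .query("_merge == 'left_only'")
         .drop(columns='_merge'))
print(out)   # only the (2, 'y') group survives
```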