Select rows from a Pandas DataFrame with same values in one column but different value in the other column
You can try groupby() + filter + drop_duplicates():
>>> df.groupby('A').filter(lambda g: len(g) > 1).drop_duplicates(subset=['A', 'B'], keep="first")
     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
4  bar   four  6  12
5  bar  three  7  14
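A minimal runnable sketch of this approach; the sample DataFrame is an assumption, reconstructed from the printed output above:

```python
import pandas as pd

# Sample data reconstructed from the output above (an assumption)
df = pd.DataFrame({
    'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar'],
    'B': ['one', 'one', 'two', 'one', 'four', 'three'],
    'C': [0, 2, 4, 8, 6, 7],
    'D': [0, 4, 8, 4, 12, 14],
})

# Keep only groups where 'A' occurs more than once, then drop rows
# that repeat the same (A, B) pair, keeping the first occurrence.
result = (df.groupby('A')
            .filter(lambda g: len(g) > 1)
            .drop_duplicates(subset=['A', 'B'], keep='first'))
print(result)
```

The filter step is what removes the lone cat row; drop_duplicates alone would not.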
Or, if you only want to drop duplicates over the subset of columns A and B, you can use the following, but it will keep the cat row as well:
>>> df.drop_duplicates(subset=['A', 'B'], keep="first")
     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
3  cat    one  8   4
4  bar   four  6  12
5  bar  three  7  14
result = df.groupby('A').filter(lambda g: len(g) > 1).groupby(['A', 'B']).head(1)
print(result)
Output:

     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
4  bar   four  6  12
5  bar  three  7  14
The first group-by and filter removes the rows whose A value is not duplicated (i.e. cat); the second groups the remaining rows by A and B and takes the first element of each group.
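The two steps above can be sketched end to end; the sample frame is an assumption reconstructed from the printed output:

```python
import pandas as pd

# Sample data (an assumption, reconstructed from the output shown)
df = pd.DataFrame({
    'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar'],
    'B': ['one', 'one', 'two', 'one', 'four', 'three'],
    'C': [0, 2, 4, 8, 6, 7],
    'D': [0, 4, 8, 4, 12, 14],
})

# Step 1: drop groups whose 'A' value occurs only once ('cat').
multi = df.groupby('A').filter(lambda g: len(g) > 1)

# Step 2: within each (A, B) group, keep only the first row.
# head(1) preserves the original row order of the frame.
result = multi.groupby(['A', 'B']).head(1)
print(result)
```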
The current answers are correct and may be more sophisticated too. If you have complex criteria, the filter function will be very useful. If you are like me and want to keep things simple, I feel the following is a more beginner-friendly way:
>>> df = pd.DataFrame({
...     'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar', 'bar'],
...     'B': ['one', 'one', 'two', 'one', 'four', 'three', 'four'],
...     'C': [0, 2, 4, 8, 6, 7, 7],
...     'D': [0, 4, 8, 4, 12, 14, 14]}, index=[1, 2, 3, 4, 5, 6, 7])
>>> df = df.drop_duplicates(['A', 'B'], keep='last')
     A      B  C   D
2  foo    one  2   4
3  foo    two  4   8
4  cat    one  8   4
6  bar  three  7  14
7  bar   four  7  14
>>> df = df[df.duplicated(['A'], keep=False)]
     A      B  C   D
2  foo    one  2   4
3  foo    two  4   8
6  bar  three  7  14
7  bar   four  7  14
keep='last' is optional here.
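A runnable sketch of this two-step approach using the same frame as above, showing that keep='first' works just as well for the first step:

```python
import pandas as pd

# The sample frame from the answer above
df = pd.DataFrame({
    'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar', 'bar'],
    'B': ['one', 'one', 'two', 'one', 'four', 'three', 'four'],
    'C': [0, 2, 4, 8, 6, 7, 7],
    'D': [0, 4, 8, 4, 12, 14, 14]}, index=[1, 2, 3, 4, 5, 6, 7])

# keep='first' retains the other row of each duplicate pair instead
# of the last; either choice leaves one row per (A, B) combination.
deduped = df.drop_duplicates(['A', 'B'], keep='first')

# keep=False marks *every* occurrence of a duplicated 'A', so this
# mask keeps exactly the A groups that still have more than one
# distinct B, and drops the lone 'cat' row.
result = deduped[deduped.duplicated(['A'], keep=False)]
print(result)
```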