Question
I would like to do the following:
If two rows have exactly the same value in 3 columns ("ID","symbol", and "date") and have either "X" or "T" in one column ("message"), then remove both of these rows. However, if two rows have the same value in the same 3 columns but a value different than "X" or "T" in the other column, then leave intact.
Here is an example of my data frame:
df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "BB-2", "BB-2"],
                   "symbol": ["A", "A", "C", "B", "B"],
                   "date": ["06/24/2014", "06/24/2014", "06/20/2013",
                            "06/25/2014", "06/25/2015"],
                   "message": ["T", "X", "T", "", ""]})
Note that the first two rows have the same values in the columns "ID", "symbol", and "date", and "T" and "X" in the column "message". I would like to remove these two rows.
However, the last two rows have the same values in the columns "ID", "symbol", and "date", but a blank (neither "X" nor "T") in the column "message".
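To make the desired result concrete, here is a sketch of the input together with the output I would expect (the surviving row indices are my reading of the rule above, not something produced by working code yet):

```python
import pandas as pd

df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "BB-2", "BB-2"],
                   "symbol": ["A", "A", "C", "B", "B"],
                   "date": ["06/24/2014", "06/24/2014", "06/20/2013",
                            "06/25/2014", "06/25/2015"],
                   "message": ["T", "X", "T", "", ""]})

# Rows 0 and 1 share ID/symbol/date and carry "T" and "X", so both go.
# Row 2 has "T" but no duplicate; rows 3 and 4 have blank messages
# (and, in this sample, even different dates), so all three stay.
expected = df.loc[[2, 3, 4]]
print(expected)
```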
I need to apply this to a large dataset with several million rows, and everything I have tried so far consumes all my memory.
Thank you, I appreciate any help.
Answer 1:
This might work for you:
vals = ['X', 'T']
mask = df.message.isin(vals)
dup = df.duplicated(subset=['ID', 'date', 'symbol'], keep=False)
pd.concat([df[~mask], df[mask & ~dup]])
ID date message symbol
3 BB-2 06/25/2014 B
4 BB-2 06/25/2015 B
2 C-0 06/20/2013 T C
It's reasonably fast:
%%timeit
pd.concat([df[~df.message.isin(['X', 'T'])], df[df.message.isin(['X', 'T'])].loc[~df.duplicated(subset=['ID', 'date', 'symbol'], keep=False), :]])
100 loops, best of 3: 1.99 ms per loop
%%timeit
df.groupby(['ID','date','symbol']).filter(lambda x: ~x.message.isin(['T','X']).all())
100 loops, best of 3: 2.71 ms per loop
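On millions of rows, a fully vectorized formulation with transform may scale better than either expression above, since it avoids calling a Python lambda once per group. This is a sketch along the same lines, not benchmarked here:

```python
import pandas as pd

df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "BB-2", "BB-2"],
                   "symbol": ["A", "A", "C", "B", "B"],
                   "date": ["06/24/2014", "06/24/2014", "06/20/2013",
                            "06/25/2014", "06/25/2015"],
                   "message": ["T", "X", "T", "", ""]})

keys = ['ID', 'date', 'symbol']
# True for rows whose entire group has only X/T messages
all_xt = (df.message.isin(['X', 'T'])
            .groupby([df[k] for k in keys])
            .transform('all'))
# True for rows that share the three key columns with another row
dup = df.duplicated(subset=keys, keep=False)
# Drop rows that are duplicated AND sit in an all-X/T group
result = df[~(all_xt & dup)]
print(result)
```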
The alternative was giving indexing errors.
Answer 2:
I think you can use groupby with filter. A group is removed when it has exactly 2 rows with duplicate values and every value in its message column isin ['T', 'X']; all other groups are kept:
import pandas as pd
df = pd.DataFrame({"ID":["AA-1", "AA-1", "C-0" ,"BB-2", "BB-2"],
"symbol":["A","A","C","B","B"],
"date":["06/24/2014","06/24/2014","06/20/2013","06/25/2015","06/25/2015"],
"message": ["T","X","T","",""] })
print (df)
ID date message symbol
0 AA-1 06/24/2014 T A
1 AA-1 06/24/2014 X A
2 C-0 06/20/2013 T C
3 BB-2 06/25/2015 B
4 BB-2 06/25/2015 B
df1 = df.groupby(['ID','date','symbol']).filter(lambda x: ~((len(x) == 2) &
(x.message.isin(['T','X']).all())))
print (df1)
ID date message symbol
2 C-0 06/20/2013 T C
3 BB-2 06/25/2015 B
4 BB-2 06/25/2015 B
See Filtration in the pandas docs.
EDIT by comment:
import pandas as pd
df = pd.DataFrame({"ID":["AA-1", "AA-1", "C-0", "C-0","BB-2", "BB-2"],
"symbol":["A","A","C","C", "B","B"],
"date":["06/24/2014","06/24/2014","06/20/2013","06/20/2013","06/25/2015","06/25/2015"],
"message": ["T","X","X","X","",""] })
print (df)
ID date message symbol
0 AA-1 06/24/2014 T A
1 AA-1 06/24/2014 X A
2 C-0 06/20/2013 X C
3 C-0 06/20/2013 X C
4 BB-2 06/25/2015 B
5 BB-2 06/25/2015 B
If you need to remove every group whose message values are all X or T - note this also removes a double X or a double T pair, and the len of each group here is always 2:
df1 = df.groupby(['ID','date','symbol']).filter(lambda x: ~x.message.isin(['T','X']).all())
print (df1)
ID date message symbol
4 BB-2 06/25/2015 B
5 BB-2 06/25/2015 B
If you need to remove only the groups that contain both T and X, you can first sort_values by message and then filter by checking whether the first value in each group is 'T' and the second is 'X' ('T' comes first and 'X' second because of the sorting):
df2 = (df.sort_values('message')
         .groupby(['ID', 'date', 'symbol'], sort=False)
         .filter(lambda x: (x.message.iloc[0] != 'T') | (x.message.iloc[1] != 'X')))
print (df2)
ID date message symbol
4 BB-2 06/25/2015 B
5 BB-2 06/25/2015 B
2 C-0 06/20/2013 X C
3 C-0 06/20/2013 X C
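Note that x.message.iloc[1] assumes every group has at least two rows; on data with singleton groups it would raise an IndexError. A length-guarded variant (a sketch) gives the same result on the frame above:

```python
import pandas as pd

df = pd.DataFrame({"ID": ["AA-1", "AA-1", "C-0", "C-0", "BB-2", "BB-2"],
                   "symbol": ["A", "A", "C", "C", "B", "B"],
                   "date": ["06/24/2014", "06/24/2014", "06/20/2013",
                            "06/20/2013", "06/25/2015", "06/25/2015"],
                   "message": ["T", "X", "X", "X", "", ""]})

# Drop a group only when it is exactly the sorted pair (T, X); groups of
# any other size or content pass through untouched, so 1-row groups are safe.
df2 = (df.sort_values('message')
         .groupby(['ID', 'date', 'symbol'], sort=False)
         .filter(lambda x: not (len(x) == 2
                                and x.message.iloc[0] == 'T'
                                and x.message.iloc[1] == 'X')))
print(df2)
```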
Source: https://stackoverflow.com/questions/37777662/removing-duplicated-rows-but-keep-the-ones-with-a-particular-value-in-one-column