I have a dataframe with about 100 columns that looks like this:
   Id  Economics-1  English-107  English-2  History-3  Economics-zz  Economics-2  \
0  56            1            1          0          1             0            0
1  11            0            0          0          0             1            0
2   6            0            0          1          0             0            1
3  43            0            0          0          1             0            1
4  14            0            1          0          0             1            0

   Histo  Economics-51  Literature-re  Literatureu4
0      1             0              1             0
1      0             0              0             1
2      0             0              0             0
3      0             1              1             0
4      1             0              0             0
so my goal is to keep only the more global categories, just Economics, English, History and Literature, and write into this dataframe the sum of the values of their components, for instance for English: English-107 + English-2
   Id  Economics  English  History  Literature
0  56          1        1        2           1
1  11          1        0        0           1
2   6          0        1        1           0
3  43          2        0        1           1
4  14          0        1        1           0
so for this purpose I tried these two methods:
first method:
df=pd.read_csv(file_path, sep='\t')
df['History']=df.loc[df[df.columns[pd.Series(df.columns).str.startswith('History')]].sum(axes=1)]
second method:
df=pd.read_csv(file_path, sep='\t')
filter_col = [col for col in list(df) if col.startswith('History')]
df['History']=0 #initialize value, otherwise throws KeyError
for c in df[filter_col]:
    df['History']=df[filter_col].sum(axes=1)
    print df['History', df[filter_col]]
but both give me the error
TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed
Could you suggest how I can debug this error, or maybe another solution for my problem? Please note that I have a large dataframe with about 100 columns and 400,000 rows, so I'm looking for a really optimized solution, for example one based on loc in pandas.
I'd suggest that you do something different: perform a transpose, group by the prefix of the rows (your original columns), sum, and transpose again.
Consider the following:
import pandas as pd

df = pd.DataFrame({
    'a_a': [1, 2, 3, 4],
    'a_b': [2, 3, 4, 5],
    'b_a': [1, 2, 3, 4],
    'b_b': [2, 3, 4, 5],
})
Now
[s.split('_')[0] for s in df.T.index.values]
gives the prefixes of the columns. So
>>> df.T.groupby([s.split('_')[0] for s in df.T.index.values]).sum().T
    a   b
0   3   3
1   5   5
2   7   7
3   9   9
does what you want.
In your case, make sure to split using the '-' character.
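Applied to the question's columns, that might look like the following minimal sketch (assuming df holds the frame from the question; a column without a dash, such as Literatureu4, simply keeps its full name as its group key):
df = df.set_index('Id')                            # keep Id out of the summation
prefixes = [c.split('-')[0] for c in df.columns]   # part before the first '-'
result = df.T.groupby(prefixes).sum().T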
Using DSM's brilliant idea:
from __future__ import print_function
import pandas as pd
categories = set(['Economics', 'English', 'Histo', 'Literature'])
def correct_categories(cols):
    return [cat for col in cols for cat in categories if col.startswith(cat)]    
df = pd.read_csv('data.csv', sep=r'\s+', index_col='Id')
#print(df)
print(df.groupby(correct_categories(df.columns),axis=1).sum())
Output:
    Economics  English  Histo  Literature
Id
56          1        1      2           1
11          1        0      0           1
6           1        1      0           0
43          2        0      1           1
14          1        1      1           0
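Note that DataFrame.groupby(..., axis=1) is deprecated in newer pandas versions; under that assumption, the same result can be obtained by transposing first, as in the answer above:
rslt = df.T.groupby(correct_categories(df.columns)).sum().T  # group the transposed frame, then flip back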
Here is another version, which takes care of the "Histo"/"History" problem.
from __future__ import print_function
import pandas as pd
#categories = set(['Economics', 'English', 'Histo', 'Literature'])
#
# mapping: common starting pattern: desired name
#
categories = {
    'Histo': 'History',
    'Economics': 'Economics',
    'English': 'English',
    'Literature': 'Literature'
}
def correct_categories(cols):
    return [categories[cat] for col in cols for cat in categories.keys() if col.startswith(cat)]
df = pd.read_csv('data.csv', sep=r'\s+', index_col='Id')
#print(df.columns, len(df.columns))
#print(correct_categories(df.columns), len(correct_categories(df.columns)))
#print(df.groupby(pd.Index(correct_categories(df.columns)),axis=1).sum())
rslt = df.groupby(correct_categories(df.columns),axis=1).sum()
print(rslt)
print('History\n', rslt['History'])
Output:
    Economics  English  History  Literature
Id
56          1        1        2           1
11          1        0        0           1
6           1        1        0           0
43          2        0        1           1
14          1        1        1           0
History
 Id
56    2
11    0
6     0
43    1
14    1
Name: History, dtype: int64
P.S. You may want to add missing categories to the categories map/dictionary.
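For instance, here is a sketch (not part of the original answer) of a fallback that labels unmatched columns with their own name, so that correct_categories always returns exactly one label per column (with the version above, a column matching no pattern produces no label at all, and the groupby key list no longer lines up with the columns):
def correct_categories(cols):
    labels = []
    for col in cols:
        for pattern, name in categories.items():
            if col.startswith(pattern):
                labels.append(name)  # known prefix -> mapped category name
                break
        else:
            labels.append(col)       # unknown prefix -> keep the column as-is
    return labels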
Source: https://stackoverflow.com/questions/35746847/sum-values-of-columns-starting-with-the-same-string-in-pandas-dataframe