Apply pandas function to column to create multiple new columns?

夙愿已清 submitted on 2019-11-26 13:56:34
Zelazny7

Building off of user1827356's answer, you can do the assignment in one pass using df.merge:

df.merge(df.textcol.apply(lambda s: pd.Series({'feature1':s+1, 'feature2':s-1})), 
    left_index=True, right_index=True)

    textcol  feature1  feature2
0  0.772692  1.772692 -0.227308
1  0.857210  1.857210 -0.142790
2  0.065639  1.065639 -0.934361
3  0.819160  1.819160 -0.180840
4  0.088212  1.088212 -0.911788
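
Equivalently, df.join (which aligns on the index by default) attaches the new columns in one call; a small sketch of the same idea:

# join() merges on the index by default, so this matches the
# merge(..., left_index=True, right_index=True) call above
df.join(df.textcol.apply(lambda s: pd.Series({'feature1': s + 1, 'feature2': s - 1})))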

EDIT: Be aware of this approach's huge memory consumption and low speed: https://ys-l.github.io/posts/2015/08/28/how-not-to-use-pandas-apply/

I usually do this using zip:

>>> df = pd.DataFrame([[i] for i in range(10)], columns=['num'])
>>> df
    num
0    0
1    1
2    2
3    3
4    4
5    5
6    6
7    7
8    8
9    9

>>> def powers(x):
...     return x, x**2, x**3, x**4, x**5, x**6

>>> df['p1'], df['p2'], df['p3'], df['p4'], df['p5'], df['p6'] = \
...     zip(*df['num'].map(powers))

>>> df
        num     p1      p2      p3      p4      p5      p6
0       0       0       0       0       0       0       0
1       1       1       1       1       1       1       1
2       2       2       4       8       16      32      64
3       3       3       9       27      81      243     729
4       4       4       16      64      256     1024    4096
5       5       5       25      125     625     3125    15625
6       6       6       36      216     1296    7776    46656
7       7       7       49      343     2401    16807   117649
8       8       8       64      512     4096    32768   262144
9       9       9       81      729     6561    59049   531441

This is what I've done in the past

df = pd.DataFrame({'textcol' : np.random.rand(5)})

df
    textcol
0  0.626524
1  0.119967
2  0.803650
3  0.100880
4  0.017859

df.textcol.apply(lambda s: pd.Series({'feature1':s+1, 'feature2':s-1}))
   feature1  feature2
0  1.626524 -0.373476
1  1.119967 -0.880033
2  1.803650 -0.196350
3  1.100880 -0.899120
4  1.017859 -0.982141

Editing for completeness

pd.concat([df, df.textcol.apply(lambda s: pd.Series({'feature1':s+1, 'feature2':s-1}))], axis=1)
    textcol feature1  feature2
0  0.626524 1.626524 -0.373476
1  0.119967 1.119967 -0.880033
2  0.803650 1.803650 -0.196350
3  0.100880 1.100880 -0.899120
4  0.017859 1.017859 -0.982141
Michael David Watson

This is the correct and easiest way to accomplish this for 95% of use cases:

>>> df = pd.DataFrame(zip(*[range(10)]), columns=['num'])
>>> df
   num
0    0
1    1
2    2
3    3
4    4
5    5
6    6
7    7
8    8
9    9

>>> def example(x):
...     x['p1'] = x['num']**2
...     x['p2'] = x['num']**3
...     x['p3'] = x['num']**4
...     return x

>>> df = df.apply(example, axis=1)
>>> df
   num  p1   p2    p3
0    0   0    0     0
1    1   1    1     1
2    2   4    8    16
3    3   9   27    81
4    4  16   64   256
5    5  25  125   625
6    6  36  216  1296
7    7  49  343  2401
8    8  64  512  4096
9    9  81  729  6561

Summary: If you only want to create a few columns, use df[['new_col1','new_col2']] = df[['data1','data2']].apply(function_of_your_choosing, axis=1)

For this solution, the number of new columns you create must equal the number of columns you use as input to the .apply() function. If you want to do something else, have a look at the other answers.

Details: Let's say you have a two-column dataframe. The first column is a person's height at age 10; the second is the same person's height at age 20.

Suppose you need to calculate both the mean and the sum of each person's two heights. That's two values per row.

You could do this via the following, soon-to-be-applied function:

def mean_and_sum(x):
    """
    Calculate the mean and sum of two heights.
    Parameters:
    :x -- the values in the row this function is applied to. Could also work on a list or a tuple.
    """

    total = x[0] + x[1]
    mean = total / 2
    return [mean, total]

You might use this function like so:

 df[['height_at_age_10','height_at_age_20']].apply(mean_and_sum, axis=1)

(To be clear: this apply function takes in the values from each row in the subsetted dataframe and returns a list.)

However, if you do this:

df['Mean_&_Sum'] = df[['height_at_age_10','height_at_age_20']].apply(mean_and_sum, axis=1)

you'll create one new column that contains the [mean, sum] lists, which you'd presumably want to avoid, because splitting it later would require another lambda/apply.

Instead, you want to break out each value into its own column. To do this, you can create two columns at once:

df[['Mean','Sum']] = df[['height_at_age_10','height_at_age_20']].apply(mean_and_sum, axis=1)
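
Depending on your pandas version, the returned list may not be split into two columns automatically; passing result_type='expand' makes that explicit. A minimal, self-contained sketch (the heights below are made up for illustration):

import pandas as pd

df = pd.DataFrame({'height_at_age_10': [140.0, 135.5, 142.3],
                   'height_at_age_20': [180.0, 172.5, 176.1]})

def mean_and_sum(x):
    # x is one row; access the two heights positionally
    total = x.iloc[0] + x.iloc[1]
    return [total / 2, total]

# 'expand' splits the returned [mean, total] list into two columns
df[['Mean', 'Sum']] = df[['height_at_age_10', 'height_at_age_20']].apply(
    mean_and_sum, axis=1, result_type='expand')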

In 2018, I use apply() with the argument result_type='expand':

>>> applied_df = df.apply(lambda row: fn(row.text), axis='columns', result_type='expand')
>>> df = pd.concat([df, applied_df], axis='columns')
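
A self-contained sketch of the same pattern (fn and the text column above are placeholders; here a hypothetical colon-splitting function stands in for them):

import pandas as pd

df = pd.DataFrame({'text': ['a:1', 'b:2', 'c:3']})

def fn(text):
    # returning a tuple (or list) works because result_type='expand'
    # turns each element into its own column
    key, value = text.split(':')
    return key, int(value)

applied_df = df.apply(lambda row: fn(row.text), axis='columns', result_type='expand')
applied_df.columns = ['key', 'value']  # name the expanded columns
df = pd.concat([df, applied_df], axis='columns')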

I've looked at several ways of doing this, and the method shown here (returning a pandas Series) doesn't seem to be the most efficient.

If we start with a largeish dataframe of random data:

# Set up a dataframe of random numbers and create a column of colon-joined strings
df = pd.DataFrame(np.random.randn(10000,3),columns=list('ABC'))
df['D'] = df.apply(lambda r: ':'.join(map(str, (r.A, r.B, r.C))), axis=1)
columns = 'new_a', 'new_b', 'new_c'

The example shown here:

# Create the dataframe by returning a series
def method_b(v):
    return pd.Series({k: val for k, val in zip(columns, v.split(':'))})
%timeit -n10 -r3 df.D.apply(method_b)

10 loops, best of 3: 2.77 s per loop

An alternative method:

# Create a dataframe from a series of tuples
def method_a(v):
    return v.split(':')
%timeit -n10 -r3 pd.DataFrame(df.D.apply(method_a).tolist(), columns=columns)

10 loops, best of 3: 8.85 ms per loop

By my reckoning it's far more efficient to take a series of tuples and then convert that to a DataFrame. I'd be interested to hear people's thoughts, though, if there's an error in my working.
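
One detail the timing leaves out: the faster variant builds a separate DataFrame, so the new columns still have to be attached back to df. A sketch of that step, reusing method_a and columns from above:

# build the new columns with df's index so the rows line up, then join them back on
new_cols = pd.DataFrame(df.D.apply(method_a).tolist(), columns=columns, index=df.index)
df = df.join(new_cols)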

For me this worked:

Input df

df = pd.DataFrame({'col x': [1,2,3]})
   col x
0      1
1      2
2      3

Function

def f(x):
    return pd.Series([x*x, x*x*x])

Create 2 new columns:

df[['square x', 'cube x']] = df['col x'].apply(f)

Output:

   col x  square x  cube x
0      1         1       1
1      2         4       8
2      3         9      27

The accepted solution is going to be extremely slow for lots of data. The solution with the greatest number of upvotes is a little difficult to read and also slow with numeric data. If each new column can be calculated independently of the others, I would just assign each of them directly without using apply.

Example with fake character data

Create 100,000 strings in a DataFrame

df = pd.DataFrame(np.random.choice(['he jumped', 'she ran', 'they hiked'],
                                   size=100000, replace=True),
                  columns=['words'])
df.head()
        words
0     she ran
1     she ran
2  they hiked
3  they hiked
4  they hiked

Let's say we wanted to extract some text features as done in the original question. For instance, let's extract the first character, count the occurrences of the letter 'e' and capitalize the phrase.

df['first'] = df['words'].str[0]
df['count_e'] = df['words'].str.count('e')
df['cap'] = df['words'].str.capitalize()
df.head()
        words first  count_e         cap
0     she ran     s        1     She ran
1     she ran     s        1     She ran
2  they hiked     t        2  They hiked
3  they hiked     t        2  They hiked
4  they hiked     t        2  They hiked

Timings

%%timeit
df['first'] = df['words'].str[0]
df['count_e'] = df['words'].str.count('e')
df['cap'] = df['words'].str.capitalize()
127 ms ± 585 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

def extract_text_features(x):
    return x[0], x.count('e'), x.capitalize()

%timeit df['first'], df['count_e'], df['cap'] = zip(*df['words'].apply(extract_text_features))
101 ms ± 2.96 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Surprisingly, you can get better performance by looping through each value:

%%timeit
a,b,c = [], [], []
for s in df['words']:
    a.append(s[0]), b.append(s.count('e')), c.append(s.capitalize())

df['first'] = a
df['count_e'] = b
df['cap'] = c
79.1 ms ± 294 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Another example with fake numeric data

Create 1 million random numbers and test the powers function from above.

df = pd.DataFrame(np.random.rand(1000000), columns=['num'])


def powers(x):
    return x, x**2, x**3, x**4, x**5, x**6

%%timeit
df['p1'], df['p2'], df['p3'], df['p4'], df['p5'], df['p6'] = \
       zip(*df['num'].map(powers))
1.35 s ± 83.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Assigning each column is 25x faster and very readable:

%%timeit 
df['p1'] = df['num'] ** 1
df['p2'] = df['num'] ** 2
df['p3'] = df['num'] ** 3
df['p4'] = df['num'] ** 4
df['p5'] = df['num'] ** 5
df['p6'] = df['num'] ** 6
51.6 ms ± 1.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

I made a similar response with more details here on why apply is typically not the way to go.

I've posted the same answer in two other similar questions. The way I prefer to do this is to wrap the function's return values in a Series:

def f(x):
    return pd.Series([x**2, x**3])

And then use apply as follows to create separate columns:

df[['x**2','x**3']] = df.apply(lambda row: f(row['x']), axis=1)

Just use result_type="expand":

df = pd.DataFrame(np.random.randint(0,10,(10,2)), columns=["random", "a"])
df[["sq_a","cube_a"]] = df.apply(lambda x: [x.a**2, x.a**3], axis=1, result_type="expand")

You can return the entire row instead of individual values:

df = df.apply(extract_text_features, axis=1)

where the function modifies the row and returns it:

def extract_text_features(row):
    # value1 and value2 stand for whatever you compute from the row
    row['new_col1'] = value1
    row['new_col2'] = value2
    return row
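
A minimal runnable version of this pattern (the words column and the derived features below are made up for illustration):

import pandas as pd

df = pd.DataFrame({'words': ['she ran', 'they hiked']})

def extract_text_features(row):
    # derive the new values from the existing row, then return the whole row
    row['new_col1'] = row['words'][0]          # first character
    row['new_col2'] = row['words'].count('e')  # number of 'e's
    return row

df = df.apply(extract_text_features, axis=1)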