How to get the batch size inside a lambda layer

Posted by 二次信任 on 2021-02-20 09:01:21

Question


I'm trying to implement a layer (via lambda layer) which is doing the following numpy procedure:

def func(x, n):
    return np.concatenate((x[:, :n], np.tile(x[:, n:].mean(axis = 0), (x.shape[0], 1))), axis = 1)

I'm stuck because I don't know how to get the size of the first dimension of x (the batch size). The backend function K.int_shape(x) returns (None, ...).
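For reference, here is what the numpy version computes on a tiny batch (a quick sanity check, pure numpy): the first n columns pass through unchanged, and the remaining columns are replaced by their column-wise mean over the batch.

```python
import numpy as np

def func(x, n):
    # keep the first n columns; replace the rest with their
    # column-wise mean over the batch, repeated for every row
    return np.concatenate(
        (x[:, :n], np.tile(x[:, n:].mean(axis=0), (x.shape[0], 1))),
        axis=1)

x = np.arange(8, dtype=float).reshape(2, 4)  # batch of 2, 4 features
out = func(x, 2)
# out: [[0, 1, 4, 5], [4, 5, 4, 5]] -- last two columns are the batch mean
```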

So, if I know the batch_size, the corresponding Keras procedure would be:

def func(x, n):
    return K.concatenate([x[:, :n], K.tile(K.mean(x[:, n:], axis=0), [batch_size, 1])], axis = 1)

Answer 1:


As @pitfall says, the second argument of K.tile should be a tensor. According to the Keras backend docs, K.shape returns a tensor while K.int_shape returns a tuple of int or None entries, so the correct way is to use K.shape. Here is an MWE:

import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
import numpy as np

batch_size = 8
op_len = ip_len = 10

def func(X):
    return K.tile(K.mean(X, axis=0, keepdims=True), (K.shape(X)[0], 1))

ip = Input((ip_len,))
lbd = Lambda(func)(ip)

model = Model(ip, lbd)
model.summary()

model.compile('adam', loss='mse')

X = np.random.randn(batch_size*100, ip_len)
Y = np.random.randn(batch_size*100, op_len)
#no parameters to train!
#model.fit(X,Y,batch_size=batch_size)

#prediction
np_result = np.tile(np.mean(X[:batch_size], axis=0, keepdims=True), 
                    (batch_size,1))
pred_result = model.predict(X[:batch_size])
print(np.allclose(np_result, pred_result))



Answer 2:


You should not use K.int_shape; use something like tf.shape instead, which gives you the dynamic shape.
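To illustrate (a minimal sketch assuming TensorFlow is installed): tf.shape(x)[0] is itself a tensor that resolves to the actual batch size at run time, so it can be fed to tf.tile even when the static batch dimension is None.

```python
import tensorflow as tf

def dynamic_tile(x):
    # tf.shape(x) is evaluated at run time, so tf.shape(x)[0] is a
    # concrete batch size even when the static shape is (None, features)
    return tf.tile(tf.reduce_mean(x, axis=0, keepdims=True),
                   (tf.shape(x)[0], 1))

out = dynamic_tile(tf.ones((4, 10)))  # mean of ones tiled back to (4, 10)
```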


Update

Here is a solution that avoids tile:

import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model

# define the lambda layer
n = 5
MyConcat = Lambda(lambda x: K.concatenate([x[:,:n], 
                                           K.ones_like(x[:,n:]) * K.mean(x[:,n:], axis=0)],
                                          axis=1))

# make a dummy testing model
x = Input(shape=(10,))
y = MyConcat(x)
mm = Model(inputs=x, outputs=y)

# test code
a = np.arange(40).reshape([-1,10])
print(a)

[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]
[20 21 22 23 24 25 26 27 28 29]
[30 31 32 33 34 35 36 37 38 39]]

b = mm.predict(a)
print(b)

[[ 0. 1. 2. 3. 4. 20. 21. 22. 23. 24.]
[10. 11. 12. 13. 14. 20. 21. 22. 23. 24.]
[20. 21. 22. 23. 24. 20. 21. 22. 23. 24.]
[30. 31. 32. 33. 34. 20. 21. 22. 23. 24.]]


One last thing worth mentioning: in Keras you are not allowed to change the batch size within a layer, i.e. the output and input batch size of a layer MUST be the same.




Answer 3:


Create a functor and give it the batch size at initialization.

class SuperLoss:
    def __init__(self,batch_size):
        self.batch_size = batch_size
    def __call__(self,y_true,y_pred):
        self.batch_size ....
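A sketch of that functor pattern with a concrete (hypothetical) loss body, written here in plain numpy so the idea stands on its own; inside a Keras model you would use backend ops instead:

```python
import numpy as np

class SuperLoss:
    """Loss functor that closes over a fixed, known batch size."""
    def __init__(self, batch_size):
        self.batch_size = batch_size

    def __call__(self, y_true, y_pred):
        # hypothetical body: sum of squared errors scaled by the
        # batch size stored at construction time
        return np.sum((y_true - y_pred) ** 2) / self.batch_size

loss_fn = SuperLoss(batch_size=4)
y_true = np.ones((4, 3))
y_pred = np.zeros((4, 3))
val = loss_fn(y_true, y_pred)  # 12 squared errors of 1.0, divided by 4
```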


Source: https://stackoverflow.com/questions/42841096/how-to-get-the-batch-size-inside-lambda-layer
