Lambda layer to perform if then in keras/tensorflow

Submitted on 2021-01-29 14:47:02

Question


I'm tearing my hair out with this one.

I asked a question over here: If then inside custom non-trainable keras layer, but I'm still having difficulties.

I tried the solution given there, but it didn't work, so I thought I'd post my complete code with that solution.

I have a custom Keras layer that I want to return specific output from specific inputs. I don't want it to be trainable.

The layer should do the following:

if input = [1,0] then output = 1
if input = [0,1] then output = 0

Here's the lambda layer code for doing this:

input_tensor = Input(shape=(n_hots,))


def custom_layer_1(tensor):
    if tensor == [1,0]:
        resp_1 = np.array([1,],dtype=np.int32)
        k_resp_1 = backend.variable(value=resp_1)
        return k_resp_1
    elif tensor == [0,1]:
        resp_0 = np.array([0,],dtype=np.int32)
        k_resp_0 = backend.variable(value=resp_0)
        return k_resp_0
    else:
        resp_e = np.array([-1,])
        k_resp_e = backend.variable(value=resp_e)
        return k_resp_e
    print(tensor.shape)

layer_one = keras.layers.Lambda(custom_layer_1,output_shape = (None,))(input_tensor)


_model = Model(inputs=input_tensor, outputs = layer_one)

When I fit my model it always outputs -1, regardless of the input.

This is what the model looks like:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 2)                 0         
_________________________________________________________________
lambda_1 (Lambda)            (None, None)              0         
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0

Here's the full code for the model:

import numpy as np
from keras.models import Model
from keras import layers
from keras import Input
from keras import backend
import keras
from keras import models
import tensorflow as tf


# Generate the datasets:
n_obs = 1000

n_hots = 2

obs_mat = np.zeros((n_obs,n_hots),dtype=np.int32)

resp_mat = np.zeros((n_obs,1),dtype=np.int32)

# which position in the array should be "hot" ?
hot_locs = np.random.randint(n_hots, size=n_obs)

# set the bits:
for row,loc in zip(np.arange(n_obs),hot_locs):
    obs_mat[row,loc] = 1

for idx in np.arange(n_obs):
    if( (obs_mat[idx,:]==[1,0]).all() == True ):
        resp_mat[idx] = 1
    if( (obs_mat[idx,:]==[0,1]).all() == True ):
        resp_mat[idx] = 0

# test data:
test_suite = np.identity(n_hots)

# Build the network
input_tensor = Input(shape=(n_hots,))


def custom_layer_1(tensor):
    if tensor == [1,0]:
        resp_1 = np.array([1,],dtype=np.int32)
        k_resp_1 = backend.variable(value=resp_1)
        return k_resp_1
    elif tensor == [0,1]:
        resp_0 = np.array([0,],dtype=np.int32)
        k_resp_0 = backend.variable(value=resp_0)
        return k_resp_0
    else:
        resp_e = np.array([-1,])
        k_resp_e = backend.variable(value=resp_e)
        return k_resp_e
    print(tensor.shape)

layer_one = keras.layers.Lambda(custom_layer_1,output_shape = (None,))(input_tensor)


_model = Model(inputs=input_tensor, outputs = layer_one)

# compile
_model.compile(optimizer="adam",loss='mse')

#train (even though there's nothing to train)
history_mdl = _model.fit(obs_mat,resp_mat,verbose=True,batch_size = 100,epochs = 10)

# test
_model.predict(test_suite)
# outputs: array([-1., -1.], dtype=float32)

test = np.array([1,0])
test = test.reshape(1,2)
_model.predict(test,verbose=True)
# outputs: -1

This seems like fairly simple stuff; why isn't it working? Thanks


Answer 1:


There are a few reasons:

  • You're comparing a 2D tensor (samples, hots) with a 1D tensor (hots).
  • You didn't consider the batch size in any of the results.
  • A plain Python if does not work here: TensorFlow is a tensor framework that builds a graph, so the branching has to be expressed with tensor operations (see the short check after this list).
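
For example, a quick check (a sketch, assuming the TF 1.x / standalone Keras setup from the question) shows why the else branch always runs: == on a symbolic tensor is an ordinary Python object comparison, evaluated once when the graph is built, and it is simply False:

from keras import Input

x = Input(shape=(2,))
print(x == [1, 0])   # False - compares the tensor object, not its values
print(x == [0, 1])   # False - so custom_layer_1 always falls through to -1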

So, the suggestion is:

from keras import backend as K

def custom_layer(tensor):
    #comparison tensors with compatible shape 2D: (dummy_batch, hots)
    t10 = K.reshape(K.constant([1,0]), (1,2))
    t01 = K.reshape(K.constant([0,1]), (1,2))

    #comparison results - elementwise - shape (batch_size, 2)
    is_t10 = K.equal(tensor, t10)
    is_t01 = K.equal(tensor, t01)

    #comparison results - per sample - shape (batch_size,)
    is_t10 = K.all(is_t10, axis=-1)
    is_t01 = K.all(is_t01, axis=-1)

    #result options
    zeros = K.zeros_like(is_t10, dtype='float32') #shape (batch_size,)
    ones = K.ones_like(is_t10, dtype='float32')   #shape (batch_size,)
    negatives = -ones                             #shape (batch_size,)

    #selecting options
    result_01_or_else = K.switch(is_t01, zeros, negatives)
    result = K.switch(is_t10, ones, result_01_or_else)

    return result
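
To wire this into a model, something like the following should work (a minimal sketch reusing the Input, Model, Lambda and test_suite names from the question):

input_tensor = Input(shape=(2,))
# with the TensorFlow backend, Lambda can infer the output shape from custom_layer
layer_one = keras.layers.Lambda(custom_layer)(input_tensor)
_model = Model(inputs=input_tensor, outputs=layer_one)

# nothing to train, so no compile/fit is needed before predicting
print(_model.predict(test_suite))
# expected: [1. 0.]   ([1,0] -> 1 and [0,1] -> 0)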

Warnings:

  • This layer is not differentiable (it returns constants), so you will not be able to train anything that comes before it; if you try, you will get an "An operation has None for gradient" error.
  • The input tensor cannot be the output of other layers, because you're requiring it to be exact ones or zeros (see the sketch after this list).
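
For instance, a hypothetical model (not part of the question's code) that puts a trainable Dense layer in front of this Lambda runs into both points, since a sigmoid output is never exactly 0 or 1 and the comparisons carry no gradient:

# hypothetical example - a trainable layer feeding the switch-based Lambda
dense_out = layers.Dense(2, activation='sigmoid')(input_tensor)
switched = keras.layers.Lambda(custom_layer)(dense_out)
bad_model = Model(inputs=input_tensor, outputs=switched)
bad_model.compile(optimizer='adam', loss='mse')

# sigmoid outputs never exactly match [1,0] or [0,1], and there is no
# gradient through K.equal/K.switch, so training fails:
# bad_model.fit(obs_mat, resp_mat)  # raises: An operation has `None` for gradient.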


Source: https://stackoverflow.com/questions/60566048/lambda-layer-to-perform-if-then-in-keras-tensorflow
