Find input that maximises output of a neural network using Keras and TensorFlow

Submitted by 点点圈 on 2020-08-04 05:33:21

Question


I have used Keras and TensorFlow to classify the Fashion MNIST dataset following this tutorial.

It uses AdamOptimizer to find the model parameter values that minimize the loss function of the network. The input to the network is a 2-D tensor with shape [28, 28], and the output is a 1-D tensor with shape [10], which is the result of a softmax function.

Once the network has been trained, I want to use the optimizer for another task: find an input that maximizes one of the elements of the output tensor. How can this be done? Is it possible to do so using Keras, or does one have to use a lower-level API?

Since the input is not unique for a given output, it would be even better if we could impose some constraints on the values the input can take.

The trained model has the following structure:

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
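
For context, the model is compiled and trained roughly as in the tutorial. The snippet below is only a sketch of that setup; train_images and train_labels are the tutorial's variable names and are assumed to hold the Fashion MNIST training data:

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)  # standard supervised training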

Answer 1:


I think you want to backpropagate with respect to the input while freezing all the weights of your model. What you could do is:

  1. Add a dense layer after the input layer with the same dimensions as the input and set it as trainable.
  2. Freeze all the other layers of your model (except the one you added).
  3. As an input, feed an identity matrix and train your model based on whatever output you desire.

This article and this post might help if you want to backpropagate based on the input instead. It's somewhat similar to what you are aiming for, so you can get the intuition from it.
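
Below is a minimal sketch of this idea (not code from the linked posts), assuming the trained `model` from the question is available. Instead of an identity matrix, it feeds a constant 1 so that the weights of the new dense layer play the role of the searched input; `class_index` and the layer sizes are illustrative:

import numpy as np
import tensorflow as tf
from tensorflow import keras

model.trainable = False  # freeze the trained classifier

# This dense layer's 784 weights act as the trainable "image": with an input of 1,
# its output equals its weight vector.
image_layer = keras.layers.Dense(28 * 28, use_bias=False, input_shape=(1,))

search_model = keras.Sequential([
    image_layer,
    keras.layers.Reshape((28, 28)),
    model,  # frozen classifier from the question
])

class_index = 0  # the output element we want to maximize
search_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
search_model.fit(np.ones((1, 1)), np.array([class_index]), epochs=200, verbose=0)

learned_image = image_layer.get_weights()[0].reshape(28, 28)

One way to constrain the values the input can take, as the question asks, would be to give that dense layer a sigmoid activation, so the effective image is squashed into (0, 1).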




Answer 2:


It would be very similar to the way the filters of a convolutional network are visualized: we do gradient ascent optimization in input space to maximize the response of a particular filter.

Here is how to do it: after training is finished, first we need to specify the output and define a loss function that we want to maximize:

from keras import backend as K

output_class = 0 # the index of the output class we want to maximize
output = model.layers[-1].output
loss = K.mean(output[:,output_class]) # get the average activation of our desired class over the batch

Next, we need to take the gradient of the loss we have defined above with respect to the input layer:

grads = K.gradients(loss, model.input)[0] # the output of `gradients` is a list, just take the first (and only) element

grads = K.l2_normalize(grads) # normalize the gradients to help make the optimization process smoother

Next, we need to define a backend function that takes the initial input image and returns the loss and gradient values as outputs, so that we can use it in the next step to implement the optimization process:

func = K.function([model.input], [loss, grads])

Finally, we implement the gradient ascent optimization process:

import numpy as np

input_img = np.random.random((1, 28, 28)) # define an initial random image

lr = 1.  # learning rate used for gradient updates
max_iter = 50  # number of gradient updates iterations
for i in range(max_iter):
    loss_val, grads_val = func([input_img])
    input_img += grads_val * lr  # update the image based on gradients

Note that after this process is finished, to display the image you may need to make sure that all its values are in the range [0, 255] (or [0, 1]).
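
For example, a minimal way to do that (assuming matplotlib is available) is to rescale the optimized image to [0, 1] before displaying it, since gradient ascent does not keep pixel values bounded:

import matplotlib.pyplot as plt

img = input_img[0]
img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # rescale to [0, 1]
plt.imshow(img, cmap='gray')
plt.show()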




Answer 3:


Following the hints Saket Kumar Singh gave in his answer, I wrote the following, which seems to solve the question.

I create two custom layers. Maybe Keras already offers some equivalent classes.

The first one is a trainable input layer:

import numpy as np
import tensorflow as tf
from tensorflow import keras

class MyInputLayer(keras.layers.Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyInputLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel',
                                      shape=self.output_dim,
                                      initializer='uniform',
                                      trainable=True)
        super(MyInputLayer, self).build(input_shape)

    def call(self, x):
        return self.kernel

    def compute_output_shape(self, input_shape):
        return self.output_dim

The second one gets the probability of the label of interest:

class MySelectionLayer(keras.layers.Layer):
    def __init__(self, position, **kwargs):
        self.position = position
        self.output_dim = 1
        super(MySelectionLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        super(MySelectionLayer, self).build(input_shape)

    def call(self, x):
        mask = np.array([False]*x.shape[-1])
        mask[self.position] = True
        return tf.boolean_mask(x, mask, axis=1)

    def compute_output_shape(self, input_shape):
        return self.output_dim

I used them in this way:

# Build the model
layer_flatten =  keras.layers.Flatten(input_shape=(28, 28))
layerDense1 = keras.layers.Dense(128, activation=tf.nn.relu)
layerDense2 = keras.layers.Dense(10, activation=tf.nn.softmax)
model = keras.Sequential([
    layer_flatten,
    layerDense1,
    layerDense2
])

# Compile the model
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
# ...

# Freeze the model
layerDense1.trainable = False
layerDense2.trainable = False

# Build another model
class_index = 7

layerInput = MyInputLayer((1, 784))
layerSelection = MySelectionLayer(class_index)

model_extended = keras.Sequential([
    layerInput,
    layerDense1,
    layerDense2,
    layerSelection
])

# Compile it
model_extended.compile(optimizer=tf.train.AdamOptimizer(),
              loss='mean_absolute_error')

# Train it
dummyInput = np.ones((1,1))
target = np.ones((1,1))
model_extended.fit(dummyInput, target, epochs=300)

# Retrieve the weights of layerInput
layerInput.get_weights()[0]
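
As a quick sanity check (not part of the original answer), one could reshape the optimized weights back to 28x28 and see what probability the frozen classifier assigns to them:

optimized_input = layerInput.get_weights()[0].reshape(1, 28, 28)
print(model.predict(optimized_input)[0][class_index])  # should be close to 1 if the optimization worked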



Answer 4:


Interesting. Maybe a solution would be to feed all your data to the network and, for each sample, save the output layer after the softmax.

This way, with 3 classes, if you want to find the best input for class 1, you are looking for outputs whose first component is high, for example [1 0 0].

Indeed, the output represents the probability, or the confidence of the network, that the sample belongs to each of the classes.
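
A minimal sketch of this idea (the variable name train_images is only illustrative and assumed to hold the training set): run the trained model over all samples and pick the one with the highest predicted probability for the class of interest.

import numpy as np

class_index = 1
probs = model.predict(train_images)                            # shape (num_samples, 10), softmax outputs
best_sample = train_images[np.argmax(probs[:, class_index])]   # sample the network is most confident about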




Answer 5:


Funny coincidence, I was just working on the same "problem"; I'm interested in the direction of adversarial training and the like. What I did was insert a LocallyConnected2D layer after the input and then train with data that is all ones and has the class of interest as its target.

As a model I use:

batch_size = 64
num_classes = 10
epochs = 20
input_shape = (28, 28, 1)


inp = tf.keras.layers.Input(shape=input_shape)
conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal')(inp)
pool1 = tf.keras.layers.MaxPool2D((2, 2))(conv1)
drop1 = tf.keras.layers.Dropout(0.20)(pool1)
flat  = tf.keras.layers.Flatten()(drop1)
fc1   = tf.keras.layers.Dense(128, activation='relu')(flat)
norm1 = tf.keras.layers.BatchNormalization()(fc1)
dropfc1 = tf.keras.layers.Dropout(0.25)(norm1)
out   = tf.keras.layers.Dense(num_classes, activation='softmax')(dropfc1)

model = tf.keras.models.Model(inputs = inp , outputs = out)

model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.RMSprop(),
              metrics=['accuracy'])
model.summary()

After training, I insert the new layer:

def insert_intermediate_layer_in_keras(model, new_layer, before_layer_id):
    # Rebuild the model, inserting `new_layer` right before the layer at index `before_layer_id`.
    layers = [l for l in model.layers]

    x = layers[0].output
    for i in range(1, len(layers)):
        if i == before_layer_id:
            x = new_layer(x)
        x = layers[i](x)

    new_model = tf.keras.models.Model(inputs=layers[0].input, outputs=x)
    return new_model

def fix_model(model):
    # Freeze all layers of the trained model
    for l in model.layers:
        l.trainable = False


fix_model(model)
new_layer = tf.keras.layers.LocallyConnected2D(1, kernel_size=(1, 1),
                                               activation='linear',
                                               kernel_initializer='he_normal',
                                               use_bias=False)
new_model = insert_intermediate_layer_in_keras(model, new_layer, 1)
new_model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.RMSprop(),
              metrics=['accuracy'])

Finally, I rerun training with my fake data:

import numpy as np
import matplotlib.pyplot as plt

X_fake = np.ones((60000, 28, 28, 1))
y_fake = np.ones((60000,))          # all targets are the class of interest (class 1 here)
Y_fake = tf.keras.utils.to_categorical(y_fake, num_classes)
new_model.fit(X_fake, Y_fake, epochs=100)
weights = new_layer.get_weights()[0]

plt.imshow(weights.reshape(28, 28))
plt.show()

Results are not yet satisfactory, but I'm confident in the approach and guess I need to play around with the optimiser.



Source: https://stackoverflow.com/questions/52678215/find-input-that-maximises-output-of-a-neural-network-using-keras-and-tensorflow
