Visualizing ConvNet filters using my own fine-tuned network resulting in a “NoneType” when running: K.gradients(loss, model.input)[0]


Question


I have a fine-tuned network that I created which uses vgg16 as its base. I am following section 5.4.2, "Visualizing ConvNet Filters," in Deep Learning with Python (which is very similar to the guide on the Keras blog to visualizing convnet filters here).

The guide simply uses the vgg16 network. My fine-tuned model uses the vgg16 model as its base, for example:

model.summary()

Layer (type)                 Output Shape              Param #
=================================================================
vgg16 (Model)                (None, 4, 4, 512)         14714688
_________________________________________________________________
flatten_1 (Flatten)          (None, 8192)              0
_________________________________________________________________
dense_7 (Dense)              (None, 256)               2097408
_________________________________________________________________
dense_8 (Dense)              (None, 3)                 771
=================================================================
Total params: 16,812,867
Trainable params: 16,812,867
Non-trainable params: 0
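
For context, a network with this summary could have been assembled roughly as follows. This is only a sketch under assumptions: the question does not show the construction code, and the 150x150x3 input shape is inferred from the 4x4x512 VGG16 output.

from keras import models, layers
from keras.applications import VGG16

# Assumed reconstruction of the fine-tuned network from the summary above:
# a VGG16 convolutional base followed by a small dense classifier.
conv_base = VGG16(weights='imagenet', include_top=False,
                  input_shape=(150, 150, 3))  # 150x150 input yields a 4x4x512 feature map

model = models.Sequential()
model.add(conv_base)                           # appears as the 'vgg16 (Model)' layer
model.add(layers.Flatten())                    # 4*4*512 = 8192 features
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))
model.summary()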

I'm running into an issue with this line: grads = K.gradients(loss, model.input)[0]. When I use my fine-tuned network, the result is a NoneType.

Here is the code from the guide:

from keras.applications import VGG16
from keras import backend as K

model = VGG16(weights='imagenet',
              include_top=False)

layer_name = 'block3_conv1'
filter_index = 0

layer_output = model.get_layer(layer_name).output
loss = K.mean(layer_output[:, :, :, filter_index])

grads = K.gradients(loss, model.input)[0]

To reproduce this on my fine-tuned model, I used exactly the same code, except that I changed the model being loaded:

model = keras.models.load_model(trained_models_dir + 'fine_tuned_model.h5')

...and I also had to index into the nested Model object (my first layer is a Model object, as shown above) to get the 'block3_conv1' layer:

my_Model_object = 'vgg16'
layer_name = 'block3_conv1'
filter_index = 0

layer_output = model.get_layer(my_Model_object).get_layer(layer_name).output
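
One way to compare the outer model's input with the nested sub-model's input is simply to print both tensors. A small debugging sketch ('vgg16' is the layer name shown in the summary above):

# Compare the two candidate input tensors of the fine-tuned network.
inner_model = model.get_layer('vgg16')

print(type(model.input), model.input)              # input of the outer, fine-tuned model
print(type(inner_model.input), inner_model.input)  # input of the nested VGG16 sub-model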

Any idea why running grads = K.gradients(loss, model.input)[0] on my fine-tuned network results in a NoneType?

Thanks.


Answer 1:


SOLVED: I had to use:

grads = K.gradients(loss, model.get_layer(my_Model_object).get_layer('input_1').input)[0] 

instead of just:

grads = K.gradients(loss, model.input)[0]

which is confusing, because both

model.get_layer(my_Model_object).get_layer('input_1').input[0]

and

model.input[0]

print the same thing and are of the same type.
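
For completeness, here is a minimal sketch of the filter-visualization setup that works with the nested model. The layer names come from the summary above; the 150x150x3 input shape and the path in trained_models_dir are assumptions, not taken from the question.

import numpy as np
import keras
from keras import backend as K

trained_models_dir = './trained_models/'   # hypothetical path; substitute your own
model = keras.models.load_model(trained_models_dir + 'fine_tuned_model.h5')

inner = model.get_layer('vgg16')                     # nested VGG16 sub-model
layer_output = inner.get_layer('block3_conv1').output
loss = K.mean(layer_output[:, :, :, 0])              # mean activation of filter 0

# Differentiate with respect to the *inner* model's input tensor --
# the tensor that 'block3_conv1' is actually connected to in the graph.
input_tensor = inner.get_layer('input_1').input
grads = K.gradients(loss, input_tensor)[0]
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)    # gradient normalization, as in the guide

iterate = K.function([input_tensor], [loss, grads])

# Assumed 150x150x3 input, matching the 4x4x512 feature map in the summary.
input_img = np.random.random((1, 150, 150, 3)) * 20 + 128.
loss_value, grads_value = iterate([input_img])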



Source: https://stackoverflow.com/questions/50310063/visualizing-convnet-filters-using-my-own-fine-tuned-network-resulting-in-a-none
