I want to divide the autoencoder training and application into two parts, following https://blog.keras.io/building-autoencoders-in-keras.html and using the fashion-mnist data for testing purposes.
Since you are using the functional API to create the autoencoder, the best way to reconstruct the encoder and decoder is to use the functional API and the Model class again:
import keras as K
from keras.models import Model
from keras.layers import Input

encoding_dim = 32  # size of the bottleneck layer
autoencoder = K.models.load_model('fashion-autoencoder.hdf5')
# Encoder: the original input through to the bottleneck (second-to-last) layer
encoder = Model(autoencoder.input, autoencoder.layers[-2].output)
# Decoder: a fresh bottleneck-sized input fed through the autoencoder's last layer
decoder_input = Input(shape=(encoding_dim,))
decoder = Model(decoder_input, autoencoder.layers[-1](decoder_input))
encoder.summary()
decoder.summary()
The model summaries:
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) (None, 784) 0
_________________________________________________________________
dense_3 (Dense) (None, 32) 25120
=================================================================
Total params: 25,120
Trainable params: 25,120
Non-trainable params: 0
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) (None, 32) 0
_________________________________________________________________
dense_4 (Dense) (None, 784) 25872
=================================================================
Total params: 25,872
Trainable params: 25,872
Non-trainable params: 0
_________________________________________________________________
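To sanity-check the two halves, you can chain them on the Fashion-MNIST test set and compare the result against the full autoencoder. A minimal sketch, assuming the preprocessing from the blog post (flatten to 784 features, scale to [0, 1]):

import numpy as np
from keras.datasets import fashion_mnist

(_, _), (x_test, _) = fashion_mnist.load_data()
x_test = x_test.reshape((len(x_test), 784)).astype('float32') / 255.

codes = encoder.predict(x_test)           # shape (10000, 32)
reconstructions = decoder.predict(codes)  # shape (10000, 784)

# The split round trip should reproduce the full model's output
assert np.allclose(reconstructions, autoencoder.predict(x_test), atol=1e-5)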
The solution involving calling pop() on the layers attribute does not work, because you would also need to update some of the model's internal attributes. For Sequential models, however, a built-in pop() method is implemented.
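If you do build the autoencoder as a Sequential model instead, that built-in pop() gives you the encoder directly. A minimal sketch, assuming the same 784-32-784 architecture (the layer sizes and activations here follow the blog post, not necessarily your exact code):

from keras.models import Sequential
from keras.layers import Dense

# Hypothetical Sequential version of the same 784-32-784 autoencoder
seq_autoencoder = Sequential()
seq_autoencoder.add(Dense(32, activation='relu', input_shape=(784,)))
seq_autoencoder.add(Dense(784, activation='sigmoid'))
# ... train it, or load previously saved weights, here ...
seq_autoencoder.pop()       # removes the last (decoder) layer in place
seq_autoencoder.summary()   # now ends at the 32-unit bottleneck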