How to design a shared weight, multi input/output Auto-Encoder network?

Submitted by て烟熏妆下的殇ゞ on 2021-02-08 07:44:56

Question


I have two different types of images (a camera image and its corresponding sketch). The goal of the network is to find the similarity between the two images.

The network consists of a single encoder and a single decoder. The motivation behind using a single encoder-decoder is to share the weights between the two input branches.

input_img = Input(shape=(img_width,img_height, channels))

def encoder(input_img):
    # Photo-Encoder Code
    pe = Conv2D(96, kernel_size=11, strides=(4,4), padding = 'SAME')(input_img) # (?, 64, 64, 96)
    pe = BatchNormalization()(pe)
    pe = Activation('selu')(pe)
    pe = MaxPool2D((3, 3), strides=(2, 2), padding = 'VALID')(pe) # (?, 31, 31, 96)

    pe = Conv2D(256, kernel_size=5, strides=(1,1), padding = 'SAME')(pe) # (?, 31, 31, 256)
    pe = BatchNormalization()(pe)
    pe = Activation('selu')(pe)
    pe = MaxPool2D((3, 3), strides=(2, 2), padding = 'VALID')(pe) #(?, 15, 15, 256)

    pe = Conv2D(384, kernel_size=3, strides=(1,1), padding = 'SAME')(pe) # (?, 15, 15, 384)
    pe = BatchNormalization()(pe)
    pe = Activation('selu')(pe)

    pe = Conv2D(384, kernel_size=3, strides=(1,1), padding = 'SAME')(pe) # (?, 15, 15, 384)
    pe = BatchNormalization()(pe)
    pe = Activation('selu')(pe)

    pe = Conv2D(256, kernel_size=3, strides=(1,1), padding = 'SAME')(pe) # (?, 15, 15, 256)
    pe = BatchNormalization()(pe)
    pe = Activation('selu')(pe)
    encoded = MaxPool2D((3, 3), strides=(2, 2), padding = 'VALID')(pe) # (?, 7, 7, 256)

    return encoded

def decoder(pe):
    pe = Conv2D(1024, kernel_size=7, strides=(1, 1), padding = 'VALID')(pe)
    pe = BatchNormalization()(pe)
    pe = Activation('selu')(pe)

    p_decoder_inp = Reshape((2,2,256))(pe)   

    pd = Conv2DTranspose(128, kernel_size=5, strides=(2, 2), padding='SAME')(p_decoder_inp)
    pd = Activation("selu")(pd)

    pd = Conv2DTranspose(64, kernel_size=5, strides=(2, 2), padding='SAME')(pd) 
    pd = Activation("selu")(pd)

    pd = Conv2DTranspose(32, kernel_size=5, strides=(2, 2), padding='SAME')(pd)
    pd = Activation("selu")(pd)

    pd = Conv2DTranspose(16, kernel_size=5, strides=(2, 2), padding='SAME')(pd) 
    pd = Activation("selu")(pd)

    pd = Conv2DTranspose(8, kernel_size=5, strides=(2, 2), padding='SAME')(pd)
    pd = Activation("selu")(pd)

    pd = Conv2DTranspose(4, kernel_size=5, strides=(2, 2), padding='SAME')(pd)
    pd = Activation("selu")(pd)

    decoded = Conv2DTranspose(3, kernel_size=5, strides=(2, 2), padding='SAME', activation='sigmoid')(pd) # (?, ?, ?, 3)

    return decoded


siamsese_net = Model([camera_img, sketch_img], [decoder(encoder(camera_img)), decoder(encoder(sketch_img))])

siamsese_net.summary()

When I visualize the network, it shows two different networks.

But what I want is a single network that takes two inputs, for example a camera image and a sketch image, and returns both reconstructed images using one shared encoder-decoder.

Where am I going wrong?
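The behavior can be reproduced in miniature (a hypothetical sketch assuming TensorFlow 2.x / `tf.keras`, with a tiny `Dense` layer standing in for the conv stacks): a function that *builds* layers creates fresh weights on every call, so the two branches share nothing.

```python
# Miniature of the setup above (assuming TensorFlow 2.x):
# a tiny Dense layer stands in for the conv encoder.
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def encoder(x):
    # Builds a brand-new Dense layer (new weights) on every call.
    return Dense(4)(x)

a = Input(shape=(8,))
b = Input(shape=(8,))
net = Model([a, b], [encoder(a), encoder(b)])

# Two independent Dense layers -> 2 kernels + 2 biases = 4 weight tensors.
print(len(net.trainable_weights))  # 4
```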


Answer 1:


Your "functions" are not "models"; they are "creators": every call to `encoder(...)` or `decoder(...)` instantiates a brand-new set of layers with fresh weights, so the two branches share nothing.

Update both of your functions like this:

def create_encoder(): #no arguments!!!
    pe = Input(shape=(img_width,img_height, channels))
    ....
    encoded = ...

    encoder = Model(pe, encoded)
    return encoder

def create_decoder():
    pe = Input(shape=(7,7,256))
    ....
    decoded = ....

    decoder = Model(pe, decoded)
    return decoder

Now create the models:

encoder = create_encoder()
decoder = create_decoder()

siamsese_net = Model([camera_img, sketch_img],
                     [decoder(encoder(camera_img)), decoder(encoder(sketch_img))])

#where camera_img and sketch_img are 'Input' objects.
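A runnable miniature of the corrected pattern (a hedged sketch assuming TensorFlow 2.x / `tf.keras`, again with a hypothetical tiny `Dense` layer in place of the conv stacks): because the creator returns a `Model` once, applying that single instance to both inputs reuses the same weights.

```python
# Miniature of the corrected pattern (assuming TensorFlow 2.x): the
# creator builds a Model once, and that one instance is applied to
# both inputs, so its weights are shared.
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def create_encoder():  # no arguments!
    inp = Input(shape=(8,))
    out = Dense(4)(inp)
    return Model(inp, out)

encoder = create_encoder()

camera_img = Input(shape=(8,))
sketch_img = Input(shape=(8,))
siamese = Model([camera_img, sketch_img],
                [encoder(camera_img), encoder(sketch_img)])

# One shared Dense layer -> 1 kernel + 1 bias = 2 weight tensors.
print(len(siamese.trainable_weights))  # 2
```

Visualizing or summarizing `siamese` now shows the same `encoder` sub-model feeding both outputs, which is exactly the shared-weight structure asked for.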


Source: https://stackoverflow.com/questions/60264360/how-to-design-a-shared-weight-multi-input-output-auto-encoder-network
