generative-adversarial-network

Keras train partial model issue (about GAN model)

假装没事ソ submitted on 2019-12-22 11:29:45
Question: I came across a strange issue when using Keras to implement a GAN. With a GAN we need to build up G and D first, and then add a new Sequential model (GAN) and add(G), add(D) to it afterwards. Keras seems to backpropagate into G (via the GAN model) when I call D.train_on_batch, and I get an InvalidArgumentError: You must feed a value for placeholder tensor 'dense_input_1' with dtype float. If I remove the GAN model (the stacked G-then-D Sequential model), d_loss is computed correctly. My environment is:

Ubuntu 16.04
keras 1.2.2
tensorflow-gpu 1.0.0
keras config: { "backend": "tensorflow" …
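
For reference, the usual way to keep the stacked model from training D is to compile D on its own and freeze it before compiling the combined model. The sketch below uses illustrative MLP layer sizes, not the asker's actual models:

    from keras.models import Sequential
    from keras.layers import Dense

    # Generator: maps a 100-dim noise vector to a 784-dim sample.
    g = Sequential()
    g.add(Dense(128, input_dim=100, activation='relu'))
    g.add(Dense(784, activation='tanh'))

    # Discriminator: compiled while trainable, so D.train_on_batch updates it.
    d = Sequential()
    d.add(Dense(128, input_dim=784, activation='relu'))
    d.add(Dense(1, activation='sigmoid'))
    d.compile(loss='binary_crossentropy', optimizer='adam')

    # Freeze D *before* compiling the stacked model; Keras captures the
    # trainable flag at compile time, so training 'gan' only updates G.
    d.trainable = False
    gan = Sequential()
    gan.add(g)
    gan.add(d)
    gan.compile(loss='binary_crossentropy', optimizer='adam')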

Runtime Error: Disconnected graph for GANs because input can't be obtained

江枫思渺然 submitted on 2019-12-10 11:08:18
Question: Here is my discriminator architecture:

    def build_discriminator(img_shape, embedding_shape):
        model1 = Sequential()
        model1.add(Conv2D(32, kernel_size=5, strides=2, input_shape=img_shape, padding="same"))
        model1.add(LeakyReLU(alpha=0.2))
        model1.add(Dropout(0.25))
        model1.add(Conv2D(48, kernel_size=5, strides=2, padding="same"))
        # model.add(ZeroPadding2D(padding=((0,1),(0,1))))
        model1.add(BatchNormalization(momentum=0.8))
        model1.add(LeakyReLU(alpha=0.2))
        model1.add(Dropout(0.25))
        model1.add(Conv2D(64, kernel_size=5, strides=2, padding="same"))
        model1.add(BatchNormalization(momentum=0.8))
        model1.add(…
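
A "disconnected graph" error in Keras usually means the model's output tensor does not trace back to every Input passed to Model(...). A minimal sketch of wiring an image input and an embedding input together with the functional API, assuming Keras 2 and illustrative shapes and layer sizes rather than the asker's actual code:

    from keras.layers import Input, Conv2D, Flatten, Dense, Concatenate
    from keras.models import Model

    def build_discriminator(img_shape, embedding_shape):
        img = Input(shape=img_shape)
        emb = Input(shape=embedding_shape)

        x = Conv2D(32, kernel_size=5, strides=2, padding="same")(img)
        x = Flatten()(x)

        # Project the embedding and merge it with the image features.
        merged = Concatenate()([x, Dense(32)(emb)])
        validity = Dense(1, activation="sigmoid")(merged)

        # Every Input the output depends on is listed in `inputs`, so the
        # graph from inputs to output is fully connected.
        return Model(inputs=[img, emb], outputs=validity)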

How to interpret the discriminator's loss and the generator's loss in Generative Adversarial Nets?

…衆ロ難τιáo~ submitted on 2019-12-03 11:24:06
Question: I am reading people's implementations of DCGAN, especially this one in TensorFlow. In that implementation, the author plots the losses of the discriminator and of the generator, shown below (images come from https://github.com/carpedm20/DCGAN-tensorflow). Neither the discriminator's loss nor the generator's loss seems to follow any pattern, unlike ordinary neural networks, whose loss decreases as training iterations increase. How should the losses be interpreted when training GANs?

Answer: Unfortunately, as you've said, for GANs the losses are very non-intuitive. Mostly it happens …
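
One concrete reference point, from the original GAN paper rather than the linked implementation: at the theoretical equilibrium the discriminator outputs 0.5 for both real and fake samples, so its binary cross-entropy loss hovers near -log(0.5) ≈ 0.693 per term instead of decreasing toward zero. A quick check of those values:

    import math

    # At equilibrium D(x) = D(G(z)) = 0.5, so each cross-entropy term is
    # -log(0.5); a flat D loss around these values is expected, not a bug.
    d_loss_real = -math.log(0.5)      # ~0.693
    d_loss_fake = -math.log(1 - 0.5)  # ~0.693
    print(d_loss_real + d_loss_fake)  # ~1.386 when real and fake terms are summed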
