generative-adversarial-network

How to balance generator and discriminator performance in a GAN?

自闭症网瘾萝莉.ら submitted on 2021-02-19 08:06:26

Question: It's the first time I'm working with GANs, and I am facing an issue with the discriminator repeatedly outperforming the generator. I am trying to reproduce the PA model from this article, and I'm looking at this slightly different implementation to help me out. I have read quite a few papers on how GANs work and have also followed some tutorials to understand them better. Moreover, I've read articles on how to overcome the major instabilities, but I can't find a way to overcome this behavior…
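The two standard knobs for a discriminator that keeps winning are one-sided label smoothing and an asymmetric update schedule (train the generator more often than the discriminator). The sketch below is a hypothetical, framework-free illustration of those two ideas; the `'D'`/`'G'` entries stand in for the actual Keras `train_on_batch` calls and none of these names come from the question.

```python
import random

def smooth_real_labels(n, low=0.8, high=1.0):
    """One-sided label smoothing: real labels drawn from [0.8, 1.0]
    instead of hard 1.0, which weakens an over-confident discriminator."""
    return [low + (high - low) * random.random() for _ in range(n)]

def train_gan(steps, d_updates_per_g=1, g_updates_per_d=2):
    """Asymmetric schedule: if D outperforms G, give G extra updates
    per step (or the reverse if G dominates). Returns the update order;
    each entry stands in for one train_on_batch call."""
    log = []
    for _ in range(steps):
        for _ in range(d_updates_per_g):
            log.append('D')   # stand-in for a discriminator update
        for _ in range(g_updates_per_d):
            log.append('G')   # stand-in for a generator (combined-model) update
    return log
```

With `d_updates_per_g=1, g_updates_per_d=2`, each step runs one discriminator update followed by two generator updates; tune the ratio by watching the two losses.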

Error after combining two Keras models into a VAE: You must feed a value for placeholder tensor 'critic_input_2'

只愿长相守 submitted on 2021-01-29 09:30:17

Question: I'm trying to combine a GAN generator and critic to train both as a VAE. The base code is here. The code was modified to create an encoder on top of the critic:

```python
def _build_critic(self):
    #### THE critic
    critic_input = Input(shape=self.input_dim, name='critic_input')
    x = critic_input
    for i in range(self.n_layers_critic):
        x = Conv2D(
            filters=self.critic_conv_filters[i],
            kernel_size=self.critic_conv_kernel_size[i],
            strides=self.critic_conv_strides[i],
            padding='same',
            name='critic_conv_' + str(i),
            kernel…
```
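An unfed `'critic_input_2'` placeholder usually means the critic's `Input` layer was instantiated a second time (Keras auto-renames the duplicate), so the combined model contains an input that nothing feeds. The fix is to build each sub-model once and compose them by calling one on the other's output. A minimal plain-Python sketch of that composition pattern, with lambdas standing in for the Keras models (all names here are illustrative, not from the question):

```python
# Build each sub-model ONCE; composing them should call the existing
# models, never re-run a _build_* method that creates a fresh Input.

def build_encoder():
    return lambda x: [v * 2 for v in x]   # stand-in for the encoder model

def build_critic():
    return lambda z: sum(z)               # stand-in for the critic model

def build_vae(encoder, critic):
    # Compose the already-built models: critic(encoder(x)). In Keras this
    # is `vae_output = critic_model(encoder_model(vae_input))`, reusing the
    # one input placeholder instead of creating 'critic_input_2'.
    return lambda x: critic(encoder(x))

encoder, critic = build_encoder(), build_critic()
vae = build_vae(encoder, critic)
```

The key point is that `build_vae` receives the existing models as arguments rather than rebuilding them.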

GAN generates exactly the same images across a batch, only because of the seed distribution. Why?

拈花ヽ惹草 submitted on 2021-01-07 00:12:14

Question: I have trained a GAN to reproduce CIFAR10-like images. Initially I noticed that all images across one batch produced by the generator always look the same, like the picture below. After hours of debugging and comparison against the tutorial, which is a great learning resource for beginners (https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-a-cifar-10-small-object-photographs-from-scratch/), I added only one letter to my original code and the generated images start…
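Identical images across a batch usually come down to how the latent points are drawn: every sample in the batch needs its own independently sampled latent vector, not one vector broadcast or tiled over the batch. A hedged NumPy sketch (the function name follows the linked tutorial's `generate_latent_points`; the body is illustrative):

```python
import numpy as np

def generate_latent_points(latent_dim, n_samples, seed=None):
    """Draw one independent standard-normal latent vector PER sample,
    returning shape (n_samples, latent_dim). Drawing a single vector of
    length latent_dim and repeating it across the batch is the classic
    bug that makes every generated image in the batch identical."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(latent_dim * n_samples)
    return x.reshape(n_samples, latent_dim)
```

A quick sanity check is to assert that two rows of the returned array differ before feeding them to the generator.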

StyleGAN2 model with Flask API generates weird results after the first request

时光总嘲笑我的痴心妄想 submitted on 2020-12-12 08:51:29

Question: So here's what's happening. I have been using the StyleGAN2 model for a while now, and I decided to make a website that lets the user input the arguments for the model to generate the images. The model was trained using TensorFlow v1.15, and the code works perfectly fine and generates all the required outputs when I run the model directly on my machine through the command line. The problem arises when I use a Flask API to do the same thing. Here is all the code for…
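With TF 1.x behind a web server, a common cause of "works once, then breaks" is per-request initialization: Flask handles each request in its own thread, so graph or session state set up lazily inside the handler can diverge after the first call. The usual remedy is to load the model once at startup and have every request reuse it (with TF 1.x you would additionally capture the graph at load time and wrap inference in `with graph.as_default():`). A framework-free sketch of that singleton pattern; `ModelServer` and `loader` are illustrative names, not from the post:

```python
class ModelServer:
    """Load the model exactly once, at startup, and reuse it for
    every request instead of re-initializing inside the handler."""

    def __init__(self, loader):
        self._model = loader()   # one-time, eager load
        self.load_count = 1      # should stay 1 for the process lifetime

    def handle_request(self, args):
        # Stand-in for a Flask view function: only runs inference,
        # never reloads or re-initializes the model.
        return self._model(args)
```

In a real Flask app, the `ModelServer` instance would be created at module import time, before `app.run()`.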

Scene Text Image Super-Resolution for OCR

孤者浪人 submitted on 2020-12-01 10:04:39

Question: I am working on an OCR system. A challenge I'm facing in recognizing text within an ROI is shakiness or motion blur in the shot, or text that is out of focus due to the camera angle. Please consider the following demo sample. If you look at the texts (for example, the ones marked in red), in such cases the OCR system can't properly recognize the text. However, this scenario can also occur with a straight-on shot, where the image is so blurry that the OCR system can't recognize it, or only partially…

How can I implement equalized learning rate with TensorFlow 2?

纵然是瞬间 submitted on 2020-07-21 11:28:32

Question: I am trying to implement StyleGAN with TensorFlow version 2, and I have no idea how to implement the equalized learning rate. I tried to scale the gradients this way: gradients equalization. But it doesn't work correctly. Please help.

Answer 1: You can just create a custom layer.

```python
class DenseEQ(Dense):
    """
    Standard dense layer, but includes learning rate equalization at
    runtime as per Karras et al. 2017.

    Inherits the Dense layer and overrides the call method.
    """
    def __init__(self, **kwargs):
        if 'kernel_initializer'…
```
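The idea behind a `DenseEQ`-style layer can be shown framework-free: initialize weights from a unit normal and multiply by the He constant `sqrt(2 / fan_in)` in the forward pass, rather than baking the scale into the initializer. Because the scaling happens at runtime, optimizer updates act on unit-scale weights, equalizing the effective learning rate across layers. A hedged NumPy sketch (class and attribute names are illustrative):

```python
import numpy as np

class DenseEQ:
    """Dense layer with equalized learning rate (Karras et al. 2017):
    weights start at unit variance; the He constant is applied at
    RUNTIME in the forward pass, not at initialization."""

    def __init__(self, fan_in, units, rng=None):
        rng = rng or np.random.default_rng()
        self.w = rng.standard_normal((fan_in, units))  # N(0, 1) init
        self.b = np.zeros(units)
        self.c = np.sqrt(2.0 / fan_in)                 # He constant

    def __call__(self, x):
        # Runtime scaling: the gradient w.r.t. self.w is also scaled by c,
        # which is exactly what equalizes the per-layer learning rate.
        return x @ (self.w * self.c) + self.b
```

In Keras terms, the subclassed layer would force `kernel_initializer` to `RandomNormal(stddev=1.0)` in `__init__` and multiply the kernel by `self.c` inside `call`.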

Keras.backend.reshape: TypeError: Failed to convert object of type <class 'list'> to Tensor. Consider casting elements to a supported type

佐手、 submitted on 2020-01-06 07:59:10

Question: I'm designing a custom layer for my neural network, but I get an error from my code. I want to implement an attention layer as described in the SAGAN paper, following the original TF code:

```python
class AttentionLayer(Layer):
    def __init__(self, **kwargs):
        super(AttentionLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        input_dim = input_shape[-1]
        filters_f_g = input_dim // 8
        filters_h = input_dim
        kernel_shape_f_g = (1, 1) + (input_dim, filters_f_g)
        kernel_shape_h = (1, 1) + (input_dim, filters_h)
        # …
```
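The `TypeError` in the title typically comes from passing `K.reshape` a Python list mixing tensors and ints as the target shape; `K.reshape` wants a tuple of ints (with `-1` allowed), e.g. `K.reshape(x, (-1, h * w, c))`. The attention math the layer computes can be checked in NumPy: flatten the spatial grid to `N = H*W` positions, project with the f/g/h kernels, and apply a row-wise softmax. A hedged sketch for a single image (function and argument names are illustrative):

```python
import numpy as np

def sagan_attention(x, wf, wg, wh):
    """SAGAN-style self-attention for one (H, W, C) feature map.
    wf, wg, wh play the role of the 1x1-conv kernels f, g, h."""
    h, w, c = x.shape
    flat = x.reshape(h * w, c)                 # the reshape: (H, W, C) -> (N, C)
    f, g, v = flat @ wf, flat @ wg, flat @ wh  # 1x1-conv projections
    logits = f @ g.T                           # (N, N) pairwise attention logits
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)      # row-wise softmax
    return (att @ v).reshape(h, w, -1)         # back to the spatial grid
```

In the Keras layer, the same flatten/unflatten steps are where the tuple-of-ints shape must be used.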