keras

Combining Two CNNs

Submitted by 非 Y 不嫁゛ on 2021-01-04 02:00:37
Question: I want to combine two CNNs into just one in Keras. What I mean is that I want the neural network to take two images and process each one in a separate CNN, then concatenate them at the flattening layer and use fully connected layers to do the final work. Here is what I did: # Start with the first branch ############################################################ branch_one = Sequential() # Adding the convolution branch_one.add(Conv2D(32, (3,3), input_shape=(64,64,3), activation='relu'
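Merging two branches is awkward with the Sequential API the excerpt starts from; the Keras functional API is the usual route. A minimal sketch (filter counts, pooling, and head sizes are illustrative, not taken from the question):

```python
from tensorflow.keras import Input, Model, layers

def build_branch(inp):
    # Small convolutional stack per image; sizes are illustrative.
    x = layers.Conv2D(32, (3, 3), activation='relu')(inp)
    x = layers.MaxPooling2D((2, 2))(x)
    return layers.Flatten()(x)

input_a = Input(shape=(64, 64, 3))
input_b = Input(shape=(64, 64, 3))

# Concatenate the two flattened branches, then a fully connected head.
merged = layers.concatenate([build_branch(input_a), build_branch(input_b)])
x = layers.Dense(128, activation='relu')(merged)
output = layers.Dense(1, activation='sigmoid')(x)

model = Model(inputs=[input_a, input_b], outputs=output)
```

`model.fit` would then be called with a list of two image arrays, one per branch.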

Keras `ImageDataGenerator` augments image and mask differently

Submitted by 允我心安 on 2021-01-02 08:10:12
Question: I'm training a semantic segmentation model using Keras with the TensorFlow backend. I adopted ImageDataGenerator to do the image augmentation, including rotation, flipping and shifting. Following the documentation, I created a dictionary maskgen_args and used it as the arguments to instantiate two ImageDataGenerator instances: maskgen_args = dict( rotation_range=90, validation_split=VALIDATION_SPLIT ) image_datagen = ImageDataGenerator(**maskgen_args) mask_datagen = ImageDataGenerator(**maskgen_args) The
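The documented way to keep image and mask transforms identical is to pass the same seed to both `flow` calls. A small sketch with in-memory arrays (array shapes and augmentation arguments are illustrative, not the asker's):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen_args = dict(rotation_range=90, horizontal_flip=True)
image_datagen = ImageDataGenerator(**gen_args)
mask_datagen = ImageDataGenerator(**gen_args)

images = np.random.rand(8, 32, 32, 3)
masks = np.random.rand(8, 32, 32, 1)

seed = 1  # the identical seed keeps both generators' random transforms in sync
image_gen = image_datagen.flow(images, batch_size=4, seed=seed, shuffle=False)
mask_gen = mask_datagen.flow(masks, batch_size=4, seed=seed, shuffle=False)

train_gen = zip(image_gen, mask_gen)  # yields matching (image, mask) batches
```

Each item from `train_gen` is then an (image batch, mask batch) pair with the same rotation and flip applied to both.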

Custom loss function involving gradients in Keras/Tensorflow

Submitted by 。_饼干妹妹 on 2021-01-02 06:36:25
Question: I've seen that this question has been asked a few times before, but without any resolution. My problem is simple: I would like to implement a loss function that computes the MSE between the gradient of the prediction and the true value (eventually moving on to much more complicated loss functions). I define the following two functions: def my_loss(y_true, y_pred, x): dydx = K.gradients(y_pred, x) return K.mean(K.square(dydx - y_true), axis=-1) def my_loss_function(x): def gradLoss(y_true, y
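The usual workaround for the extra `x` argument is exactly the closure pattern the excerpt starts to build; in TF2 the gradient itself is easier to take with `tf.GradientTape` than with `K.gradients`, and the resulting loss is called from a custom training loop rather than through `model.compile`. A sketch under those assumptions (the toy model at the end is only a sanity check, not part of the question):

```python
import tensorflow as tf

def make_grad_loss(model):
    """Closure-built loss: MSE between dy/dx of the prediction and a target."""
    def grad_loss(x, y_true):
        with tf.GradientTape() as tape:
            tape.watch(x)              # track the input, not just variables
            y_pred = model(x)
        dydx = tape.gradient(y_pred, x)  # gradient of prediction w.r.t. input
        return tf.reduce_mean(tf.square(dydx - y_true))
    return grad_loss

# Sanity check: with y = x the gradient is 1 everywhere, so a target
# gradient of 1 should give a loss of 0.
toy = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, use_bias=False, kernel_initializer='ones')])
loss = make_grad_loss(toy)(tf.ones((2, 1)), tf.ones((2, 1)))
```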

keras giving same loss on every epoch

Submitted by 泄露秘密 on 2021-01-02 06:04:07
Question: I am a newbie to Keras. I ran it on a dataset where my objective was to reduce the log loss, but it gives me the same loss value for every epoch, and I am confused about whether I am on the right track or not. For example: Epoch 1/5 91456/91456 [==============================] - 142s - loss: 3.8019 - val_loss: 3.8278 Epoch 2/5 91456/91456 [==============================] - 139s - loss: 3.8019 - val_loss: 3.8278 Epoch 3/5 91456/91456 [==============================] - 143s - loss: 3.8019 - val_loss: 3.8278
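A loss that is bit-for-bit identical across epochs usually means the optimizer is not making useful updates. Two common first checks are sketched below with an entirely illustrative model (layer sizes, loss, and learning rate are assumptions, not the asker's setup): pair the output activation with a matching loss, and set the learning rate explicitly rather than leaving it to chance:

```python
import tensorflow as tf

# Illustrative sanity-check setup: softmax output paired with a
# categorical cross-entropy, and an explicit, moderate learning rate.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
)
```

If the loss still never moves, inspecting a few predictions after one epoch (are they all identical?) usually narrows the problem to the data pipeline rather than the optimizer.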

How to fix "module 'keras.backend.tensorflow_backend' has no attribute '_is_tf_1'"

Submitted by 自古美人都是妖i on 2021-01-02 05:37:23
Question: While training the yolov3 framework, there's always this module error. I have tried reinstalling Keras and TensorFlow; the version of Keras is 2.3.0 and the version of TensorFlow is 1.14.0. Traceback (most recent call last): File "train.py", line 6, in <module> import keras.backend as K File "F:\Anacoda\lib\site-packages\keras\__init__.py", line 3, in <module> from . import utils File "F:\Anacoda\lib\site-packages\keras\utils\__init__.py", line 27, in <module> from .multi_gpu_utils import
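A commonly reported cause of this error is a version mismatch between the standalone keras package (2.3.0 here) and the installed TensorFlow. One way to sidestep the mismatch entirely, sketched below, is to import the Keras API bundled inside TensorFlow rather than the standalone package:

```python
import tensorflow as tf

# tf.keras ships inside TensorFlow, so its backend is always matched to
# the installed TF version; no separate keras package needs to agree with it.
from tensorflow import keras
from tensorflow.keras import backend as K
```

The alternative is to keep standalone Keras but pin it to a release known to work with the installed TensorFlow version.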

Python Keras Multiple outputs returns empty array

Submitted by 我的未来我决定 on 2021-01-02 03:47:34
Question: I am trying to create a neural network with 1 input and 2 outputs. This is my code for predicting: def predict(self, environment): policy, value = self.AI.predict([environment]) print(type(policy), type(value)) print(policy, value) and this is what is printed on the screen: <class 'numpy.ndarray'> <class 'numpy.ndarray'> [[0.01186783 0.09048636 0.3038044 0.02231415 0.20798717 0.24917272 0.06396621 0.02610502 0.02429625]] [] Why is the value array empty? Shouldn't it have 1 float? This is how
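An empty second array typically means the model's output list does not actually contain a second fully wired-up head. A sketch of a properly two-headed model (layer names and sizes are hypothetical, chosen only to match the 9-element policy vector in the printout):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(9,))
shared = layers.Dense(16, activation='relu')(inp)   # shared trunk
policy = layers.Dense(9, activation='softmax', name='policy')(shared)
value = layers.Dense(1, activation='tanh', name='value')(shared)

# Both heads must be listed in `outputs` for predict to return both.
model = tf.keras.Model(inputs=inp, outputs=[policy, value])

policy_out, value_out = model.predict(np.zeros((1, 9)), verbose=0)
```

`predict` then returns one array per declared output, so the two-way unpacking in the question works and the value array holds one float per sample.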

keras (tensorflow backend) conditional assignment with K.switch()

Submitted by ╄→гoц情女王★ on 2021-01-01 13:35:18
Question: I'm trying to implement something like: if np.max(subgrid) == np.min(subgrid): middle_middle = cur_subgrid + 1 else: middle_middle = cur_subgrid Since the condition can only be determined at run time, I'm using Keras syntax as follows: middle_middle = K.switch(K.max(subgrid) == K.min(subgrid), lambda: tf.add(cur_subgrid, 1), lambda: cur_subgrid) But I'm getting this error: <ipython-input-112-0504ce070e71> in col_loop(j, gray_map, mask_A) 56 57 ---> 58 middle_middle = K.switch(K.max(subgrid) ==
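The immediate problem is the Python `==` between two tensors: `K.switch` (a thin wrapper over conditional execution) needs the condition built as a tensor, e.g. with `K.equal`/`tf.equal`, not as a Python comparison. A sketch of the fix using the plain-TensorFlow equivalents (the example grids are made up):

```python
import tensorflow as tf

subgrid = tf.constant([[1.0, 1.0], [1.0, 1.0]])
cur_subgrid = tf.constant([[5.0, 5.0], [5.0, 5.0]])

# Build the condition as a tensor with tf.equal (K.equal in backend terms);
# a plain Python `==` does not produce the scalar boolean that
# K.switch / tf.cond expects when building a graph.
middle_middle = tf.cond(
    tf.equal(tf.reduce_max(subgrid), tf.reduce_min(subgrid)),
    lambda: cur_subgrid + 1,
    lambda: cur_subgrid,
)
```

Here the max equals the min, so the branch that adds 1 is taken and every element of the result is 6.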