deep-learning

Combining Two CNNs

Submitted by 萝らか妹 on 2021-01-04 02:03:08
Question: I want to combine two CNNs into just one in Keras. What I mean is that I want the neural network to take two images, process each one in a separate CNN, then concatenate them together at the flattening layer and use fully connected layers to do the last part of the work. Here is what I did:

    # Start with the first branch
    branch_one = Sequential()
    # Add the convolution layer
    branch_one.add(Conv2D(32, (3,3), input_shape=(64,64,3), activation='relu'))
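A minimal sketch of one way to do this with the Keras functional API, which handles multi-input models more naturally than two `Sequential` objects. The 32-filter 3x3 convolution and 64x64x3 input shape come from the question; the pooling layer, dense sizes, and single sigmoid output are assumptions for illustration.

```python
# Sketch: merge two CNN branches into one model (functional API).
# Assumed hyperparameters: MaxPooling2D, Dense(128), 1-unit sigmoid output.
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Flatten, Concatenate, Dense)
from tensorflow.keras.models import Model

def build_two_branch_cnn(input_shape=(64, 64, 3)):
    def branch(inp):
        # One conv block per image, as in the question
        x = Conv2D(32, (3, 3), activation='relu')(inp)
        x = MaxPooling2D((2, 2))(x)
        return Flatten()(x)

    in_a = Input(shape=input_shape)
    in_b = Input(shape=input_shape)
    # Concatenate the two flattened branches, then classify
    merged = Concatenate()([branch(in_a), branch(in_b)])
    x = Dense(128, activation='relu')(merged)
    out = Dense(1, activation='sigmoid')(x)
    return Model(inputs=[in_a, in_b], outputs=out)

model = build_two_branch_cnn()
model.compile(optimizer='adam', loss='binary_crossentropy')
```

Training then takes a list of two image arrays: `model.fit([images_a, images_b], labels, ...)`.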

module 'tensorflow' has no attribute 'random_uniform'

Submitted by 落花浮王杯 on 2021-01-02 06:37:08
Question: I tried to run a deep learning application and got a "module 'tensorflow' has no attribute 'random_uniform'" error. On CPU the code works fine, but it is really slow; in order to run the code on the GPU I needed to change some definitions. Here is my code. Any ideas?

    def CapsNet(input_shape, n_class, routings):
        x = tf.keras.layers.Input(shape=input_shape)
        # Layer 1: just a conventional Conv2D layer
        conv1 = tf.keras.layers.Convolution2D(filters=256, kernel_size=9, strides=1, padding=
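This error typically means the code was written for TensorFlow 1.x but is running under 2.x, where `tf.random_uniform` was moved to `tf.random.uniform`. A short sketch of the two usual fixes (the shape and range values here are arbitrary examples):

```python
# Fix for "module 'tensorflow' has no attribute 'random_uniform'" in TF 2.x.
import tensorflow as tf

# Option 1: use the renamed API.
sample = tf.random.uniform(shape=(2, 3), minval=0.0, maxval=1.0)

# Option 2: run legacy 1.x-style code through the compat module.
legacy = tf.compat.v1.random_uniform(shape=(2, 3))
```

For larger 1.x codebases, replacing `tf.` calls wholesale with `tf.compat.v1.` (or running the `tf_upgrade_v2` conversion script shipped with TensorFlow) avoids chasing each renamed symbol individually.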

tensorflow gradient - getting all nan values

Submitted by 十年热恋 on 2021-01-02 06:14:15
Question: I am using Python 3 with Anaconda, and TensorFlow 1.12 with eager execution. I am using it to create a triplet loss function for a Siamese network, and I need to calculate the distance between different data samples. I created a function for the distance calculation, but no matter what I do, when I try to calculate its gradient with respect to the network's output, it keeps giving me all-nan gradients. This is the code:

    def matrix_row_wise_norm(matrix):
        import tensorflow as tf
        tensor = tf
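A common cause of all-nan gradients in pairwise-distance code is `tf.sqrt(0)` on the diagonal of the distance matrix: the gradient of `sqrt` at zero is infinite, and the resulting `inf * 0` becomes nan and propagates everywhere. A hedged sketch of a numerically safe version (the function name and epsilon value are illustrative, not the asker's code):

```python
# Numerically safe row-wise pairwise distances for a triplet loss.
# Adding a small eps before the sqrt keeps the gradient finite at 0.
import tensorflow as tf

def pairwise_dist(embeddings, eps=1e-12):
    # Per-row squared norms, shape (n, 1)
    sq = tf.reduce_sum(tf.square(embeddings), axis=1, keepdims=True)
    # ||a - b||^2 = ||a||^2 - 2<a, b> + ||b||^2
    d2 = sq - 2.0 * tf.matmul(embeddings, embeddings, transpose_b=True) + tf.transpose(sq)
    d2 = tf.maximum(d2, 0.0)   # clamp tiny negatives from rounding error
    return tf.sqrt(d2 + eps)   # eps avoids the infinite sqrt gradient at 0

# Gradient check: without eps, the zero diagonal would yield nan gradients.
v = tf.Variable([[1.0, 0.0], [0.0, 1.0]])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(pairwise_dist(v))
grad = tape.gradient(loss, v)
```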

Custom loss function not improving with epochs

Submitted by 白昼怎懂夜的黑 on 2021-01-01 09:08:43
Question: I have created a custom loss function to deal with binary class imbalance, but my loss does not improve per epoch. For metrics, I'm using precision and recall. Is this a design issue where I'm not picking good hyperparameters?

    weights = [np.array([.10,.90]), np.array([.5,.5]), np.array([.1,.99]),
               np.array([.25,.75]), np.array([.35,.65])]
    for weight in weights:
        print('Model with weights {a}'.format(a=weight))
        model = keras.models.Sequential([
            keras.layers.Flatten(),  # input_shape=[X
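One standard pattern for this situation is a weighted binary cross-entropy, written elementwise to avoid shape-broadcasting surprises when the weight factor and the reduced loss have different ranks. The `[w_neg, w_pos]` pairing below mirrors the question's weight arrays, but the weighting scheme itself is an assumption, not the asker's actual loss:

```python
# Sketch: weighted binary cross-entropy for class imbalance.
# class_weights = [w_neg, w_pos]; larger w_pos penalizes errors on
# the positive (minority) class more heavily.
import tensorflow as tf
from tensorflow import keras

def weighted_bce(class_weights):
    w_neg, w_pos = float(class_weights[0]), float(class_weights[1])
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        eps = keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)  # avoid log(0)
        per_elem = -(w_pos * y_true * tf.math.log(y_pred)
                     + w_neg * (1.0 - y_true) * tf.math.log(1.0 - y_pred))
        return tf.reduce_mean(per_elem, axis=-1)
    return loss

heavy_pos = weighted_bce([0.10, 0.90])
light_pos = weighted_bce([0.90, 0.10])
```

It plugs into the question's loop as `model.compile(loss=weighted_bce(weight), ...)`. Note that Keras also supports `class_weight` in `model.fit`, which achieves a similar effect without a custom loss.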

Data augmentation techniques for general datasets?

Submitted by 左心房为你撑大大i on 2020-12-30 05:28:26
Question: I am working on a machine learning problem and want to build neural-network-based classifiers for it in MATLAB. One problem is that the data is given in the form of feature vectors, and the number of samples is considerably low. I know about data augmentation techniques for images, such as rotation, translation, and affine transformations. I would like to know whether data augmentation techniques are available for general datasets. For example, is it possible to use randomness to generate more data? I read the
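For tabular feature data, the simplest randomness-based augmentation is jittering: duplicating samples with small Gaussian noise scaled per feature. A minimal sketch (in Python/NumPy for illustration, though the question targets MATLAB; the `noise_scale` and `n_copies` values are assumptions to tune, and more principled options like SMOTE also exist):

```python
# Sketch: jitter augmentation for a general tabular dataset.
# Noise is scaled by each feature's standard deviation so features
# with different units get proportionate perturbations.
import numpy as np

def jitter_augment(X, y, n_copies=3, noise_scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    feat_std = X.std(axis=0, keepdims=True)   # per-feature scale
    X_aug, y_aug = [X], [y]
    for _ in range(n_copies):
        noise = rng.normal(0.0, noise_scale, size=X.shape) * feat_std
        X_aug.append(X + noise)               # perturbed copy
        y_aug.append(y)                       # labels unchanged
    return np.concatenate(X_aug), np.concatenate(y_aug)

X = np.arange(20.0).reshape(10, 2)
y = np.arange(10)
X_new, y_new = jitter_augment(X, y, n_copies=3)
```

The key assumption is that small perturbations do not cross class boundaries, which should be sanity-checked against a held-out set before trusting the augmented classifier.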