keras

Keras (TensorFlow backend) conditional assignment with K.switch()

Submitted by 断了今生、忘了曾经 on 2021-01-01 13:34:30
Question: I'm trying to implement something like: if np.max(subgrid) == np.min(subgrid): middle_middle = cur_subgrid + 1, else: middle_middle = cur_subgrid. Since the condition can only be determined at run time, I'm using Keras syntax as follows: middle_middle = K.switch(K.max(subgrid) == K.min(subgrid), lambda: tf.add(cur_subgrid, 1), lambda: cur_subgrid). But I'm getting this error: <ipython-input-112-0504ce070e71> in col_loop(j, gray_map, mask_A) 56 57 ---> 58 middle_middle = K.switch(K.max(subgrid) ==
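The Python `==` in `K.switch(K.max(subgrid) == K.min(subgrid), ...)` does not build the boolean tensor that `K.switch` expects as its condition; the usual fix is `K.equal(K.max(subgrid), K.min(subgrid))`. As a reference while debugging, the intended conditional can be sketched in plain Python (names taken from the question; this is a stand-in for the logic, not the graph code):

```python
def conditional_add(subgrid, cur_subgrid):
    # Intended logic: if every value in subgrid is the same
    # (max == min), add 1 to each element of cur_subgrid;
    # otherwise return cur_subgrid unchanged.
    if max(subgrid) == min(subgrid):
        return [x + 1 for x in cur_subgrid]
    return list(cur_subgrid)
```

In graph code this would become `middle_middle = K.switch(K.equal(K.max(subgrid), K.min(subgrid)), lambda: tf.add(cur_subgrid, 1), lambda: cur_subgrid)`, with both branches kept as callables as in the question.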

Why is TensorFlow's GradientTape returning None when trying to find the gradient of the loss w.r.t. the input?

Submitted by 拜拜、爱过 on 2021-01-01 09:28:50
Question: I have a CNN model built in Keras that uses an SVM as its last layer. I get the prediction of this SVM by feeding an input to the CNN model, extracting the relevant features, and then passing those features to my SVM to get an output prediction. I have named this entire process predict_DNR_tensor in the code below. This works fine and I am able to get a correct prediction. I am now trying to get the gradient of the squared hinge loss of this prediction from my SVM w.r.t. the original input,
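A frequent reason the tape returns None is that the input is a plain tensor the tape never watched (call `tape.watch(x)` before the forward pass; only `tf.Variable`s are watched automatically), or that the SVM step leaves TensorFlow (e.g. via a NumPy conversion), which silently disconnects the graph. As a reference for checking values while debugging, the squared hinge loss and its analytic gradient w.r.t. the prediction can be written in plain Python (labels in {-1, +1}; a stand-in, not the Keras implementation):

```python
def squared_hinge(y_true, y_pred):
    # Squared hinge loss for one example, y_true in {-1, +1}.
    return max(0.0, 1.0 - y_true * y_pred) ** 2

def squared_hinge_grad(y_true, y_pred):
    # d(loss)/d(y_pred): zero once the margin is satisfied,
    # otherwise -2 * y_true * (1 - y_true * y_pred).
    margin = 1.0 - y_true * y_pred
    return 0.0 if margin <= 0.0 else -2.0 * y_true * margin
```

Comparing a finite-difference estimate of this gradient against what the tape returns is a quick way to confirm whether the graph is actually connected to the input.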

Custom loss function not improving with epochs

Submitted by 白昼怎懂夜的黑 on 2021-01-01 09:08:43
Question: I have created a custom loss function to deal with binary class imbalance, but my loss does not improve per epoch. For metrics I'm using precision and recall. Is this a design issue where I'm not picking good hyper-parameters? weights = [np.array([.10,.90]), np.array([.5,.5]), np.array([.1,.99]), np.array([.25,.75]), np.array([.35,.65])] for weight in weights: print('Model with weights {a}'.format(a=weight)) model = keras.models.Sequential([ keras.layers.Flatten(), #input_shape=[X
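One common shape for such a loss is class-weighted binary cross-entropy, where the minority class gets the larger weight. A single-example sketch in plain Python (the `(w_neg, w_pos)` ordering mirrors the `np.array([.10, .90])` pairs above, but that ordering is an assumption about the question's code):

```python
import math

def weighted_bce(y_true, y_pred, weights, eps=1e-7):
    # Class-weighted binary cross-entropy for one example.
    # weights = (w_neg, w_pos); y_true is 0 or 1.
    p = min(max(y_pred, eps), 1.0 - eps)  # clip to avoid log(0)
    w_neg, w_pos = weights
    return -(w_pos * y_true * math.log(p)
             + w_neg * (1.0 - y_true) * math.log(1.0 - p))
```

If the loss stays flat across epochs, it is worth confirming that this loss decreases on a tiny overfit-one-batch test and that the learning rate is sensible before blaming the weight pairs themselves.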

Using tensorflow.keras model in pyspark UDF generates a pickle error

Submitted by 最后都变了- on 2021-01-01 07:02:47
Question: I would like to use a tensorflow.keras model in a pyspark pandas_udf. However, I get a pickle error when the model is serialized before being sent to the workers. I am not sure I am using the best method to achieve this, so I will present a minimal but complete example. Packages: tensorflow-2.2.0 (the error is also triggered on all previous versions) and pyspark-2.4.5. The import statements are: import pandas as pd import numpy as np from tensorflow.keras.models import Sequential
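The underlying issue is that a compiled tf.keras model holds objects (graph handles, locks, sessions) that pickle cannot serialize, so capturing the model in the pandas_udf closure fails; the usual workaround is to ship only `model.get_weights()` or a saved-model path to the workers and rebuild/load the model lazily inside the UDF. A stdlib sketch of the distinction, with a lock standing in for the model's unpicklable internals:

```python
import pickle
import threading

# Stand-in for a compiled model: plain weights plus an unpicklable handle.
model = {"weights": [0.1, 0.2, 0.3], "session": threading.Lock()}

def is_picklable(obj):
    try:
        pickle.dumps(obj)
        return True
    except TypeError:
        return False
```

Here `is_picklable(model)` is False while `is_picklable(model["weights"])` is True, which mirrors why broadcasting the weights (or a file path) works where broadcasting the model object does not.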

Keras/TF CPU creating too many threads

Submitted by 孤街醉人 on 2021-01-01 04:29:21
Question: Even after setting tf.config.threading.set_inter_op_parallelism_threads(1) and tf.config.threading.set_intra_op_parallelism_threads(1), Keras with the TensorFlow CPU backend (running a simple CNN model fit) on a Linux machine creates too many threads. Whatever I try, it creates 94 threads while going through the fitting epochs. I have tried playing with tf.compat.v1.ConfigProto settings, but nothing helps. How do I limit the number of threads? Answer 1: This is why TensorFlow creates so many threads.
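Beyond the `tf.config.threading` calls, TensorFlow also spawns pools controlled by environment variables that must be set before the library is imported; setting them afterwards has no effect. A sketch (OMP_NUM_THREADS applies to MKL/OpenMP builds, and the TF_NUM_* names are the environment-variable counterparts of the tf.config.threading setters; how much they reduce the count varies by TF build):

```python
import os

# Must run before `import tensorflow`, or the values are ignored.
os.environ["OMP_NUM_THREADS"] = "1"          # MKL/OpenMP worker pool
os.environ["TF_NUM_INTEROP_THREADS"] = "1"   # between-op parallelism
os.environ["TF_NUM_INTRAOP_THREADS"] = "1"   # within-op parallelism

# import tensorflow as tf  # only now import TensorFlow
```

Even with everything set to 1, TensorFlow keeps some internal service threads, so the count will not drop to 1, only well below the 94 observed.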

Reset all weights of Keras model

Submitted by 元气小坏坏 on 2021-01-01 04:16:30
Question: I would like to be able to reset the weights of my entire Keras model so that I do not have to compile it again. Compiling the model is currently the main bottleneck of my code. Here is an example of what I mean: import tensorflow as tf model = tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(16, activation='relu'), tf.keras.layers.Dense(10) ]) model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001), loss=tf.keras.losses
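A commonly suggested approach is to keep the compiled model and re-run each layer's stored initializer (roughly: loop over `model.layers` and assign `layer.kernel_initializer(shape=layer.kernel.shape)` back into `layer.kernel`), rather than rebuilding and recompiling. The idea, stripped down to plain Python (`ToyDense` is a hypothetical stand-in for illustration, not a Keras API):

```python
import random

class ToyDense:
    # Minimal stand-in for a layer that remembers its initializer,
    # so its weights can be re-drawn without reconstructing the object.
    def __init__(self, units, initializer):
        self.units = units
        self.initializer = initializer
        self.kernel = [initializer() for _ in range(units)]

    def reset_weights(self):
        # Same idea as re-running layer.kernel_initializer in Keras:
        # draw fresh values in place, leaving the object (and, in the
        # real case, the compiled training graph) untouched.
        self.kernel = [self.initializer() for _ in range(self.units)]
```

The key design point is that re-initializing in place preserves everything compilation set up, whereas `tf.keras.models.clone_model` plus `compile` pays the compilation cost again.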

Using a Gensim fastText model with an LSTM NN in Keras

Submitted by 只谈情不闲聊 on 2020-12-31 14:52:51
Question: I have trained a fastText model with Gensim on a corpus of very short sentences (up to 10 words). I know that my test set includes words that are not in my training corpus, i.e. some of the words in my corpus are like "Oxytocin", "Lexitocin", "Ematrophin", "Betaxitocin". Given a new word in the test set, fastText does a good job of generating a vector with high cosine similarity to the similar words in the training set by using character-level n-grams. How do I incorporate the fastText model
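One way to wire the two together is to build the weight matrix of a frozen Keras Embedding layer from the fastText vectors, letting gensim's `model.wv[word]` supply subword-based vectors even for unseen words. The matrix construction, sketched with a stdlib lookup standing in for `model.wv`:

```python
def build_embedding_matrix(word_index, get_vector, dim):
    # word_index: word -> integer id (1-based, as a Keras tokenizer
    # produces); row 0 stays all-zero for the padding token.
    # get_vector stands in for gensim's model.wv[word], which returns
    # a character-n-gram-based vector even for out-of-vocabulary words.
    matrix = [[0.0] * dim for _ in range(len(word_index) + 1)]
    for word, i in word_index.items():
        matrix[i] = list(get_vector(word))
    return matrix
```

The resulting matrix would then typically be passed to an `Embedding(input_dim=len(word_index) + 1, output_dim=dim, weights=[matrix], trainable=False)` layer (a common pattern, sketched from memory), after which the LSTM consumes the embedded sequences.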
