keras

Multiple Embedding layers for Keras Sequential model

Submitted by 一个人想着一个人 on 2021-01-22 22:56:03
Question: I am using Keras (TensorFlow backend) and am wondering how to add multiple Embedding layers to a Keras Sequential model. More specifically, several columns in my dataset hold categorical values. I considered one-hot encoding, but the number of categories runs into the hundreds, which would produce a very large and far too sparse set of columns. Looking for solutions, I found that Keras' Embedding layer appears to solve the problem very elegantly.
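A Sequential model accepts only a single input, so the usual route for several categorical columns is the functional API with one Embedding per column. The sketch below is not from the question or any answer; it assumes three hypothetical columns with made-up vocabulary sizes, purely to illustrate the pattern:

    # Not from the original post: one Embedding per categorical column,
    # concatenated before the dense layers. Column names and vocabulary
    # sizes are assumptions for illustration only.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    vocab_sizes = {"color": 300, "city": 500, "brand": 200}  # assumed cardinalities
    embed_dim = 8

    inputs, embedded = [], []
    for name, vocab in vocab_sizes.items():
        inp = layers.Input(shape=(1,), name=name)
        emb = layers.Embedding(input_dim=vocab, output_dim=embed_dim)(inp)
        embedded.append(layers.Flatten()(emb))
        inputs.append(inp)

    x = layers.Concatenate()(embedded)          # (batch, 3 * embed_dim)
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)

    model = Model(inputs=inputs, outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy")

Each column then becomes a dense vector of size embed_dim instead of hundreds of one-hot columns, which is the sparsity problem the question describes.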

Tensorflow model prediction is slow

Submitted by 廉价感情. on 2021-01-22 08:34:21
Question: I have a TensorFlow model with a single Dense layer: model = tf.keras.Sequential([tf.keras.layers.Dense(2)]); model.build(input_shape=(None, None, 25)). I construct a single input vector in float32: np_vec = np.array(np.random.randn(1, 1, 25), dtype=np.float32); vec = tf.cast(tf.convert_to_tensor(np_vec), dtype=tf.float32). I want to feed that to my model for prediction, but it is very slow. If I call predict or __call__ it takes a really long time compared to doing the same operation in NumPy.
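The usual explanation is that model.predict() spins up a full prediction loop (including a small tf.data pipeline) on every call, which dominates the cost when the input is a single tiny vector. The sketch below is not from the question; it only times the three common calling styles under that assumption so the per-call overhead can be compared directly:

    # Not from the original question: rough per-call timing of predict(),
    # a direct __call__, and a tf.function-wrapped call on the same model.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.build(input_shape=(None, None, 25))

    np_vec = np.array(np.random.randn(1, 1, 25), dtype=np.float32)
    vec = tf.convert_to_tensor(np_vec)

    fast_call = tf.function(model)  # traced once, then reused

    for label, fn in [("predict", lambda: model.predict(np_vec, verbose=0)),
                      ("__call__", lambda: model(vec)),
                      ("tf.function", lambda: fast_call(vec))]:
        fn()  # warm-up / tracing
        start = time.perf_counter()
        for _ in range(100):
            fn()
        print(label, (time.perf_counter() - start) / 100)

For single-sample, latency-sensitive inference, calling the model directly (or through a tf.function) is generally the cheaper option; predict() is intended for batched datasets.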

Tensorflow flatten vs numpy flatten function effect on machine learning training

Submitted by 我是研究僧i on 2021-01-22 07:00:34
Question: I am starting with deep learning using Keras and TensorFlow, and at this very first stage I am stuck with a doubt. When I use tf.contrib.layers.flatten (API 1.8) to flatten an image (which could be multichannel as well), how is this different from using NumPy's flatten function, and how does it affect training? I can see that tf.contrib.layers.flatten takes longer than NumPy flatten. Is it doing something more? This is a very close question, but there the accepted answer includes Theano
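The short version is that the TensorFlow op keeps the batch dimension and stays inside the graph (so gradients flow through it and it runs on every training step), while NumPy's flatten collapses everything to a single 1-D array outside the graph. The snippet below is not from the question; tf.contrib is also gone in TF 2.x, so tf.keras.layers.Flatten stands in for tf.contrib.layers.flatten, and the input shape is made up:

    # Not from the original post: contrast the two flattening behaviours.
    import numpy as np
    import tensorflow as tf

    batch = np.random.randn(4, 28, 28, 3).astype(np.float32)  # assumed image batch

    tf_flat = tf.keras.layers.Flatten()(batch)  # shape (4, 2352): batch dim kept
    np_flat = batch.flatten()                   # shape (9408,): everything collapsed

    print(tf_flat.shape, np_flat.shape)
    # Per-sample values are identical; only the framing (and graph membership) differs.
    print(np.allclose(tf_flat.numpy().ravel(), np_flat))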

Why is Keras LSTM on CPU three times faster than GPU?

Submitted by 女生的网名这么多〃 on 2021-01-22 06:15:05
Question: I use this notebook from Kaggle to run an LSTM neural network. I started training the network and saw that it is far too slow: GPU training is almost three times slower than CPU training. CPU performance: 8 min per epoch; GPU performance: 26 min per epoch. After this I decided to look for an answer in this question on Stack Overflow, and I applied a CuDNNLSTM (which runs only on GPU) instead of LSTM. As a result, GPU performance improved to only 1 min per epoch, but the model's accuracy decreased by 3%. Questions: 1) Does
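For context, the layer swap the question refers to looks roughly like the sketch below. This is not the Kaggle notebook's code; the embedding and layer sizes are invented. In TF 1.x the cuDNN-fused kernel is a separate layer (tf.keras.layers.CuDNNLSTM), while in TF 2.x tf.keras.layers.LSTM dispatches to the same fused kernel automatically when it runs on a GPU with the default arguments:

    # Not from the original notebook: illustrative LSTM model only.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=20000, output_dim=128),
        # TF 1.x: tf.keras.layers.CuDNNLSTM(64)
        # TF 2.x: the defaults below keep the cuDNN path; changing
        # recurrent_dropout, the activations, or unroll falls back to the
        # much slower generic implementation.
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

The fused kernel computes the gate math differently (and ignores options such as recurrent dropout), which is one commonly cited reason the reported accuracy can shift after the swap.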

What is meant by sequential model in Keras

Submitted by 六眼飞鱼酱① on 2021-01-21 15:23:52
Question: I have recently started working with TensorFlow for deep learning. I found the statement model = tf.keras.models.Sequential() a bit different. I couldn't understand what it actually means, and are there other kinds of models for deep learning as well? I have worked a lot with MatConvNet (a MATLAB library for convolutional neural networks) and never saw any sequential definition there. Answer 1: There are two ways to build Keras models: sequential and functional. The sequential API allows you to create models layer-by
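To make the distinction the answer describes concrete, here is the same small network written both ways. This is not code from the original answer, and the layer sizes are arbitrary:

    # Not from the original answer: sequential vs. functional styles side by side.
    import tensorflow as tf

    # Sequential: a plain stack of layers, one input, one output.
    seq_model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Functional: layers are called on tensors, so branches, multiple inputs,
    # and multiple outputs become possible.
    inp = tf.keras.Input(shape=(16,))
    x = tf.keras.layers.Dense(32, activation="relu")(inp)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    func_model = tf.keras.Model(inputs=inp, outputs=out)

Both build the same network here; the functional form simply exposes the intermediate tensors, which is what non-linear topologies need.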
