keras-layer

How do you create a custom activation function with Keras?

Submitted by 非 Y 不嫁゛ on 2019-11-26 18:52:20
Sometimes the default standard activations like ReLU, tanh, softmax, ... and the advanced activations like LeakyReLU aren't enough, and the one you need might also not be in keras-contrib. How do you create your own activation function? Credits to this GitHub issue comment by Ritchie Ng.

# Creating a model
from keras.models import Sequential
from keras.layers import Dense

# Custom activation function
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects

def custom_activation(x):
    return (K.sigmoid(x) * 5) - 1

get_custom_objects().update(
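The update call above is cut off in the excerpt. A minimal self-contained sketch of the same approach; the registration dictionary and the small example model are filled in as assumptions (layer sizes are illustrative, not from the original):

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects

def custom_activation(x):
    # Any differentiable backend expression can go here.
    return (K.sigmoid(x) * 5) - 1

# Register the function under a string name so layers (and model loading)
# can refer to it as 'custom_activation'.
get_custom_objects().update({'custom_activation': Activation(custom_activation)})

# Illustrative model using the custom activation.
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Activation(custom_activation, name='SpecialActivation'))
model.summary()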

Keras input explanation: input_shape, units, batch_size, dim, etc

Submitted by 橙三吉。 on 2019-11-26 17:59:42
For any Keras layer (Layer class), can someone explain how to understand the difference between input_shape, units, dim, etc.? For example, the docs say units specifies the output shape of a layer. In the image of the neural net below, hidden layer 1 has 4 units. Does this directly translate to the units attribute of the Layer object? Or does units in Keras equal the shape of every weight in the hidden layer times the number of units? In short, how does one understand/visualize the attributes of the model, in particular the layers, with the image below? Units: The amount of "neurons", or
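A small sketch of how these attributes map onto the weight shapes Keras actually builds (the layer sizes below are illustrative, not taken from the question):

from keras.models import Sequential
from keras.layers import Dense

# input_shape describes a single sample (the batch dimension is left out);
# units is the number of "neurons", i.e. the size of the layer's output.
model = Sequential()
model.add(Dense(units=4, input_shape=(3,)))  # hidden layer 1: 4 neurons fed by 3 inputs
model.add(Dense(units=2))                    # next layer: 2 neurons fed by 4 inputs

# Each layer's kernel has shape (inputs_to_that_layer, units),
# plus one bias per unit: (3, 4) and (4,), then (4, 2) and (2,).
for layer in model.layers:
    print(layer.name, [w.shape for w in layer.get_weights()])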

When does keras reset an LSTM state?

Submitted by ∥☆過路亽.° on 2019-11-26 09:25:47
Question: I read all sorts of texts about it, and none seem to answer this very basic question. It's always ambiguous: in a stateful = False LSTM layer, does Keras reset states after each sequence, or after each batch? Suppose I have X_train shaped as (1000, 20, 1), meaning 1000 sequences of 20 steps of a single value. If I run: model.fit(X_train, y_train, batch_size=200, nb_epoch=15) Will it reset states for every single sequence (resets states 1000 times)? Or will it reset states for every batch (resets
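For context, a sketch of the two settings being contrasted. The stateful=False model mirrors the shapes in the question; the stateful=True variant and the random data are illustrative assumptions showing where explicit resets come in:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X_train = np.random.random((1000, 20, 1))   # 1000 sequences, 20 steps, 1 feature
y_train = np.random.random((1000, 1))

# stateful=False (the default): every sequence starts from a zero state,
# regardless of how the sequences are grouped into batches.
model = Sequential()
model.add(LSTM(32, input_shape=(20, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, y_train, batch_size=200, nb_epoch=15)

# stateful=True: sample i of one batch hands its final state to sample i of
# the next batch, and the state is only cleared when you reset it yourself.
stateful = Sequential()
stateful.add(LSTM(32, batch_input_shape=(200, 20, 1), stateful=True))
stateful.add(Dense(1))
stateful.compile(loss='mse', optimizer='adam')
for epoch in range(15):
    stateful.fit(X_train, y_train, batch_size=200, nb_epoch=1, shuffle=False)
    stateful.reset_states()  # explicit reset, here once per epoch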

Specify connections in NN (in keras)

Submitted by 我与影子孤独终老i on 2019-11-26 08:34:28
Question: I am using Keras and TensorFlow 1.4. I want to explicitly specify which neurons are connected between two layers. For that I have a matrix A with a one wherever neuron i in the first layer is connected to neuron j in the second layer, and zeros elsewhere. My first attempt was to create a custom layer with a kernel that has the same size as A, with non-trainable zeros where A has zeros and trainable weights where A has ones. Then, the desired output would be a simple
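One common way to realise that idea is to keep a full trainable kernel and multiply it elementwise by the fixed 0/1 matrix A in the forward pass, so the masked-out weights contribute nothing and receive zero gradient. A sketch under that assumption (the layer name and sizes are illustrative, not from the question):

import numpy as np
from keras import backend as K
from keras.engine.topology import Layer  # Keras 2.x, as used with TF 1.4
from keras.models import Sequential
from keras.layers import Dense

class MaskedDense(Layer):
    """Dense layer whose connectivity is fixed by a binary matrix A."""
    def __init__(self, units, mask, **kwargs):
        self.units = units
        self.mask = K.constant(mask)          # shape (input_dim, units), 0/1 entries
        super(MaskedDense, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.units),
                                      initializer='glorot_uniform',
                                      trainable=True)
        self.bias = self.add_weight(name='bias', shape=(self.units,),
                                    initializer='zeros', trainable=True)
        super(MaskedDense, self).build(input_shape)

    def call(self, x):
        # Masked-out connections are multiplied by zero on every forward pass.
        return K.dot(x, self.kernel * self.mask) + self.bias

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.units)

A = np.array([[1, 0, 1],
              [0, 1, 1]], dtype='float32')    # 2 inputs -> 3 hidden neurons
model = Sequential()
model.add(MaskedDense(3, A, input_shape=(2,)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')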

Keras Dense layer's input is not flattened

Submitted by 匆匆过客 on 2019-11-26 05:30:07
This is my test code:
from keras import layers
input1 = layers.Input((2,3))
output = layers.Dense(4)(input1)
print(output)
The output is: <tf.Tensor 'dense_2/add:0' shape=(?, 2, 4) dtype=float32> But what happened? The documentation says: Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel. So why does the output look merely reshaped? Answer 1: Currently, contrary to what has been stated in the documentation, the Dense layer is applied on the last axis of the input tensor: contrary to the documentation, we don't actually flatten it. It's applied on the
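A quick way to see this behaviour in practice; the shapes follow the question, while the weight inspection and the numpy comparison are an illustrative assumption about how one might verify it:

import numpy as np
from keras import layers, models

inp = layers.Input((2, 3))
dense = layers.Dense(4)
out = dense(inp)                       # shape (None, 2, 4): only the last axis changes
model = models.Model(inp, out)

# The kernel is (3, 4), i.e. it only ever sees the last axis; the same
# weights are applied independently to each of the 2 rows of the input.
kernel, bias = dense.get_weights()
print(kernel.shape, bias.shape)        # (3, 4) (4,)

x = np.random.random((1, 2, 3))
np.testing.assert_allclose(model.predict(x), x.dot(kernel) + bias, rtol=1e-5)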
