deep-learning

Cannot take the length of Shape with unknown rank

心不动则不痛 submitted on 2020-06-27 07:41:35
问题 Question: I have a neural network built from a tf.data data generator and a tf.keras model, as follows (a simplified version, because the full code would be too long): dataset = ... A tf.data.Dataset object whose next_x method calls get_next on the x_train iterator and whose next_y method calls get_next on the y_train iterator. Each label is a (1, 67) array in one-hot form. Layers: input_tensor = tf.keras.layers.Input(shape=(240, 240, 3)) # dim of x output = tf.keras.layers.Flatten()(input_tensor)
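The "unknown rank" error usually means the dataset's element shapes were never declared, so Keras cannot infer the rank of the inputs. Below is a minimal sketch of one common fix, assuming images of shape (240, 240, 3) and one-hot labels of length 67 as described above; the generator is a hypothetical stand-in for the real iterators.

import numpy as np
import tensorflow as tf

def gen():
    # Hypothetical stand-in for the real x_train/y_train iterators.
    while True:
        yield np.zeros((240, 240, 3), np.float32), np.zeros((67,), np.float32)

# Shapes are unknown here because only the dtypes are declared.
dataset = tf.data.Dataset.from_generator(gen, output_types=(tf.float32, tf.float32))

def set_shapes(x, y):
    # Declare static shapes so downstream Keras layers can infer the rank.
    x.set_shape([240, 240, 3])
    y.set_shape([67])
    return x, y

dataset = dataset.map(set_shapes).batch(32)

Alternatively, passing output_shapes=((240, 240, 3), (67,)) to from_generator declares the shapes up front and avoids the extra map step.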

Difference between training function and learning function in MATLAB neural network

。_饼干妹妹 submitted on 2020-06-27 04:15:13
问题 Question: I am new to the deep learning toolbox in MATLAB and I am very confused about the difference between the training function of a network and the corresponding learning function of the network's parameters. For example, if I create a feedforward network with 7 hidden neurons and the gradient descent training function: >> net = feedforwardnet(7,'traingd'); Then we look up the learning function of, for example, the input weights: >> net.inputWeights{1}.learnFcn ans = 'learngdm' We find it is gradient descent with

Keras: stacking multiple LSTM layers with

谁都会走 submitted on 2020-06-26 07:24:09
问题 Question: I have the following network, which works fine: output = LSTM(8)(output) output = Dense(2)(output) Now, for the same model, I am trying to stack a few LSTM layers like below: output = LSTM(8)(output, return_sequences=True) output = LSTM(8)(output) output = Dense(2)(output) But I get the following error: TypeError Traceback (most recent call last) <ipython-input-2-0d0ced2c7417> in <module>() 39 40 output = Concatenate(axis=2)([leftOutput,rightOutput]) ---> 41 output = LSTM(8)(output, return
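The TypeError arises because return_sequences is a constructor argument of LSTM, not an argument of the layer call. A minimal sketch of the stacked version with the keyword moved into the constructor (the input shape is hypothetical):

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(10, 4))                     # (timesteps, features), hypothetical
output = LSTM(8, return_sequences=True)(inputs)   # emit the full sequence for the next LSTM
output = LSTM(8)(output)                          # last layer returns only the final state
output = Dense(2)(output)
model = Model(inputs, output)
model.summary()

Every LSTM except the last one in the stack needs return_sequences=True, since the following LSTM expects a 3D (batch, timesteps, features) input.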

How to set parameters in keras to be non-trainable?

丶灬走出姿态 submitted on 2020-06-25 09:49:05
问题 Question: I am new to Keras and I am building a model. I want to freeze the weights of the last few layers of the model while training the previous layers. I tried to set the trainable property of the latter sub-model to False, but it doesn't seem to work. Here is the code and the model summary: opt = optimizers.Adam(1e-3) domain_layers = self._build_domain_regressor() domain_layers.trainble = False feature_extrator = self._build_common() img_inputs = Input(shape=(160, 160, 3)) conv_out = feature
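Two things are worth checking in the snippet above: the assignment writes to a misspelled attribute (trainble), which Python silently accepts without freezing anything, and trainable must be set before compile for the change to take effect. A minimal sketch with hypothetical layer sizes:

from tensorflow.keras import layers, models, optimizers

frozen = models.Sequential([layers.Dense(8, input_shape=(16,))])
frozen.trainable = False                  # note the spelling; freezes all its weights

model = models.Sequential([
    layers.Dense(16, activation='relu', input_shape=(4,)),
    frozen,
    layers.Dense(2),
])
# Compile AFTER setting trainable; changes made afterwards are ignored
# until the model is compiled again.
model.compile(optimizer=optimizers.Adam(1e-3), loss='mse')
print(len(model.trainable_weights))       # the frozen sub-model's weights are excluded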

Keras embedding layer masking. Why does input_dim need to be |vocabulary| + 2?

不羁的心 submitted on 2020-06-25 02:38:07
问题 Question: In the Keras docs for Embedding, https://keras.io/layers/embeddings/, the explanation given for mask_zero is: mask_zero: Whether or not the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable length input. If this is True then all subsequent layers in the model need to support masking or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input
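A minimal sketch of the setup the question describes, assuming a vocabulary of 10 real tokens: with mask_zero=True, index 0 is reserved for padding, so real token ids start at 1; the extra index implied by the title is presumably reserved for something like out-of-vocabulary tokens (that interpretation is an assumption here, not stated in the docs):

import numpy as np
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.models import Sequential

vocab_size = 10                       # hypothetical number of real tokens
model = Sequential([
    # Index 0 is the padding index when mask_zero=True, so real ids run
    # from 1 upward and input_dim must cover the largest id + 1.
    Embedding(input_dim=vocab_size + 2, output_dim=8, mask_zero=True),
    LSTM(4),                          # downstream layers must support masking
    Dense(1),
])
padded = np.array([[1, 5, 9, 0, 0]])  # the trailing zeros are masked out
print(model.predict(padded).shape)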

Keras: How to get model predictions (or last-layer output) in a custom generator during training?

瘦欲@ submitted on 2020-06-24 15:38:28
问题 Question: I have made a custom generator in which I need the model's predictions during training, in order to do some calculations on them before the model is trained against the true labels. Therefore, I save the model first and then call model.predict() on the current state. from keras.models import load_model def custom_generator(model): while True: state, target_labels = next(train_it) model.save('my_model.h5') #pause training and do some calculations on the output of the model trained so far print(state) print
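Saving and reloading the model on every batch is expensive; the current model can be queried directly with model.predict inside the generator. A minimal sketch under the question's setup, assuming eager TF2 and single-threaded data loading; train_it, the shapes, and the model here are hypothetical stand-ins:

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

def custom_generator(model, train_it):
    while True:
        state, target_labels = next(train_it)
        # Query the model as trained so far, without saving/reloading it.
        preds = model.predict(state, verbose=0)
        # ... do some calculations on preds here ...
        yield state, target_labels

def make_iter():
    # Hypothetical data iterator standing in for train_it.
    while True:
        yield np.random.rand(4, 8), np.random.rand(4, 1)

model = Sequential([Dense(1, input_shape=(8,))])
model.compile(optimizer='adam', loss='mse')
model.fit(custom_generator(model, make_iter()), steps_per_epoch=5, epochs=1)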

How to implement a Maclaurin series in Keras?

梦想与她 submitted on 2020-06-24 09:16:02
问题 Question: I am trying to implement an expandable CNN using a Maclaurin series. The basic idea is that a single input node can be decomposed into multiple nodes with different orders and coefficients. Decomposing a single node into multiple ones generates the different non-linear connections that a Maclaurin series produces. Can anyone give me a possible idea of how to expand a CNN with a Maclaurin-series non-linear expansion? Any thoughts? I cannot quite understand how to decompose the input node to multiple
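One way to read the idea: expand each input value x into the power terms x, x^2, ..., x^n of a Maclaurin series and let the network learn the series coefficients as the weights of the convolution that follows. A minimal sketch of such an expansion layer (the order and shapes are hypothetical, and this is only one possible interpretation of the decomposition):

import tensorflow as tf
from tensorflow.keras import layers, models

def maclaurin_expand(x, order=3):
    # Stack the powers x^1 .. x^order along the channel axis; the next
    # conv layer then learns the series coefficients as ordinary weights.
    return tf.concat([x ** k for k in range(1, order + 1)], axis=-1)

inputs = layers.Input(shape=(32, 32, 3))            # hypothetical input size
expanded = layers.Lambda(maclaurin_expand)(inputs)  # -> (32, 32, 9)
x = layers.Conv2D(16, 3, activation='relu')(expanded)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10)(x)
model = models.Model(inputs, outputs)
model.summary()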

How to understand loss, acc, val_loss and val_acc in Keras model fitting

夙愿已清 submitted on 2020-06-24 05:02:08
问题 Question: I'm new to Keras and have some questions about how to understand my model's results. Here is my result (for convenience, I only paste the loss, acc, val_loss and val_acc after each epoch): Train on 4160 samples, validate on 1040 samples as below: Epoch 1/20 4160/4160 - loss: 3.3455 - acc: 0.1560 - val_loss: 1.6047 - val_acc: 0.4721 Epoch 2/20 4160/4160 - loss: 1.7639 - acc: 0.4274 - val_loss: 0.7060 - val_acc: 0.8019 Epoch 3/20 4160/4160 - loss: 1.0887 - acc: 0.5978 - val_loss: 0.3707 - val
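These four numbers come from Keras' History object: loss and acc are computed over the 4160 training samples, while val_loss and val_acc are computed over the 1040 held-out validation samples after each epoch. A minimal sketch of how they can be retrieved after fitting (the model and data are hypothetical; note that newer TF2 versions name the keys accuracy/val_accuracy instead of acc/val_acc):

import numpy as np
from tensorflow.keras import layers, models

# Hypothetical model and data, just to produce a History object.
model = models.Sequential([
    layers.Dense(4, activation='relu', input_shape=(8,)),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

x = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100, 1))
history = model.fit(x, y, validation_split=0.2, epochs=3, verbose=0)

# The same per-epoch numbers that fit() prints:
print(history.history['loss'])        # training loss
print(history.history['val_loss'])    # validation loss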

Adam optimizer goes haywire after 200k batches, training loss grows

余生颓废 submitted on 2020-06-23 22:24:20
问题 Question: I've been seeing very strange behavior when training a network: after a couple of hundred thousand iterations (8 to 10 hours) of learning fine, everything breaks and the training loss grows. The training data itself is randomized and spread across many .tfrecord files containing 1000 examples each, then shuffled again in the input stage and batched to 200 examples. The background: I am designing a network that performs four different regression tasks at the same time, e.g. determining the
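A mitigation often suggested for this failure mode, where the loss explodes late in training with Adam, is to raise Adam's epsilon, which bounds the update when the running second-moment estimate of the gradient collapses toward zero on near-perfectly-fit examples, or to clip gradients. A minimal sketch of both knobs in Keras (the values are hypothetical, not tuned):

from tensorflow.keras import optimizers

# A larger epsilon keeps the effective step size bounded when v_t, the
# running estimate of the squared gradient, becomes very small.
opt = optimizers.Adam(learning_rate=1e-4, epsilon=1e-4)

# Alternatively (or additionally), clip the global gradient norm.
opt_clipped = optimizers.Adam(learning_rate=1e-4, clipnorm=1.0)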

AttributeError: module 'tensorflow' has no attribute 'keras' in conda prompt

China☆狼群 submitted on 2020-06-23 14:24:05
问题 Question: I tried to install TensorFlow and Keras. I installed TensorFlow and imported it with no errors, and Keras is installed, but I can't import it. (base) C:\Windows\system32>pip uninstall keras Found existing installation: Keras 2.3.1 Uninstalling Keras-2.3.1: Would remove: c:\users\asus\anaconda3\anaconda\lib\site-packages\docs\* c:\users\asus\anaconda3\anaconda\lib\site-packages\keras-2.3.1.dist-info\* c:\users\asus\anaconda3\anaconda\lib\site-packages\keras\* Proceed (y/n)? y Successfully
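Since tf.keras ships inside TensorFlow itself from version 1.4 onward, this AttributeError usually points to a very old or broken TensorFlow install in the active conda environment rather than to the standalone keras package. A minimal sketch of a quick diagnostic, run in the same environment:

import tensorflow as tf

print(tf.__version__)         # tf.keras requires TensorFlow >= 1.4
print(tf.__file__)            # confirms which environment's install is in use

from tensorflow import keras  # should import cleanly on a healthy install
print(keras.__version__)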