deep-learning

Define custom LSTM with multiple inputs

倖福魔咒の submitted on 2020-03-04 04:38:08

Question: Following the tutorial on writing custom layers, I am trying to implement a custom LSTM layer with multiple input tensors. I provide two vectors input_1 and input_2 as a list [input_1, input_2], as suggested in the tutorial. The single-input code works, but when I change the code for multiple inputs it throws the error: self.kernel = self.add_weight(shape=(input_shape[0][-1], self.units), TypeError: 'NoneType' object is not subscriptable. What change do I have to make to get rid of the …
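When a Keras layer is called with a list of tensors, its build() method receives a list of shapes rather than a single shape, so each element must be indexed before taking its last dimension. A minimal sketch of that pattern, using a toy dense layer with hypothetical names rather than a full LSTM:

```python
import tensorflow as tf
from tensorflow import keras


class MultiInputDense(keras.layers.Layer):
    """Toy custom layer accepting a list of two input tensors
    (a hypothetical stand-in for the custom LSTM in the question)."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # With list inputs, `input_shape` is a list of shapes: unpack it
        # first, then take the last dimension of each input's shape.
        shape_1, shape_2 = input_shape
        self.kernel_1 = self.add_weight(
            shape=(int(shape_1[-1]), self.units), initializer="glorot_uniform")
        self.kernel_2 = self.add_weight(
            shape=(int(shape_2[-1]), self.units), initializer="glorot_uniform")

    def call(self, inputs):
        x1, x2 = inputs
        return tf.matmul(x1, self.kernel_1) + tf.matmul(x2, self.kernel_2)


# Calling with a list of two tensors triggers build() with a list of shapes.
out = MultiInputDense(4)([tf.zeros((2, 3)), tf.zeros((2, 5))])
```

The same unpack-the-list step applies inside a custom LSTM's build(); the error in the question typically comes from indexing the combined shape object as if it were a single tensor's shape.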

TypeError: Unexpected keyword argument passed to optimizer: learning_rate

浪子不回头ぞ submitted on 2020-03-01 01:58:52

Question: I am trying to load a Keras model that was trained on an Azure VM (NC promo), but I am getting the following error: TypeError: Unexpected keyword argument passed to optimizer: learning_rate. EDIT: Here is the code snippet that I am using to load my model: from keras.models import load_model model = load_model('my_model_name.h5') Answer 1: Did you use a custom optimizer? If so, you can load it like this: model = load_model('my_model_name.h5', custom_objects={ 'Adam': lambda **kwargs: hvd …
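This error usually comes from a Keras version mismatch (older versions used lr, newer ones learning_rate in the saved optimizer config). A common workaround is to load with compile=False, skipping the saved optimizer, and recompile. A self-contained sketch that first saves a toy model so the load step is reproducible (the file name and architecture are hypothetical):

```python
import os
import tempfile

from tensorflow import keras

# Build and save a tiny stand-in for the model trained on the Azure VM.
model = keras.Sequential([keras.Input(shape=(3,)), keras.layers.Dense(2)])
model.compile(optimizer="adam", loss="mse")
path = os.path.join(tempfile.mkdtemp(), "my_model_name.h5")
model.save(path)

# Workaround: restore architecture and weights only, skipping the saved
# optimizer config that carries the incompatible keyword, then recompile
# with the current Keras version's optimizer.
reloaded = keras.models.load_model(path, compile=False)
reloaded.compile(optimizer="adam", loss="mse")
```

Recompiling discards the saved optimizer state, which matters only if you intend to resume training rather than just run inference.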

How to load a trained MXnet model?

白昼怎懂夜的黑 submitted on 2020-02-28 18:36:28

Question: I have trained a network using MXNet, but am not sure how to save and load the parameters for later use. First I define and train the network: dataIn = mx.sym.var('data') fc1 = mx.symbol.FullyConnected(data=dataIn, num_hidden=100) act1 = mx.sym.Activation(data=fc1, act_type="relu") fc2 = mx.symbol.FullyConnected(data=act1, num_hidden=50) act2 = mx.sym.Activation(data=fc2, act_type="relu") fc3 = mx.symbol.FullyConnected(data=act2, num_hidden=25) act3 = mx.sym.Activation(data=fc3, act_type= …

False positives in faster-rcnn object detection

自闭症网瘾萝莉.ら submitted on 2020-02-28 06:59:43

Question: I'm training an object detector using TensorFlow and the faster_rcnn_inception_v2_coco model, and I am experiencing a lot of false positives when classifying a video. After some research I've figured out that I need to add negative images to the training process. How do I add these to the TFRecord files? I used the CSV-to-TFRecord code provided in the tutorial here. It also seems that SSD has a hard_example_miner in the config that allows configuring this behaviour, but this doesn't seem to …
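A negative image can be written as a TFRecord example whose object feature lists are simply left empty. A sketch assuming the standard TF Object Detection API feature keys (the helper name and sample bytes below are hypothetical):

```python
import tensorflow as tf


def negative_example(encoded_jpeg, filename, height, width):
    """Build a tf.train.Example for an image with NO objects: the
    bbox and class lists are present but empty (hypothetical helper)."""
    def _bytes(values):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

    def _ints(values):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

    def _floats(values):
        return tf.train.Feature(float_list=tf.train.FloatList(value=values))

    feature = {
        'image/height': _ints([height]),
        'image/width': _ints([width]),
        'image/filename': _bytes([filename]),
        'image/source_id': _bytes([filename]),
        'image/encoded': _bytes([encoded_jpeg]),
        'image/format': _bytes([b'jpeg']),
        # Empty lists mark this image as a pure background (negative) sample.
        'image/object/bbox/xmin': _floats([]),
        'image/object/bbox/xmax': _floats([]),
        'image/object/bbox/ymin': _floats([]),
        'image/object/bbox/ymax': _floats([]),
        'image/object/class/text': _bytes([]),
        'image/object/class/label': _ints([]),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))


ex = negative_example(b'\xff\xd8fake-jpeg-bytes', b'neg_001.jpg', 480, 640)
```

In the CSV-to-TFRecord flow from the tutorial, the equivalent change is to include rows (or files) for the background images while leaving their box columns empty, so the generated examples end up with empty object lists like the ones above.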

Add hand-crafted features to Keras sequential model

半腔热情 submitted on 2020-02-28 06:33:23

Question: I have 1D sequences which I want to use as input to a Keras VGG classification model, split into x_train and x_test. For each sequence I also have custom features, stored in feats_train and feats_test, which I do not want to feed to the convolutional layers but to the first fully connected layer. A complete train or test sample would thus consist of a 1D sequence plus n floating-point features. What is the best way to feed the custom features directly to the fully connected layer? I thought …
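With the Keras functional API, the sequence can flow through the convolutional stack while the hand-crafted features enter through a second input and join at the first dense layer via concatenate. A minimal sketch with hypothetical sizes (not the VGG architecture itself):

```python
from tensorflow import keras
from tensorflow.keras import layers

seq_len, n_feats = 100, 8  # hypothetical sequence length and feature count

seq_in = keras.Input(shape=(seq_len, 1))   # the 1D sequence (x_train-style input)
feat_in = keras.Input(shape=(n_feats,))    # hand-crafted features (feats_train-style)

# Convolutional stack sees only the sequence.
x = layers.Conv1D(16, 3, activation="relu")(seq_in)
x = layers.GlobalMaxPooling1D()(x)

# Features bypass the conv layers and join at the first dense layer.
x = layers.concatenate([x, feat_in])
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = keras.Model(inputs=[seq_in, feat_in], outputs=out)
```

Training then takes both arrays together, e.g. model.fit([x_train, feats_train], y_train, ...), with the lists ordered to match the model's inputs.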

MatConvNet: deep network's output matrix is uniform-valued instead of varying?

扶醉桌前 submitted on 2020-02-27 13:05:24

Question: I'm trying to obtain a density map from a network output of dimension 20x20x1x50, where 20x20 is the output map and 50 is the batch size. The issue is that the value of the output X equals 0.098 across each 20x20 output matrix: instead of a Gaussian-shaped density map, I get a flat, uniform-valued 20x20x1x50 output. The issue is shown in the attached figure. What am I missing here? The Euclidean loss for backpropagation is given as: case {'l2loss'} res=(c-X); n=1; if isempty(dzdy) %forward Y = …
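For reference, the forward and backward passes of that Euclidean (L2) loss can be sketched in NumPy, matching the residual res = c - X in the snippet and assuming the usual forms Y = sum((c-X)^2)/(2n) forward and dY/dX = -(c-X)/n backward (the snippet is truncated, so these completions are an assumption):

```python
import numpy as np


def l2loss_forward(X, c, n=1):
    """Forward pass: Y = sum((c - X)^2) / (2n), with res = c - X as in the snippet."""
    res = c - X
    return np.sum(res ** 2) / (2 * n)


def l2loss_backward(X, c, n=1):
    """Backward pass: dY/dX = -(c - X) / n."""
    return -(c - X) / n


X = np.zeros((2, 2))          # toy network output
c = np.ones((2, 2))           # toy ground-truth density map
loss = l2loss_forward(X, c)   # 4 * 1^2 / 2 = 2.0
grad = l2loss_backward(X, c)  # -1 everywhere
```

If the gradient here is correct, a uniform 0.098 output map points at the layers feeding the loss (initialization, learning rate, or a saturated activation) rather than at the loss itself.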

expected input to have 4 dimensions, but got array with shape

允我心安 submitted on 2020-02-27 07:11:56

Question: I get this error: Error when checking input: expected input_13 to have 4 dimensions, but got array with shape (7, 100, 100). For the following code, how should I reshape the array to fit the expected 4 dimensions? I searched for it but didn't understand the previous solutions. Please ask if anything is unclear; it's a very common issue in convolutional neural networks. inputs=Input(shape=(100,100,1)) x=Conv2D(16,(3,3), padding='same')(inputs) x=Activation('relu')(x) x=Conv2D(8,(3,3))(x) x=Activation('relu')(x) x …
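Input(shape=(100,100,1)) makes Conv2D expect batches of shape (batch, 100, 100, 1), so the (7, 100, 100) array just needs an explicit channels axis. A NumPy sketch (the random data stands in for the real images):

```python
import numpy as np

x = np.random.rand(7, 100, 100)   # 7 grayscale images, missing the channels axis
x = x.reshape(-1, 100, 100, 1)    # -> (7, 100, 100, 1), as Conv2D expects
# equivalently: x = np.expand_dims(x, axis=-1)
```

The same reshape must be applied to every array fed to the model (train, validation, and test), since they all go through the same Input layer.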