deep-learning

How to detect an object in real time and track it automatically, instead of the user having to draw a bounding box around the object to be tracked?

Submitted by 倾然丶 夕夏残阳落幕 on 2021-02-19 07:37:26
Question: I have the following code where the user can press p to pause the video, draw a bounding box around the object to be tracked, and then press Enter (carriage return) to track that object in the video feed:

    import cv2
    import sys

    major_ver, minor_ver, subminor_ver = cv2.__version__.split('.')

    if __name__ == '__main__':
        # Set up tracker.
        tracker_types = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW',
                         'GOTURN', 'MOSSE', 'CSRT']
        tracker_type = tracker_types[1]
        if int(minor_ver) < 3:
            tracker = cv2
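One common pattern is to run an object detector on the first frame and hand its best box to the tracker, replacing the manual ROI selection. The sketch below is framework-free and hedged: `pick_init_box` and the `(x, y, w, h, confidence)` detection format are assumptions standing in for whatever detector is used; only the hand-off logic is shown, and the tracker call appears in the comment exactly where `tracker.init` would go.

```python
# Sketch: replace the hand-drawn ROI with the best detection from the first
# frame. A detection here is assumed to be (x, y, w, h, confidence) from any
# detector (e.g. an OpenCV DNN person detector); the tracker would then be
# initialised with tracker.init(frame, (x, y, w, h)) as with a manual box.

def pick_init_box(detections, min_confidence=0.5):
    """Return (x, y, w, h) of the most confident detection, or None."""
    candidates = [d for d in detections if d[4] >= min_confidence]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d[4])
    return best[:4]

# Hypothetical detections for one frame:
detections = [(10, 20, 50, 80, 0.3), (40, 60, 30, 30, 0.9)]
print(pick_init_box(detections))  # (40, 60, 30, 30)
```

A typical loop would re-run the detector whenever the tracker reports failure, so the box stays fresh without user input.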

Custom layer with two parameters function on Core ML

Submitted by 倖福魔咒の on 2021-02-19 07:33:09
Question: Thanks to this great article (http://machinethink.net/blog/coreml-custom-layers/), I understood how to write a converter using coremltools and a Keras custom layer with Lambda. But I cannot work out how to handle a function with two parameters:

    def scaling(x, scale):
        return x * scale

The Keras layer is here:

    up = conv2d_bn(mixed, K.int_shape(x)[channel_axis], 1,
                   activation=None, use_bias=True,
                   name=name_fmt('Conv2d_1x1'))
    x = Lambda(scaling,  # HERE !!
               output_shape=K.int_shape(up)
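The extra parameter has to be bound before the layer sees the function: Keras's Lambda accepts an `arguments` dict for this (e.g. `Lambda(scaling, arguments={'scale': 0.17})`), and the same binding can be done generically with `functools.partial`. A minimal framework-free sketch of the binding idea (the 0.17 value is illustrative, not from the question):

```python
from functools import partial

def scaling(x, scale):
    return x * scale

# Bind the second parameter so the result is a one-argument function,
# which is what a Lambda layer (or a Core ML custom layer) can wrap.
scale_by_two = partial(scaling, scale=2)
print(scale_by_two(3))  # 6
```

With the parameter bound this way, the converter only ever sees a single-input function, which sidesteps the two-parameter problem entirely.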

How many epochs does it take to train VGG-16

Submitted by 。_饼干妹妹 on 2021-02-19 06:38:04
Question: I'm training a VGG-16 model from scratch on a dataset containing 3k images. I use the Tensorflow platform and 8 CPUs without any GPU. Learning rate: 0.01, weight decay: 0.0005, momentum: 0.9, batch size: 64. I've kept training for about three days, but the training accuracy has been unchanged, around 15%-20%, after 20 epochs. Could anyone give me some hints to improve the accuracy?

Answer 1: It seems I used too large a learning rate, or weight decay does not work as it promises. After I
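The stuck-accuracy symptom is consistent with the answer's diagnosis of a divergent learning rate. A toy illustration (plain Python, deliberately not VGG) of why too large a step size keeps the loss from ever decreasing:

```python
def gradient_descent(lr, steps=50, w0=1.0):
    """Minimise f(w) = w**2 (gradient 2*w) from w0 with a fixed step size."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w  # each step multiplies w by (1 - 2*lr)
    return w

small = abs(gradient_descent(lr=0.1))   # shrinks towards the minimum at 0
large = abs(gradient_descent(lr=1.1))   # overshoots and blows up
print(small < 1e-3, large > 1e3)  # True True
```

With momentum 0.9 the effective step is even larger, so dropping the rate to 1e-3 or 1e-4 (and warming up) is a common first fix; whether that alone unsticks this particular run is an assumption to verify.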

Keras: test, cross validation and accuracy while processing batched data with train_on_batch

Submitted by 假装没事ソ on 2021-02-19 05:40:07
Question: Can someone point me to a complete example that does all of the following?

- Fits batched (and pickled) data in a loop using train_on_batch()
- Sets aside data from each batch for validation purposes
- Sets aside test data for accuracy evaluation after all batches have been processed (see the last line of my example below)

I'm finding lots of 1-5 line code snippets on the internet illustrating how to call train_on_batch() or fit_generator(), but so far nothing that clearly illustrates how to
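The three requirements above are really data bookkeeping around the training call. The sketch below shows only that bookkeeping; `StubModel` is a placeholder with the same `train_on_batch`/`evaluate` signatures as a Keras model, and the batch contents are made up for the example.

```python
# Stub standing in for a compiled Keras model (signatures only).
class StubModel:
    def train_on_batch(self, x, y):
        return 0.0                      # a real model returns the batch loss
    def evaluate(self, x, y):
        return 0.0, 1.0                 # (loss, accuracy), as in Keras

def split_batch(batch, val_fraction=0.1):
    """Hold out the tail of each batch for validation."""
    n_val = max(1, int(len(batch) * val_fraction))
    return batch[:-n_val], batch[-n_val:]

model = StubModel()
# Five hypothetical pickled batches of (sample, label) pairs:
batches = [[(i, i % 2) for i in range(b * 10, b * 10 + 10)] for b in range(5)]
test_data = batches.pop()               # reserve the last batch for final testing

val_pool = []
for batch in batches:
    train_part, val_part = split_batch(batch)
    xs, ys = zip(*train_part)
    model.train_on_batch(list(xs), list(ys))
    val_pool.extend(val_part)           # accumulate validation data across batches

xs, ys = zip(*test_data)
loss, acc = model.evaluate(list(xs), list(ys))
print(len(val_pool), acc)               # 4 1.0
```

After each batch (or epoch) the accumulated `val_pool` can be passed to `model.evaluate` for a running validation score, while `test_data` is touched exactly once at the end.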

Cleanest way to combine reduce and map in Python

Submitted by 痞子三分冷 on 2021-02-19 03:44:01
Question: I'm doing a little deep learning, and I want to grab the values of all hidden layers. So I end up writing functions like this:

    def forward_pass(x, ws, bs):
        activations = []
        u = x
        for w, b in zip(ws, bs):
            u = np.maximum(0, u.dot(w) + b)
            activations.append(u)
        return activations

If I didn't have to get the intermediate values, I'd use the much less verbose form:

    out = reduce(lambda u, (w, b): np.maximum(0, u.dot(w) + b), zip(ws, bs), x)

Bam. All one line, nice and compact. But I can't keep any of
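Since the goal is every intermediate value rather than just the last one, `itertools.accumulate` (the scan counterpart of `reduce`) fits: it yields each partial result. One sketch (note the `(w, b)` tuple-parameter lambda in the question is Python 2 only, so the pair is indexed instead; `initial=` needs Python 3.8+):

```python
from itertools import accumulate
import numpy as np

def forward_pass(x, ws, bs):
    # accumulate() is reduce() that yields every intermediate result;
    # drop the first element, which is the initial input x itself.
    layer = lambda u, wb: np.maximum(0, u.dot(wb[0]) + wb[1])
    return list(accumulate(zip(ws, bs), layer, initial=x))[1:]

rng = np.random.default_rng(0)
ws = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
bs = [np.zeros(4), np.zeros(2)]
acts = forward_pass(rng.standard_normal(3), ws, bs)
print([a.shape for a in acts])  # [(4,), (2,)]
```

Still one expression in the body, and all hidden-layer activations are kept.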

Pytorch Batchnorm layer different from Keras Batchnorm

Submitted by 核能气质少年 on 2021-02-19 03:38:08
Question: I'm trying to copy pre-trained BN weights from a PyTorch model to its equivalent Keras model, but I keep getting different outputs. I read the Keras and PyTorch BN documentation, and I think the difference lies in the way they calculate the mean and variance. PyTorch: "The mean and standard-deviation are calculated per-dimension over the mini-batches" (source: PyTorch BatchNorm). Thus, they average over samples. Keras: "axis: Integer, the axis that should be normalized (typically the features axis)."
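At inference time both frameworks apply the same affine transform, y = gamma * (x - running_mean) / sqrt(running_var + eps) + beta, so copied weights usually diverge for other reasons: the epsilon defaults differ (PyTorch's BatchNorm uses 1e-5, Keras's BatchNormalization 1e-3), and Keras `set_weights` expects the order [gamma, beta, moving_mean, moving_variance]. A NumPy sketch of the shared formula and the epsilon effect (worth double-checking against the exact layer versions in use):

```python
import numpy as np

def bn_inference(x, gamma, beta, mean, var, eps):
    # The inference-time formula shared by torch.nn.BatchNorm2d and
    # keras BatchNormalization; training-time statistics differ only in
    # how mean/var were accumulated, not in this transform.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([0.5, -1.0, 2.0])
gamma, beta = np.ones(3), np.zeros(3)
mean, var = np.zeros(3), np.ones(3)

torch_like = bn_inference(x, gamma, beta, mean, var, eps=1e-5)
keras_like = bn_inference(x, gamma, beta, mean, var, eps=1e-3)
print(np.abs(torch_like - keras_like).max() > 0)  # True: eps alone shifts outputs
```

Setting the Keras layer's `epsilon` to match PyTorch's before copying the four arrays removes that particular source of mismatch.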

Adapting Tensorflow RNN Seq2Seq model code for Tensorflow 2.0

Submitted by £可爱£侵袭症+ on 2021-02-19 03:00:15
Question: I am very new to Tensorflow and have been messing around with a simple chatbot-building project from this link. There were many warnings saying that things would be deprecated in Tensorflow 2.0 and that I should upgrade, so I did. I then used the automatic Tensorflow code upgrader to update all the necessary files to 2.0. There were a few errors with this. When processing the model.py file, it returned these warnings:

    133:20: WARNING: tf.nn.sampled_softmax_loss requires manual check

How to convert logits to probabilities in binary classification in TensorFlow?

Submitted by 余生长醉 on 2021-02-18 10:59:10
Question:

    logits = tf.matmul(inputs, weight) + bias

After the matmul operation, the logits are two values derived from the MLP layer. My target is binary classification; how do I convert the two values (logits) into probabilities, with a positive prob and a negative prob that sum to 1?

Answer 1:

    predictions = tf.nn.softmax(logits)

Answer 2: I am writing this answer for anyone who needs further clarification: If it is a binary classification, it should be:

    prediction = tf.round(tf.nn.sigmoid(logit))

If
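The two answers are consistent: with a two-logit head, softmax gives the probability pair directly, and softmax over (z0, z1) equals a sigmoid applied to the difference z1 - z0 (the single-logit formulation). A NumPy check of both routes:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([0.3, 1.5])        # illustrative two-logit binary head
probs = softmax(logits)
print(np.isclose(probs.sum(), 1.0))                          # True
print(np.isclose(probs[1], sigmoid(logits[1] - logits[0])))  # True
```

Note that `tf.round` in the second answer yields hard 0/1 class labels, not probabilities; to keep the probability, use the sigmoid (or softmax) output without rounding.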

Keras network can never classify the last class

Submitted by 核能气质少年 on 2021-02-18 10:50:29
Question: I have been working on my project Deep Learning Language Detection, which is a network with these layers to recognise 16 programming languages. And this is the code to produce the network:

    # Setting up the model
    graph_in = Input(shape=(sequence_length, number_of_quantised_characters))
    convs = []
    for i in range(0, len(filter_sizes)):
        conv = Conv1D(filters=num_filters,
                      kernel_size=filter_sizes[i],
                      padding='valid',
                      activation='relu',
                      strides=1)(graph_in)
        pool = MaxPooling1D(pool_size=pooling