deep-learning

How the Earth Mover Loss method works in Keras, and the data types of its input arguments

六月ゝ 毕业季﹏ submitted on 2020-06-14 23:19:06
Question: I have found code for the Earth Mover Loss in Keras/TensorFlow. I want to compute the loss for the scores given to images, but I cannot do that until I understand how the Earth Mover Loss below works. Can someone please describe what is happening in the code? The last (output) layer of the model is: out = Dense(10, activation='softmax')(x). What should the input types for this method be? My y_labels are values like 1.2, 4.9, etc. I want to use it with Keras
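A minimal sketch of the kind of earth mover (EMD) loss commonly paired with a 10-way softmax score distribution (as in the NIMA-style aesthetic scoring setups); the function and variable names here are illustrative, not taken from the question's code:

    import tensorflow as tf

    def earth_mover_loss(y_true, y_pred):
        # Both tensors have shape (batch, 10): a probability distribution over score buckets 1..10.
        cdf_true = tf.cumsum(y_true, axis=-1)   # cumulative distribution of the target scores
        cdf_pred = tf.cumsum(y_pred, axis=-1)   # cumulative distribution of the softmax output
        # For 1-D distributions, the earth mover distance is the distance between the two CDFs.
        emd = tf.sqrt(tf.reduce_mean(tf.square(cdf_true - cdf_pred), axis=-1))
        return tf.reduce_mean(emd)

Under this reading, scalar labels like 1.2 or 4.9 would first have to be converted into a 10-bin distribution (for example a one-hot or histogram over the score buckets), because the loss compares distributions, not single numbers.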

Tensorflow Removing JFIF

拟墨画扇 submitted on 2020-06-13 00:11:14
Question: I am quite new to TensorFlow and would like to clearly understand what the code below does:
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers
    import os

    num_skipped = 0
    for folder_name in ("Cat", "Dog"):
        print("folder_name:", folder_name)    # folder_name: Cat
        folder_path = os.path.join("Dataset/PetImages", folder_name)
        print("folder_path:", folder_path)    # folder_path: Dataset/PetImages/Cat
        for fname in os.listdir(folder_path):
            print("fname:", fname)            # fname: 5961
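For context, this loop appears to come from the Keras "image classification from scratch" example, which filters out corrupted JPEGs before training. A condensed, hedged sketch of what the full loop does, assuming the same directory layout as in the question:

    import os

    num_skipped = 0
    for folder_name in ("Cat", "Dog"):
        folder_path = os.path.join("Dataset/PetImages", folder_name)
        for fname in os.listdir(folder_path):
            fpath = os.path.join(folder_path, fname)
            with open(fpath, "rb") as fobj:
                # Valid JPEGs normally carry the b"JFIF" marker near the start of the file;
                # files without it are treated as corrupted.
                is_jfif = b"JFIF" in fobj.peek(10)
            if not is_jfif:
                num_skipped += 1
                os.remove(fpath)  # drop the corrupted image so it never reaches the data pipeline
    print("Deleted %d images" % num_skipped)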

Keras Realtime Augmentation adding Noise and Contrast

帅比萌擦擦* submitted on 2020-06-12 07:10:02
Question: Keras provides an ImageDataGenerator class for real-time augmentation, but it does not include contrast adjustment or the addition of noise. How can we apply a random level of noise and a random contrast adjustment during training? Could these functions be added via the 'preprocessing_function' parameter of the datagen? Thank you. Answer 1: From the Keras docs: preprocessing_function: function that will be applied on each input. The function will run before any other modification on it. The function
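A hedged sketch of how such a function could be plugged into ImageDataGenerator; the noise level and contrast range below are arbitrary, illustrative values:

    import numpy as np
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    def add_noise_and_contrast(img):
        # img is a single image as a rank-3 NumPy array; the function must return the same shape.
        contrast = np.random.uniform(0.7, 1.3)            # random contrast factor (illustrative range)
        img = (img - img.mean()) * contrast + img.mean()  # scale deviations around the mean
        noise = np.random.normal(0.0, 5.0, img.shape)     # Gaussian pixel noise (illustrative sigma)
        return np.clip(img + noise, 0.0, 255.0)

    datagen = ImageDataGenerator(preprocessing_function=add_noise_and_contrast)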

How do you convert a .onnx to tflite?

限于喜欢 submitted on 2020-06-10 12:32:31
Question: I've exported my model to ONNX via:
    # Export the model
    torch_out = torch.onnx._export(learn.model,                 # model being run
                                   x,                           # model input (or a tuple for multiple inputs)
                                   EXPORT_PATH + "mnist.onnx",  # where to save the model (can be a file or file-like object)
                                   export_params=True)          # store the trained parameter weights inside the model file
Now I am trying to convert the model to a TensorFlow Lite file so that I can do inference on Android. Unfortunately, PyTorch/Caffe2 support is fairly
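One commonly suggested route goes ONNX -> TensorFlow SavedModel -> TFLite via the onnx-tf package. Treat the following as a sketch rather than a guaranteed recipe, since the exact behaviour depends on the installed onnx, onnx-tf, and TensorFlow versions:

    import onnx
    import tensorflow as tf
    from onnx_tf.backend import prepare

    onnx_model = onnx.load("mnist.onnx")        # the file written by torch.onnx._export above
    tf_rep = prepare(onnx_model)                # convert the ONNX graph to a TensorFlow representation
    tf_rep.export_graph("mnist_saved_model")    # write it out as a SavedModel directory

    converter = tf.lite.TFLiteConverter.from_saved_model("mnist_saved_model")
    tflite_model = converter.convert()
    with open("mnist.tflite", "wb") as f:
        f.write(tflite_model)

Note that older onnx-tf releases wrote a frozen graph (.pb) rather than a SavedModel, in which case a TF 1.x frozen-graph converter would be the matching entry point instead.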

How to get probability of prediction per entity from Spacy NER model?

て烟熏妆下的殇ゞ submitted on 2020-06-10 07:14:11
Question: I used this official example code to train an NER model from scratch using my own training samples. When I run predictions with this model on new text, I want to get the prediction probability for each entity.
    # test the saved model
    print("Loading from", output_dir)
    nlp2 = spacy.load(output_dir)
    for text, _ in TRAIN_DATA:
        doc = nlp2(text)
        print("Entities", [(ent.text, ent.label_) for ent in doc.ents])
        print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc])
I am unable to find a method in
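spaCy does not expose per-entity probabilities directly. A frequently cited workaround for spaCy 2.x (the version the linked example targets) is to run the NER step with beam search and sum the beam scores per entity span; the following is a hedged sketch using the names (output_dir, TRAIN_DATA) from the question:

    from collections import defaultdict
    import spacy

    nlp2 = spacy.load(output_dir)
    ner = nlp2.get_pipe("ner")

    for text, _ in TRAIN_DATA:
        with nlp2.disable_pipes("ner"):
            doc = nlp2(text)           # run everything except NER first
        beams = ner.beam_parse([doc], beam_width=16, beam_density=0.0001)
        entity_scores = defaultdict(float)
        for beam in beams:
            for score, ents in ner.moves.get_beam_parses(beam):
                for start, end, label in ents:
                    entity_scores[(start, end, label)] += score
        print(text, dict(entity_scores))   # keys are (token start, token end, label)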

Tensorflow weight initialization

烈酒焚心 submitted on 2020-06-09 08:29:05
Question: Regarding the MNIST tutorial on the TensorFlow website, I ran an experiment (gist) to see what effect different weight initializations would have on learning. I noticed that, contrary to what I read in the popular Xavier initialization paper [Glorot & Bengio 2010], learning is just fine regardless of weight initialization. The different curves represent different values of w used to initialize the weights of the convolutional and fully connected layers. Note that all values of w work fine, even though 0.3 and 1.0
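For reference, "different values of w" in this kind of experiment typically means using w as the standard deviation of a truncated-normal initializer, along the lines of this hedged sketch (the shape is illustrative, matching the TF 1.x MNIST tutorial style):

    import tensorflow.compat.v1 as tf

    def weight_variable(shape, w):
        # w is the standard deviation being swept in the experiment, e.g. 0.01, 0.1, 0.3, 1.0
        return tf.Variable(tf.truncated_normal(shape, stddev=w))

    W_conv1 = weight_variable([5, 5, 1, 32], w=0.1)  # first conv layer weights, MNIST-tutorial shapes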

Keras uses GPU for first 2 epochs, then stops using it

久未见 submitted on 2020-06-09 05:25:05
Question: I prepare the dataset and save it as an HDF5 file. I have a custom data generator that subclasses Sequence from Keras and generates batches from the HDF5 file. Now, when I call model.fit_generator with the training generator, the model uses the GPU and trains fast for the first 2 epochs (GPU memory is full and volatile GPU utilization fluctuates nicely around 50%). However, after the 3rd epoch, volatile GPU utilization is 0% and each epoch takes 20x as long. What's going on here? Answer 1: Can you try configuring GPU
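The answer excerpt cuts off at "configuring GPU"; one common first step it may be pointing at is enabling memory growth so TensorFlow allocates GPU memory on demand rather than reserving it all up front. A hedged TF 2.x sketch of that configuration:

    import tensorflow as tf

    gpus = tf.config.experimental.list_physical_devices("GPU")
    for gpu in gpus:
        # Allocate GPU memory as needed instead of grabbing it all at start-up.
        tf.config.experimental.set_memory_growth(gpu, True)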

How do you write a custom activation function in python for Keras?

天涯浪子 submitted on 2020-06-01 07:32:26
Question: I'm trying to write a custom activation function for use with Keras. I cannot write it with TensorFlow primitives in a way that properly computes the derivative. I followed How to make a custom activation function with only Python in Tensorflow? and it works very well for creating a TensorFlow function. However, when I tried putting it into Keras as an activation function for the classic MNIST demo, I got errors. I also tried the tf_spiky function from the above reference. Here is the sample code
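When the activation can be expressed with differentiable TensorFlow ops, autodiff supplies the gradient and Keras accepts the callable directly; the following is a hedged sketch (the activation itself is made up purely for illustration, not the spiky function from the linked question):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def soft_spiky(x):
        # Illustrative activation built only from differentiable TF ops,
        # so Keras can backpropagate through it without a custom gradient.
        return tf.where(x > 0.0, x + 0.1 * tf.sin(x), 0.1 * tf.sin(x))

    model = models.Sequential([
        layers.Dense(128, activation=soft_spiky, input_shape=(784,)),  # pass the callable directly
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")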

What are the connections between two stacked LSTM layers?

回眸只為那壹抹淺笑 submitted on 2020-06-01 05:12:17
Question: This question is like What's the input of each LSTM layer in a stacked LSTM network?, but goes more into implementation details. For simplicity, take a structure of 4 units followed by 2 units, like the following:
    model.add(LSTM(4, input_shape=input_shape, return_sequences=True))
    model.add(LSTM(2, input_shape=input_shape))
I know the output of LSTM_1 has length 4, but how do the next 2 units handle these 4 inputs? Are they fully connected to the next layer of nodes? I guess they are fully connected
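One way to see the wiring concretely is to build the two-layer model and count parameters: the second LSTM receives the full sequence of 4-dimensional outputs from the first, and each of its 2 units is densely connected to all 4 of those features plus its own 2 recurrent units. A hedged sketch, with an illustrative input_shape:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM

    input_shape = (10, 8)   # illustrative: 10 timesteps, 8 features
    model = Sequential([
        LSTM(4, input_shape=input_shape, return_sequences=True),  # emits a sequence of 4-dim vectors
        LSTM(2),                                                   # input shape is inferred; no need to pass it again
    ])
    model.summary()
    # Second layer parameters: 4 gates * 2 units * (4 inputs + 2 recurrent + 1 bias) = 56,
    # i.e. every unit is fully connected to all 4 features coming from the layer below.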