conv-neural-network

Keras TimeDistributed Conv1D Error

Submitted by 旧街凉风 on 2019-12-13 12:35:28
Question: This is my code:

```python
cnn_input = Input(shape=(cnn_max_length,))
emb_output = Embedding(num_chars + 1, output_dim=32, input_length=cnn_max_length, trainable=True)(cnn_input)
output = TimeDistributed(Convolution1D(filters=128, kernel_size=4, activation='relu'))(emb_output)
```

I want to train a character-level CNN sequence labeler, and I keep receiving this error:

```
Traceback (most recent call last):
  File "word_lstm_char_cnn.py", line 24, in <module>
    output = kl.TimeDistributed(kl.Convolution1D(filters
```
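A likely cause, assuming the shapes above: the Embedding output is 3-D, `(batch, cnn_max_length, 32)`, so `TimeDistributed` hands `Conv1D` one 1-D slice per character, which it cannot convolve. `Conv1D` applied directly to the embedding output already slides over the character axis. This numpy sketch (all sizes hypothetical) shows the output shape a plain `'valid'` 1-D convolution would produce in that case:

```python
import numpy as np

# Hypothetical sizes standing in for the question's variables.
cnn_max_length, emb_dim, filters, kernel_size = 20, 32, 128, 4

emb_output = np.random.randn(cnn_max_length, emb_dim)      # one embedded sequence
weights = np.random.randn(kernel_size, emb_dim, filters)   # Conv1D-style kernel

# 'valid' 1D convolution: slide the kernel along the character axis.
steps = cnn_max_length - kernel_size + 1
conv_out = np.stack([
    np.tensordot(emb_output[t:t + kernel_size], weights, axes=([0, 1], [0, 1]))
    for t in range(steps)
])

print(conv_out.shape)  # (17, 128): Conv1D alone already convolves over characters
```

So dropping `TimeDistributed` (or, for a word-by-word char CNN, giving the input an extra word axis so each timestep is itself a 2-D chars-by-embedding slice) is the usual way out.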

How to generate new images from new features using deep learning [closed]

Submitted by 倖福魔咒の on 2019-12-13 11:06:24
Question: Closed. This question needs to be more focused and is not currently accepting answers. Closed 8 months ago. If I have a dataset consisting of a list of images, each associated with a series of features, is there a model that, once trained, generates new images when given a new list of features? Answer 1: I think you are looking for GANs (Generative Adversarial Networks), which were proposed in

Sensitivity and specificity changes using a single threshold and a gradient of thresholds at 0.5 using pROC in R

Submitted by 旧时模样 on 2019-12-13 09:41:22
Question: I am trying to calculate ROC curves for a multi-class image model. Since I didn't find a good method for multi-class classification, I converted the problem to binary classification. I have 31 image classes, and using the binary (one-vs-rest) method I am trying to find the ROC of each of the 31 classes individually.

```r
df <- read.xlsx("data.xlsx", sheetName = 1, header = F)
dn <- as.vector(df$X1)  # 31 classes
model_info <- read.csv("all_new.csv", stringsAsFactors = F)  # details of model output (actual labels, model labels, probability values
```
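The question uses pROC in R, but the one-vs-rest AUC it is after can be checked in any language via the rank-sum (Mann-Whitney) identity. A minimal Python sketch with toy data (no tie handling, labels and scores invented for illustration):

```python
import numpy as np

def auc_ovr(labels, scores, positive_class):
    """AUC for one class vs. the rest, via the rank-sum (Mann-Whitney) identity."""
    y = (np.asarray(labels) == positive_class)
    ranks = np.argsort(np.argsort(scores)) + 1  # 1-based ranks; assumes no ties
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy data: 3 classes; score = probability the model assigns to class 0.
labels = [0, 0, 1, 2, 0, 1]
scores = [0.9, 0.8, 0.3, 0.2, 0.7, 0.4]
print(auc_ovr(labels, scores, 0))  # perfect separation -> 1.0
```

Running this once per class (here, once per each of the 31 classes, scoring each class's own predicted probability) reproduces the one-vs-rest loop the question describes.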

Tensorflow for Poets Inception v3 image size

Submitted by 早过忘川 on 2019-12-13 08:29:39
Question: I am training on my own image set using Tensorflow for Poets as an example: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/ What size do the images need to be? I have read that the script automatically resizes the images for you, but what size does it resize them to? Can you pre-resize your images to that size to save disk space (10,000 1 MB images)? How does it crop the images: does it chop off part of your image, add white/black bars, or change the aspect ratio? Also,
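Inception v3's input is 299x299, and as far as I can tell the retrain script simply stretches each image to that square (changing the aspect ratio) rather than cropping or adding bars. A minimal nearest-neighbour sketch of that kind of stretch-resize, so the effect on shape is concrete:

```python
import numpy as np

def resize_nearest(img, out_h=299, out_w=299):
    """Nearest-neighbour resize that stretches to the target size (changing
    the aspect ratio) -- no cropping, no letterbox bars."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(resize_nearest(img).shape)  # (299, 299, 3)
```

Pre-resizing to 299x299 on disk should therefore be safe for this pipeline, since no information beyond that resolution is used.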

How to compute the sum of the values of elements in a vector using cblas functions?

Submitted by 巧了我就是萌 on 2019-12-13 07:19:42
Question: I need to sum all the elements of a matrix in Caffe. But as I noticed, the Caffe wrapper of the cblas functions ('math_functions.hpp' & 'math_functions.cpp') uses the cblas_sasum function as caffe_cpu_asum, which computes the sum of the absolute values of the elements of a vector. Since I'm a newbie in cblas, I tried to find a suitable function without the absolute value, but it seems there is no function with that property in cblas. Any suggestions? Answer 1: There is a way to do so using cblas
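The usual trick, and likely where the truncated answer is heading: a dot product with a vector of ones gives the plain signed sum, i.e. use cblas_sdot (caffe_cpu_dot in Caffe's wrapper) instead of cblas_sasum. A numpy sketch of the same identity:

```python
import numpy as np

x = np.array([1.5, -2.0, 3.0, -0.5], dtype=np.float32)

# cblas_sasum would give sum(|x_i|) -- not what we want here.
asum = np.abs(x).sum()

# The dot-product trick: x . (1,1,...,1) gives the plain (signed) sum,
# which is what cblas_sdot / caffe_cpu_dot with a ones vector computes.
ones = np.ones_like(x)
signed_sum = np.dot(x, ones)

print(asum, signed_sum)  # 7.0 2.0
```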

Can two tf.data.Dataset pipelines coexist and be controlled by tf.cond()?

Submitted by 不问归期 on 2019-12-13 06:18:25
Question: I put two Dataset pipelines (train/test = 9:1) in my graph and control the flow with tf.cond. I ran into a problem: during training, both pipelines are activated at each step, and because the test set is smaller it runs out before the training set:

```
OutOfRangeError (see above for traceback): End of sequence
```

First, nest the input pipeline in a function:

```python
def input_pipeline(*args):
    ...  # construct iterator
    it = batch.make_initializable_iterator()
    iter_init_op = it
```
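tf.cond evaluates tensors pulled from both branches, so both iterators advance every step. In TF 1.x the usual fix is a feedable iterator (tf.data.Iterator.from_string_handle), where only the pipeline whose handle is fed that step is consumed. This plain-Python analogy (not TensorFlow, just standard iterators) shows the selection behaviour that pattern buys you:

```python
# Plain-Python analogy: keep one iterator per pipeline and advance only the
# one selected by a "handle", the way a feedable
# tf.data.Iterator.from_string_handle avoids pulling from both datasets.
pipelines = {
    "train": iter(range(100)),   # stand-in for the training Dataset
    "test": iter(range(10)),     # stand-in for the smaller test Dataset
}

def next_batch(handle):
    return next(pipelines[handle])  # only the selected pipeline is consumed

for _ in range(50):          # 50 training steps...
    next_batch("train")
print(next_batch("test"))    # ...and the test pipeline is still at 0
```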

Use different target vectors for CNN

Submitted by 余生颓废 on 2019-12-13 05:10:46
Question: I wish to use different target vectors (not the standard one-hot encoding) for training my CNN. My image data lies in 10 different folders (10 different categories). How do I use my desired target vectors? flow_from_directory() outputs a one-hot encoded array of labels, but I have my label vectors stored in a dictionary. Also, the folder names are the labels, if that helps. Answer 1: Well, as you may know, the ImageDataGenerator in Keras is a Python generator (if you are not familiar with
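One common approach along the lines the answer starts on: wrap the Keras generator in your own generator that swaps each one-hot label for the custom vector from the dictionary. A sketch with a fake stand-in for flow_from_directory() and hypothetical two-class target vectors:

```python
import numpy as np

# Hypothetical custom target vectors, keyed by class index (the folder order
# flow_from_directory would use).
target_vectors = {0: np.array([1.0, 0.5]), 1: np.array([0.0, 2.0])}

def custom_label_generator(keras_gen):
    """Wrap a Keras-style generator and swap its one-hot labels for custom vectors."""
    for x_batch, y_onehot in keras_gen:
        idx = np.argmax(y_onehot, axis=1)           # recover the class index
        yield x_batch, np.stack([target_vectors[i] for i in idx])

# Stand-in for flow_from_directory(): one batch of 2 images, one-hot labels.
fake_gen = iter([(np.zeros((2, 8, 8, 3)), np.array([[1, 0], [0, 1]]))])
x, y = next(custom_label_generator(fake_gen))
print(y)  # each row is now the custom target vector for that image's class
```

The wrapped generator can then be passed to fit_generator in place of the original one.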

Transfer learning with Euclidean loss in the final layer

Submitted by 主宰稳场 on 2019-12-13 04:35:34
Question: I would greatly appreciate it if someone could help me out here. I'm trying to do some transfer learning on a regression task on top of the InceptionV3 architecture: my inputs are 200x200 RGB images and my prediction output/label is a set of real values (say, within [0, 10], though scaling is not a big deal here...?). Here are my functions that take a pretrained Inception model, remove the last layer and add a new layer for transfer learning:

```python
""" Transfer learning functions """
IM_WIDTH,
```
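For a regression head like this, the typical recipe is to replace the softmax layer with a linear Dense layer and compile with mean squared error (loss='mse' in Keras), which is what "Euclidean loss" amounts to. A small numpy check of what that loss computes, with toy values not taken from the question:

```python
import numpy as np

def euclidean_loss(y_true, y_pred):
    """Mean squared error -- the 'Euclidean loss' for a regression head."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

y_true = np.array([2.0, 5.0, 9.0])   # real-valued labels in [0, 10]
y_pred = np.array([2.5, 4.0, 9.0])   # hypothetical model outputs
print(euclidean_loss(y_true, y_pred))  # (0.25 + 1.0 + 0.0) / 3
```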

Must the input height of a 1D CNN be constant?

Submitted by 末鹿安然 on 2019-12-13 03:38:53
Question: I'm currently doing my honours research project on online/dynamic signature verification. I am using the SVC 2004 dataset (Task 2). I have done the following data processing:

```python
def load_dataset_normalized(path):
    file_names = os.listdir(path)
    num_of_persons = len(file_names)
    initial_starting_point = np.zeros(np.shape([7]))
    x_dataset = []
    y_dataset = []
    for infile in file_names:
        full_file_name = os.path.join(path, infile)
        file = open(full_file_name, "r")
        file_lines = file.readlines()
        num_of
```
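To the title question: a 1D CNN followed by Dense layers does need a fixed input height, and the common workaround for variable-length signatures is to zero-pad every sequence to one maximum length (alternatively, a global pooling layer removes the length dependence entirely). A sketch of the padding step, with hypothetical lengths and 7 features per sample point (an assumption based on the code above):

```python
import numpy as np

def pad_signatures(signatures, max_len=None, n_features=7):
    """Zero-pad variable-length signature sequences to one fixed height so a
    1D CNN can consume them as a single fixed-shape batch."""
    max_len = max_len or max(len(s) for s in signatures)
    out = np.zeros((len(signatures), max_len, n_features))
    for i, s in enumerate(signatures):
        out[i, :len(s)] = s
    return out

# Two hypothetical signatures of different lengths, 7 features per point.
sigs = [np.ones((5, 7)), np.ones((3, 7))]
batch = pad_signatures(sigs)
print(batch.shape)  # (2, 5, 7)
```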

Visualising Keras CNN final trained filters at each layer

Submitted by 梦想的初衷 on 2019-12-13 03:19:59
Question: The same question was asked by someone else (visualize learned filters in keras cnn), but it has no answers, so I am asking again. I know that Keras initializes the filters at each layer with default (random) values, which are then modified and adjusted during training. After training, I want to see what these filters (32 or 64 or any number) look like. I know that when a new image is predicted, these filters are applied one by one. But what do these TRAINED filters look like? I went through several blogs and posts whose titles
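In Keras, the trained filters are just the layer's weight arrays: for a Conv2D layer, model.layers[i].get_weights()[0] has shape (kernel_h, kernel_w, in_channels, n_filters). To look at them, the usual step is to min-max normalise each filter into [0, 1] before handing it to matplotlib's imshow. A sketch, using a random array as a stand-in for real trained weights:

```python
import numpy as np

# Stand-in for layer.get_weights()[0] of a trained Conv2D layer:
# shape (kernel_h, kernel_w, in_channels, n_filters).
weights = np.random.randn(3, 3, 3, 32)

def filters_as_images(w):
    """Min-max normalise each filter to [0, 1] so it can be displayed."""
    w = np.moveaxis(w, -1, 0)                       # (n_filters, h, w, in_channels)
    mins = w.min(axis=(1, 2, 3), keepdims=True)
    maxs = w.max(axis=(1, 2, 3), keepdims=True)
    return (w - mins) / (maxs - mins)

imgs = filters_as_images(weights)
print(imgs.shape)  # (32, 3, 3, 3): 32 small RGB images, one per filter
```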