keras

How do TensorFlow's TripletSemiHardLoss and TripletHardLoss work, and how are they used with a Siamese network?

半腔热情 submitted on 2021-02-19 08:37:18
Question: As far as I know, triplet loss is a loss function that decreases the distance between the anchor and the positive while increasing the distance between the anchor and the negative, with a margin added on top. So, for example, suppose a Siamese network gives the embeddings:

    anchor_output = [1, 2, 3, 4, 5, ...]      # embedding given by the CNN model
    positive_output = [1, 2, 3, 4, 4, ...]
    negative_output = [53, 43, 33, 23, 13, ...]

And I think I can get the triplet loss such as: (I think I have to make it a loss using Lambda …
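A minimal sketch of the triplet loss described above (not the poster's code), assuming squared Euclidean distances and a margin hyperparameter. TensorFlow Addons' TripletSemiHardLoss and TripletHardLoss implement the same idea but mine triplets from (embedding, label) batches instead of taking explicit triples:

    import tensorflow as tf

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Squared Euclidean distances between the embeddings.
        pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
        neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
        # Hinge: positives must end up closer than negatives by at least `margin`.
        return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))

    anchor = tf.constant([[1., 2., 3., 4., 5.]])
    positive = tf.constant([[1., 2., 3., 4., 4.]])
    negative = tf.constant([[53., 43., 33., 23., 13.]])
    print(triplet_loss(anchor, positive, negative))   # 0.0 here: the negative is already far away

With the Addons losses, the embedding model is compiled with loss=tfa.losses.TripletSemiHardLoss() (or TripletHardLoss()), outputs the embedding directly, and is fit on integer class labels; the semi-hard or hard negative mining happens inside the loss.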

ValueError: Error when checking input: expected embedding_1_input to have shape (32,) but got array with shape (1,)

China☆狼群 submitted on 2021-02-19 08:32:31
Question: model.fit throws the error ValueError: Error when checking input: expected embedding_1_input to have shape (32,) but got array with shape (1,), but no arrays of shape (1,) are passed to model.fit.

    def create_model(vocabulary_size, input_word_count, embedding_dims=50):
        model = Sequential()
        model.add(Embedding(vocabulary_size, embedding_dims, input_length=input_word_count))
        model.add(GlobalAveragePooling1D())
        model.add(Dense(1, activation="sigmoid"))
        model.compile(loss="binary …
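A minimal sketch (not from the question) of the input shape the model above expects: model.fit needs a 2-D array of shape (num_samples, input_word_count), which pad_sequences produces from ragged lists of word indices. The data below is hypothetical and assumes input_word_count = 32:

    import numpy as np
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    input_word_count = 32                                  # must match input_length above
    sequences = [[4, 18, 7], [9, 2, 11, 5, 30]]            # hypothetical ragged word-id lists
    X = pad_sequences(sequences, maxlen=input_word_count)  # -> shape (2, 32)
    y = np.array([0, 1])

    # model = create_model(vocabulary_size=1000, input_word_count=input_word_count)
    # model.fit(X, y, epochs=2)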

Keras: Wrong Input Shape in LSTM Neural Network

我是研究僧i submitted on 2021-02-19 08:26:35
Question: I am trying to train an LSTM recurrent neural network for sequence classification. My data has the following format:

    Input: [1, 5, 2, 3, 6, 2, ...]   -> Output: 1
    Input: [2, 10, 4, 6, 12, 4, ...] -> Output: 1
    Input: [4, 1, 7, 1, 9, 2, ...]   -> Output: 2
    Input: [1, 3, 5, 9, 10, 20, ...] -> Output: 3
    ...

So basically I want to provide a sequence as input and get an integer as output. Each input sequence has length 2000 (float numbers), and I have around 1485 samples for training. The output is just an …
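A sketch of one common way to shape such data for an LSTM classifier (not the poster's code): reshape each length-2000 sequence to (timesteps, 1) and train with a sparse categorical loss. The class count and layer sizes below are assumptions:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    num_samples, timesteps, num_classes = 1485, 2000, 4    # class count is an assumption

    X = np.random.rand(num_samples, timesteps).astype("float32")
    y = np.random.randint(0, num_classes, size=(num_samples,))

    # LSTM expects (samples, timesteps, features); each scalar becomes a 1-feature step.
    X = X.reshape(num_samples, timesteps, 1)

    model = Sequential([
        LSTM(64, input_shape=(timesteps, 1)),
        Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(X, y, epochs=5, batch_size=32)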

Why is the result of the code from Deep Learning with TensorFlow different from the snapshot in the book?

跟風遠走 submitted on 2021-02-19 07:49:05
Question: The first chapter of Deep Learning with TensorFlow gives an example of how to build a simple neural network for recognizing handwritten digits. According to its description, the code bundle for the book can be found on GitHub. From the context, I think the section "Running a simple TensorFlow 2.0 net and establishing a baseline" uses the same code as Deep-Learning-with-TensorFlow-2-and-Keras/mnist_V1.py. When I run this example code, it gives me the following output: The snapshot from the …
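Run-to-run differences in a baseline like this usually come from random weight initialization and data shuffling. A small sketch (hypothetical, not part of the book's mnist_V1.py) that fixes the seeds before the model is built, so successive runs become comparable:

    import numpy as np
    import tensorflow as tf

    # Fix the sources of randomness before any layers are created.
    np.random.seed(42)
    tf.random.set_seed(42)

Even with fixed seeds, the numbers can still differ slightly from the book's snapshot because of library versions and hardware nondeterminism.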

Custom layer with a two-parameter function in Core ML

倖福魔咒の submitted on 2021-02-19 07:33:09
Question: Thanks to this great article (http://machinethink.net/blog/coreml-custom-layers/), I understood how to write a conversion with coremltools and a Keras Lambda custom layer. But I cannot figure out the situation where the function takes two parameters.

    # python
    def scaling(x, scale):
        return x * scale

The Keras layer is here:

    # python
    up = conv2d_bn(mixed, K.int_shape(x)[channel_axis], 1,
                   activation=None, use_bias=True, name=name_fmt('Conv2d_1x1'))
    x = Lambda(scaling,  # HERE !!
               output_shape=K.int_shape(up) …
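On the Keras side, one way to deal with the second parameter (an assumption, not taken from the poster's code) is to bind it through Lambda's arguments keyword, so scale becomes a fixed constant stored on the layer rather than a second runtime input:

    from tensorflow.keras.layers import Input, Lambda

    def scaling(x, scale):
        return x * scale

    up = Input(shape=(8, 8, 256))            # hypothetical stand-in for the conv2d_bn output
    x = Lambda(scaling,
               arguments={'scale': 0.17},    # second parameter bound as a constant (value is hypothetical)
               name='scale_residual')(up)

A custom-layer converter can then read the bound constant from the layer configuration instead of having to trace a second tensor input.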

Keras Regression to approximate a function (goal: loss < 1e-7)

六眼飞鱼酱① submitted on 2021-02-19 07:30:08
Question: I'm working on a neural network which approximates a function f(X) = y, with X a vector [x0, ..., xn] and y in [-inf, +inf]. This approximated function needs to have an accuracy (sum of errors) of around 1e-8; in fact, I need my neural network to overfit. X is composed of random points in the interval [-500, 500]. Before putting these points into the input layer I normalize them to [0, 1]. I use Keras as follows:

    dimension = 10  # example
    self.model = Sequential()
    self.model.add(Dense(128, …
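A minimal sketch of the setup being described (not the poster's full code), with a hypothetical target function standing in for f and the sizes from the question:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    dimension = 10                                        # example, as in the question
    X = np.random.uniform(-500, 500, size=(20000, dimension))
    y = np.sum(np.sin(X / 100.0), axis=1)                 # hypothetical f(X); y is unbounded
    X_norm = (X + 500.0) / 1000.0                         # rescale inputs to [0, 1]

    model = Sequential([
        Dense(128, activation="relu", input_shape=(dimension,)),
        Dense(128, activation="relu"),
        Dense(1, activation="linear"),                    # linear head for an unbounded target
    ])
    model.compile(optimizer=Adam(learning_rate=1e-3), loss="mse")
    # Deliberately no validation split or regularization: the stated goal is to overfit.
    # model.fit(X_norm, y, epochs=500, batch_size=256, verbose=0)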

Keras: test, cross-validation and accuracy while processing batched data with train_on_batch

假装没事ソ submitted on 2021-02-19 05:40:07
Question: Can someone point me to a complete example that does all of the following?

- Fits batched (and pickled) data in a loop using train_on_batch()
- Sets aside data from each batch for validation purposes
- Sets aside test data for accuracy evaluation after all batches have been processed (see the last line of my example below)

I'm finding lots of 1-5 line code snippets on the internet illustrating how to call train_on_batch() or fit_generator(), but so far nothing that clearly illustrates how to …
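A hedged sketch of the loop structure being asked for, using a tiny stand-in model and a hypothetical load_pickled_batch helper (the real ones come from the poster's setup); it assumes the model is compiled with metrics=['accuracy'] so evaluate returns loss and accuracy:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Tiny stand-in model; the real one comes from the question's setup.
    model = Sequential([Dense(16, activation="relu", input_shape=(8,)),
                        Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    def load_pickled_batch(i):                   # hypothetical: unpickle batch i from disk
        x = np.random.rand(100, 8)
        y = (x.sum(axis=1) > 4).astype("float32")
        return x, y

    num_batches = 5
    val_x_parts, val_y_parts = [], []

    for i in range(num_batches):
        x, y = load_pickled_batch(i)
        split = int(0.9 * len(x))                # hold out 10% of each batch for validation
        model.train_on_batch(x[:split], y[:split])
        val_x_parts.append(x[split:])
        val_y_parts.append(y[split:])

    # Validation accuracy on the held-out slices of every batch.
    val_x, val_y = np.concatenate(val_x_parts), np.concatenate(val_y_parts)
    val_loss, val_acc = model.evaluate(val_x, val_y, verbose=0)

    # A separate, never-trained-on test set would be evaluated the same way at the end:
    # test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)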

Delayed echo of sin - cannot reproduce TensorFlow result in Keras

早过忘川 submitted on 2021-02-19 04:54:20
Question: I am experimenting with LSTMs in Keras with little to no luck. At some point I decided to scale back to the most basic problems in order to finally achieve some positive result. However, even with the simplest problems I find that Keras is unable to converge, while the implementation of the same problem in TensorFlow gives a stable result. I am unwilling to just switch to TensorFlow without understanding why Keras keeps diverging on any problem I attempt. My problem is a many-to-many sequence …
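A sketch of a many-to-many "delayed echo" setup in Keras (not the poster's code); the sequence length, delay, and layer sizes are all assumptions:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

    timesteps, delay = 100, 5                    # assumed values

    # Target is the input sine wave shifted `delay` steps into the past (zeros before it starts).
    t = np.linspace(0, 20 * np.pi, 1000 * timesteps).reshape(1000, timesteps, 1)
    x = np.sin(t)
    y = np.zeros_like(x)
    y[:, delay:, :] = x[:, :-delay, :]

    model = Sequential([
        LSTM(32, return_sequences=True, input_shape=(timesteps, 1)),
        TimeDistributed(Dense(1)),               # one output per timestep (many-to-many)
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(x, y, epochs=20, batch_size=32)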

How to fix “IndexError: list index out of range” in TensorFlow

半城伤御伤魂 submitted on 2021-02-19 04:29:48
Question: I'm creating an image classifier using TensorFlow and Keras, but when I try to train my model I get an error: IndexError: list index out of range. I think the problem is with my model, because when I remove the Conv2D layers the code throws no error.

    model = Sequential()
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
    model.add(MaxPool2D((2, 2), strides=(2, 2)))
    model.add(Conv2D(128, (3, 3), activation='relu', padding= …
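The cause isn't visible in the excerpt, but Conv2D stacks expect 4-D image batches of shape (batch, height, width, channels) and benefit from an explicit input shape on the first layer. A hedged sketch of the same stack with those pieces filled in (the 64x64 RGB size and 10 classes are assumptions):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

    model = Sequential()
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same',
                     input_shape=(64, 64, 3)))  # assumed 64x64 RGB images
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
    model.add(MaxPool2D((2, 2), strides=(2, 2)))
    model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
    model.add(Flatten())
    model.add(Dense(10, activation='softmax'))   # assumed 10 classes
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    # The training array must be 4-D: (num_images, height, width, channels).
    X = np.random.rand(8, 64, 64, 3).astype('float32')
    y = np.random.randint(0, 10, size=(8,))
    model.fit(X, y, epochs=1, verbose=0)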

Keras: Create a custom generator for a two-input model using the flow_from_directory() function

拈花ヽ惹草 submitted on 2021-02-19 04:21:33
Question: I was trying to train my Siamese network with fit_generator(). I learned from this answer, Keras: How to use fit_generator with multiple inputs, that the best way to do this is to create your own generator that yields the multiple data points. My problem is that I retrieve my data with the flow_from_directory() function, and I didn't know if that was possible. This is my attempt to adapt a generator to my problem:

    from keras.models import load_model
    from keras import optimizers
    from keras …
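A minimal sketch of the usual pattern (not the poster's finished generator): drive two flow_from_directory() streams with the same seed and zip them into the ([input_a, input_b], labels) tuples a two-input model expects. Directory names, image size, and class_mode are placeholders:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(rescale=1.0 / 255)

    def two_input_generator(dir_a, dir_b, target_size=(128, 128), batch_size=32, seed=42):
        # Same seed on both streams so they yield matching samples in the same order.
        flow_a = datagen.flow_from_directory(dir_a, target_size=target_size,
                                             batch_size=batch_size, class_mode='binary', seed=seed)
        flow_b = datagen.flow_from_directory(dir_b, target_size=target_size,
                                             batch_size=batch_size, class_mode='binary', seed=seed)
        while True:
            x_a, y = next(flow_a)
            x_b, _ = next(flow_b)
            yield [x_a, x_b], y                  # two inputs, one label array

    # model.fit_generator(two_input_generator('pairs/left', 'pairs/right'),
    #                     steps_per_epoch=100, epochs=5)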