neural-network

ValueError: Tensor:(…) is not an element of this graph

独自空忆成欢 submitted on 2019-12-22 14:16:36
Question: I am using a Keras pre-trained model, and this error comes up when trying to get predictions. I have the following code in my Flask server:

    from NeuralNetwork import *

    @app.route("/uploadMultipleImages", methods=["POST"])
    def uploadMultipleImages():
        uploaded_files = request.files.getlist("file[]")
        getPredictionfunction = preTrainedModel["VGG16"]
        for file in uploaded_files:
            path = os.path.join(STATIC_PATH, file.filename)
            result = getPredictionfunction(path)

This is what I have in my NeuralNetwork.py […]
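With TF1-era Keras served from Flask, this error typically means the model was loaded in one graph/thread while predict() runs in another. A minimal hedged sketch of the commonly suggested fix (assuming Keras 2.x on TensorFlow 1.x; the preprocessing and names are illustrative, not the asker's NeuralNetwork.py):

    import numpy as np
    import tensorflow as tf
    from keras.applications.vgg16 import VGG16, preprocess_input
    from keras.preprocessing import image

    model = VGG16(weights="imagenet")
    graph = tf.get_default_graph()  # capture the graph the model was built in

    def get_prediction(path):
        # standard VGG16 preprocessing: 224x224 RGB, batch axis, mean subtraction
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        with graph.as_default():  # run predict() inside the original graph
            return model.predict(x)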

Neural Network - Working with an imbalanced dataset

拟墨画扇 submitted on 2019-12-22 10:57:04
Question: I am working on a classification problem with 2 labels: 0 and 1. My training dataset is very imbalanced (and, given my problem, so will the test set be). The imbalance ratio is 1000:4, with label '0' appearing 250 times more often than label '1'. However, I have a lot of training samples: around 23 million, so I should get around 100,000 samples for label '1'. Given the large number of training samples, I didn't consider SVM. I also read about […]
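One standard remedy for this kind of imbalance is class weighting, which makes errors on the rare class cost proportionally more during training. A hedged Keras sketch (the model, shapes, and stand-in data are purely illustrative):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # toy stand-in data mimicking a roughly 250:1 class imbalance
    X_train = np.random.rand(100000, 20)
    y_train = (np.random.rand(100000) < 0.004).astype("float32")

    model = Sequential([Dense(32, activation="relu", input_shape=(20,)),
                        Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # weight label '1' about 250x so both classes contribute comparably to the loss
    model.fit(X_train, y_train, epochs=2, batch_size=1024,
              class_weight={0: 1.0, 1: 250.0})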

Copying weights of a specific layer - keras

☆樱花仙子☆ submitted on 2019-12-22 10:39:20
Question: According to this, the following copies the weights from one model to another:

    target_model.set_weights(model.get_weights())

What about copying the weights of a specific layer; would this work?

    model_1.layers[0].set_weights(source_model.layers[0].get_weights())
    model_2.layers[0].set_weights(source_model.layers[0].get_weights())

If I train model_1 and model_2, will they have separate weights? The documentation doesn't state whether get_weights makes a deep copy or not. If this doesn't work, […]
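In Keras, get_weights() returns NumPy arrays detached from the backend variables, so set_weights() copies by value and the two models train independently afterwards. A hedged sketch that checks this (the model definitions are illustrative):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    def make_model():
        m = Sequential([Dense(4, input_shape=(3,)), Dense(1)])
        m.compile(optimizer="sgd", loss="mse")
        return m

    source_model, model_1 = make_model(), make_model()
    model_1.layers[0].set_weights(source_model.layers[0].get_weights())

    # verify the copy is by value: changing model_1 leaves source_model intact
    w = model_1.layers[0].get_weights()
    w[0] += 1.0
    model_1.layers[0].set_weights(w)
    assert not np.allclose(model_1.layers[0].get_weights()[0],
                           source_model.layers[0].get_weights()[0])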

What would be an idiomatic F# way to scale a list of lists (or n-tuples) by another list, and likewise for arrays?

跟風遠走 submitted on 2019-12-22 09:50:02
Question: Given:

    let weights = [0.5;0.4;0.3]
    let X = [[2;3;4];[7;3;2];[5;3;6]]

what I want is:

    wX = [(0.5)*[2;3;4];(0.4)*[7;3;2];(0.3)*[5;3;6]]

I would like to know an elegant way to do this with lists as well as with arrays. Additional optimization information is welcome.

Answer 1: You write about a list of lists, but your code shows a list of tuples. Taking the liberty to adjust for that, a solution would be:

    let weights = [0.5;0.4;0.3]
    let X = [[2;3;4];[7;3;2];[5;3;6]]
    X |> List.map2 (fun w x -> x |> List.map (fun e -> w * float e)) weights

Net surgery: How to reshape a convolution layer of a caffemodel file in caffe?

☆樱花仙子☆ submitted on 2019-12-22 09:48:08
Question: I'm trying to reshape a convolution layer of a caffemodel (this is a follow-up to this question). Although there is a tutorial on how to do net surgery, it only shows how to copy weight parameters from one caffemodel to another of the same size. Instead, I need to add a new channel (all zeros) to my convolution filter, so that its size changes from (64 x 3 x 3 x 3) to (64 x 4 x 3 x 3). Say the convolution layer is called 'conv1'. This is what I tried so far: […]
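A hedged sketch of the usual pycaffe net-surgery recipe for this (the file names are illustrative, and it assumes a second deploy prototxt in which conv1 declares 4 input channels; this is not the asker's attempt):

    import caffe

    old_net = caffe.Net('old_deploy.prototxt', 'old.caffemodel', caffe.TEST)
    new_net = caffe.Net('new_deploy.prototxt', caffe.TEST)  # no weights loaded

    old_w = old_net.params['conv1'][0].data   # shape (64, 3, 3, 3)
    new_w = new_net.params['conv1'][0].data   # shape (64, 4, 3, 3)
    new_w[:, :3, :, :] = old_w                # copy the existing channels
    new_w[:, 3, :, :] = 0.0                   # the added channel starts at zero
    new_net.params['conv1'][1].data[...] = old_net.params['conv1'][1].data  # bias

    # copy every other layer's parameters over unchanged
    for name in old_net.params:
        if name == 'conv1':
            continue
        for i in range(len(old_net.params[name])):
            new_net.params[name][i].data[...] = old_net.params[name][i].data

    new_net.save('new.caffemodel')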

How to get the learning rate or iteration count when defining a new layer in Caffe

≯℡__Kan透↙ submitted on 2019-12-22 09:12:24
Question: I want to change the loss calculation method in my loss layer once the iteration count reaches a certain number. To realize this, I think I need to get the current learning rate or iteration count, and then use an if statement to choose whether or not to change the loss calculation method.

Answer 1: You can add a member variable to the Caffe class to save the current learning rate or iteration count and access it in the layer where you want. For example, to get the current iteration count where you want it, you need to make […]
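An alternative that avoids modifying the C++ source is a Python layer that counts its own forward passes and switches formulas after a threshold. A hedged sketch (the layer, the threshold, and the two loss formulas are illustrative placeholders, not the answer's patch):

    import numpy as np
    import caffe

    class SwitchingLossLayer(caffe.Layer):
        """Loss layer that switches from L2 to L1 after `switch_at` iterations."""

        def setup(self, bottom, top):
            self.iter = 0            # forward passes seen so far
            self.switch_at = 10000   # iteration at which the loss changes

        def reshape(self, bottom, top):
            top[0].reshape(1)

        def forward(self, bottom, top):
            pred, label = bottom[0].data, bottom[1].data
            self.use_l1 = self.iter >= self.switch_at
            if self.use_l1:
                top[0].data[0] = np.mean(np.abs(pred - label))
            else:
                top[0].data[0] = np.mean((pred - label) ** 2)
            self.iter += 1

        def backward(self, top, propagate_down, bottom):
            if not propagate_down[0]:
                return
            pred, label = bottom[0].data, bottom[1].data
            if self.use_l1:
                bottom[0].diff[...] = np.sign(pred - label) / pred.size
            else:
                bottom[0].diff[...] = 2.0 * (pred - label) / pred.size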

Strange loss curve while training LSTM with Keras

佐手、 submitted on 2019-12-22 08:57:18
Question: I'm trying to train an LSTM for a binary classification problem. When I plot the loss curve after training, there are strange spikes in it (example plots not reproduced in this excerpt). Here is the basic code:

    from keras.models import Sequential
    from keras.layers import recurrent, Dropout, Dense, Activation

    model = Sequential()
    model.add(recurrent.LSTM(128, input_shape=(columnCount, 1), return_sequences=True))
    model.add(Dropout(0.5))
    model.add(recurrent.LSTM(128, return_sequences=False))
    model.add(Dropout(0.5))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')

How to interpret loss function in Tensorflow DNNRegressor Estimator model?

爷,独闯天下 submitted on 2019-12-22 08:09:43
Question: I am using the TensorFlow DNNRegressor Estimator model to build a neural network, but calling the estimator.train() function gives output as follows (log not reproduced in this excerpt): that is, my loss function varies a lot with every step. But as far as I know, the loss function should decrease with the number of iterations. Also, see the attached screenshot of the TensorBoard visualisation of the loss function. The doubts I'm not able to figure out are: whether it is the overall loss function value (the combined loss for every step processed till […]
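For context, in the TF 1.x Estimator API the loss logged during train() is the total loss over the current mini-batch, so it naturally jitters from step to step, while the per-example average_loss reported by evaluate() is the smoother quantity to watch. A hedged sketch (the data, shapes, and column names are illustrative):

    import numpy as np
    import tensorflow as tf

    feature_columns = [tf.feature_column.numeric_column("x", shape=(5,))]
    estimator = tf.estimator.DNNRegressor(hidden_units=[32, 16],
                                          feature_columns=feature_columns)

    X = np.random.rand(1000, 5).astype(np.float32)
    y = X.sum(axis=1).astype(np.float32)  # a simple learnable target

    train_input = tf.estimator.inputs.numpy_input_fn(
        {"x": X}, y, batch_size=64, num_epochs=None, shuffle=True)
    eval_input = tf.estimator.inputs.numpy_input_fn(
        {"x": X}, y, batch_size=64, num_epochs=1, shuffle=False)

    estimator.train(input_fn=train_input, steps=500)
    print(estimator.evaluate(input_fn=eval_input))  # reports 'average_loss'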

How to show hidden layer outputs in Tensorflow

非 Y 不嫁゛ submitted on 2019-12-22 06:40:40
Question: I'm seeing differences in the outputs when comparing a model with its stored protobuf version (created via this conversion script). For debugging, I'm comparing the two layer by layer. For the weights and the actual layer output during a test sequence I get identical results, thus I'm not sure how to access the hidden layers. Here is how I load the layers:

    input = graph.get_tensor_by_name("lstm_1_input_1:0")
    layer1 = graph.get_tensor_by_name("lstm_1_1/kernel:0")
    layer2 = graph.get_tensor_by […]
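One hedged observation: "lstm_1_1/kernel:0" is a weight variable, not the layer's output tensor. To inspect what a hidden layer produces, fetch one of the op's output tensors and evaluate it with the same feed. A sketch assuming a TF 1.x graph (graph is the asker's loaded graph; the tensor names and input shape are assumptions):

    import numpy as np
    import tensorflow as tf

    with tf.Session(graph=graph) as sess:
        # list operation names to locate the hidden layer's output tensor
        for op in graph.get_operations():
            if op.name.startswith("lstm_1_1"):
                print(op.name, [t.shape for t in op.outputs])

        x = graph.get_tensor_by_name("lstm_1_input_1:0")
        hidden = graph.get_tensor_by_name("lstm_1_1/transpose_1:0")  # assumed name
        batch = np.random.rand(1, 10, 1).astype(np.float32)          # assumed shape
        print(sess.run(hidden, feed_dict={x: batch}))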