neural-network

Tensorflow 3 channel order of color inputs

邮差的信 submitted on 2019-12-20 01:13:23
Question: I'm using TensorFlow to process color images with a convolutional neural network. A code snippet is below. My code runs, so I think I got the number of channels right. My question is: how do I correctly order the RGB data? Is it interleaved, rgbrgbrgb, or planar, rrrgggbbb? Presently I am using the latter. Thanks. Any help would be appreciated.

    c_output = 2
    c_input = 784 * 3

    def weight_variable(shape):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)

    def bias
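
For what it's worth, a minimal NumPy/TensorFlow sketch of the usual convention (the 28x28x3 shape is an assumption based on the 784 * 3 input size in the snippet): tf.nn.conv2d expects NHWC input, i.e. channels interleaved per pixel (rgbrgb...), so a planar rrr...ggg...bbb vector needs a transpose, not just a reshape.

    import numpy as np
    import tensorflow as tf

    flat = np.arange(28 * 28 * 3, dtype=np.float32)  # stand-in for one flat image

    # If the flat vector is planar (rrr...ggg...bbb), first recover (C, H, W),
    # then move channels last to get the (H, W, C) layout convolutions expect:
    planar = flat.reshape(3, 28, 28)                 # (C, H, W)
    nhwc = np.transpose(planar, (1, 2, 0))           # (H, W, C)
    x = tf.convert_to_tensor(nhwc[None, ...])        # add batch dim -> (N, H, W, C)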

Keras custom metric iteration

落花浮王杯 submitted on 2019-12-19 20:38:15
Question: I'm pretty new to Keras and I'm trying to define my own metric. It calculates the concordance index, which is a measure for regression problems.

    def cindex_score(y_true, y_pred):
        sum = 0
        pair = 0
        for i in range(1, len(y_true)):
            for j in range(0, i):
                if i is not j:
                    if y_true[i] > y_true[j]:
                        pair += 1
                        sum += 1 * (y_pred[i] > y_pred[j]) + 0.5 * (y_pred[i] == y_pred[j])
        if pair is not 0:
            return sum / pair
        else:
            return 0

    def baseline_model(hidden_neurons, inputdim):
        model = Sequential()
        model.add(Dense
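
As a reference point, here is a vectorized NumPy version of the same metric for checking the logic offline (a Keras metric proper would need backend tensor ops, since y_true and y_pred are symbolic tensors there, so Python loops and is/is not comparisons will not behave as intended):

    import numpy as np

    def cindex_score_np(y_true, y_pred):
        y_true = np.asarray(y_true, dtype=float).ravel()
        y_pred = np.asarray(y_pred, dtype=float).ravel()
        gt = y_true[:, None] > y_true[None, :]          # all comparable ordered pairs
        concordant = (y_pred[:, None] > y_pred[None, :])[gt].sum()
        ties = (y_pred[:, None] == y_pred[None, :])[gt].sum()
        pairs = gt.sum()
        return (concordant + 0.5 * ties) / pairs if pairs else 0.0

    print(cindex_score_np([1, 2, 3], [1, 3, 2]))  # 2 of 3 pairs concordant -> ~0.667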

Tensorboard - superimpose 2 plots

拟墨画扇 submitted on 2019-12-19 19:52:07
Question: In TensorBoard, I would like to superimpose 2 plots on the same graph (the training and validation losses of a neural network). I can see 2 separate plots, but not one plot with 2 superimposed curves. Otherwise, I get one plot in a zigzag. How can I do this? Answer 1: It is possible to superimpose two plots in TensorBoard. You'll have to satisfy both of the following: Create two separate tf.train.SummaryWriter objects such that they output to two folders. Create two summaries (e.g. tf.scalar_summary) with
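
A minimal sketch of that recipe, using the TF 1.x-era API from the answer (tf.train.SummaryWriter and tf.scalar_summary were later renamed tf.summary.FileWriter and tf.summary.scalar; the loss values below are dummies):

    import tensorflow as tf

    loss = tf.placeholder(tf.float32)
    summary = tf.summary.scalar("loss", loss)       # one shared tag

    train_writer = tf.summary.FileWriter("logs/train")
    val_writer = tf.summary.FileWriter("logs/validation")

    with tf.Session() as sess:
        for step in range(100):
            train_loss, val_loss = 1.0 / (step + 1), 1.5 / (step + 1)  # dummies
            train_writer.add_summary(sess.run(summary, {loss: train_loss}), step)
            val_writer.add_summary(sess.run(summary, {loss: val_loss}), step)

Running "tensorboard --logdir logs" then draws both runs as two curves on one "loss" chart.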

Calculate the error using a sigmoid function in backpropagation

↘锁芯ラ submitted on 2019-12-19 19:38:43
Question: I have a quick question regarding backpropagation. I am looking at the following: http://www4.rgu.ac.uk/files/chapter3%20-%20bp.pdf In this paper, it says to calculate the error of the neuron as

    Error = Output(i) * (1 - Output(i)) * (Target(i) - Output(i))

I have put the part of the equation that I don't understand in bold. In the paper, it says that the Output(i) * (1 - Output(i)) term is needed because of the sigmoid function - but I still don't understand why this would be necessary. What
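
For reference, that leading factor is simply the derivative of the sigmoid activation, which enters through the chain rule:

    \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
    \sigma'(x) = \frac{e^{-x}}{(1 + e^{-x})^{2}} = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)

Since Output(i) = sigma(net_i), where net_i is the neuron's weighted input, differentiating the squared error with respect to net_i gives Error(i) = sigma'(net_i) * (Target(i) - Output(i)) = Output(i) * (1 - Output(i)) * (Target(i) - Output(i)), which is exactly the formula in the paper.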

How do neural networks use genetic algorithms and backpropagation to play games?

℡╲_俬逩灬. submitted on 2019-12-19 17:15:09
Question: I came across this interesting video on YouTube on genetic algorithms. As you can see in the video, the bots learn to fight. Now, I have been studying neural networks for a while and wanted to start learning genetic algorithms. This somehow combines both. How do you combine genetic algorithms and neural networks to do this? And how does one know the error in this case, which you would use to back-propagate and update your weights and train the net? And how do you think the program in
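
One common answer (a toy sketch, with all numbers and the fitness function made up here) is that neuroevolution sidesteps backpropagation entirely: there is no per-sample error signal; instead each network's weight vector is scored by a task-level fitness, such as how well the bot fought, and the best genomes are mutated into the next generation.

    import numpy as np

    rng = np.random.default_rng(0)
    POP, GENES = 20, 50            # population size, weights per network

    def fitness(weights):
        # stand-in for "run the bot with these weights and score the fight"
        return -np.sum((weights - 0.5) ** 2)

    population = rng.normal(size=(POP, GENES))
    for generation in range(100):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(scores)[-POP // 4:]]     # keep the top 25%
        children = np.repeat(parents, 4, axis=0)                 # clone parents
        children += rng.normal(scale=0.1, size=children.shape)   # mutate
        population = children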

Keras: TypeError: can't pickle _thread.lock objects with KerasClassifier

佐手、 submitted on 2019-12-19 16:28:13
Question:

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt

    dataset = pd.read_csv("Churn_Modelling.csv")
    X = dataset.iloc[:, 3:13].values
    Y = dataset.iloc[:, 13:].values

    from sklearn.preprocessing import OneHotEncoder, LabelEncoder, StandardScaler
    enc1 = LabelEncoder()
    enc2 = LabelEncoder()
    X[:, 1] = enc1.fit_transform(X[:, 1])
    X[:, 2] = enc2.fit_transform(X[:, 2])
    one = OneHotEncoder(categorical_features=[1])
    X = one.fit_transform(X).toarray()
    X = X[:, 1:]

    from sklearn.model_selection import
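
The snippet is cut off before the model, but the error in the title usually comes from handing KerasClassifier a built model instead of a model-building function: a live Keras model holds thread locks and cannot be pickled, which is what scikit-learn attempts when it clones the estimator (especially with n_jobs != 1). A hedged sketch of the working pattern (layer sizes here are assumptions, not taken from the question):

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.wrappers.scikit_learn import KerasClassifier
    from sklearn.model_selection import cross_val_score

    def build_model():
        model = Sequential()
        model.add(Dense(6, activation="relu", input_dim=11))
        model.add(Dense(1, activation="sigmoid"))
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Pass the *function*, not build_model() -- KerasClassifier builds and
    # pickles fresh models itself during cross-validation.
    clf = KerasClassifier(build_fn=build_model, epochs=10, batch_size=32, verbose=0)
    scores = cross_val_score(clf, X, Y.ravel(), cv=5)  # X, Y from the snippet above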

Speeding up Math calculations in Java

拈花ヽ惹草 submitted on 2019-12-19 13:42:12
Question: I have a neural network written in Java which uses a sigmoid transfer function defined as follows:

    private static double sigmoid(double x) {
        return 1 / (1 + Math.exp(-x));
    }

and this is called many times during training and computation using the network. Is there any way of speeding this up? It's not that it's slow; it's just that it is used a lot, so a small optimisation here would be a big overall gain. Answer 1: For neural networks, you don't need the exact value of the sigmoid function. So you
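
One standard version of that idea is a precomputed lookup table with clamping, sketched here in Python for consistency with the other snippets on this page (the same approach ports directly to a Java float[]; the table size and clamp range are arbitrary choices):

    import numpy as np

    TABLE_SIZE, X_MAX = 4096, 8.0
    _xs = np.linspace(-X_MAX, X_MAX, TABLE_SIZE)
    _table = 1.0 / (1.0 + np.exp(-_xs))        # sigmoid sampled once, up front

    def fast_sigmoid(x):
        # beyond +/-8 the true sigmoid is within ~3e-4 of 0 or 1, so clamp
        if x <= -X_MAX:
            return 0.0
        if x >= X_MAX:
            return 1.0
        i = int((x + X_MAX) / (2 * X_MAX) * (TABLE_SIZE - 1))
        return _table[i]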

Neural network accuracy optimization

匆匆过客 submitted on 2019-12-19 09:25:18
Question: I have constructed an ANN in Keras which has 1 input layer (3 inputs), one output layer (1 output) and two hidden layers with 12 and 3 nodes respectively. The way I construct and train my network is:

    from keras.models import Sequential
    from keras.layers import Dense
    from sklearn.cross_validation import train_test_split
    import numpy

    # fix random seed for reproducibility
    seed = 7
    numpy.random.seed(seed)
    dataset = numpy.loadtxt("sorted output.csv", delimiter=",")
    # split into input (X) and
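
For concreteness, a minimal sketch of the architecture described above (the activations, loss and optimizer are assumptions here, since the snippet is cut off before the model definition):

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(12, input_dim=3, activation="relu"))  # hidden layer 1
    model.add(Dense(3, activation="relu"))                # hidden layer 2
    model.add(Dense(1, activation="sigmoid"))             # single output
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"])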

How do I prevent backward computation in specific layers in Caffe

时光毁灭记忆、已成空白 submitted on 2019-12-19 09:17:25
Question: I want to disable the backward computation in certain convolution layers in Caffe; how do I do this? I have used the propagate_down setting, but found that it works for an fc layer but not a convolution layer. Please help. First update: I set propagate_down: false in the test/pool_proj layer. I don't want it to backpropagate (but the other layers should). But from the log file, it says that the layer still needs backward. Second update: Let's denote a deep learning model; there are two paths from the input layer to
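
For illustration, a hedged prototxt sketch of the two relevant knobs (the layer and blob names here are hypothetical): lr_mult: 0 freezes a layer's own weights, while propagate_down: false stops gradients flowing into its bottom blob. Note that Caffe will still report "needs backward" for a layer if any other path from the loss reaches it, which may explain the log message in the first update.

    layer {
      name: "conv_frozen"
      type: "Convolution"
      bottom: "pool1"
      top: "conv_frozen"
      propagate_down: false                 # no gradient into "pool1"
      param { lr_mult: 0 decay_mult: 0 }    # freeze the filters
      param { lr_mult: 0 decay_mult: 0 }    # freeze the bias
      convolution_param { num_output: 64 kernel_size: 3 }
    }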

What is “linear projection” in convolutional neural network

独自空忆成欢 submitted on 2019-12-19 09:04:06
Question: I am reading through Residual Learning, and I have a question. What is the "linear projection" mentioned in 3.2? It probably looks pretty simple once you get it, but I could not grasp the idea... I am basically not a computer science person, so I would greatly appreciate it if someone could provide a simple example. Answer 1: First up, it's important to understand what x, y and F are and why they need any projection at all. I'll try to explain in simple terms, but a basic understanding of ConvNets is required. x is an input data
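
For context, the two shortcut forms from section 3.2 of the ResNet paper are:

    y = \mathcal{F}(x, \{W_i\}) + x          % identity shortcut (same dimensions)
    y = \mathcal{F}(x, \{W_i\}) + W_s\, x    % linear projection when dimensions differ

where W_s is the linear projection in question; in practice it is implemented as a 1x1 convolution that matches the channel count (and stride) of F's output so the addition is well-defined.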