neural-network

Pruning in Keras

不羁的心 submitted on 2019-12-18 18:53:38
Question: I'm trying to design a neural network using Keras with priority on prediction performance, but I cannot maintain sufficiently high accuracy if I reduce the number of layers and nodes per layer any further. I have noticed that a very large portion of my weights are effectively zero (>95%). Is there a way to prune dense layers in the hope of reducing prediction time?

Answer 1: Not a dedicated way :( There's currently no easy (dedicated) way of doing this with Keras. A discussion is ongoing at https://groups.google
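
Since there is no built-in pruning API in this Keras version, a common workaround is to zero out near-zero weights by hand. A minimal sketch, assuming a compiled model; the threshold value is a placeholder you would tune:

    import numpy as np

    def zero_small_weights(model, threshold=1e-3):
        # Zero every weight whose magnitude falls below the threshold.
        # Note: plain Dense layers still perform full matrix multiplies,
        # so this sparsifies the weights but does not by itself speed up
        # prediction; for that you would rebuild a smaller architecture.
        for layer in model.layers:
            weights = layer.get_weights()
            if not weights:
                continue
            pruned = [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]
            layer.set_weights(pruned)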

How to make best use of GPU for TensorFlow Estimators?

这一生的挚爱 submitted on 2019-12-18 18:32:43
Question: I was using TensorFlow (CPU version) for my deep learning model, specifically the DNNRegressor Estimator for training, with a given set of parameters (network structure, hidden layers, alpha, etc.). Although I was able to reduce the loss, the model took a very long time to learn (approx. 3 days), at about 9 seconds per 100 steps. I came across this article: https://medium.com/towards-data-science/how-to-traine-tensorflow-models-79426dabd304 and found that GPUs can be more
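
To move an Estimator onto a GPU (with the tensorflow-gpu package installed), it is usually enough to pass a session configuration through RunConfig. A minimal sketch in the TF 1.x API; the feature column and hidden-unit sizes here are placeholders, not the asker's actual setup:

    import tensorflow as tf

    # Fall back to CPU for ops without a GPU kernel, and grow GPU memory
    # on demand instead of grabbing it all up front.
    session_config = tf.ConfigProto(allow_soft_placement=True)
    session_config.gpu_options.allow_growth = True

    run_config = tf.estimator.RunConfig(session_config=session_config)

    feature_columns = [tf.feature_column.numeric_column('x', shape=[10])]
    regressor = tf.estimator.DNNRegressor(
        feature_columns=feature_columns,
        hidden_units=[64, 32],
        config=run_config)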

How to use multi CPU cores to train NNs using caffe and OpenBLAS

独自空忆成欢 submitted on 2019-12-18 13:35:06
Question: I have been learning deep learning recently and a friend recommended Caffe. After installing it with OpenBLAS, I followed the MNIST tutorial in the docs. But I soon found it was extremely slow and only one CPU core was working. The problem is that the servers in my lab don't have GPUs, so I have to use CPUs instead. I Googled this and found some pages like this one. I tried export OPENBLAS_NUM_THREADS=8 and export OMP_NUM_THREADS=8, but Caffe still used one core. How can I make Caffe use multiple CPU cores?
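
One thing worth checking before rebuilding anything: when driving Caffe from pycaffe, the OpenBLAS threading variables must be set before the BLAS library is loaded. A minimal sketch; note that an OpenBLAS build compiled without USE_OPENMP ignores these variables entirely, which is a common reason only one core runs:

    import os

    # These must be set before importing caffe, because OpenBLAS reads
    # them when the shared library is first loaded.
    os.environ['OPENBLAS_NUM_THREADS'] = '8'
    os.environ['OMP_NUM_THREADS'] = '8'

    import caffe  # assumes Caffe was built against a multithreaded OpenBLAS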

Obtaining a prediction in Keras

感情迁移 submitted on 2019-12-18 13:24:57
Question: I have successfully trained a simple model in Keras to classify images:

    model = Sequential()
    model.add(Convolution2D(32, 3, 3, border_mode='valid',
                            input_shape=(img_channels, img_rows, img_cols),
                            activation='relu', name='conv1_1'))
    model.add(Convolution2D(32, 3, 3, activation='relu', name='conv1_2'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Convolution2D(64, 3, 3, border_mode='valid',
                            activation='relu', name='conv2_1'))
    model.add(Convolution2D(64, 3, 3,
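
For reference, once a model like this is trained, obtaining a prediction is a call to predict on an array with a leading batch axis. A minimal sketch in the same Keras 1.x style, assuming channels-first ordering to match the input_shape above; the random image is a stand-in for a real preprocessed one:

    import numpy as np

    # One image, shaped (batch, channels, rows, cols) to match input_shape.
    image = np.random.rand(1, img_channels, img_rows, img_cols).astype('float32')

    probabilities = model.predict(image)               # per-class probabilities
    predicted_class = np.argmax(probabilities, axis=1) # index of the top class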

PyBrain:How can I put specific weights in a neural network?

一个人想着一个人 submitted on 2019-12-18 13:19:53
Question: I am trying to recreate a neural network from given facts. It has 3 inputs, a hidden layer, and an output. My problem is that the weights are also given, so I don't need to train. I was thinking maybe I could save the training of a structurally similar neural network and change the values accordingly. Do you think that will work? Any other ideas? Thanks. Neural network code:

    net = FeedForwardNetwork()
    inp = LinearLayer(3)
    h1 = SigmoidLayer(1)
    outp = LinearLayer(1)
    # add modules
    net
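
In PyBrain you can skip training entirely and write the given weights straight into the network's parameter vector. A minimal sketch completing the 3-1-1 network above; the four weight values are placeholders for the given ones (3 input-to-hidden weights plus 1 hidden-to-output weight, in the order net.params stores them):

    import numpy as np
    from pybrain.structure import (FeedForwardNetwork, LinearLayer,
                                   SigmoidLayer, FullConnection)

    net = FeedForwardNetwork()
    inp, h1, outp = LinearLayer(3), SigmoidLayer(1), LinearLayer(1)
    net.addInputModule(inp)
    net.addModule(h1)
    net.addOutputModule(outp)
    net.addConnection(FullConnection(inp, h1))
    net.addConnection(FullConnection(h1, outp))
    net.sortModules()

    # Overwrite all trainable parameters in one shot.
    net._setParameters(np.array([0.5, -0.3, 0.8, 1.2]))
    print(net.activate([1, 0, 1]))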

Why use softmax only in the output layer and not in hidden layers?

我只是一个虾纸丫 submitted on 2019-12-18 12:57:10
Question: Most examples of neural networks for classification tasks I've seen use a softmax layer as the output activation function, while the hidden units normally use a sigmoid, tanh, or ReLU activation. As far as I know, using the softmax function in the hidden layers would work out mathematically too. What are the theoretical justifications for not using softmax as a hidden-layer activation function? Are there any publications about this, something to quote?

Answer 1: I haven't
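
One concrete property worth noting: softmax outputs are coupled (they must sum to 1), and the function is invariant to adding a constant to all of its inputs, so a hidden softmax layer discards information that independent activations like sigmoid or ReLU would preserve. A quick numpy illustration:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())  # subtract max for numerical stability
        return e / e.sum()

    z = np.array([1.0, 2.0, 3.0])
    print(softmax(z))         # [0.090 0.245 0.665]
    print(softmax(z + 10.0))  # identical output: the shift is invisible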

When building a CNN, I am getting complaints from Keras that do not make sense to me.

喜夏-厌秋 submitted on 2019-12-18 12:56:37
Question: My input shape is supposed to be 100x100. It represents a sentence: each word is a 100-dimensional vector, and a sentence holds at most 100 words. I feed eight sentences to the CNN. I am not sure whether this means my input shape should be 100x100x8 instead. Then the following line

    Convolution2D(10, 3, 3, border_mode='same', input_shape=(100, 100))

complains: Input 0 is incompatible with layer convolution2d_1: expected ndim=4, found ndim=3. This does not make sense to me as my input
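
The likely issue is not the batch of eight but a missing channel axis: Convolution2D wants 4-D input of (samples, rows, cols, channels). A minimal sketch of the fix, assuming Keras 1.x with TensorFlow dimension ordering; the random array stands in for the real sentence matrices:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Convolution2D

    # Eight sentences, each a 100x100 matrix, with an explicit channel axis.
    sentences = np.random.rand(8, 100, 100).astype('float32')
    sentences = sentences.reshape(8, 100, 100, 1)

    model = Sequential()
    model.add(Convolution2D(10, 3, 3, border_mode='same',
                            input_shape=(100, 100, 1)))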

Keras Masking for RNN with Varying Time Steps

前提是你 submitted on 2019-12-18 12:49:42
Question: I'm trying to fit an RNN in Keras using sequences that have varying time lengths. My data is in a Numpy array with format (sample, time, feature) = (20631, max_time, 24), where max_time is determined at run-time as the number of time steps available for the sample with the most time stamps. I've padded the beginning of each time series with 0, except for the longest one, obviously. I initially defined my model like so...

    model = Sequential()
    model.add(Masking(mask_value=0., input_shape=
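
For context, the usual pattern is a Masking layer in front of the recurrent layer so that all-zero padded steps are skipped. A minimal sketch matching the shapes described; the LSTM width, output layer, and loss are placeholders, not the asker's actual model:

    from keras.models import Sequential
    from keras.layers import Masking, LSTM, Dense

    max_time, n_features = 100, 24  # max_time is really computed at run-time

    model = Sequential()
    model.add(Masking(mask_value=0., input_shape=(max_time, n_features)))
    model.add(LSTM(32))   # time steps where every feature equals 0 are masked
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')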

Image per-pixel Scene labeling output issue (using FCN-32s Semantic Segmentation)

耗尽温柔 submitted on 2019-12-18 12:42:02
Question: I'm looking for a way to take an input image and a neural network and output a labeled class for each pixel in the image (sky, grass, mountain, person, car, etc.). I've set up Caffe (the future branch) and successfully run the FCN-32s Fully Convolutional Semantic Segmentation on PASCAL-Context model. However, I'm unable to produce cleanly labeled images with it. Images that visualize my problem: [figures: input image, ground truth, my result]. This might be some resolution issue. Any idea of
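
For reference, the per-pixel labels come from an argmax over the network's class score maps. A minimal pycaffe sketch; the file names are placeholders, the random array stands in for a properly preprocessed image, and 'score' is the output blob name used by the reference FCN models, so verify it against your deploy prototxt:

    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'fcn-32s.caffemodel', caffe.TEST)

    # Stand-in for a mean-subtracted, BGR, channels-first input image.
    image = np.random.rand(1, 3, 500, 500).astype(np.float32)
    net.blobs['data'].reshape(1, 3, 500, 500)
    net.blobs['data'].data[...] = image
    net.forward()

    scores = net.blobs['score'].data[0]   # shape: (n_classes, H, W)
    labels = scores.argmax(axis=0)        # per-pixel class index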

XOR neural network error stops decreasing during training

不羁岁月 submitted on 2019-12-18 12:22:33
Question: I'm training an XOR neural network via back-propagation using stochastic gradient descent. The weights of the neural network are initialized to random values between -0.5 and 0.5. The neural network successfully trains itself around 80% of the time. However, sometimes it gets "stuck" while backpropagating. By "stuck", I mean that I start seeing a decreasing rate of error correction. For example, during a successful run, the total error decreases rather quickly as the network learns, like
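
To make the setup concrete, here is a minimal numpy sketch of such a network: 2-2-1 with sigmoid units and weights drawn uniformly from [-0.5, 0.5]. It uses full-batch gradient descent rather than per-sample SGD for brevity; with this narrow initialization, some random seeds do settle on a plateau where the error all but stops decreasing, which matches the roughly 20% failure rate described:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Weights and biases initialized uniformly in [-0.5, 0.5].
    W1 = rng.uniform(-0.5, 0.5, (2, 2)); b1 = rng.uniform(-0.5, 0.5, 2)
    W2 = rng.uniform(-0.5, 0.5, (2, 1)); b2 = rng.uniform(-0.5, 0.5, 1)

    for epoch in range(10000):
        h = sigmoid(X @ W1 + b1)          # hidden activations
        out = sigmoid(h @ W2 + b2)        # network output
        d2 = (out - y) * out * (1 - out)  # output delta (squared error)
        d1 = (d2 @ W2.T) * h * (1 - h)    # hidden delta
        W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(axis=0)
        W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(axis=0)
        if epoch % 2000 == 0:
            print(epoch, np.mean((out - y) ** 2))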