neural-network

How are gradients passed through tf.py_func?

Submitted by ﹥>﹥吖頭↗ on 2019-12-30 05:24:10
Question: This is a Faster R-CNN implementation in TensorFlow. The proposal_layer is implemented in Python, and I am curious whether gradients can pass through tf.py_func. The weights and biases keep changing, so I think the gradients are delivered back successfully. Then I ran a small test:

import tensorflow as tf
import numpy as np

def addone(x):
    # print(type(x))
    return x + 1

def pyfunc_test():
    # create data
    x_data = tf.placeholder(dtype=tf.float32, shape=[None])
    y_data = tf.placeholder(dtype=tf.float32, shape=[None])
    w = tf
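For context, tf.py_func has no gradient registered by default, so a gradient will not flow through it on its own; the usual TF 1.x workaround is to register a custom gradient and remap the op type with gradient_override_map. A minimal sketch of that pattern, assuming an identity-style gradient for addone (the registration name "PyFuncGrad" is an illustrative choice):

import tensorflow as tf

def addone(x):
    return x + 1

# Hypothetical gradient for addone: d(x + 1)/dx = 1, so pass grad straight through.
def _addone_grad(op, grad):
    return grad

tf.RegisterGradient("PyFuncGrad")(_addone_grad)

x = tf.placeholder(tf.float32, shape=[None])
g = tf.get_default_graph()
with g.gradient_override_map({"PyFunc": "PyFuncGrad"}):
    y = tf.py_func(addone, [x], tf.float32)

# Without the override this would be [None]; with it the gradient is defined.
dy_dx = tf.gradients(y, x)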

Python/Keras/Theano wrong dimensions for Deep Autoencoder

Submitted by 有些话、适合烂在心里 on 2019-12-30 04:25:10
Question: I'm trying to follow the Deep Autoencoder Keras example. I'm getting a dimension mismatch exception, but for the life of me, I can't figure out why. It works when I use only one encoded dimension, but not when I stack them.

Exception: Input 0 is incompatible with layer dense_18: expected shape=(None, 128), found shape=(None, 32)

The error is on the line decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))

from keras.layers import Dense, Input
from keras.models import
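The usual cause of this exception: in the stacked version, decoder_layer = autoencoder.layers[-1] grabs only the final Dense layer, which expects a 128-dimensional input, while encoded_input is 32-dimensional. The standalone decoder has to chain every decoder layer, not just the last one. A minimal sketch of that fix, assuming the layer sizes from the Keras deep autoencoder example (784 -> 128 -> 64 -> 32 and back):

from keras.layers import Dense, Input
from keras.models import Model

input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
autoencoder = Model(input=input_img, output=decoded)

# Standalone decoder: apply the last three layers in order, not just the last one.
encoded_input = Input(shape=(32,))
x = autoencoder.layers[-3](encoded_input)  # 32 -> 64
x = autoencoder.layers[-2](x)              # 64 -> 128
x = autoencoder.layers[-1](x)              # 128 -> 784
decoder = Model(input=encoded_input, output=x)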

Where to start with handwritten recognition using a neural network?

Submitted by 梦想的初衷 on 2019-12-29 18:00:17
Question: I've been trying to learn about neural networks for a while now, and I can understand some basic tutorials online. Now I want to develop online handwritten recognition using a neural network, but I have no idea where to start, and I need good instructions. Finally, I'm a Java programmer. What do you suggest I do?

Answer 1: Start simple with character recognition on the UNIPEN database. You will need to extract pertinent features out of raw trajectory data in order to form what's commonly

Example of Time Series Prediction using Neural Networks in R

Submitted by a 夏天 on 2019-12-29 13:32:45
Question: Does anyone have a quick, short, educational example of how to use neural networks (nnet in R) for prediction? Here is an example, in R, of a time series:

T = seq(0, 20, length = 200)
Y = 1 + 3*cos(4*T + 2) + .2*T^2 + rnorm(200)
plot(T, Y, type = "l")

Many thanks, David

Answer 1: I think you can use the caret package, and especially its train function. This function sets up a grid of tuning parameters for a number of classification and regression routines.

require(quantmod)
require(nnet)
require(caret)
T

Why do we need to explicitly call zero_grad()? [duplicate]

Submitted by 北慕城南 on 2019-12-29 12:11:52
Question: This question already has an answer here: Why do we need to call zero_grad() in PyTorch? (1 answer). Closed last month.

Why do we need to explicitly zero the gradients in PyTorch? Why can't gradients be zeroed when loss.backward() is called? What scenario is served by keeping the gradients on the graph and asking the user to explicitly zero the gradients?

Answer 1: We explicitly need to call zero_grad() because, after loss.backward() (when gradients are computed), we need to use optimizer.step()
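The underlying reason is that .backward() accumulates into each parameter's .grad attribute rather than overwriting it, so without zeroing, every step would use the sum of all gradients computed so far. A minimal training-loop sketch showing the conventional ordering (the model, data, and learning rate are illustrative):

import torch

model = torch.nn.Linear(10, 1)                             # illustrative model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

x = torch.randn(32, 10)                                    # illustrative batch
y = torch.randn(32, 1)

for _ in range(100):
    optimizer.zero_grad()    # clear gradients accumulated by the previous iteration
    loss = loss_fn(model(x), y)
    loss.backward()          # adds d(loss)/d(param) into each param.grad
    optimizer.step()         # update parameters from the current gradients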

When should I use genetic algorithms as opposed to neural networks? [closed]

Submitted by 旧街凉风 on 2019-12-29 10:10:02
Question: [Closed 7 years ago as not a good fit for the Q&A format: likely to solicit debate, arguments, polling, or extended discussion.] Is there a rule of thumb (or set of examples) to determine when to use genetic algorithms as opposed to neural networks (and vice

TensorFlow - regularization with L2 loss, how to apply it to all weights, not just the last one?

Submitted by 北城以北 on 2019-12-29 10:07:16
Question: I am playing with an ANN which is part of the Udacity Deep Learning course. I have an assignment which involves introducing regularization to a network with one hidden ReLU layer, using L2 loss. I wonder how to introduce it properly so that ALL weights are penalized, not only the weights of the output layer. The code for the network without regularization is at the bottom of the post (the code to actually run the training is out of the scope of the question). The obvious way of introducing the L2 is to replace the
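The standard resolution is to add an l2_loss term for every weight matrix in the network and scale the sum by a small coefficient, leaving the biases unregularized. A minimal TF 1.x sketch for the one-hidden-ReLU-layer setup (the shapes and beta value are illustrative assumptions):

import tensorflow as tf

# Illustrative shapes, e.g. notMNIST flattened images: 784 -> 1024 -> 10.
w1 = tf.Variable(tf.truncated_normal([784, 1024]))
b1 = tf.Variable(tf.zeros([1024]))
w2 = tf.Variable(tf.truncated_normal([1024, 10]))
b2 = tf.Variable(tf.zeros([10]))

x = tf.placeholder(tf.float32, shape=[None, 784])
labels = tf.placeholder(tf.float32, shape=[None, 10])

hidden = tf.nn.relu(tf.matmul(x, w1) + b1)
logits = tf.matmul(hidden, w2) + b2

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Penalize BOTH weight matrices, not only the output layer's.
beta = 0.001  # illustrative regularization strength
loss = cross_entropy + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))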

What is the “Parameter” layer in caffe?

Submitted by 若如初见. on 2019-12-29 09:05:34
Question: Recently I came across the "Parameter" layer in caffe. It seems like this layer exposes its internal parameter blob to its "top". What is this layer used for? Can you give a usage example?

Answer 1: This layer was introduced in pull request #2079, with the following description:

This layer simply holds a parameter blob of user-defined shape, and shares it as its single top.

which is exactly what you expected. It was introduced in the context of issue #1474, which basically proposes to treat
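As a usage example, a prototxt sketch of the layer as described in #2079 (the layer name and blob shape here are illustrative assumptions): it declares a learnable blob of user-defined shape and exposes it as a top, so downstream layers can consume a parameter that is trained like any other weight.

layer {
  name: "weights"
  type: "Parameter"
  top: "weights"
  parameter_param {
    shape { dim: 1 dim: 64 }
  }
}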