neural-network

Export weights of a neural network using TensorFlow

…衆ロ難τιáo~ submitted on 2019-12-12 11:58:13

Question: I wrote a neural network using TensorFlow tools. Everything is working, and now I want to export the final weights of my neural network so I can build a single prediction method. How can I do this?

Answer 1: You will need to save your model at the end of training using the tf.train.Saver class. When initializing the Saver object, pass a list of all the variables you wish to save. The best part is that you can use these saved variables in a different computation graph! Create a Saver object
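The Saver-based export described in the answer ultimately yields plain arrays for each variable (e.g. via sess.run on them); once those arrays are in hand, a standalone prediction method needs no TensorFlow at all. A minimal pure-Python sketch, with a hypothetical 2-input logistic layer whose weight values are made up:

```python
import math

# Hypothetical weights exported from a trained graph (in practice obtained
# with sess.run(w) and sess.run(b) after training): a 2-input, 1-output layer.
W = [[0.5], [-0.25]]
b = [0.1]

def predict(x):
    """Single prediction from the exported weights alone: sigmoid(x.W + b)."""
    z = sum(x[i] * W[i][0] for i in range(len(W))) + b[0]
    return 1.0 / (1.0 + math.exp(-z))

print(round(predict([1.0, 2.0]), 3))  # 0.525
```

The same idea scales to any layer stack: each layer becomes one matrix multiply plus an activation, using only the exported arrays.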

TensorFlow - Saving a model

China☆狼群 submitted on 2019-12-12 11:21:35

Question: I have the following code, and I get an error when trying to save the model. What could I be doing wrong, and how can I solve this issue?

    import tensorflow as tf

    data, labels = cifar_tools.read_data('C:\\Users\\abc\\Desktop\\Testing')

    x = tf.placeholder(tf.float32, [None, 150 * 150])
    y = tf.placeholder(tf.float32, [None, 2])

    w1 = tf.Variable(tf.random_normal([5, 5, 1, 64]))
    b1 = tf.Variable(tf.random_normal([64]))
    w2 = tf.Variable(tf.random_normal([5, 5, 64, 64]))
    b2 = tf.Variable(tf.random

Using Keras, how can I load weights generated from CuDNNLSTM into an LSTM model?

我只是一个虾纸丫 submitted on 2019-12-12 10:58:41

Question: I've developed an NN model with Keras, based on the LSTM layer. To increase speed on Paperspace (a GPU cloud processing infrastructure), I switched the LSTM layer for the new CuDNNLSTM layer. However, this is usable only on machines with GPU cuDNN support. (PS: CuDNNLSTM is available only on Keras master, not in the latest release.) So I generated the weights and saved them to HDF5 format on the cloud, and I'd like to use them locally on my MacBook. Since the CuDNNLSTM layer is not
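Two conversion details usually matter here: the plain LSTM should be built with recurrent_activation='sigmoid' (the activation the CuDNN kernel uses, rather than older Keras defaults), and the CuDNN weights keep separate input and recurrent bias vectors, while a standard LSTM has one fused bias. A hedged pure-Python sketch of that bias merge, with hypothetical values:

```python
# CuDNN-style LSTM weights store two bias sets (one for the input kernel, one
# for the recurrent kernel); a standard LSTM layer expects a single fused
# bias, so converting sums them elementwise. Hypothetical 4-unit slice:
bias_ih = [0.10, 0.20, 0.30, 0.40]   # input-kernel biases
bias_hh = [0.05, 0.05, 0.05, 0.05]   # recurrent-kernel biases
fused_bias = [a + b for a, b in zip(bias_ih, bias_hh)]
print([round(v, 2) for v in fused_bias])  # [0.15, 0.25, 0.35, 0.45]
```

Recent Keras versions perform this remapping automatically when load_weights detects a CuDNNLSTM checkpoint, provided the layer shapes match.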

Convolutional neural networks vs downsampling?

点点圈 submitted on 2019-12-12 10:25:36

Question: After reading up on the subject, I don't fully understand: is the 'convolution' in neural networks comparable to a simple downsampling or 'sharpening' function? Can you break this term down into a simple, understandable image/analogy?

Edit (rephrased after the first answer): Can pooling be understood as downsampling of weight matrices?

Answer 1: A convolutional neural network is a family of models that have been shown empirically to work very well for image recognition. From this point of view, a CNN is
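The distinction can be made concrete: pooling, not convolution, is the downsampling step — convolution is a learned filtering operation whose kernels may indeed end up resembling sharpeners or edge detectors. A small sketch of 2x2 max pooling acting as pure downsampling on a hypothetical 4x4 activation map:

```python
# A 2x2 max-pool with stride 2 keeps only the strongest response in each
# 2x2 window, halving height and width: downsampling of activations
# (not of weight matrices).
def max_pool_2x2(image):
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

activations = [[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 0, 5, 6],
               [1, 2, 7, 8]]
print(max_pool_2x2(activations))  # [[4, 2], [2, 8]]
```

Note that pooling downsamples the activation maps, not the weight matrices, which answers the rephrased question directly.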

Caffe network getting very low loss but very bad accuracy in testing

拟墨画扇 submitted on 2019-12-12 10:06:07

Question: I'm somewhat new to Caffe, and I'm seeing some strange behavior. I'm trying to fine-tune bvlc_reference_caffenet for an OCR task. I took their pretrained net, changed the last FC layer to the number of output classes that I have, and retrained. After a few thousand iterations I'm getting loss rates of ~0.001, and accuracy over 90 percent when the network tests. That said, when I try to run my network on data myself, I get awful results, not exceeding 7 or 8
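A frequent cause of this symptom (not confirmed by the truncated question) is a preprocessing mismatch: Caffe's test phase applies mean subtraction and BGR channel order, while a hand-rolled deployment pipeline feeds raw RGB. A hedged sketch of keeping one shared preprocessing step, shown per pixel with the commonly used (here hypothetical) ImageNet BGR means:

```python
# Keep the exact training-time preprocessing in the deployed pipeline:
# RGB -> BGR channel swap, then per-channel mean subtraction.
MEAN_BGR = [104.0, 117.0, 123.0]  # per-channel BGR mean, hypothetical values

def preprocess(pixel_rgb):
    """Convert one RGB pixel to the mean-subtracted BGR form the net saw."""
    r, g, b = pixel_rgb
    bgr = [b, g, r]
    return [v - m for v, m in zip(bgr, MEAN_BGR)]

print(preprocess([123.0, 117.0, 104.0]))  # [0.0, 0.0, 0.0]
```

If the same function is not applied at deployment, the inputs land far from the distribution the net was fine-tuned on, which can produce exactly this "high test accuracy, terrible real-world accuracy" gap.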

Adding static data (not changing over time) to sequence data in an LSTM

倖福魔咒の submitted on 2019-12-12 10:02:30

Question: I am trying to build a model like the one in the following figure. Please see the image: I want to pass sequence data into an LSTM layer and static data (blood group, gender) into another feed-forward neural network layer. Later I want to merge them. However, I am confused about the dimensions here. If my understanding is right (which I depict in the image), how can the 5-dimensional sequence data be merged with the 4-dimensional static data? Also, what is the difference between an attention mechanism and this
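On the dimension question: the usual pattern is to let the LSTM consume the whole sequence and emit a fixed-size summary vector, then concatenate the static vector onto it before the shared dense layers (alternatively, the static vector can be tiled across timesteps and appended to every step's features). A sketch with made-up values:

```python
# The LSTM branch ends in a fixed-size summary vector (its last hidden
# state), so merging is plain concatenation of two flat vectors:
# 5 sequence-derived features + 4 static features = one 9-dim input
# for the downstream dense layers.
lstm_last_hidden = [0.2, -0.1, 0.7, 0.0, 0.3]  # hypothetical, shape (5,)
static_features = [1, 0, 0, 1]                 # e.g. encoded blood group + gender
merged = lstm_last_hidden + static_features
print(len(merged))  # 9
```

Attention, by contrast, replaces the single last-hidden-state summary with a learned weighted combination over all timesteps; it changes how the sequence branch is summarized, not how the static branch is merged.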

How to combine mfcc vector with labels from annotation to pass to a neural network

人走茶凉 submitted on 2019-12-12 08:58:06

Question: Using librosa, I created MFCCs for my audio file as follows:

    import librosa

    y, sr = librosa.load('myfile.wav')
    print(y)
    print(sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr)

I also have a text file that contains manual annotations [start, stop, tag] corresponding to the audio, as follows:

    0.0 2.0 sound1
    2.0 4.0 sound2
    4.0 6.0 silence
    6.0 8.0 sound1

QUESTION: How do I combine the MFCCs generated by librosa with the annotations from the text file? The final goal is, I want to combine mfcc
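One way to line the two up, assuming librosa's default hop_length of 512 samples: each mfcc column i covers time i * hop_length / sr, so labels attach to frames by interval lookup. A sketch using the annotation values from the question:

```python
# Map each mfcc frame index to a time stamp, then look up which annotated
# interval that time falls into (librosa's default hop_length is 512).
sr, hop_length = 22050, 512
annotations = [(0.0, 2.0, "sound1"), (2.0, 4.0, "sound2"),
               (4.0, 6.0, "silence"), (6.0, 8.0, "sound1")]

def label_for_frame(i):
    t = i * hop_length / sr  # time (seconds) of frame i
    for start, stop, tag in annotations:
        if start <= t < stop:
            return tag
    return None  # frame falls outside all annotated intervals

print(label_for_frame(0), label_for_frame(100))  # sound1 sound2
```

Applying label_for_frame to every column index of the mfcc matrix yields one label per frame, i.e. aligned (feature vector, label) pairs ready for a classifier.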

Dimensions in convolutional neural network

心不动则不痛 submitted on 2019-12-12 08:48:55

Question: I am trying to understand how the dimensions in a convolutional neural network behave. In the figure below, the input is a 28-by-28 matrix with 1 channel. Then there are 32 5-by-5 filters (with stride 2 in height and width). So I understand that the result is 14-by-14-by-32. But then in the next convolutional layer we have 64 5-by-5 filters (again with stride 2). So why is the result 7-by-7-by-64 and not 7-by-7-by-(32*64)? Aren't we applying each one of the 64 filters to each one of the 32 channels
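The resolution of the puzzle is that each "5-by-5" filter actually has shape 5x5x(input channels): it spans all input channels at once and sums across them to emit a single output channel, so output depth equals the filter count, not filters times input channels. A sketch of the arithmetic (the helper name is my own; padding of 2 reproduces TensorFlow-style 'same' padding for these sizes):

```python
# Standard conv output-size formula: out = (in + 2*pad - kernel) // stride + 1.
# Depth of the output equals the number of filters, because each filter
# collapses all input channels into one output channel.
def conv_output_shape(h, w, in_ch, n_filters, k, stride, pad):
    out_h = (h + 2 * pad - k) // stride + 1
    out_w = (w + 2 * pad - k) // stride + 1
    return (out_h, out_w, n_filters)  # one output channel per filter

print(conv_output_shape(28, 28, 1, 32, 5, 2, 2))   # (14, 14, 32)
print(conv_output_shape(14, 14, 32, 64, 5, 2, 2))  # (7, 7, 64)
```

So yes, each of the 64 filters does touch all 32 input channels — but the 32 per-channel responses are summed inside the filter, producing one map each, hence 7-by-7-by-64.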

How to begin neural network programming [closed]

一个人想着一个人 submitted on 2019-12-12 08:32:04

Question: (As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 7 years ago.) I am quite a novice in the field of neural networks. I have read some theory regarding neural networks. Now I want to do some real

Re-initialize variables in Tensorflow

戏子无情 submitted on 2019-12-12 07:57:46

Question: I am using a TensorFlow tf.Saver to load a pre-trained model, and I want to re-train a few of its layers by erasing (re-initializing to random) their weights and biases, then training those layers and saving the trained model. I cannot find a method that re-initializes the variables. I tried tf.initialize_variables(fine_tune_vars) but it did not work (I'd assume because the variables are already initialized). I have also seen that you can pass variables to the tf.Saver so that you
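In TensorFlow 1.x the usual pattern is to build an explicit initializer op for just those variables — tf.variables_initializer(var_list), the later name for tf.initialize_variables — and run it in the session after the Saver restore, so the restore-then-reinit order overwrites only the selected layers. A pure-Python analogy of what running that op does (variable names and values hypothetical):

```python
import random

# "Running the initializer" overwrites only the selected variables with
# fresh random draws; frozen layers keep their restored pretrained values.
random.seed(0)
pretrained = {"conv1/weights": 0.42, "fc8/weights": 0.91}  # restored values
fine_tune_vars = ["fc8/weights"]  # layers selected for re-training

for name in fine_tune_vars:
    pretrained[name] = random.gauss(0.0, 0.1)  # fresh random init

print(pretrained["conv1/weights"])  # 0.42 (untouched)
```

The key point is ordering: an initializer op can always be re-run — the earlier attempt likely failed not because the variables were "already initialized" but because the restore ran after the re-initialization and overwrote it.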