neural-network

Can't load MNIST dataset in Keras

倖福魔咒の · submitted on 2019-12-13 19:21:46

Question: I'm trying to load the MNIST dataset with:

```python
import keras
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```

but I get this error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\anaconda\lib\site-packages\keras\datasets\mnist.py", line 15, in load_data
    data = cPickle.load(f)
  File "E:\anaconda\lib\gzip.py", line 252, in read
    raise IOError(errno.EBADF, "read() on write-only GzipFile object")
IOError: [Errno 9] read() on write-only …
```
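The traceback above comes from the gzip layer, not from MNIST itself: a GzipFile opened in write mode refuses `read()`. A minimal stdlib-only sketch (no Keras, no download) showing the correct modes and reproducing the same error:

```python
import gzip
import io
import pickle

# Round-trip: write a pickled object into an in-memory gzip stream in 'wb'
# mode, then read it back in 'rb' mode.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
    pickle.dump({'digit': 7}, f)

buf.seek(0)
with gzip.GzipFile(fileobj=buf, mode='rb') as f:
    restored = pickle.load(f)
print(restored)  # {'digit': 7}

# Calling read() on a write-mode GzipFile raises exactly the error from the
# question's traceback (errno 9, EBADF).
try:
    with gzip.GzipFile(fileobj=io.BytesIO(), mode='wb') as f:
        f.read()
except OSError as e:
    caught = e.errno
    print('read() on write-only GzipFile raised errno', caught)
```

So the fix is usually a corrupted cached download or a wrong open mode, not the `load_data()` call itself.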

Keras LSTM(100) and LSTM(units=100) produce different results?

这一生的挚爱 · submitted on 2019-12-13 17:55:09

Question: I am using Keras 2.0.2 to create an LSTM network for a classification task. The network topology is as follows:

```python
from numpy.random import seed
seed(42)
from tensorflow import set_random_seed
set_random_seed(42)
import os
#os.environ['PYTHONHASHSEED'] = '0'

model = Sequential()
model.add(embedding_layer)
model.add(LSTM(units=100))  # line A
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

On the same …
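In Python, `LSTM(100)` and `LSTM(units=100)` bind the very same parameter, so the two spellings construct identical layers; any difference between runs comes from other non-determinism (unseeded hash randomization, GPU kernels, thread scheduling), not from the argument style. A small sketch of that binding, using a stand-in function with a hypothetical LSTM-like signature:

```python
import inspect

def make_lstm(units, activation='tanh', recurrent_activation='hard_sigmoid'):
    """Stand-in for a layer constructor; returns the config a call produces."""
    return {'units': units, 'activation': activation,
            'recurrent_activation': recurrent_activation}

sig = inspect.signature(make_lstm)
positional = sig.bind(100)        # like LSTM(100)
keyword = sig.bind(units=100)     # like LSTM(units=100)
positional.apply_defaults()
keyword.apply_defaults()

# Both call styles resolve to exactly the same bound arguments.
print(positional.arguments == keyword.arguments)  # True
```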

Is it important for a neural network to have normally distributed data?

妖精的绣舞 · submitted on 2019-12-13 17:09:05

Question: One of the standard things to do with data is to normalize and standardize it so that it has mean 0 and standard deviation 1, right? But what if the data is NOT normally distributed? Also, does the desired output have to be normally distributed too? What if I want my feedforward net to classify between two classes (-1 and 1)? It would be impossible to standardize that into a normal distribution with mean 0 and std 1, right? Feedforward nets are non…
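Worth separating two things the question conflates: standardizing (subtracting the mean and dividing by the standard deviation) only rescales the data; it does not make a skewed distribution normal. A stdlib-only sketch on deliberately non-normal (exponential) samples:

```python
import random
import statistics

random.seed(0)
# Exponential samples: heavily right-skewed, clearly not normally distributed.
data = [random.expovariate(1.0) for _ in range(10_000)]

mu = statistics.fmean(data)
sigma = statistics.pstdev(data)
standardized = [(x - mu) / sigma for x in data]

print(statistics.fmean(standardized))   # ~0 (up to floating-point noise)
print(statistics.pstdev(standardized))  # ~1
# The skew survives: a standard normal has median ~0, but the standardized
# exponential data keeps its median well below its mean.
print(statistics.median(standardized) < 0)  # True
```

So mean-0/std-1 inputs do not require (or produce) normality, and discrete targets like {-1, 1} are simply left as class labels.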

“Index Exceeds Matrix Dimensions” neural network function error

佐手、 · submitted on 2019-12-13 16:28:08

Question: I've got two datasets, which I load from a CSV file and split into X and T:

```
X (3x5000) double
T (1x5000) double
```

I'm trying to configure this function, but I can't: http://www.mathworks.co.uk/help/toolbox/nnet/ref/layrecnet.html

X has three features and 5000 examples. T has one feature and 5000 examples. For example, the target is feature 1, 20 steps ahead, so basically X(1,21) == T(1).

```
[X,T] = simpleseries_dataset;
```

This works perfectly; in this case I have 1x100 and 1x100. If I use my …
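The alignment described ("the target is feature 1, 20 steps ahead, so X(1,21) == T(1)") is just a fixed shift between the input and target series. A hypothetical Python sketch of that windowing (MATLAB indexes from 1, Python from 0):

```python
def make_lagged_pairs(series, lag):
    """Pair each input value with the value `lag` steps ahead."""
    inputs = series[:len(series) - lag]
    targets = series[lag:]
    return inputs, targets

series = list(range(100))  # stand-in signal
inputs, targets = make_lagged_pairs(series, lag=20)

# MATLAB's X(1,21) == T(1) becomes: targets[0] is the series value at index 20.
print(targets[0] == series[20])  # True
```

Both halves then have the same number of examples, which is the shape consistency `layrecnet` training expects.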

Hybrid SOM (with MLP)

南楼画角 · submitted on 2019-12-13 16:27:21

Question: Could someone please provide some information on how to properly combine a self-organizing map (SOM) with a multilayer perceptron (MLP)? I recently read some articles comparing this technique with regular MLPs, and it performed much better on prediction tasks. So I want to use the SOM as a front-end for dimensionality reduction by clustering the input data and pass the results to an MLP back-end. My current implementation idea is to train the SOM with a couple of training sets and to determine the …
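As a hedged sketch of the pipeline described (hand-rolled, not from any specific library): the SOM front-end maps each input vector to its best-matching unit (BMU), and the BMU's grid coordinates become the low-dimensional code handed to the MLP back-end.

```python
import random

def squared_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_matching_unit(som, vector):
    """Grid coordinates of the node whose weight vector is closest to `vector`."""
    return min(
        ((i, j) for i in range(len(som)) for j in range(len(som[0]))),
        key=lambda ij: squared_dist(som[ij[0]][ij[1]], vector),
    )

random.seed(1)
dim, rows, cols = 5, 4, 4
# (Untrained) 4x4 SOM with random 5-dimensional weight vectors.
som = [[[random.random() for _ in range(dim)] for _ in range(cols)]
       for _ in range(rows)]

sample = [0.2, 0.9, 0.1, 0.5, 0.7]
bmu = best_matching_unit(som, sample)
print(bmu)  # a (row, col) pair -- the 2-D code passed on to the MLP
```

In a real hybrid, the SOM would first be trained (weights pulled toward inputs, neighborhood shrinking over time) before its BMU coordinates are used as MLP features.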

How to prepare the data?

﹥>﹥吖頭↗ · submitted on 2019-12-13 16:05:12

Question: I am learning to use neural networks and have encountered a problem: I cannot figure out how to convert my data for the network. As I understand it, I need to normalize the data, but after normalization and training, the answer is always averaged. https://jsfiddle.net/eoy7krzj/

```html
<html>
<head>
  <script src="https://cdn.rawgit.com/BrainJS/brain.js/5797b875/browser.js"></script>
</head>
<body>
  <div>
    <button onclick="train()">train</button><button onclick="Generate.next(); Generate.draw();" …
```
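A common cause of "the answer is always averaged" is scaling the inputs but forgetting to scale the targets into the network's output range and then scale predictions back. A stdlib-only sketch of that round-trip (helper names are made up; the same idea applies in brain.js):

```python
def fit_min_max(values):
    """Record the range needed to scale and later un-scale."""
    return min(values), max(values)

def scale(values, lo, hi):
    return [(v - lo) / (hi - lo) for v in values]

def unscale(values, lo, hi):
    return [v * (hi - lo) + lo for v in values]

prices = [120.0, 310.0, 95.0, 240.0, 180.0]
lo, hi = fit_min_max(prices)

normalized = scale(prices, lo, hi)       # everything now in [0, 1]
print(min(normalized), max(normalized))  # 0.0 1.0

restored = unscale(normalized, lo, hi)   # network outputs mapped back
print(restored)  # the original values, up to float rounding
```

The key detail: the same `lo`/`hi` learned from the training data must be reused when decoding the network's outputs.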

Best available data sets and software to compare accuracy between homemade and professional ANNs / feedforward neural networks

喜欢而已 · submitted on 2019-12-13 16:02:12

Question: I have a couple of slightly modified, non-traditional feedforward neural network setups which I'd like to compare for accuracy against the ones used professionally today. Are there specific data sets, or types of data sets, that can serve as a benchmark for this? I.e., "the style of ANN typically used for such-and-such a task is 98% accurate on this data set." It would be great to have a variety of these: a couple for statistical analysis, a couple for image and voice recognition, …

Sklearn metrics values are very different from Keras values

China☆狼群 · submitted on 2019-12-13 15:22:27

Question: I need some help understanding how accuracy is calculated when fitting a model in Keras. This is a sample training history:

```
Train on 340 samples, validate on 60 samples
Epoch 1/100
340/340 [==============================] - 5s 13ms/step - loss: 0.8081 - acc: 0.7559 - val_loss: 0.1393 - val_acc: 1.0000
Epoch 2/100
340/340 [==============================] - 3s 9ms/step - loss: 0.7815 - acc: 0.7647 - val_loss: 0.1367 - val_acc: 1.0000
Epoch 3/100
340/340 [==================…
```
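One frequent source of the Keras-vs-sklearn gap: for a sigmoid output, Keras's binary accuracy thresholds the predicted probabilities at 0.5, while sklearn's `accuracy_score` expects hard class labels, so feeding it raw probabilities gives meaningless numbers. A hand-written sketch of the thresholding (not the library's code):

```python
def binary_accuracy(y_true, y_prob, threshold=0.5):
    """Fraction of samples whose thresholded probability matches the label."""
    hits = sum(int(p > threshold) == y for y, p in zip(y_true, y_prob))
    return hits / len(y_true)

y_true = [1, 0, 0, 1, 1]
y_prob = [0.91, 0.40, 0.62, 0.77, 0.13]

# Thresholded predictions are [1, 0, 1, 1, 0]: three of five match.
print(binary_accuracy(y_true, y_prob))  # 0.6
```

Converting probabilities to labels first (e.g. `p > 0.5`) before calling sklearn metrics usually reconciles the two numbers.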

tensorflow: batches of variable-sized images

二次信任 · submitted on 2019-12-13 15:00:47

Question: When one passes elements to tf.train.batch, it looks like the shape of each element has to be strictly defined; otherwise it complains "All shapes must be fully defined" if there exist Tensors with shape Dimension(None). How, then, does one train on images of different sizes?

Answer 1: You can set dynamic_pad=True in the arguments of tf.train.batch:

dynamic_pad: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same …
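What `dynamic_pad` does can be sketched without TensorFlow: every element in the dequeued batch is padded up to the largest size along each variable dimension. A plain-Python version for 1-D sequences:

```python
def pad_batch(sequences, pad_value=0):
    """Pad every sequence to the length of the longest one in the batch."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

batch = [[1, 2], [3, 4, 5, 6], [7]]
padded = pad_batch(batch)
print(padded)  # [[1, 2, 0, 0], [3, 4, 5, 6], [7, 0, 0, 0]]
```

For images the same idea applies per spatial dimension, which is why all shapes no longer need to be fully defined up front.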

How to predict using model generated by Torch?

|▌冷眼眸甩不掉的悲伤 · submitted on 2019-12-13 14:41:26

Question: I have executed neuralnetwork_tutorial.lua. Now that I have the model, I would like to test it with some of my own handwritten images. I have tried many approaches: storing the weights, and now storing the complete model using Torch's save and load methods. However, when I try to predict my own handwritten images (converted to a 28x28 DoubleTensor) using model:forward(testImageTensor), I get:

```
...ches/torch/install/share/lua/5.1/dp/model/sequential.lua:30: attempt to index local 'carry' (a nil …
```
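Whatever resolves the dp-specific 'carry' error, the test image must also match the training preprocessing: same 28x28 shape, same pixel scaling. A hypothetical Python sketch of that sanity check (the Lua/Torch side is analogous):

```python
# Hypothetical 28x28 grayscale image with pixel values in 0..255.
image = [[(r * 28 + c) % 256 for c in range(28)] for r in range(28)]

# Flatten and rescale to [0, 1], matching a typical MNIST training setup.
flat = [px / 255.0 for row in image for px in row]

assert len(flat) == 28 * 28                      # shape the model was trained on
assert 0.0 <= min(flat) <= max(flat) <= 1.0      # scaled like the training data
print(len(flat))  # 784
```

A shape or scale mismatch between training and inference data is a common cause of otherwise puzzling forward-pass failures.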