pybrain

don't print results of a function

Submitted by 一曲冷凌霜 on 2019-12-12 01:09:54
Question: I am using Python and PyBrain for neural networks. Unfortunately, my sample is really big, and when the program prints the errors during training, my memory fills up before the program completes. Is there any way to stop the functions from printing the errors? Note that this is not a Python error: it is a PyBrain feature. It prints the difference between the prediction and the real sample, for example "error: 0.00424", each time it makes a prediction. Here is my code: ds = SupervisedDataSet(1,
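The thread is cut off before any answer appears. A plausible fix (an assumption, not something confirmed in this excerpt) is that the per-step "error: ..." lines come from the trainer's verbosity: BackpropTrainer accepts a verbose flag, and train() returns the error value rather than printing it, so the result can simply be kept in a variable:

import random
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.tools.shortcuts import buildNetwork

ds = SupervisedDataSet(1, 1)                       # dimensions assumed for illustration
for _ in range(20):
    x = random.random()
    ds.addSample((x,), (2 * x,))
net = buildNetwork(1, 3, 1)
trainer = BackpropTrainer(net, ds, verbose=False)  # verbose=False keeps training quiet
err = trainer.train()                              # store the epoch error instead of printing it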

Unstable output values from ANN and improving accuracy

Submitted by 久未见 on 2019-12-11 08:10:45
Question: I am trying to develop an artificial neural network using PyBrain to model biological data. My ANN compiles and runs, but its accuracy is very low, never surpassing ~62%. From a coding perspective, how can I improve the ANN's accuracy? I also noticed that the outputs of the ANN are not the same on each run, even though the test data set doesn't change. Is there a reason the ANN acts so unstably, and how can I improve this? Thank you! :) Answer 1: If you are creating a new
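The answer is truncated above, but the visible fragment suggests it concerns creating a new network on each run, which initializes the weights randomly and therefore yields different outputs every time (an assumption based on that fragment). Seeding the random number generators before building the network makes runs repeatable:

import random
import numpy
from pybrain.tools.shortcuts import buildNetwork

# Fix the seeds so weight initialization is identical on every run
random.seed(42)
numpy.random.seed(42)

net = buildNetwork(4, 6, 1)   # layer sizes are placeholders for illustration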

Pybrain multi dimensional data input

Submitted by 不想你离开。 on 2019-12-11 06:25:30
Question: I have some data with a dimension of 8x128 for each record, and I want to train a neural network on it. Does anyone have any examples of using multi-dimensional data as input to a PyBrain neural network? I've searched the documentation and found only single-dimensional input examples. Answer 1: Neural networks don't care about the dimensionality of your input data; just serialize it (reshape([1024])) and feed that as input. Source: https://stackoverflow.com
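A minimal sketch of the answer's suggestion, assuming each record is a NumPy array (the 8x128 shape comes from the question; the dataset dimensions and target are illustrative):

import numpy as np
from pybrain.datasets import SupervisedDataSet

record = np.random.rand(8, 128)    # one 8x128 record, random for illustration
flat = record.reshape(1024)        # 8 * 128 = 1024 serialized input values

ds = SupervisedDataSet(1024, 1)    # a one-dimensional target is assumed here
ds.addSample(flat, (0.0,))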

How to denormalise (de-standardise) neural net predictions after normalising input data

Submitted by 霸气de小男生 on 2019-12-10 11:59:57
Question: How does one return to the original data scale after normalising the input data for the neural net? Normalisation was done with the standard-deviation method. The underlying problem has already been discussed elsewhere: the net was returning the same value for every input. I followed the advice there and normalised the data. Is there an obvious way to get adequate predictions (ones that differ from each other) back on the non-normalised scale? With normalised inputs, the outputs are relatively acceptable
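No answer survives in this excerpt, but the standard inverse of z-score standardisation (assuming that is what "the standard-deviation method" refers to) is to multiply by the training-set standard deviation and add back the mean:

import numpy as np

y_train = np.array([120.0, 135.0, 150.0, 160.0])   # illustrative target values
mean, std = y_train.mean(), y_train.std()

y_norm = (y_train - mean) / std              # standardise before training
# ... train the net on y_norm ...
prediction_norm = 0.3                        # a hypothetical net output on the standardised scale
prediction = prediction_norm * std + mean    # back on the original scale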

_convertToOneOfMany in PyBrain

Submitted by 不问归期 on 2019-12-08 17:22:46
Question: I am following the PyBrain tutorial Classification with Feed-Forward Neural Networks and want to build my own classifier. I do not understand how _convertToOneOfMany modifies the outputs. Why would the initial operation alldata.addSample(input, [klass]) create more than one output neuron per class? Answer 1: Never mind, here is the doc explaining this: http://pybrain.org/docs/tutorial/datasets.html Answer 2: Given target classes [0, 1, 2], this function translates them into one-hot vectors: 0 becomes (1,0,0), 1 becomes (0,1,0), and 2 becomes (0,0,1). This is because many algorithms work
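A small demonstration of that conversion (a sketch; the two-feature samples are made up):

from pybrain.datasets import ClassificationDataSet

ds = ClassificationDataSet(2, nb_classes=3)
ds.addSample((0.1, 0.2), [0])
ds.addSample((0.4, 0.5), [1])
ds.addSample((0.8, 0.9), [2])

ds._convertToOneOfMany()   # class indices become one-hot target vectors
print(ds['target'])        # [[1 0 0] [0 1 0] [0 0 1]]
print(ds.outdim)           # 3: one output neuron per class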

What is `target` in `ClassificationDataSet` good for?

Submitted by 蓝咒 on 2019-12-07 20:12:52
Question: I've tried to find out what the parameter target of ClassificationDataSet can be used for, but I'm still not clear about it. What I've tried:

>>> from pybrain.datasets import ClassificationDataSet
>>> help(ClassificationDataSet)
Help on class ClassificationDataSet in module pybrain.datasets.classification:

class ClassificationDataSet(pybrain.datasets.supervised.SupervisedDataSet)
 |  Specialized data set for classification data. Classes are to be numbered from 0 to nb_classes-1.
 |
 |  Method
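The thread is cut off before an answer. For what it's worth (an assumption drawn from the constructor's usual usage, not from this thread): target is the dimensionality of the target field, normally left at its default of 1 because each sample stores a single class index; the expansion to one output per class happens later via _convertToOneOfMany:

from pybrain.datasets import ClassificationDataSet

# target=1 (the default) means each sample's target is one number: the class index
ds = ClassificationDataSet(4, target=1, nb_classes=2)
ds.addSample((0.1, 0.2, 0.3, 0.4), [1])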

Training an LSTM neural network to forecast time series in pybrain, python

Submitted by 与世无争的帅哥 on 2019-12-06 17:28:17
I have a neural network created using PyBrain and designed to forecast time series. I am using the sequential dataset, and trying to use a sliding window of 5 previous values to predict the 6th. One of my problems is that I can't figure out how to build the required dataset by appending the 5 previous values to the inputs and the 6th as an output. I am also unsure of how exactly to forecast values in the series once the network is trained. Posting my code below:

from pybrain.datasets import SupervisedDataSet
from pybrain.datasets import SequentialDataSet
from pybrain.tools.shortcuts
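A sketch of the windowing step the question asks about (the window of 5 and single-step target come from the question; the series itself is made up):

from pybrain.datasets import SequentialDataSet

series = [1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0]   # illustrative data
window = 5

ds = SequentialDataSet(window, 1)
for i in range(len(series) - window):
    # inputs: 5 consecutive values; target: the 6th
    ds.addSample(series[i:i + window], (series[i + window],))

# Once a network `net` is trained, a one-step-ahead forecast would be
# net.activate(series[-window:])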

How to load training data in PyBrain?

Submitted by 橙三吉。 on 2019-12-04 04:09:39
I am trying to use PyBrain for some simple NN training. What I don't know how to do is load the training data from a file; it is not explained anywhere on their website. I don't care about the format, because I can define it now, but I need to read it from a file rather than adding rows manually, since I will have several hundred rows. Here is how I did it:

ds = SupervisedDataSet(6,3)
tf = open('mycsvfile.csv','r')
for line in tf.readlines():
    data = [float(x) for x in line.strip().split(',') if x != '']
    indata = tuple(data[:6])
    outdata = tuple(data[6:])
    ds.addSample(indata,outdata)
n =
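A slightly tidier variant of the same loop (a sketch; the filename and the 6-input/3-output split come from the code above), using the csv module and a with block so the file is always closed:

import csv
from pybrain.datasets import SupervisedDataSet

ds = SupervisedDataSet(6, 3)
with open('mycsvfile.csv', 'r') as tf:
    for row in csv.reader(tf):
        values = [float(x) for x in row if x != '']
        ds.addSample(tuple(values[:6]), tuple(values[6:]))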

Pybrain cross-validation method

Submitted by 爱⌒轻易说出口 on 2019-12-03 16:26:00
I'm trying to use the cross-validator on my data, but I'm getting a 0.0 success rate, which doesn't make sense. My data consists of samples with 5 continuous attributes and two possible classes: "y" and "n". My code:

net = pybrain.tools.shortcuts.buildNetwork(5, 8, 1)
trainer = BackpropTrainer(net, ds)
evaluation = ModuleValidator.classificationPerformance(trainer.module, ds)
validator = CrossValidator(trainer=trainer, dataset=trainer.ds, n_folds=5, valfunc=evaluation)
print(validator.validate())

When I do regular training like so:

print(trainer.train())

I get a reasonable error
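The excerpt ends before any answer. One plausible diagnosis (an assumption, not confirmed by the thread): classificationPerformance compares the net's outputs to the targets for exact equality, and a single continuous output unit virtually never hits 0 or 1 exactly, so every fold scores 0.0; passing the already-computed number as valfunc instead of the callable itself is suspect as well. A sketch that addresses both, with made-up data standing in for the 5-attribute samples:

import random
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tools.validation import CrossValidator, ModuleValidator, Validator

ds = SupervisedDataSet(5, 1)
for _ in range(60):                      # made-up data for illustration
    row = [random.random() for _ in range(5)]
    label = 1 if sum(row) > 2.5 else 0   # "y" -> 1, "n" -> 0
    ds.addSample(row, (label,))

net = buildNetwork(5, 8, 1)
trainer = BackpropTrainer(net, ds)

def rounded_performance(module, dataset):
    # Round the continuous activations to 0/1 before the exact
    # comparison done by Validator.classificationPerformance.
    return ModuleValidator.validate(
        lambda output, target: Validator.classificationPerformance(
            output.round(), target),
        module, dataset)

# Pass the function itself as valfunc, not the result of calling it
validator = CrossValidator(trainer=trainer, dataset=trainer.ds,
                           n_folds=5, valfunc=rounded_performance)
print(validator.validate())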