neural-network

Setting up the input on an RNN in Keras

十年热恋 submitted on 2019-12-23 04:29:12

Question: I have a specific question about setting up the input in Keras. I understand that the sequence length refers to the window length of the longest sequence you want to model, with the rest padded by 0's. However, how do I set up something that is already a time-series array? For example, right now I have an array that is 550k x 28: 550k rows, each with 28 columns (27 features and 1 target). Do I have to manually split the array into (550k - sequence length)
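One common way to turn a flat time-series array into RNN input is a sliding window: each sample is `seq_len` consecutive rows of features, and the target is the value that follows the window. A minimal numpy sketch (the 550k x 28 array is stood in for by a smaller random one, and the window length is illustrative):

```python
import numpy as np

def make_windows(data, seq_len):
    """Slice a (rows, 28) array into overlapping windows.

    Returns X of shape (rows - seq_len, seq_len, 27) using the first 27
    columns as features, and y of shape (rows - seq_len,) using the last
    column as the target right after each window.
    """
    X, y = [], []
    for i in range(len(data) - seq_len):
        X.append(data[i:i + seq_len, :-1])   # seq_len steps of 27 features
        y.append(data[i + seq_len, -1])      # target just after the window
    return np.array(X), np.array(y)

# toy stand-in for the 550k x 28 array
data = np.random.rand(1000, 28)
X, y = make_windows(data, seq_len=10)
print(X.shape, y.shape)  # (990, 10, 27) (990,)
```

The resulting `X` already has the `(samples, timesteps, features)` shape Keras RNN layers expect, so no zero-padding is needed when every window has the same length.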

Having a batch program learn

强颜欢笑 submitted on 2019-12-23 04:08:01

Question: I am making a chat bot for my sister in batch, but it is consuming so much time that I figured I would let her tell it what to say when it does not know. However, I cannot get it working, and I figured someone on here might know. Here is what I have so far:

```batch
@ECHO OFF
COLOR A
cls
ECHO HELLO I AM A CHATBOT. WHAT IS YOUR NAME?
SET /P NAME=
ECHO %NAME%, IS A COOL NAME.
set /a favvid=0
set /a hack=0
:hello
echo Hello,%name%
:begin
SET /P TALK=
if /i "%TALK%"== "how are you" goto howareyou
if /i "
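The "teach me" pattern the question is after is easiest to see outside of batch. A minimal sketch in Python (not the asker's batch code; the canned replies and function names are hypothetical): when the bot has no stored answer, the user supplies one and the bot remembers it.

```python
# known input -> response table; batch has no dictionaries, which is
# largely why this pattern is awkward there
responses = {"how are you": "I'm fine, thanks!"}

def reply(text):
    """Return the stored response, or None if the bot doesn't know one."""
    return responses.get(text.lower().strip())

def teach(text, answer):
    """Store a user-supplied answer for an unknown input."""
    responses[text.lower().strip()] = answer

print(reply("How are you"))   # known input
teach("what is your name", "I am a chatbot.")
print(reply("what is your name"))  # now known
```

In an interactive loop you would prompt the user (with `input()`) whenever `reply` returns `None`. In batch itself, the usual workaround is to append new `if /i` lines or key-value pairs to a separate data file that the script reads back in.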

Applying neural network to MFCCs for variable-length speech segments

醉酒当歌 submitted on 2019-12-23 04:07:31

Question: I'm currently trying to create and train a neural network to perform simple speech classification using MFCCs. At the moment I'm using 26 coefficients for each sample and a total of 5 different classes: five different words with varying numbers of syllables. While each sample is 2 seconds long, I am unsure how to handle cases where the user pronounces words either very slowly or very quickly. E.g., the word 'television' spoken within 1 second yields different coefficients than
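One simple way to make utterances of different speaking rates comparable is to pad or truncate every MFCC matrix to a fixed number of frames before feeding the network. A numpy sketch (the 26-coefficient count matches the question; the target frame count is an assumption):

```python
import numpy as np

def fix_length(mfcc, n_frames):
    """Zero-pad or truncate an (n_coeffs, t) MFCC matrix to n_frames columns."""
    n_coeffs, t = mfcc.shape
    if t >= n_frames:
        return mfcc[:, :n_frames]        # fast speech: drop trailing frames
    out = np.zeros((n_coeffs, n_frames))
    out[:, :t] = mfcc                    # slow speech: pad with silence
    return out

fast = np.random.rand(26, 40)   # word spoken quickly: 40 frames
slow = np.random.rand(26, 120)  # same word slowly: 120 frames
print(fix_length(fast, 80).shape, fix_length(slow, 80).shape)  # (26, 80) (26, 80)
```

Padding keeps the input size fixed but does not normalize tempo; alternatives include time-stretching the audio before feature extraction or using a sequence model that accepts variable-length input.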

Keras sentiment analysis with LSTM how to test it

隐身守侯 submitted on 2019-12-23 03:46:09

Question: I'm trying to do sentiment analysis with Keras on my texts, using the imdb_lstm.py example, but I don't know how to test it. I stored my model and weights in files, and it looks like this:

```python
model = model_from_json(open('my_model_architecture.json').read())
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.load_weights('my_model_weights.h5')
results = model.evaluate(X_test, y_test, batch_size=32)
```

but of course I don't know what X_test and y_test should look like.
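`X_test` must be built exactly like the training data: each text mapped to a sequence of word indices with the same vocabulary, then padded to the same `maxlen`; `y_test` is one binary label per text. A minimal numpy stand-in for the pre-padding that `keras.preprocessing.sequence.pad_sequences` performs (the toy vocabulary here is an assumption for illustration):

```python
import numpy as np

vocab = {"this": 1, "movie": 2, "was": 3, "great": 4, "terrible": 5}

def texts_to_padded(texts, maxlen):
    """Map texts to index sequences and left-pad with zeros to maxlen."""
    out = np.zeros((len(texts), maxlen), dtype=int)
    for i, text in enumerate(texts):
        seq = [vocab.get(w, 0) for w in text.lower().split()][:maxlen]
        out[i, maxlen - len(seq):] = seq   # pad on the left, like Keras
    return out

X_test = texts_to_padded(["this movie was great", "terrible"], maxlen=6)
y_test = np.array([1, 0])                  # one binary label per text
print(X_test)
```

In practice you should reuse the exact tokenizer (word-to-index mapping) and `maxlen` that produced the training data, otherwise the indices fed to the loaded model are meaningless.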

How to handle gradients when training two sub-graphs simultaneously

浪尽此生 submitted on 2019-12-23 03:43:10

Question: The general idea I am trying to realize is a seq2seq model (taken from the translate.py example in the models repository, based on the seq2seq class). This trains well. Furthermore, I am using the hidden state of the RNN after all the encoding is done, right before decoding starts (I call it the "hidden state at end of encoding"). I feed this hidden state at the end of encoding into a further sub-graph, which I call "prices" (see below). The training gradients of this sub-graph backprop not only
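The core behavior at play: when two heads share an encoder, the gradient the encoder receives is the sum of each head's gradient with respect to the shared hidden state. A toy numpy illustration (not TensorFlow; the linear heads and quadratic losses are hypothetical):

```python
import numpy as np

h = np.array([1.0, 2.0])         # shared "hidden state at end of encoding"
w_dec = np.array([0.5, -1.0])    # decoder head (hypothetical linear layer)
w_price = np.array([2.0, 0.3])   # "prices" head

# two quadratic losses, each depending on h through its own head
loss_dec = 0.5 * (w_dec @ h) ** 2
loss_price = 0.5 * (w_price @ h) ** 2

# gradient of each loss w.r.t. h, then the combined gradient the encoder sees
g_dec = (w_dec @ h) * w_dec
g_price = (w_price @ h) * w_price
g_total = g_dec + g_price
print(g_total)  # [4.45 2.28]
```

If you want only one head to train the encoder, the standard trick is to cut the other path with a stop-gradient (e.g. `tf.stop_gradient` on the hidden state before it enters the "prices" sub-graph).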

keras autoencoder “Error when checking target”

痴心易碎 submitted on 2019-12-23 03:32:15

Question: I'm trying to adapt the 2D convolutional autoencoder example from the Keras website (https://blog.keras.io/building-autoencoders-in-keras.html) to my own case, where I use 1D inputs:

```python
from keras.layers import Input, Dense, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model
from keras import backend as K
import scipy as scipy
import numpy as np

mat = scipy.io.loadmat('edata.mat')
emat = mat['edata']

input_img = Input(shape=(64, 1))  # adapt this if using `channels_first` image data
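"Error when checking target" in an autoencoder usually means the decoder's output shape no longer matches the input used as the target: each MaxPooling1D halves the length (rounding up with 'same' padding), and UpSampling1D doubles it, so the round trip only returns the original length when it divides evenly. A quick sketch of the length arithmetic (pool size 2 assumed):

```python
def encode_decode_length(n, n_pools, pool=2):
    """Track the sequence length down through n_pools MaxPooling1D layers
    and back up through the matching UpSampling1D layers."""
    down = n
    for _ in range(n_pools):
        down = -(-down // pool)      # ceil division, as 'same' pooling does
    up = down * pool ** n_pools      # each UpSampling1D multiplies by pool
    return up

print(encode_decode_length(64, 3))  # 64 -> 32 -> 16 -> 8 -> back to 64: OK
print(encode_decode_length(50, 3))  # 50 -> 25 -> 13 -> 7 -> 56: mismatch!
```

With an input length of 64 and up to three pooling stages the shapes round-trip cleanly; a mismatch like the second case is what triggers the target-checking error.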

Matlab neural network tool box,on extract weight & bias from feedforwardnet

£可爱£侵袭症+ submitted on 2019-12-23 03:32:11

Question: My problem is simple. I have trained a feedforwardnet, and now I want to extract its weights and biases so I can test it in another programming language. But when I test those trained weights with my own code, it always returns different results than the neural toolbox. Here is my code:

```matlab
close all
RandStream.setGlobalStream(RandStream('mrg32k3a','Seed', 1234));
[x,t] = simplefit_dataset;
plot(t)
hold on
topo = [2]
net = feedforwardnet(topo);
net = train(net,x,t);
view(net)
y = net(x);
plot(y
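A common cause of this mismatch is that `feedforwardnet` normalizes its inputs and outputs with `mapminmax` by default, so applying the raw weights alone does not reproduce `net(x)`. A numpy sketch of the full pipeline for a one-input, [2]-neuron, one-output net (tansig hidden layer, linear output; all weight values below are hypothetical):

```python
import numpy as np

def mapminmax_apply(x, xmin, xmax):
    return 2 * (x - xmin) / (xmax - xmin) - 1      # scale into [-1, 1]

def mapminmax_reverse(y, ymin, ymax):
    return (y + 1) * (ymax - ymin) / 2 + ymin      # undo the scaling

def forward(x, IW, b1, LW, b2, xmin, xmax, ymin, ymax):
    xn = mapminmax_apply(x, xmin, xmax)            # input preprocessing
    h = np.tanh(IW @ xn + b1)                      # tansig hidden layer
    yn = LW @ h + b2                               # purelin output layer
    return mapminmax_reverse(yn, ymin, ymax)       # output postprocessing

# hypothetical numbers, just to show the call shape
IW = np.array([[0.4], [-1.2]]); b1 = np.array([[0.1], [0.2]])
LW = np.array([[0.7, 0.5]]);    b2 = np.array([[0.05]])
out = forward(np.array([[3.0]]), IW, b1, LW, b2, 0.0, 10.0, 0.0, 10.0)
print(out)
```

In MATLAB the real values come from `net.IW{1,1}`, `net.b{1}`, `net.LW{2,1}`, `net.b{2}`, and the normalization ranges from `net.inputs{1}.processSettings` and `net.outputs{end}.processSettings`.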

Optimization of Neural Network input data

走远了吗. submitted on 2019-12-23 03:14:43

Question: I'm trying to build an app to detect which images on a webpage are advertisements. Once I detect those, I won't allow them to be displayed on the client side. Basically, I'm using the back-propagation algorithm to train the neural network on the dataset given here: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements. But that dataset has a very large number of attributes. In fact, one of the mentors of the project told me that if you train the neural network with that many
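A standard remedy for a very wide input like this dataset is dimensionality reduction before training, e.g. projecting onto the top principal components. A minimal PCA via SVD in numpy (the toy data and component count are illustrative, not the actual dataset):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center each attribute
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # keep the top directions

X = np.random.rand(200, 50)                      # toy stand-in for the data
X_small = pca_reduce(X, 10)
print(X_small.shape)  # (200, 10)
```

Feature selection (dropping low-information attributes) is the other common route; either way, the same transformation fitted on the training set must be applied to new inputs at prediction time.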

Train Keras Stateful LSTM return_seq=true not learning

一世执手 submitted on 2019-12-23 03:04:14

Question: Consider this minimal runnable example:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
import numpy as np
import matplotlib.pyplot as plt

max = 30
step = 0.5
n_steps = int(30/0.5)
x = np.arange(0,max,step)
x = np.cos(x)*(max-x)/max
y = np.roll(x,-1)
y[-1] = x[-1]
shape = (n_steps,1,1)
batch_shape = (1,1,1)
x = x.reshape(shape)
y = y.reshape(shape)

model = Sequential()
model.add(LSTM(50, return_sequences=True, stateful=True, batch_input_shape
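The data preparation above in isolation: the target is the series shifted one step forward, and each scalar becomes its own `(timesteps=1, features=1)` sample, so a stateful LSTM with batch size 1 walks the sequence in order and carries state between samples. A numpy sketch verifying that construction (`high` replaces the original's `max`, which shadows the Python builtin):

```python
import numpy as np

step, high = 0.5, 30
x = np.arange(0, high, step)
x = np.cos(x) * (high - x) / high   # damped cosine, 60 points
y = np.roll(x, -1)
y[-1] = x[-1]                       # avoid wrapping the last target around

assert np.allclose(y[:-1], x[1:])   # y really is "next value of x"

x3 = x.reshape(-1, 1, 1)            # (samples, timesteps, features)
print(x3.shape)  # (60, 1, 1)
```

With data shaped this way, remember that stateful training requires unshuffled fitting (`shuffle=False`) and a manual `model.reset_states()` between epochs, or the carried state is meaningless.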