deep-learning

expected input to have 4 dimensions, but got array with shape

泪湿孤枕 submitted on 2020-02-27 07:11:26
Question: I get this error: Error when checking input: expected input_13 to have 4 dimensions, but got array with shape (7, 100, 100). For the following code, how should I reshape the array so it has 4 dimensions? I searched for it but didn't understand the previous solutions. Please ask if anything is unclear; this is a very common issue with convolutional neural networks.

inputs = Input(shape=(100, 100, 1))
x = Conv2D(16, (3, 3), padding='same')(inputs)
x = Activation('relu')(x)
x = Conv2D(8, (3, 3))(x)
x = Activation('relu')(x)
x
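A minimal sketch of the usual fix, assuming the 7 grayscale images are stored in a NumPy array (the variable name images is made up here): add a trailing channel axis so the batch matches the (100, 100, 1) input shape the model expects.

import numpy as np

images = np.random.rand(7, 100, 100)        # placeholder for the real 7-image batch
images = np.expand_dims(images, axis=-1)    # (7, 100, 100) -> (7, 100, 100, 1)
print(images.shape)                         # (7, 100, 100, 1)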

How do you compute accuracy in a regression model, after rounding predictions to classes, in keras?

梦想与她 submitted on 2020-02-26 23:08:56
Question: How would you create and display an accuracy metric in Keras for a regression problem, for example after rounding the predictions to the nearest integer class? Accuracy is not conventionally defined for a regression problem, but when determining ordinal classes/labels for data it can be suitable to treat the problem as a regression, and then it is convenient to also calculate an accuracy metric, whether kappa or something else like that. Here is a basic Keras
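A minimal sketch of one way to do this (not taken from the question's own code; the input size and layer widths are made up): define a custom Keras metric that rounds the regression output before comparing it with the integer labels.

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def rounded_accuracy(y_true, y_pred):
    # fraction of samples where the rounded prediction equals the integer label
    return K.mean(K.equal(K.round(y_true), K.round(y_pred)))

model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),   # 10 input features, made up
    Dense(1)                                           # linear output for regression
])
model.compile(optimizer='adam', loss='mse', metrics=[rounded_accuracy])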

LSTM-Keras Error: ValueError: non-broadcastable output operand with shape (67704,1) doesn't match the broadcast shape (67704,12)

两盒软妹~` submitted on 2020-02-26 10:02:30
Question: Good morning everyone. I'm trying to implement this LSTM algorithm using Keras, with pandas to read in the CSV file. The backend I'm using is TensorFlow. I'm having a problem when it comes to inverse-transforming my results after predicting on the training set. Below is my code:

import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from
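This error usually means the MinMaxScaler was fitted on all 12 columns of the CSV, so it cannot inverse-transform a single-column array of predictions. A minimal sketch of one common workaround, with made-up array names and a stand-in for the real data: keep a separate scaler for the target column.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.random.rand(67704, 12)     # stand-in for the 12-column dataset
target = data[:, [0]]                # stand-in for the single column being predicted

feature_scaler = MinMaxScaler()
target_scaler = MinMaxScaler()
scaled_data = feature_scaler.fit_transform(data)
scaled_target = target_scaler.fit_transform(target)

# ... fit the LSTM on the scaled data, then invert only the target column:
predictions = scaled_target                              # placeholder for the model's output
predictions = target_scaler.inverse_transform(predictions)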


How to use model.reset_states() in Keras?

你说的曾经没有我的故事 submitted on 2020-02-26 08:37:49
Question: I have sequential data and I declared an LSTM model in Keras that predicts y from x. If I call model.predict(x1) and model.predict(x2), is it correct to explicitly call model.reset_states() between the two predict() calls? And model.reset_states() clears the history of inputs, not the weights, right?

# data 1
x1 = [2,4,2,1,4]
y1 = [1,2,3,2,1]
# data 2
x2 = [5,3,2,4,5]
y2 = [5,3,2,3,2]

In my actual code I use model.evaluate(). Inside evaluate(), is reset_states() called implicitly for each data sample? model
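A minimal sketch of the usual pattern, assuming a stateful LSTM (the model shape and layer sizes are made up; reset_states() only matters when stateful=True, and it clears the layer's hidden/cell state, not the weights):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential([
    LSTM(8, stateful=True, batch_input_shape=(1, 5, 1)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')

x1 = np.array([2, 4, 2, 1, 4], dtype='float32').reshape(1, 5, 1)
x2 = np.array([5, 3, 2, 4, 5], dtype='float32').reshape(1, 5, 1)

p1 = model.predict(x1)
model.reset_states()      # clears the carried-over hidden/cell state, not the weights
p2 = model.predict(x2)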

why Keras 2D regression network has constant output

眉间皱痕 submitted on 2020-02-25 07:12:21
Question: I am working on a kind of 2D regression deep network with Keras, but the network produces a constant output for every dataset, even when I test with a handmade dataset. In this code I feed the network constant 2D values and the target is a linear function of X (2*X/100), but the output is still constant.

import resource
import glob
import gc
rsrc = resource.RLIMIT_DATA
soft, hard = resource.getrlimit(rsrc)
print('Soft limit starts as :', soft)
resource.setrlimit(rsrc, (4 * 1024 * 1024 * 1024,
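The excerpt does not show the model itself, but a constant output in a Keras regression network commonly points to a saturating final activation or unscaled inputs/targets. A minimal sketch for reference only, with made-up data matching the y = 2*X/100 relationship, scaled inputs, and a linear output layer:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.uniform(0, 100, size=(1000, 2))   # made-up 2D inputs
y = 2 * X[:, 0] / 100                           # linear target, y = 2*X/100

model = Sequential([
    Dense(16, activation='relu', input_shape=(2,)),
    Dense(1)                                    # linear (default) activation for regression
])
model.compile(optimizer='adam', loss='mse')
model.fit(X / 100.0, y, epochs=20, batch_size=32, verbose=0)   # inputs scaled to [0, 1]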


How to feed a DataGenerator for a Keras multilabel problem?

烂漫一生 submitted on 2020-02-25 04:15:31
Question: I am working on a multilabel classification problem with Keras. When I execute the code like this I get the following error: ValueError: Error when checking target: expected activation_19 to have 2 dimensions, but got array with shape (32, 6, 6). This is because of my lists full of "0" and "1" in the labels dictionary, which don't fit keras.utils.to_categorical in the return statement, as I learned recently. softmax can't handle more than one "1" either. I guess I first need a Label_Encoder and
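For multilabel targets the usual approach is to have the generator return one multi-hot vector per sample (no to_categorical) and to finish the model with a sigmoid output trained with binary_crossentropy. A minimal sketch with made-up shapes (6 labels, batch size 32, 100 features), not the question's actual generator:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

num_labels = 6
X_batch = np.random.rand(32, 100)                         # made-up feature batch
y_batch = np.random.randint(0, 2, size=(32, num_labels))  # multi-hot labels, one row per sample

model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(num_labels, activation='sigmoid')   # independent probability per label
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.train_on_batch(X_batch, y_batch)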

ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size

霸气de小男生 submitted on 2020-02-25 04:06:12
Question: I am new to Keras and I was trying to build a text-classification CNN model using Python 3.6 when I encountered this error:

Traceback (most recent call last):
  File "model.py", line 94, in <module>
    model.fit([x1, x2], y_label, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_split=0.2)  # starts training
  File "/../../anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 955, in fit
    batch_size=batch_size)
  File "/../../anaconda3/lib/python3.6
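This error often appears when the inputs passed to model.fit are SciPy sparse matrices (for example from a text vectorizer) or plain Python lists rather than dense NumPy arrays. A minimal sketch of the usual workaround; the excerpt does not show where x1 and x2 come from, so the conversion helper below is hypothetical:

import numpy as np
from scipy import sparse

def to_dense_array(x):
    # Convert sparse matrices or lists to dense NumPy arrays before calling model.fit.
    if sparse.issparse(x):
        return x.toarray()
    return np.asarray(x)

# hypothetical usage with the question's variable names:
# x1, x2 = to_dense_array(x1), to_dense_array(x2)
# model.fit([x1, x2], y_label, batch_size=batch_size, epochs=epochs)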
