keras

Generating data for training in Keras

做~自己de王妃 submitted on 2021-01-28 06:39:56
Question: My training set is really quite large. The entire thing would take up about 120 GB of RAM, so I can't even allocate the numpy.zeros() array to store the data. From what I've seen, a generator works well when the entire dataset is already loaded into an array and then fed into the network incrementally and deleted afterwards. Is it all right for the generator to create the arrays, insert the data, feed the data to the network, and then delete the data? Or will that whole process take too
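
The excerpt is cut off above, but the usual answer to this kind of question is a generator that never holds more than one batch in memory, e.g. a keras.utils.Sequence subclass. A minimal sketch, assuming each sample is stored as its own .npy file on disk (the file layout, paths, and shapes here are hypothetical, not the asker's):

```python
import numpy as np
import tensorflow as tf

class DiskBatchGenerator(tf.keras.utils.Sequence):
    """Loads one batch at a time from per-sample .npy files on disk."""

    def __init__(self, file_paths, labels, batch_size=32):
        self.file_paths = file_paths   # list of paths to per-sample .npy files
        self.labels = labels           # labels in the same order as file_paths
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch
        return int(np.ceil(len(self.file_paths) / self.batch_size))

    def __getitem__(self, idx):
        # Load just this batch from disk; the arrays are freed once the
        # batch has been consumed, so memory use stays at one batch.
        paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = np.stack([np.load(p) for p in paths])
        y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        return x, y

# Usage (hypothetical): model.fit(DiskBatchGenerator(train_paths, train_labels), epochs=10)
```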

Keras: Error when checking input

元气小坏坏 submitted on 2021-01-28 06:36:51
Question: I am using Keras autoencoders with the Theano backend, and want to build an autoencoder for 720x1080 RGB images. This is my code: from keras.datasets import mnist import numpy as np from keras.layers import Input, LSTM, RepeatVector, Conv2D, MaxPooling2D, UpSampling2D from keras.models import Model from PIL import Image x_train = [] x_train_noisy = [] for i in range(5,1000): image = Image.open('data/trailerframes/frame' + str(i) + '.jpg', 'r') x_train.append(np.array(image)) image = Image.open('data
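
The code above is truncated before the error appears, but "Error when checking input" in Keras almost always means the array passed to fit() does not match the model's Input shape. A minimal convolutional autoencoder sketch for this kind of data, assuming channels-last (720, 1080, 3) images; the placeholder arrays stand in for the PIL-loaded frames from the question:

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

# Placeholder data: the important part is that the stacked array is 4-D,
# (samples, 720, 1080, 3), and matches the model's Input shape exactly.
x_train = np.random.rand(4, 720, 1080, 3).astype('float32')
x_train_noisy = x_train + 0.1 * np.random.randn(*x_train.shape).astype('float32')

inputs = Input(shape=(720, 1080, 3))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(inputs)
x = MaxPooling2D((2, 2), padding='same')(x)          # -> (360, 540, 16)
encoded = Conv2D(8, (3, 3), activation='relu', padding='same')(x)

x = UpSampling2D((2, 2))(encoded)                      # back to (720, 1080, 8)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(x_train_noisy, x_train, epochs=1, batch_size=2)
```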

Keras merge Concatenate fails because of different input shapes even though the input shapes are the same

痴心易碎 submitted on 2021-01-28 06:35:39
Question: I am trying to concatenate 4 different layers into one layer to feed into the next part of my model. I am using the Keras functional API and the code is shown below. # Concat left side 4 inputs and right side 4 inputs print(lc, l1_conv_net, l2_conv_net, l3_conv_net) left_combined = merge.Concatenate()([lc, l1_conv_net, l2_conv_net, l3_conv_net]) This error occurs, saying that my input shapes are not the same. However, I also printed the input shapes and they seem to be the same except along
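
The question is truncated, but the general rule for Concatenate is that every axis except the concatenation axis must match exactly. A small self-contained sketch of the same pattern with made-up shapes, using tf.keras.layers.Concatenate:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Concatenate joins tensors along `axis` (default -1, the feature axis);
# every other axis, including any leading time/spatial axes, must match.
lc          = tf.keras.Input(shape=(10, 32))
l1_conv_net = tf.keras.Input(shape=(10, 64))
l2_conv_net = tf.keras.Input(shape=(10, 64))
l3_conv_net = tf.keras.Input(shape=(10, 128))

left_combined = layers.Concatenate(axis=-1)([lc, l1_conv_net, l2_conv_net, l3_conv_net])
print(left_combined.shape)   # (None, 10, 288) = 32 + 64 + 64 + 128 on the last axis
```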

Keras ValueError: Dimensions must be equal, but are 9 and 400 for '{{node Equal}}' with input shapes: [?,9], [?,300,400]

我们两清 submitted on 2021-01-28 06:11:48
Question: I'm trying to train a very simple Keras network to classify some one-hot encoded images saved as np.array. The input data consists of a .npy file with 500 images (3 arrays each, as they're RGB) and a one-hot encoded array for each image to determine its classification. Each image is 400x300 pixels (width x height), and the target output should have 9 classes. Hence, each image has a shape of (300, 400, 3) and each one-hot encoded label list has a length of 9. This is the code
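
The code is cut off above, but the error in the title ([?,9] targets compared against [?,300,400] activations) is the mismatch you get when the one-hot labels are checked against an output that was never reduced to 9 units. A hedged sketch of a network whose output does match the labels (layer sizes are made up, not the asker's):

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # input_shape matches the (height, width, channels) images described above
    layers.Conv2D(16, 3, activation='relu', input_shape=(300, 400, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),                       # collapse spatial dims before the classifier
    layers.Dense(9, activation='softmax'),  # 9 outputs to match the one-hot labels
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # one-hot labels -> categorical loss
              metrics=['accuracy'])
model.summary()
```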

Question About Dropout Layer and Batch Normalization Layer in DNN model

痴心易碎 submitted on 2021-01-28 06:04:02
Question: I have some queries about the Dropout layer and the Batch Normalization layer. Basically, I have made a simple DNN structure with a Dropout layer and a Batch Normalization layer and trained it; that works fine. The simple DNN model structure, for example: from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Dense(10, activation='relu', input_shape=[11]), layers.Dropout(0.3), layers.BatchNormalization(), layers.Dense(8, activation='relu'), layers.Dropout(0.3),
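
The model definition above is truncated mid-list; one plausible completion of the structure it describes is sketched below. The output layer, loss, and optimizer are assumptions, not the asker's actual code:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(10, activation='relu', input_shape=[11]),
    layers.Dropout(0.3),
    layers.BatchNormalization(),
    layers.Dense(8, activation='relu'),
    layers.Dropout(0.3),
    layers.BatchNormalization(),
    layers.Dense(1, activation='sigmoid'),   # assumed binary target
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Note: both Dropout and BatchNormalization behave differently in training
# vs. inference; Keras switches modes automatically in fit() and predict().
```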

Keras LSTM: Returns Empty Array on Prediction

喜夏-厌秋 submitted on 2021-01-28 05:20:31
Question: I am trying to write my first LSTM with Keras and I'm stuck. This is my training data structure: x_data = [1265, 12] y_data = [1265, 3] x_data example: [102.7, 100.69, 103.39, 99.6, 319037.0, 365230.0, 1767412, 102.86, 13.98] y_data example: [0, 0, 1] My model looks like the following: self._opt_cells = 12 self.model = Sequential() self.model.add(LSTM(units = self._opt_cells, return_sequences = True, input_shape = (12, 1))) self.model.add(Dropout(0.2)) self.model.add(LSTM(units = self.
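
The model code is truncated above; the sketch below is one way to complete the setup it describes. The second LSTM layer and the output layer are assumptions; the key points are reshaping the (1265, 12) features to (samples, timesteps, features) and ending with a 3-unit softmax to match the (1265, 3) targets:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# Placeholder data with the shapes described in the question
x_data = np.random.rand(1265, 12).astype('float32')
y_data = np.eye(3)[np.random.randint(0, 3, 1265)]      # one-hot labels like [0, 0, 1]

x_data = x_data.reshape((-1, 12, 1))                    # (samples, 12 timesteps, 1 feature)

model = Sequential([
    LSTM(12, return_sequences=True, input_shape=(12, 1)),
    Dropout(0.2),
    LSTM(12),                                           # last LSTM returns a single vector
    Dropout(0.2),
    Dense(3, activation='softmax'),                     # matches the 3-class targets
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# model.fit(x_data, y_data, epochs=20, batch_size=32)
# model.predict(x_data[:1]) -> shape (1, 3); it is never empty when shapes line up
```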

How can I convert YOLO weights to a tflite file?

不羁的心 submitted on 2021-01-28 05:19:27
Question: I want to use YOLO weights on Android, so I plan to convert the YOLO weights file to a tflite file. I use this code in the Anaconda prompt because I installed the Keras library in an env: activate env python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5 Finally, it worked: Saved Keras model to model_data/yolo.h5. And I'm going to convert this h5 file to a tflite file in a Jupyter notebook with this code: model = tf.keras.models.load_model("./yolo/yolo.h5", compile=False) converter = tf.lite.TFLiteConverter.from
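
The converter line is cut off above; the usual TensorFlow 2 path for this conversion is sketched below. The .h5 path is taken from the question, while the rest is a common pattern rather than the asker's exact code:

```python
import tensorflow as tf

# Load the converted Keras model (path from the question) and convert it to TFLite
model = tf.keras.models.load_model("./yolo/yolo.h5", compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flatbuffer to disk for use on Android
with open("yolo.tflite", "wb") as f:
    f.write(tflite_model)
```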

Bias-only Layer in Keras

生来就可爱ヽ(ⅴ<●) submitted on 2021-01-28 05:18:38
Question: How could one build a layer in Keras which maps an input x to an output of the form x+b, where b is a trainable weight of the same dimension? (The activation function here would be the identity.) Answer 1: You can always build a custom layer by extending the tf.keras.layers.Layer class; here is how I'd do it: import tensorflow as tf print('TensorFlow:', tf.__version__) class BiasLayer(tf.keras.layers.Layer): def __init__(self, *args, **kwargs): super(BiasLayer, self).__init__(*args, **kwargs) def
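
The answer is truncated above; a common way to finish such a layer is sketched below (details beyond the BiasLayer name are a guess, not necessarily the answerer's exact code). build() creates one trainable bias per input feature and call() simply adds it, so the layer computes x + b with an identity activation:

```python
import tensorflow as tf

class BiasLayer(tf.keras.layers.Layer):
    def __init__(self, *args, **kwargs):
        super(BiasLayer, self).__init__(*args, **kwargs)

    def build(self, input_shape):
        # One trainable bias per feature of the input
        self.bias = self.add_weight(
            name='bias',
            shape=(input_shape[-1],),
            initializer='zeros',
            trainable=True)

    def call(self, inputs):
        return inputs + self.bias   # identity activation: output is x + b

# Usage:
# x = tf.keras.Input(shape=(4,))
# y = BiasLayer()(x)
```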

CNN architecture: classifying “good” and “bad” images

时光总嘲笑我的痴心妄想 submitted on 2021-01-28 05:11:58
Question: I'm researching the possibility of implementing a CNN in order to classify images as "good" or "bad", but am having no luck with my current architecture. Characteristics that denote a "bad" image: overexposure, oversaturation, incorrect white balance, blurriness. Would it be feasible to implement a neural network to classify images based on these characteristics, or is it best left to a traditional algorithm that simply looks at the variance in brightness/contrast throughout an image and classifies
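
The question is cut off above and is largely about feasibility, but for reference, a minimal binary-classification CNN in Keras might look like the sketch below; the input size, layer widths, and label encoding are placeholders, not a recommendation for this specific task:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # Normalise pixel values; input size is an assumption
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation='sigmoid'),   # 1 = "good", 0 = "bad" (assumed labels)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```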