neural-network

Is fit_generator in Keras supposed to reset the generator after each epoch?

人走茶凉 posted on 2021-02-07 14:22:12
Question: I am trying to use fit_generator with a custom generator to read in data that's too big for memory. There are 1.25 million rows I want to train on, so I have the generator yield 50,000 rows at a time. fit_generator is set to 25 steps_per_epoch, which I thought would bring in those 1.25 million rows per epoch. I added a print statement so that I could see how far the offset had advanced, and I found that it exceeded the maximum a few steps into epoch 2. There are a total of 1.75 million records
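fit_generator does not reset a custom generator between epochs; the generator is expected to yield batches forever and wrap back to the first row on its own. Below is a minimal sketch of such a generator under that assumption; the CSV path, the chunk size, and the "label" column name are hypothetical stand-ins for the question's data:

```python
import pandas as pd

def row_generator(path, chunk_rows=50_000, total_rows=1_250_000):
    """Yield (features, labels) batches forever; Keras never resets the
    generator between epochs, so it must wrap around by itself."""
    while True:                      # loop forever so epoch 2+ restarts at offset 0
        offset = 0
        while offset < total_rows:
            # Skip already-consumed rows but keep the header row (row 0).
            chunk = pd.read_csv(path,
                                skiprows=range(1, offset + 1),
                                nrows=chunk_rows)
            X = chunk.drop(columns=["label"]).values
            y = chunk["label"].values
            yield X, y
            offset += chunk_rows

# model.fit_generator(row_generator("train.csv"), steps_per_epoch=25, epochs=10)
```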

Keras Neural Network Error: Setting an Array Element with a Sequence

家住魔仙堡 posted on 2021-02-07 13:58:22
Question: I'm loading dummy data into a neural network, but I'm receiving an error I can't seem to debug. Here is my data, visualized: df: Label | Mar: 0 | [[.332, .326], [.058, .138]]; 0 | [[.234, .246], [.234, .395]]; 1 | [[.084, .23], [.745, .923]]. I'm trying to use the 'Mar' column to predict the 'Label' column (I know this data makes no sense, it's just similar to my real data). Here is my neural network code: model = Sequential() model.add(Dense(3, input_dim=(1), activation='relu')) model.add(Dense(1,
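That error usually means NumPy is being asked to build an array from a column of nested lists and cannot infer a rectangular shape. A minimal sketch of one workaround, using made-up values shaped like the 'Mar' column: stack the entries into a single float array and flatten each 2x2 matrix so a Dense layer can take it as a 4-feature input.

```python
import numpy as np

# Hypothetical stand-ins for the 'Mar' and 'Label' columns from the question.
mar = [[[.332, .326], [.058, .138]],
       [[.234, .246], [.234, .395]],
       [[.084, .23],  [.745, .923]]]
labels = [0, 0, 1]

# Stack the list-of-lists column into one rectangular float array, then
# flatten each 2x2 entry into a 4-element feature vector per row.
X = np.array(mar, dtype="float32").reshape(len(mar), -1)   # shape (3, 4)
y = np.array(labels, dtype="float32")

# The Dense input_dim must then match the flattened width, e.g.:
# model.add(Dense(3, input_dim=X.shape[1], activation='relu'))
```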

How to create text-to-speech with neural network

走远了吗. posted on 2021-02-07 10:59:33
Question: I am creating a text-to-speech system for a phonetic language called "Kannada" and I plan to train it with a neural network. The input is a word/phrase while the output is the corresponding audio. While implementing the network, I was thinking the input should be the segmented characters of the word/phrase, as the output pronunciation only depends on the characters that make up the word, unlike English where we have silent words and part of speech to consider. However, I do not know how I
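One common way to feed segmented characters to a network is to map each character to an integer index and pad every word to a fixed length. The sketch below only illustrates that encoding step; the vocabulary string, the padding scheme, and the maximum length are hypothetical choices, not part of the question:

```python
import numpy as np

# Hypothetical character vocabulary; in practice it is built from the
# training corpus. Index 0 is reserved for padding.
chars = sorted(set("ಕನ್ನಡ ಪದ"))
char_to_idx = {c: i + 1 for i, c in enumerate(chars)}

def encode_word(word, max_len=20):
    """Turn a word into a fixed-length sequence of character indices."""
    idx = [char_to_idx.get(c, 0) for c in word][:max_len]
    return np.array(idx + [0] * (max_len - len(idx)), dtype="int64")

print(encode_word("ಕನ್ನಡ"))
```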

Keras Lambda Layer for matrix vector multiplication

纵然是瞬间 posted on 2021-02-07 10:30:54
Question: I am trying to have a Lambda layer in Keras that performs a vector-matrix multiplication before passing the result to another layer. The matrix is fixed (I don't want to learn it). Code below: model.add(Dropout(0.1)) model.add(Lambda(lambda x: x.dot(A))) model.add(Dense(output_shape, activation='softmax')) model.compile(<stuff here>) A is the fixed matrix, and I want to do x.dot(A). When I run this, I get the following error: 'Tensor' object has no attribute 'dot'. Same error when I replace dot with
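Inside a Lambda layer the input is a symbolic backend tensor, not a NumPy array, so the NumPy-style x.dot(A) is not available; the backend's own matrix product can be used instead. A minimal sketch with hypothetical layer sizes, wrapping the fixed matrix as a constant tensor:

```python
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout, Lambda

# Hypothetical shapes; in the question A is a fixed, non-trainable matrix.
A = np.random.rand(64, 32).astype("float32")
A_const = K.constant(A)          # wrap the NumPy matrix as a backend tensor once

model = Sequential()
model.add(Dense(64, input_dim=100, activation='relu'))
model.add(Dropout(0.1))
# Use the backend's matrix product rather than NumPy's .dot method:
model.add(Lambda(lambda x: K.dot(x, A_const)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
```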

How to extract text region from an image after detecting

天大地大妈咪最大 posted on 2021-02-07 09:24:49
Question: I am trying to extract all text regions from an image using OpenCV in Python. I have successfully detected the text regions but could not extract them. I extracted the smaller sub-matrices of a text region, but I cannot aggregate them into the bigger matrix that we see as the text region in the image. import numpy as np import cv2 from imutils.object_detection import non_max_suppression import matplotlib.pyplot as plt %matplotlib inline from PIL import Image # pip install imutils image1 = cv2.imread
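Once the detector (plus non_max_suppression) has produced bounding boxes, each text region can be extracted by slicing the original image with the box coordinates. A small sketch under the assumption that the boxes are (startX, startY, endX, endY) tuples, as in the usual EAST/imutils pipeline; image1 and boxes are taken from the question's setup:

```python
import cv2

def crop_text_regions(image, boxes):
    """Slice each detected bounding box out of the full image."""
    regions = []
    for (start_x, start_y, end_x, end_y) in boxes:
        # NumPy slicing is rows (y) first, then columns (x).
        roi = image[start_y:end_y, start_x:end_x]
        regions.append(roi)
    return regions

# for i, roi in enumerate(crop_text_regions(image1, boxes)):
#     cv2.imwrite(f"region_{i}.png", roi)
```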

PySpark: Getting output layer neuron values for Spark ML Multilayer Perceptron Classifier

北战南征 posted on 2021-02-07 09:11:16
Question: I am doing binary classification using the Spark ML Multilayer Perceptron Classifier. mlp = MultilayerPerceptronClassifier(labelCol="evt", featuresCol="features", layers=[inputneurons,(inputneurons*2)+1,2]) The output layer has two neurons as it is a binary classification problem. Now I would like to get the values of the two output neurons for each row in the test set, instead of just getting the prediction column containing either 0 or 1. I could not find anything to get that in the API documentation. Answer 1:
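Recent Spark versions (2.3 and later, as far as I know) expose per-class scores for the MLP model: the transformed DataFrame carries rawPrediction and probability columns in addition to prediction. A sketch of that, reusing the question's column names and layer sizes; the DataFrame arguments are assumed to exist:

```python
from pyspark.ml.classification import MultilayerPerceptronClassifier

def score_with_probabilities(train_df, test_df, inputneurons):
    """Fit the MLP from the question and return per-class scores.
    Assumes Spark 2.3+, where the fitted model also emits the
    rawPrediction and probability columns alongside prediction."""
    mlp = MultilayerPerceptronClassifier(
        labelCol="evt", featuresCol="features",
        layers=[inputneurons, (inputneurons * 2) + 1, 2])
    model = mlp.fit(train_df)
    scored = model.transform(test_df)
    return scored.select("rawPrediction", "probability", "prediction")
```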

Why are Embeddings in PyTorch implemented as Sparse Layers?

泪湿孤枕 posted on 2021-02-07 08:28:00
Question: Embedding layers in PyTorch are listed under "Sparse Layers" with the limitation: "Keep in mind that only a limited number of optimizers support sparse gradients: currently it's optim.SGD (cuda and cpu), and optim.Adagrad (cpu)." What is the reason for this? For example, in Keras I can train an architecture with an Embedding layer using any optimizer. Answer 1: Upon closer inspection, sparse gradients on Embeddings are optional and can be turned on or off with the sparse parameter: class torch.nn
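For reference, the sparse flag on torch.nn.Embedding defaults to False, so the embedding behaves like any dense layer unless sparse gradients are explicitly requested. A small sketch of both modes with hypothetical sizes:

```python
import torch
import torch.nn as nn

# sparse defaults to False, so any optimizer works with the dense gradients:
dense_emb = nn.Embedding(num_embeddings=10_000, embedding_dim=64)
opt_dense = torch.optim.Adam(dense_emb.parameters(), lr=1e-3)

# Opting into sparse gradients restricts the optimizer choice
# (e.g. SGD or Adagrad, or SparseAdam in newer PyTorch releases):
sparse_emb = nn.Embedding(num_embeddings=10_000, embedding_dim=64, sparse=True)
opt_sparse = torch.optim.SGD(sparse_emb.parameters(), lr=0.1)
```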
