neural-network

ValueError: Error when checking : expected flatten_1_input to have shape (None, 4, 4, 512) but got array with shape (1, 150, 150, 3)

Submitted by 匆匆过客 on 2020-01-15 09:22:08

Question: I followed the guide at this link to build a model and stopped before the fine-tuning part to test the model on some other images, using the following code:

    img_width, img_height = 150, 150
    batch_size = 1
    test_model = load_model('dog_cat_model.h5')
    validation_data_dir = "test1"
    test_datagen = ImageDataGenerator(rescale=1. / 255)
    validation_generator = test_datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        shuffle=False,
        class_mode=
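The shape in the error message is the giveaway: the saved model starts with a Flatten layer fed by VGG16 bottleneck features of shape (4, 4, 512), not by raw 150x150 RGB images. A minimal sketch of the fix, assuming the guide is the Keras bottleneck-features tutorial (the file and directory names are taken from the question), is to run the images through the VGG16 convolutional base first:

    from keras.applications.vgg16 import VGG16
    from keras.models import load_model
    from keras.preprocessing.image import ImageDataGenerator

    # The convolutional base that produced the (4, 4, 512) bottleneck features.
    conv_base = VGG16(include_top=False, weights='imagenet',
                      input_shape=(150, 150, 3))
    top_model = load_model('dog_cat_model.h5')  # expects (4, 4, 512) inputs

    datagen = ImageDataGenerator(rescale=1. / 255)
    generator = datagen.flow_from_directory(
        'test1', target_size=(150, 150), batch_size=1,
        shuffle=False, class_mode=None)

    features = conv_base.predict_generator(generator, steps=len(generator.filenames))
    predictions = top_model.predict(features)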

Derivatives of n-dimensional function in Keras

Submitted by 十年热恋 on 2020-01-15 08:55:30

Question: Say I have a bivariate function, for example z = x^2 + y^2. I learned that in Keras I can compute nth-order derivatives using Lambda layers:

    def bivariate_function(x, y):
        x2 = Lambda(lambda u: K.pow(u, 2))(x)
        y2 = Lambda(lambda u: K.pow(u, 2))(y)
        return Add()([x2, y2])

    def grad(y, x):
        return Lambda(lambda u: K.gradients(u[0], u[1]))([y, x])

    f = bivariate_function(x, y)
    df_dx = grad(f, x)       # 1st derivative wrt x
    df_dy = grad(f, y)       # 1st derivative wrt y
    df_dx2 = grad(df_dx, x)  # 2nd
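For reference, a self-contained sketch of the same idea that can actually be evaluated, assuming a TF 1.x Keras backend (K.gradients returns a list of tensors, hence the [0]):

    import numpy as np
    from keras import backend as K
    from keras.layers import Input, Lambda, Add

    x = Input(shape=(1,))
    y = Input(shape=(1,))

    def grad(f, v):
        # K.gradients returns a list with one tensor per variable.
        return Lambda(lambda u: K.gradients(u[0], u[1])[0])([f, v])

    f = Add()([Lambda(lambda u: K.pow(u, 2))(x),
               Lambda(lambda u: K.pow(u, 2))(y)])
    df_dx = grad(f, x)        # symbolically 2x
    d2f_dx2 = grad(df_dx, x)  # symbolically 2

    evaluate = K.function([x, y], [df_dx, d2f_dx2])
    print(evaluate([np.array([[3.0]]), np.array([[1.0]])]))  # ~[[6.]], [[2.]]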

Multiple pathways for data through a layer in Caffe

Submitted by 前提是你 on 2020-01-15 04:51:46

Question: I would like to construct a network in Caffe in which the incoming data is split up initially, passes separately through the same set of layers, and is finally recombined using an eltwise layer. After this, all the parts will move as a single blob. The layer configuration of the part of the network through which the data moves in parallel will be identical, except for the learned parameters. Is there a way to define this network in Caffe without redefining the layers through which the different
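One common workaround (a sketch, not the asker's setup) is to generate the prototxt programmatically with Caffe's Python NetSpec interface: the branch is written once as a Python function and stamped out per path, and each stamped copy still owns its own learned parameters. The branch architecture below is illustrative only:

    from caffe import layers as L, params as P, NetSpec

    def branch(bottom):
        # One definition of the shared branch architecture; every call
        # creates fresh layers, so each branch learns its own weights.
        conv = L.Convolution(bottom, kernel_size=3, num_output=16)
        return L.ReLU(conv, in_place=True)

    n = NetSpec()
    n.data = L.Input(shape=[dict(dim=[1, 4, 32, 32])])
    # Split the incoming blob into two parts along the channel axis.
    n.part_a, n.part_b = L.Slice(n.data, ntop=2,
                                 slice_param=dict(axis=1, slice_point=[2]))
    n.branch_a = branch(n.part_a)
    n.branch_b = branch(n.part_b)
    # Recombine element-wise; afterwards the data moves as a single blob.
    n.merged = L.Eltwise(n.branch_a, n.branch_b, operation=P.Eltwise.SUM)
    print(n.to_proto())  # emits the prototxt without hand-duplicating layers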

How can neural networks learn functions with a variable number of inputs?

Submitted by ℡╲_俬逩灬. on 2020-01-14 09:52:08

Question: A simple example: given an input sequence, I want the neural network to output the median of the sequence. The problem is, if a neural network has learnt to compute the median of n inputs, how can it compute the median of even more inputs? I know that recurrent neural networks can learn functions like max and parity over a sequence, but computing those functions only requires constant memory. What if the memory requirement grows with the input size, as it does when computing the median? This is a follow up
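As context for what a variable number of inputs looks like in practice, here is a minimal sketch (not from the question; Keras assumed) of a recurrent model whose timestep dimension is left open. It also illustrates the memory point: the recurrent state is a fixed-size vector no matter how long the sequence gets.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    model = Sequential([
        LSTM(32, input_shape=(None, 1)),  # None timesteps, 1 feature per step
        Dense(1)                          # e.g. regress the median
    ])
    model.compile(optimizer='adam', loss='mse')

    # Each batch may use a different sequence length, but the LSTM state is
    # always a 32-dimensional vector, so whatever the network computes must
    # fit in constant memory regardless of how long the input is.
    x = np.random.rand(8, 100, 1)   # 8 sequences of length 100
    y = np.median(x, axis=1)        # shape (8, 1)
    model.train_on_batch(x, y)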

ValueError: Input 0 is incompatible with layer conv_1: expected ndim=3, found ndim=4

Submitted by 二次信任 on 2020-01-14 09:42:24

Question: I am trying to make a variational autoencoder that learns to encode DNA sequences, but am getting an unexpected error. My data is an array of one-hot arrays. The issue is a ValueError telling me that I have a four-dimensional input, when my input is clearly three-dimensional (100, 4008, 4). In fact, when I print out the seq layer, it says that its shape is (?, 100, 4008, 4). When I take out a dimension, it then gives me an error for being two-dimensional. Any help will be
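The printout (?, 100, 4008, 4) is the clue: Keras prepends the batch axis itself, so input_shape must describe one sample, not the whole dataset. A minimal sketch, assuming a Conv1D front end (the layer names echo the error message):

    from keras.layers import Input, Conv1D

    # 100 samples of shape (4008, 4): describe ONE sample here; Keras adds
    # the batch dimension itself, giving a (?, 4008, 4) tensor with ndim=3.
    seq = Input(shape=(4008, 4), name='seq')   # NOT shape=(100, 4008, 4)
    x = Conv1D(filters=32, kernel_size=9, activation='relu', name='conv_1')(seq)

    # model.fit(data, ...) then takes the full (100, 4008, 4) array and
    # consumes the leading 100 as the batch axis.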

numpy ValueError shapes not aligned

Submitted by 孤人 on 2020-01-14 08:05:30

Question: So I am trying to adapt the neural network from chapter 1 of Michael Nielsen's book (http://neuralnetworksanddeeplearning.com/chap1.html). I modified network.py to work on Python 3 and made a small script to test it with a few 15x10 pictures of digits.

    import os
    import numpy as np
    from network import Network
    from PIL import Image

    BLACK = 0
    WHITE = 255
    cdir = "cells"
    cells = []
    for cell in os.listdir(cdir):
        img = Image.open(os.path.join(cdir, cell))
        number = cell.split(".")[0][-1]
        pixels = img.load()
        pdata = []
        for
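The likely cause, as a sketch under stated assumptions (Nielsen's network.py API; 15x10 images flattened to 150 inputs): the book's Network computes matrix products of the form w·a, so each input must be a (150, 1) column vector and each label a (10, 1) one-hot column vector; feeding flat (150,) arrays raises "shapes not aligned".

    import numpy as np
    from network import Network  # Nielsen's network.py, ported to Python 3

    def to_sample(pixel_values, digit):
        # Column vectors, as network.py expects: (150, 1) in, (10, 1) out.
        x = np.array(pixel_values, dtype=float).reshape(150, 1) / 255.0
        y = np.zeros((10, 1))
        y[digit] = 1.0
        return (x, y)

    # Hypothetical stand-in data; replace with the pixel lists read via PIL.
    raw = [(np.random.randint(0, 256, 150), d % 10) for d in range(20)]
    training_data = [to_sample(p, d) for p, d in raw]

    net = Network([150, 30, 10])  # 150 inputs, one hidden layer, 10 outputs
    net.SGD(training_data, epochs=5, mini_batch_size=10, eta=3.0)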

Neural network for square (x^2) approximation

Submitted by 霸气de小男生 on 2020-01-14 08:01:09

Question: I'm new to TensorFlow and data science. I made a simple model that should figure out the relationship between input and output numbers, in this case x and x squared. The code in Python:

    import numpy as np
    import tensorflow as tf

    # Have TensorFlow log only error messages.
    tf.logging.set_verbosity(tf.logging.ERROR)

    features = np.array([-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0,
                         1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
    labels = np.array([100, 81, 64, 49, 36, 25, 16, 9, 4, 1, 0, 1, 4, 9, 16,
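For contrast, a minimal working sketch (not the asker's code): a single Dense unit is a straight line and cannot fit x^2, but a couple of nonlinear hidden layers can approximate it on the training interval.

    import numpy as np
    import tensorflow as tf

    x = np.arange(-10, 11, dtype=float)
    y = x ** 2

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(1,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1)  # linear output for regression
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss='mse')
    model.fit(x, y, epochs=500, verbose=0)

    print(model.predict(np.array([[7.0]])))  # should be close to 49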

TensorFlow 0.8: problems exporting and importing output tensors

Submitted by 纵饮孤独 on 2020-01-14 07:44:12

Question: I am using TensorFlow 0.8 with Python 3. I am trying to train a neural network, and the goal is to automatically export/import the network state every 50 iterations. The problem is that when I export the output tensors at the first iteration, their names are ['Neg:0', 'Slice:0'], but when I export them at the second iteration, the names have changed to ['import/Neg:0', 'import/Slice:0'], and importing the output tensors then fails: ValueError: Specified
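Each call to tf.import_graph_def prefixes every imported name with "import/" by default, so the names grow a new prefix on every round trip. A minimal sketch of the usual fix (the export file name here is hypothetical): pass name='' to keep tensor names stable.

    import tensorflow as tf

    graph_def = tf.GraphDef()
    with open('exported_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        # Default is name='import'; name='' avoids the "import/" prefix.
        tf.import_graph_def(graph_def, name='')
        neg = graph.get_tensor_by_name('Neg:0')
        slc = graph.get_tensor_by_name('Slice:0')

For periodic checkpointing every 50 iterations, tf.train.Saver sidesteps the problem entirely, since it saves and restores variable values without re-importing the graph definition.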

PyTorch loss function dimensions do not match

Submitted by 浪尽此生 on 2020-01-14 07:01:29

Question: I'm trying to train word embeddings using batch training, as shown below.

    def forward(self, inputs):
        print(inputs.shape)
        embeds = self.embeddings(inputs)
        print(embeds.shape)
        out = self.linear1(embeds)
        print(out.shape)
        out = self.activation_function1(out)
        print(out.shape)
        out = self.linear2(out).cuda()
        print(out.shape)
        out = self.activation_function2(out)
        print(out.shape)
        return out.cuda()

Here I'm using context size 4, batch size 32, embedding size 50, hidden layer size 64, vocab size 9927
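Given those sizes, the embedding output for a (32, 4) batch is (32, 4, 50), so linear1 sees a 3-D tensor and the final scores carry an extra dimension that the loss function rejects. A minimal sketch (a CBOW-style setup assumed from the stated sizes, not the asker's full model): flatten the context embeddings to (32, 200) before the first linear layer, so the loss sees (32, 9927) scores against (32,) targets.

    import torch
    import torch.nn as nn

    class CBOW(nn.Module):
        def __init__(self, vocab_size=9927, embed_dim=50, context=4, hidden=64):
            super().__init__()
            self.embeddings = nn.Embedding(vocab_size, embed_dim)
            self.linear1 = nn.Linear(context * embed_dim, hidden)
            self.linear2 = nn.Linear(hidden, vocab_size)

        def forward(self, inputs):                     # inputs: (batch, context)
            embeds = self.embeddings(inputs)           # (batch, context, embed)
            embeds = embeds.view(inputs.shape[0], -1)  # (batch, context*embed)
            out = torch.relu(self.linear1(embeds))     # (batch, hidden)
            return torch.log_softmax(self.linear2(out), dim=1)  # (batch, vocab)

    model = CBOW()
    inputs = torch.randint(0, 9927, (32, 4))
    targets = torch.randint(0, 9927, (32,))
    loss = nn.NLLLoss()(model(inputs), targets)  # shapes now match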