neural-network

Neural Network Backpropagation not working

二次信任 submitted on 2020-02-05 03:20:29
Question: I have coded a neural network in JavaScript and implemented the backpropagation algorithm described here. Here is the code (TypeScript): /** * Net */ export class Net { private layers: Layer[] = []; private inputLayer: Layer; private outputLayer: Layer; public error: number = Infinity; private eta: number = 0.15; private alpha: number = 0.5; constructor(...topology: number[]) { topology.forEach((topologyLayer, iTL) => { var nextLayerNeuronNumber = topology[iTL + 1] || 0; this.layers.push(new
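The excerpt's class carries a learning rate (eta) and a momentum term (alpha), which suggests the standard backpropagation update. As a point of comparison, here is a minimal NumPy sketch of that scheme (not the poster's TypeScript code; the 2-2-1 topology, OR target, and hyperparameters are illustrative assumptions):

```python
import numpy as np

# Minimal 2-2-1 network trained with plain backpropagation plus momentum,
# mirroring the eta/alpha fields in the excerpt. Illustrative sketch only.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [1.]])          # learn logical OR

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)
eta, alpha = 0.5, 0.5                            # learning rate, momentum
dW1_prev = np.zeros_like(W1); dW2_prev = np.zeros_like(W2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)           # hidden-layer delta
    dW2 = eta * h.T @ d_out + alpha * dW2_prev   # momentum update
    dW1 = eta * X.T @ d_h + alpha * dW1_prev
    W2 -= dW2; b2 -= eta * d_out.sum(0)
    W1 -= dW1; b1 -= eta * d_h.sum(0)
    dW1_prev, dW2_prev = dW1, dW2
```

The key correctness check when debugging such code is that each layer's delta is multiplied by that layer's own activation derivative, and that the momentum buffer stores the previous full update, not the raw gradient.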

Yolo v3 model output clarification with keras

折月煮酒 submitted on 2020-02-04 01:46:27
Question: I'm using the yolo v3 model with keras, and the network gives me output containers with shapes like this: [(1, 13, 13, 255), (1, 26, 26, 255), (1, 52, 52, 255)]. I found this link, and from it I understand the value 255 in each of the 3 containers; I also understand that there are 3 containers because 3 different image scalings are used for bounding-box creation. But I did not understand why the output vector has 13 * 13 lists for the first scaling rate, then 26 * 26 lists for the second

Neural Network Diverging instead of converging

若如初见. submitted on 2020-02-02 04:14:18
Question: I have implemented a neural network (using CUDA) with 2 layers (2 neurons per layer). I'm trying to make it learn 2 simple quadratic polynomial functions using backpropagation. But instead of converging, it is diverging (the output is becoming infinity). Here are some more details about what I've tried: I had set the initial weights to 0, but since it was diverging I have randomized the initial weights. I read that a neural network might diverge if the learning rate is too high, so I
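The learning-rate explanation is easy to see on a one-dimensional example: minimizing (w - 3)^2 by gradient descent multiplies the error by (1 - 2*lr) each step, so any lr above 1 makes the iterates grow without bound. A toy sketch (the specific function and rates are illustrative, not the poster's setup):

```python
def descend(lr, steps=50, w=0.0, target=3.0):
    # Gradient descent on f(w) = (w - target)^2; the gradient is 2*(w - target),
    # so each step scales the error (w - target) by the factor (1 - 2*lr).
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

print(descend(0.1))   # |1 - 2*lr| < 1: converges toward 3
print(descend(1.1))   # |1 - 2*lr| > 1: the error grows every step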

Adding an additional value to a Convolutional Neural Network Input? [closed]

喜夏-厌秋 submitted on 2020-02-01 05:20:06
Question: I have a dataset of images I want to input to a Convolutional Neural Network model; however, with each of these images there is a range, or distance from the object, associated with the image. I want to input this range as an additional piece of context for the CNN model. Does
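A common pattern for this is to flatten the convolutional features and concatenate the scalar range onto them before the dense layers (in Keras this is a second Input plus a Concatenate layer). The shape logic can be sketched in NumPy (the 128-dim feature size and batch size are hypothetical):

```python
import numpy as np

# Hypothetical shapes: a CNN backbone has produced a 128-dim feature vector
# per image, and each image carries one scalar "range" value. Concatenating
# the scalar onto the flattened features lets the dense head see both.
batch = 4
cnn_features = np.random.rand(batch, 128).astype(np.float32)  # flattened conv output
ranges = np.random.rand(batch, 1).astype(np.float32)          # one distance per image

combined = np.concatenate([cnn_features, ranges], axis=1)
print(combined.shape)   # (4, 129): features plus the range, per sample
```

Because the range has a very different scale from the learned features, normalizing it (e.g. to [0, 1]) before concatenation usually helps.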

solving XOR with single layer perceptron

纵饮孤独 submitted on 2020-01-31 18:44:08
Question: I've always heard that the XOR problem cannot be solved by a single-layer perceptron (no hidden layer), since it is not linearly separable. I understand that there is no linear function that can separate the classes. However, if we use a non-monotonic activation function like sin() or cos(), is this still the case? I would imagine these types of functions might be able to separate them. Answer 1: Yes, a single-layer neural network with a non-monotonic activation function can solve
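The claim in Answer 1 can be checked with explicit weights: a single unit computing sin(w1*x1 + w2*x2) with w1 = w2 = pi/2 produces pre-activations 0, pi/2, pi/2, pi for the four XOR inputs, which sin maps to 0, 1, 1, 0 exactly (the weights here are hand-picked for the demonstration, not learned):

```python
import numpy as np

# One unit, no hidden layer: output = sin(w1*x1 + w2*x2), bias 0.
# Pre-activations for the four XOR inputs are 0, pi/2, pi/2, pi,
# and sin maps them to 0, 1, 1, 0 -- XOR from a single unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
w = np.array([np.pi / 2, np.pi / 2])
out = np.sin(X @ w)
print(np.round(out).astype(int))   # [0 1 1 0]
```

This works precisely because sin is non-monotonic: it rises to 1 at pi/2 and falls back to 0 at pi, something no monotonic activation applied to a linear combination can do.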

How to programmatically generate deploy.txt for caffe in python

谁都会走 submitted on 2020-01-28 09:37:04
Question: I have written Python code to programmatically generate the training and validation .prototxt files for a convolutional neural network (CNN) in caffe. Below is my function: def custom_net(lmdb, batch_size): # define your own net! n = caffe.NetSpec() # keep this data layer for all networks n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb, ntop=2, transform_param=dict(scale=1. / 255)) n.conv1 = L.Convolution(n.data, kernel_size=6, num_output=48, weight_filler=dict
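A deploy.prototxt differs from the train/val definition in only two places: the LMDB Data layer at the top is replaced by an Input layer declaring the blob shape, and the loss/accuracy layers at the bottom are dropped. A fragment of what the generated header would look like (the 1x3x227x227 shape is a placeholder; use your network's actual input dimensions):

```protobuf
# deploy.prototxt: same conv/fc layers as train_val, but the LMDB Data
# layer becomes an Input layer and loss layers are removed.
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
```

Programmatically, the usual approach is to make the data layer conditional in the generator function, so the same code emits either variant.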

First neural network, troubles

ぐ巨炮叔叔 submitted on 2020-01-25 10:12:53
Question: I've never done neural network development before, but it just so happens that I HAVE to do it by January 23, inclusive, or I will miss a huge number of opportunities. I get a huge bunch of errors that I don't know what to do with. Please be so kind as to correct my code and explain what I'm doing wrong. Thank you all. UPD code: import numpy as np def Sigmoid(x): return 1/(1+np.exp(-x)) trn_inp=np.array([[1],[2],[3]]) # 3x1 array: 3 rows, 1 column trn_out=np.array([[1,2,3]]).T # Expected
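One structural problem is visible even in the truncated excerpt: a sigmoid output lies strictly in (0, 1), so targets of 1, 2, 3 can never be reached. A minimal working single-neuron loop with reachable 0/1 targets, as a sketch of the shape and update logic rather than a fix of the exact posted code (the inputs, targets, and iteration count are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sigmoid outputs live in (0, 1), so use 0/1 targets the unit can reach.
trn_inp = np.array([[0.0], [0.5], [1.0]])   # 3x1 array: 3 rows, 1 column
trn_out = np.array([[0.0], [0.0], [1.0]])   # reachable targets
rng = np.random.default_rng(1)
w = rng.normal(size=(1, 1))
b = 0.0

for _ in range(5000):
    out = sigmoid(trn_inp @ w + b)           # forward pass, shape (3, 1)
    grad = (out - trn_out) * out * (1 - out) # dE/dz for squared error
    w -= trn_inp.T @ grad                    # learning rate 1 for brevity
    b -= grad.sum()
```

Keeping every array 2-D (inputs as (n, 1), weights as (1, 1)) avoids the shape-mismatch errors that typically flood a first NumPy network.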

Extract features from 2 auto-encoders and feed them into an MLP

岁酱吖の submitted on 2020-01-25 07:34:06
Question: I understand that the features extracted from an auto-encoder can be fed into an MLP for classification or regression purposes; this is something I did earlier. But what if I have 2 auto-encoders? Can I extract the features from the bottleneck layers of the 2 auto-encoders and feed them into an MLP that performs classification based on these features? If yes, then how? I am not sure how to concatenate the two feature sets. I tried numpy.hstack(), which gives me an 'unhashable slice' error
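The "unhashable slice" error usually comes from indexing with `[a:b]` syntax on a dict-like object, not from hstack itself; on two plain (n_samples, k) arrays the concatenation is straightforward. A sketch with hypothetical bottleneck sizes (32-dim and 64-dim codes for the same 100 samples):

```python
import numpy as np

# Hypothetical bottleneck outputs: autoencoder A yields 32-dim codes,
# autoencoder B yields 64-dim codes, for the same 100 samples.
feat_a = np.random.rand(100, 32)
feat_b = np.random.rand(100, 64)

# Row-wise concatenation: each sample's two codes become one 96-dim
# vector that the MLP can consume. hstack == concatenate(axis=1) here.
combined = np.hstack([feat_a, feat_b])
print(combined.shape)   # (100, 96)
```

The one requirement is that both feature arrays have the same number of rows and that row i of each array refers to the same sample.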