neural-network

I am getting an error that I can't figure out when I run my neural network in Keras as soon as I introduce a class weight

大兔子大兔子 submitted on 2020-06-09 05:39:26
Question: Model summary:

```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d (Conv1D)              (None, 35, 32)            96
_________________________________________________________________
batch_normalization (BatchNo (None, 35, 32)            128
_________________________________________________________________
dropout (Dropout)            (None, 35, 32)            0
______________________________________________________________
```
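
For context, class weights in Keras are passed to model.fit as a dict mapping integer class indices to weights. A minimal sketch with a stand-in model (the Conv1D network above is not reproduced; the layer sizes and data here are hypothetical):

```python
import numpy as np
import tensorflow as tf

# Stand-in model; the question's actual network is a Conv1D stack.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

X = np.random.rand(100, 4).astype('float32')
y = np.random.randint(0, 2, size=(100,))

# Keys must be the integer class indices present in y; string keys or
# indices that don't match the labels are a common source of errors.
class_weight = {0: 1.0, 1: 3.0}
model.fit(X, y, epochs=2, class_weight=class_weight, verbose=0)
```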

How do you write a custom activation function in python for Keras?

天涯浪子 submitted on 2020-06-01 07:32:26
Question: I'm trying to write a custom activation function for use with Keras. I can't write it with TensorFlow primitives, as it doesn't properly compute the derivative. I followed How to make a custom activation function with only Python in Tensorflow? and it works very well for creating a TensorFlow function. However, when I tried putting it into Keras as an activation function for the classic MNIST demo, I got errors. I also tried the tf_spiky function from the above reference. Here is the sample code
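
For reference, the usual pattern for a Keras custom activation built from TensorFlow ops (so that autodiff can compute the derivative) looks roughly like this. A minimal sketch, not the questioner's tf_spiky code; the function name is hypothetical:

```python
import tensorflow as tf
from tensorflow import keras

# Custom activation written with TF ops; gradients come from autodiff.
def scaled_swish(x):
    return x * tf.sigmoid(2.0 * x)

# Option 1: pass the callable directly.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=scaled_swish),
    keras.layers.Dense(10, activation='softmax'),
])

# Option 2: register it under a string name for use in configs.
keras.utils.get_custom_objects()['scaled_swish'] = scaled_swish
model2 = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='scaled_swish'),
    keras.layers.Dense(10, activation='softmax'),
])
```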

Neural network for adding two integer numbers

依然范特西╮ submitted on 2020-06-01 06:03:12
Question: I am a beginner in neural nets. I want to create a neural network that can add two integer numbers. I have designed it as follows, but I get really low accuracy, 0.002%. What can I do to increase it?

1) Creating the data:

```python
import numpy as np
import random

a = []
b = []
c = []
for i in range(1, 1001):
    a.append(random.randint(1, 999))
    b.append(random.randint(1, 999))
    c.append(a[i-1] + b[i-1])

X = np.array([a, b]).transpose()
y = np.array(c).transpose().reshape(-1, 1)
```

2) Scaling my data: from
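
Adding two numbers is a regression problem, so accuracy (exact match on continuous outputs) will be near zero by construction; the usual setup is a linear output unit with a regression loss. A hedged sketch under that assumption, reusing the question's data shapes:

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.integers(1, 1000, size=(1000, 2)).astype('float32')
y = X.sum(axis=1, keepdims=True)

# Scale inputs and targets so the optimizer behaves well.
X_scaled = X / 999.0
y_scaled = y / 1998.0

model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(2,)),
    keras.layers.Dense(1),  # linear output for regression
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X_scaled, y_scaled, epochs=50, verbose=0)
```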

Repeated use of GradientTape for multiple Jacobian calculations

蓝咒 submitted on 2020-06-01 04:58:06
Question: I am attempting to compute the Jacobian of a TensorFlow neural network's outputs with respect to its inputs. This is easily achieved with the tf.GradientTape.jacobian method. The trivial example provided in the TensorFlow documentation is as follows:

```python
with tf.GradientTape() as g:
    x = tf.constant([1.0, 2.0])
    g.watch(x)
    y = x * x
jacobian = g.jacobian(y, x)
```

This is fine if I only want to compute the Jacobian of a single instance of the input tensor x. However, I need to repeatedly evaluate
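
A tape is consumed once its gradients are taken, so the usual pattern for repeated evaluation is to build a fresh tape inside a function and call it once per input, optionally compiled with tf.function so the graph is traced only once. A minimal sketch under those assumptions, with a stand-in model:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(3, input_shape=(2,))])

@tf.function  # traces once, then reuses the compiled graph per call
def jacobian_at(x):
    with tf.GradientTape() as g:
        g.watch(x)
        y = model(x)
    return g.jacobian(y, x)

for _ in range(5):
    x = tf.random.uniform((1, 2))
    J = jacobian_at(x)  # shape (1, 3, 1, 2): d(outputs)/d(inputs)
```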

Convolutional Neural Network seems to be randomly guessing

旧城冷巷雨未停 submitted on 2020-05-29 06:39:26
Question: I am currently trying to build a race-recognition program using a convolutional neural network. I'm inputting 200px by 200px versions of the UTKFaceRegonition dataset (I put my dataset on a Google Drive if you want to take a look). I'm using 8 different classes (4 races * 2 genders) with Keras and TensorFlow, each class having about 700 images, though I have also done it with 1000. The problem is that when I run the network it gets at best 13.5% accuracy and about 11-12.5% validation accuracy, with a loss around
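
With 8 balanced classes, random guessing scores 1/8 = 12.5%, so these numbers mean the network has learned nothing. Unscaled pixel inputs or a mismatched loss/output pairing are frequent culprits; a hedged sketch of the standard setup (not the questioner's actual model; in older TF versions Rescaling lives under experimental.preprocessing):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255, input_shape=(200, 200, 3)),  # pixels -> [0, 1]
    keras.layers.Conv2D(32, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(8, activation='softmax'),  # one unit per class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # integer labels 0..7
              metrics=['accuracy'])
```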

Understanding a simple LSTM in PyTorch

南楼画角 submitted on 2020-05-24 08:10:50
Question:

```python
import torch, ipdb
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
input = Variable(torch.randn(5, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
c0 = Variable(torch.randn(2, 3, 20))
output, hn = rnn(input, (h0, c0))
```

This is the LSTM example from the docs. I don't understand the following things: What is output-size and why is it
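
For orientation, the shapes follow PyTorch's (seq_len, batch, features) convention, and printing them makes the question concrete. A small sketch using current PyTorch (Variable is long deprecated; plain tensors work the same way):

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)   # (seq_len=5, batch=3, input_size=10)
h0 = torch.randn(2, 3, 20)  # (num_layers=2, batch=3, hidden_size=20)
c0 = torch.randn(2, 3, 20)

output, (hn, cn) = rnn(x, (h0, c0))
print(output.shape)  # torch.Size([5, 3, 20]): last layer's hidden state at every time step
print(hn.shape)      # torch.Size([2, 3, 20]): final hidden state of each layer
print(cn.shape)      # torch.Size([2, 3, 20]): final cell state of each layer
```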

Why am I getting bad results with Keras vs. random forest or KNN?

久未见 submitted on 2020-05-24 03:35:38
Question: I'm learning deep learning with Keras and trying to compare the results (accuracy) with machine learning algorithms from sklearn (i.e. random forest, k-nearest neighbors). It seems that with Keras I'm getting the worst results. I'm working on a simple classification problem: the iris dataset. My Keras code looks like:

```python
samples = datasets.load_iris()
X = samples.data
y = samples.target
df = pd.DataFrame(data=X)
df.columns = samples.feature_names
df['Target'] = y

# prepare data
X = df[df.columns[:-1]]
y = df[df
```
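
On a 150-sample dataset like iris, a small network with scaled features and enough epochs usually matches random forest or KNN; under-training and unscaled inputs are the typical reasons Keras looks worse. A hedged sketch of such a baseline (not the questioner's exact code):

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling matters far more for neural nets than for trees.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=200, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```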

Convert sklearn.svm SVC classifier to Keras implementation

人走茶凉 submitted on 2020-05-23 05:54:55
Question: I'm trying to convert some old code from sklearn to a Keras implementation. Since it is crucial to maintain the same way of operation, I want to understand whether I'm doing it correctly. I've converted most of the code already; however, I'm having trouble with the sklearn.svm SVC classifier conversion. Here is how it looks right now:

```python
from sklearn.svm import SVC
model = SVC(kernel='linear', probability=True)
model.fit(X, Y_labels)
```

Super easy, right? However, I couldn't find the analog of SVC
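
Keras has no drop-in SVC, but a linear SVM is commonly approximated as a single dense layer trained with a hinge loss on ±1 labels. A minimal sketch of that equivalence (an approximation on synthetic data, not an exact port of SVC):

```python
import numpy as np
from tensorflow import keras

# Binary labels must be -1/+1 for hinge-style losses.
X = np.random.randn(200, 10).astype('float32')
y = np.sign(X[:, 0]).astype('float32')

model = keras.Sequential([
    keras.layers.Dense(1, input_shape=(10,),
                       kernel_regularizer=keras.regularizers.l2(0.01)),
])
# squared_hinge + L2 weight penalty mimics a soft-margin linear SVM.
model.compile(optimizer='adam', loss='squared_hinge')
model.fit(X, y, epochs=50, verbose=0)
```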

ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)

☆樱花仙子☆ submitted on 2020-05-21 17:50:09
Question: I recently started to follow along with Siraj Raval's Deep Learning tutorials on YouTube, but an error came up when I tried to run my code. The code is from the second episode of his series, How To Make A Neural Network. When I ran the code I got this error:

```
Traceback (most recent call last):
  File "C:\Users\dpopp\Documents\Machine Learning\first_neural_net.py", line 66, in <module>
    neural_network.train(training_set_inputs, training_set_outputs, 10000)
  File "C:\Users\dpopp\Documents\Machine
```
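
The title's error is what NumPy raises when an in-place operation tries to broadcast its result into a smaller array: unlike `a = a + b`, `a += b` must write back into `a`'s existing (3,1) buffer. A tiny reproduction, independent of the tutorial code:

```python
import numpy as np

weights = np.zeros((3, 1))
update = np.ones((3, 4))

# Out-of-place addition broadcasts to a new (3, 4) array -- no error.
broadcasted = weights + update

# In-place addition must fit the result into the existing (3, 1) buffer,
# so the same operation raises the error from the question title.
try:
    weights += update
except ValueError as e:
    print(e)  # non-broadcastable output operand with shape (3,1) ...
```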