artificial-intelligence

Finding the right parameters for a neural network for a Pong game

大兔子大兔子 submitted on 2019-12-23 19:57:06
Question: I have some trouble with my implementation of a deep neural network for the game Pong, because my network always diverges regardless of which parameters I change. I took a Pong game and implemented a Theano/Lasagne-based deep Q-learning algorithm based on the famous Nature paper by Google's DeepMind. What I want: instead of feeding the network with pixel data, I want to input the x- and y-position of the ball and the y-position of the paddle for 4 consecutive frames. So I got a total …
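The input described here is just the ball's x/y position and the paddle's y position stacked over 4 consecutive frames. A minimal sketch of that state representation (the variable names and placeholder values are illustrative, not from the asker's code):

```python
# Sketch: build the network input from ball x, ball y and paddle y
# over the last 4 frames (3 values x 4 frames = 12 inputs).
from collections import deque
import numpy as np

history = deque(maxlen=4)                      # holds the 4 most recent frames
for _ in range(4):
    ball_x, ball_y, paddle_y = 0.5, 0.5, 0.5   # placeholders for values read from the game
    history.append((ball_x, ball_y, paddle_y))

state = np.asarray(list(history), dtype=np.float32).ravel()  # shape (12,), fed to the network
```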

Quick search of face descriptors in the DB

我与影子孤独终老i submitted on 2019-12-23 19:17:51
Question: I want to implement something like a face recognition application using neural networks, so I found this incredible article. When we put a face image through the OpenFace network we get 128 measurements which we can use to compare faces. But here comes the main question: how can we quickly find the same face in the database, i.e. the face with the closest 128 values? If we use an SVM (as described in the article) we would have to retrain the classifier every time we put a new face in the DB, but it's …
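Since OpenFace reduces each face to a 128-value embedding, a nearest-neighbour lookup over the stored embeddings avoids retraining anything when a new face is added. A minimal sketch using scikit-learn's NearestNeighbors (the array names, placeholder data and distance threshold are assumptions, not from the article):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

db_embeddings = np.random.rand(1000, 128).astype(np.float32)  # placeholder: stored face vectors
query = np.random.rand(1, 128).astype(np.float32)             # placeholder: vector of a new image

index = NearestNeighbors(n_neighbors=1, metric="euclidean").fit(db_embeddings)
distances, indices = index.kneighbors(query)  # index of the closest stored face

THRESHOLD = 0.6  # example cutoff for "same person"; tune it on your own data
is_known = distances[0][0] < THRESHOLD
```

For large databases an approximate-nearest-neighbour index (e.g. Annoy or FAISS) serves the same role without a full rebuild on every insert.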

Using ranking data in Logistic Regression

邮差的信 submitted on 2019-12-23 10:25:47
Question: I will be putting the max bounty on this as I am struggling to learn these concepts! I am trying to use some ranking data in a logistic regression. I want to use machine learning to make a simple classifier for whether a webpage is "good" or not. It's just a learning exercise, so I don't expect great results; I'm just hoping to learn the "process" and coding techniques. I have put my data in a .csv with the columns: URL, WebsiteText, AlexaRank, GooglePageRank. In my test CSV we have: URL, WebsiteText …
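One common way to combine the text column with the numeric rank columns is a single scikit-learn pipeline: TF-IDF for WebsiteText, scaling for AlexaRank and GooglePageRank, then logistic regression. A minimal sketch; the file name and the "Label" target column are assumptions, only the feature column names come from the question:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("train.csv")  # hypothetical file with the columns above plus a "Label" column

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=1000), "WebsiteText"),    # free text -> sparse features
    ("ranks", StandardScaler(), ["AlexaRank", "GooglePageRank"]),   # numeric ranks, standardised
])

clf = Pipeline([("features", features), ("logreg", LogisticRegression(max_iter=1000))])
clf.fit(df[["WebsiteText", "AlexaRank", "GooglePageRank"]], df["Label"])
```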

multi-layer perceptron - finding the "separating" curve

和自甴很熟 submitted on 2019-12-23 10:25:13
Question: With a single-layer perceptron it's easy to find the equation of the "separating line" (I don't know the professional term), the line that separates the two types of points, from the perceptron's weights after it has been trained. How can I find, in a similar way, the equation of the curve (not a straight line) that separates the two types of points for a multi-layer perceptron? Thanks. Answer 1: This is only an attempt to get an approximation to the separating boundary or curve. Dataset: Below I …
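A common way to approximate the separating curve of a multi-layer network is to evaluate the trained model on a dense grid of points and draw the contour where the predicted class probability crosses 0.5. A minimal, self-contained sketch using scikit-learn's MLPClassifier and a toy dataset as stand-ins for the asker's perceptron and data:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Toy 2-D data with a non-linear boundary, and a small MLP as a stand-in model.
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# Evaluate the trained network on a dense grid and contour the 0.5 probability level:
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
probs = model.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
plt.contour(xx, yy, probs.reshape(xx.shape), levels=[0.5])   # the approximate "separating" curve
plt.scatter(X[:, 0], X[:, 1], c=y, s=10)
plt.show()
```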

how to flatten input in `nn.Sequential` in PyTorch

a 夏天 submitted on 2019-12-23 09:57:44
Question: How do I flatten the input inside nn.Sequential? Model = nn.Sequential(x.view(x.shape[0],-1), nn.Linear(784,256), nn.ReLU(), nn.Linear(256,128), nn.ReLU(), nn.Linear(128,64), nn.ReLU(), nn.Linear(64,10), nn.LogSoftmax(dim=1)) Answer 1: You can create a new module/class as below and use it in the sequential model just as you use other modules (call Flatten()). class Flatten(torch.nn.Module): def forward(self, x): batch_size = x.shape[0] return x.view(batch_size, -1) Ref: https://discuss.pytorch.org/t …
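For completeness, here is how the Flatten module from the answer plugs into nn.Sequential (the layer sizes come from the question; the input shape is an illustrative MNIST-style batch). Recent PyTorch releases also ship a built-in nn.Flatten() that can be used the same way:

```python
import torch
import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, x):
        return x.view(x.shape[0], -1)   # keep the batch dimension, flatten everything else

model = nn.Sequential(
    Flatten(),                          # replaces the bare x.view(...) call from the question
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
    nn.LogSoftmax(dim=1),
)

out = model(torch.randn(32, 1, 28, 28))  # -> shape (32, 10)
```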

PyTorch Binary Classification - same network structure, 'simpler' data, but worse performance?

纵然是瞬间 submitted on 2019-12-23 06:58:18
Question: To get to grips with PyTorch (and deep learning in general) I started by working through some basic classification examples. One such example was classifying a non-linear dataset created using sklearn (full code available as a notebook here): n_pts = 500 X, y = datasets.make_circles(n_samples=n_pts, random_state=123, noise=0.1, factor=0.2) x_data = torch.FloatTensor(X) y_data = torch.FloatTensor(y.reshape(500, 1)) This is then accurately classified using a pretty basic neural net class Model(nn …
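The excerpt cuts off at the model definition; a minimal sketch of the kind of "pretty basic neural net" it describes for the make_circles data (layer sizes, learning rate and loop length are illustrative, not the asker's exact code):

```python
import torch
import torch.nn as nn
from sklearn import datasets

n_pts = 500
X, y = datasets.make_circles(n_samples=n_pts, random_state=123, noise=0.1, factor=0.2)
x_data = torch.FloatTensor(X)
y_data = torch.FloatTensor(y.reshape(500, 1))

# Small 2 -> hidden -> 1 network with a sigmoid output for binary classification.
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(1000):
    optimizer.zero_grad()
    loss = criterion(model(x_data), y_data)
    loss.backward()
    optimizer.step()
```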

FANN XOR training

为君一笑 submitted on 2019-12-23 05:25:06
Question: I am developing a piece of software that uses FANN, the Fast Artificial Neural Network library. After numerous failed attempts at writing my own ANN code, I have tried to compile a FANN sample program, here the C++ XOR approximation program. Here is the source. #include "../include/floatfann.h" #include "../include/fann_cpp.h" #include <ios> #include <iostream> #include <iomanip> using std::cout; using std::cerr; using std::endl; using std::setw; using std::left; using std::right; using std: …

Having a batch program learn

强颜欢笑 submitted on 2019-12-23 04:08:01
Question: I am making a chat bot for my sister in Batch, but it is consuming so much time that I figured I would have her tell it what to say when it does not know. However, I cannot get it working, and I figured someone on here might know. Here is what I have so far: @ECHO OFF COLOR A cls ECHO HELLO I AM A CHATBOT. WHAT IS YOUR NAME? SET /P NAME= ECHO %NAME%, IS A COOL NAME. set /a favvid=0 set /a hack=0 :hello echo Hello,%name% :begin SET /P TALK= if /i "%TALK%"== "how are you" goto howareyou if /i " …

Neural Network Output: Scaling the output range

一世执手 submitted on 2019-12-23 01:41:58
Question: The output layer of my neural network (3-layered) uses a sigmoid activation, which outputs only in the range [0, 1]. However, what should I do if I want to train it for outputs beyond [0, 1], say in the thousands? For example, if I want to train: input ----> output: 0 0 ------> 0; 0 1 ------> 1000; 1000 1 ----> 1; 1 1 -------> 0. My program works for AND, OR, XOR etc., as the inputs and outputs are all binary. There were some suggestions to use the activation y = lambda*(abs(x) 1/(1+exp(-1 (x)))) …
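A common way to handle targets outside [0, 1] while keeping the sigmoid output is to scale the targets into [0, 1] for training and invert the scaling at prediction time (the alternative being a linear/identity output activation). A minimal sketch of that rescaling, using the target values from the question:

```python
import numpy as np

targets = np.array([0.0, 1000.0, 1.0, 0.0])   # desired outputs, including the value in the thousands
t_min, t_max = targets.min(), targets.max()

scaled_targets = (targets - t_min) / (t_max - t_min)   # now in [0, 1], compatible with a sigmoid output

def unscale(y_sigmoid):
    """Map the network's [0, 1] output back to the original target range."""
    return y_sigmoid * (t_max - t_min) + t_min
```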

Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (800, 1000)

蹲街弑〆低调 submitted on 2019-12-23 01:28:10
Question: I am trying to do sentiment analysis using a CNN. In my code, my data has shape (1000, 1000); when I pass the data to Convolution2D it throws an error which I am not able to resolve. I tried the solution below but am still facing the issue. When building a CNN, I am getting complaints from Keras that do not make sense to me. My code is below. TfIdf = TfidfVectorizer(max_features=1000) X = TfIdf.fit_transform(x.ravel()) Y = df.iloc[:,1:2].values X_train, X_test, Y_train, Y_test = train_test_split(X, Y …
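Conv2D expects 4-D input of shape (samples, height, width, channels), while the TF-IDF matrix is 2-D, hence the error in the title. A minimal sketch of the reshape that fixes the dimensionality (the placeholder array and the (1000, 1, 1) layout are assumptions based on the shapes in the question; whether a 2-D convolution is the right model for TF-IDF features is a separate issue):

```python
import numpy as np

# Placeholder standing in for the TF-IDF output in the question (800 rows, 1000 features).
X_train = np.random.rand(800, 1000).astype(np.float32)

# Keras Conv2D wants (samples, height, width, channels), so add the missing dimensions:
X_train_cnn = X_train.reshape(-1, 1000, 1, 1)   # treat each row as a 1000x1 single-channel "image"

# The first layer then needs a matching input shape, e.g.
#   Conv2D(32, (3, 1), activation="relu", input_shape=(1000, 1, 1))
```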