artificial-intelligence

How to capture and process live activity from another application in Python?

笑着哭i submitted on 2019-12-05 21:41:22
I'm a computer science student, and as a personal project I'm interested in building software that can watch a Super Nintendo game running in a local emulator and produce useful information about it. This might be things like current health, current score, etc. (anything legible on the screen). The emulator runs in windowed form (I'm using SNES9x), so I wouldn't need to capture every pixel on the screen, and I'd only have to capture about 30 fps. I've looked into some libraries like FFmpeg and OpenCV, but so far what I've seen leads me to believe I have to have pre-recorded renderings of the
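A minimal sketch of the per-frame processing step. It assumes frames have already been grabbed from the emulator window (for example with a third-party capture library such as python-mss, which can grab just a window's region at ~30 fps) and that the value to read, here a health bar, lives in a known pixel region that is predominantly red when filled. Both the region and the colour rule are invented for illustration:

```python
def health_from_row(row, red_threshold=150):
    """Estimate health as the fraction of 'filled' (reddish) pixels
    in one row of (R, G, B) tuples taken from the health-bar region.
    The colour thresholds are illustrative guesses."""
    if not row:
        return 0.0
    filled = sum(1 for (r, g, b) in row
                 if r >= red_threshold and g < 100 and b < 100)
    return filled / len(row)

# A bar that is 60% full: six red pixels followed by four dark ones.
bar = [(200, 20, 20)] * 6 + [(40, 40, 40)] * 4
print(health_from_row(bar))  # -> 0.6
```

Score and text would need OCR (e.g. via a library like Tesseract) rather than colour counting, but the capture-then-inspect-a-region loop is the same.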

AI: Fastest algorithm to find if path exists?

那年仲夏 submitted on 2019-12-05 21:28:52
Question: I am looking for a pathfinding algorithm for an AI-controlled entity in a 2D grid that needs to find a path from A to B. It does not have to be the shortest path, but it needs to be calculated very fast. The grid is static (it never changes) and some grid cells are occupied by obstacles. I'm currently using A*, but it is too slow for my purposes because it always tries to calculate the shortest path. The main performance problem occurs when the path does not exist, in which case A* will
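Since the grid is static, one standard remedy for the no-path case is to flood-fill the grid once and label its connected components; after that, "does a path exist?" is a constant-time label comparison, and A* only ever runs when a path is known to exist. A sketch:

```python
from collections import deque

def label_components(grid):
    """Flood-fill a static grid once, labelling each walkable cell with a
    connected-component id. grid[r][c] is True for walkable cells."""
    rows, cols = len(grid), len(grid[0])
    label = [[-1] * cols for _ in range(rows)]
    comp = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and label[r][c] == -1:
                # BFS from this unlabelled walkable cell.
                q = deque([(r, c)])
                label[r][c] = comp
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and label[ny][nx] == -1):
                            label[ny][nx] = comp
                            q.append((ny, nx))
                comp += 1
    return label

def path_exists(label, a, b):
    """O(1) reachability check: same component, and both walkable."""
    return label[a[0]][a[1]] == label[b[0]][b[1]] != -1

grid = [[True, True,  False],
        [False, True, False],
        [False, False, True]]
label = label_components(grid)
print(path_exists(label, (0, 0), (1, 1)))  # True
print(path_exists(label, (0, 0), (2, 2)))  # False
```

The precomputation is O(cells) and runs once; for "fast but not necessarily shortest" paths, greedy best-first search is also a common swap for A*.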

Java open-source projects for medical diagnosis & data mining [closed]

我怕爱的太早我们不能终老 submitted on 2019-12-05 20:01:20
Closed. This question is off-topic and is not currently accepting answers. Closed 4 years ago. I'm looking for open-source Java engines for diagnosing medical diseases: engines that take a query describing a patient's symptoms as input and return suggestions of potential diseases matching those symptoms. Do such engines exist somewhere? I'd prefer an open-source Java engine in this field if one exists. Any suggestions or ideas? Thanks. It sounds like you are looking for a
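The core of the described symptom-to-disease lookup is a rule-matching step. A toy sketch of that step (the disease/symptom table is entirely invented for illustration and is not medical data): score each disease by the fraction of its known symptoms present in the query, and return the candidates above a threshold.

```python
# Hypothetical knowledge base: disease -> set of associated symptoms.
RULES = {
    "common cold": {"cough", "sneezing", "runny nose"},
    "influenza": {"fever", "cough", "fatigue"},
}

def suggest(symptoms, rules=RULES, min_overlap=0.5):
    """Return diseases whose known symptoms overlap the query
    by at least min_overlap, best match first."""
    symptoms = set(symptoms)
    scored = []
    for disease, known in rules.items():
        overlap = len(symptoms & known) / len(known)
        if overlap >= min_overlap:
            scored.append((overlap, disease))
    return [d for _, d in sorted(scored, reverse=True)]

print(suggest(["fever", "cough", "fatigue"]))  # -> ['influenza']
```

Real diagnosis engines (classic expert systems, or rule engines like Drools on the Java side) add certainty factors and chaining on top of this kind of matching, but the input/output shape is the same as the question describes.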

Getting more details from R's optim function

老子叫甜甜 submitted on 2019-12-05 19:14:21
I'm not very familiar with the optim function, and I wanted to get the following information from its results: a) how many iterations were needed to achieve the result? and b) a plot of the sequence of partial solutions, that is, the solution obtained at the end of each iteration. My code so far looks like this:

f1 <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  x1^2 + 3*x2^2
}
res <- optim(c(1,1), f1, method="CG")

How can I improve it to get that further information? Thanks in advance. You could modify your function to store the values that are passed into it in a global list:

i <- 0
vals <- list()
f1 <- function(x) {
  i <<- i + 1       # count how many times f1 has been evaluated
  vals[[i]] <<- x   # record the point optim passed in
  x[1]^2 + 3*x[2]^2
}

Crossover different length genotypes

允我心安 submitted on 2019-12-05 17:56:50
E.g. I have two random representatives, 1 6 8 9 0 3 4 7 5 and 3 6 5 7 8 5. What are the ways to crossover them? Add some empty values (or no-op genes or similar) at the end of the shorter genotype so they have the same size, e.g. 3 6 5 7 8 5 -1 -1 -1, where -1 means nothing? Or copy a few numbers from the first genotype and some from the second? Which way do you use? If you already have variable-length chromosomes, then it shouldn't matter how you do it: you just need to select a crossover point for each of them, and then crossover as normal. For example, using your chromosomes, I have selected two points (.) at
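The answer's approach, an independent cut point in each parent followed by a tail swap, can be sketched in a few lines; no padding genes are needed, and offspring lengths may differ from both parents:

```python
import random

def crossover(p1, p2, rng=random):
    """One-point crossover for variable-length genotypes: pick an
    independent cut point in each parent, then swap the tails."""
    c1 = rng.randrange(len(p1) + 1)
    c2 = rng.randrange(len(p2) + 1)
    return p1[:c1] + p2[c2:], p2[:c2] + p1[c1:]

a = [1, 6, 8, 9, 0, 3, 4, 7, 5]
b = [3, 6, 5, 7, 8, 5]
child1, child2 = crossover(a, b)
# The total amount of genetic material is conserved across the pair:
print(len(child1) + len(child2) == len(a) + len(b))  # True
```

Whether the cut points must respect gene boundaries (e.g. in tree- or expression-based encodings) depends on the representation, but for flat lists like these any pair of positions is valid.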

Neural Network Diverging instead of converging

坚强是说给别人听的谎言 submitted on 2019-12-05 16:17:49
I have implemented a neural network (using CUDA) with 2 layers (2 neurons per layer). I'm trying to make it learn 2 simple quadratic polynomial functions using backpropagation, but instead of converging, it is diverging (the output is becoming infinity). Here are some more details about what I've tried: I had set the initial weights to 0, but since it was diverging I randomized the initial weights. I read that a neural network might diverge if the learning rate is too high, so I reduced the learning rate to 0.000001. The two functions I am trying to get it to learn are: 3 * i + 7 * j + 9 and
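Two points from the question are worth separating. Zero initial weights make all neurons in a layer compute identical updates (a symmetry problem), which stalls learning rather than causing divergence; divergence itself is usually a step-size or input-scaling issue. The step-size mechanism can be reproduced on a 1-D quadratic without any network, since gradient descent on f(w) = w² multiplies w by (1 − 2·lr) each step and diverges whenever |1 − 2·lr| > 1:

```python
def descend(lr, steps=20, w=1.0):
    """Plain gradient descent on f(w) = w**2; gradient is 2*w."""
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(abs(descend(lr=0.1)))  # shrinks toward 0
print(abs(descend(lr=1.5)))  # blows up (factor -2 per step)
```

For the network in the question, unnormalized targets like 3*i + 7*j + 9 have the same effect as an oversized learning rate: large activations produce large gradients, so scaling inputs and targets into a small range (a common heuristic, not something stated in the question) is usually tried before shrinking the learning rate to 1e-6.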

Can anyone suggest good algorithms for CBIR?

北城以北 submitted on 2019-12-05 15:54:46
Project: Content-Based Image Retrieval, semi-supervised (manual tagging is done on images during training). Description: I have 1,000,000 images in the database. The training is manual (supervised): a title and tags are provided for each image. Example: coke.jpg, Title: Coke, Tags: Coke, Can. Using the images and tags, I have to train the system. After training, when I give it a new image (either already in the database or completely new), the system should output the possible tags the image may belong to and display a few images belonging to each tag. The system may also say no match was found. Questions: 1) What is mean
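A sketch of the retrieval step such a system performs at query time, assuming each image has already been reduced to a feature vector (a colour histogram, SIFT bag-of-words, or CNN embedding; the vectors and tags below are made up): suggest tags from the most similar training images, and report no match when even the best similarity is under a threshold.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def suggest_tags(query, database, k=2, min_sim=0.5):
    """database: list of (feature_vector, tags). Returns tags pooled
    from the k nearest images, or [] meaning 'no match found'."""
    scored = sorted(((cosine(query, vec), tags) for vec, tags in database),
                    key=lambda t: t[0], reverse=True)
    if not scored or scored[0][0] < min_sim:
        return []
    result = []
    for sim, tags in scored[:k]:
        if sim >= min_sim:
            for t in tags:
                if t not in result:
                    result.append(t)
    return result

database = [
    ([0.9, 0.1, 0.0], ["coke", "can"]),   # hypothetical coke.jpg features
    ([0.0, 0.2, 0.9], ["tree"]),
]
print(suggest_tags([0.8, 0.2, 0.1], database))  # -> ['coke', 'can']
```

At the stated scale (10^6 images) the linear scan would be replaced by an approximate nearest-neighbour index, but the tag-pooling logic stays the same.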

How to convert n-ary CSP to binary CSP using dual graph transformation

孤者浪人 submitted on 2019-12-05 14:43:30
Question: While reading the book Artificial Intelligence: A Modern Approach, I came across the following sentence describing a method to convert an n-ary constraint satisfaction problem (CSP) to a binary one: "Another way to convert an n-ary CSP to a binary one is the dual graph transformation: create a new graph in which there will be one variable for each constraint in the original graph, and one binary constraint for each pair of constraints in the original graph that share variables." For example, if the
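The construction the book describes can be made concrete by computing just the structure of the dual graph. Each original constraint (here represented only by its scope, i.e. its set of variables) becomes one dual variable, and every pair of constraints sharing at least one original variable gets a binary constraint over the shared variables. The constraint names below are invented for the example:

```python
from itertools import combinations

def dual_graph(constraints):
    """constraints: dict mapping constraint name -> set of variable names
    in its scope. Returns the binary constraints of the dual CSP as
    {(c1, c2): shared_variables}; the dual constraint enforces that both
    sides agree on the values of the shared variables."""
    edges = {}
    for (c1, s1), (c2, s2) in combinations(sorted(constraints.items()), 2):
        shared = s1 & s2
        if shared:
            edges[(c1, c2)] = shared
    return edges

# Two ternary constraints sharing the variable Y, e.g. X+Y=Z and Y+W=V:
constraints = {"C1": {"X", "Y", "Z"}, "C2": {"Y", "W", "V"}}
print(dual_graph(constraints))  # -> {('C1', 'C2'): {'Y'}}
```

The domain of each dual variable would be the set of satisfying assignments of the original constraint, and the binary constraint ('C1', 'C2') admits exactly the pairs of assignments that agree on Y.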

Cards representation in Prolog

痴心易碎 submitted on 2019-12-05 12:57:02
I'm trying to learn Prolog; these are my first steps with the language. As an exercise I want to write a program that can recognize some poker hands (straight flush, four of a kind, full house, etc.). I'm looking for a good card representation in Prolog: I need to be able to check whether one card is higher than another, whether cards are suited, and so on. I have started with this code:

rank(2). rank(3). rank(4). rank(5). rank(6). rank(7).
rank(8). rank(9). rank(t). rank(j). rank(q). rank(k). rank(a).

value(2, 2). value(3, 3). value(4, 4). value(5, 5).
value(6, 6). value(7, 7). value(8, 8). value(9, 9).
value(t, 10). value(j, 11). value(q, 12). value(k, 13). value(a, 14).

Gaussian-RBM fails on a trivial example

强颜欢笑 submitted on 2019-12-05 12:41:15
I want a nitty-gritty understanding of restricted Boltzmann machines with continuous input variables, so I am trying to devise the most trivial possible example, one whose behavior can be easily tracked. Here it is: the input data is two-dimensional, and each data point is drawn from one of two symmetrical normal distributions (sigma = 0.03) whose centers are well separated (15 times sigma). The RBM has a two-dimensional hidden layer. I expected to obtain an RBM that would generate two clouds of points with the same means as in my training data. I was even thinking that after adding some
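For reference, the described trivial setup, 2-D visible units drawn from two well-separated Gaussians and a 2-unit binary hidden layer, fits in a short pure-Python sketch of one CD-1 (contrastive divergence) pass over a Gaussian-Bernoulli RBM. The hyperparameters here are illustrative guesses, not the question's values, and the visible units are assumed to have unit variance (in practice the data would be standardized first, which is one common reason such small-sigma setups misbehave):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Two clouds around (0, 0) and (0.45, 0): 15 sigma apart with sigma = 0.03.
sigma = 0.03
data = [(random.gauss(cx, sigma), random.gauss(cy, sigma))
        for cx, cy in [(0.0, 0.0), (0.45, 0.0)]
        for _ in range(100)]

W = [[random.gauss(0, 0.01) for _ in range(2)] for _ in range(2)]  # W[i][j]
a = [0.0, 0.0]   # visible biases
b = [0.0, 0.0]   # hidden biases
lr = 0.01

for v in data:
    # Positive phase: hidden activation probabilities, then a binary sample.
    ph = [sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(2))) for j in range(2)]
    h = [1.0 if random.random() < p else 0.0 for p in ph]
    # Negative phase: Gaussian reconstruction mean, then hidden probs again.
    v1 = [a[i] + sum(h[j] * W[i][j] for j in range(2)) for i in range(2)]
    ph1 = [sigmoid(b[j] + sum(v1[i] * W[i][j] for i in range(2))) for j in range(2)]
    # CD-1 parameter updates.
    for i in range(2):
        for j in range(2):
            W[i][j] += lr * (v[i] * ph[j] - v1[i] * ph1[j])
        a[i] += lr * (v[i] - v1[i])
    for j in range(2):
        b[j] += lr * (ph[j] - ph1[j])

print(all(math.isfinite(x) for row in W for x in row))  # True: stays stable
```

Having every moving part visible in ~30 lines makes it easy to log the reconstruction means per step and see where the two clouds collapse or drift, which is exactly the kind of tracking the question is after.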