artificial-intelligence

Batch normalization instead of input normalization

Submitted by 北战南征 on 2019-12-03 05:09:32
Question: Can I use a batch normalization layer right after the input layer and not normalize my data? May I expect to get a similar effect/performance? In Keras functional style it would be something like this: x = Input(...); x = BatchNorm(...)(x); ...

Answer 1: You can do it. But the nice thing about batchnorm, in addition to stabilizing the activation distributions, is that the mean and standard deviation are likely to migrate as the network learns. Effectively, setting the batchnorm right after the input layer is a fancy data…
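To make the trade-off concrete, here is a minimal NumPy sketch of what a batchnorm layer computes at training time (the `gamma`/`beta` scale and shift are the learnable parameters the answer alludes to; the function name and defaults are my own, not from Keras):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch per feature, then apply a learnable scale/shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=3.0, size=(256, 4))  # un-normalized inputs
y = batch_norm(x)
print(np.abs(y.mean(axis=0)).max())      # close to 0
print(np.abs(y.std(axis=0) - 1).max())   # close to 0
```

With `gamma=1, beta=0` this reproduces plain input normalization; the difference in a real network is that `gamma` and `beta` are trained, so the effective normalization can drift away from zero mean / unit variance as learning proceeds.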

What kind of algorithm is behind the Akinator game?

Submitted by 送分小仙女 on 2019-12-03 05:06:34
Question: It always amazed me how the Akinator app can guess a character by asking just a few questions. So I wonder what kind of algorithm or method lets it do that? Is there a name for that class of algorithms, and where can I read more about them?

Answer 1: Yes, there is a name for this class of algorithms: they are called classification algorithms in the field of machine learning. Decision trees are one example of a classification algorithm. In this classification problem, the features for the algorithm…
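A toy sketch of the decision-tree idea: keep a pool of candidate characters with yes/no features, repeatedly ask the question that splits the remaining pool most evenly, and filter by the answer. The characters and features are invented for illustration; Akinator's actual database and question-selection criterion are not public.

```python
# Hypothetical characters with boolean features (all names/features made up).
characters = {
    "Mario":           {"fictional": True,  "wears_hat": True,  "is_female": False},
    "Pikachu":         {"fictional": True,  "wears_hat": False, "is_female": False},
    "Ada Lovelace":    {"fictional": False, "wears_hat": False, "is_female": True},
    "Abraham Lincoln": {"fictional": False, "wears_hat": True,  "is_female": False},
}

def best_question(pool):
    # Ask the feature that splits the remaining pool closest to 50/50,
    # a crude stand-in for maximizing information gain.
    feats = next(iter(pool.values())).keys()
    return min(feats, key=lambda f: abs(sum(pool[c][f] for c in pool) - len(pool) / 2))

def guess(answers):
    pool = dict(characters)
    while len(pool) > 1:
        q = answers.__self__ if False else best_question(pool)
        a = answers(q)
        pool = {c: f for c, f in pool.items() if f[q] == a}
    return next(iter(pool))

# Simulate a user thinking of Pikachu and answering truthfully:
target = characters["Pikachu"]
print(guess(lambda q: target[q]))  # Pikachu
```

Each consistent answer roughly halves the pool, which is why a handful of questions suffices for thousands of candidates (log2 of the database size).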

How many possible states does the 8-puzzle have?

Submitted by 这一生的挚爱 on 2019-12-03 04:59:51
The classical 8-puzzle belongs to the family of sliding-block puzzles. My book (Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig) says that the 8-puzzle has 9!/2 possible states. But WHY the /2? How do you get this?

9! is the total number of possible configurations of the puzzle, whereas 9!/2 is the total number of solvable configurations. For example, this configuration doesn't have a solution:

1 2 3
4 5 6
8 7

Read more about the solvability of certain configurations of the n-puzzle in this Wikipedia article, or, as pointed out by @dasblinkenlight, in this MathWorld…
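The /2 comes from parity: legal slides never change the parity of the permutation's inversion count, and exactly half of the 9! arrangements have even parity. A short sketch of the standard 3x3 solvability test:

```python
def is_solvable(tiles):
    """An 8-puzzle (3x3) state is solvable iff its tile permutation has an
    even number of inversions, with the blank (0) ignored.
    `tiles` is the board flattened row by row."""
    seq = [t for t in tiles if t != 0]
    inversions = sum(
        1
        for i in range(len(seq))
        for j in range(i + 1, len(seq))
        if seq[i] > seq[j]
    )
    return inversions % 2 == 0

print(is_solvable([1, 2, 3, 4, 5, 6, 7, 8, 0]))  # True  (goal state)
print(is_solvable([1, 2, 3, 4, 5, 6, 8, 7, 0]))  # False (the example above)
```

Swapping just 7 and 8 creates a single inversion, flipping the parity, which is why that configuration is unreachable from the goal. (On the 4x4 15-puzzle the rule also involves the blank's row, since the blank changes rows on vertical moves.)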

What is the difference between Q-learning and Value Iteration?

Submitted by 我怕爱的太早我们不能终老 on 2019-12-03 04:56:27
Question: How is Q-learning different from value iteration in reinforcement learning? I know Q-learning is model-free and training samples are transitions (s, a, s', r). But since we know the transitions and the reward for every transition in Q-learning, is it not the same as model-based learning, where we know the reward for a state-action pair and the transitions for every action from a state (be it stochastic or deterministic)? I do not understand the difference.

Answer 1: You are 100% right that if…
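The core distinction can be shown on a toy MDP: value iteration sweeps over the known transition/reward model, while Q-learning only ever sees sampled (s, a, r, s') tuples. A minimal sketch (the chain world, learning rate, and iteration counts are arbitrary choices of mine):

```python
import random

# Tiny deterministic chain MDP: states 0..3, state 3 is terminal.
# Actions: 0 = left, 1 = right. Reward 1.0 on reaching state 3, else 0.
N, GAMMA = 4, 0.9

def step(s, a):
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N - 1 else 0.0)

# Value iteration: Bellman sweeps over the KNOWN model.
V = [0.0] * N
for _ in range(100):
    V = [0.0 if s == N - 1 else
         max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in (0, 1))
         for s in range(N)]

# Q-learning: updates from SAMPLED transitions, never touching the model.
Q = [[0.0, 0.0] for _ in range(N)]
random.seed(0)
for _ in range(5000):
    s = random.randrange(N - 1)           # sample a non-terminal state
    a = random.randrange(2)               # sample an action
    s2, r = step(s, a)                    # the environment yields (s', r)
    target = r + GAMMA * (0.0 if s2 == N - 1 else max(Q[s2]))
    Q[s][a] += 0.5 * (target - Q[s][a])   # TD update, learning rate 0.5

print([round(v, 2) for v in V])           # model-based state values
print([round(max(q), 2) for q in Q[:3]])  # converges to the same values
```

Here `step` plays the role of the environment: value iteration calls it as a lookup table (that is what "knowing the model" means), while Q-learning only observes its outputs transition by transition, which is exactly the situation when the dynamics are unknown.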

Robot exploration algorithm

Submitted by 血红的双手。 on 2019-12-03 04:55:55
Question: I'm trying to devise an algorithm for a robot trying to find the flag (positioned at an unknown location) in a world containing obstacles. The robot's mission is to capture the flag and bring it to its home base (which represents its starting position). At each step the robot sees only a limited neighbourhood (it does not know what the world looks like in advance), but it has unlimited memory to store already visited cells. I'm looking for any suggestions about how to do this in an…
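One natural baseline for this setting is on-line depth-first exploration: the robot inspects only the neighbours of its current cell, marks cells visited, and physically backtracks along its own path at dead ends. A sketch on a made-up grid (the map, symbols, and movement model are my assumptions, not from the question):

```python
# S = start/home, F = flag, # = obstacle, . = free.
# The robot does NOT read this map globally; it only queries adjacent cells.
WORLD = [
    "S.#.",
    ".#.F",
    "....",
]

def explore(world):
    rows, cols = len(world), len(world[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if world[r][c] == "S")
    visited, path = {start}, [start]   # path doubles as the backtracking stack
    while path:
        r, c = path[-1]
        if world[r][c] == "F":
            return path                # the route walked from home to the flag
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited and world[nr][nc] != "#"):
                visited.add((nr, nc))
                path.append((nr, nc))  # step into the new cell
                break
        else:
            path.pop()                 # dead end: physically step back
    return None                        # flag unreachable

route = explore(WORLD)
print(route[0], route[-1])
```

Since `path` always records the walked route back to home, reversing it gives the return trip with the flag; a refinement would run a shortest-path search over the cells discovered so far instead of retracing the DFS route.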

What are some good resources on flocking and swarm algorithms?

Submitted by 这一生的挚爱 on 2019-12-03 04:44:27
Question: A while ago I read the novel Prey. Even though it is definitely in the realm of fun science fiction, it piqued my interest in swarm/flocking AI. I've been seeing some examples of these demos recently on reddit, such as the Nvidia plane-flocking video and Chris Benjaminsen's flocking sandbox (source). I'm interested in writing some simulation demos involving swarm or flocking AI. I took Artificial Intelligence in college, but we never approached the subject of simulating swarming/flocking…
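The classic starting point here is Reynolds' boids model, which produces flocking from three local steering rules: separation, alignment, and cohesion. A minimal NumPy sketch (the radius and rule weights are arbitrary values I picked, and real demos add speed limits and world bounds):

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (30, 2))   # 30 agents on a 100x100 plane
vel = rng.uniform(-1, 1, (30, 2))

def boids_step(pos, vel, r=15.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < r) & (d > 0)                  # neighbours within radius r
        if not near.any():
            continue
        sep = (pos[i] - pos[near]).sum(axis=0)    # steer away from crowding
        ali = vel[near].mean(axis=0) - vel[i]     # match neighbours' heading
        coh = pos[near].mean(axis=0) - pos[i]     # steer toward local centre
        new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
    return pos + new_vel, new_vel

for _ in range(50):
    pos, vel = boids_step(pos, vel)
print(pos.shape, np.isfinite(pos).all())
```

Each rule is purely local (only neighbours within `r` matter), which is what makes the emergent group behaviour interesting and also what makes the model trivially parallelizable, as in the GPU demos mentioned above.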

“synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.” problem in TensorFlow

Submitted by 隐身守侯 on 2019-12-03 04:38:54
I installed TensorFlow 1.10.1, but when I tried to import TensorFlow it said that I need TensorFlow version 1.10.0. Thus, I installed it, and now I get the following warnings:

>>> import tensorflow
C:\Users\PC\Anaconda3\envs\tut\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\PC\Anaconda3\envs\tut\lib\site-packages\tensorflow\python\framework\dtypes.py:517:…
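These FutureWarnings are raised inside TensorFlow's own dtypes.py (an old TF release using a dtype spelling that newer NumPy deprecated), not by user code, so they are harmless. If pinning an older NumPy is not an option, one workaround is to filter the warning before the import; a sketch, with the message pattern copied from the warning text above:

```python
import warnings

# Suppress only this specific FutureWarning, not FutureWarnings in general.
warnings.filterwarnings(
    "ignore",
    category=FutureWarning,
    message=r".*synonym of type is deprecated.*",
)
# import tensorflow as tf   # would now import without the dtype warnings
```

The longer-term fix is upgrading TensorFlow (later releases changed the dtype declarations), since the filter only hides the symptom.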

Are there any artificial intelligence projects in PHP out there? [closed]

Submitted by 此生再无相见时 on 2019-12-03 04:25:10
Question: (Closed as off-topic for Stack Overflow; not accepting answers. Closed 2 years ago.) I'm interested in this field, but I'm only familiar with PHP so far. If not, can you recommend a tiny but not-so-bad project that's easy enough to learn from?

Answer 1: Take a look at Program O: http://blog.program-o.com/ This is the description of the project: Program O is an AIML engine written in PHP with MySQL. Here you…

Sudoku solving algorithm C++

Submitted by 眉间皱痕 on 2019-12-03 03:58:57
Question: I've been trying to make a Sudoku-solving program for a couple of days, but I'm stuck with the methods. I found this algorithm here but I don't really understand it:

1. Start at the first empty cell, and put 1 in it.
2. Check the entire board, and see if there are any conflicts.
3. If there are conflicts on the board, increase the number in the current cell by 1 (so change 1 to 2, 2 to 3, etc.).
4. If the board is clean, move on and start at step 1 again.
5. If all nine possible numbers on a given cell cause a conflict in…
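The steps above describe trial-and-error with backtracking: try 1..9 in each empty cell, and when every number conflicts, undo and advance the previous cell's number. The same idea reads more naturally as a recursive sketch (my own restructuring of those steps, with 0 marking empty cells):

```python
def has_conflict(board, r, c, v):
    """True if placing v at (r, c) clashes with its row, column, or 3x3 box."""
    if v in board[r]:
        return True
    if any(board[i][c] == v for i in range(9)):
        return True
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return any(board[i][j] == v
               for i in range(br, br + 3) for j in range(bc, bc + 3))

def solve(board):
    """Fill the first empty cell with the lowest non-conflicting number and
    recurse; a dead end unwinds to the previous cell, which tries its next
    number -- exactly the increase-by-1-and-backtrack loop described above."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in range(1, 10):
                    if not has_conflict(board, r, c, v):
                        board[r][c] = v
                        if solve(board):
                            return True
                        board[r][c] = 0   # undo, try the next number
                return False              # all nine numbers conflict: backtrack
    return True                           # no empty cells left: solved

puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 7, 9],
]
solved = solve(puzzle)
print(solved, puzzle[0])
```

Checking only the affected row, column, and box (rather than "the entire board" as in step 2) is enough, because all other cells were conflict-free before the placement.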

Tracing and Returning a Path in Depth First Search

Submitted by 爱⌒轻易说出口 on 2019-12-03 03:57:07
Question: So I have a problem that I want to use depth-first search to solve, returning the first path that DFS finds. Here is my (incomplete) DFS function:

start = problem.getStartState()
stack = Stack()
visited = []
stack.push(start)
if problem.isGoalState(problem.getStartState()):
    return something
while stack:
    parent = stack.pop()
    if parent in visited:
        continue
    if problem.isGoalState(parent):
        return something
    visited.append(parent)
    children = problem.getSuccessors(parent)
    for child in children:
        stack…
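A common fix for this kind of function is to push (state, path-so-far) pairs instead of bare states, so the full path is available the moment a goal is popped. A self-contained sketch; the `GridProblem` class is a made-up stand-in for the asker's problem object (whose `getSuccessors` may return richer tuples in the real framework):

```python
def dfs_path(problem):
    """Depth-first search that returns the list of states from start to goal,
    or None if no goal is reachable."""
    start = problem.getStartState()
    stack = [(start, [start])]          # each entry: (state, path to state)
    visited = set()
    while stack:
        state, path = stack.pop()
        if state in visited:
            continue
        if problem.isGoalState(state):
            return path                 # the path is already assembled
        visited.add(state)
        for child in problem.getSuccessors(state):
            if child not in visited:
                stack.append((child, path + [child]))
    return None

# Hypothetical toy problem to exercise the sketch (not the asker's class):
class GridProblem:
    def getStartState(self):
        return (0, 0)
    def isGoalState(self, s):
        return s == (2, 2)
    def getSuccessors(self, s):
        x, y = s
        return [(nx, ny) for nx, ny in ((x + 1, y), (x, y + 1))
                if nx <= 2 and ny <= 2]

found = dfs_path(GridProblem())
print(found)
```

Copying the path on every push (`path + [child]`) costs memory but keeps the code simple; an alternative is a parent-pointer dictionary reconstructed into a path once the goal is found.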