planning

About PDDL in AI planning

Submitted by 独自空忆成欢 on 2021-02-11 17:23:58
Question: I am trying to solve a Pacman-style problem with a planner, using PDDL. I assume there is plenty of food on the given map. I use exists to check whether any food remains on the map, but it does not work; why is that? Here is my problem file:

    (define (problem pacman-level-1)
      (:domain pacman_simple)
      ;; problem map
      ;; | 1 | 2 | 3 |
      ;; -|---|---|---|
      ;; a| P | G | F |
      ;; b| _ | _ | _ |
      ;;  |---|---|---|
      (:objects
        a1 a2 a3 b1 b2 b3 - cell
        pacman - pacman
        ghost - ghost
        food1 - food
        food2 - food
        nofood
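As an aside, one common cause of exists failing is a missing :existential-preconditions (or :adl) entry in the domain's :requirements, or a planner that simply does not support it. For illustration, here is a minimal Python sketch of the same search problem, with the existential "does any food remain?" check as the goal test. The grid and cell names follow the map comment in the question; the breadth-first planner itself is an assumption, not the asker's setup.

```python
from collections import deque

# Minimal sketch, assuming the 2x3 grid from the map comment:
# P = pacman at a1, G = ghost at a2, F = food at a3 (plus b2 for interest).
# A state is (pacman_cell, frozenset_of_cells_with_food).

ADJACENT = {
    "a1": ["a2", "b1"], "a2": ["a1", "a3", "b2"], "a3": ["a2", "b3"],
    "b1": ["b2", "a1"], "b2": ["b1", "b3", "a2"], "b3": ["b2", "a3"],
}
GHOST = "a2"  # pacman never enters the ghost's cell

def plan(start, food_cells):
    """Breadth-first search for a move sequence that eats all the food."""
    init = (start, frozenset(food_cells))
    queue, seen = deque([(init, [])]), {init}
    while queue:
        (pos, left), moves = queue.popleft()
        if not left:                      # existential goal test: no food left
            return moves
        for nxt in ADJACENT[pos]:
            if nxt == GHOST:
                continue
            state = (nxt, left - {nxt})   # moving onto food eats it
            if state not in seen:
                seen.add(state)
                queue.append((state, moves + [nxt]))
    return None

print(plan("a1", {"a3", "b2"}))   # -> ['b1', 'b2', 'b3', 'a3']
```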

PDDL Graphplan can't find plan

Submitted by 拈花ヽ惹草 on 2020-01-06 14:12:08
Question: I've written a domain and a test problem in PDDL, but apparently the Graphplan implementation can't find a plan. Here's the domain:

    (define (domain aperture)
      (:requirements :strips :typing :negative-preconditions)
      (:types
        cube
        hallway room - location
      )
      (:predicates
        (at ?l - location)
        (has ?c - cube)
        (connected ?l1 - location ?l2 - location)
        (in ?c - cube ?l - location)
      )
      (:action enter
        :parameters (?h - hallway ?r - room)
        :precondition (and
          (connected ?h ?r)
          (connected ?r ?h)
          (at ?h)
          (not (at

How to write a kind of conditional planning in Prolog?

Submitted by ☆樱花仙子☆ on 2019-12-24 10:02:16
Question: I am trying to write Prolog code that can understand student programs written in C#. Now I'm stuck on recognizing the 'if' statement in a student's program. For example, the following is the code that I expect from the student:

    int d = int.Parse(Console.ReadLine()); // value d is entered by the user
    int s = 0;
    if (d > 0)
        s = 2;
    else if (d == 0)
        s = 1;
    else
        s = 0;

I defined the goal of this expected code as:

    goal :-
        hasVarName(Vid_s, s),
        hasVarName(Vid_d, d),
        hasVarValue(Vid_d, Vd),
        ((not(gt

Prolog planning using retract and assert

Submitted by 心不动则不痛 on 2019-12-18 09:28:50
Question: I wonder, is it possible to do planning in Prolog using a knowledge base modified by retract and assert at runtime? My idea is as follows: assume that I need to replace a flat tire on a car. I can either put something on the ground or move something from the ground to some free place. So I came up with this code:

    at(flat, axle).
    at(spare, trunk).

    free(Where) :- at(_, Where), !, fail.

    remove(What) :-
        at(What, _),
        retract(at(What, _)),
        assert(at(What, ground)).

    put_on(What, Where) :
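For contrast with the retract/assert approach, here is a hedged Python sketch of the same tire problem solved as breadth-first search over immutable states, which sidesteps the pitfalls of mutating a global knowledge base while backtracking. The object and action names come from the question; everything else is an assumption (in particular, the ground is treated as the only place that can hold several objects).

```python
from collections import deque

# A state is a frozenset of (object, place) facts, mirroring at(flat, axle).
PLACES = {"axle", "trunk", "ground"}

def successors(state):
    occupied = {place for (_, place) in state}
    for (obj, place) in state:
        if place != "ground":                        # remove(What): drop to ground
            yield (f"remove({obj})",
                   state - {(obj, place)} | {(obj, "ground")})
        for where in PLACES - occupied - {"ground"}:  # put_on(What, Where): free only
            yield (f"put_on({obj},{where})",
                   state - {(obj, place)} | {(obj, where)})

def plan(init, goal):
    """Breadth-first search for the shortest action sequence reaching goal."""
    queue, seen = deque([(init, [])]), {init}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [action]))
    return None

init = frozenset({("flat", "axle"), ("spare", "trunk")})
print(plan(init, {("spare", "axle")}))
# -> ['remove(flat)', 'put_on(spare,axle)']
```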

How to keep track of planning variable assignments in OptaPlanner

Submitted by 江枫思渺然 on 2019-12-13 04:27:40
Question: Is there a way to access a planning variable's assignment during planning? In my use case, I want to assign a planning variable with a certain status only once during planning; after that, I don't want to use that planning variable again. I know that in OptaPlanner a planning variable/problem fact cannot change, so I cannot change its status. Is there a way to get the list of planning variable assignments during planning, so that in Java code or a Drools file I can avoid re-assignment if it has

Fast Forward and PDDL: is the computed solution the best?

Submitted by 佐手、 on 2019-12-11 21:31:04
Question: How can I be sure that the plan computed by the Fast Forward planner is the best of all possible plans? Does an automatic tool exist to check this? Thanks a lot!

Answer 1: If I remember correctly, FF is not an optimal planner, so you can't be sure the generated plan is optimal. On the other hand, FF is fast at generating "good enough" solutions, in contrast to optimal planners (cpt4, bjolp, etc.), which produce optimal plans but run much more slowly than satisficing planners.

Suggestions required for increasing utilization of YARN containers on our discovery cluster

Submitted by 瘦欲@ on 2019-12-11 16:57:56
Question: Current setup: we have a 10-node discovery cluster. Each node has 24 cores and 264 GB of RAM. Keeping some memory and CPU aside for background processes, we are planning to use 240 GB of memory. Now, when it comes to container setup, since each container may need 1 core, the most we can have is 24 containers, each with 10 GB of memory. Clusters usually have containers with 1-2 GB of memory, but we are restricted by the cores available to us, or maybe I am missing something. Problem
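The sizing arithmetic above can be checked in a few lines (figures taken from the question). Note that YARN vcores are a scheduling abstraction rather than a hard binding to physical cores, so containers per node need not equal the core count; oversubscribing vcores trades CPU contention for more, smaller containers.

```python
# Container-sizing arithmetic: 10 nodes, 24 cores each,
# 240 GB usable memory per node after overhead.
nodes, cores_per_node, usable_mem_gb = 10, 24, 240

containers_per_node = cores_per_node               # 1 vcore per container
mem_per_container = usable_mem_gb / containers_per_node
print(containers_per_node, mem_per_container)      # 24 containers of 10.0 GB

# If vcores are oversubscribed (say 2 containers per physical core),
# memory per container halves and the cluster-wide container count doubles:
total_small = nodes * cores_per_node * 2
print(total_small, usable_mem_gb / (cores_per_node * 2))   # 480 containers, 5.0 GB
```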

Trying to implement potential field navigation in matplotlib

Submitted by ぃ、小莉子 on 2019-12-10 23:30:10
Question: I am trying to produce an algorithm in which multiple agents (blue) work together as a team to capture a slightly faster enemy agent (red) by performing surrounding and circling tactics on a 2D grid. In other words, I am trying to build a robust multi-agent algorithm that would allow the agents to capture an intelligent, faster enemy agent. To give the enemy agent navigation and obstacle-avoidance abilities, I used something known as potential field navigation. Basically, the enemy
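As a sketch of what potential field navigation usually means, here is the classic attractive-goal / repulsive-obstacle formulation in a few lines of Python. All gains, ranges, and coordinates are illustrative assumptions, not the asker's code.

```python
import math

def field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rep_range=2.0):
    """One step of gradient descent on U = attraction to goal + repulsion
    from obstacles; repulsion acts only within rep_range of an obstacle."""
    fx = k_att * (goal[0] - pos[0])                 # attractive force
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < rep_range:                       # repulsion inside range
            mag = k_rep * (1.0 / d - 1.0 / rep_range) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return fx / norm, fy / norm                     # normalized step direction

# Far obstacles exert no force; close ones deflect the agent:
print(field_step((0.0, 0.0), (10.0, 0.0), [(5.0, 0.5)]))   # straight toward goal
print(field_step((0.0, 0.0), (10.0, 0.0), [(1.0, 0.5)]))   # deflected away
```

A known weakness worth planning for: the summed field can have local minima where forces cancel and the agent stalls, which is one reason the enemy agent may behave unintuitively near clusters of pursuers.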

Reinforcement Learning With Variable Actions

Submitted by ≡放荡痞女 on 2019-12-04 19:24:20
Question: All the reinforcement learning algorithms I've read about are usually applied to a single agent with a fixed number of actions. Are there any reinforcement learning algorithms for making a decision while taking into account a variable number of actions? For example, how would you apply an RL algorithm in a computer game where a player controls N soldiers and each soldier has a random number of actions based on its condition? You can't formulate a fixed number of actions for a global decision
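One standard way to handle a variable action set is to let both action selection and the TD target range only over the actions legal in the current state (action masking), often per soldier rather than one global decision. A minimal tabular Q-learning sketch, with a hypothetical legal_actions function standing in for a soldier's condition-dependent options:

```python
import random

def legal_actions(state):
    # Hypothetical: a soldier's options depend on its condition.
    return ["wait", "move"] + (["attack"] if state == "enemy_near" else [])

Q = {}  # (state, action) -> estimated value

def choose(state, eps=0.1):
    """Epsilon-greedy over only the currently legal actions."""
    acts = legal_actions(state)
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Q-learning update; the max in the target is also masked."""
    best_next = max(
        (Q.get((next_state, a), 0.0) for a in legal_actions(next_state)),
        default=0.0,
    )
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

update("enemy_near", "attack", 1.0, "enemy_far")
print(Q[("enemy_near", "attack")])   # 0.5
```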