Goal-oriented action planner for AI in Haxe.
- Optimized for agent states that contain up to 32 boolean flags.
- Supports actions that can both set and clear flags in the state.
- Haxe type safety with agent-specific state and action definitions.
- Dijkstra's shortest-path algorithm is used to calculate the action plan with the smallest cost.
Installation
The library is available on Haxelib.
haxelib install goap
Usage
It is necessary to define two things: states and actions. The design of this library follows the pattern that each agent has its own set of state flags and a set of actions that apply only to that particular agent.
States
In the context of this library, the world-state for an agent is a single 32-bit integer. It is implemented as a bitmask which can contain up to 32 individual bit flags.
Therefore, the user should implement the state variables of their agents as an enum abstract that has an implicit to Int conversion.
It is recommended to use the StateBuilder.build() macro, which will automatically turn the enum values into bit flags.
In the following example, HasAxe will be set to 1, HasWood to 2, NotTired to 4, and so on.
@:build(goap.macro.StateBuilder.build())
enum abstract WoodcutterState(Int) to Int
{
    var HasAxe;
    var HasWood;
    var NotTired;
}
The macro builder will also automatically generate bitwise | and & operator overloads for the state abstract.
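For illustration, assuming the generated overloads behave like plain bitwise operations on the underlying Int, flags can be combined and tested like this (a sketch, not an example taken from the library itself):

var state: WoodcutterState = HasAxe | HasWood; // combine two flags into a single state value
var hasAxe: Bool = (state & HasAxe) != 0;      // a flag is set if masking it leaves a non-zero value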
Actions
All of the woodcutter's actions in this case need to extend Action<WoodcutterState>.
Each action has five fields which are considered by the planner:
- preconditions: Bitmask of the state flags that must be set for the action to be possible. Defaults to 0.
- effectSet: Bitmask of the state flags that are set to 1 when this action is performed. Defaults to 0.
- effectClear: Bitmask of the state flags that are set to 0 when this action is performed. Defaults to 0.
- cost: The cost of performing this action. Defaults to 1.0.
- enabled: Bool indicating whether the action is currently possible or not. Defaults to true, and can be set to false to disable a specific action.
With that in mind, take a look at the following example.
Note how, to get wood, the woodcutter can either chop a tree or collect branches.
It is assumed that in this context picking up branches yields less wood than chopping a tree, so the GatherBranches action is given a higher cost, which makes it less preferable to the planner.
import goap.Action;

class ChopTree extends Action<WoodcutterState>
{
    public function new()
    {
        // needs to both have an axe, and not be tired
        preconditions = NotTired | HasAxe;
        effectSet = HasWood;
        effectClear = NotTired;
    }
}

class GatherBranches extends Action<WoodcutterState>
{
    public function new()
    {
        // no axe needed, but yields less wood, hence the higher cost
        preconditions = NotTired;
        effectSet = HasWood;
        effectClear = NotTired;
        cost = 8;
    }
}

class GetAxe extends Action<WoodcutterState>
{
    public function new()
    {
        // fetching the axe also tires the woodcutter out
        effectSet = HasAxe;
        effectClear = NotTired;
    }
}

class Nap extends Action<WoodcutterState>
{
    public function new()
    {
        effectSet = NotTired;
    }
}
Note that an action that is configured to both set and clear the same flag will actually clear it.
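One way to picture this: if an action's effects are applied as "set, then clear", any flag present in both masks ends up cleared. The following is only an illustrative sketch of that bit arithmetic, not the library's actual code:

// illustrative sketch: applying an action's effects to a state as plain bit operations
function applyEffects(state: Int, effectSet: Int, effectClear: Int): Int
{
    // set bits first, then clear bits; a flag present in both masks is therefore cleared
    return (state | effectSet) & ~effectClear;
}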
Planning
With the above implemented, the action planner for this woodcutter would look as follows.
import goap.ActionPlanner;

var planner: ActionPlanner<WoodcutterState> = new ActionPlanner();
planner.setActions([
    new ChopTree(),
    new GatherBranches(),
    new GetAxe(),
    new Nap()
]);
Choosing a goal for the woodcutter is then up to the user.
Suppose the HasWood goal is chosen.
var initialState: WoodcutterState = 0;
var goalState: WoodcutterState = HasWood;
var actionPlan: ActionPlan<WoodcutterState> = planner.getPlan(initialState, goalState);

if (actionPlan.result == Solved)
{
    actionPlan.plan; // [ GetAxe, Nap, ChopTree ]
}
Note that this planning is instantaneous, i.e. it assumes that there will be no external influences on the agent's state during the plan's execution. Naturally, if anything changes, the plan needs to be calculated again: for example, if the agent is assigned another goal, if action costs change, or if new actions become available.
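Re-planning is just another call to the planner. In the following sketch, currentState and newGoal stand in for whatever the agent's updated state and goal happen to be at that moment:

// recompute the plan from the agent's current state towards a (possibly new) goal
var newPlan: ActionPlan<WoodcutterState> = planner.getPlan(currentState, newGoal);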
The goal state
Note that the goalState should have set to 1 exactly those bit flags which are required to be 1 in the state for the goal to be considered reached.
The remaining flags, which are 0 in the goalState, may be either 0 or 1 when the goal is actually reached.
Therefore, in the example above, if the goal is HasWood, the planner will return an action plan whose final state is any state where the HasWood flag is set to 1.
Currently it is not possible to specify that some bits in the goalState must be 0.
Instead, negative-logic flags should be used.
For example, it is not possible to define a goal of HasWood and not IsDrunk, but it is possible to define a goal of HasWood and NotDrunk.
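As an illustration of the negative-logic approach, a NotDrunk flag (hypothetical, not part of the earlier example) could be declared alongside the other flags and combined into the goal:

@:build(goap.macro.StateBuilder.build())
enum abstract WoodcutterState(Int) to Int
{
    var HasAxe;
    var HasWood;
    var NotTired;
    var NotDrunk; // negative-logic flag: set while the woodcutter is sober
}

// the goal now requires both flags to end up set to 1
var goalState: WoodcutterState = HasWood | NotDrunk;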