PTWing this!
Posting to watch. This is awesome!
Thanks guys!
My recent push has been on implementing my own take on Goal-Oriented Action Planning for AI. It's a cool idea where the AI's view of the world is broken into possible world states and behaviors that can change those states. Each behavior has its own set of world states that must be true before it can execute, so you end up chaining together a bunch of behaviors in order to accomplish a goal.
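To make that concrete, here's a minimal sketch of how a behavior might be represented as a precondition/effect pair. The state names are simplified stand-ins for illustration, not the actual game's data:

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    name: str
    preconditions: frozenset  # world states that must be true to execute
    effects: frozenset        # world states that become true afterwards

# "Construct Building" from the example below, with invented state names
construct_building = Behavior(
    name="Construct Building",
    preconditions=frozenset({"have_materials_at_location"}),
    effects=frozenset({"shelter_constructed"}),
)
```

The planner never looks inside a behavior; it only matches effects against unsatisfied states and preconditions against new subgoals, which is what makes the chaining generic.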
For example, my latest test has been to give some people outside civilization the goal of "Shelter has been constructed". To figure out how to achieve the goal, they look at which behaviors could construct a shelter. That behavior is "Construct Building", which turns construction materials into a building. In order to construct a building, though, the world state "Have construction materials at location" must be true, and if it isn't, the behavior "Bring materials to location" is needed first.
This goes on until they find a world state which is already true. Once that happens, they can aggregate and plan their behaviors. The plan I got working recently looks something like this:
- Move to a place with wood
- Gather wood
- Move to a place with iron
- Gather iron
- Forge iron tools (using the same reaction definition present in the economy simulation!)
- Move to a place with stone
- Gather stone
- Use tools to transform stone into stone construction materials
- Move to location where building will be constructed
- Use tools to transform construction materials into building
The great part about this is that if the goal of "Shelter has been constructed" fires but the AI already has tools, stone, or construction materials, it simply skips those behaviors, without me having to explicitly add a ton of if/else checks for multiple levels of world states.
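The chaining described above, including the automatic skipping of already-satisfied states, can be sketched as a toy backward-chaining planner. Everything here (behavior names, state names, the flat dict representation) is an illustrative simplification, not the actual implementation:

```python
def plan(goal, current_state, behaviors):
    """Return a list of behavior names achieving `goal`, or None if unreachable."""
    steps = []
    pending = [goal]
    while pending:
        state = pending.pop()
        if state in current_state:
            continue  # state is already true - no behavior needed
        # find a behavior whose effects satisfy this state
        producer = next((b for b in behaviors if state in b["effects"]), None)
        if producer is None:
            return None  # no behavior can make this state true
        steps.append(producer["name"])
        pending.extend(producer["preconditions"])  # chain its preconditions
    steps.reverse()  # preconditions first, goal-achieving behavior last
    return steps

behaviors = [
    {"name": "Move to wood", "preconditions": [], "effects": ["at_wood"]},
    {"name": "Gather wood", "preconditions": ["at_wood"], "effects": ["have_wood"]},
    {"name": "Construct Building", "preconditions": ["have_wood"],
     "effects": ["shelter_constructed"]},
]

print(plan("shelter_constructed", set(), behaviors))
# If the wood is already held, the earlier steps drop out of the plan:
print(plan("shelter_constructed", {"have_wood"}, behaviors))
```

With an empty starting state the full chain comes out; seed the state with `have_wood` and the gathering steps disappear, with no explicit if/else checks anywhere.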
It's also capable of making choices when there is more than one behavior available to achieve a state, but this is going to be a hairy thing to debug correctly. For example, if an AI wants a particular object, it might have the option of buying it, stealing it, or stealing money in order to buy it with. You can imagine that having these kinds of options available makes it difficult to predict and tune AI behavior to get it just right.
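One common way to resolve this kind of choice (and one way the tuning problem might be tamed) is to attach a cost to each behavior and pick the cheapest provider of a state. The behaviors and cost values below are invented for illustration:

```python
def cheapest_provider(state, behaviors):
    """Among behaviors whose effects include `state`, return the lowest-cost one."""
    candidates = [b for b in behaviors if state in b["effects"]]
    return min(candidates, key=lambda b: b["cost"]) if candidates else None

# Two ways to satisfy "have_object"; risk of stealing is folded into its cost
acquire_object = [
    {"name": "Buy it", "effects": ["have_object"], "cost": 5},
    {"name": "Steal it", "effects": ["have_object"], "cost": 20},
]

print(cheapest_provider("have_object", acquire_object)["name"])  # → Buy it
```

Tuning then becomes a matter of adjusting cost numbers rather than rewriting branching logic, though predicting the emergent plans is still hard.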
At any rate, the behavior list I described above is more or less implemented already. It was really cool to see guys moving around the map with some kind of purpose. Here's what a debug-speak outsider told me when I ran into him:
Of course the AI speech will need to be improved, but there's also a lot more work needed on the goal algorithm itself to make sure it can handle the right types of behaviors. It turns out it's not easy to track world states for things that don't exist yet. In this example, a building that doesn't exist yet, made out of construction materials which may not be known, in a location that isn't known yet, has to be passed around to multiple goal states and whatnot.
But I really hope I can smooth out some of those issues soon and make the algorithm more solid. Hopefully that can lead to all sorts of complex behaviors and people moving around in the world. I really want to get to a state where I can go out into the world and encounter other travelers, and have interesting conversations with them about why they're traveling and what they've seen.