Emergent Behavior

Here’s an entertaining failure case in my late-game pathfinding AI. As I’ve mentioned, after a certain point, the AI switches from a straight minimax strategy (carefully considering its moves vs. possible counter-moves) to a pathfinder based on Dijkstra’s algorithm, with limited minimax elements. Which is to say, it tries to find ways to rearrange its own pieces to create clear paths to park its highest-priority pieces.
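For the curious, the late-game planner is, in spirit, just a weighted graph search over board squares. Here’s a minimal sketch in Python of that kind of parking-path search; the board interface (`board.legal_moves`) and the square representation are stand-ins I’ve invented for illustration, not the game’s actual code.

```python
import heapq
from itertools import count

def find_parking_path(board, piece, goal, blocked=frozenset()):
    """Dijkstra-style search: cheapest sequence of squares that brings
    `piece` from its current square to its parking square `goal`.
    `board.legal_moves(piece, square)` is a hypothetical stand-in for
    the real move generator."""
    tie = count()  # tiebreaker so heapq never has to compare squares directly
    frontier = [(0, next(tie), piece.square, ())]  # (cost so far, tiebreak, square, path)
    seen = set()
    while frontier:
        cost, _, square, path = heapq.heappop(frontier)
        if square == goal:
            return path                      # clear route home
        if square in seen:
            continue
        seen.add(square)
        for nxt in board.legal_moves(piece, square):
            if nxt in blocked:               # squares we refuse to route through
                continue
            heapq.heappush(frontier, (cost + 1, next(tie), nxt, path + (nxt,)))
    return None                              # no clear path; fall back to other tactics
```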

This strategy is certainly not foolproof, but usually when it goes awry it’s because the old path to the parking space has become blocked. The system can detect that and rethink its approach. This is not one of those cases.
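That blocked-path detection amounts to re-validating the cached route before each move, roughly like this sketch (again with a made-up `board.is_empty` standing in for the real occupancy check). If the check fails, the planner just searches again from the piece’s current square.

```python
def plan_is_still_clear(board, plan, blocked=frozenset()):
    """Re-check a previously computed route; if any square on it is now
    occupied or otherwise off-limits, discard the plan and search again."""
    return all(board.is_empty(sq) and sq not in blocked for sq in plan)
```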

Check out the 1 of Suns in this board position. It’s got a good clear path home; it just needs to drive a little bit around the opposing 1 of Stars. Yeah, oops. Because as soon as it tries to do that, it’s forced to jump right back to where it started. The AI, as of version 1.0.1, is content to stay on that ride indefinitely. It’ll just zoom around in circles until something breaks the trap.

Now, this isn’t as likely to happen as you might think. The position of both the Green 1 of Moons and the Red 2 of Stars is important, because otherwise it’ll be the Red 1 of Stars that is forced to jump (when the moving Green piece pulls in front of it). Also, there are simple things that Green could do to keep this from happening, if only it could think of them.

So, that was part of yesterday’s work on the project. You don’t even have to make Green understand the consequences of the forced jump; you just have to tell it that the space to the lower left of the 1 of Stars should be considered blocked (there’s a rough sketch of that tweak after the list below). Now, what the AI will do in this case is:

  • Start moving with the 1 of Suns as it did before
  • Pull the 2 of Moons down into the space it just vacated. Hey, look: no more forced jump!
  • Continue parking the 1 of Suns.
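In code terms the fix is tiny. Building on the search sketch above, it comes down to adding the forced-jump landing squares to the `blocked` set before planning; `forced_jump_landing_squares` here is a hypothetical helper standing in for whatever the real rules engine exposes.

```python
def plan_with_forced_jump_guard(board, piece, goal):
    """Same Dijkstra search as before, but squares where an opposing piece
    could bounce `piece` straight back (a forced jump) are treated as
    blocked.  The pathfinder then either routes around the trap or, as it
    turned out here, finds a move that back-fills the landing square first."""
    hazards = forced_jump_landing_squares(board, piece)   # hypothetical helper
    return find_parking_path(board, piece, goal, blocked=frozenset(hazards))
```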

It’s interesting to note that I didn’t even have that solution to the problem in mind when I did the code fix. I thought maybe it’d use its 1 of Moons to force a jump on the Red 1 of Stars. But, of course, its actual solution is considerably better. This strategy of defusing a forced jump by back-filling its landing square is therefore an example of emergent behavior. So was the original bug of moving around and around in circles.

You get this stuff a lot in AI. In fact, on some level it’s what makes a system look more “intelligent” than simply well-regulated, at least when the emergent behavior is successful. It’s said that asking whether a computer can think is like asking whether a submarine can swim. If it’s not thinking when it produces a solution via emergence, it’s certainly at least doing whatever it is a submarine does.
