When reading articles about AI for games, even when the authors say that they’re only interested in the appearance of intelligence rather than its simulation, they almost always fall for the seduction of realism. I think this may be related to the fact that almost all of these authors are programmers. Programmers are problem-solvers. And in their eagerness to solve a problem, they might overlook the most important aspect of the process: defining the problem. This is why, I think, they often end up trying to reverse-engineer human behaviour rather than solving the real problem.
This real problem is, as they know, how to create the appearance of intelligence. This doesn’t mean you should be making “fake AI”, because then you end up creating the appearance of artificial intelligence, not the appearance of intelligence itself.
Appearance means nothing if appearance doesn’t happen to somebody. This somebody is the player. So appearance is not objective. It is not a weaker version of reality. It is reality as it exists in the head of the viewer.
Now, if everybody’s vision of reality were totally unique, we wouldn’t even seem to live on the same planet. Luckily for us, people tend to agree on a lot of aspects of reality. A large chunk of one’s own personal construction of reality is very similar to that of many other people. But we should never forget that reality, as far as AI is concerned, is and remains a picture in the viewer’s head and not something that lives outside of him, objectively.
We programmers often solve the problem of imitation through recreation. But not only is recreation unnecessary in most cases, it also always underperforms compared to the original. When creating a fiction, we want quite the opposite: to generate a larger-than-life emotional effect while using far less material. Yes, this is the process commonly referred to as art.
The example of the figurative painter springs to mind. While he probably needs to know a whole lot about human anatomy, he usually does not need to paint all of it to create the semblance of a human figure. Translated to AI programming in games, this means that we need to know about human behaviour, but we don’t need to recreate it to make a character credible.
So you might think AI programmers need to study psychology now. I don’t think so. In fact, the risk would be that they would then start to recreate psychological models in code, which doesn’t get us any further. By analogy: a biologist might know a lot more about anatomy than a painter, but even if he developed the skill, chances are he could not paint as beautiful a picture.
The key is internalized knowledge, acquired knowledge. The painter does not have to think about all his anatomy lessons. His hand knows them. He can instinctively draw the correct shape.
Now for a final and extreme example, think about love. Scientists are trying to tell us that love, or any kind of emotion for that matter, is just a question of chemical reactions in your nervous system. Does this knowledge help us to paint a picture of love? Love has been depicted very effectively hundreds of thousands of times, in many different media throughout history, without anyone knowing anything about these chemical reactions. And still, I’m sure, programmers would be inclined to replicate the chemical reactions in code if they wanted to talk about love! 🙄
How do the poets talk about love? They talk about the symptoms, not the disease. That way the reader can recognize the symptoms and recreate the feeling of love inside of himself. That’s where we need to get with our AI: to let the player do all the hard work!
In my thoughts about Drama Princess I’m seeing the autonomous characters more and more as empty shells. They don’t have an artificial brain or a mood or any replication of what constitutes a human. Except appearance.
Their behaviour is not defined by how they feel or what they want but by the environment and the circumstances. I don’t want to model human emotions. I want to model relationships, the things that exist between the characters rather than within them. To only create the appearance, the symptoms. To some extent, it doesn’t even matter which emotion is triggered. We can let the user’s nervous system take care of that. I’m sure it can come up with a pleasant emotion. If you want to interpret the scene as an amorous one, please do; if you see it as a hostile situation, that’s fine as well. Let the user create their own story.
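To make this a little more concrete, here is a minimal sketch in Python. None of it is actual Drama Princess code; the names (Character, Relationship, choose_action) and the action lists are hypothetical, invented purely for illustration. The point is structural: the characters carry no mood or goal, behaviour is selected from the relationship that exists between them and from the scene they find themselves in, and the actions are ambiguous enough that the player supplies the emotion.

```python
import random

class Character:
    """An empty shell: a name, an appearance, nothing inside."""
    def __init__(self, name):
        self.name = name

class Relationship:
    """Lives between two characters, not within either of them."""
    def __init__(self, closeness=0.5):
        self.closeness = closeness  # 0 = strangers, 1 = intimate

# Deliberately ambiguous actions: each could read as amorous or hostile.
CLOSE_ACTIONS = ["approaches", "touches", "stares at", "follows"]
DISTANT_ACTIONS = ["ignores", "glances at", "turns away from", "circles"]

def choose_action(actor, other, relationship, scene):
    """Behaviour comes from the relationship and the circumstances,
    never from an inner emotional state."""
    pool = CLOSE_ACTIONS if relationship.closeness > 0.5 else DISTANT_ACTIONS
    # The scene merely colours the act; the emotion is left to the viewer.
    return f"In the {scene}, {actor.name} {random.choice(pool)} {other.name}."

ann, ben = Character("Ann"), Character("Ben")
print(choose_action(ann, ben, Relationship(closeness=0.7), "moonlit garden"))
```

Whether “Ann stares at Ben” in a moonlit garden is love or menace appears nowhere in the code; it happens entirely in the player’s head.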
The thing that we should control, as authors, is the environment, the circumstances. And those are a lot easier to model than human behaviour. But more about that in a later post.
Posted on May 13, 2006 at 11:50 pm
[…] While Richard Evans’ work will probably always allow his characters to have more individuality than we need them to have for our purposes, the idea expressed above connects very nicely to our ideas of Usage instructions, AI from the outside and Modeling relationships. We have developed a preference for designing the things between the characters rather than what happens inside of them. The viewer can only perceive the outside anyway, the things that are expressed. Social activities are a prime example of how individual autonomous agents can be directed as a group. They don’t need individual minds to make decisions. Mr. Evans would not go this far because he wants his simulations to really work, while we only care about the illusion that takes place in the player’s mind. […]