Illustration from Varieties of Learning by Richard Evans in AI Game Programming Wisdom
Look at that!… Isn’t that beautiful?
This system almost moves me with its clarity and simplicity.
And yet…
Yet I feel that we don’t need the top part of this diagram at all for our purposes. Does it really matter what our character believes or desires when all we see is him throwing a stone at a house? Does it?
Please somebody protect me from myself. And explain to me why our puppets need to have minds of their own. I can’t find any reason. Why go through all the trouble? Please, someone? Anyone?
Posted on May 18, 2006 at 4:00 pm
There are a number of reasons why you would want characters to have a degree of autonomy. Primarily they are connected to:
1. Time
2. Money
Or to be more precise, the lack of those two.
Designers and developers simply don’t have the time or resources available to script character behavior to any level of depth. It is just too complex. This is why characters are currently shallow, flat and dull.
The approach you are taking has its merits and for many situations may be the right solution. However, perhaps contrary to your thinking, it is exactly the complex, subtle kinds of behavior that will set rich characters apart from wooden scripted ones.
In the example above it is not necessary for the viewer to know what the character’s inner systems are doing, but it is vital for the character. Without that system it would be up to the developer to make the character throw the stone at the house. Take that to its logical conclusion: imagine how much work would be involved in designing that behavior; now multiply that by the thousands of behaviors probably required to form a rich character; now multiply that by the hundreds or thousands of characters required to populate a rich world (each with their own personality and unique behaviors, of course), and you can start to see what an impossible task this is becoming. How much time and resources would be needed to build that system? More than any game company has, would be my guess.
I do like your ideas; they are interesting and offer a different perspective from my own. However, if the character is not autonomous, then what you are proposing to build is a framework, not a character, I think.
What does make sense to me, from reading your blog for example, is looking at all of the systems for an autonomous character and focusing on those that have the most dramatic impact.
Posted on May 18, 2006 at 4:36 pm
I’m not suggesting at all that we use scripts instead of A.I. I’m wondering if randomness couldn’t be used instead of A.I.
So the characters would still be autonomous. But their decisions would be made at random rather than through complex A.I. systems. I think we will save 1. Time and 2. Money with this. 🙂
There are of course a few objections against total randomness. Consistency in behaviour is one of them. But I’m wondering if those problems cannot be solved by much simpler means than A.I. I’m really only interested in creating the “symptomatic” behaviour of the character. What the behaviour means is something for the spectator’s interpretation. The character doesn’t need to know why it is doing things. It only needs to know, e.g. in the case of consistency, not to switch between different kinds of activities too quickly.
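For the sake of concreteness, here is a minimal sketch of what such a consistency rule could look like in code (the activity names and the five-second commitment window are invented for illustration): the character still picks its activities at random, but a simple timer keeps it from switching between activities too quickly.

```python
import random
import time

# Hypothetical activity names; a real list would come from the game's content.
ACTIVITIES = ["wander", "sit_down", "look_around", "throw_stone", "wave"]

MIN_COMMITMENT = 5.0  # seconds to stick with an activity before switching


class RandomActor:
    """Chooses activities purely at random, but a commitment timer keeps
    the behaviour from looking twitchy or inconsistent."""

    def __init__(self):
        self.current = random.choice(ACTIVITIES)
        self.started = time.monotonic()

    def update(self):
        """Call once per frame; returns the activity to perform."""
        if time.monotonic() - self.started >= MIN_COMMITMENT:
            # Exclude the current activity so a switch is always visible.
            self.current = random.choice(
                [a for a in ACTIVITIES if a != self.current])
            self.started = time.monotonic()
        return self.current
```

The point being that a single timer, not a model of motivation, may be enough to remove the most obvious symptom of randomness.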
Mind you, this is still an open question. And I deeply appreciate criticism like yours. I realize that the approach that I am evolving towards is rather radical. And there must be a reason why nobody has done this before.
Posted on May 20, 2006 at 7:50 am
If you’re merely making an autonomous digital character that you don’t interact with but just watch, you’re right that randomly choosing the next behavior to do can get you quite far. The audience will fill in the gap for the reasons why the character is doing what it is doing.
But once a human player starts interacting with the character, to feel satisfaction and pleasure the player will want the character to respond in a way that is meaningful, even if it doesn’t speak words. Randomness may suffice for the first few interactions, but after just a few, if there doesn’t seem to be any understanding or intentionality behind the character’s reactions, the player will just shrug and say to herself, “The character isn’t listening, he’s just choosing actions randomly. Boring.”
That said, randomness can still play a role in an NPC, to add variability and lifelikeness to a character’s idling behavior, or variations within reactive behaviors. We had much success using constrained randomness in creating Dogz and Catz, the world’s first virtual pets (the first released pre-Tamagotchi) and arguably the first commercially successful AI-based characters.
http://www.interactivestory.net/papers/PetzAndBabyz.html
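For what it’s worth, a minimal sketch of constrained randomness in that spirit (not the actual Petz code; the idle variations and weights below are invented): a weighted random pick among idle variations that never repeats the one that just played.

```python
import random

# Hypothetical idle variations and weights; the actual Petz behaviours differed.
IDLE_VARIATIONS = {"scratch_ear": 3, "sniff_ground": 3, "stretch": 2, "yawn": 1}


def next_idle(previous=None):
    """Weighted random pick among idle variations, never repeating the
    variation that just played."""
    options = [(name, w) for name, w in IDLE_VARIATIONS.items()
               if name != previous]
    names, weights = zip(*options)
    return random.choices(names, weights=weights, k=1)[0]


# Example: a short idle sequence with no immediate repeats.
current = None
for _ in range(5):
    current = next_idle(current)
    print(current)
```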
Posted on May 20, 2006 at 8:53 am
Dogz and Catz are indeed very inspiring. Thank you for the link to the text.
I agree that purely random behaviour would be unacceptable, especially when interacting with the player. But as you say yourself, only when there “doesn’t seem to be any understanding or intentionality”. To me, this does not mean that these characters need a real mind. The characters themselves don’t need to know why they are doing things (only the player does). All the character needs to do is perform in a way that seems intentional.
I’m guessing that creating a system that can deal with the external appearance of intelligence and intentionality is a lot easier than creating actual artificial intelligence (1). I’ll try and elaborate on this in a future post, and work my way through what rules are required to make the character behave in a way that seems rational and intentional. I have touched upon consistency before, but there are likely a few more elements that need to be designed (a first sketch of one such rule follows below). Any help in making this list would be greatly appreciated.
(1) This may be related to Diderot’s paradox of acting, which I’m about to read.
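One candidate rule for that list might be commitment to a target. A minimal sketch of what I mean (the targets and actions are invented for illustration): the character picks a random object and keeps directing random actions at it for a few steps, which the spectator can read as intention.

```python
import random

# Hypothetical targets and actions, invented for illustration.
TARGETS = ["house", "tree", "well", "fence"]
ACTIONS = ["walk_towards", "look_at", "touch", "throw_stone_at"]


class SeeminglyIntentionalActor:
    """No beliefs or desires: just a randomly chosen target that the
    character keeps acting on for several steps. Because successive
    actions concern the same object, the behaviour reads as intentional."""

    def __init__(self):
        self._pick_target()

    def _pick_target(self):
        self.target = random.choice(TARGETS)
        self.steps_left = random.randint(3, 6)  # how long to stay committed

    def next_action(self):
        if self.steps_left == 0:
            self._pick_target()
        self.steps_left -= 1
        return f"{random.choice(ACTIONS)} {self.target}"
```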
Posted on May 20, 2006 at 5:58 pm
I agree that, essentially, we only need to create the appearance of intelligence and intentionality. You’re right that a “real mind” is not needed for simple characters; but it’s all a matter of degree. Once you start wanting more sophisticated reactions, you start needing more and more sophisticated modelling of behavior, which by no means becomes a “real mind”, but becomes a “simple mind”, or mind-like, or at least a few of its components are mind-like, as in Richard’s diagram above.
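To make that “matter of degree” concrete, here is a toy sketch (the desires, actions, and numbers are invented; this is not Oz or ABL code): a single mind-like component, desires with strengths, drives action selection, while everything around it stays as simple as the random approach.

```python
# A toy "simple mind": desires with strengths are the only mind-like
# component. Desire names, actions, and numbers are invented for illustration.
DESIRE_TO_ACTION = {"rest": "sit", "play": "chase_ball", "explore": "wander"}


class SimpleMind:
    def __init__(self):
        self.desires = {"rest": 0.2, "play": 0.5, "explore": 0.3}

    def act(self):
        """Act on the strongest desire; acting dampens it while the
        neglected desires slowly build up, so behaviour stays varied
        and consistent without ever being random."""
        strongest = max(self.desires, key=self.desires.get)
        for d in self.desires:
            self.desires[d] = min(1.0, self.desires[d] + 0.1)
        self.desires[strongest] = max(0.0, self.desires[strongest] - 0.4)
        return DESIRE_TO_ACTION[strongest]
```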
I highly recommend you read the Oz project papers from the mid-1990s; they specifically address this and offer solutions (and the rest of the “must-reads” at the bottom of interactivestory.net, for that matter 😉). Our hopefully upcoming release of ABL, a descendant of the Oz project work, can be used to apply/implement the Oz project’s solutions.
Posted on May 20, 2006 at 6:37 pm
It’s true that the advantage of simplicity might get lost very quickly as the requirements for complexity in the behaviours increase. But even so, I still think it is more appropriate to approach the problem “from the outside” and try to model the behaviour that we want to see, rather than immediately stepping inside, building a system in man’s image and hoping something like human behaviour comes out. I could of course be overestimating the value of terms like “desire” and “opinions” when it comes to AI. Perhaps it only sounds like they’re trying to make a human.
I will look into the Oz project papers. I’ve encountered several other references to this project already. And your must-reads look very interesting. Guess I never scrolled down that far… 😕
Posted on June 7, 2006 at 4:05 pm
Regarding Diderot: after having read his essay, I must add that he did not think that the mindless puppet method was easier. He thought it was better, but only suitable for genius actors. For less talented people, he thought that actually feeling the emotions would probably help them give a better performance, if only once in a while.
So will Drama Princess be a genius? Or a less talented actor? I guess the latter would be a safer bet. 😕