Modeling relationships

Posted by Michael on May 6th, 2006, in Development

I’m quite happy with the direction that we’re heading in: trying to create a dramatic spectacle that only makes sense in the viewer’s head. The characters are hollow shells. We’re only programming systems that take care of the appearance, without caring about what meaning comes out of this.

Characters don’t know anything about an object or another character until they encounter it. Then the other offers them instructions on how to use it. Which actions they pick from that list depends on consistency: they will first pick an action that fits in the same category as an action they did before, unless their attention span has reached its low point, at which point they are allowed to switch categories. The attention span can be influenced by external, objective circumstances. E.g. a shock might cause the character to switch behaviour. So the mood of a scene is objective. The only thing we still need is a way to allow the character to respond to an objective mood in a subjective way.

Next up is the modeling of relationships. Rather than giving each character its own properties and preferences, we want to design what happens between them. Or better: what appears to happen. So again an objective given. Relationships grow and shrink. So when the character switches behaviour, the new behaviour cannot be random but should be consistent with what happened before. E.g. in normal circumstances, fearful behaviour should not follow romantic behaviour. So behaviours concerning relationships should follow each other in a logical (objective!) succession. We can probably simplify this by defining a relationship with a simple value: 0 is no relationship, 1 is deeply in love. How this behaviour is expressed depends on the instructions that a character carries around. A child, e.g., does not carry any instructions about having sexual intercourse with it, making this impossible even if another character is supposedly “deeply in love” with it.
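
To make this concrete, here is a minimal sketch of such a relationship value together with a ladder of behaviour categories. Everything here (the names, the numbers, the ladder itself) is a hypothetical illustration of the idea, not actual Drama Princess code:

```python
# Behaviour categories in a "logical succession", from no relationship
# (0) to deeply in love (1). The names are invented.
BEHAVIOUR_LADDER = ["ignore", "polite", "friendly", "affectionate", "romantic"]

def allowed_categories(relationship):
    """Map an objective relationship value (0..1) to the behaviour
    categories consistent with it: the current level plus its direct
    neighbours, so fearful behaviour never follows romantic behaviour."""
    index = round(relationship * (len(BEHAVIOUR_LADDER) - 1))
    lo, hi = max(0, index - 1), min(len(BEHAVIOUR_LADDER) - 1, index + 1)
    return BEHAVIOUR_LADDER[lo:hi + 1]

class Character:
    def __init__(self, name, instructions):
        self.name = name
        # The "usage instructions" a character carries around: the only
        # actions other characters can perform with it. A child simply
        # does not list romantic actions, so they are impossible.
        self.instructions = set(instructions)

child = Character("child", {"hug", "play", "protect"})

print(allowed_categories(1.0))        # ['affectionate', 'romantic']
print("kiss" in child.instructions)   # False, however "in love" the other is
```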

But what about the other way around? We usually wouldn’t want the child to have sexual intercourse with its mother. The easy solution would be to limit a child’s capability to love. A child can only love up to 0.6 e.g. But this feels unsatisfactory because this is adding an individual property and also because there might be other exceptions like this that may not be as easy to solve. How do we allow for different kinds of love? (romantic love, parent-child love, best friends love, respect/admiration love; note that some of these are not symmetrical)

Perhaps it is as simple as having categories of characters and only allowing characters to enter a romantic relationship with characters in the same category. These categories are not hard-coded but defined by the author, as they depend on the story that you are trying to tell.

One of our Golden Rules is to always design for the situation that will occur most frequently and not waste too much time on exceptions. So we’ll see.

This brings up an important issue. Reciprocity. Most of the time, relationships between characters will be objective. Character A feels the same for character B as character B feels for character A. But exceptions could be interesting dramatically (only if they are exceptions, though). So there should always be a certain randomness that allows for a character to decrease the relationship even though, normally, it should increase.

When characters are away from each other, their relationship might automatically decrease. Perhaps time spent in proximity of each other is sufficient to increase the relationship.

When the relationship is reciprocal, it can be considered objective and as such expressed in a value that belongs to the world, and not to an individual character. To make the narrative more interesting, a character should be able to unilaterally decide to increase or decrease the relationship. Or at least seem to (!). And then the other character should respond to that by either agreeing or disagreeing with the course taken. Is there a way to define this on the objective level as well?
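
A rough sketch of what such a world-owned relationship value could look like, with proximity building the bond, absence eroding it, and the rare unilateral exception thrown in. All rates and thresholds are invented for illustration:

```python
import random

class World:
    """Relationships live in the world, not in individual characters:
    one objective value per (unordered) pair of characters."""

    def __init__(self):
        self.relationships = {}

    def update(self, a, b, distance, dt):
        key = tuple(sorted((a, b)))
        value = self.relationships.get(key, 0.0)
        if distance < 5.0:
            value += 0.01 * dt    # time spent in proximity builds the bond
        else:
            value -= 0.002 * dt   # being apart makes it decay
        # The rare exception: a character seems to unilaterally sour the
        # relationship even though, objectively, it should grow.
        if random.random() < 0.001:
            value -= 0.2
        self.relationships[key] = max(0.0, min(1.0, value))

world = World()
world.update("Red", "Wolf", distance=2.0, dt=1.0)
```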

Black & White

Posted by Michael on May 5th, 2006, in Games

Black & White 1

I love Black & White. I can’t help it. It’s so sad to see what Lionhead did with the sequel (Black & White 2: Warrior Within ;)). But you can read my rant about that here.

In Black & White you play a god and you need to convince people in small villages to believe in you. When they do, their prayers serve as fuel for your divine powers. Some of these powers are destructive and some benevolent. You can choose to play an evil god or a good god. Hence the game’s title.

But the main feature of Black & White, as far as I’m concerned, is a sort of pet that you have as a companion, referred to as a Creature. In the beginning of the game, you choose an anthropomorphic ape, cow or tiger, whom you then have to train. You can teach the creature to do just about anything you can do.

The design really shines in the credibility of this creature. Just watching it go about its business is a joy in and of itself. With a little bit of training, you can give it a certain personality and make sure it takes care of itself. And then it can play the game on its own. Either as your helper or just as a fun companion.

Sometimes he gets a bit lazy or forgetful, and then you have to remind him of what he should or should not do. It’s a kind of parenting that feels very natural. The creature gets thin when he eats too little and fat when too much. He gets stronger when he carries heavy objects and he grows bigger and bigger over the course of the game. And of course when he gets evil, he looks evil too. And good ditto. He can also develop simple relationships with other creatures in the game. They can become fighting enemies but also best friends (my favourite). Then they dance together and even kiss. And when they are separated, sometimes a text appears, saying “Your creature misses his friend.” Aaah. :) It brings a tear to my eye.
He tries to imitate you. And when he fails, he is sad. He tells stories to the villagers and dances with them. Or tries to impress them with tricks. He can throw things and catch them as a form of play. He points at things that interest him. He expresses hunger and then tries to find some food. He develops preferences for certain types of food. When he gets tired, he finds a good spot and goes to sleep. He sits down and rests when he doesn’t know what to do. When he’s walking towards a target, he regularly looks in its direction. He shows interest in something (you do) and then walks towards it to look at it more closely. When there’s something he doesn’t know, he looks at you for advice (pathfinding e.g.). He is continuously paying attention to what you do, which strengthens your bond with him and increases the sympathy.

The AI in Black & White is a unique creation and I wish I knew what its designer, Richard Evans, is doing at this moment. He has left the sinking ship Lionhead and hasn’t been heard of since. I hope his brilliant mind can resurface at some point. If anyone knows where he is, please let me know!

The rest of the game supports this brilliance too. The villagers also have remarkable AI, the game’s interface is a joy to use in its complex sophistication, the graphics are a pleasure to watch and the soundscape is as soothing as anything. A rare product of excellence that never fails to entertain. I keep replaying this thing. It never grows old. I don’t care if there’s a sequel. I’m sticking to this one.

Game Story & Character Development (Marianne Krawczyk & Jeannie Novak)

Posted by Michael on May 5th, 2006, in Books

This book starts with a very optimistic description of the current use of story and characters in games, pretending that narrative in games is on the same level as in popular cinema or even Shakespeare. It goes on to map the existing formulas that make Hollywood films so “exciting” (read: boring and predictable) onto games, pretending that running around shooting monsters is a viable way of telling stories. If you are happy with the way current games are made and you need a formula for making one exactly like that, this is a book for you. At least if you can stand the “trendy” cut-everything-into-bitesize-chunks layout.

Later in the book, the writers come to their senses a bit and allow some doubt to trickle through. Here and there you find suggestions that perhaps there is still a lot of work to do for games to become a mature medium. But they are very careful to leave all of these questions very open, as they hide behind a few quotes from game celebrities. Perhaps the sequel will be good.

Consistency for consistency’s sake

Posted by Michael on May 5th, 2006, in Development

Consistency is the one reason why randomness is not good enough when it comes to making decisions. The spectator needs to be able to construct an imaginary story in his head. Radical shifts in emotions expressed don’t seem logical most of the time. But sometimes they do. And some characters would be more prone to having mood swings (children and madmen e.g.).

Since we don’t really care how our characters “feel”, as long as they display believable behaviour, we could arrange all possible actions in categories. A character would then pick a random action from the category it has picked from most often before. Every time it picks an action from the same category, its attention span decreases a bit. For some characters this “bit” is larger than for others, i.e. some characters have a short attention span.

Shocking events could reset the attention span of all characters present to force them to change their behaviours.

This solves the problem of consistency when characters are given complete freedom (randomness).

Sometimes, however, we will want the characters to behave in a certain way, i.e. to choose actions from certain categories only. Like the “shocking events”, this is also something that could be enforced from above. Certain areas or situations would be defined as having a certain mood. As long as this mood lasts, only certain categories of actions are appropriate (sketched in code below).
This leaves one problem: depending on their personality, characters would respond differently to the same mood. If the mood is a threatening, violent one, the Wolf would be very comfortable and dominant while Red Ridinghood might be afraid and nervous.
How do characters respond to an objective mood in a subjective way?
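
For concreteness, here is one way the objective pieces so far (action categories, attention spans, shocks, moods) could fit together. The subjective response to the mood remains open, as above. This is a thought experiment in code, not a design document; all category names and rates are invented:

```python
import random

ACTIONS = {
    "calm":    ["sit", "stroll", "hum"],
    "playful": ["skip", "throw_ball", "dance"],
    "violent": ["growl", "chase", "snap"],
}

class Actor:
    def __init__(self, attention_decay):
        self.category = random.choice(list(ACTIONS))
        self.attention = 1.0
        self.attention_decay = attention_decay  # larger = shorter span

    def pick_action(self, mood=None):
        # The mood of the area or situation, if any, narrows the choice
        # down to certain categories only.
        allowed = mood if mood else list(ACTIONS)
        if self.attention <= 0.0 or self.category not in allowed:
            self.category = random.choice(allowed)  # allowed to switch
            self.attention = 1.0
        self.attention -= self.attention_decay
        return random.choice(ACTIONS[self.category])

    def shock(self):
        # A shocking event exhausts the attention span, forcing a
        # behaviour change on the next action.
        self.attention = 0.0

wolf = Actor(attention_decay=0.2)  # a short attention span
print(wolf.pick_action(mood=["violent", "calm"]))
```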

Usage instructions

Posted by Michael on May 5th, 2006, in Development

A character expresses itself through actions. Especially in our games, where there are no words. It also interacts with the world through actions. Equipping the character with a large list of all possible actions would not only be a daunting task design-wise, it may also be excessive in terms of memory use.

What if all objects in the game world come with a list of actions that you can perform with them? A character could be a perfectly blank slate until it sees a ball and it learns how to kick it. To the viewer, of course, it seems like the character knew all along. Other characters would be the same. When the girl meets a boy, she learns how to kiss. And vice versa. So characters walk around with instructions printed on them for other characters on how to use them. 🙂

One problem might be that not all instructions apply to all characters. A ball might say “Kick me!” or “Pick me up!” A child that passes by would kick it. A school teacher would pick it up. Unless the child is depressed or the school teacher thinks nobody’s watching.
How do characters choose actions that match their personality and the situation?
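
As a sketch (leaving the depressed child and the unwatched school teacher aside), the ball’s instructions and a simple personality filter might look like this. The tags and structure are my invention, purely for illustration:

```python
class Prop:
    def __init__(self, name, instructions):
        # Each instruction pairs an action with the personality tags
        # it appeals to.
        self.name = name
        self.instructions = instructions

ball = Prop("ball", [("kick", {"child", "playful"}),
                     ("pick_up", {"adult", "tidy"})])

def choose_action(prop, personality):
    # Pick the instruction that best matches the character's
    # personality tags; ties go to the first one listed.
    action, _tags = max(prop.instructions,
                        key=lambda inst: len(inst[1] & personality))
    return action

print(choose_action(ball, {"child", "playful"}))  # -> kick
print(choose_action(ball, {"adult", "tidy"}))     # -> pick_up
```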

AI from the outside

Posted by Michael on May 5th, 2006, in Development

When reading articles about AI for games, even when the authors say that they’re only interested in the appearance of intelligence rather than its simulation, they almost always fall for the seduction of realism. I think this may be related to the fact that almost all of these authors are programmers. Programmers are problem-solvers. And in their eagerness to solve a problem, they might overlook the most important aspect of the process: defining the problem. This is why, I think, they often end up trying to reverse-engineer human behaviour rather than solving the real problem.

This real problem is, as they know, how to create the appearance of intelligence. This doesn’t necessarily mean you should be making “fake AI”. Because then you end up creating the appearance of artificial intelligence.
Appearance means nothing if appearance doesn’t happen to somebody. This somebody is the player. So appearance is not objective. It is not a weaker version of reality. It is reality as it exists in the head of the viewer.

Now, if everybody’s vision of reality were totally unique, we wouldn’t even seem to live on the same planet. Luckily for us, people tend to agree on a lot of aspects of reality. A large chunk of their own personal construction of reality is very similar to that of many other people. But we should never forget that reality, as far as AI is concerned, is and remains a picture in the viewer’s head and not something that lives outside of him, objectively.

We programmers often solve the problem of imitation by recreation. But not only is recreation not required in most cases, it also always underperforms vis-à-vis the original. When creating a fiction, we want quite the opposite: we want to generate a larger-than-life emotional effect while using far less material. Yes, this is the process that is commonly referred to as art.

The example of the figurative painter springs to mind. While he probably needs to know a whole lot about human anatomy, he usually does not need to paint all of it to create the semblance of a human figure. Translated to AI programming in games, this means that we need to know about human behaviour, but we don’t need to recreate it to achieve credibility in a character.
So you might think AI programmers need to study psychology now. I don’t think so. In fact, the risk would be that they would then start to recreate psychological models in code. Which doesn’t get us any further. By analogy: a biologist might know a lot more about anatomy than a painter but, even if he develops the skill, chances are he could not paint as beautiful a picture.

The key is in internal knowledge, acquired knowledge. The painter does not have to think about all his anatomy lessons. His hand knows about them. He can instinctively draw the correct shape.
Now for a final and extreme example, think about love. Scientists are trying to tell us that love, or any kind of emotion for that matter, is just a question of chemical reactions in your nervous system. Does this knowledge help us to paint a picture of love? Love has been depicted very effectively hundreds of thousands of times in many different media throughout history without knowing anything about these chemical reactions. And still programmers, I’m sure, would be inclined to replicate the chemical reactions in code if they want to talk about love! 🙄

How do the poets talk about love? They talk about the symptoms, not the disease. That way the reader can recognize the symptoms and recreate the feeling of love inside of himself. That’s where we need to get with our AI: to let the player do all the hard work!

In my thoughts about Drama Princess I’m seeing the autonomous characters more and more as empty shells. They don’t have an artificial brain or a mood or any replication of what constitutes a human. Except appearance.
Their behaviour is not defined by how they feel or what they want but by the environment and the circumstances. I don’t want to model human emotions. I want to model relationships, the things that exist between the characters rather than within them. To only create the appearance, the symptoms. To some extent, it doesn’t even matter which emotion is triggered. We can let the user’s nervous system take care of that. I’m sure it can come up with a pleasant emotion. If you want to interpret the scene as an amorous one, please do; if you see it as a hostile situation, that’s fine as well. Let the user create their own story.

The thing that we should control, as authors, is the environment, the circumstances. And those are a lot easier to model than human behaviour. But more about that in a later post.

Testing Drama Princess ideas

Posted by Michael on May 5th, 2006, in Development

The ultimate test of the concepts expressed on these pages is, of course, the answer to the question “How will all of this look once it is up and running?” How will the characters act when they are equipped with the Drama Princess system? What can we expect to see happen?

It would be good to develop a fictional test situation where we can do thought experiments with several ideas. The same situation every time, so that different systems can be compared.

Since the first application of the Drama Princess system -if successful- will be 144, a horror game based on the folk tale of Little Red Ridinghood, let’s take an example from that. A good situation seems to be when Red meets the Wolf. This is an interesting situation because many emotions can come up. Red can be an innocent child, or she can be a girl growing into a woman. In the case of the latter she could be afraid of growing up or eager to. And if eager, she might be a seductress. She might be extraverted as a seductress or coy. And the Wolf is ambiguous as well. He is hungry so he wants to eat Red. But he is also greedy and the promise of a second meal interests him. So he pretends to be nice to get information from Red. An interesting challenge for the AI will be how to make the Wolf respond to any seductive activity coming from Red. If the wolf is just hungry, he won’t be able to read the signs. But if we interpret “eating” as a metaphor for “having sex”, we could generate some interesting responses. Also, the Wolf might start the communication with hunger on his mind, but noticing the sexual attractiveness of Red, he might change his mind about what he thinks is more important.
If we need a third character to make the scene more complex, we can use the Little Deaf Mute Girl. She is Red’s friend and she knows the Wolf is up to no good. So she wants to discourage the communication and take Red away from the Wolf. But she is afraid of the Wolf and she doesn’t want to endanger her friendship with Red.
And to include authorship more strongly in the picture, we could say that the scene needs to have a certain mood. It should probably feel dangerous or scary.

The most important thing to keep in mind when running this test is that we need to generate a story in the head of the player. It doesn’t matter what the characters really think. We should also keep in mind that we don’t have any expectations as to the outcome of the confrontation. Any result that the test generates could/should be interesting. But only in the imagination of the player (keeping in mind that it is not unrealistic to not understand human behaviour sometimes).

The Deaf Mute Little Girl in the Pretty White Dress

Posted by Michael on May 5th, 2006, in Development

The Deafmute Little Girl in the first demo of 8.

From 2002 to 2005, we worked on a game called 8. One of its main features was that the protagonist was autonomous while the player looked at the game from a first-person perspective. You could instruct the girl by pointing and clicking, but the idea was that you were more like a guardian or parent to her. This autonomous character is what inspired us to start the Drama Princess project.

In a first demo of 8 we just triggered some random motions. For a second one we developed an engine that we called the Mood System, because the girl’s behaviour was entirely based on how she felt. We never got the opportunity to fully explore the potential of this system and it’s probably too convoluted and too much like “real A.I.” to be suitable for Drama Princess, but certain ideas might be useful nonetheless. Here’s a summary of the design.

The Deafmute Little Girl in the second demo of 8

The Mood system was designed by Ronald Jones based on specifications and descriptions that Auriea and I had compiled. It was built with Quest3D. The only custom part we had made was a motion blender.

The core of the system was a set of three values, ranging from 0 to 1, expressing Energy, Happiness and Concern. We called these the Mood Variables. These values would be influenced by just about everything: player’s actions, own actions, environment, etc.

To define the behaviour of the character, we developed an Action Language. The core of this language was an array with a list of actions. Whenever the girl wanted to do something, the action was added to the list. When an action was done, the system moved down one row. This action array contained several columns to store bits of information (target position to go to, object ID of the thing to pick up, etc.) associated with the action.

One advantage of using a persistent list like this was that we would have a record of what she had been doing and could replay that if we wanted to. Or the other way around: we could easily write a script for her by making an action array by hand. The list of actions defined the girl’s behaviour, regardless of how the list was built (player instructions, autonomous decisions, canned sequences, randomness, etc.). That made it very flexible.

Every request for the character to do something (either initiated by herself or by the player) would be translated into a list of actions that would be added to the Action List sequentially. Each request would have an ID so that, if so required, the whole sequence could be cancelled in one go, by moving to the row with the first different request ID.
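
From this description, a minimal reconstruction of the Action List might look as follows. Field names and structure are my guesses, not the actual Quest3D implementation:

```python
class ActionList:
    def __init__(self):
        self.rows = []      # each row: (request_id, action, params)
        self.current = 0    # index of the row being executed

    def add_request(self, request_id, actions):
        # A single request ("go there and pick that up") expands into
        # several rows that share one request ID.
        for action, params in actions:
            self.rows.append((request_id, action, params))

    def advance(self):
        # When an action is done, move down one row.
        self.current += 1

    def cancel_request(self, request_id):
        # Cancel a whole sequence in one go: jump to the first row
        # belonging to a different request.
        while (self.current < len(self.rows)
               and self.rows[self.current][0] == request_id):
            self.current += 1

girl = ActionList()
girl.add_request(1, [("walk_to", {"target": (3, 0)}),
                     ("pick_up", {"object_id": 42})])
girl.cancel_request(1)   # the player changes their mind mid-sequence
```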

Now, the fact that an action was on the list (however it got there) did not mean that the girl was actually going to do it. Every time a new action was processed, the Mood Variables would be consulted and she would only perform the action if she felt like it. This way we could deal with her changing mood. E.g. you tell her to go somewhere and pick up an object. She agrees to go there, but when she arrives she realises that the floor is all sticky and weird and she refuses to pick up the object.

If she agreed to do a certain action, then she would also consult her Mood Variables to see which animations she wanted to use for the action. Instead of walking, she could skip when happy or drag her feet when sad.
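
Sketched in code, with invented thresholds, that check might look like this:

```python
def process(action, energy, happiness, concern):
    """Decide whether a queued action is performed and with which
    animation, based on the Mood Variables (each in 0..1)."""
    if energy < 0.1:
        return None             # too tired to do anything at all
    if action == "pick_up" and concern > 0.7:
        return None             # she refuses: the floor is all sticky and weird
    if action == "walk":
        if happiness > 0.7:
            return "skip"       # happy enough to skip instead of walking
        if happiness < 0.3:
            return "drag_feet"  # sad: she drags her feet
    return action               # default animation for the action

print(process("walk", energy=0.5, happiness=0.9, concern=0.2))  # -> skip
```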

If she wasn’t being told what to do, she could make her own decisions. For this, we attached three envelopes to each action, one for each Mood Variable. These envelopes would return a number between -1 and 1 for each possible value of each Mood Variable, expressing her desire to do the action. This way e.g. a walk command would be accepted even if her Energy value was low, providing that her Concern and Happiness values were high enough.
Whenever she needed to make a decision on doing something, she made a list of actions sorted according to willingness to perform them and picked a random one from the top of the list.
Which, in hindsight, is kind of stupid since it doesn’t take opportunity into account. If you see an object, you would be inclined to pick it up, no? Rather than thinking “I feel like picking up an object, hm, where can I find an object?” That’s silly. The environment would only influence her Mood Variables, and that’s way too fuzzy to lead to any sort of reasonable action. To trigger behaviour that looks realistic, the object lying on the ground should say “Pick me up!” and the character should respond. Is this the postmodern version of A.I.? 🙂 The character is an empty shell and is simply being told by the environment what to do. It certainly fits with our desire to approach the problem from the outside rather than from within.
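
For what it’s worth, here is roughly what that envelope-based selection looked like, as a sketch. The real envelopes in 8 were authored curves; simple linear ramps and invented action names stand in for them here:

```python
import random

def ramp(lo, hi):
    """Envelope rising (or falling) linearly from lo at mood value 0
    to hi at mood value 1, returning a desire contribution in [-1, 1]."""
    return lambda v: lo + (hi - lo) * v

ENVELOPES = {
    # action: one envelope per Mood Variable (Energy, Happiness, Concern)
    "run":   (ramp(-1, 1), ramp(0, 0.5),  ramp(0.5, -0.5)),
    "sleep": (ramp(1, -1), ramp(0, 0),    ramp(0, -1)),
    "hide":  (ramp(0, 0),  ramp(0.5, -1), ramp(-1, 1)),
}

def decide(energy, happiness, concern, top_n=2):
    scored = []
    for action, (fe, fh, fc) in ENVELOPES.items():
        desire = fe(energy) + fh(happiness) + fc(concern)
        scored.append((desire, action))
    # Sort by willingness and pick a random action from the top.
    scored.sort(reverse=True)
    return random.choice(scored[:top_n])[1]

print(decide(energy=0.2, happiness=0.8, concern=0.1))
```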

Exercise: look at people and pretend that they have no plans and make no decisions but instead their environment is telling them what to do.

Conclusion
Now that I look back on it, and after having programmed a lot more and having read a bit about A.I., I realize that this system is not as complex as I once thought. Its main problem was that it was far too concerned with the character’s mood and far too little with what the viewer actually perceives. This is probably why random selections of actions often lead to more interesting behaviour. The Mood System also required extensive trial-and-error authoring to make sure that the willingness to do certain actions would correspond properly to the Mood Variables.

In short: it makes no sense to try and define a character’s mood with a limited set of variables and expect her to exhibit rich and meaningful behaviour based solely on this imperfectly expressed mood. Then, indeed, randomness is better.

The Action List, on the other hand, remains an attractive idea and seems like a solid way of storing behaviour sequences, potentially expressing even long-term desires that can be interrupted by short-term ones.

Emotion roles

Posted by Michael on May 1st, 2006, in Development

When thinking about simulating emotions, the task of simulating all possible emotions that a human can display is daunting. And the chances of generating absurd behaviour increase with the number of emotions or moods that are possible. Perhaps this is another opportunity for coming at the problem from the outside.

What if each character only has two emotions and different characters have different sets of emotions? As a group, they would provide for an emotionally rich presentation but every single character would be easy to read.

The two emotions that each character possesses would be opposites, so that we don’t have a scale between neutral and happy and another one between neutral and angry, but one continuous scale between happy and sad.
One character would always hover between happy and sad but would never be angry or disappointed. Another one would always be comfortable or frightened, yet another one always hateful or benevolent, or something in between.

This way we would not need the complex layering of personality, mood and emotions. For each character, this one set of emotions would automatically express the mood (the position in between the extremes) and the personality (the choice of emotions).
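
A sketch of this single-axis idea. The pairs are illustrative, and choosing them is precisely the act of authorship mentioned below:

```python
class EmotionAxis:
    def __init__(self, negative, positive, value=0.5):
        self.negative = negative   # e.g. "sad", "frightened", "hateful"
        self.positive = positive   # e.g. "happy", "comfortable", "benevolent"
        self.value = value         # 0 = fully negative, 1 = fully positive

    def describe(self):
        # The position on the axis is the mood; the chosen pair of
        # opposites is the personality.
        if self.value < 0.33:
            return self.negative
        if self.value > 0.66:
            return self.positive
        return f"between {self.negative} and {self.positive}"

red = EmotionAxis("frightened", "comfortable", value=0.2)
wolf = EmotionAxis("hateful", "benevolent", value=0.5)
print(red.describe())    # -> frightened
print(wolf.describe())   # -> between hateful and benevolent
```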

This concept is limited by requiring opposite emotions. So complex emotions like nostalgia or disgust would be near impossible to express. But simplification will have to happen somewhere anyway. So it might as well be here.
To come up with the pairs of emotions is an act of authorship that is not neutral.
Also, there is a limited list of emotions that are feasible to express in a game. So when there are more characters than emotions, several characters would have the same personality. This does not necessarily have to be a problem (as Animal Crossing shows).

Willingness in the Suspension of Disbelief

Posted by Michael on May 1st, 2006, in Development

As Animal Crossing shows us very convincingly, making the player like the autonomous characters already gets you halfway towards making the player believe them. This seems related to what Richard Evans once said about the Creature in Black & White: one of the three design goals for the A.I. was to make the creature loveable.

It appears that the “willingness” aspect of the proverbial “Willing Suspension of Disbelief” that is required for the audience to enjoy fiction is often underrated. Surely a lot can be done in the design of both the appearance and the behaviour of the characters in a game to increase this willingness in the player. One powerful means towards this is to make the characters loveable, cute, attractive, charming.
Perhaps this is where The Sims fails most of the time: the characters seem too selfish, greedy and nasty to develop any feelings for.