Etiquette bubbles

Posted by Michael on May 31st, 2006, in Development

This is sort of a stream of consciousness that starts with the problem of animation selection and ends in the invention of instanced etiquette.

The final result of any logic that we are going to build is an animation played by a character. This animation may be modified by a pose (to express tiredness e.g. or personality).

Sometimes animations will be played by two characters synchronously (playing with each other, hugging, etc). This, combined with a reluctance towards individuality, motivates us to think of this animation as being chosen by the situation rather than by the individual actor.
If individual characters have to perform animations synchronously, they would have to communicate with each other about the choice of animation and when to get started. This is messy.

This means that, at least when the situation consists of more than one actor, one could think of the situation itself as possessing the knowledge about the mood that motivates behaviour, rather than the individual actors. This would mean that each actor agrees on the mood of the situation. This may not be realistic but it might be good enough. Giving the actors the freedom to choose whether or not to do what the situation dictates is messy. Perhaps we can call this “mood of the situation” etiquette.

Now, how does the situation gauge the mood? And do we allow for a measure of randomness?
A lot depends on the characters. On their personality and their state (or mood). But coming to a single conclusion about the mood of a scene, based on these parameters, is boring. This is where randomness can be our friend. So that when a weak girl meets an aggressive monster, the chance exists that she greets him with fond enthusiasm.
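In code, gauging a mood from weighted parameters, with a small chance of an off-beat outcome, could look something like this sketch (Python; the function name, the moods and the wildcard chance are invented for illustration, not a design decision):

```python
import random

def gauge_mood(weighted_moods, wildcard_chance=0.1, rng=random):
    """Pick the mood of a situation from weighted options, but leave a
    small chance that a completely random mood wins instead."""
    moods = list(weighted_moods)
    if rng.random() < wildcard_chance:
        # the weak girl greets the aggressive monster with fond enthusiasm
        return rng.choice(moods)
    weights = [weighted_moods[m] for m in moods]
    return rng.choices(moods, weights=weights, k=1)[0]
```

With the wildcard chance at zero this degenerates into plain weighted selection based on personality and state; raising it makes the situation less predictable.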

This leads us to causality. If the monster responds favourably, then the two characters can become friends. This would drastically reduce the chances of e.g. the girl being eaten.
If the monster responds negatively, the girl doesn’t necessarily have to give up. This depends on the narrative goal of the situation. And how strong it is. If there is a strong “pressure from etiquette” to become friends with the monster, the girl may be motivated to remain nice until the goal of friendship has been reached, or a more prominent goal has surfaced (e.g. survival if the monster becomes menacing).

There may be a problem with thinking of the situation or etiquette being the core of the system in terms of modularity and reusability. Do we need to have multiple instances of etiquette, one for each encounter? Or does a single “master brain” suffice (with the dreaded spectre of the Drama Manager looming on the horizon)?

On the other hand, when we think of real etiquette, it still applies even when a character is alone. Also, and perhaps more importantly, the etiquette may only need to exist where the player’s avatar is, at least for the kind of games that we want to make. We could consider etiquette to be a sort of stretching bubble that includes only the characters in the vicinity of the player’s avatar.

A problem with avatar-centric etiquette is obviously that characters outside of the sphere of influence will behave like animals. 🙂
So how do animals behave? The general answer is “on instinct”. Perhaps instinct is also a form of etiquette. Perhaps instinct could be replaced by etiquette. To prevent them from behaving like animals.
What does this mean?

What if all characters walk around with an etiquette bubble? When they meet each other, one character will be enveloped in the other’s etiquette bubble. Which etiquette bubble wins is randomly decided. If one of the characters is the player, then the player’s bubble always wins. It doesn’t really matter because all etiquette bubbles are the same. The important thing is that two (or more?) characters can share the same bubble.
Instanced etiquette would allow characters to interact with each other without involvement of the player’s avatar. When two characters meet, etiquette controls both of their behaviours.
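A minimal sketch of such instanced bubbles, assuming the simplest possible merge rule (Python; the class and function names are my own invention):

```python
import random

class EtiquetteBubble:
    """One shared 'mood of the situation' for the characters inside it."""
    def __init__(self, owner):
        self.owner = owner
        self.members = {owner}

def merge_bubbles(a, b, player=None, rng=random):
    """When two characters meet, one bubble envelops the other.
    The player's bubble always wins; otherwise the winner is random."""
    if player is not None and b.owner == player:
        winner, loser = b, a
    elif player is not None and a.owner == player:
        winner, loser = a, b
    elif rng.random() < 0.5:
        winner, loser = b, a
    else:
        winner, loser = a, b
    winner.members |= loser.members
    return winner
```

Because all bubbles are the same, it really doesn’t matter which one wins, except that the player’s should.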

Or is protocol a better name?

Why not random?

Posted by Michael on May 25th, 2006, in Development

The basic problem of autonomous behaviour in Drama Princess is that of action selection. The simplest solution is randomness. And we know from experience that randomness is also very believable, often more so than much more complex systems.

For reasons that are mostly related to narrative, total randomness is not desirable.
These reasons include the desire for

  • Consistent behaviour
    It’s easy to make an insane or hysteric character. Pure randomness will get you very close to this. But we do not want to limit our narrative potential. Sane characters display a certain consistency in their behaviour.
    We have already suggested a simple solution for consistency.
  • Diverse personalities
    Both the story and the player’s empathy improve as the characters display a recognisable personality. To express this personality, a character’s selection logic would need to prefer certain actions over others. To define this personality, the author would need to be able to define this preference. On the other hand, giving each character a different set of actions would also immediately give them a different personality. And the outer appearance of a character (model, textures, animations) goes a long way in this area as well.
    The re-usability requirement of Drama Princess, however, would benefit from characters that share action sets and assets.
  • Character growth
    Also important for both story and empathy is the perception of growth or evolution in a character. Especially if this growth happens as a result of interaction with the player.
    To implement growth as a constraint on randomness, the selection preferences mentioned above would need to be dynamic. Not randomly dynamic but according to a gradual shift. Perhaps one could think of morphing as a metaphor: you could morph one set of actions or preferences gradually into another. In the case of preferences, this could be as simple as increasing or decreasing a bunch of numbers, each of which defines the probability that an action will be selected.
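The “bunch of numbers” idea above could be sketched as follows (Python; the actions, the morph rate and the weights are placeholders for illustration):

```python
import random

def choose_action(preferences, rng=random):
    """Weighted random selection: each number defines the relative
    probability that its action is picked."""
    actions = list(preferences)
    weights = [preferences[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

def morph(preferences, target, rate=0.05):
    """Shift the preferences a small step towards a target set, so the
    personality changes gradually instead of randomly."""
    return {a: p + rate * (target[a] - p) for a, p in preferences.items()}
```

Calling morph once per interaction with the player would slowly turn, say, a sulking character into a playful one.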

Randomizing constraints?

Posted by Michael on May 25th, 2006, in Development

Randomness is an easy solution for the problem of action selection in an autonomous character. For reasons that I will get into in a later post, complete randomness is not desirable. In order to create believable characters, one needs to develop constraints on this randomness.

But what if we turn the logic around?
At the other side of the spectrum, we find totally scripted behaviour. Would it be possible to inject randomness into scripted behaviour in such a way that the character seems autonomous?

We would first need to define scripting. Obviously, if all behaviour of the character is scripted, then randomness cannot have any deep effect. Unless the intervention of randomness applies to the very linearity of the script. In a branched structure, randomness could be applied every time a new branch needs to be chosen. Or the script could be cut up into sequences, each of which has a limited number of sequences that can follow it. Randomness could be applied to the selection. The smaller these sequences are, the more varied the behaviour will be.
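Cutting the script into sequences with a limited set of successors amounts to something like this (Python; the sequence names and the follow-table are made up for illustration):

```python
import random

# A scripted behaviour cut into short sequences; each sequence lists
# the sequences that are allowed to follow it.
FOLLOWS = {
    "wake": ["stretch", "yawn"],
    "stretch": ["walk"],
    "yawn": ["walk", "stretch"],
    "walk": ["wake"],  # loop back, to keep the sketch small
}

def next_sequence(current, rng=random):
    """Randomness applies only to the choice of the next sequence."""
    return rng.choice(FOLLOWS[current])

def play(start, steps, rng=random):
    """Walk the script, producing one possible trace of behaviour."""
    trace = [start]
    for _ in range(steps):
        trace.append(next_sequence(trace[-1], rng))
    return trace
```

Every trace is a valid reading of the script, but no two playthroughs need to be identical.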

It seems that randomizing the constraints and constraining the randomness simply converge on each other as the quality improves. So the ideal system lies somewhere in the middle. Randomizing the constraints might be a more convenient way to author the system because it deals with desired behaviour as its material.

Drama Princess Workshop & Symposium

Posted by Michael on May 25th, 2006, in Development

On 22 and 23 May 2006, we organised a small workshop and symposium in the Foam Lab in Brussels.

A summary of the symposium’s conclusions can be found here.

On the first day, guests were invited to play certain commercial videogames. Those games were Ico, Black & White, The Sims 2 and Animal Crossing. Facade, Catz and Soul Calibur II were available as well.
And on the second day we discussed the autonomous characters in these games in a round-table format that was open to the public. Present on that day were Maja Kuzmanovic & Nik Gaffney (who had played Ico during the workshop), Judith Dormans & Daan Pasmans (Animal Crossing), Marek Bronstring (Black & White), Maaike Lauwaert & Martijn Hendricks (The Sims 2), Lina Kusaite, Cocky Eek, Theun Karelse, Elke Van Campenhout, Nina Czegledy and of course we, Auriea Harvey & Michaël Samyn.

The conclusions that our guests came to about the autonomous characters in these games were largely the same as the ones that we have expressed on these pages before. With the exceptions, to some extent, of Maaike Lauwaert, who went as far as calling The Sims 2 an amoral game, and Maja Kuzmanovic, who expressed doubts about the sympathy one feels as a player for Yorda. It must be noted that Maaike had been doing a case study on The Sims for another project and that Maja had never played a PlayStation game before in her life.

During the discussion we came to a perhaps odd consensus: the sophistication of “AI” seems to be inversely related to the believability of the characters. The primitive Animal Crossing creatures were far easier to accept than the complex Sims. Some people even had trouble calling the Sims autonomous because they do not seem to try to accomplish their goals but need the player’s help to do so. Also, their personalities were not considered very diverse, as they all responded in the same way to the same stimuli.

Here’s a summarized transcript of the videorecording of the symposium.

How low should the author go?

Posted by Michael on May 23rd, 2006, in Development

Developing a system for interactive drama requires finding the optimal point on the scale between the extreme high level of an animated movie, where everything is predefined, and the extreme low level of a hypothetical ideal AI or A-Life system that only authors the smallest atoms of behaviour and where all results are emergent.

In a realistic present-day situation, building autonomy into your characters boils down to designing a system that allows these characters to choose from a list of pre-defined actions. One side of this problem is the selection mechanism, the other side of this problem is the definition of the actions.

The actions can be defined on a low level (walk, breathe, wave, pick up) or on a high level (walk to the wall and pick up the key and wave when you’re done). The lower the level of definition, the more emergent the behaviour; the higher, the more the behaviour will make sense. The selection system will need to be designed to deal with either one.

Actions defined on a higher level are actually sequences of actions defined on a lower level. So strictly speaking, we could have two selection systems: one to select from the lower level and one from the higher level. These systems could work together or alternately. The latter requires yet another selection system that decides when to apply the lower-level, more emergent behaviour and when to apply the higher-level, more defined behaviour.
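The three selectors could be layered like this minimal sketch (Python; the action lists and the 50/50 split between levels are placeholders, not a design decision):

```python
import random

# Low-level atoms of behaviour, and high-level actions that are
# really just authored sequences of those atoms.
LOW_LEVEL = ["walk", "breathe", "wave", "pick up"]
HIGH_LEVEL = {
    "fetch the key": ["walk", "pick up", "wave"],
}

def select(rng=random, high_level_chance=0.5):
    """A third selector decides which level applies, then delegates.
    High-level actions expand into their low-level sequence."""
    if rng.random() < high_level_chance:
        name = rng.choice(list(HIGH_LEVEL))
        return HIGH_LEVEL[name]      # defined, sensible behaviour
    return [rng.choice(LOW_LEVEL)]   # emergent, atomic behaviour
```

Since high-level actions expand into low-level atoms, both selectors can feed the same animation system.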

Thinking of somebody else

Posted by Michael on May 19th, 2006, in Development

When designing a system for autonomous characters, one is very quick to think about the problem in terms of “How or when or why would I do this or that?” The solution for this problem is then to design a system that can reason along the lines of “When I feel like this, I will do that.” Such a reasoning system can be extremely complex and (perhaps because of that) still lead to rigid, unrealistic behaviour when applied to Non Playing Characters (NPCs).

But the NPC, by definition, does not represent you, the player. The NPC is always somebody else, the other. Therefore, a more appropriate way to think about the problem of representing autonomous characters is in terms of “What is she doing and does that make sense?” The natural empathy that we feel makes us immediately project the behaviour that we are observing onto ourselves and translate what we see into “If I were to do that, would it make sense?” And to answer this question, we attach some meaning to what is being done. This whole process of empathy does not need to be programmed! The player will do it automatically. All we need to program is the behaviour of the other person (not ourselves). We do not need to program the reasoning, as long as the behaviour is such that the player can attach some meaning to it. This doesn’t mean that the NPC needs to know this meaning. It only means that the NPC cannot behave in absurd ways. So we just need to design a few rules to avoid completely absurd behaviour, not a synthetic mind.

Another aspect of thinking of the NPC as another person and not yourself is that we only need to program systems for things that can be perceived by the player. If our output (graphics, animations, sounds) can only express sleeping or fighting, i.e. if our output is boolean, there is no need for a complex system that can reason about the decision to either sleep or fight (such as “if I’m a bit tired but I’m close to an enemy, I will fight anyway unless I’m very lazy or I’m sick” etc). All the player will see is the NPC either sleeping or fighting. Why the NPC starts fighting is something that can be left entirely up to the spectator.
For the system, randomness is sufficient. Something like “If I’m close to an enemy, and random number A equals x, then I’ll fight”. We could refine the system a bit by increasing the probability as the NPC gets closer to the enemy, or by taking any other external context into account (context, in other words, that can be perceived by the player) like aggressive behaviour of the enemy or terrain conditions. But the decision itself will always be random because the reasoning of the NPC cannot be expressed anyway.
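That refinement, a random decision whose probability rises with proximity, could be sketched as follows (Python; the linear falloff and the range are assumptions for illustration):

```python
import random

def decides_to_fight(distance, max_range=10.0, rng=random):
    """The decision is random, but the probability rises as the NPC
    gets closer to the enemy (units are arbitrary)."""
    if distance > max_range:
        return False  # no enemy nearby, nothing to decide
    probability = 1.0 - distance / max_range  # 1.0 at point blank
    return rng.random() < probability
```

Any perceivable context (enemy aggression, terrain) could be folded into the probability the same way; the NPC’s “reasoning” stays entirely in the spectator’s head.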

We do have to add some rules to avoid absurdity. For instance, if our NPC is very tired when he starts to fight, we don’t want him to fall asleep during the battle. So we simply make a rule that the NPC cannot go to his sleeping state when he is in his fighting state. The player, who might be aware of the NPC’s tiredness, would attach meaning to it by thinking of adrenaline rushes, etc. But we do not need to program adrenaline.
We can reduce the build workload and increase the flexibility of these rules by generalising them. E.g. by grouping behaviours according to compatibility, something we have to do anyway to ensure consistent behaviour.
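Grouping behaviours by compatibility gives one general rule instead of many specific ones; a sketch (Python; the behaviours and groups are invented for illustration):

```python
# Behaviours grouped by compatibility: a behaviour may only follow the
# current one if the two share at least one group.
GROUPS = {
    "fight": {"active"},
    "flee": {"active"},
    "sleep": {"restful"},
    "eat": {"restful", "active"},
}

def may_switch(current, candidate):
    """One general rule replaces many specific ones: no transition
    between incompatible groups, so no falling asleep mid-battle."""
    return bool(GROUPS[current] & GROUPS[candidate])
```

Adding a behaviour then only means assigning it to groups, not writing a rule against every other behaviour.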

I admit that this is a very unintuitive way of thinking about NPCs. So strong is our inclination to empathize. But we have to force ourselves to think of the other as somebody else. Because we only need to program his or her external behaviour. And we can make up much simpler rules to govern that behaviour than if we needed to program our own (including all the reasoning). As a bonus, we allow the spectator more freedom of interpretation and the game more possibilities of unexpected things happening.

Sympathy and empathy

Posted by Michael on May 17th, 2006, in Development

The goal of our work that Drama Princess needs to serve is to help give the spectator emotionally satisfying experiences. Such experiences often come from immersion in a virtual reality (a book, a movie, a game). To allow for such an emotional immersion, the spectator needs to willingly suspend his or her disbelief. As stated before, I think there is a lot that can be done to increase this willingness, rather than focussing all one’s attention on increasing the credibility. A comfortable interface and a pleasant environment will create a mood that will allow the willingness of the spectator to grow. The sympathy that the spectator feels for the characters in the game will do the rest.
Allowing this sympathy to happen becomes as important as making sure that the character is believable. And for artists like ourselves, the former seems a lot less daunting (and a lot more gratifying) a task than the latter.

Even hardcore A.I. scientists have realized that the flaws of an autonomous character can generate such sympathy in the spectator that these characters are perceived as much more believable than the ones that are intellectually superior or supposedly more realistic. The key is that people care about the characters.

Another important element in the emotional satisfaction that comes from fiction is empathy with the characters. If the spectator feels what the character feels, or at least is inclined to try to imagine so, the story will appear very believable and even relevant to the spectator’s own life. And when the spectator feels sympathy for the character, he or she will find it a lot easier to feel empathy. So again, sympathy is key.

A last piece of this puzzle has been suggested here before: it is easier to feel sympathy (and thus empathy) for someone who shows sympathy for you. Note that I said “show sympathy” and not “feel sympathy”. All we care about is how things appear to the spectator. So it becomes of utmost importance that our autonomous character displays behaviour that can be interpreted by the spectator as expressing sympathy for him or her. As we know from experience with 8, this can be as simple as recognizing that the player is there by making the character look into the camera from time to time. She’s so cute! 🙂

A few more questions remain. Since we are talking about a situation in which the spectator is represented in the game world by an avatar, we can ask ourselves whether we need to establish sympathy between the autonomous character and the player’s avatar, between the autonomous characters and the player (the camera) and/or between the avatar and the player.

Mindless puppets?

Posted by Michael on May 17th, 2006, in Development

Richard Evans’ BDI architecture with Opinions
Illustration from Varieties of Learning by Richard Evans in AI Game Programming Wisdom

Look at that!… Isn’t that beautiful?

This system almost moves me with its clarity and simplicity.

And yet…
Yet I feel that we don’t need the top part of this diagram at all for our purposes. Does it really matter what our character believes or desires when all we see is him throwing a stone at a house? Does it?
Please somebody protect me from myself. And explain to me why our puppets need to have minds of their own. I can’t find any reason. Why go through all the trouble? Please, someone? Anyone?

Always in between things

Posted by Michael on May 16th, 2006, in Development

Because of the topic of this research, I’m trying to observe people more purposefully than I normally do. To see what they do, how they behave, how they interact with each other, etc.

It occurs to me that most of the time, people are in between things. You find them moving from one place to another, or doing something with a certain goal in mind. Most of the time, this behaviour is tremendously boring to observe. These people, these real people, are quite different from the characters you meet in movies or novels. The latter are far more interesting. It’s much nicer to go sit in a crowd and read a book than to look at the passers-by!

So this confirms my belief that we should not try to make synthetic humans. Because humans are boring! The only way that you can compensate for the general tediousness of human life is, I guess, to compress time (as is done in The Sims). So that the 90% of their lifetimes that humans spend on being in between things at least doesn’t last too long.

The real solution -in my opinion- is to simply ignore this in-between time and design a system that only deals with the 10% of human life that is interesting to look at. The question that remains is whether this is compatible with the real-time medium. Will a character that does “interesting things” all of the time still seem real? Or do we need a structure that implies all the other things that the character does without showing them (by skipping forward in time e.g.)?

Interacting with avatars

Posted by Michael on May 15th, 2006, in Development

The interaction between autonomous characters in a realtime fiction can be defined very loosely. Since we don’t have a fixed story to tell and all meaning should come from the spectator’s own imagination, a great deal of the autonomous behaviour can be governed by randomness.

One of the characters in our virtual space, however, will be the avatar of the spectator. Contrary to the Non Playing Characters (NPCs), the behaviour of this character will be controlled by a human. This means that we cannot use the concept of “activity-things” (Richard Evans) for these interactions. It seems that we will not be able to cheat as much when the spectator knows what is going on in the mind of one of the participants.

For instance, when one NPC seems to have forgotten what he experienced with another NPC, the spectator will probably make up a story why this is.
For example, in one scene we see the Wolf behaving very aggressively towards the Deafmute Girl and she runs away. In the next scene we see both of them play together. The spectator can imagine that at some point these characters got to know each other and decided that they like each other. When the Wolf is aggressive towards the avatar (Red), however, the spectator (who controls the avatar) will find it highly suspicious if suddenly the Wolf wants to play with her.

Perhaps this is just a problem of consistency. A problem that is fairly easy to solve.
But I can imagine that there may be other occasions when the relationships that the NPCs have with the avatar will require a bit more detail than the relationships that they have with each other. So it’s probably a good idea to keep an eye open for this and make sure that there is sufficient room in our design to add such detail.