Caring agents make good teachers and friendly companions
The new field of affective computing makes it possible to develop intelligent agents (graphical characters and robots) that are able to understand and respond to human emotions. As computers became more powerful during the 1990s, artificial intelligence (AI) researchers began to combine their technology with interactive graphics. They were able to produce intelligent characters that could interact with each other in virtual worlds and also with human users.1 The idea was taken up in robotics too, forming the field of human-robot interaction (HRI).2, 3 It soon became clear that giving intelligence to a graphical or robot body opened up new opportunities for communication using non-verbal expressive behaviour.
Expressive behaviour refers to the facial expressions, gestures, and postures that allow humans to track each other's goals and motivations. In human-human interaction, non-verbal behaviour is often said to carry more information than words. But how can a graphical character use expressive behaviour correctly within a context? It has become clear that this depends on equipping the character with an intelligent affective model that allows it to assess events around it. The character can then generate an appropriate response, which it can express using its body. We focus on modelling empathy, which is the human ability to understand and share the emotions of others. Our overall aim is to equip our intelligent agents for applications in which empathic engagement by the user is essential. These applications may be specific educational domains that target attitudes and behaviour, or human social environments in general.
We began developing our affective agent model in our first application domain, creating graphical characters to support education against bullying for children aged 9 to 11. These characters represented virtual children in a 3D graphical school in which one child was being bullied by another in a virtual drama (see Figure 1). The child user would watch an episode in our FearNot! application4 and then be asked by the bullied character what they should do about the bullying. This ‘invisible friend’ approach depended entirely on the child feeling empathy for the character's predicament.
These graphical characters had to be able to express sorrow and anger, and they also had to be able to act under the influence of these emotions, whether by pushing the victim over or bursting into tears. We rejected the scripted approach so widely used in computer games because it would make it difficult for the user to believe in the characters as autonomous beings with affective states worth caring about. Instead, we adopted an approach from psychology called cognitive appraisal.5 According to this approach, we generate emotions by interpreting what we experience in terms of external events and our internal goals. For example, suppose you are walking down the street and a stranger approaches you and shouts at you. This relates negatively to your goals of personal safety and pleasant interactions, so it typically generates fear and anger.
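The appraisal idea can be sketched in a few lines of code. This is an illustrative toy, not the FearNot! implementation: the event representation, goal weights, and emotion thresholds are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    # How the event bears on each goal, from -1.0 (thwarts it) to 1.0 (furthers it).
    goal_impact: dict

def appraise(event, goals):
    """Return (emotion, intensity): score the event against the goals the
    agent holds, weighted by how important each goal is to the agent."""
    score = sum(weight * event.goal_impact.get(goal, 0.0)
                for goal, weight in goals.items())
    if score <= -0.5:
        return ("fear", -score)      # strongly goal-threatening
    if score < 0:
        return ("anger", -score)     # mildly goal-threatening
    if score >= 0.5:
        return ("joy", score)        # goal-furthering
    return ("neutral", 0.0)

# The shouting-stranger example from the text:
goals = {"personal_safety": 1.0, "pleasant_interaction": 0.5}
shout = Event("stranger shouts at you",
              {"personal_safety": -0.6, "pleasant_interaction": -0.8})
emotion, intensity = appraise(shout, goals)  # -> ("fear", 1.0)
```

The key design point is that the emotion is not scripted per event: the same event produces different emotions for agents with different goals, which is what makes the characters behave as autonomous individuals.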
To further strengthen our model, we added another theory from psychology, that of coping behaviour.6 This theory suggests that a strong emotion either produces a real-world action (you run away from the shouting person or you shout back), or an internal emotional adjustment (you tell yourself the person is crazy and so ignore him or her). We used this theory to create an intelligent planner for our characters.7 Using the planner, the characters were able to act autonomously in the school scenes and so produce different stories for individual children.
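The coping step can be sketched as a choice between the two responses the theory distinguishes. Again this is a hedged illustration, not the planner described in the text; the threshold, the damping factor, and the action table are invented for the example.

```python
def cope(emotion, intensity, actions):
    """Choose a coping response for the current emotional state.

    `actions` maps an emotion to an external action the agent can plan
    (problem-focused coping, e.g. running away or shouting back). If no
    action is available, fall back to an internal adjustment
    (emotion-focused coping) that reinterprets the event and dampens
    the emotion's intensity.
    """
    if intensity < 0.3:
        return ("no_coping", emotion, intensity)  # too weak to act on
    if emotion in actions:
        return ("act", actions[emotion], intensity)
    # Internal adjustment: "the person is crazy, so ignore him or her."
    return ("reappraise", emotion, intensity * 0.5)

# Fear has an external action available; anger here does not, so it is
# handled by internal reappraisal instead.
print(cope("fear", 0.9, {"fear": "run_away"}))   # ('act', 'run_away', 0.9)
print(cope("anger", 0.8, {"fear": "run_away"}))  # ('reappraise', 'anger', 0.4)
```

In a planner, the chosen external action becomes a goal to plan for, which is how the FearNot! characters could act autonomously and produce different stories for individual children.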
A large-scale evaluation8 showed that children did empathize with the FearNot! characters and that the application had an effect on bullying behaviour. The team then extended this approach from the desktop environment of FearNot! into ORIENT,9 in which a three-person team of 14- to 16-year-olds role-played a space patrol that was beamed down to the planet ORIENT and interacted with strange characters called Sprytes (see Figure 2). This was aimed at increasing empathy for other cultures in the classroom and involved a large screen, a real-world space, and a variety of novel interaction devices (the WiiMote, the Wii Dance Mat, mobile phones). A current project10 is extending this work further by adding the ability to configure characters to behave as if they come from different cultures.
A large-scale project is using the same affective architecture to investigate how graphical characters and robots can become long-term companions in human social environments. These companions might support the elderly in their home, be team buddies in the workplace, or play games, such as chess, with children11 (see Figure 3). So far we have focused on giving our agents the ability to model and express affective states. However, a companion must be able to respond sensitively to its human interaction partners, so we need to add the ability for characters to track the user's face, detect smiles, and recognize simple gestures. In addition, the companion must also be able to interpret these smiles and gestures correctly, which remains a significant challenge. Our vision is an agent that is able to register the user's affective state so that it can respond with appropriate actions and expressive behaviour, so forming an affective loop.12
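The affective loop can be summarized as a sense-interpret-respond cycle. The sketch below is purely illustrative: the perception functions are stubs standing in for real face-tracking, smile-detection, and gesture-recognition components, and all names are invented for the example.

```python
def sense_user():
    """Stub for the vision pipeline (face tracking, smile detection,
    simple gesture recognition). Returns raw detections."""
    return {"smiling": True, "gesture": "wave"}

def interpret(observation):
    """Map raw detections to an estimated user affective state.
    Doing this correctly is the open challenge noted in the text."""
    if observation["smiling"]:
        return "positive"
    return "uncertain"

def respond(user_state):
    """Select an action plus expressive behaviour for the companion."""
    if user_state == "positive":
        return ("continue_game", "smile_back")
    return ("offer_help", "concerned_expression")

def affective_loop_step():
    """One turn of the loop: the companion's response in turn influences
    the user, whose reaction is sensed on the next iteration."""
    observation = sense_user()
    user_state = interpret(observation)
    return respond(user_state)
```

The loop closes because the companion's action and expressive behaviour change the user's state, which the companion then senses again on the next cycle.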