Interactive Robotics

Caring agents make good teachers and friendly companions


Ruth Aylett

12 December 2011

Intelligent graphical and robot agents that are able to express emotional states can help educate children against bullying, improve empathy with other cultures, and act as real-world companions.

The new field of affective computing makes it possible to develop intelligent agents (graphical characters and robots) that are able to understand and respond to human emotions. As computers became more powerful during the 1990s, artificial intelligence (AI) researchers began to combine their technology with interactive graphics. They were able to produce intelligent characters that could interact with each other in virtual worlds and also with human users.1 The idea was taken up in robotics too, forming the field of human-robot interaction (HRI).2, 3 It soon became clear that giving intelligence to a graphical or robot body opened up new opportunities for communication using non-verbal expressive behaviour.

Expressive behaviour refers to the facial expressions, gestures, and postures that allow humans to track each other's goals and motivations. In human-human interaction, non-verbal behaviour is often said to carry more information than words. But how can a graphical character use expressive behaviour correctly within a context? It has become clear that this depends on equipping the character with an intelligent affective model that allows it to assess events around it. The character can then generate an appropriate response, which it can express using its body. We focus on modelling empathy, which is the human ability to understand and share the emotions of others. Our overall aim is to equip our intelligent agents for applications in which empathic engagement by the user is essential. These applications may be specific educational domains that target attitudes and behaviour, or human social environments in general.

We began developing our affective agent model within our first application domain, which produced graphical characters to support education against bullying for children aged 9 to 11. These characters represented virtual children in a 3D graphical school in which one child was being bullied by another in a virtual drama (see Figure 1). The child user would watch an episode in our FearNot! application4 and then be asked by the bullied character what they should do about the bullying. This ‘invisible friend’ approach depended entirely on the child feeling empathy for the character's predicament.


Figure 1. In the FearNot! program, graphical characters educated against bullying.

These graphical characters had to be able to express sorrow and anger, and they also had to be able to act under the influence of these emotions, whether by pushing the victim over or bursting into tears. We rejected the scripted approach so widely used in computer games because this would make it difficult for the user to believe in the characters as autonomous beings with affective states they should care about. As a result, we focused on an approach from psychology called cognitive appraisal.5 According to the cognitive appraisal approach, we generate emotions by interpreting what we experience in terms of external events and our own internal goals. For example, suppose you are walking down the street and a stranger approaches you and shouts at you. This relates negatively to goals of personal safety and pleasant interactions, so it typically generates fear and anger.
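The appraisal step can be sketched in a few lines of code. The sketch below is a simplified illustration of the idea, not the FearNot! implementation: an event is scored against the agent's weighted goals, and the sign of the resulting desirability, together with whether another agent caused the event, selects the emotion. All names and thresholds are illustrative assumptions.

```python
# A minimal sketch of cognitive appraisal: an event is scored against the
# agent's goals, and the sign of the resulting desirability determines the
# emotion generated. All names here are illustrative, not the FearNot! code.

def appraise(event, goals):
    """Return (emotion, intensity) for an event given weighted goals."""
    # Desirability: how the event affects each goal, weighted by importance.
    desirability = sum(
        weight * event.get(goal, 0.0) for goal, weight in goals.items()
    )
    if desirability < 0:
        # Negative events caused by another agent typically elicit anger;
        # other anticipated negative events elicit fear (simplified rules).
        emotion = "anger" if event.get("agent_caused") else "fear"
    elif desirability > 0:
        emotion = "joy"
    else:
        emotion = "neutral"
    return emotion, abs(desirability)

# A stranger shouting at you: harms safety and pleasant-interaction goals.
goals = {"personal_safety": 0.8, "pleasant_interaction": 0.5}
event = {"personal_safety": -0.5, "pleasant_interaction": -1.0,
         "agent_caused": True}
print(appraise(event, goals))  # ('anger', 0.9)
```

In a fuller model the appraisal variables would be richer (likelihood, blameworthiness, and so on), but the core pattern of mapping events through goals to emotions is the same.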

To further strengthen our model, we added another theory from psychology, that of coping behaviour.6 This theory suggests that a strong emotion either produces a real-world action (you run away from the shouting person or you shout back), or an internal emotional adjustment (you tell yourself the person is crazy and so ignore him or her). We used this theory to create an intelligent planner for our characters.7 Using the planner, the characters were able to act autonomously in the school scenes and so produce different stories for individual children.
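The coping step can likewise be sketched as a choice between acting on the world and adjusting the internal interpretation. The function below is a hedged illustration of that distinction; the thresholds, action names, and the `can_act` flag are assumptions for the example, not the published planner.

```python
# A sketch of coping behaviour: a sufficiently strong emotion triggers
# either problem-focused coping (a real-world action) or emotion-focused
# coping (an internal reappraisal). Names and thresholds are illustrative.

def cope(emotion, intensity, can_act):
    """Pick a coping strategy for an appraised emotion."""
    if intensity < 0.3:
        return "no coping needed"
    if can_act:
        # Problem-focused: act on the cause of the emotion.
        actions = {"fear": "run away", "anger": "shout back"}
        return actions.get(emotion, "seek help")
    # Emotion-focused: adjust the internal interpretation instead.
    return "reappraise: 'the person is crazy, so ignore him or her'"

print(cope("fear", 0.9, can_act=True))    # run away
print(cope("anger", 0.9, can_act=False))  # reappraise: ...
```

In the characters' planner, the problem-focused branch corresponds to generating a plan of actions, while the emotion-focused branch modifies the character's own beliefs and intentions.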

A large-scale evaluation8 showed that children did empathize with FearNot! characters and that it did have an effect on bullying behaviour. The team then extended this approach from the desktop environment of FearNot! into ORIENT,9 in which a three-person team of 14- to 16-year-olds role-played a space patrol beamed down to the planet ORIENT to interact with strange characters called Sprytes (see Figure 2). This was aimed at increasing empathy for other cultures in the classroom and involved a large screen, a real-world space, and a variety of novel interaction devices (a WiiMote, a Wii dance mat, and mobile phones). A current project10 is extending this work further by adding the ability to configure characters to behave as if they come from different cultures.


Figure 2. The ORIENT interactive program aimed at increasing empathy for other cultures.

A large-scale project is using the same affective architecture to investigate how graphical characters and robots can become long-term companions in human social environments. These companions might support the elderly in their home, be team buddies in the workplace, or play games, such as chess, with children11 (see Figure 3). So far we have focused on giving our agents the ability to model and express affective states. However, a companion must be able to respond sensitively to its human interaction partners, so we need to add the ability for characters to track the user's face, detect smiles, and recognize simple gestures. In addition, the companion must also be able to interpret these smiles and gestures correctly, which remains a significant challenge. Our vision is an agent that is able to register the user's affective state so that it can respond with appropriate actions and expressive behaviour, so forming an affective loop.12
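The affective loop described above can be sketched as a sense-interpret-respond cycle. The code below is a deliberately crude illustration under stated assumptions: the sensor signals, affect labels, and response behaviours are all hypothetical placeholders for what, in practice, are hard perception and interpretation problems.

```python
# A sketch of the affective loop: the companion senses user signals,
# infers an affective state, and responds with an action plus expressive
# behaviour, which in turn shapes the user's next signals. All sensor
# and behaviour names are hypothetical.

def infer_affect(signals):
    """Crude mapping from observed signals to a user affect label."""
    if signals.get("smiling"):
        return "positive"
    if signals.get("frowning") or signals.get("turned_away"):
        return "negative"
    return "neutral"

def respond(affect):
    """Choose an action and an expressive behaviour for the user's state."""
    responses = {
        "positive": ("continue activity", "smile back"),
        "negative": ("offer help", "concerned expression"),
        "neutral": ("suggest a game", "friendly glance"),
    }
    return responses[affect]

# One pass around the loop: perceive, interpret, act, express.
signals = {"smiling": False, "frowning": True}
action, expression = respond(infer_affect(signals))
print(action, "|", expression)  # offer help | concerned expression
```

The real difficulty, as noted above, lies in `infer_affect`: detecting a smile is tractable, but interpreting what it means in context remains a significant challenge.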


Figure 3. An intelligent robot is part of a project to create long-term companions.




Author

Ruth Aylett
Heriot-Watt University

Ruth Aylett is a professor of computer science at Heriot-Watt University, where she runs a lab in Intelligent Autonomous Characters, covering both graphical characters and robots. She works in the overlap of artificial intelligence and graphics and researches affective computing, interactive narrative, and human-robot interaction. She is involved in a number of large research projects in these areas and has published widely in books, journals, and conferences.


References
  1. J. Cassell, Embodied Conversational Agents: Representation and Intelligence in User Interfaces, AI Magazine 22 (4), 2001.

  2. C. Breazeal, Designing Sociable Robots, MIT Press, 2002.

  3. K. Dautenhahn, Socially intelligent robots: Dimensions of human-robot interaction, Philosophical Trans. of the Royal Soc. B: Biological Sciences 362 (1480), pp. 679-704, 2007.

  4. R.S. Aylett, S. Louchart, J. Dias, A. Paiva, M. Vala, S. Woods and L. Hall, Unscripted narrative for affectively driven characters, IEEE Computer Graphics and Applications 26 (3), pp. 42-52, May/June 2006.

  5. A. Ortony, G. Clore and A. Collins, The cognitive structure of emotions, Cambridge University Press, Cambridge, UK, 1988.

  6. R. Lazarus, Emotion and Adaptation, Oxford University Press, New York, 1991.

  7. R. Aylett, J. Dias and A. Paiva, An affectively driven planner for synthetic characters, Int'l Conf. on Automated Planning and Scheduling (ICAPS2006), pp. 2-10, AAAI Press, 2006.

  8. M. Sapouna, D. Wolke, N. Vannini, S. Watson, S. Woods, W. Schneider, S. Enz, L. Hall, A. Paiva, E. Andre, K. Dautenhahn and R. Aylett, Virtual learning intervention to reduce bullying victimization in primary school: a controlled trial, J. Child Psychology and Psychiatry 51 (1), pp. 104-112, 2010.

  9. R. Aylett, N. Vannini, E. André, A. Paiva, S. Enz and L. Hall, But that was in another country: agents and intercultural empathy, 8th Int'l Conf. Autonomous Agents and Multiagent Systems, Budapest, Hungary, Int'l Foundation for Autonomous Agents and Multiagent Systems, pp. 305-312, 2009.

  10. http://ecute.eu. Website for eCUTE: Education in Cultural Understanding, Technologically Enhanced. Accessed 16 November 2011.

  11. G. Castellano, R. Aylett, K. Dautenhahn, A. Paiva, P.W. McOwan and W.C. Ho, Long-term affect for sensitive and socially interactive companions, 4th Int'l Workshop on Human-Computer Conversation, Bellagio, Italy 6-7 October, 2008.

  12. P. Sundström, Exploring the affective loop, Stockholm University, Stockholm, 2005.


 
DOI:  10.2417/3201111.003949