An outlook for self-awareness in computing systems

As computing systems continue to advance, they are increasingly composed of large numbers of different types of subsystems, each with its own local perspective and goals, connected in changing network topologies. As a result, having humans understand and manage these systems is becoming increasingly infeasible. Future computing systems, from robots to personal music devices to web services, should be able to achieve advanced levels of autonomous behaviour, adapting themselves at runtime and learning behaviours appropriate to changing conditions. Nevertheless, users engaging with different parts of the system still expect high performance, reliability, security and other qualities. Such systems will therefore face the challenge of managing trade-offs between these conflicting goals at runtime, at both the global and the local level, in response to changing conditions.

For a system to adapt itself effectively, it is important that it be self-aware. Self-awareness is concerned with the availability, collection and representation of knowledge about something, by that something. A self-aware node has knowledge of itself, permitting the reasoning and intelligent decision making needed to support effective autonomous adaptive behaviour. While some work in self-aware computing exists, there is no general methodology for engineering self-aware systems or for validating and benchmarking their behaviour. In the EPiCS (engineering proprioception in computing systems) project,1 we aim to address this gap.

The study of self-awareness emerged as a field within psychology in the 1960s. Morin2 defines self-awareness as "the capacity to become the object of one's own attention." As a prerequisite, an organism must be able to monitor or observe itself. Our work in the EPiCS project takes inspiration from self-awareness theory to design methodologies for engineering, validating and benchmarking self-aware systems. In a recent paper,3 we reviewed key concepts from the self-awareness literature that are relevant to realizing self-awareness in computing systems. The first of these is the distinction between public and private self-awareness,4 which concern knowledge of phenomena external and internal to the individual, respectively. A second is the theory of levels of self-awareness,5 ranging from basic awareness of environmental stimuli, through an awareness of interactions and time, to an awareness of one's own thoughts. Advanced organisms also engage in meta-self-awareness,6 an awareness that they themselves are self-aware. A further concept is that self-awareness can be an emergent property of a collective system, even when no single component has a global awareness of the whole.7 This is a key observation for the design of self-aware systems, because it implies that a self-aware system need not possess a globally omniscient node.

There are several clusters of research in computer science and engineering that have explicitly used the term self-awareness.3 However, there is no common framework for describing or benchmarking the self-awareness properties of these systems, or the benefits that self-awareness brings. One cluster treats self-awareness as part of metacognition, and Cox8 suggests that this is tantamount to the algorithm selection problem: choosing the most efficient algorithm from a set of possibilities. We generalize this idea to decisions with multiple objectives, representative of the conflicting goals of adaptive nodes in dynamic, heterogeneous environments.
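To make this generalization concrete, the following sketch shows a node choosing among candidate algorithms by re-weighting several objectives at runtime. It is a hypothetical illustration only: the candidate algorithms, objectives, scores and weights are invented for the example and are not taken from EPiCS, where such estimates would be learnt online rather than fixed by hand.

```python
"""Minimal sketch of multi-objective algorithm selection at runtime.

Hypothetical illustration: the candidate algorithms, objectives, scores
and weights below are invented; in a self-aware node the score estimates
would be learnt online rather than fixed by hand.
"""

def select_algorithm(candidates, weights):
    """Return the candidate whose weighted objective score is highest.

    candidates: algorithm name -> {objective: estimated score, higher is better}
    weights:    objective -> importance under the node's current conditions
    """
    def score(estimates):
        return sum(weights[obj] * value for obj, value in estimates.items())
    return max(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "exhaustive_search": {"accuracy": 0.95, "energy_efficiency": 0.20},
    "greedy_heuristic":  {"accuracy": 0.70, "energy_efficiency": 0.90},
}

# On mains power the node weights accuracy highly...
print(select_algorithm(candidates, {"accuracy": 0.8, "energy_efficiency": 0.2}))
# ...but on a low battery it shifts its weights at runtime and the
# selection changes, with no change to the candidates themselves.
print(select_algorithm(candidates, {"accuracy": 0.2, "energy_efficiency": 0.8}))
```

The point of the separation is that adaptation happens in the weights, not the code: as conditions change, the node revises its priorities and a different trade-off between the conflicting objectives is struck.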
At a more fundamental level, some studies focus on how to engineer systems that explicitly consider knowledge about themselves. For example, Agarwal et al.9 argue that, through self-awareness, the need to consider the availability of resources and constraints at design time can be avoided or reduced. However, this point is discussed only in a specific application context; our aim is to generalize the idea.

Indeed, we have demonstrated this effect in recent work on distributed smart cameras.10 Here, nodes are cameras that need to track objects moving through their fields of view, while making intelligent decisions at runtime about which cameras to communicate with and how to exchange tracking responsibilities. Using a market mechanism and a pheromone-inspired learning approach, the outcome is an efficient balance of the trade-off between tracking performance and communication overhead. Unlike previous approaches to this handover problem, the cameras require no a priori knowledge of their environment or of the camera neighbourhood structure; this is all learnt online (see Figure 1, and the sketch at the end of this article).

Figure 1. Illustrations of distributed smart camera network scenarios, showing the initial state of the system, where no neighbourhood information is present (left), and the learnt neighbourhood structure after some time (right). Each camera is represented by a green circle, with its field of view indicated by the associated triangle. As the system operates, tracking objects (black dots), the cameras learn over time, through interactions, which other cameras have adjacent fields of view. Red lines indicate links in the vision neighbourhood graph.

Techniques such as this provide options from which meta-self-aware nodes can select. As part of the methodological work undertaken in EPiCS, we are studying how such learning techniques can be combined and chosen at runtime, and how to compare these decision processes. In addition to smart cameras, we are also studying this approach in two quite different case studies: heterogeneous computer clusters and interactive mobile media devices.

Looking forward, it is clear that there is still much to understand about how to incorporate self-awareness properties into computing systems. How can nodes learn and adapt to changing conditions at runtime, while still considering trade-offs between system goals and the overheads associated with learning about themselves? How should we characterize system-level behaviour that emerges from the interactions of self-aware nodes that have only local information? What can we say about expectations or guarantees of this behaviour? If we are to improve the adaptability of systems through self-awareness, how should we measure that adaptability, and how should we describe assumptions about the scenarios and dynamics we have considered? Is it possible to generalize these types of claims?

At the conceptual level, our current work is concerned with establishing methodologies for engineering, validating and benchmarking self-aware systems. However, realizing self-awareness in computing systems will require contributions from many disciplines, including psychology, philosophy, economics, complexity science, artificial and computational intelligence, and electronic and software engineering.
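Finally, the sketch promised above illustrates the flavour of the pheromone-inspired neighbourhood learning used in the smart camera case study. It is a deliberately simplified, hypothetical rendering: the deposit and evaporation constants, the exploration rate and the broadcast fallback are invented for illustration, and the market (auction) mechanism that governs the actual exchange of tracking responsibilities is omitted.

```python
"""Minimal sketch of pheromone-inspired neighbourhood learning.

Hypothetical illustration: the constants and the broadcast fallback are
invented, and the market (auction) used in the real system to exchange
tracking responsibilities is omitted.
"""
import random

DEPOSIT = 1.0       # pheromone added when a handover to a camera succeeds
EVAPORATION = 0.9   # fraction of pheromone kept per time step
PRUNE_BELOW = 0.01  # links weaker than this are forgotten entirely

class Camera:
    def __init__(self, name):
        self.name = name
        self.pheromone = {}  # other camera -> learnt link strength

    def handover_targets(self, all_cameras, explore=0.1):
        """Ask strong learnt neighbours first; occasionally broadcast to
        all cameras so that new neighbours can still be discovered."""
        if not self.pheromone or random.random() < explore:
            return [cam for cam in all_cameras if cam is not self]
        return sorted(self.pheromone, key=self.pheromone.get, reverse=True)

    def reinforce(self, neighbour):
        """A successful handover deposits pheromone on the link used."""
        self.pheromone[neighbour] = self.pheromone.get(neighbour, 0.0) + DEPOSIT

    def evaporate(self):
        """Unused links decay, so stale neighbourhood knowledge fades away."""
        self.pheromone = {cam: level * EVAPORATION
                          for cam, level in self.pheromone.items()
                          if level * EVAPORATION > PRUNE_BELOW}

# Toy run: camera a repeatedly hands over to b, never to c, so only the
# a->b link survives; compare the learnt graph in Figure 1 (right).
a, b, c = Camera("a"), Camera("b"), Camera("c")
for _ in range(10):
    a.reinforce(b)   # in the real system, triggered by a won auction
    a.evaporate()
print([cam.name for cam in a.handover_targets([a, b, c], explore=0.0)])  # ['b']
```

Because links both strengthen with successful handovers and decay when unused, the learnt neighbourhood graph tracks the current topology: if a camera is moved, its stale links evaporate and new ones form, without any a priori configuration, as in Figure 1.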