Situational Awareness
Smart environments for Alzheimer's patients

As the number of people living with cognitive impairments grows, assistive environments capable of caring for them are increasingly in demand, with two main purposes. The first is to detect and recognize potentially dangerous situations and issue timely alarms. The second is to recognize and annotate other relevant facts used to monitor disease progression (e.g., abnormal behaviour that may not pose a threat to safety, but still needs to be reported to the clinician to give a clear and comprehensive picture of the patient's condition). However, developing this type of situation awareness requires the environment to analyse the behaviour of patients with cognitive impairments, which is extremely challenging. Traditional behavioural and situation analysis techniques assume that people behave 'rationally' (i.e., that their actions are driven by clear goals) and rely on matching the observed activities against predefined behavioural patterns.1 These assumptions rarely hold for patients whose cognitive impairment makes their actions erratic or apparently goal-less.

Data collected from sensors can be processed in a smart environment (such as that shown in Figure 1) at different semantic levels. As reported in Ye et al.,1 a first distinction can be drawn between context-awareness and situation-awareness. More specifically, the literature proposes defining 'primary context' as the full set of data captured by real or virtual sensors, while 'secondary context' is the information inferred and/or derived from multiple data streams (i.e., from primary contexts). An important example of a secondary context is the set of activities performed in the environment. Finally, a 'situation' can be defined as the abstract state of affairs of interest to applications or designers, derived from the observed context and from hypotheses about how it relates to factors of interest. Situation-awareness incorporates a rich web of temporal and other structural aspects. For example, a situation might only happen at particular hours of the day (time-of-day); it may only last a specified amount of time (duration) or recur a certain number of times a week (frequency); and different situations may occur in a fixed order (sequence).

Figure 1. A situation-aware environment for cognitively impaired people. HCI: Human-computer interface.

To clarify this point, consider the example of wandering, a typical anomalous behaviour of Alzheimer's patients. A simple first-level context analysis, such as processing data from accelerometers or motion sensors, may suggest to the system that the patient is moving. Further inferences, drawn from motion sensor data combined with information from a location service, might then allow the system to deduce that the patient is walking, rather than simply moving. However, simple reasoning on the primary context will not enable the system to recognize a situation of wandering. Such abnormal behaviour can only be detected if the system verifies whether specific hypotheses, relating to the frequency, duration and location of walking, are satisfied.
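To make this layering concrete, the following is a minimal Python sketch of how raw motion and location readings (primary context) might be grouped into walking episodes (secondary context) and then tested against frequency, duration and time-of-day hypotheses to flag a possible wandering situation. The thresholds, helper names and location labels are purely illustrative assumptions, not part of the system described here.

```python
from dataclasses import dataclass
from typing import List

# Primary context: one raw reading from a motion sensor plus a location fix.
@dataclass
class Reading:
    timestamp: float   # seconds since midnight
    moving: bool       # motion/accelerometer sensor reports movement
    location: str      # e.g. 'bedroom', 'corridor', 'kitchen'

def walking_episodes(readings: List[Reading], min_len: int = 3) -> List[List[Reading]]:
    """Secondary context: group consecutive 'moving' readings into walking episodes."""
    episodes, current = [], []
    for r in readings:
        if r.moving:
            current.append(r)
        else:
            if len(current) >= min_len:
                episodes.append(current)
            current = []
    if len(current) >= min_len:
        episodes.append(current)
    return episodes

def is_wandering(readings: List[Reading],
                 max_episodes: int = 5,           # frequency hypothesis
                 max_duration: float = 15 * 60,   # duration hypothesis (seconds)
                 night_start: float = 23 * 3600,  # time-of-day hypothesis
                 night_end: float = 6 * 3600) -> bool:
    """Situation: flag wandering when walking is too frequent, lasts too long,
    or happens at night away from the bedroom."""
    episodes = walking_episodes(readings)
    if len(episodes) > max_episodes:
        return True
    for ep in episodes:
        duration = ep[-1].timestamp - ep[0].timestamp
        at_night = ep[0].timestamp >= night_start or ep[0].timestamp <= night_end
        away_from_bed = any(r.location != 'bedroom' for r in ep)
        if duration > max_duration or (at_night and away_from_bed):
            return True
    return False
```

The point of the sketch is that the wandering decision depends on hypotheses evaluated over whole episodes, not on any single sensor reading.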
The situation calculus (SC), first introduced by John McCarthy,2 provides a formalism for modelling dynamically changing worlds. The basic elements of the SC are actions, fluents and situations. Actions can be performed in the world and can be quantified over in the logic. Fluents describe the state of the world: they are predicates and functions whose values may change from situation to situation. Situations represent histories of action occurrences.

A dynamic world is modelled as a series of situations resulting from the various actions performed within the world. The constant S0 denotes the initial situation, while do(a, S) denotes the situation that results from performing action a in situation S. The dynamic world is axiomatized mainly by adding initial world axioms, effect axioms and successor state axioms.3 The initial world axioms describe the starting state of the environment: its objects, their positions, their properties, and so forth. Effect axioms describe the effect upon a fluent of performing an action in a given situation. It is also necessary to specify, for each fluent, the non-effects of all other actions. Intuitively, a fluent is true after executing action a if that action has the effect of making it true, or if the fluent was already true before executing a and the action does not have the effect of making it false. The successor state axiom for each fluent formalizes exactly this condition.

Our approach for a cognitive assistive environment involves defining a basic action theory to represent and identify dangerous or anomalous situations.4,5 As an example, a fluent such as isDangerousHeating may be defined to detect a dangerous situation arising from an attempt to warm something with a heater. This situation arises whenever an inflammable object is in contact with a functioning heater, either because the patient switches on the heater after placing the inflammable object on or inside it, or vice versa. According to the corresponding axiom, the situation remains dangerous until the device is switched off or the object is removed. (One possible encoding of such an axiom is sketched at the end of this article.)

Another important feature of the SC is its capability for goal-setting and planning. The planning problem is expressed as: 'starting from an axiomatized initial situation S0, and given a goal G, find a sequence of actions S such that G(S) is true.' In this way, an intelligent agent can identify a sequence of executable actions that recovers the system (patient) from an unsafe situation and leads it to a safe one. For example, when the fluent isDangerousHeating becomes true, the agent could be driven by the goal 'make isDangerousHeating false' to find a sequence of actions that changes the fluent's truth value.

In summary, we have focused on the dangerous situations and abnormal behaviours associated with cognitively impaired patients in monitored environments. Handling anomalous behaviour comprises four main tasks: detection, identification, recovery and prevention. Our approach can detect dangerous situations and interpret abnormal behaviours that would be difficult to handle with traditional analysis methods. Fluents such as isAnomalous, isAbnormal and isDangerous can be specified and used to detect criticalities. Identification may be partially automated by analysing the sequence of actions leading to the current situation, and recovery can be accomplished by intelligent agents capable of computing goals. Finally, we hope to extend our work to prevention, which would require the ability to predict a future situation from the current one. For this, we envisage using a probabilistic situation calculus.
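As a concrete illustration (a minimal sketch under assumed action and fluent names such as switch_on, place_on, heater_on and object_on_heater, not the formalization used in the cited work), the following Python fragment encodes situations as action histories, gives isDangerousHeating a successor-state-style update rule matching the description above, and uses a tiny breadth-first search as a stand-in planner for the goal 'make isDangerousHeating false'.

```python
from collections import deque

# A situation is the history of actions performed since the initial situation S0.
S0 = ()

def do(action, situation):
    """do(a, S): the situation reached by performing action a in situation S."""
    return situation + (action,)

# Hypothetical actions for a single heater and a single inflammable object.
SWITCH_ON, SWITCH_OFF, PLACE_ON, REMOVE_FROM = 'switch_on', 'switch_off', 'place_on', 'remove_from'
ACTIONS = (SWITCH_ON, SWITCH_OFF, PLACE_ON, REMOVE_FROM)

def heater_on(s):
    """Fluent: the heater is on in situation s (successor-state-style recursion)."""
    if not s:
        return False  # assumed false in the initial situation
    *rest, a = s
    return a == SWITCH_ON or (heater_on(tuple(rest)) and a != SWITCH_OFF)

def object_on_heater(s):
    """Fluent: the inflammable object is on/inside the heater in situation s."""
    if not s:
        return False  # assumed false in the initial situation
    *rest, a = s
    return a == PLACE_ON or (object_on_heater(tuple(rest)) and a != REMOVE_FROM)

def is_dangerous_heating(s):
    """The situation is dangerous while the heater is on with the object on or in it."""
    return heater_on(s) and object_on_heater(s)

def plan_to_safety(s, max_depth=3):
    """Toy planner: breadth-first search for an action sequence that achieves the
    goal 'isDangerousHeating is false'."""
    queue = deque([(s, [])])
    while queue:
        situation, plan = queue.popleft()
        if not is_dangerous_heating(situation):
            return plan
        if len(plan) < max_depth:
            for a in ACTIONS:
                queue.append((do(a, situation), plan + [a]))
    return None

# The patient places the object on the heater, then switches the heater on.
s = do(SWITCH_ON, do(PLACE_ON, S0))
assert is_dangerous_heating(s)
print(plan_to_safety(s))  # -> ['switch_off']
```

Running the example, in which the patient places an inflammable object on the heater and then switches it on, yields a one-step recovery plan such as ['switch_off'].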
References