Situational Awareness
Self-organizing trusted communities of trust-adaptive agents

To cope with the growing complexity of today's computing systems, the Organic Computing (OC) initiative1 introduced a broad range of architecture paradigms and self-organization mechanisms. The development of organic computing, and of self-adaptive and self-organizing (SASO) systems in general, promotes more open environments and an increasing heterogeneity of participating entities. Our aim is to create provisions that help develop trustworthy systems, even in safety- or mission-critical environments. One means to achieve this is to control emergent behaviour.

The exhibition of emergent behaviour is one of the main characteristics of organic computing systems. However, such behaviour is at times detrimental to a system, and so has to be limited by analysing, verifying and restraining the interactions between agents at design time or runtime, while fostering cooperative behaviour. In our work we use Trusted Communities and trust-adaptive agents to channel emergent behaviour in a positive way, ensuring the efficiency and robustness of a system.

Figure 1. A hierarchy of agents ordered according to their awareness of environmental knowledge and solution quality. Trust-neglecting agents use a standard grid strategy based on workload (WL). Static trust-considering agents base decisions on their knowledge of the current workload and the trust and reputation values of other agents (Rep). Trust-adaptive agents choose between different pre-configured behaviour parameters based on a short-term situation description (SD.S). Trust-strategic agents are able to predict future values based on a long-term situation description (SD.L).

The techniques developed in the course of our project are demonstrated with three case studies. For example, the Trusted Desktop Grid is a multi-agent approach to a Desktop Grid and Volunteer Computing system (DGVCS)2 in which agents act on behalf of the users.
It is a distributed system without central control. Grid systems are exposed to threats from clients that plan to exploit or damage the system. By extending each user client with an agent component and modelling the relations between the agents with a trust mechanism, we expect to counter these threats and thus increase the robustness and efficiency of such a system.

In our Trusted Desktop Grid, observations of the other agents' behaviour are recorded in a reputation management system. Using this system, an agent can choose to cooperate only with those partners with which it has already had good experiences, resulting in an improved expected outcome. Currently, formerly non-cooperative agents are asked to cooperate again when an agent realises that its workload would become too high if it worked only with the most trustworthy partners. This is achieved by continuously adapting cooperation trust thresholds to the current situation.3 This trust threshold adaptation is our first, implicit approach to forgiveness. However, forgiveness4 in general is a more far-reaching prosocial concept and can be used to enrich trust-based algorithms with mechanisms that improve cooperation. We will therefore investigate how to use the concept of forgiveness to re-integrate reformed, formerly egoistic agents.

Agents are adaptive because they continuously change their behaviour to that best suited to the current situation. Here, agent awareness is crucial: the more information agents are able to perceive and interpret, the better the quality of their adaptation solutions. Figure 1 shows a hierarchy of different agent types. Based on our analysis, the types are ordered according to their awareness of environmental knowledge and solution quality. Trust-neglecting agents use a standard grid strategy based on workload (WL) and do not use any trust or reputation values. We use them for reference purposes.
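The interplay of reputation recording and threshold adaptation can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the update rule, the initial reputation of 0.5 and the step size for relaxing the threshold are all assumptions.

```python
# Illustrative sketch of reputation-based partner selection with an
# adaptive cooperation trust threshold (all constants are assumptions).
from dataclasses import dataclass, field


@dataclass
class GridAgent:
    threshold: float = 0.6  # minimum reputation required to cooperate
    reputation: dict = field(default_factory=dict)  # peer id -> value in [0, 1]

    def record_experience(self, peer: str, good: bool, rate: float = 0.2) -> None:
        """Update a peer's reputation from one observed interaction."""
        old = self.reputation.get(peer, 0.5)  # unknown peers start neutral
        target = 1.0 if good else 0.0
        self.reputation[peer] = old + rate * (target - old)

    def partners(self) -> list:
        """Peers currently above the cooperation trust threshold."""
        return [p for p, r in self.reputation.items() if r >= self.threshold]

    def adapt_threshold(self, workload: float, capacity: float) -> None:
        """Implicit forgiveness: relax the threshold under overload so
        formerly non-cooperative peers can be asked to cooperate again."""
        if workload > capacity and self.threshold > 0.1:
            self.threshold -= 0.1
        elif workload < 0.5 * capacity and self.threshold < 0.9:
            self.threshold += 0.1


agent = GridAgent()
agent.record_experience("a", good=True)
agent.record_experience("a", good=True)
agent.record_experience("b", good=False)
print(agent.partners())  # only the well-behaved peer qualifies
agent.adapt_threshold(workload=10, capacity=5)  # overload relaxes the threshold
print(agent.threshold)
```

The key design point is that forgiveness emerges implicitly: no peer is excluded forever, because overload pressure lowers the bar for cooperation again.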
Static trust-considering agents know the current workload and the trust and reputation values of other agents (Rep). Their chosen behaviour therefore leads to better-quality solutions. The trust-adaptive type interprets the current workload and the trust and reputation values of others, creating a short-term situation description (SD.S). These agents are able to choose between different pre-configured behaviour parameters based on this SD.S. This adaptation happens continuously and autonomously at runtime. In the future, our trust-adaptive agent will be able to learn which parameter configuration is best suited to a given situation. Finally, the trust-strategic class has additional information based on a long-term situation description (SD.L). The SD.L incorporates trend analyses of workload and reputation values, so it can be used to predict other agents' future behaviour or the next likely situation. By forecasting future developments, these agents are able to act proactively before a situation occurs.

By extending the approach of trust-adaptive agents to the system level, we analyse agent organizations that are built bottom-up from these trust relations. These so-called Trusted Communities (TCs) are self-organizing groups composed of agents known to be trustworthy from previous interactions. This allows the members of a TC to omit safety overhead (such as work unit replication), which makes them more efficient. In an implicit TC, the exclusion of malicious peers is an emergent effect of local interaction policies. Each agent has its own view of the Trusted Community and acts based on this local knowledge only. It is therefore able to adapt its behaviour autonomously to changing environmental conditions.3 In summary, combining Trusted Communities and trust-adaptive agents leads to more robust and efficient organic computing systems, and the approach can also be used to further improve other SASO systems.
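The trust-adaptive step, mapping an SD.S to one of several pre-configured behaviour parameter sets, can be illustrated as follows. The profile names, situation features and matching rule below are invented for illustration and are not taken from the project.

```python
# Hedged sketch: a trust-adaptive agent selects a pre-configured behaviour
# profile from a short-term situation description (SD.S). All profiles,
# feature names and cut-off values here are illustrative assumptions.
from dataclasses import dataclass

# Pre-configured behaviour parameter sets the agent can switch between.
PROFILES = {
    "defensive":   {"replicate_work": True,  "accept_jobs": False},
    "cooperative": {"replicate_work": False, "accept_jobs": True},
    "selective":   {"replicate_work": True,  "accept_jobs": True},
}


@dataclass
class ShortTermSituation:    # a stand-in for the SD.S from the article
    workload: float          # own current load, normalised to [0, 1]
    mean_reputation: float   # average reputation of known peers


def choose_profile(sd: ShortTermSituation) -> str:
    """Map a short-term situation description to a behaviour profile."""
    if sd.mean_reputation < 0.3:
        return "defensive"    # hostile environment: replicate work, refuse jobs
    if sd.workload > 0.8:
        return "selective"    # overloaded, but trusted peers are available
    return "cooperative"      # calm situation among trustworthy peers


print(choose_profile(ShortTermSituation(workload=0.9, mean_reputation=0.7)))
```

A trust-strategic agent would extend this by learning the mapping instead of hard-coding it, and by feeding predicted rather than current values into the decision.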
Achieving the goals of the OC-Trust research unit will be an important step towards taking SASO systems out of research laboratories and into innovative software companies and, ultimately, into real-world applications.

In the future, we plan to improve both the agent and system levels. We plan to build proactive, trust-strategic agents by including learning, trend analysis and prediction in their awareness. As they change their behaviour at runtime, we need mechanisms to detect these changes. At the system level, we will introduce explicit TCs with a unique membership function and a Trusted Manager as a hierarchical component that observes and, if necessary, influences the behaviour of the TC members.
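The trend analysis envisioned for trust-strategic agents could, in its simplest form, extrapolate a peer's reputation history. The following sketch is an assumption on our part (a least-squares linear trend over a short history window), not the project's method:

```python
# Minimal sketch of an SD.L-style trend analysis: fit a linear trend to a
# peer's recent reputation values and extrapolate one step ahead. The
# least-squares fit and the clamping to [0, 1] are illustrative choices.
def predict_next(history: list) -> float:
    """Least-squares linear trend over the history, evaluated one step ahead."""
    n = len(history)
    if n < 2:
        return history[-1] if history else 0.5  # neutral default
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return max(0.0, min(1.0, intercept + slope * n))


# A peer whose reputation degrades steadily: a trust-strategic agent could
# react proactively (e.g. reintroduce replication) before trust is lost.
print(predict_next([0.9, 0.8, 0.7, 0.6]))
```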