
Self-organizing trusted communities of trust-adaptive agents


Yvonne Bernard, Lukas Klejnowski, Jörg Hähner, and Christian Müller-Schloer

16 April 2012

Self-organizing trust-based mechanisms improve the efficiency and robustness of self-adaptive self-organizing systems.

In order to cope with the growing complexity of today's computing systems, the Organic Computing (OC) initiative1 has introduced a broad range of architectural paradigms and self-organization mechanisms. The development of organic computing, and of self-adaptive and self-organizing (SASO) systems in general, promotes more open environments and an increasing heterogeneity of participating entities.

Our aim is to create provisions that help develop trustworthy systems, even in safety- or mission-critical environments. One means to achieve this is to control emergent behaviour, which is one of the main characteristics of organic computing systems. Such behaviour is at times detrimental to a system, and so has to be limited by analysing, verifying and restraining the interactions between agents at design time or runtime, and by fostering cooperative behaviour. In our work we use Trusted Communities and trust-adaptive agents to channel emergent behaviour in a positive way and thus ensure the efficiency and robustness of a system.


Figure 1. A hierarchy of agents ordered according to their awareness of environmental knowledge and solution quality. Trust-neglecting agents use a standard grid strategy based on workload (WL). Static trust-considering agents base decisions on their knowledge of the current workload, trust and reputation values of other agents (Rep). Trust-adaptive agents choose between different pre-configured behaviour parameters based on a short-term situation description (SD.S). Trust-strategic agents are able to predict future values based on a long-term situation description (SD.L).

The techniques developed in the course of our project are demonstrated with three case studies. For example, the Trusted Desktop Grid is a multi-agent approach to a Desktop Grid and Volunteer Computing system (DGVCS)2 in which agents act on behalf of the users. It is a distributed system without central control. Grid systems are exposed to threats from clients that aim to exploit or damage the system. By extending each user client with an agent component and modelling the relations between the agents with a trust mechanism, we expect to counter these threats and thus increase the robustness and efficiency of such a system. In our Trusted Desktop Grid, observations of the other agents' behaviour are recorded in a reputation management system. Using this system, an agent can choose to cooperate only with those partners with which it has already had good experiences, which improves its expected outcome. Currently, an agent will also ask formerly non-cooperative agents to cooperate if it realises that its workload would become too high were it to work only with the most trustworthy partners. It does this by continuously adapting its cooperation trust thresholds to the current situation.3 This trust-threshold adaptation is our first, implicit approach to forgiveness. Forgiveness4 in general, however, is a more far-reaching prosocial concept that can enrich trust-based algorithms with mechanisms to improve cooperation. We will therefore investigate how to use the concept of forgiveness to re-integrate reformed, formerly egoistic agents.
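The following Python sketch illustrates the idea of this threshold adaptation. It is a simplified assumption rather than our actual implementation: the class and method names, the reputation scale in [0, 1] and all numeric parameters are illustrative choices only.

```python
# Illustrative sketch of trust-threshold adaptation (not the Trusted Desktop
# Grid implementation). Reputation values are assumed to lie in [0, 1].

class TrustAdaptiveSubmitter:
    def __init__(self, reputation, threshold=0.8, step=0.05, floor=0.3):
        self.reputation = reputation   # agent id -> reputation in [0, 1]
        self.threshold = threshold     # current cooperation threshold
        self.ceiling = threshold       # never exceed the initial strictness
        self.step = step
        self.floor = floor             # never cooperate completely blindly

    def eligible_workers(self):
        """Partners currently considered trustworthy enough to receive work."""
        return [a for a, rep in self.reputation.items() if rep >= self.threshold]

    def adapt(self, pending_units, units_per_worker=1):
        """Lower the threshold when the trusted pool cannot absorb the current
        workload (an implicit form of forgiveness); raise it again otherwise."""
        capacity = len(self.eligible_workers()) * units_per_worker
        if pending_units > capacity:
            self.threshold = max(self.floor, self.threshold - self.step)
        elif pending_units < capacity:
            self.threshold = min(self.ceiling, self.threshold + self.step)
        return self.eligible_workers()


# Example: with a backlog of 5 work units, one adaptation step lowers the
# threshold from 0.8 to 0.75, so a second (slightly less reputable) worker
# becomes an acceptable cooperation partner.
submitter = TrustAdaptiveSubmitter({"a": 0.9, "b": 0.75, "c": 0.5, "d": 0.2})
print(submitter.adapt(pending_units=5))   # ['a', 'b']
```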

Agents are adaptive because they continuously change their behaviour to that best suited to the current situation. Here, agent awareness is crucial: the more information agents are able to perceive and interpret, the better the quality of their adaptation decisions. Figure 1 shows a hierarchy of different agent types, ordered according to their awareness of environmental knowledge and the quality of their solutions. Trust-neglecting agents use a standard grid strategy based on workload (WL) alone and ignore trust and reputation values; we use them for reference purposes. Static trust-considering agents know the current workload as well as the trust and reputation values of other agents (Rep), so their chosen behaviour leads to better solutions. Trust-adaptive agents interpret the current workload, trust and reputation values of others to create a short-term situation description (SD.S) and choose between different pre-configured behaviour parameters based on it. This adaptation happens continuously and autonomously at runtime. In the future, our trust-adaptive agents will be able to learn which parameter configuration is best suited to a given situation. Finally, trust-strategic agents have additional information in the form of a long-term situation description (SD.L). The SD.L incorporates trend analyses of workload and reputation values, so it can be used to predict other agents' future behaviour or the next likely situation. By forecasting how a situation will develop, these agents are able to act proactively before it occurs.
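A compact Python sketch of this hierarchy is given below. The class structure and the contents of the situation descriptions are simplified assumptions for illustration only; the thresholds, profile names and overload heuristics are not taken from our agent implementation.

```python
# Simplified sketch of the agent hierarchy in Figure 1. Classes, thresholds
# and profile names are illustrative assumptions, not the project's code.

from dataclasses import dataclass


@dataclass
class ShortTermSituation:                      # SD.S: a snapshot of the present
    workload: float                            # own current workload (WL)
    reputation: dict                           # agent id -> current reputation (Rep)


@dataclass
class LongTermSituation(ShortTermSituation):   # SD.L: SD.S plus trend analyses
    workload_trend: float                      # e.g. slope over recent workload samples
    reputation_trend: dict                     # agent id -> reputation slope


class TrustNeglectingAgent:
    """Standard grid strategy: decisions depend on workload only."""
    def decide(self, workload):
        return "cooperate" if workload < 0.8 else "reject"


class StaticTrustAgent(TrustNeglectingAgent):
    """Additionally filters partners by a fixed reputation threshold."""
    def decide(self, workload, reputation, partner):
        if reputation.get(partner, 0.0) < 0.6:
            return "reject"
        return super().decide(workload)


class TrustAdaptiveAgent(StaticTrustAgent):
    """Switches between pre-configured parameter sets based on SD.S."""
    PROFILES = {"relaxed": 0.4, "normal": 0.6, "strict": 0.8}

    def decide(self, sd_s: ShortTermSituation, partner):
        if sd_s.workload > 0.8:
            profile = "relaxed"                # overloaded: accept more partners
        elif sd_s.workload > 0.4:
            profile = "normal"
        else:
            profile = "strict"                 # idle enough to insist on the best
        if sd_s.reputation.get(partner, 0.0) < self.PROFILES[profile]:
            return "reject"
        return "cooperate"


class TrustStrategicAgent(TrustAdaptiveAgent):
    """Uses the SD.L trends to act before a predicted overload occurs."""
    def decide(self, sd_l: LongTermSituation, partner):
        predicted_workload = sd_l.workload + sd_l.workload_trend
        sd_s = ShortTermSituation(workload=predicted_workload,
                                  reputation=sd_l.reputation)
        return super().decide(sd_s, partner)
```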

By extending the idea of trust-adaptive agents to the system level, we analyse agent organizations that are built bottom-up from these trust relations. These so-called Trusted Communities (TCs) are self-organizing groups composed of agents known to be trustworthy from previous interactions. This allows the members of a TC to omit safety overhead (such as work-unit replication), which makes them more efficient. In an implicit TC, the exclusion of malicious peers is an emergent effect of local interaction policies: each agent has its own view of the Trusted Community and acts on this local knowledge only, and is therefore able to adapt its behaviour autonomously to changing environmental conditions.3
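The efficiency gain can be illustrated with the following Python sketch, which is again an assumption for illustration rather than our implementation: work units handed to agents in the submitter's local TC view are processed once, while work handed to unknown or less-trusted agents is replicated so that the results can be cross-checked.

```python
# Illustrative sketch only: replication is skipped for members of the
# submitter's local Trusted Community view. The round-robin assignment and
# the replication factor of 3 are arbitrary assumptions.

def distribute(work_units, workers, local_tc, replication_factor=3):
    """Assign each work unit to a worker; replicate only outside the TC."""
    assignments = []                     # (work unit, worker) pairs, incl. replicas
    for i, unit in enumerate(work_units):
        worker = workers[i % len(workers)]
        if worker in local_tc:
            assignments.append((unit, worker))       # trusted: no safety overhead
        else:
            # Untrusted: send out several replicas so the results can be
            # validated against each other (e.g. by majority voting).
            for k in range(replication_factor):
                replica_worker = workers[(i + k) % len(workers)]
                assignments.append((unit, replica_worker))
    return assignments


# Example: "u1" goes to the TC member "a" once; "u2" is replicated to three
# workers because "b" is not in the local TC view.
print(distribute(["u1", "u2"], ["a", "b", "c"], local_tc={"a"}))
```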

In summary, combining Trusted Communities and trust-adaptive agents leads to more robust and efficient organic computing systems, and the approach can also be used to improve other SASO systems. Achieving the goals of the OC-Trust research unit will be an important step towards taking SASO systems out of research laboratories and into innovative software companies and, ultimately, into real-world applications. In the future, we plan to improve both the agent and the system level. At the agent level, we plan to build proactive, trust-strategic agents by including learning, trend analysis and prediction in their awareness; because such agents change their behaviour at runtime, we also need mechanisms to detect these changes. At the system level, we will introduce explicit TCs with a unique membership function and a Trusted Manager as a hierarchical component that observes and, if necessary, influences the behaviour of the TC members.




Authors

Yvonne Bernard
Leibniz Universität Hannover - SRA

Yvonne Bernard received an MSc in computer science from the Leibniz Universität Hannover, Hannover, Germany in 2008, where she is currently pursuing her PhD in computer science. Her research interests are in the areas of self-organization paradigms, emergence, trustworthiness of systems and multi-agent systems.

Lukas Klejnowski
Leibniz Universität Hannover - SRA

Lukas Klejnowski received his MSc in computer science in 2009 from the Leibniz Universität Hannover, Germany, where he is currently pursuing his PhD at the Institute of Systems Engineering–System and Computer Architecture. He is with the Organic Computing Group, and his research focuses on architectures and algorithms in the field of distributed systems, desktop grids and multi-agent systems.

Jörg Hähner
Leibniz Universität Hannover - SRA

Jörg Hähner received an MSc in computer science from the Darmstadt University of Technology, Darmstadt, Germany, in 2001 and the Dr. rer. nat. in computer science from the Universität Stuttgart, Stuttgart, Germany, in 2006. He worked in the area of data management in mobile ad-hoc networks (MANETs) and, in 2006, was appointed assistant professor in the System and Computer Architecture Group at Leibniz Universität Hannover, Hannover, Germany. His research focuses on architectures and algorithms in the field of organic computing (e.g., distributed smart camera systems, mobile ad-hoc and sensor networks and global scale peer-to-peer systems). In 2012, he was appointed full professor of Organic Computing at Augsburg University, Augsburg, Germany.

Christian Müller-Schloer
Leibniz Universität Hannover - SRA

Christian Müller-Schloer received his diploma and PhD in semiconductor physics from the Technical University of Munich, Munich, Germany, in 1975 and 1977, respectively. In 1977, he joined Siemens Corporate Technology, where he worked in a variety of research fields, among them computer-aided design (CAD) for communication systems, cryptography, simulation accelerators and reduced instruction set computing (RISC) architectures. From 1980 to 1982, he was a member of the Siemens Research Labs in Princeton, NJ. In 1991, he was appointed full professor of Computer Architecture and Operating Systems at the University of Hannover, Hannover, Germany. His institute, later renamed the Institute of Systems Engineering–System and Computer Architecture, has been engaged in systems-level research, including system design and simulation, embedded systems, virtual prototyping, educational technology and, since 2001, adaptive and self-organizing systems. He is the author of more than 110 papers and several books. His current projects, predominantly in the area of organic computing, deal with quantitative emergence and self-organization, organic traffic control, self-organizing smart camera systems and ontology-based self-organizing embedded systems. Currently, he is with the Leibniz Universität Hannover, Hannover, Germany. He is one of the founders of the German Organic Computing initiative, which was launched in 2003 with the support of GI and ITG, the two key professional societies for computer science in Germany. In 2005, he co-initiated the Special Priority Programme on Organic Computing of the German Research Foundation (DFG).


References
  1. Website for the Organic Computing Initiative: http://www.organic-computing.com/

  2. S. Choi, H. Kim, E. Byun, M. Baik, S. Kim, C. Park and C. Hwang, Characterizing and classifying Desktop Grid, Proc. 7th IEEE Int'l Symp. on Cluster Computing and the Grid (CCGrid '07), 2007.

  3. Y. Bernard, L. Klejnowski, J. Hähner and C. Müller-Schloer, Efficiency and robustness using Trusted Communities in a Trusted Desktop Grid, Proc. 2011 4th IEEE Int'l Conf. on Self-Adaptive and Self-Organizing Systems Workshops (SASOW), IEEE Computer Society Press, 2011.

  4. A. Vasalou, A. Hopfensitz and J. V. Pitt, In praise of forgiveness: Ways for repairing trust breakdowns in one-off online interactions, Int. J. Hum.-Comput. Stud. 66, June 2008.


 
DOI:  10.2417/3201203.004065
