
General Concepts of Agents and Multiagent Systems

In the following, I will outline the basic characteristics of intelligent agents and multiagent systems (MAS). I will start with an introduction to intelligent agents and then extend the single-agent case to systems with several intelligent agents. The basic concepts are presented from a very general point of view; the reader already familiar with MAS may safely skip this page. The main aspect of this introduction is its use of a very general notation scheme for the description of agents, in order to remain independent of any particular agent school.


Intelligent Agents or What's an Agent, anyway?

Answering this question thoroughly is far beyond the scope of this section, as there exist roughly as many definitions of the term as there are researchers in the field - or perhaps even more. Therefore, I will use a notion of agency that is widely accepted because it covers almost any of the more specific definitions and is therefore not suspected of being too biased. Basically, an agent is a software system that is situated in an environment and that operates in a continuous Perceive-Reason-Act (PRA) cycle. Thus, the agent receives some stimulus from the environment and processes this stimulus with its perceptual apparatus. Next, the agent starts a reasoning process that combines the newly incorporated information with the agent's existing knowledge and goals, and this then determines the possible actions of the agent. One of these possible actions is then selected and executed by the agent. The action changes the state of the environment, which in turn generates new perceptions for the next cycle.

One of the basic assumptions in the above description is that agents are situated in some environment. Let us denote this environment by S, a set of external states, without imposing any constraints on the structure of the elements in the set. Then we can describe an agent as a 7-tuple (D, T, A, perceive, infer, select, act), where D is a database that contains the agent's acquired knowledge, T is a set of partitions of the environment S that constitutes the possible perceptions of the agent, and A is the set of possible actions of the agent. The agent's behavior is then defined by the following four functions. The perceive: S -> T function determines how the state of the environment is perceived by the agent, i.e. it limits the information that is provided to a partial view of the complete state. The infer: D x T -> D function is used by the agent to update its internal knowledge base according to the newly received perceptions. The select: D x T -> A function is then used to determine the best action for the current cycle, and the act: A x S -> S function, finally, changes the state of the environment accordingly.
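The 7-tuple and the Perceive-Reason-Act cycle can be sketched in a few lines of code. The following Python fragment is only an illustrative rendering of the abstract model; all names and types are chosen for this example and are not taken from any particular agent framework.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A minimal sketch of the 7-tuple (D, T, A, perceive, infer, select, act).
# S (the environment), D (the knowledge base), T (percepts) and A (actions)
# are plain Python values here; the four functions carry the semantics.
@dataclass
class Agent:
    knowledge: dict                        # D
    perceive: Callable[[Any], Any]         # S -> T
    infer: Callable[[dict, Any], dict]     # D x T -> D
    select: Callable[[dict, Any], str]     # D x T -> A
    act: Callable[[str, Any], Any]         # A x S -> S

    def step(self, environment):
        """One Perceive-Reason-Act cycle; returns the new environment state."""
        percept = self.perceive(environment)
        self.knowledge = self.infer(self.knowledge, percept)
        action = self.select(self.knowledge, percept)
        return self.act(action, environment)
```

Repeatedly calling `step` on the returned environment yields the continuous PRA cycle described above.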

To illustrate the ideas that are captured in this rather simple model, consider an automated container terminal where a robot has the task of unloading incoming containers from trucks and storing them on shelves in the storage area. Using the abstract description scheme given above, the scenario is modeled as follows. The environment S of the agent (robot) is a grid world with labeled objects on grid locations, the possible actions A of the agent are pick_container, drive_to_location and drop_container, the robot's perception T is the content of the field in front of the robot and the knowledge base D of the agent, finally, contains the destination of each container that is delivered by a truck.

The Perceive-Reason-Act cycle of the agent is started when the perceive function of the agent detects the presence of newly arrived containers in front of the robot (assuming that the default waiting position of the robot is at the container ramp of the terminal). Then, the infer function determines that the only possible action is to pick up a container, which is consequently scheduled for execution by the select function and finally executed by the act function of the robot. As a result of this action, the state of the environment changes (because the robot is now holding a container) and thus the next PRA cycle is started, in which the robot will determine the destination of the container and bring it to the storage area.

In this example, the problem solving capabilities that are necessary in the problem domain are directly associated with the agent. This concept, however, has sometimes proven too restrictive, and it is becoming more accepted now that an intermediate concept that de-couples the agent from its associated problem solving capabilities adds clarity to the modeling process. This intermediate concept is called a role and it is discussed in the next section.


Roles

What is a role? Unfortunately, this question has no straightforward answer, as there exist several definitions of the "role" concept in the agent research community that differ mainly in their focus on generic properties. A very general definition is to view a role as a primary sociological concept that must be operationalized for the context of agent systems. Thus, what a role is cannot be defined in a general fashion; it depends on the context in which the concept is to be used. A more specific definition describes a role as "The functional or social part which an agent, embedded in a multiagent environment, plays in a (joint) process like problem solving, planning or learning...". Other authors limit the concept of a role to purely cognitive states that are defined by the knowledge, the permissions, the responsibilities and the assessment of the agent's current situative context.

Probably the best way to arrive at a broad definition that is still useful is to start in the field of sociology, as suggested above. There, the major characteristics of a role are given as follows:

  1. A role is a collection of expectations towards the behavior of the occupant of a particular position; these expectations allow the members of the society to predict the occupant's behavior and to plan accordingly.
  2. There exist mutual dependencies between roles, some roles can only exist if other roles do exist as well, for example the role of a "teacher" only makes sense if the corresponding role of (at least one) "pupil" exists as well.
  3. A member of a society can play several roles even at the same time. This property is called role multiplicity and can lead to so-called role conflicts.

A major problem in the field of sociology is the delimitation of the roles that occur within a society. Not every set of coherent behaviors can be regarded as a role; there must exist some special properties that make such a set a role. In developing agent applications, the system designer is faced with a similar problem in identifying coherent sets of behaviors that can be grouped together to form the roles that occur in the problem domain. However, I will postpone this problem to the Role view, where I will discuss my approach to solving it, and instead continue with the more abstract view on agents that we began with earlier in this chapter.

Formally speaking, the concept of a role is modeled as an extension of the agent's current knowledge, its possible actions and the perceive, infer, select and act functions. Thus, an agent that can play several roles from a set of roles R is described by the 7-tuple (D U D_r, T, A U A_r, perceive U perceive_r, infer U infer_r, select U select_r, act U act_r), with r denoting the role index in the set of roles R.

To illustrate these ideas, we add a second role to the robot within the container terminal scenario by extending the original perception function of the agent with the possibility of receiving external commands. The (human) area operator can now direct the robot to search for a particular container in the stock and to report the status and position of the container. Thus, we now have two possible roles for the agent, i.e. "carrier" and "verifier".
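The union-based role extension can be sketched in code. The following fragment assumes a hypothetical representation in which each role contributes its own knowledge D_r, actions A_r and select_r function, and the agent forms the unions at adoption time; the class and role names are invented for the example.

```python
# Each role contributes its own knowledge (D_r), actions (A_r) and a
# selection function (select_r); the agent's effective capabilities are
# the union over all adopted roles.
class Role:
    def __init__(self, name, knowledge, actions, select):
        self.name = name
        self.knowledge = knowledge    # D_r
        self.actions = actions        # A_r
        self.select = select          # select_r: D x T -> A

class RoleBasedAgent:
    def __init__(self, base_knowledge, base_actions):
        self.knowledge = dict(base_knowledge)   # D
        self.actions = set(base_actions)        # A
        self.roles = {}                         # adopted subset of R

    def adopt(self, role):
        """Adopting a role forms D u D_r and A u A_r."""
        self.roles[role.name] = role
        self.knowledge.update(role.knowledge)
        self.actions |= set(role.actions)

    def select(self, percept, active_role):
        """Dispatch action selection to the currently active role."""
        return self.roles[active_role].select(self.knowledge, percept)
```

For the terminal robot, a "carrier" and a "verifier" role would each be adopted once, after which the robot dispatches to whichever role is active in the current cycle.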


Agent Architectures

In order to show how the theoretical concepts are actually implemented in computer hardware and software, we need an intermediate layer of abstraction that is provided by agent architectures. Elsewhere, an agent architecture is defined "as the portion of a system that provides and manages the primitive resources of an agent". This definition, however, is still too general to be directly applicable and therefore, a two-step process is used to bring the conceptual abstractions down to an actual implementation.

The first level of abstraction is given by cognitive models that refine the basic abstractions into more specific concepts. One of the most prominent examples of a cognitive model is the family of BDI architectures that have gained much attention in the agent community in recent years. In BDI theory, an agent is described by its Beliefs, which constitute the agent's current world knowledge, its Desires, which determine the goals of the agent, and finally, its Intentions, which are generated from reasoning about the current beliefs and goals and thereby determine the best possible actions.
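The BDI deliberation step can be sketched schematically. The fragment below is not the API of any concrete BDI system; the parameter names revise, options and filter_ are placeholders for the belief revision, option generation and commitment steps of a generic BDI interpreter.

```python
# A schematic BDI deliberation cycle: beliefs are revised from the current
# percept, desires are filtered against the revised beliefs to obtain the
# achievable options, and the surviving options are committed to as intentions.
def bdi_cycle(beliefs, desires, percept, revise, options, filter_):
    beliefs = revise(beliefs, percept)         # belief revision
    candidates = options(beliefs, desires)     # achievable goals
    intentions = filter_(beliefs, candidates)  # commitment to intentions
    return beliefs, intentions
```

In a full interpreter this cycle would be followed by means-end reasoning that turns the intentions into concrete actions.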

But even these more concrete specifications of agents are still difficult to break down into operational concepts. Therefore, a second level of abstraction is necessary that describes how the abstract concepts of the first level are made executable on computer hardware. Mike Wooldridge suggests three possible means to achieve this goal. The first possibility is functional refinement, as is common in most standard Software Engineering environments; the second is direct execution of the specifications, which implies powerful description languages and runtime environments; and the third possibility, finally, is compilation of the abstract specification into executable code. All three of these methods are currently in use and none has proven to be better than any other.

Because of the huge impact of the decision for a particular agent architecture, the Architecture view will provide the reader with a characterization scheme that structures the requirements of the problem domain and supports the decision for or against a particular architecture. Therefore, I will not go into further detail here but instead discuss the connection between agents, roles and architectures in the next section.


Agents, Roles and Architectures

In the previous sections, I have introduced three fundamental aspects of intelligent agents. But how do these concepts relate to each other? Basically, the relation between agents, roles and architectures can be reduced to the following equation.

agent = roles + architecture

Thus, an "agent" is an abstract concept that is filled with concrete content by defining the possible roles of the agent and by providing a runtime environment that is capable of executing the given role models. The concept of an agent thus encompasses the architecture, which contains the perception and actuation subsystems as well as the role interpreter, which links the domain-independent architecture to the domain-specific aspects of the different roles by associating each role with particular tasks.

In the example of the container terminal, the hardware of the robot corresponds to the agent architecture that implements the runtime environment for the possible roles. The roles themselves are modeled as task trees, e.g. the "carrier" role has the subtasks of checking for incoming containers, determining the destination of each container and then taking each container to the indicated destination.
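Such a task tree might be encoded as follows. The task names are taken from the carrier example, while the Task class itself and its depth-first execution strategy are purely illustrative assumptions.

```python
# A task is either a primitive action (no subtasks) or a compound task
# whose subtasks are executed depth-first, in order.
class Task:
    def __init__(self, name, subtasks=None):
        self.name = name
        self.subtasks = subtasks or []

    def execute(self, log):
        if not self.subtasks:          # primitive task: record the action
            log.append(self.name)
        for sub in self.subtasks:      # compound task: run subtasks in order
            sub.execute(log)
        return log

# The "carrier" role as a task tree.
carrier = Task("carrier", [
    Task("check_for_incoming_containers"),
    Task("determine_destination"),
    Task("deliver", [Task("pick_container"),
                     Task("drive_to_location"),
                     Task("drop_container")]),
])
```

Executing the root task yields the linear sequence of primitive actions that the role interpreter would hand to the architecture.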

The above relation between the basic entities in an agent application has some implications for the development of agent (and later multiagent) applications because it determines the basic structure of each application in this class. A possible generic application architecture consists of three layers. The basic layer is the Platform Layer that hosts the target application. We may distinguish between the rather simple case where the entire application runs on the same hardware platform and the case of more complicated applications, where the platform may be spread over several hardware platforms and therefore may require an additional layer that provides an abstract interface to the individual platforms. The Agent Layer of the generic application architecture contains two major elements: the Agent Management System provides the interface between the agent architecture and the hardware platform, and the Agent Architecture implements the runtime environment for the domain-dependent roles of the agent. The roles themselves belong to the Domain Layer that covers the domain-specific aspects. Additionally, there may exist an interface between the agent management system and the domain layer, which is necessary whenever the system's functionality is not entirely covered by agents.


Systems of Agents

In a multiagent system, several intelligent agents exist within the same environment. The term "environment" is used here in a very broad sense and covers physical environments for robotic agents as well as runtime environments for software agents, virtual reality environments etc. To express the fact of a shared environment, a system of multiple agents is described by the set structure {S, (D, T, A, perceive, infer, select, act)_i }, where S denotes the environment just as before and each of the different agents that share the environment has a unique identifier i that distinguishes it from the other agents. Other forms of agent coupling than that via the environment have also been discussed by researchers. However, I will limit the focus to this particular form of coupling via the environment because it is predominant in the multiagent research community and none of the other forms has gained general acceptance. The main feature of a system that is comprised of several intelligent entities is that a major part of the system's functionality is not explicitly and globally specified, but that it emerges from the interaction between these individual entities. Therefore, interaction is the main aspect of multiagent systems.
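The set structure {S, (...)_i} can be sketched as a simple simulation loop in which the agents are coupled only through the shared environment. The round-robin scheduling (one full PRA step per agent per cycle) is an assumption of this sketch, not part of the formal model.

```python
# Agents with unique identifiers i are coupled only through the shared
# environment S: each agent's step function maps the current state of S
# to a new state, and no agent addresses another agent directly.
def run_mas(environment, agents, cycles):
    """agents: dict mapping identifier i -> step function (S -> S)."""
    for _ in range(cycles):
        for i, step in agents.items():
            environment = step(environment)   # agent i perceives and acts on S
    return environment
```

Any interaction between the agents in this sketch emerges from the traces each agent leaves in the environment, which is exactly the coupling discussed above.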


Interaction

Coordinated interaction among several autonomous entities is the core concept of multiagent technology. But first of all, what is interaction? To be as general as possible, let us define the concept as follows.

Definition[Interaction] Interaction is the mutual adaptation of the behavior of agents while preserving individual constraints.

This very general definition has some interesting properties that need further clarification. First of all, interaction is not limited to explicit communication or, more specifically, to message exchange, even though this is, of course, the predominant means in the multiagent literature. Interaction is defined as any kind of behavior that is related to other agents. The example of an ant hill illustrates the basic idea: the single ant does not reflect on the existence of other ants, but it still adapts its behavior to the behavior of other ants in such a way that the entire society shows coordinated interaction. The communication between the ants is carried out by several means, e.g. physical tactile behavior, chemical substances, vision and others.

The second important property of the above definition is the focus on mutual adaptation, i.e. the requirement that the participating agents coordinate their behavior. For example, a pedestrian who jumps out of the way of an approaching car and the driver of the respective car do not have any kind of interaction according to our definition. The pedestrian has unilaterally adapted his behavior in order to avoid a situation that would have yielded a worse payoff for him than for the driver of the car.

The third major focus of the above definition is the aspect of balancing between social behavior, which is manifested in the mutual adaptation, and the self-interest of the agent. Neither egoism nor altruism alone is the best means to achieve globally optimal system states; rather, a good combination of these two aspects of interaction can yield the best global results. It is therefore important to equip the agents within a multiagent system with a mix of self-interest and social consciousness that allows them to value the performance of the entire society over their individual performance.

Especially the second and third of the above properties contain the potential for conflicts within the agent society that must be resolved in one way or another.

Coordination in a (natural or artificial) society is the process of conflict resolution within the society and can be achieved in a number of ways. The most natural conflict resolution strategy that can be found in a physical environment is simply to do nothing. The laws of physics clearly define the outcome of actions that involve more than one agent. For example, two robots that approach the same location will be coordinated by the physical law that only one of them can occupy the particular location. Thus, either the first robot to reach the location or the stronger of the two robots will finally occupy it. Obviously, this is a somewhat artificial example of a coordination strategy and it is far from reasonable. Although it is straightforward and therefore easy to implement, it is usually a better idea to implement some sort of collision avoidance strategy, except for the case where you have extremely robust robots.

The second conflict resolution strategy uses external mediation to solve the conflict. Mediation means that the conflicting parties apply to a third, neutral party that decides what should be done. The most important prerequisite for this sort of conflict resolution is the mutual agreement of the agents to obey the decision of the mediator. The advantage of the mediation solution is that the decision about what to do is not based on the local preferences of the agents but on a more global view (depending on the knowledge of the mediator). However, this sort of conflict resolution is often not seen as "real" multiagent technology because the agents lose some of their autonomy by relying on an external mediator.

The third way of conflict resolution, finally, is negotiation. This is the approach mostly used in multiagent systems and it has proven to be a powerful tool for solving all kinds of conflict situations. In a conflict resolution process based on negotiation mechanisms, the agents exchange messages until they have reached an agreement on how the conflict is to be settled to their mutual benefit. Besides their ability to solve conflicts among agents, negotiation mechanisms are a good means of attacking complex optimization problems by simulating a market situation in which the agents negotiate in order to find a solution that optimizes the local performance of the agents as well as the global performance of the entire agent society. Coordinated interaction is thus the core concept of multiagent systems, and negotiation its most common form. Generic forms of interaction are discussed in the Interaction view and so I will not go into further detail here. Instead, we will now turn to the structural aspects of an agent society.
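As a concrete illustration of negotiation, the following toy example is loosely inspired by the Contract Net protocol (a drastic simplification, not the full protocol): a task is announced, each agent answers with a bid reflecting its local cost, and the cheapest bid wins. The agent names and cost functions are invented for the example.

```python
# A single-round, announce-bid-award negotiation: the announced task is
# awarded to the agent whose local cost estimate (its bid) is lowest.
def negotiate(task, agents):
    """agents: dict mapping agent name -> cost function (task -> bid)."""
    bids = {name: cost(task) for name, cost in agents.items()}
    winner = min(bids, key=bids.get)   # award the task to the best (lowest) bid
    return winner, bids[winner]
```

Because each bid encodes only local information, the awarding step approximates a globally good allocation without any agent revealing its full internal state, which is precisely the market-like optimization mentioned above.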


The Social Dimension

The social structure of a society determines how the entities within the society relate to each other. Thus, the major questions that occur in conjunction with the social structure of an artificial society are clearly related to sociology and to organizational theory and therefore, I will use some definitions from the field of sociology to explain the basic ideas of agent societies.

Definition[Structure] A structure is a collection of entities that are connected in a non-random manner.

This definition prescribes two important properties for an agent society that is built upon it. First, it requires the agents to be able to perceive the existence of other agents, for otherwise the term "connected" would not make sense. Second, it requires that the agents are arranged towards a particular intention, i.e. any structure must have a purpose. The structural description of the agent society can serve two purposes. A descriptive society model is used to model a society that already exists and that should either be modeled by a multiagent system or that constitutes the organizational context of the system, whereas a prescriptive society model captures the developer's intention of how the agent society should look. Regardless of the purpose of the characterization, however, a sociogram can be used to express the structural connections between the agents in an agent society. Each link in a sociogram has an associated characterization of its meaning that describes the nature of the connection between the entities.
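A sociogram can be represented quite directly as a labeled directed graph. The agent names and link characterizations below are invented for the container terminal example, and the dictionary encoding is just one convenient sketch.

```python
# A sociogram as a labeled directed graph: each edge (a, b) carries a
# characterization of the nature of the connection from agent a to agent b.
sociogram = {
    ("area_operator", "carrier_robot"): "commands",
    ("carrier_robot", "verifier_robot"): "reports_stock_changes_to",
    ("verifier_robot", "area_operator"): "reports_status_to",
}

def links_from(agent, graph):
    """All agents this agent is connected to, with the meaning of each link."""
    return {b: label for (a, b), label in graph.items() if a == agent}
```

The non-random arrangement required by the definition shows up here as the deliberate choice of edges and labels; a random graph over the same agents would not qualify as a structure in the above sense.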

However, having a non-random structure alone does not make an agent society and thus we must find additional properties that refine the intuitive concept.

Definition[Society] A society is a structured set of agents that agree on a minimal set of acceptable behaviors.

This is a very general definition of the concept that leaves sufficient freedom for the system designer to model a wide variety of agent societies. The term "acceptable behavior", however, needs some additional clarification. The definition of what constitutes acceptable behavior can be built into the agents themselves by the agent designer. The agents then have no chance to show non-acceptable behavior. In this case, it is straightforward to achieve acceptable behavior of all agents within the agent society. Unfortunately, this method is only applicable in closed agent societies. In an open agent society such as the Internet, no central definition of acceptable behavior exists. Each agent may have a different view on the topic, and the first difficulty is for the agents to agree on a common definition. Furthermore, the agent society must be equipped with punishment mechanisms that can be used against agents that violate the commonly agreed definition. This case is difficult to handle and up to now, no satisfactory solution (especially for punishment mechanisms) has been proposed.

However, having defined the concept of an agent society is still not sufficient. An agent society, just like a structure, does not exist in its own right; instead, it must have a purpose. Hence the next definition.

Definition[Social System] A social system is a society that implements a closed functional context with respect to a common goal.

This definition adds the teleological component to the agent society in that it puts the society into a well-defined functional context. Thus, the agents within a social system must have a common goal that they pursue as long as they are part of the society.

It is one of the most difficult parts of the development process to find the society structure that is best suited to a particular functional specification, because the quality of the solution is usually determined by several, sometimes contradictory, aspects. In the Society view, I will outline some of these influential factors and present a micro process model that supports the developer in finding the best society structure for a given problem.
