2005
For over a decade, agent research has shown that social commitments support the definition of open multiagent systems by capturing the responsibilities that agents contract toward one another through their communications. These systems, however, rely on the assumption that agents respect the social commitments they adopt. To overcome this limitation, in this paper we investigate the role of sanctions as elements whose enforcement fosters agents' compliance with adopted commitments.
Cognitive Science Quarterly, 2002
Many social interactions between agents demand the use of commitments to reach socially efficient outcomes or to avoid socially inefficient ones. Commitments express the desires, goals, or intentions of the agents in an interaction. In this article, we distinguish between unilateral and bilateral commitments, and between whether or not an agent has to agree with a commitment made by the other agent before the commitment becomes effective. Using a game-theoretic model, we show that, depending on the incentive structure, different interactions require different types of commitments to reach socially efficient outcomes. Based on these results, we discuss whether existing (or slightly adapted) logical formalizations are adequate for the description of certain types of commitments, and which formalization is suitable for reaching a socially efficient outcome in a specific interaction. We claim that a logical formalization of commitment aiming at a socially efficient outcome should be based on assumptions about the type of interaction and the suitable type of commitment. A more general conclusion of this article is that game-theoretic arguments can help to provide specifications for logical formalizations of multiagent systems if one has an idea of the incentive structure of the interaction.
Proceedings of the Workshop on …, 2002
Artificial Intelligence and Law, 2008
Software agents' ability to interact within different open systems, designed by different groups, presupposes an agreement on an unambiguous definition of a set of concepts, used to describe the context of the interaction and the communication language the agents can use. Agents' interactions ought to allow for reliable expectations on the possible evolution of the system; however, in open systems interacting agents may not conform to predefined specifications. A possible solution is to define interaction environments including a normative component, with suitable rules to regulate the behaviour of agents. To tackle this problem, we propose an application-independent model of artificial institutions that can be used to define open multiagent systems. With respect to other approaches to artificial (or electronic) institutions, which mainly focus on the definition of the normative component of open systems, our proposal has a wider scope, in that we model the social context of the interaction, define the semantics of an Agent Communication Language to operate on such a context, and give an operational definition of the norms that are necessary to constrain the agents' actions. In particular, we define the semantics of a library of communicative acts in terms of operations on agents' social reality, more specifically on commitments, and regard norms as event-driven rules that, when fired by events happening in the system, create or modify a set of commitments. An interesting aspect of our proposal is that both the definition of the ACL and the definition of norms are based on the same notion of commitment. Therefore an agent capable of reasoning on commitments can reason both on the semantics of communicative acts and on the normative system.
Artificial Intelligence and Law, 1999
Social commitments have long been recognized as an important concept for multiagent systems. We propose a rich formulation of social commitments that motivates an architecture for multiagent systems, which we dub spheres of commitment. We identify the key operations on commitments and multiagent systems. We distinguish between explicit and implicit commitments. Multiagent systems, viewed as spheres of commitment (SoComs), provide
Autonomous Agents and Multi-Agent …, 2007
Advances in Agent Communication, 2004
2015
Previous work describes the use of formal commitments to mediate the communication between autonomous agents through commitment-based protocols. I extend that work to examine the conditions that encourage the success of agents that use commitments. I define a parameterized and iterated Committer’s Dilemma game that extends the well-known Prisoner’s Dilemma game, and then use this game and different agent strategies to examine how the conditions for commitments affect game outcomes. I describe the results of multiple simulations on a multiagent society with various game parameters. Results show that committing and satisfying agent types dominate over other agent types most frequently (1) when commitments are frequently exchanged, (2) when games tend to end at other than a Nash equilibria, (3) when the cost to create a commitment is low, and (4) when the utility of a given good is about 40% to 60% of the utility of a received good. A classifier, with over 95% accuracy, is trained to p...
In this paper we address the problem of how the autonomy of agents in an organization can be enhanced by means of contracts. Contracts are modelled as legal institutions: systems of legal rules that make it possible to change the regulative and constitutive rules of an organization. Our methodology is to attribute mental attitudes (beliefs, desires, and goals) to organizations, and to account for their behavior using recursive modelling.
2006
Social commitments have been increasingly used to model inter-agent dependencies and normative aspects of multi-agent systems, such as the semantics of agent communication. However, current cognitive agent architectures rest on a formalization of private mental states. In this paper, we propose a model of the links between the private mental states that give rise to individual intentions and social commitments.
This paper explores a hitherto largely ignored dimension to norms in multi-agent systems: the normative role played by optimization objectives. We introduce the notion of optimization norms, which constrain agent behaviour in a manner that is significantly distinct from norms in the traditional sense. We argue that optimization norms underpin most other norms, and offer a richer representation of these. We outline a methodology for identifying the optimization norms that underpin other norms. We then define a notion of compliance for optimization norms, as well as a notion of consistency and inconsistency resolution. We offer an algebraic formalization of valued optimization norms which allows us to explicitly reason about degrees of compliance and graded sanctions. We then outline an approach to decomposing and distributing sanctions amongst multiple agents in settings where there is joint responsibility.
Applied Intelligence, 2013
Applied Intelligence, 2014
Declarative Agent Languages and Technologies IV, 2006
Multi Agent Systems, 1998. …, 2002
Declarative Agent Languages …, 2010
Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems - AAMAS '06, 2006
Proceedings. IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2004. (IAT 2004)., 2004
Autonomous Agents and Multiagent Systems
Journal of Logic and Computation, 2009
SSRN Electronic Journal, 2011