2010, Logic Journal of IGPL
One way to allocate tasks to agents is by ascribing them obligations. From obligations to be, agents can infer which of the actions they may perform are forbidden, permitted, or obligatory, using Meyer's well-known reduction from obligations to be to obligations to do. However, we show through an example that this method is not completely adequate to guide agents' decisions. We then propose a solution using, instead of obligations, the concept of 'responsibility'. To formalise responsibility we use a multi-agent extension of propositional dynamic logic as a framework, and then we define some basic concepts, such as 'agent ability', also briefly discussing the problem of uniform strategies and a possible solution. In the last part, we show that our framework can be used in the specification of normative multi-agent systems, by presenting an extensive running example.
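For orientation, the reduction mentioned here is usually presented along the following lines (a sketch of Meyer's dynamic deontic logic, not necessarily this paper's exact notation; \(V\) is a designated violation atom and \(\overline{\alpha}\) stands for the non-performance of \(\alpha\)):

\[
F\alpha \leftrightarrow [\alpha]V, \qquad
P\alpha \leftrightarrow \neg F\alpha \leftrightarrow \langle\alpha\rangle\neg V, \qquad
O\alpha \leftrightarrow F\overline{\alpha} \leftrightarrow [\overline{\alpha}]V.
\]

An obligation to be, \(O\varphi\), can then be approximated by forbidding every action that leads to a \(\neg\varphi\)-state.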
Proceedings of the Sixth International Conference on Artificial Intelligence and Law - ICAIL '97, 1997
In this paper, we are interested in formally modeling the concept of responsibility. It appears that this concept is essential in order to reason in many norm-governed organizations. However, obtaining a formal representation of responsibility is quite complex because of the very different meanings this concept can take. Therefore, our first task will be to clarify and classify the various meanings. We then propose a logical framework and show how it enables us to model several aspects of responsibility. This framework combines a deontic logic with a logic of actions and it distinguishes between direct and indirect agencies. We finally present an example to illustrate how this framework enables us to analyze some subtleties of a specific situation.
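As a schematic illustration (my shorthand, not necessarily the paper's exact syntax), a framework combining a deontic operator \(O\) with agency operators can separate the two agencies: writing \(E_i\varphi\) for 'agent \(i\) directly brings it about that \(\varphi\)' and \(G_i\varphi\) for 'agent \(i\) indirectly brings it about that \(\varphi\)' (e.g. through another agent acting on \(i\)'s behalf), one expects \(E_i\varphi \rightarrow G_i\varphi\), and can then state, for instance, that \(i\) bears responsibility for a violation whenever \(O\neg\varphi \wedge G_i\varphi\) holds, even if \(E_i\varphi\) does not.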
2020
We formally introduce a novel, yet ubiquitous, category of norms: norms of instrumentality. Norms of this category describe which actions are obligatory, or prohibited, as instruments for certain purposes. We propose the Logic of Agency and Norms (\(\mathsf{LAN}\)) that enables reasoning about actions, instrumentality, and normative principles in a multi-agent setting. Leveraging \(\mathsf{LAN}\), we formalize norms of instrumentality and compare them to two prevalent norm categories: norms to be and norms to do. Lastly, we pose principles relating the three categories and evaluate their validity vis-à-vis notions of deliberative acting. On a technical note, the logic is shown to be decidable via the finite model property.
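Hedging on the exact \(\mathsf{LAN}\) syntax, the three categories can be contrasted schematically (illustrative notation only):

\[
O\varphi \quad \text{(norm to be: the state of affairs } \varphi \text{ ought to hold),}
\]
\[
O(\mathit{does}_i\,\alpha) \quad \text{(norm to do: agent } i \text{ ought to perform } \alpha\text{),}
\]
\[
O(\alpha \rhd_i \varphi) \quad \text{(norm of instrumentality: } i \text{ ought to use } \alpha \text{ as an instrument for } \varphi\text{).}
\]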
… Second International Joint …, 2011
We propose a logical framework to represent and reason about agent interactions in normative systems. Our starting point is a dynamic logic of propositional assignments whose satisfiability problem is PSPACE-complete. We show that it embeds the Coalition Logic of Propositional Control (CL-PC) and that various notions of ability and capability can be captured in it. We illustrate it on a water resource management case study. Finally, we show how the logic can be easily extended in order to represent constitutive rules which are ...
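To make the base language concrete, here is a minimal Python sketch of a dynamic-logic-of-propositional-assignments-style interpreter (a toy of my own, not the authors' tooling): atomic programs set a variable true or false, and programs compose by sequence, nondeterministic choice, and test.

from itertools import chain

# A valuation is a frozenset of the atoms that are true. Programs are
# interpreted as relations on valuations; run(v) returns the set of
# successor valuations reachable from v.

def assign(p, value):
    """Atomic program p <- value (value is True or False)."""
    def run(v):
        return {v | {p}} if value else {v - {p}}
    return run

def seq(*progs):
    """Sequential composition pi1; pi2; ..."""
    def run(v):
        states = {v}
        for prog in progs:
            states = set(chain.from_iterable(prog(s) for s in states))
        return states
    return run

def choice(*progs):
    """Nondeterministic choice pi1 U pi2 U ..."""
    def run(v):
        return set(chain.from_iterable(prog(v) for prog in progs))
    return run

def test(formula):
    """Test ?phi: continue only in states satisfying phi."""
    def run(v):
        return {v} if formula(v) else set()
    return run

def diamond(prog, formula):
    """<pi>phi: some execution of pi ends in a phi-state."""
    return lambda v: any(formula(s) for s in prog(v))

# Example: from the empty valuation, (p<-T U q<-T); ?(p or q) succeeds.
v0 = frozenset()
prog = seq(choice(assign("p", True), assign("q", True)),
           test(lambda v: "p" in v or "q" in v))
print(diamond(prog, lambda v: True)(v0))   # True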
Lecture Notes in Computer Science, 2012
In this article, we propose a Dynamic Logic of Propositional Control (DL-PC) in which the concept of 'seeing to it that' (abbreviated stit) as studied by Belnap, Horty and others can be expressed; more precisely, we capture the concept of the so-called Chellas stit theory and the deliberative stit theory, as opposed to Belnap's original achievement stit. In this logic, the sentence 'group G sees to it that ϕ' is defined in terms of dynamic operators: it is paraphrased as 'group G is going to execute an action now such that whatever actions the agents outside G can execute at the same time, ϕ is true afterwards'. We also prove that the satisfiability problem is decidable. In the second part of the article we extend DL-PC with operators modeling normative concepts, resulting in a logic DL-PC\(_{\mathsf{Leg}}\). In particular, we define the concepts of 'legally seeing to it that' and 'illegally seeing to it that'. We prove that the decidability result for DL-PC transfers to DL-PC\(_{\mathsf{Leg}}\).
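The paraphrase of 'group G sees to it that ϕ' has a direct exists-forall reading over the variables each agent controls. A small Python sketch (illustrative only; the one-shot propositional-control model is an assumption, not the paper's full semantics):

from itertools import product

def assignments(vars_):
    """All truth assignments to a list of variables, as dicts."""
    return [dict(zip(vars_, bits))
            for bits in product([False, True], repeat=len(vars_))]

def stit(group, control, phi):
    """'Group G sees to it that phi': G has a joint choice of values
    for the variables it controls such that phi holds whatever values
    the agents outside G choose for theirs.
    control: dict agent -> list of variables that agent controls.
    phi: predicate on a total valuation (dict var -> bool)."""
    g_vars = [v for a in group for v in control[a]]
    others = [v for a in control if a not in group for v in control[a]]
    return any(all(phi({**g_choice, **o_choice})
                   for o_choice in assignments(others))
               for g_choice in assignments(g_vars))

# Example: agent 1 controls p, agent 2 controls q.
control = {1: ["p"], 2: ["q"]}
print(stit({1}, control, lambda v: v["p"]))             # True: 1 can force p
print(stit({1}, control, lambda v: v["p"] and v["q"]))  # False: q is not 1's to set

The deliberative variant would additionally require that ϕ is not guaranteed regardless of G's choice.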
The contribution of this paper is threefold: First, we outline some possibilities for use of deontic logics (the logics of obligation and permission) in multiagent systems. Second, we point out some problems pertaining to the formalisation of obligations in a well-known approach to multiagent systems (i.e. AOP). Third, we offer our view on how a set of deontic operators might be fruitfully employed in the analysis of multiagent systems, and how a particular combination of them may be used to avoid the problems in AOP. (Published in Aamodt, Agnar and Komorowski, Jan (eds.), SCAI'95 -- Fifth Scandinavian Conference on Artificial Intelligence, Trondheim, May 29--31, 1995; IOS Press, Amsterdam, 1995, pp. 19--30. ISBN 90-5199-221-1.) Motivation: On its never-ending search for information, the internet agent Hugin-13 approaches a promising source at Yggdrasil.no. The guarding agent Mime-7 intervenes, and the two agents go into conversation m...
Journal of Applied Logic, 2011
This issue reports on the latest developments in formal approaches to intelligent agents and multi-agent systems based on modal logics, and their applications to various aspects of agency. Intelligent agents, whether on their own or as part of a multi-agent system, operate in a dynamic and complex environment and hence need to be able to cope with uncertainty and reason with often incomplete information. Agents must interact and also cooperate with one another to be able to meet their intended goals. Formal approaches may assist the developers of agent-based systems in modelling and verifying that the actions of intelligent agents and multi-agent interactions will lead to a desirable outcome. In order to develop theories to specify and reason about various aspects of intelligent agents and multi-agent systems, many researchers have therefore proposed the use of modal logics such as logics of beliefs, knowledge, norms, obligations and time, as these are among the concepts that are critical to an understanding of intelligent agent behaviour.
Agent-Based Defeasible Control in Dynamic Environments, 2002
We present a formal system to reason about and specify the behavior of multiple intelligent artificial agents. Essentially, each agent can perform certain actions, and it may possess a variety of information in order to reason about its and other agents' actions. Thus, our KARO-framework tries to deal formally with the notion of Knowledge, possessed by the agents, and their possible execution of actions. In particular, each agent may reason about its (or, alternatively, others') Abilities to perform certain actions, the possible Results of such an execution, and the availability of the Opportunities to take a particular action. Formally, we combine dynamic and epistemic logic into one modal system, and add the notion of ability to it. We demonstrate that there are several options to define the ability to perform a sequentially composed action, and we outline several properties under two alternative choices. Also, the agents' views on the correctness and feasibility of their plans are highlighted. Finally, the complications in the completeness proof for both systems indicate that the presence of abilities in the logic makes the use of infinite proof rules useful, if not inevitable.
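The 'two alternative choices' for sequential ability are commonly rendered along these lines in the KARO literature (a sketch, not necessarily this paper's exact axioms; \(A_i\alpha\) is ability, \([do_i(\alpha)]\) and \(\langle do_i(\alpha)\rangle\) are the dynamic modalities):

\[
A_i(\alpha_1;\alpha_2) \leftrightarrow A_i\alpha_1 \wedge [do_i(\alpha_1)]A_i\alpha_2
\qquad\text{vs.}\qquad
A_i(\alpha_1;\alpha_2) \leftrightarrow A_i\alpha_1 \wedge \langle do_i(\alpha_1)\rangle A_i\alpha_2,
\]

the first demanding ability for \(\alpha_2\) after every way of doing \(\alpha_1\), the second only after some way.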
Proceedings. IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2004. (IAT 2004)., 2004
A theory of rational decision making in normative multiagent systems has to distinguish among the many reasons why agents fulfill or violate obligations. We propose a classification of such reasons for single cognitive agent decision making in a single normative system, based on the increasing complexity of this agent. In the first class we only consider the agent's motivations, in the second class we consider also its abilities, in the third class we consider also its beliefs, and finally we consider also sensing actions to observe the environment. We sketch how the reasons can be formalized in a normative multiagent system with increasingly complex cognitive agents.
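As a toy rendering (my own sketch in Python, not the paper's formalization) of how each class adds one more source of fulfilment or violation:

def decides_to_fulfill(agent, obligation, world):
    """Toy decision pipeline mirroring the four agent classes:
    each stage adds one possible reason to fulfill or violate."""
    # Class 1: motivations only -- the agent may prefer a
    # conflicting desire over the obligation.
    if agent["desires"].get(obligation) is False:
        return False, "violated: conflicting motivation"
    # Class 2: + abilities -- the agent may be unable to comply.
    if obligation not in agent["abilities"]:
        return False, "violated: lacks ability"
    # Class 3: + beliefs -- the agent may believe (perhaps wrongly)
    # that the obligation is already fulfilled.
    if agent["beliefs"].get(obligation, False):
        return True, "no action: believes it already holds"
    # Class 4: + sensing -- observe the environment before acting.
    if world.get(obligation, False):
        return True, "no action: sensed it already holds"
    return True, "acts to fulfill the obligation"

agent = {"desires": {}, "abilities": {"pay_fine"}, "beliefs": {}}
print(decides_to_fulfill(agent, "pay_fine", {"pay_fine": False}))
# -> (True, 'acts to fulfill the obligation')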
2005
This special issue contains four selected and revised papers from the second international workshop on normative multiagent systems, for short NorMAS07 (Boella et al. (eds) Normative multiagent systems. Dagstuhl seminar proceedings 07122, 2007), held at Schloss Dagstuhl, Germany, in March 2007. At the workshop a shift was identified in the research community from a legal to an interactionist view on normative multiagent systems.
Lecture Notes in Computer Science, 2001
In this paper we present a theory for reasoning about actions which is based on the Product Version of Dynamic Linear Time Temporal Logic (denoted DLTL\(^{\otimes}\)) and allows us to describe the behaviour of a network of sequential agents which coordinate their activities by performing common actions together. DLTL\(^{\otimes}\) extends LTL, the propositional linear time temporal logic, by strengthening the until operator, indexing it with the regular programs of dynamic logic. Moreover, it allows the formulas of the logic to be decorated with the names of sequential agents, taken from a finite set. The action theory we propose is an extension of the theory presented in [8], which is based on the logic DLTL, and allows reasoning with incomplete initial states and dealing with postdiction, ramifications, and nondeterministic actions. Here we extend this theory to cope with multiple agents synchronizing on common actions.
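The strengthened until works roughly as follows (a sketch of the standard DLTL clause; \(\mathcal{L}(\pi)\) is the set of finite action sequences generated by the regular program \(\pi\), \(\sigma\) an infinite run, \(\tau\) a finite prefix of it):

\[
\sigma,\tau \models \alpha\,\mathcal{U}^{\pi}\beta
\;\iff\;
\exists \tau' \in \mathcal{L}(\pi):\ \tau\tau' \text{ is a prefix of } \sigma,\ \ \sigma,\tau\tau' \models \beta,
\ \text{and}\ \sigma,\tau\tau'' \models \alpha \text{ for every proper prefix } \tau'' \text{ of } \tau'.
\]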