This paper presents an investigation of rational agents that have limited computational resources and intentionally interact with their environments. We present an example logical formalism, based on Active Logic and Situation Calculus, that can be employed in order to satisfy the requirements arising due to being situated in a dynamic universe. We analyse how such agents can combine, in a time-aware fashion, inductive learning from experience and deductive reasoning using domain knowledge. In particular, we consider how partial plans are created and reasoned about, focusing on what new information can be provided as a result of action execution.
2008
Rational, autonomous agents that are able to achieve their goals in dynamic, partially observable environments have been the ultimate dream of Artificial Intelligence research since its beginning. The goal of this PhD thesis is to propose, develop and evaluate a framework well suited for creating intelligent agents that would be able to learn from experience, thus becoming more efficient at solving their tasks.
1999
We present an approach toward the design of a rational agent, integrating aspects of theoretical reasoning, practical reasoning, and reasoning about and executing plans. The approach uses Active Logic, which combines reactivity and logical inference, taking resource bounds into account and providing mechanisms for handling contradiction. We augment this logic with a formalization of practical reasoning and plan execution, which also makes use of the contradiction-handling abilities to cope with plan failure. We conclude with a description of a preliminary implementation and plans for embedding that within a dialogue system.
2000
This paper is a first attempt towards a theory for reactive planning systems, i.e. systems able to plan and control execution of plans in a partially known and unpredictable environment. We start from an experimental real world application developed at IRST, discuss some of the fundamental requirements and propose a formal theory based on these requirements. The theory takes into account the following facts: (1) actions may fail, since they correspond to complex programs controlling sensors and actuators which have to work in an unpredictable environment; (2) actions need to acquire information from the real world by activating sensors and actuators; (3) actions need to generate and execute plans of actions, since the planner needs to activate different special purpose planners and to execute the resulting plans.
2000
The design of intelligent agents is a key issue for many applications. Since there is no universally accepted definition of intelligence, the notion of rational agency was proposed by Russell as an alternative for the characterization of intelligent agency. A rational agent must have models of itself and its surroundings to use them in its reasoning. To this end, this paper develops a formalism appropriate to represent the knowledge of an agent. Moreover, if dynamic environments are considered, the agent should be able to observe the changes in the world, and integrate them into its existing beliefs. This motivates the incorporation of perception capabilities into our framework. The abilities to perceive and act, essential activities in a practical agent, demand a timely interaction with the environment. Given that the cognitive process of a rational agent is complex and computationally expensive, this interaction may not be easy to achieve. To solve this problem, we propose inference mechanisms that rely on the use of precompiled knowledge to optimize the reasoning process.
The current paper details a restricted semantics for active logic, a time-sensitive, contradiction-tolerant logical reasoning formalism. Central to active logic are special rules controlling the inheritance of beliefs in general, and beliefs about the current time in particular, very tight controls on what can be derived from direct contradictions (P ∧ ¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Using these ideas, we introduce a new definition of model and of logical consequence, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules are sound for active logic. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction.
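To make the inheritance and contradiction-quarantine ideas above concrete, here is a toy sketch in Python; the representation of beliefs as signed literals and the single-step rule firing are our own simplification, not the paper's formal semantics:

```python
# A toy sketch of one active-logic step (our simplification, not the
# paper's restricted semantics). Beliefs are literals: ("p", True) / ("p", False).

def active_logic_step(now, beliefs, rules):
    """Advance the clock one step, inheriting only non-contradictory beliefs."""
    # Detect direct contradictions P and not-P at the current step.
    contradictory = {name for (name, pos) in beliefs if (name, not pos) in beliefs}
    # Inherit only beliefs that are not part of a direct contradiction.
    inherited = {(n, p) for (n, p) in beliefs if n not in contradictory}
    # Fire one round of simple rules (body set of literals -> head literal).
    derived = {head for (body, head) in rules if body <= inherited}
    return now + 1, inherited | derived

beliefs = {("p", True), ("p", False), ("q", True)}
rules = [({("q", True)}, ("r", True))]
now, beliefs = active_logic_step(0, beliefs, rules)
# The contradiction on "p" is quarantined instead of entailing everything:
assert ("p", True) not in beliefs and ("r", True) in beliefs
```

Note how, unlike classical logic, the contradictory pair is simply withheld from inheritance, so derivation continues from the consistent remainder.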
Lecture Notes in Computer Science, 2005
In recent years, within the planning literature there has been a departure from approaches computing total plans for given goals, in favour of approaches computing partial plans. Total plans can be seen as (partially ordered) sets of actions which, if executed successfully, would lead to the achievement of the goals. Partial plans, instead, can be seen as (partially ordered) sets of actions which, if executed successfully, would contribute to the achievement of the goals, subject to the achievement of further sub-goals. Planning partially (namely computing partial plans for goals) is useful (or even necessary) for a number of reasons: (i) because the planning agent is resource-bounded, (ii) because the agent has incomplete and possibly incorrect knowledge of the environment in which it is situated, (iii) because this environment is highly dynamic. In this paper, we propose a framework to design situated agents capable of planning partially. The framework is based upon the specification of planning problems via an abductive variant of the event calculus.
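The total-versus-partial distinction can be illustrated with a minimal data structure; this is our own illustrative sketch, not the paper's abductive event-calculus formalism:

```python
# A minimal sketch of the total-vs-partial plan distinction (illustrative
# data structure of ours, not the paper's event-calculus specification).
from dataclasses import dataclass, field

@dataclass
class PartialPlan:
    actions: list                                       # actions committed so far
    open_subgoals: list = field(default_factory=list)   # still to be planned for

    def is_total(self):
        # A total plan is the special case with no outstanding subgoals.
        return not self.open_subgoals

p = PartialPlan(actions=["goto(door)"], open_subgoals=["have(key)"])
assert not p.is_total()          # contributes to the goal, subject to a subgoal
p.open_subgoals.clear()
assert p.is_total()              # once all subgoals are planned for, it is total
```

A resource-bounded agent can commit to and begin executing `goto(door)` while `have(key)` is still an open subgoal, which is exactly the benefit of planning partially described above.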
This paper presents an investigation of rational agents that have limited computational resources and that can interact with their environments. We analyse how such agents can combine deductive reasoning using domain knowledge and inductive learning from past experiences, while remaining time-aware in a manner appropriate for beings situated in a dynamic universe. In particular, we consider how they can create and reason about partial plans, and choose and execute the best of them, in such a way as to acquire the most knowledge. We also discuss the different types of interaction with the world and how they can influence the agent's ability to consciously direct its own learning process.
Workshop on Agent Based Computing (ABC'07), 2007
We study agents situated in partially observable environments, who do not have sufficient resources to create conformant (complete) plans. Instead, they create plans which are conditional and partial, execute or simulate them, and learn from experience to evaluate their quality. Our agents employ an incomplete symbolic deduction system based on Active Logic and Situation Calculus for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge so that the agents can choose the best plan for execution. We describe an architecture which allows ideas and solutions from several subfields of Artificial Intelligence to be joined together in a controlled and manageable way. In our opinion, no situated agent can achieve true rationality without using at least logical reasoning and learning. In practice, it is clear that pure logic is not able to cope with all the requirements put on reasoning, thus more domain-specific solutions, like planners, are also necessary. Finally, any realistic agent needs a reactive module to meet demands of dynamic environments. Our architecture is designed in such a way that those three elements interact in order to complement each other's weaknesses and reinforce each other's strengths.
Journal of Logic Language and Information
Reasoning about change is a central issue in research on human and robot planning. We study an approach to reasoning about action and change in a dynamic logic setting and provide a solution to problems which are related to the frame problem. Unlike most work on the frame problem, the logic described in this paper is monotonic. It (implicitly) allows for the occurrence of actions of multiple agents by introducing non-stationary notions of waiting and test. The need to state a large number of "frame axioms" is alleviated by introducing a concept of chronological preservation to dynamic logic. As a side effect, this concept permits the encoding of temporal properties in a natural way. We compare the relative merits of our approach and non-monotonic approaches as regards different aspects of the frame problem. Technically, we show that the resulting extended systems of propositional dynamic logic preserve (weak) completeness, finite model property and decidability.
2000
The paper discusses an architecture for intelligent agents based on the use of A-Prolog, a language of logic programs under the answer set semantics. A-Prolog is used to represent the agent's knowledge about the domain and to formulate the agent's reasoning tasks. We outline how these tasks can be reduced to answering questions about properties of simple logic programs and demonstrate the methodology of constructing these programs. Keywords: intelligent agents, logic programming and nonmonotonic reasoning.
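As a rough illustration of the answer-set semantics underlying A-Prolog, the following toy Python checker implements the Gelfond-Lifschitz reduct for propositional programs; the encoding of rules as triples is our own simplification, and real systems of course use dedicated solvers:

```python
# A toy answer-set check for propositional programs (our illustration of
# the semantics A-Prolog builds on, not a production solver).

def least_model(definite_rules):
    """Least model of a definite program by naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(rules, candidate):
    """rules: (head, positive_body, negative_body), bodies are sets of atoms."""
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate; strip negation from the rest.
    reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

# p :- not q.   q :- not p.   (two answer sets: {p} and {q})
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
assert is_answer_set(prog, {"p"}) and is_answer_set(prog, {"q"})
assert not is_answer_set(prog, {"p", "q"})
```

Checking a candidate set against the reduct in this way is exactly the kind of "question about properties of simple logic programs" that the architecture reduces reasoning tasks to.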
Annals of Mathematics and Artificial …, 2003
We present a temporal reasoning mechanism for an individual agent situated in a dynamic environment such as the web and collaborating with other agents while interleaving planning and acting. Building a collaborative agent that can flexibly achieve its goals in changing environments requires a blending of real-time computing and AI technologies. Therefore, our mechanism consists of an Artificial Intelligence (AI) planning
This material is based upon work supported in part by the NSF, under Grant No. IIS-9907482.
Proceedings of the 10th International Workshop on Non-Monotonic Reasoning, Whistler BC, Canada, 2004
The aim of this work is to study an argumentation-based formalism that an agent could use for constructing plans. Elsewhere, we have introduced a formalism for agents to represent knowledge about their environment in Defeasible Logic Programming, and a set of actions that they are capable of executing in order to change the environment where they are performing their tasks. We have also shown that action selection, when combined with a defeasible argumentation formalism, is more involved than expected. In this paper we will ...
2006
Goals are used to define the behavior of (pro-active) agents. It is our view that the goals of an agent can be seen as a knowledge base of the situations that it wants to achieve. It is therefore natural to use Dynamic Logic Programming (DLP), an extension of Answer-Set Programming that allows for the representation of knowledge that changes with time, to represent the goals of the agent and their evolution in a simple, declarative fashion. In this paper, we represent the agent's goals as a DLP, discuss and show how to represent some situations where the agent should adopt or drop goals, and investigate some properties that emerge from using such a representation.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2011
Action languages have gained popularity as a means for declaratively describing planning domains. This paper overviews two action languages, the Boolean language B and its multi-valued counterpart B_MV. The paper analyzes some of the issues in using two alternative logic programming approaches (Answer Set Programming and Constraint Logic Programming over Finite Domains) for planning with B and B_MV specifications. In particular, the paper provides an experimental comparison between these alternative implementation approaches.
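For intuition about what such planners compute, planning over a B-style transition system can be sketched as bounded search. This toy Python breadth-first planner is our own illustration and merely stands in for the ASP and CLP(FD) encodings the paper actually compares; the action format (preconditions, add set, delete set) is our assumption:

```python
# A bare-bones bounded planner over a B-style transition system (our toy
# illustration; the paper's encodings delegate this search to ASP/CLP solvers).
from collections import deque

def plan(init, goal, actions, horizon):
    """Return a shortest action sequence reaching goal within horizon, or None.

    actions: name -> (preconditions, add_set, delete_set), all sets of fluents.
    """
    start = frozenset(init)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        if len(steps) == horizon:
            continue                      # respect the plan-length bound
        for name, (pre, add, delete) in actions.items():
            if pre <= state:              # action is executable here
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

actions = {
    "unlock": ({"have_key"}, {"unlocked"}, set()),
    "open":   ({"unlocked"}, {"door_open"}, set()),
}
assert plan({"have_key"}, {"door_open"}, actions, horizon=3) == ["unlock", "open"]
```

The declarative encodings compared in the paper describe the same state-transition semantics, but leave the search to a general-purpose solver rather than an explicit BFS.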
ArXiv, 2014
In this paper we present the new logic programming language DALI, aimed at defining agents and agent systems. A main design objective for DALI has been that of introducing in a declarative fashion all the essential features, while keeping the language as close as possible to the syntax and semantics of the plain Horn-clause language. Special atoms and rules have been introduced, for representing: external events, to which the agent is able to respond (reactivity); actions (reactivity and proactivity); internal events (previous conclusions which can trigger further activity); past and present events (to be aware of what has happened). An extended resolution is provided, so that a DALI agent is able to answer queries like in the plain Horn-clause language, but is also able to cope with the different kinds of events, and exhibit a (rational) reactive and proactive behaviour.
2016
In this paper we present the new logic programming language DALI, aimed at defining agents and agent systems. A main design objective for DALI has been that of introducing in a declarative fashion all the essential features, while keeping the language as close as possible to the syntax and semantics of the plain Horn-clause language. Special atoms and rules have been introduced, for representing: external events, to which the agent is able to respond (reactivity); actions (reactivity and proactivity); internal events (previous conclusions which can trigger further activity); past and present events (to be aware of what has happened). An extended resolution is provided, so that a DALI agent is able to answer queries like in the plain Horn-clause language, but is also able to cope with the different kinds of events, and exhibit a (rational) reactive and proactive behaviour.
Proceedings of the 5th International Conference on Agents and Artificial Intelligence, 2013
We propose an approach for single-agent epistemic planning in domains with incomplete knowledge. We argue that on the one hand the integration of epistemic reasoning into planning is useful because it makes the use of sensors more flexible. On the other hand, defining an epistemic problem description is an error-prone task, as the epistemic effects of actions are more complex than their usual physical effects. We apply the axioms of the Discrete Event Calculus Knowledge Theory (DECKT) as rules to compile simple non-epistemic planning problem descriptions into complex epistemic descriptions. We show how the resulting planning problems are solved by our implemented prototype, which is based on Answer Set Programming (ASP).