2017
There has been much recent interest in the topic of goal reasoning: where do an agent's goals come from, and how does the agent decide which to pursue? Previous work has described goal reasoning as a unique process, separate from previously studied AI functionalities. In this paper, we argue for an alternative view: that goal reasoning can be thought of as multilevel planning. We demonstrate that scenarios previously argued to support the need for goal reasoning can be handled easily by an online planner, and we sketch a view of how more complex situations might be handled by multiple planners working at different levels of abstraction. By considering goal reasoning as a form of planning, we simplify the AI research agenda and highlight promising avenues for future planning research.
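The abstract's central claim, that goal selection can be folded into an online planning loop, can be illustrated with a minimal sketch (all names and the tiny domain are hypothetical, not from the paper): a top-level planner treats "which goal to pursue" as just another planning decision and replans whenever the current plan is exhausted or a goal is achieved.

```python
def replan(state, goals):
    """Pick the highest-value goal for which a plan exists; goal
    selection is itself a planning decision at the top level."""
    for goal in sorted(goals, key=lambda g: -g["value"]):
        plan = goal["make_plan"](state)  # domain-specific plan synthesis
        if plan is not None:
            return goal, plan
    return None, []

def online_planning_loop(state, goals, apply_action, max_steps=100):
    """Interleave execution with replanning, so changing circumstances
    can change which goal is pursued without a separate goal reasoner."""
    trace = []
    goal, plan = replan(state, goals)
    for _ in range(max_steps):
        if goal is None:
            break  # no achievable goals remain
        if goal["achieved"](state):
            goals = [g for g in goals if g is not goal]
            goal, plan = replan(state, goals)
            continue
        if not plan:  # plan exhausted or invalidated: replan at the top
            goal, plan = replan(state, goals)
            continue
        action = plan.pop(0)
        state = apply_action(state, action)
        trace.append(action)
    return state, trace
```

In this framing, an exogenous event simply changes the state, and the next replanning step may select a different goal, which is the behavior usually attributed to a dedicated goal reasoning component.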
2013
This is the third in a series of workshops related to this topic; the first was the AAAI-10 Workshop on Goal-Directed Autonomy, while the second was the Self-Motivated Agents (SeMoA) Workshop, held at Lehigh University in November 2012. Our objective for holding this meeting was to encourage researchers to share information on the study, development, integration, evaluation, and application of techniques related to goal reasoning, which concerns the ability of an intelligent agent to reason about, formulate, select, and manage its goals/objectives. Goal reasoning differs from frameworks in which agents are simply told what goals to achieve (and possibly how goals can be decomposed into subgoals) but are not given the ability to dynamically and autonomously decide what goals they should pursue. This constraint can be limiting for agents that solve tasks in complex environments, where it is not feasible to manually engineer/encode complete knowledge of what goal(s) should be pursued for every conceivable state. Yet, in such environments, states can be reached in which actions can fail, opportunities can arise, and events can otherwise take place that strongly motivate changing the goal(s) that the agent is currently trying to achieve. This topic is not new; researchers in several areas have studied goal reasoning (e.g., in the context of cognitive architectures, automated planning, game AI, and robotics). However, it has infrequently been the focus of intensive study, and (to our knowledge) no other series of meetings has focused specifically on goal reasoning. As shown in these papers, providing an agent with the ability to reason about its goals can increase performance measures for some tasks.
Recent advances in hardware and software platforms (involving the availability of interesting/complex simulators or databases) have increasingly permitted the application of intelligent agents to tasks that involve partially observable and dynamically-updated states (e.g., due to unpredictable exogenous events), stochastic actions, multiple (cooperating, neutral, or adversarial) agents, and other complexities. Thus, this is an appropriate time to foster dialogue among researchers with interests in goal reasoning. Research on goal reasoning is still in its early stages; no mature application of it yet exists (e.g., for controlling autonomous unmanned vehicles or in a deployed decision aid). However, it appears to have a bright future. For example, leaders in the automated planning community have specifically acknowledged that goal reasoning has a prominent role among intelligent agents that act on their own plans, and it is gathering increasing attention from roboticists and cognitive systems researchers. In addition to a survey, the papers in this workshop relate to, among other topics, cognitive architectures and models, environment modeling, game AI, machine learning, meta-reasoning, planning, self-motivated systems, simulation, and vehicle control. The authors discuss a wide range of issues pertaining to goal reasoning, including representations and reasoning methods for dynamically revising goal priorities. We hope that readers will find this theme for enhancing agent autonomy appealing and relevant to their own interests, and that these papers will spur further investigations on this important yet (mostly) understudied topic. Many thanks to the participants and ACS for making this event happen!
2005
Mixed-initiative planning systems attempt to integrate human and AI planners so that the synthesis results in high-quality plans. In the AI community, the dominant model of planning is search. In state-space planning, search consists of backward and forward chaining through the effects and preconditions of operator representations. Although search is an acceptable mechanism for performing automated planning, we present an alternative model to present to the user at the interface of a mixed-initiative planning system. That is, we propose to model planning as a goal manipulation task. Here planning involves moving goals through a hyperspace in order to reach equilibrium between available resources and the constraints of a dynamic environment. Users can establish and "steer" goals through a visual representation of the planning domain. They can associate resources with particular goals, shift goals along various dimensions in response to changing conditions, and change the structure of previous plans. Users need not know the details of the underlying technology, even when search is used internally. Here we empirically examine user performance under both alternatives and find that many users do better with the alternative model.
2018
Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and their (relatively) human-understandable internal state. We propose a formal model, and multiple variations on a multi-agent problem, to clarify and unify research in goal reasoning. We describe examples of these concepts, and propose standard evaluation methods for goal reasoning agents that act as a member of a team or on behalf of a supervisor.
Proceedings of the AAAI Conference on Artificial Intelligence, 2019
In part motivated by topics such as agent safety, there is an increasing interest in goal reasoning, a form of agency in which agents formulate their own goals. One of the crucial aspects of goal reasoning agents is their ability to detect whether the execution of their courses of action meets their own expectations. We present a taxonomy of different forms of expectations as used by goal reasoning agents when monitoring their own execution. We summarize and contrast the current understanding of how to define and check expectations based on the different knowledge sources used. We also identify gaps in our understanding of expectations.
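One common form of expectation discussed in this line of work is checking that an action's declared effects actually hold in the observed state after execution. A minimal sketch of such monitoring (the representation and all names are hypothetical, not taken from the paper's taxonomy):

```python
def check_expectations(expected_effects, observed_state):
    """Return the expected effects that the observation violates."""
    return [fact for fact in expected_effects if fact not in observed_state]

def execute_with_monitoring(plan, state, effects_of, observe):
    """Execute actions, flagging the first expectation violation, i.e. a
    discrepancy that a goal reasoning agent could respond to by revising
    its goals or replanning."""
    for action in plan:
        state = observe(state, action)  # actual (possibly faulty) outcome
        violations = check_expectations(effects_of[action], state)
        if violations:
            return state, ("discrepancy", action, violations)
    return state, ("ok", None, [])
```

Richer expectation types in the taxonomy (e.g. expectations derived from the plan as a whole or from learned models) would replace the simple per-action effect check with other knowledge sources, but the monitor's detect-then-respond structure stays the same.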
An intelligent agent should be able to decide upon a course of actions to achieve certain goals. A plan is such a course of actions, and the planning problem is the problem of finding a plan for a given representation of the actions in the world the agent operates in. In this paper, we provide a survey of the developments in AI planning research, with an emphasis on recent techniques. Introduction, Background and Early Work: GPS (General Problem Solver) was the first planner to distinguish between general problem-solving knowledge and domain knowledge. It introduced means-ends analysis in its search procedure. This approach tried to find a difference between the current object and the goal, and then used a lookup table to invoke...
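The GPS-style means-ends analysis described above can be sketched in a few lines (the toy domain and names are hypothetical): find a difference between the current state and the goal, then consult a table mapping each difference to an operator that reduces it.

```python
def means_ends(state, goal, difference_table, max_steps=20):
    """Repeatedly apply the operator indexed by the first unmet goal
    fact, GPS-style; goal is an ordered list so behavior is determinate."""
    plan = []
    for _ in range(max_steps):
        differences = [fact for fact in goal if fact not in state]
        if not differences:
            return plan  # no difference remains: goal reached
        op_name, add_facts = difference_table[differences[0]]
        state = state | add_facts  # apply the operator's add effects
        plan.append(op_name)
    return None  # no plan found within the step bound
```

Real means-ends analysis also recurses on operator preconditions (an operator may itself introduce new differences); this sketch keeps only the difference-then-lookup core.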
informatik.uni-freiburg.de
One of the most important applications of planning technology is guiding robotic agents in an autonomous fashion through complex problem scenarios. Increasingly, real-world scenarios are evolving in ways that require intensive interaction between human actors and the robotic agent, mediated by a planning system. We propose to demonstrate an integrated system in one such problem that falls under the aegis of an urban search and rescue (USAR) scenario. We show a simulation of a run through one such problem where ...
2002
An important concept for intelligent agent systems is goals. Goals have two aspects: declarative (a description of the state sought), and procedural (a set of plans for achieving the goal). A declarative view of goals is necessary in order to reason about important properties of goals, while a procedural view of goals is necessary to ensure that goals can be achieved efficiently in dynamic environments. In this paper we propose a framework for goals which integrates both views. We discuss the requisite properties of goals and the link between the declarative and procedural aspects, then derive a formal semantics which has these properties. We present a high-level plan notation with goals and give its formal semantics. We then show how the use of declarative information permits reasoning (such as the detection and resolution of conflicts) to be performed on goals.
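The two aspects of a goal described above can be made concrete with a minimal sketch (the representation is hypothetical, not the paper's formal semantics): a goal carries a declarative success condition alongside its procedural plans, and the declarative part alone suffices for reasoning such as conflict detection.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    condition: frozenset        # declarative: facts that must hold
    plans: list = field(default_factory=list)  # procedural: ways to get there

def conflicts(g1, g2, mutex_pairs):
    """Detect a conflict from the declarative aspects alone: two goals
    conflict if their conditions contain mutually exclusive facts."""
    return any((a in g1.condition and b in g2.condition) or
               (b in g1.condition and a in g2.condition)
               for a, b in mutex_pairs)
```

An agent with only procedural goals (plans without conditions) could not perform this check, which is the paper's motivation for integrating both views.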
Journal of Automated Reasoning, 2011
It is important that intelligent agents are able to pursue multiple goals in parallel in a rational manner. In this work we describe a careful empirical evaluation of the value of data structures and algorithms developed for reasoning about both positive and negative goal interactions. These mechanisms are incorporated into a commercial agent platform and then evaluated in comparison to the platform without these additions. We describe the data structures and algorithms developed, and the X-JACK system, which incorporates these into JACK, a state-of-the-art agent development toolkit. Three basic kinds of reasoning are developed: reasoning about resource conflicts, reasoning to avoid negative interactions that can happen when the steps of parallel goals are arbitrarily interleaved, and reasoning to take advantage of situations where a single step can help to achieve multiple goals. X-JACK is experimentally compared to JACK, under a range of situations designed to stress-test the reasoning algorithms, as well as situations designed to be more similar to real applications. We found that the cost of the additional reasoning is small, even with large numbers of goal interactions to reason about. The benefit, however, is noticeable and statistically significant, even when the number of goal interactions is relatively small.
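The first kind of reasoning mentioned, detecting resource conflicts between parallel goals, can be sketched minimally (this is an illustrative aggregation check, not X-JACK's actual mechanism): sum each resource's demand across the goals and flag any resource that is over-committed, so the agent can serialize those goals instead of failing mid-execution.

```python
def resource_conflict(demands, available):
    """Given per-goal resource demands, return the (sorted) resources
    whose combined demand exceeds what is available."""
    totals = {}
    for goal_demands in demands:
        for resource, amount in goal_demands.items():
            totals[resource] = totals.get(resource, 0) + amount
    return sorted(r for r, total in totals.items() if total > available.get(r, 0))
```

The paper's other two kinds of reasoning (negative interleaving effects and shared steps) require analyzing plan structure rather than aggregate quantities, which is where the dedicated data structures earn their keep.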