2006
In almost all current approaches to decision making, it is assumed that a decision problem is described by a set of states and a set of outcomes, and the decision maker (DM) has preferences over a rather rich set of acts, which are functions from states to outcomes. However, most interesting decision problems do not come with a state space and an outcome space. Indeed, in complex problems it is often far from clear what the state and outcome spaces would be. We present an alternate foundation for decision making, in which the primitive objects of choice are syntactic programs. A program can be given semantics as a function from states to outcomes, but does not necessarily have to be described this way. A representation theorem is proved in the spirit of standard representation theorems, showing that if the DM's preference relation on programs satisfies appropriate axioms, then there exist a set S of states, a set O of outcomes, a way of viewing programs as functions from S to O, a probability on S, and a utility function on O, such that the DM prefers program a to program b if and only if the expected utility of a is higher than that of b. Thus, the state space and outcome space are subjective, just like the probability and utility; they are not part of the description of the problem. A number of benefits of this approach are discussed.
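Schematically, the conclusion of such a representation theorem has the familiar subjective-expected-utility form, except that the state space S and the outcome space O are themselves constructed by the theorem rather than given in advance (a sketch; the notation a_{S,O} for "program a viewed as a function from S to O" is ours, and the paper's exact statement may differ):

\[ a \succeq b \iff \sum_{s \in S} \mu(s)\, u\big(a_{S,O}(s)\big) \;\ge\; \sum_{s \in S} \mu(s)\, u\big(b_{S,O}(s)\big), \]

with \mu the subjective probability on S and u the subjective utility on O.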
2021
In Savage’s classic decision-theoretic framework [12], actions are formally defined as functions from states to outcomes. But where do the state space and outcome space come from? Expanding on recent work by Blume, Easley, and Halpern [3], we consider a language-based framework in which actions are identified with (conditional) descriptions in a simple underlying language, while states and outcomes (along with probabilities and utilities) are constructed as part of a representation theorem. Our work expands the role of language from that in [3] by using it not only for the conditions that determine which actions are taken, but also the effects. More precisely, we take the set of actions to be built from those of the form do(φ), for formulas φ in the underlying language. This presents a problem: how do we interpret the result of do(φ) when φ is underspecified (i.e., compatible with multiple states)? We answer this using tools familiar from the semantics of counterfactuals [13]: rough...
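As a rough illustration of the counterfactual-style reading gestured at here (the selection-function notation is ours, not the paper's): the result of do(φ) at a state w can be modeled with a Stalnaker/Lewis-style selection function f that picks out the φ-states closest to w,

\[ \mathrm{result\ of\ } do(\varphi) \mathrm{\ at\ } w \;=\; f\big(w, \llbracket \varphi \rrbracket\big), \qquad f\big(w, \llbracket \varphi \rrbracket\big) \subseteq \llbracket \varphi \rrbracket, \]

so that when φ is underspecified, f(w, ⟦φ⟧) may contain several candidate states rather than determining a unique outcome.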
Journal of Economic Methodology, 2015
Contemporary decision theory attaches considerable importance to a family of mathematical results known as representation theorems. These theorems connect criteria for evaluating the options available to the decision maker (such as the expected-utility criterion) with axioms bearing on the decision maker's preferences (such as the transitivity axiom). Several reasons have been put forward to explain or defend the importance of these results. The aim of this article is to assess their semantic role: on this view, representation theorems serve to provide definitions of the decision-theoretic concepts employed in the evaluation criteria (such as utility and subjective probability, which figure in the subjective expected-utility criterion). We examine this function by comparing representation theorems with philosophical theories of the meaning of so-called theoretical terms.
shortly before his death. As a scholar and a colleague, Karl has left us all much to be grateful for. We are particularly grateful to him for pushing us to deliver a paper for this conference, and then to generously comment on it despite his ill health. We are also grateful to several individuals for comments and to the many seminar audiences who have been generous with their time and comments.
2007
This work is a contribution to prioritized reasoning in logic programming in the presence of preference relations involving atoms. The technique, providing a new interpretation for prioritized logic programs, is inspired by the semantics of Prioritized Logic Programming and enriched with the use of structural information of preference of Answer Set Optimization Programming. Specifically, the analysis of the logic program is carried out together with the analysis of preferences in order to determine the choice order and the sets of comparable models. The new semantics is compared with other approaches known in the literature and complexity analysis is also performed, showing that, with respect to other similar approaches previously proposed, the complexity of computing preferred stable models does not increase.
Erkenntnis, 1989
A possible world semantics for preference is developed. The remainder operator is used to give precision to the notion that two states of the world are as similar as possible, given a specified difference between them. A general structure is introduced for preference relations between states of affairs, and three types of such preference relations are defined. It is argued that one of them, "actual preference", corresponds closely to the concept of preference in informal discourse. Its logical properties are studied and shown to be plausible.
Institute of Mathematical Statistics Lecture Notes - Monograph Series, 2004
We generalize a set of axioms introduced by Rubin (1987) to the case of partial preference. That is, we consider cases in which not all uncertain acts are comparable to each other. We demonstrate some relations between these axioms and a decision theory based on sets of probability/utility pairs. We illustrate by example how comparisons solely between pairs of acts are not sufficient to distinguish between decision makers who base their choices on distinct sets of probability/utility pairs.
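One standard way a "sets of probability/utility pairs" theory ranks acts, given here only for orientation (the paper's own axioms and representation may differ in detail), is by unanimity over the set:

\[ a \succeq b \iff \mathbb{E}_{P}\big[U(a)\big] \;\ge\; \mathbb{E}_{P}\big[U(b)\big] \quad \text{for every pair } (P, U) \in \mathcal{S}, \]

with \mathcal{S} the decision maker's set of probability/utility pairs; incomparability of a and b then corresponds to disagreement among the pairs in \mathcal{S}.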
Uncertainty in Artificial Intelligence, 2004
This paper proposes a decision theory for a symbolic generalization of probability theory (SP). Darwiche and Ginsberg [2, 3] proposed SP to relax the requirement of using numbers for uncertainty while preserving desirable patterns of Bayesian reasoning. SP represents uncertainty by symbolic supports that are ordered partially, rather than completely as in the case of standard probability. We show that a preference relation on acts that satisfies a number of intuitive postulates is represented by a utility function whose domain is a set of pairs of supports. We argue that a subjective interpretation is as useful and appropriate for SP as it is for numerical probability. It is useful because the subjective interpretation provides a basis for uncertainty elicitation. It is appropriate because we can provide a decision theory that explains how preference on acts is based on support comparison.

(Acknowledgments: We thank Bharat Rao for all encouragement and support. We are also indebted to the UAI referees for many constructive comments and criticisms.)

Desirable patterns of inference thought to be unique to Bayesian reasoning also hold for SP. Moreover, SP is shown to subsume not only the standard probability calculus but also a number of important calculi used in AI, e.g., propositional logic, non-monotonic reasoning, fuzzy possibility, and objection-based reasoning. An open problem for SP is the formulation of a decision theory whose role is similar to that of the von Neumann-Morgenstern linear utility theory that makes probability so useful. The goal of this paper is to address this problem. We show that a preference relation on acts in a world described by an SP structure can be modeled by a binary utility function if it satisfies a number of postulates. Binary utility was introduced in [7, 6] and shown to work with non-probabilistic calculi, e.g., possibility theory and consonant belief functions.

This paper is structured as follows. In the next section, SP is reviewed. In Section 3, we present a set of postulates that lead to a representation theorem. A comparison with related work is presented in Section 4, followed by a concluding remark.

We list here a brief glossary of symbols to facilitate reading this paper. Ω is the set of possible worlds. Capital letters A, B, C are used for subsets of possible worlds. Lower-case letters a, b, c denote acts. X is the set of prizes, which includes the most preferred (♯), the neutral (♮), and the least preferred (♭) elements. S is the support set, which includes ⊤ as the top and ⊥ as the bottom element. Elements of S are denoted by Greek letters α, β, γ. A support function uses the symbol Φ. ≥⊕ denotes a partial order on the support set. ⊲ denotes a preference relation on acts.

2 Symbolic probability. This section reviews the symbolic probability theory developed in [2, 3]. SP is motivated by the reality that the information available to an agent is often not enough to commit her to a precise numerical probability function.
1997
A class of preferential orderings in non-monotonic logics assumes that various extensions of a model (possible worlds) can be ordered based on both their likelihood and desirability. I suggest that there is a basic incompatibility between this qualitative notion of preference and the decision-theoretic notion of utility. I demonstrate that while reasoning and decision making in the former can focus on a single state, it is meaningless in expected utility theory to say that a state or a set of states is important for a decision. This, I believe, is thought-provoking as it poses the question whether a qualitative formalism should be compatible with its quantitative counterpart or whether it can afford to be at odds with it. I discuss the difference between normative and cognitive utility and the implications of this difference for work on user interfaces to systems based on probabilistic and decision-theoretic methods.
Synthese, 2018
In his classic book, Savage (1954, 1972) develops a formal system of rational decision making. It is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are, however, required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems, without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of "idealized agent" that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
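For readers new to the framework, the representation referred to here takes the familiar form (a schematic statement, not the paper's exact theorem): there exist a finitely additive probability P on the event algebra and a utility u on consequences such that, for all acts f and g,

\[ f \succeq g \iff \int_{S} u\big(f(s)\big)\, dP(s) \;\ge\; \int_{S} u\big(g(s)\big)\, dP(s). \]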
Philosophical Studies
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility (EU) formula, are subject to a number of counterexamples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counterexamples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity (i.e., being cut off from the methods and results of decision theory). To give the strategy more structure, philosophers have developed "principles of individuation"; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion (i.e., it is duly developed using the available tools of decision theory). We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
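For readers who want the concrete case behind the Allais discussion, the textbook version of the paradox runs as follows (the figures are the standard ones, not taken from this paper). Choice 1: A = $1M for sure, versus B = $5M with probability 0.10, $1M with probability 0.89, $0 with probability 0.01. Choice 2: C = $1M with probability 0.11, $0 with probability 0.89, versus D = $5M with probability 0.10, $0 with probability 0.90. Many people choose A and D, yet expected utility forces the two choices to agree, since

\[ u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.89\,u(1\mathrm{M}) + 0.01\,u(0) \;\iff\; 0.11\,u(1\mathrm{M}) + 0.89\,u(0) > 0.10\,u(5\mathrm{M}) + 0.90\,u(0), \]

both sides being equivalent to 0.11 u(1M) > 0.10 u(5M) + 0.01 u(0). The redescription strategy discussed above blocks the argument by treating, for instance, "$0 after having passed up a sure $1M" as a different outcome from plain "$0".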
Preference Logic Programming (PLP) is an extension of Constraint Logic Programming (CLP) for declaratively specifying optimization problems. In the PLP framework, the definite clauses of a CLP program are augmented by two new kinds of clauses: optimization clauses and arbiter clauses. Optimization clauses specify which predicates are to be optimized and arbiter clauses specify the criteria to be used for optimization. Together, these three kinds of clauses form a preferential theory, for which a possible-worlds semantics was first given by Mantha et al. This paper shows how modal concepts can be used to capture the notion of optimization: essentially, each world in the possible-worlds semantics for a preference logic program is a model of the program, and an ordering over these worlds is enforced by the arbiter clauses in the program. We introduce the notion of preferential consequence as truth in the optimal worlds. We propose an operational semantics that is an extension of SLD derivation and prove its soundness. Finally, we provide a variety of examples to illustrate our paradigm: minimum and maximum predicates, partial-order programming, syntactic ambiguity resolution and its application in document formatting, and general optimization problems.
Econometrica, 2001
MLQ, 2009
Keywords: preference representation, computational complexity, computational social choice.
2006
In Possibilistic Decision Theory (PDT), decisions are ranked by a pessimistic or by an optimistic qualitative criterion. The preference relations induced by these criteria have been axiomatized by corresponding sets of rationality postulates, both à la von Neumann and Morgenstern and à la Savage. In this paper we first address a particular issue regarding the axiomatic systems of PDT à la von Neumann and Morgenstern. Namely, we show how to adapt the axiomatic systems for the pessimistic and optimistic criteria when some finiteness assumptions in the original model are dropped. Second, we show that a recent axiomatic approach by Giang and Shenoy using binary utilities can be captured by preference relations defined as lexicographic refinements of the above two criteria. We also provide an axiomatic characterization of these lexicographic refinements.
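For orientation, the pessimistic and optimistic criteria referred to here are usually written as follows (a standard formulation from the possibilistic decision literature, with π a possibility distribution over states, μ a qualitative utility on outcomes, and n an order-reversing map of the common scale; the paper's exact notation may differ):

\[ u_{\mathrm{pes}}(a) = \min_{s} \max\big(n(\pi(s)),\, \mu(a(s))\big), \qquad u_{\mathrm{opt}}(a) = \max_{s} \min\big(\pi(s),\, \mu(a(s))\big). \]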
2006
Preference is a basic notion in human behaviour, underlying such varied phenomena as individual rationality in the philosophy of action and game theory, obligations in deontic logic (we should aim for the best of all possible worlds), or collective decisions in social choice theory. Also, in a more abstract sense, preference orderings are used in conditional logic or non-monotonic reasoning as a way of arranging worlds into more or less plausible ones. The field of preference logic (cf. Hansson) studies formal systems that can express and analyze notions of preference between various sorts of entities: worlds, actions, or propositions. The art is of course to design a language that combines perspicuity and low complexity with reasonable expressive power. In this paper, we take a particularly simple approach. As preferences are binary relations between worlds, they naturally support standard unary modalities. In particular, our key modality ♦ϕ will just say that ϕ is true in some world which is at least as good as the current one. Of course, this notion can also be indexed to separate agents. The essence of this language is already in [4], but our semantics is more general, and so are our applications and later language extensions. Our modal language can express a variety of preference notions between propositions. Moreover, as already suggested in [9], it can "deconstruct" standard conditionals, providing an embedding of conditional logic into more standard modal logics. Next, we take the language to the analysis of games, where some sort of preference logic is evidently needed ([23] has a binary modal analysis different from ours). We show how a qualitative unary preference modality suffices for defining Nash Equilibrium in strategic games, and also the Backward Induction solution for finite extensive games. Finally, from a technical perspective, our treatment adds a new twist. Each application considered in this paper suggests the need for some additional access to worlds before the preference modality can unfold its true power. For this purpose, we use various extras from the modern literature: the global modality, further hybrid logic operators, action modalities from propositional dynamic logic, and modalities of individual and distributed knowledge from epistemic logic. The total package is still modal, but we can now capture a large variety of new notions. Finally, our emphasis in this paper is wholly on expressive power. Axiomatic completeness results for our languages can be found in the follow-up paper [27].
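In symbols, the key modality has the usual existential clause over the betterness order (a sketch, with agent indices omitted):

\[ M, w \models \Diamond\varphi \iff \text{there is a } v \text{ with } w \le v \text{ and } M, v \models \varphi, \]

from which preferences between propositions can be lifted, e.g. "every ψ-world sees a ϕ-world at least as good as itself", written with a universal (global) modality over ψ → ♦ϕ.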
2009
In the last few years, preference logic, and in particular the dynamic logic of preference change, has suddenly become a live topic in my Amsterdam and Stanford environments. At the request of the editors, this article explains how this interest came about, and what is happening. I mainly present a story around some recent dissertations and supporting papers, which are found in the references. There is no pretense at complete coverage of preference logic (for that, see Hanson 2001) or even of preference change (Hanson 1995).

Agency, information, and preference. Human agents acquire and transform information in different ways: they observe, or infer by themselves, and often also, they ask someone else. Traditional philosophical logics describe part of this behaviour, the 'static' properties produced by such actions: in particular, agents' knowledge and belief at some given moment. But rational human activity is goal-driven, and hence we also need to describe agents' evaluation of different states of the world, or of outcomes of their actions. This is where preference logics come in, describing what agents prefer, while current dynamic logics describe the effects of their physical actions. In the limit, all these things have to come together in understanding even such a simple scenario as a game, where we need to look at what players want, what they can observe and guess, and which moves and long-term strategies are available to them in order to achieve their goals. There are two dual aspects to this situation. The static description of what agents know, believe, or prefer at any given moment has long been performed by standard systems of philosophical logic since the 1950s, of course with continued debate surrounding the merits of particular proposals. But there is also the dynamics of actions and events that produce information and generate attitudes.

Overview. This paper is mainly based on some recent publications in the Amsterdam environment over the last three years. Indeed, 'dynamics' presupposes an account of 'statics', and hence we first give a brief survey of preference logic in a simple modal format using binary comparison relations between possible worlds, on the principle that 'small is beautiful'. We also describe a recent alternative approach, where world preferences are generated from criteria or constraints. We show how to dynamify both views by adding explicit events that trigger preference change in the models, and we sketch how the resulting systems connect. Next, we discuss some entanglements between preference, knowledge, and belief, and what this means for combined dynamic logics. On top of this, we also show how more delicate aspects of preference should be incorporated, such as its striking 'ceteris paribus' character, which was already central in Von Wright 1963. Finally, we relate our considerations to social choice theory and game theory. Preference is a very multi-faceted notion: we can prefer one individual object, or one situation, over another, but preference can also be directed toward kinds of objects or generic types of situation, often defined by propositions. Both perspectives make sense, and a bona fide 'preference logic' should do justice to all of them eventually. We start with a simple scenario on the object/world side, leaving other options for later. In this paper, we start with a very simple setting.
Modal models M = (W, ≤, V) consist of a set of worlds W (but they really stand for any sort of objects that are subject to evaluation and comparison), a 'betterness' relation ≤ between worlds ('at least as good as'), and a valuation V for proposition letters at worlds (or, for unary properties of objects). In principle, the comparison relation may be different for different agents, but in what follows, we will suppress agent subscripts ≤_i whenever possible, for greater readability. Also, we use the artificial term 'betterness' to stress that this is an abstract comparison relation, making no claim yet concerning the natural rendering of the intuitive term 'preference', about which some people hold passionate proprietary opinions. Still, this semantics is entirely natural and concrete. Just think of decision theory, where worlds (standing for outcomes of actions) are compared as to utility, or ...

Here move is the union of all one-step move relations available to players, and * denotes the reflexive-transitive closure of a relation. The formula then says that there is no alternative move to the BI-prescription at the current node all of whose outcomes would be better than the BI-solution. Thus, modal preference logic seems to go well with games. But there are more examples. Already Boutilier 1994 observed how such a simple modal language can also define conditional assertions, normally studied per se as a complex new binary modality (Lewis 1973), and how one can then analyze their logic in standard terms. For instance, in modal models with finite pre-orders (see below), the standard truth definition of a conditional A ⇒ B reads as 'B is true in all maximal A-worlds', and this clause can be written as a modal combination with [] some appropriate universal modality. While this formula may look complex at first, the point is that the inferential behaviour of the conditional, including its well-known non-monotonic features, can now be completely understood via the base logic for the unary modalities, say, as a sub-theory of modal S4. Moreover, the modal language easily defines variant notions whose introduction seems a big deal in conditional logic, such as existential versions saying that each A-world sees at least one maximal A-world which is B. Of course, explicit separate axiomatizations of these defined notions retain an independent interest: but we now see the whole picture.

Constraints on betterness orders. Which properties should a betterness relation have? Many authors like to work with total orders, satisfying reflexivity, transitivity, and connectedness. This is also common practice in decision theory and game theory, since ...

Notes: (3) This, and also the following examples, are somewhat remarkable, because there has been a widespread prejudice that modal logic is not very suitable to formalizing preference reasoning. (4) This innovative move is yet to become common knowledge in the logical literature. (5) There still remains the question of axiomatizing such defined notions per se, and that may be seen as the point of the usual completeness theorems in conditional logic. Also, Halpern 1997 axiomatized a defined notion of preference of this existential sort.
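The modal rendering of the conditional alluded to above does not survive in this preview. For reference only (this is the formulation usually credited to Boutilier, stated in our notation, not a quotation from the paper):

\[ A \Rightarrow B \;:=\; \mathsf{U}\big(A \rightarrow \Diamond(A \wedge \Box(A \rightarrow B))\big), \]

where \mathsf{U} is the universal modality and ◊, □ range over worlds at least as good as the current one; on finite pre-orders this says exactly that B holds in all maximal A-worlds.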
2002
This paper investigates to what extent a purely symbolic approach to decision making under uncertainty is possible, in the scope of Artificial Intelligence. Contrary to classical approaches to decision theory, we try to rank acts without resorting to any numerical representation of utility or uncertainty, and without using any scale onto which both uncertainty and preference could be mapped. Our approach is a variant of Savage's where the setting is finite, and the strict preference on acts is a partial order. It is shown that although many axioms of Savage's theory are preserved, and despite the intuitive appeal of the ordinal method for constructing a preference over acts, the approach is inconsistent with a probabilistic representation of uncertainty. The latter leads to the kind of paradoxes encountered in the theory of voting. It is shown that the assumption of ordinal invariance enforces a qualitative decision procedure that presupposes a comparative possibility representation of uncertainty, originally due to Lewis, and usual in nonmonotonic reasoning. Our axiomatic investigation thus provides decision-theoretic foundations to the preferential inference of Lehmann and colleagues. However, the obtained decision rules are sometimes not very decisive, or may lead to overconfident decisions, although their basic principles look sound. This paper points out some limitations of purely ordinal approaches to Savage-like decision making under uncertainty, in perfect analogy with similar difficulties in voting theory.
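The flavour of ordinal rule at issue can be sketched as a "likely dominance" principle (generic notation, not necessarily the paper's): prefer act a to act b when the event where a does better is strictly more plausible, under a comparative possibility ordering on events, than the event where b does better:

\[ a \succ b \iff [a \succ b] \,\rhd\, [b \succ a], \qquad [a \succ b] = \{\, s \in S : a(s) \succ_{O} b(s) \,\}, \]

where ≻_O is the ordinal preference on outcomes and ⊳ the strict comparative possibility relation.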
1998
This paper describes a logical machinery for computing decisions based on an ATMS procedure, where the available knowledge on the state of the world is described by a possibilistic propositional logic base (i.e., a collection of logical statements associated with qualitative certainty levels). The preferences of the user are also described by another possibilistic logic base whose formula weights are interpreted in terms of priorities and formulas express goals. Two attitudes are allowed for the decision maker: a pessimistic uncertainty-averse one and an optimistic one. The computed decisions are in agreement with a qualitative counterpart to classical expected utility theory for decision under uncertainty.
Reasoning about preferences is a major issue in many decision making problems. Recently, a new logic for handling preferences, called qualitative choice logic (QCL), was presented. This logic adds to classical propositional logic a new connective, called ordered disjunction, symbolized by ×. That new connective is used to express preferences between alternatives. Intuitively, if A and B are propositional formulas then A × B means: "if possible A, but if A is impossible then at least B". One of the important limitations of QCL is that it does not correctly deal with negated and conditional preferences. Conditional rules that involve preferences are expressed using propositional implication. However, using QCL semantics, there is no difference between such a material implication "(KLM × Air France) ⇒ Hotel Package" and the purely propositional formula "(Air France ∨ KLM) ⇒ Hotel Package". Moreover, the negation in QCL misses some desirable properties from propositional calculus. This paper first proposes an extension of the QCL language to a universally quantified first-order logic framework. Then, we propose two new logics that correctly address QCL's limitations. Both of them are based on the same QCL language, but define new non-monotonic consequence relations. The first logic, called PQCL (prioritized qualitative choice logic), is particularly adapted for handling prioritized preferences, while the second one, called QCL+ (positive qualitative choice logic), is appropriate for handling positive preferences. In both cases, we show that any set of preferences can equivalently be transformed into a set of basic preferences from which efficient inferences can be applied. Lastly, we show how our logics can be applied to alert correlation.
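To make the reading of × concrete, the usual QCL semantics assigns each interpretation a satisfaction degree for a basic ordered disjunction of classical formulas (a standard formulation; the notation here is generic):

\[ \deg_{I}(A_1 \times \cdots \times A_n) \;=\; \min\{\, k : I \models A_k \,\}, \]

defined only when I satisfies at least one of the A_k; lower degrees are better, so a model making A true is preferred, with respect to A × B, to one that only achieves B.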