2018, ArXiv
We tackle the problem of constructive preference elicitation, that is, the problem of learning user preferences over very large decision problems involving a combinatorial space of possible outcomes. In this setting, the suggested configuration is synthesized on the fly by solving a constrained optimization problem, while the preferences are learned iteratively by interacting with the user. Previous work has shown that Coactive Learning is a suitable method for learning user preferences in constructive scenarios. In Coactive Learning the user provides feedback to the algorithm in the form of an improvement to a suggested configuration. When the problem involves many decision variables and constraints, this type of interaction places a significant cognitive burden on the user. We propose a decomposition technique for large preference-based decision problems relying exclusively on inference and feedback over partial configurations. This has the clear advantage of drastically reducing ...
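The improvement feedback described above drives a simple perceptron-style weight update in standard Coactive Learning: the utility weights move toward the features of the user's improved configuration and away from those of the suggestion. A minimal sketch (the feature vectors and learning rate here are illustrative, not from the paper):

```python
def coactive_update(w, phi_suggested, phi_improved, eta=1.0):
    """Perceptron-style coactive update: shift the weight vector toward
    the user's improved configuration and away from the suggestion."""
    return [wi + eta * (pi - si)
            for wi, si, pi in zip(w, phi_suggested, phi_improved)]

def utility(w, phi):
    """Linear utility: dot product of weights and configuration features."""
    return sum(wi * xi for wi, xi in zip(w, phi))

# Toy example: three binary features describing a configuration.
w = [0.0, 0.0, 0.0]
phi_suggested = [1.0, 0.0, 1.0]   # system's proposal
phi_improved  = [1.0, 1.0, 0.0]   # user's improvement
w = coactive_update(w, phi_suggested, phi_improved)
# After one round of feedback, the learned utility prefers the improvement.
assert utility(w, phi_improved) > utility(w, phi_suggested)
```

One such update per interaction round is enough to guarantee regret bounds in the standard coactive setting, which is what makes the feedback model attractive despite its simplicity.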
Artificial Intelligence, 2021
This paper introduces CLEO, a novel preference elicitation algorithm capable of recommending complex configurable objects characterized by both discrete and continuous attributes and constraints defined over them. While existing preference elicitation techniques focus on searching for the best instance in a database of candidates, CLEO takes a constructive approach to recommendation through interactive optimization in a space of feasible configurations. The algorithm assumes minimal initial information, i.e., a set of catalog attributes, and defines decisional features as logic formulae combining Boolean and algebraic constraints over the attributes. The (unknown) utility of the decision maker is modelled as a weighted combination of features. CLEO iteratively alternates a preference elicitation step, where pairs of candidate configurations are selected based on the current utility model, and a refinement step where the utility is refined by incorporating the feedback received. The elicitation step leverages a Max-SMT solver to return optimal configurations according to the current utility model. The refinement step is implemented as learning to rank, and a sparsifying norm is used to favour the selection of few informative features in the combinatorial space. (* The main part of this work has been done while the author was with DISI, the remaining part while the author was with the Informatik Lehrstuhl)
Department of Computer Science, University of Toronto, 2006
AAAI Spring Symposium on …, 1997
We investigate the solution of constraint-based configuration problems in which the preference function over outcomes is unknown or incompletely specified. The aim is to configure a system, such as a personal computer, so that it will be optimal for a given user. The goal of this project is to develop algorithms that generate the most preferred feasible configuration by posing preference queries to the user. In order to minimize the number and the complexity of preference queries posed to the user, the algorithm reasons about the user's preferences while taking into account constraints over the set of feasible configurations. We assume that the user can structure their preferences in a particular way that, while natural in many settings, can be exploited during the optimization process. We also address in a preliminary fashion the trade-offs between computational effort in the solution of a problem and the degree of interaction with the user.
2003
When searching for multi-attribute services or products, understanding and representing users' preferences is a crucial task. However, many computer tools do not allow users to adequately focus on fundamental decision objectives, reveal hidden preferences, revise conflicting preferences, or explicitly reason about tradeoffs with competing decision goals. As a result, users often fail to find the best solution. From building decision support systems for various application domains, we have observed some common areas of design pitfalls, which can lead to undesirable user behaviors and ineffective use of decision systems. By incorporating findings from behavior decision theory, we have identified and accumulated a set of principles for avoiding these design pitfalls: 1) provide a flexible order and choice in preference elicitation so that users can focus on fundamental objectives, 2) include appropriate information in a decision context to guide users in revealing hidden preferences...
Machine Learning, 2013
This paper presents a framework for optimizing the preference learning process. In many real-world applications in which preference learning is involved, the available training data is scarce and obtaining labeled training data is expensive. Fortunately, in many preference learning situations, data is available from multiple subjects. We use the multitask formalism to enhance the individual training data by making use of the preference information learned from other subjects. Furthermore, since obtaining labels is expensive, we optimally choose which data to ask a subject to label so as to obtain the most information about his or her preferences. This paradigm, called active learning, has hardly been studied in a multi-task formalism. We propose an alternative to the standard criteria in active learning which actively chooses queries by making use of the available preference data from other subjects. The advantages of this alternative are reduced computation costs and reduced time for which subjects are involved. We validate our approach empirically on three real-world data sets involving the preferences of people.
2021
In this paper we propose efficient methods for elicitation of complexly structured preferences and utilize these in problems of decision making under (severe) uncertainty. Based on the general framework introduced in Jansen, Schollmeyer & Augustin (2018, Int. J. Approx. Reason), we now design elicitation procedures and algorithms that enable decision makers to reveal their underlying preference system (i.e. two relations, one encoding the ordinal, the other the cardinal part of the preferences) while having to answer as few as possible simple ranking questions. Here, two different approaches are followed. The first approach directly utilizes the collected ranking data for obtaining the ordinal part of the preferences, while their cardinal part is constructed implicitly by measuring meta data on the decision maker’s consideration times. In contrast, the second approach explicitly elicits also the cardinal part of the decision maker’s preference system, however, only an approximate ve...
EURO Journal on Decision Processes, 2015
Preferences are fundamental to decision processes, because decision analysts must account for the preferences of the stakeholders who participate in these processes and are impacted by the decision outcomes. To support the elicitation of stakeholder preferences, many models, procedures and methodologies have been proposed. These approaches to preference elicitation and learning will become more and more important with the proliferation of semi-automated computerized interfaces and the adoption of decision support systems which build on increasingly large datasets. One of the central tasks of the decision analyst is to elicit the judgements and value systems of the decision makers (DMs), including their views on the problem, and to integrate the resulting information into a preference model from which recommendations can be derived. This preference elicitation activity can be tricky: the preferences expressed by the DMs can be imprecise, conflicting, unstable, and time-dependent, yet they should be structured and synthesized into numerical values (or intervals of numerical values) concerning the parameters that characterize preferences in the decision model. For the domain Preference Elicitation and Learning, models, procedures and methodologies have been developed not only by researchers working in the field of Multiple Criteria Decision Aid but also in that of Artificial Intelligence. Their research has focused on the modeling, representation, elicitation, learning,
2010
Most frameworks for utility elicitation assume a predefined set of features over which user preferences are expressed. We consider utility elicitation in the presence of subjective or user-defined features, whose definitions are not known in advance. We treat the problem of learning a user's feature definition as one of concept learning, but whose goal is to learn only enough about the concept definition to enable a good decision to be made. This is complicated by the fact that user utility is unknown.
2004
We present an approach to elicitation of user preference models in which assumptions can be used to guide but not constrain the elicitation process. We demonstrate that when domain knowledge is available, even in the form of weak and somewhat inaccurate assumptions, significantly less data is required to build an accurate model of user preferences than when no domain knowledge is provided. This approach is based on the KBANN (Knowledge-Based Artificial Neural Network) algorithm pioneered by . We demonstrate this approach through two examples, one involving preferences under certainty and the other preferences under uncertainty. In the case of certainty, we show how to encode assumptions concerning preferential independence and monotonicity in a KBANN network, which can be trained using a variety of preferential information including simple binary classification. In the case of uncertainty, we show how to construct a KBANN network that encodes certain types of dominance relations and attitude toward risk. The resulting network can be trained using answers to standard gamble questions and can be used as an approximate representation of a person's preferences. We empirically evaluate our claims by comparing the KBANN networks with simple backpropagation artificial neural networks in terms of learning rate and accuracy. For the case of uncertainty, the answers to standard gamble questions used in the experiment are taken from an actual medical data set first used by . In the case of certainty, we define a measure of the degree to which a set of preferences violates a domain theory, and examine the robustness of the KBANN network as this measure of domain theory violation varies.
2021
Identifying the preferences of a given user through elicitation is a central part of multi-criteria decision aid (MCDA) or preference learning tasks. Two classical ways to perform this elicitation are to use either a robust or a Bayesian approach. However, both have their shortcomings: the robust approach provides strong guarantees but rests on very strong hypotheses and cannot integrate uncertain information, while the Bayesian approach can integrate uncertainties but sacrifices those guarantees and requires stronger model assumptions. In this paper, we propose and test a method based on possibility theory, which keeps the guarantees of the robust approach without needing its strong hypotheses. Among other things, we show that it can detect user errors as well as model misspecification.
2007
Search heuristics that embody domain knowledge often reflect preferences among choices. This paper proposes a variety of ways to identify a good mixture of search heuristics and to integrate the preferences they express. Two preference expression methods inspired by the voting literature in political science prove particularly reliable. On hard problems, good methods to integrate such heuristics are shown to speed search, reducing significantly both computation time and the size of the search tree.
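One of the voting-inspired methods alluded to above can be illustrated with a Borda count: each heuristic ranks the available choices, and the rankings are combined into a single preference order. This is a generic sketch of Borda aggregation, not the paper's specific procedure; the candidate names are invented for illustration:

```python
def borda_combine(rankings):
    """Combine several heuristics' rankings of the same candidates with a
    Borda count: a candidate ranked i-th (0-based) out of n by one
    heuristic earns n - 1 - i points from that heuristic. Returns the
    candidates sorted by total score, best first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - i)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical heuristics rank the same four branching choices.
rankings = [
    ["a", "b", "c", "d"],
    ["b", "a", "d", "c"],
    ["a", "c", "b", "d"],
]
combined = borda_combine(rankings)
assert combined[0] == "a"   # "a" wins: 3 + 2 + 3 = 8 points
```

The appeal of such voting rules in search is that they reward broad agreement among heuristics rather than letting a single strong but occasionally wrong heuristic dictate every branching decision.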
2011
Declarative Modelling environments exhibit an idiosyncrasy that demands specialised machine learning methodologies. The particular characteristics of the datasets, their irregularity in terms of class representation, volume, and availability, as well as user-induced inconsistency, further impede the learning potential of any employed mechanism, thus leading to the need for adaptation and adoption of custom approaches expected to address these issues. In the current work we present the problems encountered in the effort to acquire and apply user profiles in such an environment, the modified boosting learning algorithm adopted, and the corresponding experimental results.
Proceedings of the National Conference on Artificial …, 2002
Preference elicitation is a key problem facing the deployment of intelligent systems that make or recommend decisions on behalf of users. Since not all aspects of a utility function have the same impact on object-level decision quality, determining which information to extract from a user is itself a sequential decision problem, balancing the amount of elicitation effort and time with decision quality. We formulate this problem as a partially-observable Markov decision process (POMDP). Because of the continuous nature of the state and action spaces of this POMDP, standard techniques cannot be used to solve it. We describe methods that exploit the special structure of preference elicitation to deal with parameterized belief states over the continuous state space, and gradient techniques for optimizing parameterized actions. These methods can be used with a number of different belief state representations, including mixture models.
Decision theory has become widely accepted in the AI community as a useful framework for planning and decision making. Applying the framework typically requires elicitation of some form of probability and utility information. While much work in AI has focused on providing representations and tools for elicitation of probabilities, relatively little work has addressed the elicitation of utility models. This imbalance is not particularly justified considering that probability models are relatively stable across problem instances, while utility models may be different for each instance. Spending large amounts of time on elicitation can be undesirable for interactive systems used in low-stakes decision making and in time-critical decision making. In this paper we investigate the issues of reasoning with incomplete utility models. We identify patterns of problem instances where plans can be proved to be suboptimal if the (unknown) utility function satisfies certain conditions. We present...
Lecture Notes in Computer Science, 2003
In this paper we explore the relationship between "preference elicitation", a learning-style problem that arises in combinatorial auctions, and the problem of learning via queries studied in computational learning theory. Preference elicitation is the process of asking questions about the preferences of bidders so as to best divide some set of goods. As a learning problem, it can be thought of as a setting in which there are multiple target concepts that can each be queried separately, but where the goal is not so much to learn each concept as it is to produce an "optimal example". In this work, we prove a number of similarities and differences between two-bidder preference elicitation and query learning, giving both separation results and proving some connections between these problems.
While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance ...
AI Magazine, 2009
As automated decision support becomes increasingly accessible in a wide variety of AI applications, addressing the preference bottleneck is vital. Specifically, since the ability to make reasonable decisions on behalf of a user depends on that user's preferences over outcomes in the domain in question, AI systems must assess or estimate these preferences before making decisions. Designing effective preference assessment techniques to incorporate such user-specific considerations (that is, breaking the preference bottleneck) is one of the most important problems facing AI.
Knowledge-Based Systems
Planning with preferences has been employed extensively to quickly generate high-quality plans. However, it may be difficult for the human expert to supply this information without knowledge of the reasoning employed by the planner. We consider the problem of actively eliciting preferences from a human expert during the planning process. Specifically, we study this problem in the context of the Hierarchical Task Network (HTN) planning framework as it allows easy interaction with the human. We propose an approach where the planner identifies when and where expert guidance will be most useful and seeks the expert's preferences accordingly to make better decisions. Our experimental results on several diverse planning domains show that the preferences gathered using the proposed approach improve the quality and speed of the planner, while reducing the burden on the human expert.
2010
Although there has been significant research on modeling and learning user preferences for various types of objects, there has been relatively little work on the problem of representing and learning preferences over sets of objects. We introduce a representation language, DD-PREF, that balances preferences for particular objects with preferences about the properties of the set. Specifically, we focus on the depth of objects (i.e., preferences for specific attribute values over others) and on the diversity of sets (i.e., preferences for broad vs. narrow distributions of attribute values). The DD-PREF framework is general and can incorporate additional object- and set-based preferences. We describe a greedy algorithm, DD-Select, for selecting satisfying sets from a collection of new objects, given a preference in this language. We show how preferences represented in DD-PREF can be learned from training data. Experimental results are given for three domains: a blocks world domain with several different task-based preferences; a real-world music playlist collection; and rover image data gathered in desert training exercises.
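The depth/diversity trade-off described above can be sketched with a simple greedy selector: each candidate object is scored by the preference value of its attribute values (depth) plus a bonus for values not yet covered by the chosen set (diversity). The scoring function below is a hypothetical stand-in for DD-PREF's actual objective, and the music-style example data is invented:

```python
def greedy_select(objects, k, depth, div_weight=1.0):
    """Greedily pick k objects (tuples of attribute values), scoring each
    candidate by its depth (sum of per-value preference scores) plus a
    diversity bonus for attribute values not yet covered by the set.
    Illustrative only; DD-Select's real objective differs in detail."""
    chosen, covered = [], set()
    remaining = list(objects)
    for _ in range(k):
        best = max(remaining,
                   key=lambda o: sum(depth.get(v, 0.0) for v in o)
                                 + div_weight * len(set(o) - covered))
        chosen.append(best)
        covered |= set(best)
        remaining.remove(best)
    return chosen

# Hypothetical playlist domain: objects are (genre, volume) pairs,
# and the user's depth preference favours rock and loud tracks.
depth = {"rock": 2.0, "loud": 1.0}
objects = [("rock", "loud"), ("jazz", "quiet"), ("rock", "quiet")]
picked = greedy_select(objects, k=2, depth=depth)
assert picked == [("rock", "loud"), ("rock", "quiet")]
```

Note how the second pick prefers ("rock", "quiet") over ("jazz", "quiet"): its depth score outweighs the extra diversity that "jazz" would add, which is exactly the tension the representation is designed to express.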
2020
We study the problem of strategically eliciting the preferences of a decision-maker through a moderate number of pairwise comparison queries, with the goal of making them a high-quality recommendation for a specific decision-making problem. We are particularly motivated by applications in high-stakes domains, such as choosing a policy for allocating scarce resources to satisfy basic human needs (e.g., kidneys for transplantation or housing for those experiencing homelessness), where a consequential recommendation needs to be made from the (partially) elicited preferences. We model uncertainty in the preferences as being set based and investigate two settings: a) an offline elicitation setting, where all queries are made at once, and b) an online elicitation setting, where queries are selected sequentially over time in an adaptive fashion. We propose robust optimization formulations of these problems which integrate the preference elicitation and recommendation phases with the aim to ...
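The set-based uncertainty model above can be sketched concretely: each answered comparison "a preferred to b" constrains the unknown utility weight vector, and a robust recommendation maximizes worst-case utility over the weight vectors still consistent with all answers. The sketch below discretizes the weight simplex by brute force, a crude stand-in for the paper's robust-optimization formulations, with invented option data:

```python
import itertools

def robust_recommend(options, comparisons, grid_step=0.1):
    """Max-min recommendation under set-based preference uncertainty.
    options: name -> attribute vector; comparisons: (a, b) pairs meaning
    'a preferred to b', each constraining the weights w to w.a >= w.b.
    We enumerate weight vectors on a grid over the simplex and return
    the option with the best worst-case utility over the feasible set."""
    dim = len(next(iter(options.values())))
    steps = int(round(1 / grid_step))
    grid = [tuple(c / steps for c in combo)
            for combo in itertools.product(range(steps + 1), repeat=dim)
            if sum(combo) == steps]
    feasible = [w for w in grid
                if all(sum(wi * (options[a][i] - options[b][i])
                           for i, wi in enumerate(w)) >= 0
                       for a, b in comparisons)]
    def worst_case(name):
        return min(sum(wi * xi for wi, xi in zip(w, options[name]))
                   for w in feasible)
    return max(options, key=worst_case)

# Two extreme options and a balanced one; the user says x beats y,
# which only tells us the first criterion matters at least as much.
options = {"x": (1.0, 0.0), "y": (0.0, 1.0), "z": (0.6, 0.6)}
assert robust_recommend(options, [("x", "y")]) == "z"
```

The example shows the characteristic behaviour of robust recommendation: with only one comparison elicited, the hedged option "z" is preferred because it performs acceptably under every weight vector still consistent with the answer, whereas "x" could fare poorly if the true weights are near the boundary of the feasible set.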