1998
…
We investigate the application of classification techniques to utility elicitation. In a decision problem, two sets of parameters must generally be elicited: the probabilities and the utilities. While the prior and conditional probabilities in the model do not change from user to user, the utility models do. Thus it is necessary to elicit a utility model separately for each new user. Elicitation is long and tedious, particularly if the outcome space is large and not decomposable. There are two common approaches to utility function elicitation. The first is to base the determination of the user's utility function solely on elicitation of qualitative preferences. The second makes assumptions about the form and decomposability of the utility function. Here we take a different approach: we attempt to identify the new user's utility function based on classification relative to a database of previously collected utility functions. We do this by identifying clusters of utility func...
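The classification approach described above can be sketched as a small, self-contained example: cluster a database of previously elicited utility functions (each a vector of utilities over a fixed outcome space), then classify a new user from a few elicited values by nearest cluster prototype. All data, cluster counts, and names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 60 users x 5 outcomes, drawn around two
# underlying "preference types" (one rising, one falling in outcome index).
type_a = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
type_b = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
db = np.vstack([type_a + 0.05 * rng.standard_normal((30, 5)),
                type_b + 0.05 * rng.standard_normal((30, 5))])

def kmeans2(X, iters=10):
    # Deterministic init: the first point and the point farthest from it.
    centers = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(-1))]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return centers

prototypes = kmeans2(db)

# New user: only two outcomes elicited (indices 0 and 4); classify by the
# nearest prototype on those coordinates, then use the full prototype as
# the predicted utility function over all five outcomes.
elicited_idx = [0, 4]
elicited_vals = np.array([0.88, 0.12])
dists = ((prototypes[:, elicited_idx] - elicited_vals) ** 2).sum(-1)
predicted_utility = prototypes[np.argmin(dists)]
```

The point of the sketch is the elicitation saving: two answers suffice to impute all five utility values, at the cost of assuming the new user resembles one of the existing clusters.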
2010
Most frameworks for utility elicitation assume a predefined set of features over which user preferences are expressed. We consider utility elicitation in the presence of subjective or user-defined features, whose definitions are not known in advance. We treat the problem of learning a user's feature definition as a concept-learning problem, but one whose goal is to learn only enough about the concept definition to enable a good decision to be made. This is complicated by the fact that user utility is unknown.
Department of Computer Science, University of Toronto, 2006
2004
We present an approach to elicitation of user preference models in which assumptions can be used to guide but not constrain the elicitation process. We demonstrate that when domain knowledge is available, even in the form of weak and somewhat inaccurate assumptions, significantly less data is required to build an accurate model of user preferences than when no domain knowledge is provided. This approach is based on the KBANN (Knowledge-Based Artificial Neural Network) algorithm pioneered by . We demonstrate this approach through two examples: one involving preferences under certainty, the other involving preferences under uncertainty. In the case of certainty, we show how to encode assumptions concerning preferential independence and monotonicity in a KBANN network, which can be trained using a variety of preferential information, including simple binary classification. In the case of uncertainty, we show how to construct a KBANN network that encodes certain types of dominance relations and attitudes toward risk. The resulting network can be trained using answers to standard gamble questions and can serve as an approximate representation of a person's preferences. We empirically evaluate our claims by comparing the KBANN networks with simple backpropagation artificial neural networks in terms of learning rate and accuracy. For the case of uncertainty, the answers to standard gamble questions used in the experiment are taken from an actual medical data set first used by . In the case of certainty, we define a measure of the degree to which a set of preferences violates a domain theory, and examine the robustness of the KBANN network as this measure of domain theory violation varies.
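The core KBANN idea of seeding a learner with a weak domain theory can be illustrated with a minimal sketch: initialize a linear utility model with a monotonicity assumption ("more of every attribute is better", encoded as positive initial weights), then refine it by gradient descent on binary preference comparisons. The attribute count, data, and learning rate are invented for illustration; a real KBANN network would compile rules into a multi-layer topology rather than a single linear layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Domain theory: utility is monotonically increasing in all 3 attributes,
# encoded as uniform positive initial weights.
w = np.ones(3)
true_w = np.array([2.0, 1.0, 0.5])   # hidden "true" preferences (simulation)

def utility(x, w):
    return x @ w                      # linear value function for the sketch

# Training data: 200 outcome pairs, labelled 1 when the first outcome is
# truly preferred (binary classification of preferences).
X = rng.random((200, 2, 3))
y = (utility(X[:, 0], true_w) > utility(X[:, 1], true_w)).astype(float)

for _ in range(500):
    # Logistic loss on utility differences; gradient is (p - y) * (x1 - x2).
    d = utility(X[:, 0], w) - utility(X[:, 1], w)
    p = 1.0 / (1.0 + np.exp(-d))
    grad = ((p - y)[:, None] * (X[:, 0] - X[:, 1])).mean(axis=0)
    w -= 0.5 * grad
```

Because training starts from the domain theory rather than from random weights, far fewer comparisons are needed before the learned weights reproduce the user's attribute ranking.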
2012
In this thesis, we present a decision-theoretic framework for building decision support systems that incrementally elicit preferences of individual users over multiattribute outcomes and then provide recommendations based on the acquired preference information. By combining decision-theoretically sound modeling with effective computational techniques and certain user-centric considerations, we demonstrate the feasibility and potential of practical autonomous preference elicitation and recommendation systems.
EURO Journal on Decision Processes, 2015
Preferences are fundamental to decision processes, because decision analysts must account for the preferences of the stakeholders who participate in these processes and are affected by the decision outcomes. To support the elicitation of stakeholder preferences, many models, procedures and methodologies have been proposed. These approaches to preference elicitation and learning will become increasingly important with the proliferation of semi-automated computerized interfaces and the adoption of decision support systems that build on ever-larger datasets. One of the central tasks of the decision analyst is to elicit the judgements and value systems of the decision makers (DMs), including their views on the problem, and to integrate the resulting information into a preference model from which recommendations can be derived. This elicitation activity can be tricky: the preferences expressed by the DMs can be imprecise, conflicting, unstable, and time-dependent, yet they must be structured and synthesized into numerical values (or intervals of numerical values) for the parameters that characterize preferences in the decision model. For the domain of Preference Elicitation and Learning, models, procedures and methodologies have been developed not only by researchers working in the field of Multiple Criteria Decision Aid but also in that of Artificial Intelligence. Their research has focused on the modeling, representation, elicitation, learning, ...
PROCEEDINGS OF THE NATIONAL …, 2006
Any automated decision support software must tailor its actions or recommendations to the preferences of different users. Thus it requires some representation of user preferences as well as a means of eliciting or otherwise learning the preferences of the specific user on whose behalf it is acting. While additive preference models offer a compact representation of multiattribute utility functions, and ease of elicitation, they are often overly restrictive. The more flexible generalized additive independence (GAI) model maintains much of the intuitive nature of additive models, but comes at the cost of much more complex elicitation. In this article, we summarize the key contributions of our earlier paper (UAI 2005): (a) the first elaboration of the semantic foundations of GAI models that allows one to engage in preference elicitation using local queries over small subsets of attributes rather than global queries over full outcomes; and (b) specific procedures for Bayesian preference elicitation of the parameters of a GAI model using such local queries.
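A GAI utility, as described above, is a sum of subutility factors over small, possibly overlapping attribute subsets; local queries then only compare settings of one factor's attributes rather than full outcomes. The following toy sketch uses an invented two-factor menu-planning utility to make the structure concrete; the attributes and factor values are illustrative, not from the paper.

```python
# Hypothetical GAI model: u(outcome) = f1(meal, wine) + f2(wine, dessert).
# The factors overlap on the shared attribute "wine".
factors = {
    ("meal", "wine"): {("fish", "white"): 1.0, ("fish", "red"): 0.2,
                       ("meat", "white"): 0.3, ("meat", "red"): 0.9},
    ("wine", "dessert"): {("white", "cake"): 0.4, ("white", "fruit"): 0.7,
                          ("red", "cake"): 0.8, ("red", "fruit"): 0.5},
}

def gai_utility(outcome):
    # Outcome assigns a value to every attribute; utility sums the local
    # factor entries selected by that assignment.
    return sum(table[tuple(outcome[a] for a in attrs)]
               for attrs, table in factors.items())

full = {"meal": "fish", "wine": "white", "dessert": "fruit"}
# A "local query" need only ask the user to compare, say, (fish, white)
# against (fish, red) within the first factor, instead of comparing two
# complete menus, which is the elicitation saving the GAI semantics buys.
```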
AI Magazine, 2009
As automated decision support becomes increasingly accessible in a wide variety of AI applications, addressing the preference bottleneck is vital. Specifically, since the ability to make reasonable decisions on behalf of a user depends on that user's preferences over outcomes in the domain in question, AI systems must assess or estimate these preferences before making decisions. Designing effective preference assessment techniques to incorporate such user-specific considerations (that is, breaking the preference bottleneck) is one of the most important problems facing AI.
Proceedings of the Thirteenth Conference on …, 1997
Decision theory has become widely accepted in the AI community as a useful framework for planning and decision making. Applying the framework typically requires elicitation of some form of probability and utility information. While much work in AI has focused on providing representations ...
ArXiv, 2018
We tackle the problem of constructive preference elicitation, that is, the problem of learning user preferences over very large decision problems involving a combinatorial space of possible outcomes. In this setting, the suggested configuration is synthesized on-the-fly by solving a constrained optimization problem, while the preferences are learned iteratively by interacting with the user. Previous work has shown that Coactive Learning is a suitable method for learning user preferences in constructive scenarios. In Coactive Learning, the user provides feedback to the algorithm in the form of an improvement to a suggested configuration. When the problem involves many decision variables and constraints, this type of interaction places a significant cognitive burden on the user. We propose a decomposition technique for large preference-based decision problems relying exclusively on inference and feedback over partial configurations. This has the clear advantage of drastically reducing ...
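The Coactive Learning interaction loop mentioned above admits a very small sketch: suggest the configuration that maximizes the current linear utility estimate, receive a user-improved configuration, and update the weights perceptron-style with the feature difference. The candidate set, feature vectors, and simulated user are illustrative assumptions.

```python
import numpy as np

# Three candidate configurations described by 2-dimensional feature vectors.
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

w = np.zeros(2)                       # learner's utility estimate
true_w = np.array([0.2, 1.0])         # hidden user utility (simulation only)

for _ in range(5):
    suggested = candidates[np.argmax(candidates @ w)]
    # Simulated user feedback: a configuration the user strictly prefers.
    improved = candidates[np.argmax(candidates @ true_w)]
    # Perceptron-style Coactive Learning update on the feature difference.
    w += improved - suggested
```

After a few rounds the learner's top suggestion coincides with the user's preferred configuration, at which point the update becomes a no-op; the paper's contribution is to elicit such feedback over *partial* configurations instead of full ones.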
aaai.org
The development of automated preference elicitation tools has seen increased interest among researchers in recent years, driven by such diverse problems as the development of user-adaptive software and the greater involvement of patients in medical decision making. These tools must not only facilitate the elicitation of reliable information without overly fatiguing the interviewee but must also take into account changes in preferences. In this paper, we introduce two complementary indicators for detecting change in preference, which can be used depending on the granularity of the observed information. The first indicator exploits conflicts between the current model and the observed preference by using intervals mapped to gamble questions as guides in observing changes in risk attitudes. The second indicator relies on answers to gamble questions and uses Chebyshev's inequality to infer the user's risk attitude. The model adapts to the change in preference by relearning whenever an indicator exceeds a preset threshold. We implemented our utility model using knowledge-based artificial neural networks that encode assumptions about a decision maker's preferences. This allows us to learn a decision maker's utility function from a relatively small set of answers to gamble questions, thereby minimizing elicitation cost. Results of our experiments on a simulated change of real patient preference data suggest a significant gain in performance when the utility model adapts to the change in preference.
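The Chebyshev-based indicator can be sketched as follows: treat a user's past answers to a gamble question as samples, and flag a possible preference change when a new answer is so many standard deviations from the sample mean that Chebyshev's inequality bounds its probability below a preset threshold. The data and threshold below are illustrative assumptions, not the paper's actual parameters.

```python
import math

# Hypothetical history of certainty equivalents for one gamble question.
past_answers = [0.42, 0.45, 0.40, 0.44, 0.43]
mean = sum(past_answers) / len(past_answers)
var = sum((x - mean) ** 2 for x in past_answers) / len(past_answers)
std = math.sqrt(var)

def change_detected(new_answer, threshold=0.1):
    # Chebyshev's inequality: P(|X - mean| >= k*std) <= 1/k^2, for any
    # distribution. A new answer whose tail bound falls below the threshold
    # is implausible under the current model, so relearning is triggered.
    k = abs(new_answer - mean) / std
    return k > 0 and 1.0 / k ** 2 < threshold
```

Because Chebyshev's bound is distribution-free, the indicator makes no assumption about the shape of the answer distribution, at the cost of being conservative: only fairly extreme answers trigger relearning.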
Lecture Notes in Computer Science, 2013
Proceedings of the Twenty-first Conference on …, 2005
Proceedings of the National Conference on Artificial …, 2002
Methods of Information in Medicine
… on Preferences in AI and CP: …, 2002
Journal of Risk and Uncertainty