A natural way to obtain information about the concentration and dispersion of an expert’s beliefs is to ask for a confidence interval. Our objective is to design an elicitation mechanism that rewards the expert on the basis of the realized event and satisfies a set of desirable properties. We show that existing mechanisms fail some of these properties, and formulate a new mechanism, the Truncated Interval Scoring Rule, that satisfies all of them and is easily implementable in experimental work.
We show how to elicit the beliefs of an expert in the form of a “most likely interval”, a set of future outcomes that are deemed more likely than any other outcome. Our method, called the Most Likely Interval elicitation rule (MLI), asks the expert for an interval and pays according to how well the answer compares to the actual outcome. We show that the MLI performs well in economic experiments, and satisfies a number of desirable theoretical properties such as robustness to the risk preferences of the expert.
Experimental Economics
Incentivized methods for eliciting subjective probabilities in economic experiments present the subject with risky choices that encourage truthful reporting. We discuss the most prominent elicitation methods and their underlying assumptions, provide theoretical comparisons and give a new justification for the quadratic scoring rule. On the empirical side, we survey the performance of these elicitation methods in actual experiments, considering also practical issues of implementation such as order effects, hedging, and different ways of presenting probabilities and payment schemes to experimental subjects. We end with a discussion of the trade-offs involved in using incentives for belief elicitation and some guidelines for implementation.
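As a point of reference for the methods discussed above, here is a minimal Python sketch (my own illustration, not code from the survey) of the quadratic scoring rule for a binary event; the payoff constants a and b are arbitrary illustrative choices. For a risk-neutral expert, the expected score is maximized by reporting the true belief.

# Quadratic scoring rule for a binary event (illustrative sketch).

def quadratic_score(report: float, outcome: int, a: float = 1.0, b: float = 1.0) -> float:
    """Payoff for reporting probability `report` when the realized outcome is
    1 (the event occurred) or 0 (it did not)."""
    return a - b * (outcome - report) ** 2

def expected_score(report: float, belief: float) -> float:
    """Expected payoff when the expert's true probability of the event is `belief`."""
    return belief * quadratic_score(report, 1) + (1 - belief) * quadratic_score(report, 0)

if __name__ == "__main__":
    # With a true belief of 0.7, reporting 0.7 beats the nearby alternatives.
    for r in (0.5, 0.6, 0.7, 0.8):
        print(r, round(expected_score(r, belief=0.7), 4))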
2006
Belief elicitation methods based on proper scoring rules such as the quadratic scoring rule provide the experimenter with little opportunity to control the incentives for optimal reporting behavior by experimental subjects. Since subjects are typically risk averse, distortions in belief reports should be expected from any kind of proper scoring rule. But how likely and how large are these distortions in practice? Can one correct for such distortions? And what impact will corrections have on the incentives for optimal behavior? We approach these questions theoretically and empirically. Our theoretical model shows how the widely used quadratic scoring rule can be generalized and represented as a contingent wealth opportunity set described by the indirect utility function of a CRRA agent in a competitive contingent claims market. This representation has a naturally occurring form, much like the markets that sports betting agencies have developed. Optimal reporting behavior is logically equivalent to optimal pricing behavior against compensated demand functions of a consumer whose certainty equivalent, in a dual perspective, has the form of a CES utility function. The parameters of this utility function are the risk attitude (elasticity of substitution), distributional weights, and endowment location in the space of available experimental funds. Critically, these parameters are all under the control of the experimenter. We provide graphical examples to show how variations in the CRRA/CES parameters of this constraint affect the incentives for optimal reporting of beliefs by subjects. The class of incentive functions we develop provides significantly stronger penalties for sub-optimal belief reporting behavior than the conventional quadratic scoring rule. The theory also suggests that belief reporting in the lab be framed for the subject as a pricing problem. Besides having strong conceptual and analytical foundations in De Finetti/Savage-type Bayesian statistics, where probabilities are viewed as prices, the idea of having a subject set odds or prices in a contingent claims market may help to overcome well-known difficulties subjects face in understanding the language of frequency and probability. Their odds-setting or pricing behavior will reveal their beliefs even without them explicitly articulating their beliefs as probability distributions. We provide experimental evidence evaluating the performance of this new approach to belief elicitation.
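To make the distortion concrete, the following numerical sketch (my own illustration, not the paper's CRRA/CES model or its parameterization) finds the report that maximizes a CRRA agent's expected utility of the quadratic score; as the risk-aversion coefficient rho rises, the optimal report is pulled from the true belief toward 0.5.

# Optimal report under a quadratic scoring rule for a CRRA agent (illustrative sketch).
import numpy as np

def qsr_payoff(report, outcome):
    return 1.0 - (outcome - report) ** 2  # payoffs lie in (0, 1]

def crra_utility(w, rho):
    return np.log(w) if rho == 1.0 else (w ** (1 - rho) - 1) / (1 - rho)

def optimal_report(belief, rho, grid=np.linspace(0.01, 0.99, 981)):
    # Expected utility of the score at every candidate report on the grid.
    eu = belief * crra_utility(qsr_payoff(grid, 1), rho) \
         + (1 - belief) * crra_utility(qsr_payoff(grid, 0), rho)
    return grid[np.argmax(eu)]

if __name__ == "__main__":
    # A risk-neutral agent (rho = 0) reports the true belief 0.8; higher rho
    # shades the report toward 0.5.
    for rho in (0.0, 0.5, 1.0, 2.0):
        print(rho, round(float(optimal_report(belief=0.8, rho=rho)), 3))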
Journal of Economic Behavior & Organization, 2006
Experiments in psychology, where subjects estimate confidence intervals for a series of factual questions, have shown that individuals report far too narrow intervals. This has been interpreted as evidence of overconfidence in the preciseness of knowledge, a potentially serious violation of the rationality assumption in economics. Following these results, a growing literature in economics has incorporated overconfidence in models of, for instance, financial markets. In this paper we investigate the robustness of results from confidence interval estimation tasks with respect to a number of manipulations: frequency assessments, peer frequency assessments, iteration, and monetary incentives. Our results suggest that a large share of the overconfidence in interval estimation tasks is an artifact of the response format. Using frequencies and monetary incentives reduces the measured overconfidence in the confidence interval method by about 65%. The results are consistent with the notion that subjects have a deep aversion to setting broad confidence intervals, a reluctance that we attribute to a socially rational trade-off between informativeness and accuracy.
2015
Reducing uncertainty is an important problem in many applications such as risk and reliability analysis, system design, etc. In this paper, we study the problem of optimally querying experts to reduce interval uncertainty. Surprisingly, this problem has received little attention in the past, while similar issues in preference elicitation or social choice theory have witnessed a rising interest. We propose and discuss some solutions to determine optimal questions in a myopic way (one-at-a-time), and study the computational aspects of these solutions both in general and for some specific functions of practical interest. Finally, we illustrate the application of the approach in reliability analysis problems.
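As a rough illustration of the myopic, one-at-a-time idea (my own simplification, not the paper's formulation), the sketch below chooses the input of an interval-valued problem whose resolution would most shrink the output interval; collapsing an interval to its midpoint stands in for the expert's answer, and the corner evaluation is exact only for functions that are monotone in each argument.

# Myopic selection of the most informative question (illustrative sketch).
from itertools import product

def output_interval(f, intervals):
    """Enclose f over a box of input intervals by evaluating every corner
    (exact for functions monotone in each argument)."""
    values = [f(*corner) for corner in product(*intervals)]
    return min(values), max(values)

def best_query(f, intervals):
    """Index of the input whose collapse to its midpoint (a stand-in for the
    expert's answer) yields the narrowest output interval."""
    best_i, best_width = None, float("inf")
    for i, (lo, hi) in enumerate(intervals):
        mid = (lo + hi) / 2
        reduced = intervals[:i] + [(mid, mid)] + intervals[i + 1:]
        new_lo, new_hi = output_interval(f, reduced)
        if new_hi - new_lo < best_width:
            best_i, best_width = i, new_hi - new_lo
    return best_i, best_width

if __name__ == "__main__":
    # Toy reliability-style example: a series system works only if both components work.
    f = lambda p1, p2: p1 * p2
    print(best_query(f, [(0.6, 0.9), (0.5, 0.99)]))  # querying the second component helps more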
Risk Analysis, 2010
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]).
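For concreteness, the overconfidence measure here is the assigned confidence level minus the observed hit rate of the elicited intervals; with 80% intervals and a hit rate of 68.1%, the difference is 11.9%. A minimal sketch (illustrative, not the studies' analysis code):

def overconfidence(intervals, outcomes, assigned_confidence=0.80):
    """intervals: list of (low, high) pairs; outcomes: realized values, same order."""
    hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, outcomes))
    hit_rate = hits / len(outcomes)
    return assigned_confidence - hit_rate, hit_rate

if __name__ == "__main__":
    # 681 of 1000 realizations fall inside their 80% intervals: overconfidence is 0.119.
    intervals = [(0.0, 1.0)] * 1000
    outcomes = [0.5] * 681 + [2.0] * 319
    print(overconfidence(intervals, outcomes))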
2009
Incorporation of expert information in inference or decision settings is often important, especially in cases where data are unavailable, costly, or unreliable. One approach is to elicit prior quantiles from an expert, fit a statistical distribution to them, and proceed according to Bayes' rule. An incentive-compatible elicitation method based on an external randomization is available.
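As one hedged illustration of the quantile-fitting step (my own sketch, not the paper's procedure), a normal prior is pinned down exactly by two elicited quantiles by solving mu + sigma * z_p = q_p for the reported pairs, using only the Python standard library. With more than two elicited quantiles, one would instead fit (mu, sigma) by least squares over all reported points.

# Fit a normal prior to two elicited quantiles (illustrative sketch).
from statistics import NormalDist

def fit_normal_to_quantiles(p1, q1, p2, q2):
    """Return (mu, sigma) of the normal distribution whose p1- and p2-quantiles
    equal the expert's reported values q1 and q2."""
    z1, z2 = NormalDist().inv_cdf(p1), NormalDist().inv_cdf(p2)
    sigma = (q2 - q1) / (z2 - z1)
    mu = q1 - sigma * z1
    return mu, sigma

if __name__ == "__main__":
    # Expert reports a 25th percentile of 40 and a 75th percentile of 60.
    mu, sigma = fit_normal_to_quantiles(0.25, 40, 0.75, 60)
    print(round(mu, 2), round(sigma, 2))  # 50.0 and roughly 14.83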
Accurate measurements of probabilistic beliefs have become increasingly important both in practice and in academia. Introduced by statisticians in the 1950s to promote truthful reports in simple environments, Proper Scoring Rules (PSR) are now arguably the most popular incentivized mechanisms to elicit an agent's beliefs. This paper generalizes the analysis of PSR to richer environments relevant to economists. More specifically, we combine theory and experiment to study how beliefs reported with a PSR may be biased when i) the PSR payments are increased, ii) the agent has a financial stake in the event she is predicting, and iii) the agent can hedge her prediction by taking an additional action. Our results reveal complex distortions of reported beliefs, thereby raising concerns about the ability of PSR to recover truthful beliefs in general economic environments.
2009
We present a simple scoring rule which can be used to elicit several characteristics of interest of the subjective beliefs of an agent about a random variable. The agent has to choose an interval. If the realization of the random variable falls inside the specified interval she receives a reward, which is decreasing in the width of the interval. If the belief distribution is single peaked, then the optimal interval will cover the mode and the median. Moreover, the optimal interval widens when beliefs of the agent become more dispersed. In contrast to other scoring rules, these results hold for any degree of risk aversion.
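A minimal sketch of such a rule (the linear penalty in interval width and the normal belief are my own illustrative assumptions, since the abstract only requires the reward to decrease in width, and the search below evaluates a risk-neutral agent for simplicity): a brute-force search returns an interval that straddles the mode/median of a symmetric single-peaked belief, and re-running it with a larger sigma shows the optimal interval widening.

# Optimal interval under a width-penalized reward (illustrative sketch).
import numpy as np
from statistics import NormalDist

def reward(lo, hi, x, k=1.0, c=0.05):
    """Pay k - c*(hi - lo) if the realization x lands in [lo, hi], else 0."""
    return max(k - c * (hi - lo), 0.0) if lo <= x <= hi else 0.0

def expected_reward(lo, hi, belief, k=1.0, c=0.05):
    # Probability of landing inside the interval times the width-penalized prize.
    prob_inside = belief.cdf(hi) - belief.cdf(lo)
    return prob_inside * max(k - c * (hi - lo), 0.0)

def optimal_interval(belief, grid=np.linspace(-4.0, 4.0, 161)):
    candidates = [(lo, hi) for lo in grid for hi in grid if hi > lo]
    return max(candidates, key=lambda iv: expected_reward(*iv, belief=belief))

if __name__ == "__main__":
    print(optimal_interval(NormalDist(0.0, 1.0)))  # symmetric around the mode/median at 0
    print(optimal_interval(NormalDist(0.0, 1.5)))  # more dispersed belief: wider interval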
American Journal of Industrial and Business Management, 2013
The purpose of this paper is to demonstrate experimentally the existence of overconfidence as a human psychological bias. The bias was measured with three methods: interval estimation, frequency estimation, and two-alternative questions. The interval estimation method yields a much larger bias than the other two methods, but overconfidence persists at lower levels under the other two as well. In the first experiment, the monetary incentive scheme exacerbated overconfidence, demonstrating a strong link between overconfidence and risk taking. The second experiment, which used two-alternative questions under a different payment scheme, was expected to show a reduction in overconfidence due to the monetary incentives, but the bias was not significantly reduced. Similarly, the iteration introduced in the first experiment did not significantly reduce the bias.