A Bayesian prior over first-order theories is defined. It is shown that the prior can be approximated, and the relationship to previously studied priors is examined.
Probability theory as extended logic is completed such that essentially any probability may be determined. This is done by considering propositional logic (as opposed to predicate logic) as syntactically sufficient and imposing a symmetry from propositional logic. It is shown how the notions of 'possibility' and 'property' may be sufficiently represented in propositional logic such that 1) the principle of indifference drops out and becomes essentially combinatoric in nature and 2) one may appropriately represent assumptions where one assumes there is a space of possibilities but does not assume the size of the space.
2007
How to form priors that do not seem artificial or arbitrary is a central question in Bayesian statistics. The case of forming a prior on the truth of a proposition for which there is no evidence, but definite evidence that the event can happen in a finite set of ways, is detailed. The truth of a proposition of this kind is frequently assigned a prior of 0.5 via arguments from ignorance, randomness, the Principle of Indifference, the Principal Principle, or by other methods. These are all shown to be flawed. The statistical syllogism introduced by Williams in 1947 is shown to fix the problems that the other arguments have. An example in the context of model selection is given.
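The statistical syllogism mentioned in this abstract can be sketched numerically (a hypothetical illustration, not the paper's own example): if an event can happen in m ways and there is no evidence favoring any of them, each way is assigned prior 1/m, and Bayes' theorem then updates these priors as evidence arrives, e.g. in model selection.

```python
# Hypothetical model-selection illustration of the statistical syllogism:
# m candidate models, no prior evidence favoring any, so each gets prior 1/m.

def posterior(likelihoods):
    """Posterior over models from equal syllogism-based priors of 1/m each."""
    m = len(likelihoods)
    prior = 1.0 / m
    joint = [prior * lk for lk in likelihoods]  # prior times likelihood
    total = sum(joint)                          # normalizing constant
    return [j / total for j in joint]

# Two models, the second fitting the data twice as well:
print(posterior([0.1, 0.2]))  # -> roughly [1/3, 2/3]
```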
Cognitive Science, 1995
Establishing reasonable prior distributions remains a significant obstacle for the construction of probabilistic expert systems. Human assessment of chance is often relied upon for this purpose, but this has the drawback of being inconsistent with the axioms of probability. This article advances a method for extracting a coherent distribution of probability from human judgment. The method is based on a psychological model of probabilistic reasoning, followed by a correction phase using linear programming.
Andrés Rivadulla: Éxito, razón y cambio en física, Madrid: Ed. Trotta, 2004
In these pages I offer my solution to the problem of the inductive probability of theories. Against existing expectations in certain areas of current philosophy of science, I argue that Bayes’s Theorem does not constitute an appropriate tool to assess the probability of theories, and that we would do well to banish the question of how likely a scientific theory is to be true, or to what extent one theory is more likely true than another. Although I agree with Popper that inductive probability is impossible, I disagree with the way Sir Karl presents his argument, as I have shown elsewhere, so my proof is completely different. The argument I present in this paper is based on applying Bayes’s Theorem to specific situations that show its inefficiency, both on the question of whether a hypothesis becomes ever more likely true the greater the empirical evidence that supports it, and on whether the probability calculus allows one to single out, from a set of mutually incompatible hypotheses, the one most likely to be true.
ArXiv, 2019
We demonstrate that the functional form of the likelihood contains a sufficient amount of information for constructing a prior for the unknown parameters. We develop a four-step algorithm by invoking the information entropy as the measure of uncertainty and show how the information gained from coarse-graining and resolving power of the likelihood can be used to construct the likelihood-induced priors. As a consequence, we show that if the data model density belongs to the exponential family, the likelihood-induced prior is the conjugate prior to the corresponding likelihood.
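The conjugacy result the abstract closes with can be illustrated for one member of the exponential family (an assumed example, not taken from the paper): for Binomial data the conjugate prior is a Beta distribution, so the posterior is again Beta with parameters updated by simple counting.

```python
# Beta-Binomial conjugacy: a Beta(a, b) prior combined with a Binomial
# likelihood yields a Beta(a + k, b + n - k) posterior for k successes
# in n trials.

def beta_binomial_update(a, b, successes, trials):
    """Return the Beta posterior parameters after observing Binomial data."""
    return a + successes, b + (trials - successes)

# Prior Beta(2, 2); observe 7 successes in 10 trials:
print(beta_binomial_update(2, 2, 7, 10))  # -> (9, 5)
```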
2014
I certify that this thesis satisfies the requirements as a thesis for the degree of Master of Science in Applied Mathematics and Computer Science.
2005
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Until recently, classical first-order logic has reigned as the de facto standard logical foundation for artificial intelligence. The lack of a built-in, semantically grounded capability for reasoning under uncertainty renders classical first-order logic inadequate for many important classes of problems. General-purpose languages are beginning to emerge for which the fundamental logical basis is probability. Increasingly expressive probabilistic languages demand a theoretical foundation that fully integrates classical first-order logic and probability. In first-order Bayesian logic (FOBL), probability distributions are defined over interpretations of classical first-order axiom systems. Predicates and functions of a classical first-order theory correspond to random variables in the corresponding first-order Bayesian theory. This is a natural correspondence, given that random variables are formalized in mathematical statistics as measurable functions on a probability space. A formal system called Multi-Entity Bayesian Networks (MEBN) is presented for composing distributions on interpretations by instantiating and combining parameterized fragments of directed graphical models. A construction is given of a MEBN theory that assigns a non-zero probability to any satisfiable sentence in classical first-order logic. By conditioning this distribution on consistent sets of sentences, FOBL can represent a probability distribution over interpretations of any finitely axiomatizable first-order theory, as well as over interpretations of infinite axiom sets when a limiting distribution exists. FOBL is inherently open, having the ability to incorporate new axioms into existing theories, and to modify probabilities in the light of evidence.
Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because FOBL contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
Journal of the American Statistical Association, 1994
2010
Howson and Urbach (1996) wrote a carefully structured book supporting the Bayesian view of scientific reasoning, which includes an unfavorable judgment about the so-called objective Bayesian inference. In this paper, the theses of the book are investigated from Carnap's analytical viewpoint in the light of a new formulation of the Principle of Indifference. In particular, the paper contests the thesis according to which no theory can adequately represent 'ignorance' between alternatives. Beginning from the new formulation of the principle, a criterion for the choice of an objective prior is suggested in the paper together with an illustration for the case of Binomial sampling. In particular, it will be shown that the new prior provides better frequentist properties than the Jeffreys interval.
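The Binomial comparison the abstract alludes to can be sketched numerically. The paper's specific new prior is not reproduced here, so as an assumed illustration the code computes equal-tailed 95% credible intervals for a Binomial proportion under two standard objective priors, the Jeffreys Beta(1/2, 1/2) and the uniform Beta(1, 1), via a simple grid approximation of the Beta posterior.

```python
# Grid approximation to Beta(a + k, b + n - k) posterior quantiles for a
# Binomial proportion, used to compare credible intervals from two priors.

def credible_interval(a, b, k, n, level=0.95, m=10001):
    """Equal-tailed credible interval from a Beta(a, b) prior on a grid."""
    aa, bb = a + k, b + n - k
    xs = [(i + 0.5) / m for i in range(m)]                    # grid midpoints
    w = [x ** (aa - 1) * (1 - x) ** (bb - 1) for x in xs]     # Beta kernel
    total = sum(w)
    tail = (1 - level) / 2
    cum, lo, hi = 0.0, None, None
    for x, wi in zip(xs, w):
        cum += wi / total
        if lo is None and cum >= tail:
            lo = x
        if hi is None and cum >= 1 - tail:
            hi = x
    return lo, hi

# 3 successes in 20 trials under the two priors:
print(credible_interval(0.5, 0.5, 3, 20))  # Jeffreys Beta(1/2, 1/2)
print(credible_interval(1.0, 1.0, 3, 20))  # uniform Beta(1, 1)
```

Frequentist coverage of such intervals, averaged over the parameter space, is the kind of criterion the paper uses to compare priors.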
Erkenntnis, 2014
Belief-revision models of knowledge describe how to update one's degrees of belief associated with hypotheses as one considers new evidence, but they typically do not say how probabilities become associated with meaningful hypotheses in the first place. Here we consider a variant of the Skyrms-Lewis signaling game (Lewis 1969; Skyrms 2010) where simple descriptive language and predictive practice and associated basic expectations coevolve. Rather than assigning prior probabilities to hypotheses in a fixed language then conditioning on new evidence, the agents begin with no meaningful language or expectations, then evolve to have expectations conditional on their descriptions as they evolve to have meaningful descriptions for the purpose of successful prediction. The model, then, provides a simple but concrete example of how the process of evolving a descriptive language suitable for inquiry might also provide agents with effective priors.
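The coevolution of signals and expectations described above can be sketched with a toy two-state signaling game under urn-style reinforcement learning (an assumed minimal setup in the spirit of Skyrms's models, not the paper's own):

```python
# Toy two-state Skyrms-Lewis signaling game with simple reinforcement:
# sender and receiver reinforce the signal/act choices that coordinate,
# so an initially meaningless signal can acquire predictive content.
import random

random.seed(0)
STATES = SIGNALS = ACTS = range(2)
sender = [[1.0, 1.0] for _ in STATES]     # urn weights: state -> signal
receiver = [[1.0, 1.0] for _ in SIGNALS]  # urn weights: signal -> act

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(5000):
    state = random.choice(STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                  # successful coordination
        sender[state][signal] += 1    # reinforce the sender's choice
        receiver[signal][act] += 1    # reinforce the receiver's choice

# After learning, each state typically comes to favor one signal strongly:
print(sender, receiver)
```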
2019
Bayesian inference begins with a statistical model: M(x) = {f(x; θ), θ ∈ Θ}, x ∈ ℝⁿ, where f(x; θ) is the distribution of the sample X := (X₁, …, Xₙ), ℝⁿ is the sample space, and Θ ⊂ ℝᵐ is the parameter space. Bayesian inference modifies the frequentist inferential set-up in two crucial respects: (A) it views the unknown parameter(s) θ as random variables with their own distribution, known as the prior distribution: π(·): Θ → [0, 1].
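A minimal numerical instance of this set-up (an assumed example, not from the text): a discrete parameter space Θ, a Bernoulli sample, and the prior π(θ) updated to the posterior π(θ | x) by Bayes' theorem.

```python
# Discrete Bayes update: posterior(theta) proportional to
# prior(theta) * likelihood(data; theta), normalized over Theta.

def bayes_update(prior, likelihood, data):
    """prior: dict theta -> pi(theta); likelihood: f(data; theta)."""
    joint = {t: p * likelihood(data, t) for t, p in prior.items()}
    total = sum(joint.values())
    return {t: j / total for t, j in joint.items()}

def bernoulli_lik(xs, theta):
    out = 1.0
    for x in xs:
        out *= theta if x == 1 else 1 - theta
    return out

prior = {0.2: 0.5, 0.8: 0.5}              # uniform prior on Theta = {0.2, 0.8}
print(bayes_update(prior, bernoulli_lik, [1, 1, 0]))  # posterior favors 0.8
```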
IJESR, 2015
The success of Bayesian analysis has led to an incredible production in statistics and probability theory, but there has been much less effort towards a generalization outside these disciplines. A new approach to the Bayesian scheme for a posteriori evaluation allows the construction of analogous schemes in other fields, where it could be as successful as in its probabilistic setting. First it is shown that a formal Bayesian scheme, presented from the viewpoint of system theory, can be translated to other fields. Examples in logic and graph theory show that Bayesian-like schemes function when the set of previous events or premises or nodes for the actual situation of the system is known. An evaluation of these events or premises is then calculated based on the previous information and on the characteristics of the system (probabilistic, logical, graph-theoretical, etc.). The dynamics of the systems are given via recursive implicit schemes for the step previous to the actual one, z_{k-1}. This is condensed in the definition of a new class of systems, retroactive systems, closely related to anticipatory systems and specifically designed for applications analogous to the probabilistic setting but in other areas of applied mathematics. This research should be very useful in AI.
Information Systems Frontiers, 2000
2017
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation. 1. The role of the prior distribution in a Bayesian analysis Both in theory and in practice, the prior distribution can play many roles in a Bayesian analysis. Perhaps most formally the prior serves to encode information germane to the problem being analyzed, but in prac...
Statistical Science, 2011
Bayesian methods are increasingly applied these days in the theory and practice of statistics. Any Bayesian inference depends on a likelihood and a prior. Ideally one would like to elicit a prior from related sources of information or past data. However, in its absence, Bayesian methods need to rely on some "objective" or "default" priors, and the resulting posterior inference can still be quite valuable.
Philosophy Compass, 2011
Bayesianism is a popular position (or perhaps, positions) in the philosophy of science, epistemology, statistics, and other related areas, which represents belief as coming in degrees, measured by a probability function. In this article, I give an overview of the unifying features of the different positions called 'Bayesianism', and discuss several of the arguments traditionally used to support them.
International Journal of Adaptive Control and Signal Processing, 2001
Quantification of prior information about possibly high-dimensional unknown parameters of dynamic probabilistic models is addressed. Their prior probability density function (pdf) is chosen in conjugate form. Individual pieces of information are converted into a common basis of fictitious data so that different natures and uncertainty levels are respected. Then, available measured data are used for assessing confidence in the various information pieces, and the final prior pdf is obtained as a geometric mean of the individual pdfs weighted by respective confidence weights. The algorithm is elaborated for a rich exponential family, with the normal regression model with external inputs as its prominent example. The positive influence of proper prior information on the design of adaptive controllers is demonstrated.
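The geometric pooling step described above can be sketched for the normal case (an assumed parameterization, not the paper's full algorithm): a weighted geometric mean of Gaussian pdfs, once renormalized, is again Gaussian, with precision Σ wᵢ/σᵢ² and a precision-weighted mean.

```python
# Weighted geometric-mean pooling of Gaussian pdfs: the product of
# N(m_i, v_i)^{w_i} terms is proportional to a Gaussian whose precision
# is sum(w_i / v_i) and whose mean is the precision-weighted average.

def pool_normals(means, variances, weights):
    """Combine N(m_i, v_i) pdfs by weighted geometric mean; return (mean, var)."""
    precision = sum(w / v for w, v in zip(weights, variances))
    mean = sum(w * m / v for w, m, v in zip(weights, means, variances)) / precision
    return mean, 1.0 / precision

# Two information pieces with confidence weights 0.7 and 0.3:
print(pool_normals([0.0, 2.0], [1.0, 1.0], [0.7, 0.3]))  # ~ (0.6, 1.0)
```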