2021, Journal of Economic Methodology
https://doi.org/10.1080/1350178X.2021.1898660
In a recent special issue dedicated to the work of Dani Rodrik, Grüne-Yanoff and Marchionni [(2018). Modeling model selection in model pluralism.
Philosophy of Social Science
This paper introduces and defends an account of model-based science that I dub model pluralism. I argue that despite a growing awareness in the philosophy of science literature of the multiplicity, diversity, and richness of models and modeling practices, more radical conclusions follow from this recognition than have previously been inferred. Going against the tendency within the literature to generalize from single models, I explicate and defend the following two core theses: (i) any successful analysis of models must target sets of models, their multiplicity of functions within science, and their scientific context and history, and (ii) for almost any aspect x of phenomenon y, scientists require multiple models to achieve scientific goal z.
Session on Epistemology and Modelling, Agreenskills Research School, Toulouse, October 2014
Ecology, 2010
arXiv, 2020
Model selection often aims to choose a single model, assuming that the form of the model is correct. However, there may be multiple possible underlying explanatory patterns in a set of predictors that could explain a response. Model selection without regard for model uncertainty can fail to bring these patterns to light. We explore multimodel penalized regression (MMPR) to acknowledge model uncertainty in the context of penalized regression. We examine how different penalty settings can promote either shrinkage or sparsity of coefficients in separate models. The method is tuned to explicitly limit model similarity. A choice of penalty form that enforces variable selection is applied to predict stacking fault energy (SFE) from steel alloy composition. The aim is to identify multiple models with different subsets of covariates that explain a single type of response.
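To make the general idea concrete (a minimal sketch of multiple penalized fits, not the authors' MMPR procedure; the data and penalty values below are hypothetical), one can fit a Lasso at several penalty strengths and compare which covariates each fitted model selects:

```python
# Minimal sketch: penalized regression at several penalty strengths, each
# potentially selecting a different subset of covariates. Toy data only.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # hypothetical composition features
y = X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 4] + rng.normal(scale=0.3, size=200)

X = StandardScaler().fit_transform(X)
for alpha in (0.01, 0.1, 0.5):           # stronger penalty -> sparser model
    fit = Lasso(alpha=alpha).fit(X, y)
    print(f"alpha={alpha}: selected covariates {np.flatnonzero(fit.coef_).tolist()}")
```

MMPR as described in the abstract goes further by explicitly limiting similarity between the retained models; the sketch above only shows how varying the penalty yields different supports.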
This paper argues that two common intuitions are too crude: a) that "use-novel" data are special for confirmation, and b) that this specialness implies the "no-double-counting rule", according to which data used in "constructing" (calibrating) a model cannot also play a role in confirming the model's predictions. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that, on the Bayesian account of confirmation, and also on the standard Classical hypothesis-testing account, claims a) and b) are not generally true, although for some select cases it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialised Classical model-selection methods, on the other hand, uphold a nuanced version of claim a), but this comes apart from b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
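For reference, the Bayesian account of confirmation invoked here is the standard incremental one (a textbook statement, not a claim specific to this paper): evidence $E$ confirms hypothesis $H$ relative to background $K$ just in case it raises $H$'s probability,

$$E \text{ confirms } H \iff P(H \mid E, K) > P(H \mid K), \qquad P(H \mid E, K) = \frac{P(E \mid H, K)\,P(H \mid K)}{P(E \mid K)}.$$

The no-double-counting intuition then amounts to the claim that data already used in calibration cannot also satisfy this condition.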
TEORIE VĚDY / THEORY OF SCIENCE, 2017
Review: Emiliano Ippoliti, Fabio Sterpetti and Thomas Nickles, eds. Models and Inferences in Science. Cham: Springer, 2016, 256 pages.
Statistica Neerlandica, 2012
arXiv, 2021
It is often taken as axiomatic that a good model is one that compromises between bias and variance. Bias is measured by training cost, while the variance of a (say, regression) model is measured by the cost on a validation set. If reducing bias is the goal, one will fit as complex a model as necessary, but complexity is invariably coupled with variance: greater complexity implies greater variance. In practice, driving training cost to near zero does not pose a fundamental problem; a sufficiently complex decision tree, for instance, can drive training cost all the way to zero. The difficulty is usually in controlling the model's variance. We investigate various regression model frameworks, including generalized linear models, Cox proportional hazards models, and ARMA models, and illustrate how misspecifying a model affects the variance.
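As a hedged illustration of the point about training versus validation cost (toy data of my own, not the paper's experiments), a sufficiently deep decision tree drives training error to essentially zero while validation error remains substantial:

```python
# Toy illustration: deeper trees push training MSE toward zero, but
# validation MSE (a proxy for variance) does not follow.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.4, size=400)   # noisy target

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=1)
for depth in (2, 5, None):               # None: grow until leaves are pure
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    tr = mean_squared_error(y_tr, tree.predict(X_tr))
    va = mean_squared_error(y_va, tree.predict(X_va))
    print(f"max_depth={depth}: train MSE={tr:.3f}, validation MSE={va:.3f}")
```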
Studies in History and Philosophy of Science, 2011
In his 1966 paper "The Strategy of Model Building in Population Biology", Richard Levins argues that no single model in population biology can be maximally realistic, precise, and general at the same time. This is because these desirable model properties trade off against one another. Recently, philosophers have developed Levins' claims, arguing that trade-offs between these desiderata are generated by practical limitations on scientists, or by formal aspects of models and how they represent the world. However, this project is not complete. The trade-offs discussed by Levins had a noticeable effect on modelling in population biology, but not on other sciences. This raises the question of why such a difference holds. I claim that in order to explain this finding, we must pay due attention to the properties of the systems, or targets, modelled by the different branches of science.
HAU: Journal of Ethnographic Theory, 2017
NeuroImage, 2013
Logic Colloquium, 2000
European Journal for Philosophy of Science, 2009
Science in Context, 2014
The Brain in a Vat. Ed. S. Goldberg. Cambridge, 2015
Empirical Software Engineering, 2022
The British Journal for the Philosophy of Science, 1999