This document provides solutions to various computational modelling exercises, focusing on deriving equations to minimize the sum of squared deviations in different models. Solutions include fitting quadratic, polynomial, and logarithmic models to datasets related to physical traits (such as weight and length of fish), the pace of life in urban settings, and environmental phenomena (cricket chirping relative to temperature). The analysis emphasizes the least-squares criterion and explores model performance through error metrics, revealing insights on model adequacy in capturing data variability.
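As an illustration of the least-squares criterion these solutions rely on, here is a minimal Python sketch fitting a line to chirp-rate/temperature pairs; the data values are invented for the sketch and are not the exercise data:

    # Hypothetical illustration of the least-squares criterion: fit a line
    # (chirps per minute vs. temperature) by minimizing the sum of squared
    # deviations. Data values are invented.
    import numpy as np

    temp = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0])    # deg C (assumed)
    chirps = np.array([84.0, 97.0, 111.0, 124.0, 137.0, 150.0])

    # np.polyfit minimizes sum((chirps - (a*temp + b))**2) over a, b.
    a, b = np.polyfit(temp, chirps, deg=1)
    residuals = chirps - (a * temp + b)
    sse = float(np.sum(residuals**2))    # the quantity the criterion minimizes
    print(f"slope={a:.3f}, intercept={b:.3f}, SSE={sse:.4f}")

The same call with a higher deg fits the quadratic and polynomial models the exercises ask for, and the resulting SSE is the error metric used to compare model adequacy.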
Trends in Biochemical Sciences, 1985
Model fitting and mathematical models are becoming increasingly important in the biochemical sciences. Here the statistical procedures of linear and non-linear regression for parameter estimation and goodness-of-fit analysis are examined. The mechanics of non-linear regression are described for the Gauss-Newton method, with particular reference to the Michaelis-Menten model. Suitable computer software is suggested to entice those who wish to familiarize themselves with these powerful tools.
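A compact sketch of the Gauss-Newton iteration for the Michaelis-Menten model v = Vmax*S/(Km + S); the substrate/velocity values and the starting guess are hypothetical, and this is not the paper's own code:

    import numpy as np

    # Michaelis-Menten model: v = Vmax * S / (Km + S). Data invented.
    S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])          # substrate (assumed units)
    v = np.array([0.28, 0.52, 0.76, 0.98, 1.18, 1.28])     # hypothetical velocities

    def model(theta):
        Vmax, Km = theta
        return Vmax * S / (Km + S)

    def jacobian(theta):
        Vmax, Km = theta
        dVmax = S / (Km + S)                 # d f / d Vmax
        dKm = -Vmax * S / (Km + S) ** 2      # d f / d Km
        return np.column_stack([dVmax, dKm])

    theta = np.array([1.0, 1.0])             # rough starting guess
    for _ in range(20):                      # Gauss-Newton iterations
        r = v - model(theta)                 # residuals
        J = jacobian(theta)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)   # solve J step ~= r
        theta = theta + step
        if np.linalg.norm(step) < 1e-10:
            break
    print("Vmax, Km =", theta)

Each step solves the linearized least-squares problem, which is exactly the mechanics the paper describes.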
SSSA Book Series, 2002
Radiation Center of Osaka Prefecture Technical Report 4, 1984
[Note: This report has been superseded by: Tatsuo Tabata and Rinsuke Ito, "ALESQ, a Nonlinear Least-Squares Fit Code, and TSOLVE, a Nonlinear Best Approximation Code, Third Edition," Institute for Data Evaluation and Analysis Technical Report Supplement No. 1 (IDEA-TRS 1) (2022).]
ForsChem Research Reports, 2023
Nonlinear regression consists of finding the best possible model parameter values for a given homoscedastic mathematical structure with nonlinear functions of the model parameters. In this report, the second part of the series, the mathematical structure of models with nonlinear functions of their parameters is optimized, resulting in the minimum estimate of the model error variance. The uncertainty in the estimation of model parameters is evaluated using a linear approximation of the model about the optimal parameter values found. The homoscedasticity of model residuals must be evaluated to validate this important assumption. The model structure identification procedure is implemented in the R language and shown in the Appendix. Several examples are considered to illustrate the optimization procedure. In many practical situations, the optimal model obtained has heteroscedastic residuals. If the purpose of the model is only to describe the experimental observations, violation of the homoscedasticity assumption may not be critical. However, for explanatory or extrapolating models, the presence of heteroscedastic residuals may lead to flawed conclusions.
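The report's own implementation is in R (see its Appendix); below is a rough Python analogue of the residual homoscedasticity check it calls for, using an invented exponential-decay model and simulated data:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import chi2

    # Invented data for the model y = a * exp(-b * x).
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 40)
    y = 3.0 * np.exp(-0.7 * x) + rng.normal(0.0, 0.05, x.size)

    def f(x, a, b):
        return a * np.exp(-b * x)

    popt, _ = curve_fit(f, x, y, p0=(1.0, 1.0))
    resid = y - f(x, *popt)
    fitted = f(x, *popt)

    # Breusch-Pagan-style check: regress squared residuals on the fitted
    # values; a large n*R^2 suggests the residual variance varies with the
    # fit, i.e. heteroscedasticity.
    X = np.column_stack([np.ones_like(fitted), fitted])
    coef, *_ = np.linalg.lstsq(X, resid**2, rcond=None)
    pred = X @ coef
    r2 = 1.0 - np.sum((resid**2 - pred) ** 2) / np.sum((resid**2 - np.mean(resid**2)) ** 2)
    stat = x.size * r2
    print("BP statistic:", stat, "p ~", 1.0 - chi2.cdf(stat, df=1))

A small p-value flags heteroscedastic residuals, the situation the report warns about for explanatory or extrapolating models.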
We consider the problem of regression when the study variable depends on more than one explanatory or independent variable, called the multiple linear regression model. This model generalizes simple linear regression in two ways. It allows the mean function E(y) to depend on more than one explanatory variable and to have shapes other than straight lines, although it does not allow for arbitrary shapes.
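A minimal sketch of fitting such a model by least squares via the design matrix; the two regressors and responses are invented:

    import numpy as np

    # Multiple linear regression E(y) = b0 + b1*x1 + b2*x2, fitted by
    # least squares. Data invented for the sketch.
    x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0])
    y  = np.array([4.1, 5.0, 9.2, 9.8, 14.1])

    X = np.column_stack([np.ones_like(x1), x1, x2])   # intercept + two regressors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # solves min ||y - X b||^2
    print("b0, b1, b2 =", beta)

Adding a column such as x1**2 to X gives a curved mean function while the model stays linear in its parameters, which is the sense in which shapes other than straight lines are allowed.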
This book has been prepared for beginners to help them understand basic to advanced functionality of MATLAB. After completing Chapter 1 (which includes an explanation of the MATLAB language) you will find yourself at a moderate level of expertise in using MATLAB, from where you can take yourself to the next levels. On the other side, in spite of the availability of highly innovative tools in statistics, the main tool of the applied statistician remains the linear model. The linear model involves the simplest and seemingly most restrictive statistical properties: independence, normality, constancy of variance, and linearity. However, the model and the statistical methods associated with it are surprisingly versatile and robust. More importantly, mastery of the linear model is a prerequisite to work with advanced statistical tools because most advanced tools are generalizations of the linear model. The linear model is thus central to the training of any statistician, applied or theoretical. This book develops the basic theory of linear models for regression, analysis of variance, and analysis of covariance. Applications are illustrated by examples and problems using real data. This combination of theory and applications will prepare the reader to further explore the literature and to more correctly interpret the output from a linear models computer package and MATLAB. This introductory linear models book is designed primarily for a one-semester course for advanced undergraduates or MS students. It includes more material than can be covered in one semester so as to give an instructor a choice of topics and to serve as a reference book for researchers who wish to gain a better understanding of regression and analysis of variance. The book would also serve well as a text for PhD classes in which the instructor is looking for a one-semester introduction, and it would be a good supplementary text or reference for a more advanced PhD class for which the students need to review the basics on their own. Our overriding objective in the preparation of this book has been clarity of exposition. We hope that students, instructors, researchers, and practitioners will find this linear models text more comfortable than most. In the final stages of development, we asked students for…
Research Institute for Advanced Science and Technology Osaka Prefecture University Technical Report No. 2, 1997
[Note: This report has been superseded by: Tatsuo Tabata and Rinsuke Ito, "ALESQ, a Nonlinear Least-Squares Fit Code, and TSOLVE, a Nonlinear Best Approximation Code, Third Edition," Institute for Data Evaluation and Analysis Technical Report Supplement No. 1 (IDEA-TRS 1) (2022).]
Regression models form the core of the discipline of econometrics. Although econometricians routinely estimate a wide variety of statistical models, using many different types of data, the vast majority of these are either regression models or close relatives of them. In this chapter, we introduce the concept of a regression model, discuss several varieties of them, and introduce the estimation method that is most commonly used with regression models, namely, least squares. This estimation method is derived by using the method of moments, which is a very general principle of estimation that has many applications in econometrics. The most elementary type of regression model is the simple linear regression model, which can be expressed by the following equation: y_t = β_1 + β_2 X_t + u_t. (1.01) The subscript t is used to index the observations of a sample. The total number of observations, also called the sample size, will be denoted by n. Thus, for a sample of size n, the subscript t runs from 1 to n. Each observation comprises an observation on a dependent variable, written as y_t for observation t, and an observation on a single explanatory variable, or independent variable, written as X_t. The relation (1.01) links the observations on the dependent and the explanatory variables for each observation in terms of two unknown parameters, β_1 and β_2, and an unobserved error term, u_t. Thus, of the five quantities that appear in (1.01), two, y_t and X_t, are observed, and three, β_1, β_2, and u_t, are not. Three of them, y_t, X_t, and u_t, are specific to observation t, while the other two, the parameters, are common to all n observations. Here is a simple example of how a regression model like (1.01) could arise in economics. Suppose that the index t is a time index, as the notation suggests. Each value of t could represent a year, for instance. Then y_t could be household consumption as measured in year t, and X_t could be measured disposable income of households in the same year. In that case, (1.01) would represent what in elementary macroeconomics is called a consumption function.
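A worked sketch of estimating β_1 and β_2 in (1.01) by least squares, using the sample analogues of the moment conditions E[u_t] = 0 and E[X_t u_t] = 0; the consumption/income figures are invented:

    import numpy as np

    # Hypothetical yearly disposable income (X_t) and consumption (y_t).
    X = np.array([100.0, 110.0, 125.0, 140.0, 160.0])
    y = np.array([ 92.0, 100.0, 112.0, 123.0, 138.0])

    # Sample-moment analogues of E[u_t] = 0 and E[X_t u_t] = 0 yield the
    # familiar OLS formulas:
    beta2 = np.sum((X - X.mean()) * (y - y.mean())) / np.sum((X - X.mean()) ** 2)
    beta1 = y.mean() - beta2 * X.mean()
    u = y - beta1 - beta2 * X    # residuals; they sum to zero by construction
    print(f"beta1={beta1:.3f}, beta2={beta2:.3f}, sum(u)={u.sum():.2e}")

Here beta2 would be read as the marginal propensity to consume in the consumption-function example.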
Philosophy of Science, 2007
The main aim of this paper is to revisit the curve-fitting problem using the reliability of inductive inference as a primary criterion for the 'fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve-fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the primary criterion for 'best', the mathematical approximation perspective undermines the reliability-of-inference objective by giving rise to selection rules which pay insufficient attention to capturing the regularities in the data. A more appropriate framework is offered by the error-statistical approach, where (i) statistical adequacy provides the criterion for assessing when a curve captures the regularities in the data adequately, and (ii) the relevant error probabilities can be used to assess the reliability of inductive inference. Broadly speaking, the fittest (statistically adequate) curve is not determined by the smallness of its residuals, tempered by simplicity or other pragmatic criteria, but by the non-systematic (e.g. white-noise) nature of its residuals. The advocated error-statistical arguments are illustrated by comparing the Kepler and Ptolemaic models on empirical grounds.
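One simple illustration of that residual-based criterion (not the paper's own analysis): check whether a fitted curve's residuals look non-systematic via their lag-1 autocorrelation. The model and data below are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 50)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)

    # Deliberately underfit with a straight line, then inspect residuals.
    a, b = np.polyfit(x, y, deg=1)
    resid = y - (a * x + b)

    # Lag-1 autocorrelation: near zero for white-noise-like residuals,
    # large when the residuals still carry systematic structure.
    r = resid - resid.mean()
    rho1 = np.sum(r[1:] * r[:-1]) / np.sum(r**2)
    print("lag-1 residual autocorrelation:", rho1)

The line can have small residuals in absolute terms and still fail this check, which is the distinction the paper draws between goodness-of-fit and statistical adequacy.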
Applied Mathematics and Computation, 1996
The most commonly used numerical optimization techniques include the Gauss-Newton and Newton-Raphson methods, gradient methods (including steepest ascent and descent), and the Marquardt algorithm. Kumar [1] has recently proposed a new technique based on optimum exponential regression. Another noniterative procedure, proposed in this paper, is based on the principle of internal regression. In this paper, we compare these methods using real data sets.
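For readers who want to reproduce this kind of method comparison, SciPy exposes Levenberg-Marquardt alongside a trust-region alternative; a minimal sketch on an invented exponential model (not the paper's data):

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 4.0, 30)
    y = 2.0 * np.exp(-1.3 * x) + rng.normal(0.0, 0.02, x.size)

    def resid(theta):
        a, b = theta
        return y - a * np.exp(-b * x)

    # 'lm' wraps MINPACK's Levenberg-Marquardt; 'trf' is a trust-region
    # alternative. Running both on the same residual function mirrors the
    # kind of comparison the paper performs.
    for method in ("lm", "trf"):
        sol = least_squares(resid, x0=(1.0, 1.0), method=method)
        print(method, sol.x, sol.cost)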
Journal of Business & Economic Statistics, 1988
Physica A: Statistical Mechanics and its Applications, 2007
This paper examines the performance of various methods for fitting the curves of two explicit functions. The approximating test function is considered to be one-parametric first and multi-parametric directly after. The widely used least-squares method of constituting total deviations, based on the Euclidean norm, is not the only choice: methods based on a q-norm, for q ≥ 1, can also be defined, and emphasis is placed on these methods, especially for q = 1. Furthermore, any functional F fulfilling the norm's preconditions induces a metric for deviations, supporting a respective fitting method through the minimization of total deviations. The paper also addresses the sensitivity of each method, a measure of how abrupt the variation of the total deviations is near its minimum. We show that the least-squares method does not necessarily exhibit the largest sensitivity relative to the alternative methods based on other q-norms. In addition, we present the explicit general expression of the normal equations, from which the fitting can be achieved, and of the sensitivity, from which one can extract the suitable norm for a given model.
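A rough sketch of fitting under different q-norms by direct minimization of the total deviations; the linear model, data, and outlier are invented, and the derivative-free solver is a convenience choice rather than the paper's procedure:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 1.0, 40)
    y = 1.5 * x + 0.3 + rng.normal(0.0, 0.05, x.size)
    y[5] += 1.0                      # one outlier, where q=1 and q=2 differ most

    def total_deviation(theta, q):
        a, b = theta
        return np.sum(np.abs(y - (a * x + b)) ** q)

    # q = 2 is ordinary least squares; q = 1 (least absolute deviations) is
    # far less sensitive to the outlier. Nelder-Mead sidesteps the
    # non-differentiability of the q = 1 objective at zero residuals.
    for q in (1.0, 2.0):
        sol = minimize(total_deviation, x0=(1.0, 0.0), args=(q,), method="Nelder-Mead")
        print(f"q={q}: a={sol.x[0]:.3f}, b={sol.x[1]:.3f}")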