2007
A common practice in scientific experimentation in areas such as medicine, pharmacy, and nutrition is to measure each sample unit three times (in triplicate) or, more generally, m times (in m-plicate) and take the average of these measurements as the response variable. This is generally done to improve the precision of model parameter estimates. When the objective is to estimate the population mean, we use a random effects model to show that the efficiency of working with m-plicates depends on the magnitude of the intraclass correlation coefficient, which measures the contribution of the variance between sample units to the total variance. We show that above certain values of this parameter, the use of m-plicates does not bring a significant improvement (say, of 10% or more) to the precision of the estimates. Additionally, taking the costs of sampling units and of making measurements into account, we compare sampling schemes with and without m-plicates designed to obtain fixed-width confidence intervals for the mean. We illustrate the results with a practical example.
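As a rough aid to reading this abstract, here is a minimal R sketch of the variance calculation it alludes to, assuming the standard one-way random effects decomposition (total variance split into between-unit and within-unit parts, with intraclass correlation rho); the helper name variance_ratio is ours, not the paper's:

    # Var(mean of n unit averages, each an m-plicate) = sigma2 * (rho + (1 - rho)/m) / n,
    # so the variance ratio relative to single measurements (m = 1) is rho + (1 - rho)/m.
    variance_ratio <- function(rho, m) rho + (1 - rho) / m
    variance_ratio(0.80, 3)   # 0.867: triplicates shrink the variance by only ~13%
    variance_ratio(0.95, 3)   # 0.967: under 4% gain when the ICC is high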
The American Statistician, 2009
Correlated data frequently arise in contexts such as repeated measures and meta-analysis. The amount of information in such data depends not only on the sample size, but also on the structure and strength of the correlations among observations from the same independent block. We discuss a general concept, the effective sample size, as a way of quantifying the amount of information in such data. It is defined as the sample size one would need in an independent sample to equal the amount of information in the actual correlated sample. This concept is widely applicable, for Gaussian data and beyond, and provides important insight. For example, it helps explain why fixed-effects and random-effects inferences from meta-analytic data can be so radically divergent. Further, we show that in some cases the amount of information is bounded, even when the number of measures per independent block approaches infinity. We use the method to devise a new denominator degrees-of-freedom method for fixed-effects testing. It is compared with the classical Satterthwaite and Kenward-Roger methods, both to assess performance and, more importantly, to enhance insight. A key feature of the proposed degrees-of-freedom method is that, unlike the others, it can also be used for non-Gaussian data.
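For intuition, a minimal R sketch of the effective sample size in the special case of exchangeable (compound-symmetric) correlation, which is one standard closed form and not the paper's general definition:

    # Effective sample size of one block of m equicorrelated observations
    # (correlation rho) when estimating the mean: n_eff = m / (1 + (m - 1) * rho).
    n_eff <- function(m, rho) m / (1 + (m - 1) * rho)
    n_eff(10, 0.5)    # 1.82: ten correlated measures inform like ~2 independent ones
    n_eff(1e6, 0.5)   # ~2: as m grows, n_eff is bounded above by 1/rho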
Contemporary Clinical Trials, 2011
2000
When designing a clinical trial, investigators commonly feel that they are fighting (or are caught in the middle of) a two-front war. One front is driven by the requirement that the research effort should be productive, the other by statistical concerns.
International Journal of Statistics and Probability
We propose three methods for finding an approximate confidence interval for the variance of the random effects in a one-way analysis of variance model under a completely randomized design. We compare the proposed methods with several others reported in the literature, using three criteria for the empirical comparisons: the mean width of the confidence interval, the variance of the width, and the coverage probability. We use simulation and Monte Carlo techniques to perform the comparison study, with the R language facilitating the simulation procedures. We find that one of the proposed methods is generally superior to the others.
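To make the comparison design concrete, here is a hedged R sketch of this kind of Monte Carlo coverage study; the interval used is a simple large-sample Wald interval for the between-group variance, chosen only for illustration and not one of the paper's proposed methods:

    set.seed(1)
    k <- 30; n <- 5                       # number of groups and replicates per group
    sigma_a2 <- 1; sigma_e2 <- 1          # true between- and within-group variances
    covered <- replicate(2000, {
      a <- rnorm(k, 0, sqrt(sigma_a2))                    # random group effects
      y <- rep(a, each = n) + rnorm(k * n, 0, sqrt(sigma_e2))
      g <- factor(rep(seq_len(k), each = n))
      tab <- anova(lm(y ~ g))
      msa <- tab["g", "Mean Sq"]; mse <- tab["Residuals", "Mean Sq"]
      est <- (msa - mse) / n                              # ANOVA estimator of sigma_a^2
      se  <- sqrt((2 * msa^2 / (k - 1) + 2 * mse^2 / (k * (n - 1))) / n^2)
      abs(est - sigma_a2) <= 1.96 * se                    # does the interval cover?
    })
    mean(covered)                         # empirical coverage probability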
Journal of Biopharmaceutical Statistics, 2007
Journal of Biopharmaceutical Statistics, 2013
Biometrika, 2015
Meta-analysis is widely used to compare and combine the results of multiple independent studies. To account for between-study heterogeneity, investigators often employ random-effects models, under which the effect sizes of interest are assumed to follow a normal distribution. It is common to estimate the mean effect size by a weighted linear combination of study-specific estimators, with the weight for each study being inversely proportional to the sum of the variance of the effect-size estimator and the estimated variance component of the random-effects distribution. Because the estimator of the variance component involved in the weights is random and correlated with study-specific effect-size estimators, the commonly adopted asymptotic normal approximation to the meta-analysis estimator is grossly inaccurate unless the number of studies is large. When individual participant data are available, one can also estimate the mean effect size by maximizing the joint likelihood. We establish the asymptotic properties of the meta-analysis estimator and the joint maximum likelihood estimator when the number of studies is either fixed or increases at a slower rate than the study sizes and we discover a surprising result: the former estimator is always at least as efficient as the latter. We also develop a novel resampling technique that improves the accuracy of statistical inference. We demonstrate the benefits of the proposed inference procedures using simulated and empirical data.
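For concreteness, a minimal R sketch of the weighting scheme described above, using the DerSimonian-Laird moment estimator of the between-study variance (one common choice; the paper's analysis applies to this class of estimators generally):

    meta_re <- function(est, v) {
      w_fe <- 1 / v                                     # fixed-effects weights
      mu_fe <- weighted.mean(est, w_fe)
      Q <- sum(w_fe * (est - mu_fe)^2)                  # Cochran's Q statistic
      tau2 <- max(0, (Q - (length(est) - 1)) /
                     (sum(w_fe) - sum(w_fe^2) / sum(w_fe)))
      w <- 1 / (v + tau2)                               # random-effects weights
      c(mu = weighted.mean(est, w), se = sqrt(1 / sum(w)))
    }
    meta_re(est = c(0.30, 0.10, 0.50, 0.20), v = c(0.02, 0.05, 0.04, 0.03))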
BMC Medical Research Methodology, 2014
Background: The intraclass correlation coefficient (ICC) is widely used in biomedical research to assess the reproducibility of measurements between raters, labs, technicians, or devices. For example, in an inter-rater reliability study, a high ICC value means that noise variability (between raters and within raters) is small relative to variability from patient to patient. A confidence interval or Bayesian credible interval for the ICC is a commonly reported summary. Such intervals can be constructed using either frequentist or Bayesian methodology.

Methods: This study examines the performance of three methods for constructing an interval in a two-way, crossed, random effects model without interaction: the Generalized Confidence Interval method (GCI), the Modified Large Sample method (MLS), and a Bayesian method based on a noninformative prior distribution (NIB). Guidance is provided on choosing an interval construction method based on study design, sample size, and normality of the data. We compare the coverage probabilities and widths of the different interval methods.

Results: We show that, for the two-way, crossed, random effects model without interaction, care is needed in selecting an interval method because the interval estimates do not always have the properties the user expects. While the different methods generally perform well when there are a large number of levels of each factor, large differences between the methods emerge when the number of levels of one or more factors is limited. In addition, all methods are shown to lack robustness to certain hard-to-detect violations of normality when the sample size is limited.

Conclusions: Decision rules and software programs for interval construction are provided for practical implementation in the two-way, crossed, random effects model without interaction. All interval methods perform similarly when the data are normal and there are sufficient numbers of levels of each factor. The MLS and GCI methods outperform the NIB method when one of the factors has a limited number of levels and the data are normally or nearly normally distributed. None of the methods works well if the number of levels of a factor is limited and the data are markedly non-normal. The software programs are implemented in the popular R language.
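As context for the model being compared, a minimal R sketch of an ICC point estimate in the two-way, crossed, random effects model without interaction, via REML variance components; the lme4 package and the layout of the ratings data frame are assumptions of this sketch, and the paper's GCI, MLS, and NIB intervals build on such components rather than stopping at a point estimate:

    # ICC = var(subject) / (var(subject) + var(rater) + var(error))
    library(lme4)
    # ratings: one row per measurement, with columns subject, rater, y (assumed layout)
    fit <- lmer(y ~ 1 + (1 | subject) + (1 | rater), data = ratings)
    vc  <- as.data.frame(VarCorr(fit))          # variance components by grouping factor
    v   <- setNames(vc$vcov, vc$grp)
    icc <- unname(v["subject"] / (v["subject"] + v["rater"] + v["Residual"]))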
Communications in Statistics - Simulation and Computation, 2003
Although studies of the relationship between risk factors measured at
arXiv (Cornell University), 2024
Biometrical Journal, 2009
Advances in Methods and Practices in Psychological Science., 2018
Statistics in Medicine, 1987
arXiv: Methodology, 2020
Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 2009
Nephrology Dialysis Transplantation, 2010
Statistics in Medicine, 2002
Statistics in Medicine, 2008
Multivariate Behavioral Research, 2005