2020, arXiv: Methodology
Studies assessing exposure often rely on multiple measurements of the outcome. In previous work, using a model first proposed by Buonaccorsi (1991), we showed that combining direct (e.g., biomarker) and indirect (e.g., self-report) measurements provides a more accurate picture of true exposure than estimates obtained from a single type of measurement. In this article, we propose a valuable tool for the efficient design of studies that include both direct and indirect measurements of a relevant outcome. Based on data from a pilot or preliminary study, the tool, which is available online as a shiny app, can be used to compute: (1) the sample size required to achieve a given statistical power, while optimizing the percentage of participants who should provide direct measures of exposure (biomarkers) in addition to the indirect (self-report) measures provided by all participants; (2) the ideal number of replicates; and (3) the allocation of resources to intervention and control arms...
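As a rough, hypothetical illustration of the design question this tool answers (not the authors' shiny app; the function name, error SDs, and parameter values below are all assumptions), the following base-R simulation estimates power for a two-arm study in which every participant provides a self-report measure and a fraction p also provides a less noisy biomarker:

set.seed(1)
# power_for_fraction(): simulated power when a fraction p of N participants
# contribute a biomarker (error SD sigma_bio) instead of only self-report
# (error SD sigma_sr); the sigma values are illustrative assumptions.
power_for_fraction <- function(N, p, delta, n_sim = 1000,
                               sigma_sr = 1, sigma_bio = 0.3) {
  n_bio <- round(p * N)
  mean(replicate(n_sim, {
    arm  <- rep(0:1, length.out = N)        # alternate treatment/control
    true <- delta * arm + rnorm(N)          # true exposure
    y <- true + rnorm(N, sd = sigma_sr)     # self-report for everyone
    if (n_bio > 0)                          # biomarker replaces self-report
      y[1:n_bio] <- true[1:n_bio] + rnorm(n_bio, sd = sigma_bio)
    summary(lm(y ~ arm))$coefficients["arm", "Pr(>|t|)"] < 0.05
  }))
}
sapply(c(0, 0.25, 0.5, 1),
       function(p) power_for_fraction(N = 200, p = p, delta = 0.3))

Sweeping p this way shows the power gained from each additional biomarker; combined with per-measurement costs, that is the ingredient needed to optimize the biomarker fraction.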
Bioinformatics, 2015
Motivation: Very large studies are required to provide sufficiently large sample sizes for adequately powered association analyses. This can be an expensive undertaking, and it is important that an accurate sample size is identified. For more realistic sample size calculation and power analysis, the impact of unmeasured aetiological determinants and the quality of measurement of both outcome and explanatory variables should be taken into account. Conventional methods of power analysis use closed-form solutions that are not flexible enough to cater for all of these elements easily, and they often result in a potentially substantial overestimation of the actual power. Results: In this article, we describe the Estimating Sample-size and Power in R by Exploring Simulated Study Outcomes (ESPRESSO) tool, which allows assessment errors to be incorporated into power calculations under various biomedical scenarios. We also report a real-world analysis in which we used this tool to answer an important strategic question for an existing cohort. Availability and implementation: The software is available for online calculation and download at
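The overestimation this abstract warns about is easy to reproduce in a few lines of base R. The following is a minimal sketch of the idea (not ESPRESSO itself), assuming classical measurement error with reliability 0.5 on a continuous exposure:

n <- 300; beta <- 0.25
# closed-form power for the slope test, assuming exposure measured without error
analytic <- pnorm(sqrt(n) * beta - qnorm(0.975))
# simulated power when the analysis uses an error-prone exposure (reliability 0.5)
set.seed(1)
simulated <- mean(replicate(2000, {
  x <- rnorm(n)               # true exposure
  w <- x + rnorm(n)           # observed exposure, reliability 0.5
  y <- beta * x + rnorm(n)    # outcome depends on the true exposure
  summary(lm(y ~ w))$coefficients[2, 4] < 0.05
}))
c(analytic = analytic, simulated = simulated)   # simulated power is clearly lower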
International Journal of Statistics in Medical Research, 2014
When designing studies to assess occupational exposures, one persistent decision problem is the choice between two technical methods, one expensive and statistically efficient, the other cheap and statistically inefficient. While a few studies have attempted to determine which of the two methods yields the more cost-efficient design, no study has optimized the fraction of the expensive, efficient method within a combined technique intended for long-run exposure assessment studies. The purpose of this study was therefore to optimize the fraction of expensive, efficient measurements by solving a cost minimization problem subject to a precision constraint. When the total number of measurements is unconstrained, the total cost of a working-posture assessment study is minimized by performing only the expensive direct technical measurements. When the total number of measurements is fixed, however, a combined technique can be optimal, depending on the constraints placed on the precision and on the research budget.
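A stylized version of this optimization problem (notation and numbers mine, not the paper's) fits in a few lines: with n1 direct measurements of variance v1 and unit cost c1, and n2 = N - n1 cheap measurements of variance v2 and cost c2, the inverse-variance-weighted mean has variance 1 / (n1/v1 + n2/v2), and for fixed N we can search for the cheapest allocation meeting a precision target V:

# alloc(): cheapest number of direct measurements n1 (0..N) such that the
# pooled precision n1/v1 + (N - n1)/v2 meets the target 1/V.
alloc <- function(N, v1 = 1, v2 = 4, c1 = 10, c2 = 1, V = 0.05) {
  n1   <- 0:N
  ok   <- (n1 / v1 + (N - n1) / v2) >= 1 / V   # precision constraint
  cost <- c1 * n1 + c2 * (N - n1)              # total budget
  if (!any(ok)) return(NA)                     # target unattainable with N
  n1[ok][which.min(cost[ok])]
}
alloc(N = 60)   # with these illustrative inputs, 7 direct measurements suffice

When N is unconstrained the constraint is linear in n1 and n2, so the whole budget goes to the method with the smaller cost-variance product c*v, matching the abstract's first finding.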
Journal of Research on Educational Effectiveness, 2013
This paper and the accompanying tool are intended to complement existing power analysis resources by offering a tool based on the Minimum Detectable Effect Size (MDES) framework that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and group-random assignment designs and for common quasi-experimental designs. The paper and accompanying tool cover the computation of minimum detectable effect sizes under the following study designs: individual random assignment designs, hierarchical random assignment designs (2-4 levels), block random assignment designs (2-4 levels), regression discontinuity designs (6 types), and short interrupted time-series designs. In each case, the discussion and accompanying tool consider the key factors associated with statistical power and minimum detectable effect sizes, including the level at which treatment occurs and the statistical models (e.g., fixed-effect and random-effect) used in the analysis. The tool also includes a module that estimates, for one- and two-level random assignment designs, the minimum sample sizes required for studies to attain user-defined minimum detectable effect sizes.
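A hedged sketch of the core computation for the simplest case, an individual random assignment design, using the standard multiplier formulation MDES = M * sqrt((1 - R^2) / (P(1 - P)N)), where P is the fraction treated, R^2 the variance explained by covariates, and M the sum of two t-quantiles (about 2.8 for alpha = .05, two-tailed, 80% power):

# mdes_individual(): minimum detectable effect size (in SD units) for an
# individual random assignment design; defaults are conventional choices.
mdes_individual <- function(N, P = 0.5, R2 = 0, alpha = 0.05, power = 0.80) {
  M <- qt(1 - alpha / 2, df = N - 2) + qt(power, df = N - 2)
  M * sqrt((1 - R2) / (P * (1 - P) * N))
}
mdes_individual(N = 400)             # ~0.28 with no covariates
mdes_individual(N = 400, R2 = 0.5)   # covariates shrink the MDES by sqrt(1 - R2)

Broadly speaking, the hierarchical and blocked designs covered by the tool extend this same formula with additional terms for clustering and blocking.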
Difficulty in obtaining a correct measurement of an individual's long-term exposure is a major challenge in epidemiological studies that investigate the association between exposures and health outcomes. Measurement error in an exposure biases the association between the exposure and a disease outcome. Usually, an internal validation study is required to adjust for exposure measurement error; this is challenging when such a study is not available. We propose a general method for adjusting for measurement error when multiple exposures are measured with correlated errors (a multivariate method) and illustrate the method using real data. We compare the results from the multivariate method with those obtained using a method that ignores measurement error (the naive method) and a method that ignores correlations between the errors and between the true exposures (the univariate method). We find that ignoring measurement error leads to bias and underestimates the standard error. A sensitivity analysis shows that the magnitude of the adjustment in the multivariate method is sensitive to the magnitude and sign of the measurement error and to the correlation between the errors. We conclude that the multivariate method can be used to adjust for bias in the outcome-exposure association when multiple exposures are measured with correlated errors and no internal validation study is available. The method is also useful for conducting sensitivity analyses on the magnitude of measurement error and the sign of the error correlation.
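To make the bias and the adjustment concrete, here is a self-contained simulation of the multivariate idea (illustrative notation and values, not the paper's data): two true exposures are observed with correlated classical errors, a naive fit is biased, and the standard regression-calibration correction beta = Sigma_x^{-1} (Sigma_x + Sigma_u) beta_naive recovers the truth, assuming the error covariance Sigma_u is known, e.g. from prior studies:

set.seed(1)
n <- 1e5
Sigma_x <- matrix(c(1, 0.5, 0.5, 1), 2)      # covariance of true exposures
Sigma_u <- matrix(c(0.5, 0.3, 0.3, 0.5), 2)  # covariance of correlated errors
X <- matrix(rnorm(2 * n), n) %*% chol(Sigma_x)   # true exposures
U <- matrix(rnorm(2 * n), n) %*% chol(Sigma_u)   # errors
beta <- c(0.4, -0.2)
y <- drop(X %*% beta) + rnorm(n)
W <- X + U                                    # what we actually observe
beta_naive <- coef(lm(y ~ W))[-1]             # biased estimates
beta_adj   <- solve(Sigma_x) %*% (Sigma_x + Sigma_u) %*% beta_naive
cbind(true = beta, naive = beta_naive, adjusted = drop(beta_adj))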
2013
This paper complements existing power analysis tools by offering tools to compute minimum detectable effect sizes (MDES) for existing studies and to estimate minimum required sample sizes (MRSS) for studies under design. The tools that accompany this paper support estimates of MDES or MRSS for 21 different study designs: 14 random assignment designs (6 in which individuals are randomly assigned to treatment or control condition and 8 in which clusters of individuals are randomly assigned to condition, with models differing by whether the sample was blocked prior to random assignment and by whether the analytic models assume constant, fixed, or random effects across blocks or assignment clusters); and 7 quasi-experimental designs (an interrupted time series design and 6 regression discontinuity designs that vary depending on whether the sample was blocked prior to randomization, whether individuals or clusters of individuals are assigned to treatment or ...
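As a companion to the MDES sketch above, inverting the same formula numerically gives the minimum required sample size for a target effect, which is what the MRSS side of such tools does for the simplest design (again a hedged sketch, not the accompanying tool's code):

# mrss_individual(): smallest N whose MDES falls at or below a target effect
# size, for an individual random assignment design.
mrss_individual <- function(target_es, P = 0.5, R2 = 0,
                            alpha = 0.05, power = 0.80) {
  mdes <- function(N) {
    M <- qt(1 - alpha / 2, df = N - 2) + qt(power, df = N - 2)
    M * sqrt((1 - R2) / (P * (1 - P) * N))
  }
  ceiling(uniroot(function(N) mdes(N) - target_es, c(8, 1e7))$root)
}
mrss_individual(target_es = 0.20)   # about 790 individuals to detect d = 0.20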
Psychological Science
The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
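The core problem is easy to demonstrate with a short simulation (illustrative only; for the authors' corrected procedures, use the BUCSS package itself). When only significant results are published, published effect sizes from a modest study overestimate the true effect, and a follow-up sized on them is underpowered:

set.seed(2)
d_true <- 0.3; n <- 40                 # per-group size of the prior study
d_pub <- mean(replicate(1e4, {
  g1 <- rnorm(n, mean = d_true); g0 <- rnorm(n)
  if (t.test(g1, g0, var.equal = TRUE)$p.value < 0.05)
    (mean(g1) - mean(g0)) / sqrt((var(g1) + var(g0)) / 2)   # published d
  else NA_real_                                             # unpublished
}), na.rm = TRUE)
d_pub                                  # well above the true 0.3
n_follow <- ceiling(power.t.test(delta = d_pub, power = 0.80)$n)
power.t.test(n = n_follow, delta = d_true)$power   # far below the nominal 0.80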
Behavior Research Methods, Instruments, & Computers, 2002
Statistics in Medicine, 1988
In estimating the sample size for a case-control study, epidemiologic texts present formulae that require a binary exposure of interest. Frequently, however, important exposures are continuous, and dichotomization may result in a 'not exposed' category that has little practical meaning. In addition, if risks vary monotonically with exposure, then dichotomization will obscure risk effects and require a greater number of subjects to detect differences in the exposure distributions among cases and controls. Starting from the usual score statistic for detecting differences in exposure, this paper develops sample size formulae for case-control studies with arbitrary exposure distributions; this includes both continuous and dichotomous exposure measurements as special cases. The score statistic is appropriate for general differentiable models for the relative odds and, in particular, for the two forms commonly used in prospective disease occurrence models: (1) the odds of disease increase linearly with exposure; or (2) the odds increase exponentially with exposure. Under these two models we illustrate the calculation of sample sizes for a hypothetical case-control study of lung cancer among non-smokers who are exposed to radon decay products at home.
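For model (2), a back-of-the-envelope version (a textbook approximation, not the paper's exact score-based formula): with a rare disease, a log-linear odds model, and exposure distributed N(mu, sigma^2) in the source population, exposure among cases is approximately N(mu + beta*sigma^2, sigma^2), so the design reduces to a two-sample comparison of means:

# n_per_group(): approximate number of cases (= controls) needed to detect a
# log odds ratio beta per unit of a normal exposure with SD sigma.
n_per_group <- function(beta, sigma = 1, alpha = 0.05, power = 0.90) {
  delta <- beta * sigma^2   # shift in mean exposure among cases
  ceiling(2 * (qnorm(1 - alpha / 2) + qnorm(power))^2 * sigma^2 / delta^2)
}
n_per_group(beta = log(1.5))   # e.g. odds ratio 1.5 per unit of exposure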
BMC Medical Research Methodology, 2016
Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, e...
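A minimal sketch of the prior-information idea in the univariate case (hypothetical numbers; the paper additionally handles a mismeasured confounder): posit an attenuation factor lambda for the self-report instrument from prior validation literature and report the corrected association across a plausible range of lambda:

beta_naive <- 0.12                  # hypothetical naive diet-disease estimate
lambda <- seq(0.4, 0.9, by = 0.1)   # assumed validity (attenuation) factors
data.frame(lambda, beta_adjusted = beta_naive / lambda)   # sensitivity table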
Turkiye Klinikleri Journal of Biostatistics
Objective: The aim of sample size calculation and power analysis is to determine the minimum number of individuals that can represent the population during the planning phase of a study. Since the appropriate statistical methods differ for each research plan, the required sample size and power calculations differ as well. This study presents a web-based application that calculates sample size and power for hypothesis tests, diagnostic tests, and correlation and regression analyses using the open-source R Shiny package, and guides researchers with worked examples. Material and Method: The software was developed with R packages including shiny, shinydashboard, pwr, powerAnalysis, powerMediation, MKmisc and rhandsontable; scripts were written for calculations not covered by these packages. Results: Modules were developed for the calculation of sample size and power, and screen images of example results are given. The application is accessible through http://biostatapps.inonu.edu.tr/WSSPAS. In future studies, we aim to strengthen the software further by adding modules that calculate sample size and power for multivariate statistical and machine learning methods.
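For flavor, one of the calculations such an app exposes can be reproduced directly with the pwr package named in the abstract (the numbers are arbitrary examples):

library(pwr)
# sample size per group for a two-sample t test, medium effect, 80% power
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")
# or, fixing n, solve for the achieved power instead
pwr.t.test(n = 50, d = 0.5, sig.level = 0.05, type = "two.sample")$power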
Journal of Exposure Science and Environmental Epidemiology, 2005
Environmental Health Perspectives, 2005
Journal of Experimental Social Psychology
American Journal of Epidemiology, 1997
American Journal of Epidemiology, 1997
European Journal of Epidemiology, 2007
Journal of Research on Educational Effectiveness, 2015
Scandinavian Journal of Work, Environment & Health, 2001
F1000Research, 2020
Journal of the Royal Statistical Society: Series D (The Statistician), 1998
Practical Assessment, Research & Evaluation, 2009
Chemometrics and Intelligent Laboratory Systems, 1997
Environment International, 1993
Critical Reviews in Food Science and Nutrition, 2010