Critical Care, 2002
The present review introduces the notion of statistical power and the hazard of under-powered studies. The problem of how to calculate an ideal sample size is also discussed within the context of factors that affect power, and specific methods for the calculation of sample size are presented for two common scenarios, along with extensions to the simplest case.
Two of the most important questions in any research study are how the subjects are selected and how many subjects are required. Why are these two issues given so much importance? Let us take the case of a randomized controlled trial (RCT) for the treatment of hypertension and try to understand this. In an RCT, to show a difference between two drugs used to treat hypertension, the researchers randomize hypertensive patients into two groups. Both groups are given treatment and are evaluated at the end of the study to compare the desired outcome: a reduction in blood pressure below a particular level. Suppose the study gives an inconclusive result; the researchers might then advocate against the use of the new drug. One issue relating to the extrapolation of this result is the size of the sample from which it was generated. If the result is generated from a large sample then often the results will be close to the truth provided the ...
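To make the stakes concrete, the sketch below shows one conventional way such an RCT could be sized: a normal-approximation sample size per arm for comparing the proportion of patients whose blood pressure falls below the target level on each drug. The proportions (0.40 vs 0.55), significance level and power are hypothetical values chosen purely for illustration; they are not taken from the text above.

```python
# Hypothetical illustration: per-arm sample size for comparing two proportions
# (e.g., the share of patients reaching the blood-pressure target on each drug).
from scipy.stats import norm

def n_per_arm_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm, two-sided test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

print(round(n_per_arm_two_proportions(0.40, 0.55)))  # about 173 patients per arm
```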
Journal of Investigative Dermatology, 2018
Sample size and power calculations help determine if a study is feasible based on a priori assumptions about the study results and available resources. Trade-offs must be made between the probability of observing the true effect and the probability of type I errors (α, false positive) and type II errors (β, false negative). Calculations require specification of the null hypothesis, the alternative hypothesis, type of outcome measure and statistical test, α level, β, effect size, and variability (if applicable). Because the choice of these parameters may be quite arbitrary in some cases, one approach is to calculate the sample size or power over a range of plausible parameters before selecting the final sample size or power. Considerations that should be taken into account could include correction for nonadherence of the participants, adjustment for multiple comparisons, or innovative study designs.
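The "range of plausible parameters" idea can be sketched in a few lines. The grid below tabulates power for an independent-samples comparison over several candidate effect sizes and per-group sample sizes using the statsmodels power module; the specific effect sizes and sample sizes are assumptions chosen for illustration, not values from the article.

```python
# Power over a grid of plausible effect sizes (Cohen's d) and per-group sample sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.35, 0.5):          # candidate standardized effect sizes
    for n in (50, 100, 200):        # candidate per-group sample sizes
        power = analysis.power(effect_size=d, nobs1=n, alpha=0.05, ratio=1.0)
        print(f"d={d:.2f}, n/group={n}: power={power:.2f}")
```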
Optometry Today, 2010
The concept of sample size and statistical power estimation is now something that optometrists who want to perform research, whether in practice or in an academic institution, can no longer simply hide from. Ethics committees, journal editors and grant awarding bodies are now ...
In most situations, researchers do not have access to an entire statistical population of interest, partly because it is too expensive and time consuming to cover a large population and partly because of the difficulty of obtaining cooperation from the entire population. As a result, researchers normally make important decisions about a population based on a representative sample. Estimating an appropriate sample size is therefore a very important aspect of research design, allowing the researcher to make inferences from the sample statistics to the statistical population. The power of a sample survey lies in the ability to estimate an appropriate sample size to obtain the data needed to describe the characteristics of the population. With that as the rationale, this article compares two commonly used approaches to estimating sample size: Krejcie and Morgan's method and Cohen's statistical power analysis. It also highlights the case for using Cohen's formula over Krejcie and Morgan's for higher accuracy, so that decisions can be based on research findings with confidence.
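A brief sketch of how the two approaches differ in practice is given below. The first function implements the finite-population formula usually attributed to Krejcie and Morgan (chi-square value 3.841, P = 0.5, margin of error d = 0.05 by convention); the second part uses a Cohen-style power analysis for an assumed medium effect size. The population size, effect size, alpha and power are illustrative assumptions.

```python
from statsmodels.stats.power import TTestIndPower

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Required sample size for a finite population of size N (Krejcie-Morgan formula)."""
    return chi2 * N * P * (1 - P) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))

print(round(krejcie_morgan(1000)))   # about 278 for a population of 1,000

# Cohen-style power analysis: n per group to detect a medium effect (d = 0.5)
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n))                      # about 64 per group
```

The two answers differ because they answer different questions: the Krejcie-Morgan figure targets estimation precision for a finite population, whereas the power-based figure targets detection of a specified effect.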
Sample size determination is often an important step in planning a statistical study, and it is usually a difficult one. Among the important hurdles to be surpassed, one must obtain an estimate of one or more error variances and specify an effect size of importance. There is the temptation to take some shortcuts. This article offers some suggestions for successful and meaningful sample size determination. Also discussed is the possibility that sample size may not be the main issue, that the real goal is to design a high-quality study. Finally, criticism is made of some ill-advised shortcuts relating to power and sample size.
Applied Ergonomics, 2004
Estimates of statistical power are widely used in applied research for purposes such as sample size calculations. This paper reviews the benefits of power and sample size estimation and considers several problems with the use of power calculations in applied research that result from misunderstandings or misapplications of statistical power. These problems include the use of retrospective power calculations and standardized measures of effect size. Methods of increasing the power of proposed research that do not involve merely increasing sample size (such as reduction in measurement error, increasing ‘dose’ of the independent variable and optimizing the design) are noted. It is concluded that applied researchers should consider a broader range of factors (other than sample size) that influence statistical power, and that the use of standardized measures of effect size should be avoided (except as intermediate stages in prospective power or sample size calculations).
2014
The main aim of this paper is to provide some practical guidance to researchers on how statistical power analysis can be used to estimate sample size in empirical design. The paper describes the key assumptions underlying statistical power analysis and illustrates through several examples how to determine the appropriate sample size. The examples use hypotheses often tested in sport sciences and verified with popular statistical tests including the independent-samples t-test, one-way and two-way analysis of variance (ANOVA), correlation analysis, and regression analysis. Commonly used statistical packages allow researchers to determine the appropriate sample size for the hypothesis-testing situations listed above.
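As a flavour of the kinds of calculations the paper works through, the sketch below sizes a one-way ANOVA with the statsmodels power module and a correlation test with the Fisher z approximation. The effect sizes, alpha and power are assumed values for illustration, not the paper's worked examples.

```python
import math
from scipy.stats import norm
from statsmodels.stats.power import FTestAnovaPower

# One-way ANOVA, 3 groups, medium effect (Cohen's f = 0.25): total sample size
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(f"one-way ANOVA: about {math.ceil(n_total)} participants in total")

# Correlation test via the Fisher z approximation: n to detect r = 0.3
r, alpha, power = 0.3, 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_corr = (z / math.atanh(r)) ** 2 + 3
print(f"correlation: about {math.ceil(n_corr)} participants")   # roughly 85
```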
INTED2021 Proceedings
The use of known expressions to determine the minimum sample size required in specific cases is a common practice that has undeniable advantages but, as a counterpart, also some disadvantages. We consider two of them: the method is rigid, since it depends on specific assumptions, and it masks important concepts such as significance and statistical power, because both are reduced to parameters to specify with standard values (5% for significance and 80% or 90% for statistical power). In this work we propose a procedure based on Monte Carlo simulation to relate sample size to significance and statistical power. The use of a model allows us to make these relationships explicit and to appreciate the consequences of changes in any of these elements. This can be very useful in educational contexts, where understanding these concepts matters. The method can also be useful when the standard formulas for statistical power or sample size are not applicable because the rigid conditions on which they are based are not met.
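A minimal version of the Monte Carlo procedure described above might look as follows: simulate many datasets from an assumed model (two normal groups separated by a fixed effect), run the test on each, and take the rejection rate as the estimated power. The model, effect size and grid of sample sizes are illustrative assumptions; alpha could be varied over a grid in the same way.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def simulated_power(n_per_group, effect=0.5, alpha=0.05, n_sim=5000):
    """Estimate power as the proportion of simulated datasets in which H0 is rejected."""
    rejections = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)
        _, p_value = ttest_ind(a, b)
        rejections += p_value < alpha
    return rejections / n_sim

for n in (20, 40, 64, 100):
    print(f"n/group={n}: simulated power ≈ {simulated_power(n):.2f}")
```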
Nephrology Dialysis Transplantation, 2010
Although most statistical textbooks describe techniques for sample size calculation, it is often difficult for investigators to decide which method to use. There are many formulas available which can be applied to different types of data and study designs. However, all of these formulas should be used with caution, since they are sensitive to errors, and small differences in the selected parameters can lead to large differences in the required sample size.
Journal of advanced …, 2004
How many do I need? Basic principles of sample size estimation. Background: In conducting randomized trials, formal estimations of sample size are required to ensure that the probability of missing an important difference is small, to reduce unnecessary cost and to reduce wastage. Nevertheless, this aspect of research design often causes confusion for the novice researcher. Aim: This paper attempts to demystify the process of sample size estimation by explaining some of the basic concepts and issues to consider in determining appropriate sample sizes. Method: Using a hypothetical two-group randomized trial as an example, we examine each of the basic issues that require consideration in estimating appropriate sample sizes. Issues discussed include: the ethics of randomized trials, the randomized trial, the null hypothesis, effect size, probability, significance level and type I error, and power and type II error. The paper concludes with examples of sample size estimations with varying effect size, power and alpha levels.
This paper is designed as a tool that a researcher can use in planning and conducting quality research. It is a review paper that discusses various aspects of design considerations in medical research. The paper covers the essentials of calculating power and sample size for a variety of applied study designs. Sample size computation for survey-type studies, observational studies and experimental studies based on means, proportions or rates, and sensitivity-specificity tests for assessing categorical outcomes are presented in detail. Over the last decades, considerable interest has been focused on medical research designs and sample size estimation. The resulting literature is scattered over many textbooks and journals. This paper presents these methods in a single review and comments on their application in practice.
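For the survey-type case mentioned above, one standard calculation is the sample size needed to estimate a population proportion to within a chosen margin of error, sketched below. The assumed prevalence of 0.5 (the most conservative choice) and the 5% margin of error are illustrative, not values from the paper.

```python
import math
from scipy.stats import norm

def n_for_proportion(p=0.5, margin=0.05, alpha=0.05):
    """Sample size to estimate a proportion p within +/- margin at confidence 1 - alpha."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(n_for_proportion())   # 385 respondents under these assumptions
```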
Biochemia medica, 2021
Calculating the sample size is one of the critical issues determining the scientific contribution of a study. The sample size critically affects the hypothesis and the study design, and there is no straightforward way of calculating the effective sample size for reaching an accurate conclusion. Use of a statistically incorrect sample size may lead to inadequate results in both clinical and laboratory studies, as well as wasted time, unnecessary cost, and ethical problems. This review has two main aims. The first is to explain the importance of sample size and its relationship to effect size (ES) and statistical significance. The second is to assist researchers planning sample size estimations by suggesting and elucidating available software, guidelines and references that serve different scientific purposes.
Black Sea Journal of Health Science, 2021
Approval from a local ethics committee is required for clinical research. To obtain approval, important questions include how the sample size was determined, whether a power analysis was performed, and under what assumptions. In hypothesis tests, two types of error are possible (type I error, denoted α, and type II error, denoted β): α is the probability of rejecting a null hypothesis that is actually true, and β is the probability of accepting a null hypothesis that is actually false. These errors also determine the reliability of the test (1-α) and the power of the test (1-β). While α is set directly by the researchers, generally at 0.05 (in some cases 0.01), β cannot be set directly, because β, and hence the power of the test (1-β), depends on α (negatively correlated with β), the variation in the population (positively correlated with β) and the sample size (n; negatively correlated with β). In clinical research, β is usually required not to exceed 0.10 (in some cases 0.05), so the power of the test should be at least 0.90. In this study, the sample sizes required for some statistical tests widely used in clinical research (independent-samples t-test, one-way ANOVA and chi-square) were calculated with the G*Power program and evaluated. As expected, decreasing either α or the effect size, and increasing the power of the test, substantially increased the required sample size. However, the effect on the sample size of increasing the power of the test was smaller at smaller values of α in the independent-samples t-test (by 5-11%), when the number of compared groups was increased in one-way ANOVA (by nearly 5%), and when the degrees of freedom of the chi-square test were increased (by 10-15%).
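The sketch below reproduces the flavour of these G*Power calculations with the statsmodels power module used as a stand-in (results may differ slightly from G*Power's rounding). The effect sizes (Cohen's d, f and w) and the grids of alpha, power, group number and degrees of freedom are illustrative assumptions rather than the study's exact settings.

```python
from statsmodels.stats.power import (TTestIndPower, FTestAnovaPower,
                                     GofChisquarePower)

# Independent-samples t-test, d = 0.5: n per group across alpha and power
for alpha in (0.05, 0.01):
    for power in (0.80, 0.90):
        n = TTestIndPower().solve_power(effect_size=0.5, alpha=alpha, power=power)
        print(f"t-test   alpha={alpha}, power={power}: n/group ≈ {round(n)}")

# One-way ANOVA, f = 0.25: total n as the number of groups increases
for k in (3, 4, 5):
    n = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                      power=0.90, k_groups=k)
    print(f"ANOVA    k={k}: total n ≈ {round(n)}")

# Chi-square (goodness of fit as a stand-in), w = 0.3: n as degrees of freedom increase
for bins in (2, 4, 6):
    n = GofChisquarePower().solve_power(effect_size=0.3, alpha=0.05,
                                        power=0.90, n_bins=bins)
    print(f"chi-sq   df={bins - 1}: n ≈ {round(n)}")
```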
Evaluation & the Health Professions, 2003
Sample-size planning historically has been approached from a power analytic perspective in order to have some reasonable probability of correctly rejecting the null hypothesis. Another approach that is not as well known is one that emphasizes accuracy in parameter estimation (AIPE). From the AIPE perspective, sample size is chosen such that the expected width of a confidence interval will be sufficiently narrow. The rationales of both approaches are delineated and two procedures are given for estimating the sample size from the AIPE perspective for a two-group mean comparison. One method yields the required sample size, such that the expected width of the computed confidence interval will be the value specified. A modification allows for a defined degree of probabilistic assurance that the width of the computed confidence interval will be no larger than specified. The authors emphasize that the correct conceptualization of sample-size planning depends on the research questions and particular goals of the study.
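A simple sketch of the AIPE logic for the two-group mean comparison is given below: increase n per group until the expected full width of the confidence interval for the mean difference is no larger than a target, here 0.5 standard-deviation units. The target width and equal-variance assumption are illustrative, and this version uses the expected width only, without the probabilistic-assurance modification the authors describe.

```python
from scipy.stats import t

def n_for_ci_width(target_width, sigma=1.0, alpha=0.05):
    """Smallest n per group so the expected CI width for a mean difference <= target."""
    n = 2
    while True:
        half_width = t.ppf(1 - alpha / 2, df=2 * n - 2) * sigma * (2 / n) ** 0.5
        if 2 * half_width <= target_width:
            return n
        n += 1

print(n_for_ci_width(0.5))   # n per group for a 95% CI no wider than 0.5 SD units
```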
International Journal of Statistics in Medical Research, 2017
Determining the optimal sample size is crucial for any scientific investigation. An optimal sample size provides adequate power to detect a statistically significant difference between the comparison groups in a study and allows the researcher to control the risk of reporting a false-negative finding (type II error). A study with too large a sample is harder to conduct, expensive and time consuming, and may expose an unnecessarily large number of subjects to potentially harmful or futile interventions. On the other hand, if the sample size is too small, even a well-conducted study may fail to answer its research question for lack of sufficient power. To draw valid and accurate conclusions, an appropriate sample size must be determined prior to the start of any study. This paper covers the essentials of calculating sample size for some common study designs. Formulae along with worked examples are demonstrated for applied health researchers. Although maximum power is desirable, this is not always possible given the resources available for a study. Researchers often need to choose a sample size that strikes a balance between what is desirable and what is feasible.
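One of the standard formulae such papers present is the normal-approximation sample size for comparing two means, n = 2(z(1-α/2) + z(1-β))²σ²/Δ² per group, sketched below with hypothetical numbers (a standard deviation of 10 and a clinically relevant difference of 5); these values are not taken from the paper.

```python
from scipy.stats import norm

def n_per_group_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for detecting a mean difference delta."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

print(round(n_per_group_two_means(delta=5, sigma=10)))   # about 63 per group
```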
2011
The sample size is the number of patients or other experimental units that need to be included in a study to answer the research question. Pre-study calculation of the sample size is important; if a sample size is too small, one will not be able to detect an effect, while a sample that is too large may be a waste of time and money. Methods to calculate the sample size are explained in statistical textbooks, but because there are many different formulas available, it can be difficult for investigators to decide which method to use. Moreover, these calculations are prone to errors, because small changes in the selected parameters can lead to large differences in the sample size. This paper explains the basic principles of sample size calculations and demonstrates how to perform such a calculation for a simple study design.
General Psychiatry
Power analysis is a key component of planning prospective studies such as clinical trials. However, some journals in biomedical and psychosocial sciences request power analysis for data already collected and analysed before accepting manuscripts for publication. Many have raised concerns about the conceptual basis for such post-hoc power analyses. More recently, Zhang et al showed, using simulation studies, that such power analyses do not reflect the true power to detect statistical significance, since post-hoc power estimates vary widely over the range of practical interest and can be very different from the true power. On the other hand, journals' request for information about the reliability of statistical findings in a manuscript due to small sample sizes is justified, since the sample size plays an important role in the reproducibility of statistical findings. The problem is the wording of the journals' request, as the current power analysis paradigm is not designed to address jour...
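The instability of post-hoc power can be illustrated with a short simulation in the spirit of the argument summarized above (the exact design of the cited simulations is not reproduced here): with the true power fixed near 0.80, the power recomputed from each study's observed effect size still ranges widely.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
n, true_d = 64, 0.5                      # true power is roughly 0.80 at these values
analysis = TTestIndPower()

posthoc = []
for _ in range(2000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d_hat = abs(b.mean() - a.mean()) / pooled_sd          # observed effect size
    posthoc.append(analysis.power(effect_size=d_hat, nobs1=n, alpha=0.05))

print(f"post-hoc power, 5th-95th percentile: "
      f"{np.percentile(posthoc, 5):.2f} to {np.percentile(posthoc, 95):.2f}")
```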