Papers by Kristine Hogarty

Medicine & Science in Sports & Exercise, 2014
Team-Based Learning (TBL) is a teaching pedagogy for flipping the classroom that moves the focus of the classroom from the instructor conveying course concepts via lecture to the application of concepts by student teams. It has been used extensively in lecture courses; however, there is little evidence of its use in laboratory courses. PURPOSE: To describe the implementation of TBL in a graduate exercise physiology laboratory course. METHODS: TBL was implemented via the Readiness Assurance Process, after which a significant Question of the Day (QoD) was posed. Student teams then developed research protocols to answer the QoD and anonymously submitted these for review by their peers. The best protocol was selected by a student vote, and a team was then randomly selected to implement that protocol the following week. Data were then made available to all teams, and simultaneous reporting of results and discussion was carried out the next week. RESULTS: Using TBL in a graduate laboratory course was very successful and well received by both the students and the instructor. Students reported increased content learning, skill development, and retention. They took on the responsibility for learning and were more accountable. CONCLUSIONS: The learners drove the process, guided by the instructor, rather than relying on an instructor-centered delivery.
Ann Barron, Kristine Hogarty, Tina Hohlfeld, Kathryn Loggie, Shauna Schullo, Elizabeth Gulitz, Melissa Venable, Sanaa Bennouna (University of South Florida); Phyllis Sweeney (Our Lady of the Lake College). Keywords: Online ...
Jeffrey D. Kromrey, Melissa Venable, Amy Hilbelink, Tina Hohlfeld, Kris Y. Hogarty ... connect at high bandwidths. Only one student was using a dial-up modem, four used cable modems, five had DSL connections, and one accessed the course via a LAN. When asked which ...
The accuracy and precision of the estimation of population effect size were evaluated using standardized mean differences and Cliff's δ. Monte Carlo methods were used to simulate primary studies, which were combined into meta-analyses. Five factors were included in the simulation: number of primary studies in each meta-analysis, sample sizes in primary studies, group variances, population distribution shape, and population effect size. Results were evaluated in terms of accuracy and precision of point and interval estimates of the population effect size. Results of the research support the use of the unweighted Cliff's δ as a robust effect size estimate, particularly under conditions where there is severe violation of the normality and homogeneity of variance assumptions.
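As an illustration of the ordinal index examined above, the following sketch computes Cliff's δ for two independent samples. This is a minimal Python/NumPy analogue written for this summary, not the simulation code used in the study; the sample sizes and distributions in the example are hypothetical.

import numpy as np

def cliffs_delta(x, y):
    # Cliff's delta: P(X > Y) - P(X < Y), estimated over all cross-group pairs.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    pairwise_signs = np.sign(x[:, None] - y[None, :])  # +1, 0, or -1 per pair
    return pairwise_signs.mean()

# Hypothetical example: two small samples from shifted distributions.
rng = np.random.default_rng(1)
treatment = rng.normal(loc=0.5, scale=1.0, size=30)
control = rng.normal(loc=0.0, scale=1.0, size=30)
print(cliffs_delta(treatment, control))  # value in [-1, 1]; 0 indicates complete overlap

Because δ depends only on the ordering of observations, it is unaffected by monotone transformations of the data, which is what makes it attractive when normality and homogeneity of variance are in doubt.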

Researchers using mixed linear models often use fit criteria to select among possible covariance structures for their data. Unfortunately, fit criteria do not always lead to the correct specification of the covariance structure, and misspecification can have negative consequences for estimation and inference. A program is presented that allows researchers to explore the sensitivity of Akaike's Information Criterion (AIC) and Schwarz's Bayesian Criterion (SBC) to possible misspecifications. For these potential misspecifications, the program then allows the researcher to examine the bias in the estimation of the variance parameters and the fixed effects, and the Type I error rates for tests of fixed effects. Monte Carlo methods are employed using a SAS® macro in which data are generated using SAS/IML® and analyzed using PROC MIXED of SAS/STAT®. An illustrative example based on a longitudinal study of student achievement at one of the National Science Foundation's Urban...

Review of Educational Research, 2002
In this special issue of Review of Educational Research, we present a collection of articles that cohere around the common theme of standards-based reform and accountability. The authors critically examine the extant literature in a time of increasing polarization of views about the desirability and consequences of the accountability movement and of educational reform in general. In the first article, Madhabi Chatterji focuses our attention on purposes, models, and methods of inquiry. This article classifies the literature into seven categories and analyzes it with respect to content and methodology, using theoretically supported taxonomies. With the premise that past reforms were meant to be systemic, Chatterji examines the extent to which related inquiry has been guided by designs that explicitly or implicitly acknowledge the presence of a system, and evaluates the utility of designs in support of large-scale systematic changes in education. In the second article, James P. Spillane, Brian J. Reiser, and Todd Reimer develop a cognitive framework to characterize sense-making in the implementation process. Their framework is especially relevant for recent education policy initiatives, such as standards-based reform, that press for tremendous changes in classroom instruction. The framework is premised on the assumption that if teachers and school administrators are to implement state and national standards, they must first interpret them. Their understanding of reform proposals is the basis on which they decide whether to ignore, sabotage, adapt, or adopt them. The authors argue that behavioral change includes a fundamentally cognitive component. They draw on both theoretical and empirical literatures to develop a cognitive perspective on implementation. In the third article, Laura Desimone reviews, critiques, and synthesizes research that documents the implementation of comprehensive school reform models. She applies the theory that the more specific, consistent, authoritative, powerful, and stable a policy is, the stronger will be its implementation. The research suggests that all five of these policy attributes contribute to implementation and that, in addition, each is related to particular aspects of the process: specificity to implementation fidelity, power to immediate implementation effects, and authority, consistency, and stability to long-lasting change. In the final article, Jian Wang and Sandra J. Odell draw on the mentoring and learning-to-teach literature, exploring how mentoring that supports novices' learning to teach in ways consistent with the standards brings about reform, whether in preservice or induction contexts. The authors point out that the assumptions on which mentoring programs are based often do not focus on standards and that the prevailing mentoring practices are consistent with those assumptions rather than with the assumptions behind standards-based teaching. To promote mentoring that reforms teaching in ways consistent with the aims of the standards movement, policymakers must determine effective methods of educating mentors and mentoring program designers.

With the growing popularity of meta-analytic techniques to analyze and synthesize results across sets of empirical studies have come concerns about the sensitivity of traditional tests in meta-analysis to violations of assumptions. This is particularly distressing because the tenability of such assumptions in primary studies is often impossible to evaluate unless sufficient details are reported. Robust estimates of effect size, such as Cliff's δ, may yield superior inferences about the population effect size. The purpose of this study was to compare standardized mean differences (Cohen's d and Hedges' g) and δ in terms of the accuracy and precision of interval estimates of population mean effect size in meta-analysis. Factors investigated in the Monte Carlo study included characteristics of both the populations from which samples were drawn (distribution shape, variance heterogeneity, and population effect size) and the corpus of studies in each meta-analysis (sampl...
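For reference, the two standardized mean differences compared in this study can be computed for a single primary study as shown below. This is an illustrative Python/NumPy sketch written for this summary, not the study's Monte Carlo code; the functions and inputs are hypothetical.

import numpy as np

def cohens_d(x, y):
    # Standardized mean difference using the pooled standard deviation.
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def hedges_g(x, y):
    # Cohen's d with the small-sample bias correction J = 1 - 3 / (4*df - 1).
    df = len(x) + len(y) - 2
    return cohens_d(x, y) * (1.0 - 3.0 / (4.0 * df - 1.0))

Both indices divide a mean difference by a variance-based denominator, which is why heavy-tailed distributions and unequal group variances can distort them; this is the situation in which an ordinal index such as Cliff's δ is expected to behave better.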
Educational and Psychological Measurement, 2005
The purpose of this study was to investigate the relationship between sample size and the quality of factor solutions obtained from exploratory factor analysis. This research expanded upon the range of conditions previously examined, employing a broad selection of criteria for the evaluation of the quality of sample factor solutions. Results showed that when communalities are high, sample size tended to have less influence on the quality of factor solutions than when communalities are low. Overdetermination of factors was also shown to improve the factor analysis solution. Finally, decisions about the quality of the factor solution depended upon which criteria were examined.
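The general logic of such a simulation can be sketched as follows: generate samples of varying size from a known factor model and compare the recovered loadings to the population loadings. This is only an illustrative Python sketch under simplified assumptions (a single factor, scikit-learn's FactorAnalysis as the extraction method, and Tucker's congruence coefficient as the sole quality criterion); it does not reproduce the study's design or evaluation criteria.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
loadings = np.array([0.8, 0.8, 0.7, 0.7, 0.6, 0.6])  # high-communality items

def congruence_at(n):
    # Simulate n observations from a one-factor model, then refit.
    factor = rng.normal(size=(n, 1))
    unique = rng.normal(size=(n, loadings.size)) * np.sqrt(1 - loadings**2)
    data = factor @ loadings[None, :] + unique
    sample_loadings = FactorAnalysis(n_components=1).fit(data).components_.ravel()
    # Tucker's congruence coefficient between population and sample loadings.
    return abs(loadings @ sample_loadings) / (
        np.linalg.norm(loadings) * np.linalg.norm(sample_loadings))

for n in (50, 200, 1000):
    print(n, round(congruence_at(n), 3))  # loading recovery typically improves with n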
Society for Information …, 2004
Smith, G., Kromrey, J., Barron, A., Carey, L., Hogarty, K., & Hess, M. (2004). Assessing the Pedagogical and Technological Quality of Online Courses. In R. Ferdig et al. (Eds.), Proceedings of Society for Information Technology ...
National …, 2005
From the Learners' Eyes: Student Evaluation of Online Instruction. Melinda Hess [email protected], Ann E. Barron, Lou Carey, Amy Hilbelink, Kris Hogarty, Jeffrey D. Kromrey, Ha Phan, Shauna Schullo ...

annual meeting of the …
Background: The most important method for airway management during anesthesia is endotracheal intubation. The laryngeal mask airway (LMA) and the supraglottic gel device (I-Gel) are considered alternatives to the endotracheal tube. Objectives: This study sought to assess the success rate of airway management using the LMA and I-Gel in elective orthopedic surgery. Patients and Methods: This single-blinded randomized clinical trial was performed on 61 ASA Class 1 and 2 patients requiring minor orthopedic surgeries. Patients were randomly allocated to two groups, LMA and I-Gel. Supraglottic airway placement was categorized by the number of placement attempts, i.e., on the first, second, or third attempt. Unsuccessful placement on the third attempt was considered failure, and an endotracheal tube was used in such cases. The success rate, insertion time, and postoperative complications such as bleeding, sore throat, and hoarseness were recorded. Results: In the I-Gel group, the success rate was 66.7% for placement on the first attempt, 16.7% for the second, and 3.33% for the third attempt. In the LMA group, the success rates were 80.6% and 12.9% for the first and second attempts, respectively. Failure in placement occurred in four cases in the I-Gel group and two cases in the LMA group. The mean insertion time was not significantly different between the two groups (21.35 seconds for LMA versus 27.96 seconds for I-Gel, P = 0.2). The incidence of postoperative complications was not significantly different between the study groups. Conclusions: The I-Gel can be inserted as quickly as the LMA with adequate ventilation and no major airway complications. Therefore, it could be a good alternative to the LMA in emergency airway management or general anesthesia.
SAS Users Group …, 2005
This paper discusses a SAS® macro that provides three approaches to statistical inferences about Mahalanobis distance. Mahalanobis distance is useful as a multivariate effect size, being an extension of the standardized mean difference (i.e., Cohen's d). This program calculates three point estimates of D² (a sample estimate, a jackknife estimate, and an adjusted estimate advanced by Rao, 1973). Further, the program computes a series of confidence bands around each estimated value of D². These confidence intervals are calculated employing two estimation procedures (an interval inversion approach and a common bootstrapping method). The paper presents a demonstration of the SAS/IML® code employed in a macro entitled D2BAND, and provides a table with the resulting output.
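To make the quantity concrete, the sketch below computes a sample Mahalanobis D² between two groups and a simple percentile bootstrap interval around it. This is a Python/NumPy analogue written for illustration, not the D2BAND macro itself; it shows only the sample estimate and one bootstrap scheme, not the jackknife or Rao-adjusted estimators or the interval inversion approach.

import numpy as np

def d_squared(x, y):
    # Sample Mahalanobis distance between two group mean vectors,
    # using the pooled within-group covariance matrix.
    nx, ny = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    pooled = ((nx - 1) * np.cov(x, rowvar=False) +
              (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    return float(diff @ np.linalg.solve(pooled, diff))

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample cases within each group and recompute D².
    rng = np.random.default_rng(seed)
    stats = [d_squared(x[rng.integers(0, len(x), len(x))],
                       y[rng.integers(0, len(y), len(y))])
             for _ in range(n_boot)]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

Given two (n x p) arrays of observations, d_squared returns the point estimate and bootstrap_ci returns the interval endpoints.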
annual meeting of …, 2004
This study describes the development and implementation of an evaluation system applied to newly created Masters-level online degree programs at a major metropolitan research university. A multi-method approach to evaluation provided formative feedback on the processes and products of course development. From multiple data sources, including course documents, individual and focus group interviews with instructors and designers, and web-based surveys of both instructors and students, we established the foundation of a comprehensive system for evaluating, verifying, and contrasting inferences related to both pedagogical and technological qualities. Results of both quantitative and qualitative analyses support the integrity of the evaluation system and underscore the importance of a carefully planned and executed approach to the evaluation of distance learning courses.
This paper presents a computer program that calculates two effect size estimates (gamma and trimmed-d) that are robust to violations of the assumptions of population normality and homogeneity of variance. Because of their robustness properties, these indices are frequently more appropriate than conventional parametric indices such as Cohen's d or Hedges' g. Additionally, a program that uses these effect size indices in robust permutation tests for between-class homogeneity (a test frequently used in meta-analysis) is introduced. The paper provides a demonstration of the SAS/IML® code and examples of the application of the code in simulation studies.
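The sketch below illustrates the two ideas in this abstract with common robust choices: a trimmed-d computed from 20% trimmed means and a pooled Winsorized standard deviation, and a label-shuffling permutation test for whether mean effect sizes differ between two classes of studies. These are illustrative Python/SciPy analogues under assumed definitions, not the SAS/IML program described in the paper; the exact estimators and homogeneity statistic used there may differ.

import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

def trimmed_d(x, y, trim=0.20):
    # Difference of trimmed means scaled by a pooled Winsorized SD (assumed definition).
    mean_diff = stats.trim_mean(x, trim) - stats.trim_mean(y, trim)
    wx = np.asarray(winsorize(np.asarray(x, dtype=float), (trim, trim)))
    wy = np.asarray(winsorize(np.asarray(y, dtype=float), (trim, trim)))
    pooled_sd = np.sqrt((np.var(wx, ddof=1) + np.var(wy, ddof=1)) / 2.0)
    return mean_diff / pooled_sd

def permutation_homogeneity_test(effects, labels, n_perm=5000, seed=0):
    # Permutation p-value for a difference in mean effect size between two classes (0 and 1).
    rng = np.random.default_rng(seed)
    effects, labels = np.asarray(effects, dtype=float), np.asarray(labels)
    observed = abs(effects[labels == 0].mean() - effects[labels == 1].mean())
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        exceed += abs(effects[perm == 0].mean() - effects[perm == 1].mean()) >= observed
    return exceed / n_perm

Because the permutation distribution is built from the observed effect sizes themselves, the test does not rely on normality of the effect size estimates, which is the motivation for pairing it with robust indices.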

This paper describes a SAS® macro that provides three approaches to estimating mean population effect sizes and confidence intervals for meta-analysis. The program calculates point estimates of the standardized mean difference (Cohen's d and Hedges' g) and the ordinal index Cliff's δ. Further, the program computes a series of confidence bands around each estimated value of the effect size. These confidence intervals are calculated using the estimated variance of the mean effect size and the critical z value for the (1 − α) confidence level. The paper presents a demonstration of the SAS/IML® code employed in a macro entitled METACOMP, and provides a table with the resulting output. INTRODUCTION Meta-analysis is a popular technique in many fields for statistically analyzing and synthesizing results across sets of empirical studies. Meta-analytic techniques provide a variety of models and procedures for pooling effect sizes across studies and for evaluating the effects of potentially moderating variables (C...
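The normal-theory interval described above (estimated variance of the mean effect size plus a critical z value) can be sketched as follows. This is an illustrative Python/SciPy fixed-effect example written for this summary, not the METACOMP macro; the input effect sizes and variances are hypothetical.

import numpy as np
from scipy import stats

def pooled_effect(effect_sizes, variances, alpha=0.05):
    # Inverse-variance weighted mean effect size with a (1 - alpha) z-based CI.
    w = 1.0 / np.asarray(variances, dtype=float)
    mean_es = np.sum(w * np.asarray(effect_sizes, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))        # estimated SE of the mean effect size
    z = stats.norm.ppf(1 - alpha / 2)    # critical z value
    return mean_es, (mean_es - z * se, mean_es + z * se)

# Hypothetical example: three primary studies with Hedges' g and its sampling variance.
g = [0.40, 0.25, 0.55]
v = [0.020, 0.015, 0.030]
print(pooled_effect(g, v))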

Statistical power analysis is useful from both prospective and retrospective viewpoints. Prospectively, statistical power analysis is used in the planning of research to estimate sample size requirements. In contrast, retrospective power analysis provides an estimate of the statistical power of a hypothesis test after an investigation has been conducted. Several techniques for retrospective power analysis have been suggested in the statistical literature, including both point and interval estimates of power. This paper presents a SAS macro that calculates three point estimators (a plug-in estimate, an unbiased estimate, and a median unbiased estimate) and a confidence band estimate of the power of a hypothesis test using sample estimates of the noncentrality parameter of the test statistic. The macro was written to evaluate the power of an F test (obtained through an analysis of variance, multiple regression analysis, or any of several multivariate analyses). Inputs to the macro inclu...
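As a small illustration of the plug-in approach mentioned above, the sketch below computes the retrospective power of an F test by substituting an estimated noncentrality parameter into the noncentral F distribution. This is a hedged Python/SciPy example, not the SAS macro described in the paper; taking the noncentrality estimate as df1 times the observed F is one common plug-in convention and is assumed here, and the inputs are hypothetical.

from scipy import stats

def retrospective_power(f_obs, df1, df2, alpha=0.05):
    # Critical value of the central F distribution under the null hypothesis.
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    # Plug-in estimate of the noncentrality parameter (assumed convention).
    ncp_hat = df1 * f_obs
    # Estimated power: probability that a noncentral F exceeds the critical value.
    return stats.ncf.sf(f_crit, df1, df2, ncp_hat)

# Hypothetical example: F(2, 57) = 3.2 observed in an ANOVA.
print(retrospective_power(f_obs=3.2, df1=2, df2=57))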
Action in Teacher Education
Psychometrika, Feb 1, 2004
The purpose of this study was to investigate and compare the performance of a stepwise variable selection algorithm to traditional exploratory factor analysis. The Monte Carlo study included six factors in the design: the number of common factors; the number of variables explained by the common factors; the magnitude of factor loadings; the number of variables not explained by the ...