Conference Presentations by Ildiko Porter-Szucs

Ann Arbor Polonia Association, 2022
According to a Cherokee tale, inside every human are two wolves battling for control: one bad, the other good. “Which one wins?” you may ask. “The one you feed,” comes the answer. The presenter builds on the fundamental insight from this tale as she discusses how to maximize opportunities and minimize counterproductive practices when learning and teaching Polish from birth into adolescence in the U.S. The presenter draws on the fields of second language acquisition, brain-based research, human development, and second- and foreign-language education, as well as her personal experiences growing up bilingual, raising a trilingual child, co-founding the Polish Language Center of Ann Arbor, and cultivating language preservation among children growing up with heritage languages. The presentation will be accessible to all and will close with time for questions.
IATEFL, 2021
The fifth of TESOL's 6 Principles for Exemplary Teaching of English Learners states the need to monitor and assess student language development. Assessing language alone can present enough of a challenge if it is to be done well, not to mention when the learners’ subject-matter mastery also needs to be assessed. Yet teachers in various second- and foreign-language contexts – e.g., ESP, CLIL, EMI – are increasingly called on to do just that. In this interactive presentation, after briefly touching on essential concepts such as construct and construct-irrelevant variance, we explore the extent to which it is possible to assess content without language. We analyze items, tasks, and rating scales that purport to do one or the other and examine how they can be improved. Time is reserved at the end of the presentation for a summary of best practices and questions.
ESL teachers, TESOL professors, assessment coordinators, and administrators are invited to explore various factors that are outside ESL students’ control yet may influence the students’ success on speaking tests. The presenters will offer concrete solutions to mitigate the potential negative impact of these factors and promote equity.

With the high cost of implementing speaking placement tests with live examiners, institutions sometimes consider the less expensive alternative of delivering speaking tests via recording and having examiners score them later. Previous research has shown that computer-delivered, human-scored tests result in scores that are comparable to live tests’ (Stansfield & Kenyon, 1992). In contrast with these favorable statistical findings, most test takers (TTs) prefer live tests over the artificiality of interacting with a machine (Ginther, 2012; Qian, 2009).
In their recent study of 106 TTs, the presenters compared results of face-to-face vs. audio-recorded, computer-delivered modes of a new, commercially available, institutional speaking placement test and found similar results: that is, TTs preferred the face-to-face version of the test, yet no difference in overall scores was found between the modes of delivery. Analysis of variance (ANOVA), however, revealed that several factors did affect the sub-scores, including the interactions of Rater by Mode, Examiner by Mode, Examiner by TT gender, and others. It was also found that TTs’ perceptions of their performances had no statistically significant association with their overall mean performance on either mode of test delivery or with their relative performances between the two modes. The study also revealed that even trained and experienced speaking examiners can vary significantly in their interpretation of a test’s rating scale when scoring the same TT’s performance. These findings are potentially troubling because they raise questions of reliability and equity for the TTs.
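To make the kind of analysis described above concrete, the following is a minimal sketch, not the presenters' actual code, of how interaction effects such as Rater by Mode or Examiner by TT gender could be tested on sub-scores with a factorial ANOVA in Python using statsmodels. The file name and the column names (sub_score, rater, mode, examiner, tt_gender) are hypothetical placeholders.

# Hedged sketch: factorial ANOVA on speaking-test sub-scores.
# All data and column names below are assumed for illustration only.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("speaking_subscores.csv")  # hypothetical file: one row per rated performance

# Ordinary least squares model with interaction terms, followed by Type II F-tests
model = smf.ols("sub_score ~ C(rater) * C(mode) + C(examiner) * C(tt_gender)", data=df).fit()
print(anova_lm(model, typ=2))  # significant interaction rows flag effects such as Rater x Mode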
Papers by Ildiko Porter-Szucs
Routledge eBooks, Oct 10, 2022

SN Social Sciences
This study investigated whether the success of students in a Master of Arts in Teaching English to Speakers of Other Languages (MA TESOL) assessment course was comparable regardless of their chosen mode of attendance (face-to-face, synchronously online, asynchronously online) in this "Triple Hybrid" (or "TriHy") class. In an interactive, convergent, mixed-methods design, a pragmatic, participant-focused framework guided the study. Data collection extended to pre-, while-, and post-surveys of the participants; tracking of mode of communication with the instructor; as well as proxies for students' success in the course, including the rate of course completion, weekly class attendance, completion of weekly assignments, grades on low-stakes individual assignments, grades on a high-stakes individual assignment, and the final course grade. The quantitative and qualitative analyses revealed that, overall, there was no statistically significant difference in the learning outcomes among the modalities, even though one group's pre-test scores did differ significantly from the others'. Although the students' success in the course did not differ, their perception of the factors that contributed to their success did. The findings suggest that with considerable institutional support, a substantial investment of time and commitment from the instructor, and meaningful choices by the students, the quality of instruction, even in a language-teacher-preparation course focused on skill building, does not need to be compromised.
… Prospective and current master's TESOL students may learn from hearing the v… … this study examines questions at the intersection of various disciplines (TESOL, ESL, teacher …) … may benefit by duplicating or further investigating aspects of this research project. …
The TESOL Encyclopedia of English Language Teaching
Relying on data from two nationwide surveys, this study examines the status of ESL programs, primarily in U.S. higher education settings, as perceived by professionals in such programs. The focus is on the perceived lack of recognition and on measures taken to counter it. Survey respondents make suggestions for increasing the field’s visibility and respect on campus through interdepartmental outreach, policy and curricular initiatives, marketing, publishing/presenting, and academic as well as non-academic initiatives involving students.

CaMLA Working Papers, 2016
This study examines whether the mode of delivery—direct (face-to-face) or semidirect (computer-mediated audio-recorded)—influences test takers’ scores on the CaMLA Speaking Test (CST). A mixed-methods design was employed for data analysis. The results were analyzed to answer four research questions:
1) Do CST test takers receive different scores on face-to-face vs. audio-delivered modes of the speaking test?
2) Do CST test takers prefer one mode of test delivery over the other?
3) Is there any alignment between test takers’ scores and preferred mode of test delivery?
4) What factors do test takers claim affect their performance on both modes of the speaking test?
The first question was answered by fitting a linear mixed model to the data and examining parameter estimates for the factors and their interactions. The second question was answered by examining the percentages of test takers who stated the different pre-test preferences and post-test beliefs. The third question was answered using ANOVAs to compare differences between mean outcomes by mode vs. the test takers’ answers to the preference/belief questions. The fourth research question was answered by analyzing the open-ended comment section on the post-study survey for themes pertaining to perceived factors influencing test takers’ performance on the two modes of test delivery.
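As an illustration of the first analysis, here is a minimal sketch, not the study's original code, of fitting a linear mixed model with statsmodels in Python, with test takers as the grouping (random-effect) factor. The data file and column names (score, mode, rater, test_taker) are assumptions, not taken from the study.

# Hedged sketch: linear mixed model for mode effects on speaking-test scores.
# Data layout and column names are assumed for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cst_scores.csv")  # hypothetical long-format data: one row per awarded score

# Fixed effects for mode, rater, and their interaction; random intercept per test taker
mixed = smf.mixedlm("score ~ C(mode) * C(rater)", data=df, groups=df["test_taker"]).fit()
print(mixed.summary())  # parameter estimates for the factors and their interactions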
Books by Ildiko Porter-Szucs

Wiley Blackwell, 2025
An essential resource on effective language assessment, invaluable for a new generation of teachers and education researchers
A Practical Guide to Language Assessment helps educators at every level redefine their approach to language assessment. Grounded in extensive research and aligned with the latest advances in language education, this comprehensive guide introduces foundational concepts and explores key principles in test development and item writing. Authored by a team of experienced language teacher educators, this book addresses the potential impacts of poorly designed tools and prepares teachers to make informed, effective assessment decisions.
Perfect for developing test blueprints and crafting effective assessment tools, including those for young learners, A Practical Guide to Language Assessment bridges the gap between theory and practice to provide the real-world training educators need to successfully navigate the complexities of modern language assessment. Clear and accessible chapters highlight the critical role of well-designed assessments, emphasize the importance of selecting appropriate tools to accurately measure student proficiency, and discuss recent innovations and emerging needs. With practical examples and a focus on current innovations, including ‘ungrading’ and the use of AI, A Practical Guide to Language Assessment:
Explains the foundational concepts of language assessment with practical examples and clear explanations
Bridges theoretical principles with practical applications, enabling educators to create effective test blueprints and assessment items and tasks
Provides up-to-date coverage of timely topics such as the integration of AI in assessments and the ethical and legal considerations of language testing
Features a wealth of in-depth examples of how theoretical concepts can be operationalized in practice
A Practical Guide to Language Assessment is an essential read for students in language education, as well as teachers, assessment managers, professional development trainers, and policymakers in language program evaluation.
https://www.wiley.com/en-us/A+Practical+Guide+to+Language+Assessment%3A+How+Do+You+Know+That+Your+Students+Are+Learning%3F-p-9781394238736