Assessment in Education: Principles, Policy & Practice, 2008
International comparisons in education, while suggesting differences in achievement among countries, face significant challenges related to language and translation. The study explores how linguistic nuances affect the interpretation of test scores, emphasizing the impact of cultural context on educational assessments. It concludes that although broad differences in test scores imply variations in educational quality, the complexities introduced by multilingual testing limit their interpretability.
2004
Forty years have passed since educational achievements were first compared on an international scale. What began as a hesitant and sporadic attempt to compare scholastic achievements in various countries has grown into a well-established enterprise encompassing close to 50 countries worldwide. Perhaps as a function of globalization and increasing awareness of the role human capital plays in furthering economic development, policy makers around the world are expressing growing interest in the results of such surveys, realizing their importance for precipitating educational reform.
Critical Studies in Education, 2017
This article organises potential areas of criticism or challenges embedded in the design and administration of standardised assessments of learning levels in order to promote dialogue and research on educational assessments. The article begins by addressing debates around epistemological claims: issues that pertain to testing in general and issues that are particular to standardised testing. Then, it addresses some political attributes of international tests so as to situate the debates beyond feasibility, attributes and scope-related issues. The article claims that the field of education testing has identified a number of issues and challenges stemming from diversity, and has developed methods and procedures to address many of them. From this viewpoint, testing is just like any other domain of scientific enquiry. However, international assessments of learning outcomes are not necessarily, or primarily, scientific endeavours; they are political devices and therefore should be scrutinised considering scientific attributes as well as some political features that, even if intertwined with technicalities, go well beyond them. Thus, critiques of international assessments would be better framed if their political attributes are taken as organising principles of the criticism, alongside those elements that pertain to their technical attributes, since these are not incidental but deeply interlinked.
This report is the product of a review of a number of large-scale international learning assessments, including school-based surveys and household-based surveys. The review covered all aspects of the surveys' approaches to assessing and reporting on component skills, from assessment frameworks and item development, through test design and mode of delivery, to the analysis and reporting of proficiency. Translation, field trialling and final item selection were also covered. The review also examined all aspects of the surveys' approaches to collecting and reporting contextual information, including the development of contextual data collection instruments, their translation and adaptation, the main factors and variables used, question formats, scaling, relevant constructs and cross-country comparability. Finally, the review considered how the surveys were implemented, methods and approaches for including out-of-school children, and the analysis, reporting and use of data.
Quality Assurance in Education, 2014
Purpose – This paper presents a moderated discussion on popular misconceptions, benefits and limitations of International Large-Scale Assessment (ILSA) programs, clarifying how ILSA results could be more appropriately interpreted and used in public policy contexts in the USA and elsewhere in the world. Design/methodology/approach – To bring key issues, points-of-view and recommendations on the theme to light, the method used is a “moderated policy discussion”. Nine commentaries were invited to represent voices of leading ILSA scholars/researchers and measurement experts, juxtaposed against views of prominent leaders of education systems in the USA that participate in ILSA programs. The discussion is excerpted from a recent blog published by Education Week. It is moderated with introductory remarks from the guest editor and concluding recommendations from an ILSA researcher who did not participate in the original blog. References and author biographies are presented at the end of the...
Quality Assurance in Education, 2014
Purpose – This policy brief, the third in the AERI-NEPC eBrief series “Understanding validity issues around the world”, discusses validity issues surrounding International Large Scale Assessment (ILSA) programs. ILSA programs, such as the well-known Programme of International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS), are rapidly expanding around the world today. In this eBrief, the authors examine what “validity” means when applied to published results and reports of programs like the PISA. Design/methodology/approach – This policy brief is based on a synthesis of conference proceedings and review of selected pieces of extant literature. It begins by summarizing perspectives of an invited expert panel on the topic. To that synthesis, the authors add their own analysis of key issues. They conclude by offering recommendations for test developers and test users. Findings – ILSA programs and tests, while offering valuable informatio...
This study examines the main characteristics of national learning assessments, which have been conducted with increasing frequency worldwide since the mid-1990s. It describes the subject areas and grade levels that have been assessed, and how these have changed over time and varied across regions. In addition, it provides examples of the results from national learning assessments and points out how these can be used to evaluate changes in learning outcomes over time and to compare the extent of country-level disparities in student achievement. The paper calls on international agencies to work with national stakeholders to further support, improve and utilize national learning assessments in the future.
Studies in Educational Policy and Educational Philosophy, 2003
In this article I raise some questions about international comparative assessments and the impact these assessments seem to have on the field of education and curriculum. In doing so, I illustrate the discussion with two different concepts of assessment: an early methodology of assessment carried out with the scientific cooperation of the IEA, exemplified mainly in the TIMSS study, and a newer methodology carried out by the OECD, exemplified in the PISA 2000 study. Through this comparison I show some interesting results from a curriculum-sensitive assessment and a newer post-curriculum-sensitive assessment. In this development the theory of literacy has come to play an important role, with an interesting shift of implication from measuring knowledge to assessing competencies. These results are discussed in terms of an international educational hegemony, consisting of structural adjustments and knowledge assessments carried out by international organisations, and the impact this hegemony has on the field of education and curriculum and on the way we speak of them.
British Educational Research Journal, 2004
American Educational Research Journal, 2016
International assessments, such as the Programme for International Student Assessment (PISA), are being used to recommend educational policies to improve student achievement. This study shows that the cross-sectional estimates behind such recommendations may be biased. We use a unique data set from one country that applied the PISA mathematics test in 2012 in ninth grade to all students who had taken the Trends in International Mathematics and Science Study (TIMSS) test in 2011 and collected information on students’ teachers in ninth grade. These data allowed us to more precisely estimate the effects of classroom variables on students’ PISA performance. Our results suggest that the positive roles of teacher “quality” and “opportunity to learn” in improving student performance are much more modest than claimed in PISA documents.
National Forum: Phi Kappa Phi Journal, 1993
Assessment in Education: Principles, Policy & Practice, 2019
https://www.ijrrjournal.com/IJRR_Vol.7_Issue.4_April2020/Abstract_IJRR0021.html, 2020
Handbook of Education Policy Studies
ETS Research Report Series, 2018
Assessment in Education: Principles, Policy & Practice
Frontiers: The interdisciplinary journal of study abroad, 2020
Critical Studies in Education
Educational Measurement: Issues and Practice, 2005