2019, Assessment in Education: Principles, Policy & Practice
AI-generated Abstract
The editorial discusses the complex role of assessment in education, highlighting its influence on learning outcomes and its potential for misuse, particularly in the context of international assessments like PISA. It emphasizes the need for assessments that accurately reflect the capabilities of all student populations, especially low performers, and calls attention to the methodological issues inherent in such assessments. The editorial also reviews a book on testing in the professions and acknowledges the contributions of the outgoing executive editor.
Assessment in Education: Principles, Policy & Practice, 2016
On 6 December 2016, the Programme for International Student Assessment (PISA) releases its report on the achievements of 15-year-olds from 72 countries and economies around the world. This triennial international survey aims to evaluate education systems across 72 contexts by testing skills in Mathematics, Science and Reading Literacy. This is the sixth cycle of PISA, and the OECD suggests that countries and economies now have the capability to compare results over time to 'assess the impact of education policy decisions'. Compared to other education studies, the media coverage of PISA must be described as massive (Baird et al., 2016; Meyer & Benavot, 2013) and, as in previous years, it is expected that PISA will attract considerable discussion among policy-makers, educators and researchers (Wiseman, 2014).

It is therefore timely to present a thematic issue of Assessment in Education, in which we publish four articles that have analysed previous data-sets from the PISA studies, each commenting upon the challenges, limitations and potential of future assessment research on the PISA data. The articles touch upon issues regarding sampling, language, item difficulty and demands, as well as secondary analyses of students' reported experiences of formative assessment in the classroom. One important message from the authors in this thematic Special Issue is the need for a more complex discussion around the use and misuse of PISA data, and the importance of pointing to the limitations of how the results are presented to policy-makers and the public. In an area where the media produce narratives on schools and education systems based upon rankings in PISA, researchers in the field of large-scale assessment studies have a particularly important role in stepping up and advising on how to interpret and understand these studies, while warning against potential misuse.

In 2014, Yasmine El Masri gave a keynote at the Association for Educational Assessment-Europe conference in Tallinn, Estonia, following her Kathleen Tattersall New Researcher Award. We are pleased to publish the paper based upon her DPhil research: Language effects in international testing: the case of PISA 2006 science items. Together with Jo-Anne Baird and Art Graesser, El Masri investigates the extent to which the Arabic, English and French language versions of the PISA test are comparable in terms of item difficulty and demand (El Masri et al., 2016). As there is an ongoing discussion on whether it is possible to assess and compare science, mathematics and reading performances fairly across countries and cultures, this study offers important findings for future research. Using released PISA items, El Masri et al. show how language demands vary when comparing Arabic, English and French versions of the same item, and hence could impose different cognitive demands on the students taking the PISA test in different countries. With the expansion of PISA to other countries through PISA for Development, and the need for fair comparisons across countries, El Masri et al. suggest that subsequent research could explore computational linguistics approaches to test transadaptation as an alternative to the expert judgement that is current practice in international test development.

The next article in this issue, by Freitas, Nunes, Reis, Seabra, and Ferro (2016), Correcting for sample problems in PISA and the improvement in Portuguese students' performance, reports a
British Journal of Educational Studies, 2020
This chapter raises many important questions about the PISA project, focusing on two critical arguments with implications for education policymaking. The first argument relates to the PISA project itself and is that basic structural problems are inherent in the PISA undertaking and, hence, cannot be fixed. I will argue that it is impossible to construct a test that can be used across countries and cultures to assess the quality of learning in real-life situations with authentic texts. Problems arise when the intentions of the PISA framework are translated into concrete test items to be used in a great variety of languages, cultures, and countries. The requirement of "fair testing" implies by necessity that local, current, and topical issues must be excluded if items are to transfer objectively across cultures, languages, and customs. This runs against most current thinking in science education, where "science in context" and "localized curricula" are ideals promoted by UNESCO and many educators, as well as in national curricula.

My second argument relates to some of the rather intriguing results that emerge from analyses of PISA data. It seems that pupils in high-scoring countries also develop the most negative attitudes toward the subjects on which they are tested. It also seems that PISA scores are unrelated to educational resources, funding, class size, and similar factors. PISA scores also seem to be negatively related to the use of active teaching methods, inquiry-based instruction, and computer technology. PISA scores seem to function like a kind of IQ test on school systems. A most complex issue is reduced to simple numbers that may be ranked with high accuracy. But, as with IQ scores, there are serious concerns about the validity of the PISA scores. Whether or not one believes in the goals and results of PISA, such issues need to be discussed.
2010
Issues with the conceptualization, implementation and interpretation of the OECD Programme for International Student Assessment (PISA) are examined. The values that underpin the project are discussed. What PISA is intended to measure (preparation for life, key competencies, real-life challenges, curriculum independence) is contrasted with what it probably measures. Issues are identified relating to cultural fairness (quality and equivalence of translations, Anglophone origins, response styles, importance accorded to the test) and the representativeness of participating populations (samples, response rates, adjustment for nonresponse).

PISA is a project of the Organisation for Economic Cooperation and Development (OECD), set up to provide member states with 'international comparisons of the performance of education systems' in key subject areas (OECD, 2001, p. 27), more specifically, the reading, mathematical, and scientific literacy skills of 15-year-olds. First conducted in 2000, it runs in three-year cycles. It is one of a number of large international surveys, such as the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS), both organized by the International Association for the Evaluation of Educational Achievement (IEA). PISA differs from its competitors in its frequency and cyclical nature, and in its focus on the 'knowledge and skills that are essential for full participation in society' (OECD, 2007, p. 16) rather than on the outcomes of a curriculum. Perhaps due to the reasonable performance of Irish students, PISA has not come under the same media and academic scrutiny in Ireland as in some other countries. For example, the unexpectedly low ranks in PISA 2000 of Germany and Denmark resulted in major political, educational and academic responses, and in changes in their education systems (e.g., Dolin, 2007; Rubner, 2006). This has meant that an unusually large proportion of academic criticism of PISA has come from Germany (e.g., Hopmann, Brinek, & Retzl, 2007). This can be contrasted with the largely uncritical response to PISA in Ireland.
In this chapter the authors explore the role of non-educational factors, specifically socio-economic and cultural variables, on Programme for International Student Assessment (PISA) outcomes. They suggest that non-educational factors like socio-economic and cultural variables have a strong, but largely unexplored, impact on PISA outcomes. They show that PISA scores increase with a country's socio-economic affluence, as well as with measures of human development (like the absence of child labor and Internet use). PISA outcomes also vary with cultural factors like individualism and obedience to authority (power distance). The authors argue that it is not warranted to attribute, without qualification, high scores on PISA to excellent schools and poor performance to weak schools. Specifically, they find that there seem to be two paths to the top of the PISA rankings: relatively egalitarian and individualistic cultures (like those of Finland or Canada) or relatively collectivist and paternalistic cultures. A country's position on the global PISA ranking provides very little information about the quality of its schools. It is more meaningful to compare a country with carefully chosen peers, for example, other countries with which it shares important socio-economic and cultural attributes.
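As a purely illustrative sketch of the kind of country-level association described in the abstract above, the following Python fragment computes a correlation between mean PISA scores and a socio-economic affluence index. All values and variable names are invented for illustration; they are not real PISA or affluence data, and the snippet is not the authors' analysis.

# Hypothetical illustration only: the values below are invented, not real PISA data.
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

pisa_means = [523, 496, 478, 512, 455, 531]        # invented country mean scores
affluence = [0.92, 0.81, 0.70, 0.88, 0.61, 0.95]   # invented affluence index per country

r = correlation(pisa_means, affluence)
print(f"Pearson r between mean PISA score and affluence index: {r:.2f}")

A positive r would be consistent with the authors' point that affluence, rather than school quality alone, tracks PISA rankings; it would not, of course, establish causation.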
The Australian Educational Researcher, 2015
This article considers the role played by policy makers, government organisations, and research institutes (sometimes labelled "think tanks") in the analysis, use and reporting of PISA data for the purposes of policy advice and advocacy. It draws on the ideas of Rizvi and Lingard (2010), Bogdandy and Goldmann (2012) and others to explore the ways in which such "agents of change" can interpret, manipulate and disseminate the results of data arising from large-scale assessment survey programs such as PISA to influence and determine political and/or educational research agendas. This article illustrates this issue by highlighting the uncertainty surrounding the PISA data that have been used by a number of prominent, high-profile agents of change to defend policy directions and advice. The final section of this paper highlights the need for policy makers and their advisors to become better informed about the technical limitations of international achievement data if such data are to be used to inform policy development and educational reforms.
2017
The aim of this article is to explore the phenomenon of PISA, the Programme for International Student Assessment. The PISA study is a unique initiative in the contemporary educational world. Initiated in 2000 by the OECD, it currently includes 65 countries and territories from all over the world. We decided to identify the multiple aspects, or, speaking metaphorically, the many faces of PISA, which carry different messages and are subject to different value judgements by various interest groups. In our publication, we discuss ten different aspects of PISA. PISA can be perceived as a symbol of globalisation, as a manifestation of neoliberal ideology in education, as a methodological controversy, as a research database, as a benchmark, as a league table, as promotion, as punishment, as business, and, finally, as policymaking. We conclude that the key actors of the educational domain should use the various aspects of the PISA survey in a more skilful and selective way for achieving their goals and sec...
The Programme for International Student Assessment (PISA) is the most ambitious endeavour of large-scale education systems evaluation ever implemented. The Organization for Economic Cooperation and Development (OECD) launched this exercise for the first time in 2000, and in the 2012 edition 65 education systems were assessed. According to the OECD, the programme "[…] is a triennial international survey which aims to evaluate education systems worldwide by testing skills and knowledge of 15-year-old students", and "[…] tests are designed to assess to what extent students at the end of compulsory education can apply their knowledge to real-life situations and be equipped for full participation in society."

Albeit being a prestigious programme, entrenched in sound theoretical grounds, and notwithstanding all the efforts made by PISA experts to mitigate shortcomings, PISA is not exempt from criticisms of various kinds. When analysing the quotes mentioned above, and taking into consideration the applied methodologies, several questions can be raised and some concerns should be pointed out. The first question arising in the process of evaluation is that any measurement always affects, directly or indirectly, the system itself, disturbing its inner workings. This fact is particularly relevant when social systems are at stake. A second difficulty arises when students from countries that differ greatly in culture, tradition, and beliefs are subjected to the same test. Although all items are carefully analysed by panels of experts in order to detect cultural bias or offensive interpretations, there is no complete guarantee that the final set of items is adequate to evaluate all students. Another question regarding the fairness of PISA results is that a paper-and-pencil (or computer) test, limited to three disciplinary domains, cannot encompass the possibly rich, diverse, and unsuspected knowledge and skills of 15-year-old students. There are also technical criticisms regarding the adopted approaches and methodologies, from the utilization of the Rasch model to negative remarks about the way data are collected and questions are coded. Some of what could be considered advantages of PISA (the literacy-based rather than curriculum-based approach, the assessment of 15-year-old students rather than pupils in a particular school year, and the definition of a large set of indicators, as is the case of ESCS) have also been severely criticised. Finally, some of the criticisms reside not in the PISA methods and characteristics themselves but in an excessive focus on country rankings, primarily promoted by the media and consequently followed by political leaders.

The main objective of this research is to reframe difficulties and artefacts together with virtuous results of PISA, putting praises and criticisms in perspective to foster a better understanding of this important programme.
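For readers unfamiliar with the Rasch model referred to in the criticism above, its textbook one-parameter logistic form (stated here as the general model, not as a description of PISA's specific scaling procedures) gives the probability that student p answers item i correctly as

P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}

where \theta_p denotes the student's ability and b_i the item's difficulty. Technical criticisms of this kind typically concern whether a single difficulty parameter per item can hold invariantly across languages, cultures, and countries.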