O'Sullivan, A. (2013) 'Gulf Comparative Education Society's Fourth Annual Symposium', in Bridging the Policy/Research Divide in Education in the GCC. Ras al Khaimah, UAE: GCES, pp. 110–119.
I take my title from a 2002 article by Ann Oakley concerning the move in academic and policy circles towards strengthening the research evidence base in the social sciences. Educational research was often deemed deficient in its use of evidence and was thus seen by many actors as a particularly ripe area for the adoption of a much more "evidence-based culture". Over the past few years I have been focusing my research efforts on investigating the use of evidence in the fields of education leadership, decision making, and educational policy.

The complexities surrounding the nature and understanding of evidence, what constitutes it, and how it is formed and rendered counterpose the attractive directness of the almost common-sense assumption that "good" evidence helps in the formulation of better policy and leads to improved practice. The notion of disinterested, objective evidence guiding the policy maker must be examined. Where does the evidence-based approach originate? Where does the evidence-based 'movement' reside in terms of its epistemology and its world view? How is evidence deemed compelling or convincing enough to advocate a change or reform policy? What implications for educational practice does a more evidence-based policy imply in terms of its constructs of education and research (their conduct, role and purposes)? How does the evidence-informed policy movement impact educational policy formulation and decision making here in the Gulf region, and in the UAE?

There appears to be a growing space in the policy field for advocational research primarily concerned with providing policy actors with supportive data to formulate policy matching their 'ideological and political constraints' (Lauder, Brown and Halsey 2009: 5) and with convincing actors of the need for certain types of reform.

Lauder, H., Brown, P. and Halsey, A. H. (2009) 'Sociology of education: a critical history and prospects for the future', Oxford Review of Education, 35 (5): 569-585.
Oakley, A. (2002) 'Social science and evidence-based everything: the case of education', Educational Review, 54 (3): 277-286.
Review of Research in Education, 2010
Practice theory: Diffractive readings in professional practice and education, 2017
Peters, M. A., & Tesar, M. (2017). Bad research, bad education: The contested evidence for evidence-based research, policy and practice in education. In J. Lynch, J. Rowlands, T. Gale, & A. Skourdoumbis (Eds.), Practice theory: Diffractive readings in professional practice and education (pp. 231-246). London, UK: Routledge.
What role does scientific evidence play in educational practice? Supporters of evidence-based education (EBE) see it as a powerful way of improving the quality of public services, one readily applicable to the education sector. Academic scholarship, however, points out important limits to this applicability. I offer an account inspired by Tullock's theory of bureaucracy that helps explain EBE's influence despite these limits. Recent configurations of EBE are an imperfect solution to two imperatives where policymakers are at an informational disadvantage: (i) guiding professionals working in the field and (ii) evaluating evidence from academic researchers. EBE, especially in the form of RCTs and systematic reviews, offers a way of filtering a complex range of research to produce a determinate result that is transparent to policymakers. However, this impression of research transparency is misleading, as it omits theoretical background that is critical for successfully interpreting the results of particular interventions. This comes at a cost of relevance to the frontline professionals whom this research evidence is supposed to inform and help.
Education Sciences, 2019
This paper examines the context of evidence-informed practice (EIP) by inquiring into how educational practice is defined and organised, and how predominant understandings of educational practice are concomitant with preferences for particular forms of evidence. This leads to discussion of how certain educational research traditions speak (or are unable to speak) to these evidence requirements, and how this shapes the nature of EIP. While the rise of EIP can be understood as part of the increasing attention paid by governments to systemic 'improvement' in education systems, it can be argued that the lack of a coherent body of educational knowledge in many national traditions enables governments to exercise control not only of definitions of 'what works' in education but also over conceptualisations of educational practice. For some policy makers and practitioners, the much-remarked dislocation between 'evidence' and teaching practice in many national contexts can only be solved by a narrowing of what counts as knowledge alongside a more prescriptive control over what counts as acceptable educational judgement. However, such an alignment serves to exclude wider educational purposes and arguably instrumentalises pedagogical relations. Meanwhile, some continental European countries maintain traditions that may serve to mitigate such developments, although these traditions are not without challenge.
Educational Research and Evaluation
In a disquieting phrase in his volume Foucault, Power and Education, Ball (2013) comments that "we do not speak discourses, discourses speak us. Discourses produce the objects about which they speak" (p. 20). Forces outside ourselves, for example, in society, the economy, culture, politics, and policy makers, the "establishment", determine and influence in part what we say, what is important, what we think and think about, how we think, what we decide, and what constitutes acceptable knowledge: a sociology of knowledge. Part of Foucault's project was to track the "genealogy" and "archeology" of power in producing what counts as, and what is excluded from, "approved" knowledge, who decides, and how ideology operates in this process. The intersection of power and knowledge promotes particular discourses and silences others; power and its operations, be they positive or negative, are immanent in discourse. Foucault interrogates the constitution of socially legitimated knowledge, which involves surfacing the power of the few and exposing structured silences on certain forms and areas of knowledge: a precursor to the emergence of agnotology.

Discourses might set the agenda, and, within discourses, evidence is more than mere information or data; rather, it takes information and data and uses them fairly and ethically, without distortion, biased selectivity, or dishonesty, to make a case for such and such, as in a court of law. Question: "What has this to do with educational research?" Answer: "Everything". Look at who and what shapes a currently increasingly dominant discourse of educational research and its links to evidence-based policy and practice and the "what works" agenda: the prominent advocacy of randomised controlled trials (RCTs) in educational research, together with meta-analysis of RCTs in education. This discourse approaches hegemonic status (Pearce & Raman, 2014), with the power of the imprimatur of the What Works Clearinghouse and the Institute of Education Sciences of the US government, the comment from Haynes, Service, Goldacre, and Torgerson (2012) in a UK government publication stating that RCTs "are the best way of determining whether a policy is working" (p. 4), the sponsorship of RCTs by the high-profile Education Endowment Foundation in the UK, and national RCTs in Europe (Bouguen & Gurgand, 2012).

RCTs and large-scale research (i.e., involving many cases) feature in the volume by Gorard, See, and Siddiqui (2017), reviewed by Wayne Harrison in this issue, and he catches the authors' comments on the need for "more robust evaluations in educational research", advocating greater scrutiny of, inter alia, research design, scale (number of cases), the implications of sample dropout (attrition), the quality of the data, and "the number of counterfactual cases needed to bring a finding into doubt". RCTs, meta-analysis of RCTs, and large-scale research certainly have their place in educational research, but what is their place? Discourses of RCTs, meta-analysis of RCTs, and large-scale research from policy hegemons, if they become the sole order of the day or the dominating discourse of educational research, risk becoming a race to the bottom, forfeiting the benefits of adopting a wealth of approaches to researching "what works" and what counts as robust research evidence in education. Fitness for, and of, purpose suggests
British Journal of Educational Studies, 2016
This book is described as 'an indispensable text that critically sets out skills and knowledge required by a specialist educator for students who present with dyslexia', and with this I concur. The structure of the book is mapped explicitly to the specific professional criteria of the British Dyslexia Association's (BDA) internationally recognized framework. It is therefore well placed to form part of the recommended reading on any courses that are
International Journal of Education Policy and Leadership, 2017
This paper examines the impetus for schools to engage in and with evidence in England's self-improving school system. It begins with an examination of how the education policy environment has changed, shifting from predominantly top-down approaches to school improvement to the current government's focus on schools themselves sourcing and sharing effective practice to facilitate system-level change. The paper then explores some of the key factors likely to determine whether schools engage in meaningful evidence use, before analyzing survey data from 696 primary school practitioners working in 79 schools. The paper concludes by highlighting where schools appear to be well prepared, and where under-prepared, for a future of evidence-informed self-improvement.

Educational evidence rarely translates into simple, linear changes in practice in the ways that 'what works' advocates might hope. Instead, it is suggested that evidence must be combined with practitioner expertise to create new knowledge which improves decision-making and enriches practice so that, ultimately, children's learning is enhanced (Cain, 2015). It is also felt that professional values and ethics should inform any such process, so that teachers retain a focus on 'what matters' as well as 'what works'. At the same time, however, it is also argued that any pandering to professional prejudice should be avoided: so while the quality and rigour of the evidence is important, it is also key that practitioners themselves possess the skills, motivation, and support required to access and critique evidence, whilst overcoming 'activity traps', i.e., taking quick decisions based on personal judgements, which are often unreliable as well as susceptible to biases (Katz and Dack, 2013; Barends, Rousseau and Briner, 2014).

Adding to this complexity, there has been little research undertaken to provide an evidence base on effective evidence use (Levin et al., 2013; Nelson and O'Beirne, 2014; Cain, 2015) and so provide support to either side of the debate; likewise, there are no acknowledged practical systems or processes that have been adopted across the piece to represent effective or preferred ways to connect evidence to practice. While this situation is now being addressed through initiatives such as the Education Endowment Foundation's £1.4m investment in projects focusing on approaches to increasing the use of research in schools, it will take a number of years before the evaluations of these projects emerge, and longer still before any meta-analysis or synthesis of them might be undertaken and used to provide an overarching frame outlining effective and less effective ways to connect research to practice.
In the current climate of "accountability" and "transparency" demanded by the public, policy makers justify their actions by drawing on research findings and data collected by various means. There appears to be a belief that quantitative data provide more credible evidence than qualitative data; hence the use of data has become pivotal in decision-making. More recently, education policy documents drawing on international student survey results have appeared around the world. This paper evaluates some of the evidence used by policy makers and shows that there is a great deal of uncertainty surrounding the data underlying these research findings. More importantly, the paper demonstrates that statistics alone cannot provide hard evidence. In fact, we need to draw on our own experience and a great deal of sense-making in interpreting data and drawing conclusions.
2017
Implicitly, therefore, and sometimes quite explicitly, qualitative approaches to research are marginalized. The debate seems to reflect both long-term changes in what we might call the 'terms of trade' between science and policy, and more specific short-term jockeying for position amongst particular researchers and government officials/advisers at a particular point in time. The intensity and focus of the current debate in the UK can be dated from a 1996 speech by David Hargreaves (then Professor of Education at Cambridge University) to the Teacher Training Agency (TTA, a government agency regulating teacher training). Hargreaves (1996) attacked the quality and utility of educational research, arguing that such research should produce an "agreed knowledge base for teachers" (p. 2) that "demonstrates conclusively that if teachers change their practice from X to Y there will be a significant and enduring improvement in teaching and learning" (p. 5). Subsequent government-sponsored reviews and reports took their lead from this speech and produced what might be termed a mainstream policy consensus that the quality of educational research was low, particularly because so many studies were conducted on a small scale and employed qualitative methods, and that therefore "something had to be done" (Hillage,
Evidence and Expertise in Nordic Education Policy, 2022
Mind, Brain, and Education, 2013
British Educational Research Journal, 2006
International Studies in Sociology of Education, 2011
British Journal of Sociology of Education
British Journal of Educational Studies, 2001
Review of Education, 2019
Humanities and Social Sciences Communications, 2020
British Educational Research Journal, 2003
Forum Oświatowe, 2015
Review of Education, 2020
European Educational Research Journal, 2006
European Educational Research Journal, 2003
Educational Research, 2017