Training and valuing the brain through neurotechnology

Image from Leiden University Libraries on Unsplash

A neurotechnology company has just released a new headset device that claims to improve “cognitive fitness” by scanning and training users’ brains. The device, called Athena, is being marketed by Muse as enabling customers to “Future-Proof Your Brain” and enhance their “mental fitness”, and is accompanied by an app described as “Your handheld neuroscience lab”.

The marketing of Athena is slick and seductive, but it glosses over significant controversies concerning neurotechnology in society and brain-based training programs.

For Muse and other neurotech companies, the brain is an organ to be trained and tuned through educational and self-help programs. It is widely claimed that a “neurotechnology revolution” is underway, which will lead to highly lucrative markets in brain-computer interfaces and rapid improvement in healthcare, as well as implications for workplaces, training and education. But neurotechnology also raises sharp bioethical and human rights problems.

The new Muse neuroheadset is indicative of how the brain has become an object of potential value for neurotechnology companies, far beyond biomedical or neuroscience laboratories – which also resonates with wider discourses of brain improvement and optimization.

Attentive brains

As part of a Leverhulme Trust-funded project on biology, education and data science, Dimitra Kotouza led a recent paper with Martyn Pickersgill, Jessica Pykett and myself exploring how neurotechnologies were being put to the task of monitoring and managing “attention” in educational and training contexts.

The paper showed how attention has been conceived by neuroscientists via electroencephalography (EEG) as distinct brainwave patterns that can be scanned through the skull. EEG devices and associated technologies of analysis have made attention legible as cerebral activity. As a result, new kinds of neurotechnology-driven training interventions have been proposed and developed to directly identify, predict, and prevent “lapses” of attention.
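To make concrete how such devices render attention legible as cerebral activity, here is a minimal illustrative sketch of the kind of spectral processing discussed in the neurofeedback literature. It is not Muse’s proprietary pipeline; the sampling rate, band limits and the “engagement index” formula (beta power divided by alpha plus theta power) are all assumptions for illustration.

```python
# Illustrative sketch only: a generic spectral "engagement index" of the kind
# discussed in neurofeedback research, NOT Muse's actual (proprietary) algorithm.
# Sampling rate, band limits and the index formula are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz

def band_power(signal, low, high, fs=FS):
    """Average spectral power in a frequency band, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()

def engagement_index(eeg_window):
    """beta / (alpha + theta): one commonly cited proxy for attentional engagement."""
    theta = band_power(eeg_window, 4, 8)
    alpha = band_power(eeg_window, 8, 13)
    beta = band_power(eeg_window, 13, 30)
    return beta / (alpha + theta)

# A 4-second window of synthetic noise stands in for a real single-channel recording.
window = np.random.randn(FS * 4)
print(f"engagement index: {engagement_index(window):.2f}")
```

A neurofeedback app then maps moment-to-moment changes in such a score onto audio or visual feedback, which is what makes “lapses” of attention appear identifiable, predictable and trainable.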

In this way, attention has become a source of value too. Commercial neurotech companies have marketed attention-focusing devices to consumers such as students, and extracted brain data for further product enhancement.

Athena is not the first neurotech device from Muse, which previously released brainwave reading devices based on EEG with neurofeedback functionality to support attention. In the paper, we noted that an earlier iteration of its device was used for a school intervention, where students with high numbers of disciplinary “office referrals” received neurofeedback training to concentrate their minds and reduce problematic behaviours.

The Muse EEG headset had therefore become a neurotechnical means to control learners’ discipline in schools – using neurofeedback to make them more attentive. The subsequent development of the product to target cognitive or mental “fitness” represents the next step for neurotech, shifting from scanning to modelling and training the brain. It exemplifies how a trainable brain has become an object both of intervention and of value generation. That value is now being realized with the upgraded Athena device.

Brain modelling

What Muse claims it’s adding to its suite of neurotech devices with the launch of Athena is functional near-infrared spectroscopy (fNIRS) sensors, which are designed to detect neural signals of mental effort. According to the press release:

Athena transforms real-time brain activity into actionable insights, personalized training, and measurable progress. It’s powered by Muse’s AI-driven Foundational Brain Model (FBM), which is trained on over 80,000 sessions from the world’s largest EEG database.

It also comes with an in-built skills game to help users “strengthen their minds just as they do their bodies”. While previous headsets from Muse incorporated EEG to scan for signals of attention, then, the incorporation of fNIRS means it can scan for effort – and provide “actionable insights” for its improvement.

There’s a lot to unpack here, building upon points in our earlier paper. First, Muse is positioning Athena as a real-time, highly data-driven technology of “personalized” training, reflecting the wider discourse of personalized learning associated with AI in education. Forms of brain-personalized education, training and learning are increasingly invoked by neurotechnology companies and supporters. The idea is that brain data could be incorporated into learning platforms to provide personalized pedagogic feedback to students, particularly during lapses of attention.

While Athena may not be specifically targeted at students, it’s clear that education often demands high levels of cognitive performance and achievement.

“With EEG-powered real-time neurofeedback, you can sharpen focus and sustain attention when it matters most”, claims the Muse marketing. “fNIRS tracking measures mental effort and stamina, helping you push cognitive limits while avoiding burnout”. This is potentially appealing for some in high-pressure higher education contexts.

Second is its claim to have amassed a huge EEG database. This clearly shows how brain data collected from users can become a valuable asset for neurotechnology companies, to be used in creating subsequent commercial product releases. Along these lines, a notable feature of Athena is its “AI-driven Foundational Brain Model”. Here we can see what Rodrigo de la Fabian and colleagues have called a “de-substantialization of the brain” into “computable data” for model building and training. Through neurotechnological means, the brain can be datafied and translated into neuroinformational models.

From there, brain data become valuable as assets owned and controlled by neurotech companies that promise future income streams. It’s worth noting, for example, that the Athena headset comes with a hefty price tag of more than $450, plus ongoing subscription costs. Customers are also paying on top with their brain data, which Muse can capitalize for future product enhancements.

Phrenological technologies

The third notable feature of the Athena product release is the claim by Muse that neurofeedback can increase cognitive or mental fitness. The promotional materials for the device liken neurofeedback to athletics training, and are replete with imagery of physically fit young people.

While Muse claims that Athena is unique and innovative in supporting cognitive fitness, the discursive claims are much older. They can be found in 19th century programs of “cerebral self-improvement”, as Fernando Vidal and Francisco Ortega have documented in their book Being Brains.

Quoting Victorian thinkers and educators from that time, Vidal and Ortega show how various forms of brain training were assumed to bring about “real physical changes” in brain parts and result in “alterations to the external form of the skull”. Training programs for cognitive fitness were phrenological exercises informed by Victorian morals and values.

“For phrenologists as for latter-day promoters of ‘brain fitness’, mental health consisted of exercising all organs daily”, Vidal and Ortega argue. “This is the very premise of twenty-first century ‘brain gyms’, whose pseudoneuroscientific bases have been debunked without apparent effects on their commercial success”.

Neurotechnologies like Athena, then, can be seen as part of a longer line of phrenological technologies and pseudoscientific initiatives. Such programs have targeted the brain for fitness optimization according to particular sets of historically situated values.

Athena clearly taps into a contemporary cultural fascination with self-tracking, monitoring and sharing personal data on social media too. It’s a kind of brain-tracking FitBit that senses biological signals of mental fitness through the skull rather than the skin. On the sales site, it even asks customers if they are “Ready to livestream your brainwaves?” It’s self-phrenology for posting to social channels.

Future-proofing the brain

The discursive framing of “future-proofing the brain” also resonates with broader political interests in the brain, its health and its cognitive maximization. In fact, as we argued in the paper, it reflects a growing interest in what the OECD has begun to characterize as boosting “brain capital”. The OECD has directly linked mental health and mental performance to economic costs and social and labour market outcomes.

The authors of a “Brain Capital Grand Strategy” paper informing the OECD have promoted the creation of a “Brain Capital Index”, which would also draw mental performance data from “brain imaging” and “digital biomarker-based surveillance tools”.

Our current economy is indeed a Brain Economy—one where most new jobs demand cognitive, emotional, and social, not manual, skills, and where innovation is a tangible “deliverable” of employee productivity. With increased automation, our global economy increasingly places a premium on cerebral, brain-based skills.

Ultimately, the brain capital imaginary is intended to future-proof economic productivity through future-proofing the brain, and its proposals include interventions to improve “brain skills” through various neuro-informed training and education programs.

The OECD’s interest in measuring brain capital through digital neuroimaging and associated biosensors is indicative, then, of how products like Athena are not only consumer products. They are also potentially political-economic technologies that could be repurposed to capture and aggregate embodied signals of future cognitive productivity. The cognitive fitness promoted by Muse is ideal for the OECD’s imagined future Brain Economy.

Intensified learning

The new Athena headband from Muse, then, is not merely a gimmicky neurotechnology. It demonstrates how direct-to-consumer neurotechnologies are incorporating AI for enhanced functionality, and generating value from extracting brain data for aggregation into models and new product enhancement. These neurotechnologies are extending beyond the measurement of attentional states to being framed as aids to cognitive fitness.

They are also reworking older phrenological legacies of brain optimization, only now framed as the self-presentation of neurotech-augmented mental fitness. As a result, neurotechnologies like Athena also resonate with imaginaries of brain-based future economic governance.

The brain is becoming not only scannable and legible with neurotechnologies, but imagined as a trainable organ. It can be tuned for optimal cognitive performance through the capturing of real-time brain data and the application of neurally-personalized interventions. This ultimately contributes to a vision of intensified brain-based learning and training regimes enacted under the discursive framing of future-proofing cognitive fitness.

The paper “Attention as an object of knowledge, intervention and valorisation: exploring data-driven neurotechnologies and imaginaries of intensified learning” by Dimitra Kotouza, Martyn Pickersgill, Jessica Pykett and Ben Williamson is published open access in Critical Studies in Education.


Great edtech exhibitions as futuring events

Google product stand at BETT 2025. Author photo.

Educational innovations have been presented at large shows since the Great Exhibitions of the nineteenth century. At the Great Expos, national system leaders would display the latest reformatory ideas and statistics of their performance to other government figures and the general public.

These international shows were designed to create desire for better futures, which were made to seem plausible and attainable through scientific, technical and industrial prowess. Expos persist as international utopian projections of progress through science, technology and innovation.

Today, huge exhibition centres and conference venues host dedicated educational technology trade events to showcase the latest technical developments by edtech and big tech companies to huge audiences of educational customers, policy officials, investors, and the press.

Edtech exhibition events are literally where edtech and big tech sell their brands and products to schools and universities – but they are much more too. We can see them as “futuring events” where, like the Great Expos, the digital future of education is presented, made to appear certain and inevitable, and depicted in glossy and seductive visuals, discourses, and demonstrations.

In the last few months, as part of our ongoing work on “futuring” practices and methods in the education industry at the ESRC Centre for Sociodigital Futures, Carolina Valladares Celis, Arathi Sriprakash and I have attended three different large events in three countries, each one focused on education, technology and the future. We treat these as social, material and technical sites of future-making.

Here I report some notes from my recent attendance at the British Education and Technology Trade (BETT) show in London, “the biggest education technology exhibition in the world” – or a kind of modern-day Great Expo for edtech. Drawing from observation notes and photographs taken over two days, and building on the published observations of others from similar events, here I reflect on edtech exhibitions as futuring events.

Edtech futurism repeats itself

Edtech exhibitions are futurist discourse dissemination forums. The discourse typical of BETT is that schooling must be “smarter”, with technologies elevated as solutions to all contemporary problems of a “broken” system of institutionalized education.

The particular futurist discourse infusing these events is channelled through speculative claims, sales pitches promising transformative effects, and repetitive invocations of science-fiction-like tropes of seamless human-machine interaction and enhancement.

The smart discourse even extends to school bags and lockers, but is most spectacularly represented by the presence of smart screens and interfaces, which promise frictionless interaction with technologies.

It’s all great fun, staffed by friendly brand reps willing you to try out their tools and toys to see what the interaction is like for yourself, and carries an undeniably affective and optimistic charge.

Pravin Balakrishnan has written from a previous BETT experience that the show represents a “colonization of the future” and the materialization of an “affective ideology”, where the positive rhetoric of “interaction between learners and largely EdTech companies mobilizes broader affective conditions in the taking up of EdTech in schools and communities”.

Product stand at BETT 2025. Author photo.

The discourse of edtech shows like BETT is accompanied by compelling visual semiotic presentations. Huge screens adorn almost every vendor stand – some of them towering over the thousands of passing visitors. The stands themselves are like plastic fortresses of corporate typography and design.

The typical imagery often points to some better environment beyond the conventional classroom. As you walk the stands, it appears that the chalkboards characteristic of outdated schooling have become interactive whiteboards, which in turn have become touchscreens and then transformed finally into “immersive” virtual reality environments. This all signifies a more “engaging”, immersive and tactile “experience”. Such experiences are often “gamified” – there is even an e-sports arena.

The semiotic impression given by BETT is that the physical school walls have dissolved, giving way to experiences more akin to high-definition video gaming and simulation environments.

Teaching machines 2.0

AI and robotics are not new presences at edtech exhibitions, as Kalervo Gulson and Kevin Witzenberger previously observed, but recently they are ever-present as technical-material instantiations of “the future of education”. Startup edtech companies are given time on stages to present and platform their innovations and make claims that (as I observed) “school sucks” but AI promises to “unbox education” from its structural constraints.

AI is attached to every possible aim and purpose of education, with “pioneering” innovation said to be “transformative” for “enhancing” learning outcomes, “empowering” students, addressing inequalities, “upskilling”, providing “opportunity to life” and driving up student employability.

Product stand at BETT 2025. Author photo.

In other words, edtech exhibitions have a kind of incantatory quality when it comes to the latest trending technologies, designed to naturalize the idea of their introduction into educational settings through repetitions of their transformative potential.

Reporting recently about the Consumer Electronics Show, David Roth wrote that its “overwhelming theme, which various attendees I talked to said was basically a rerun of the previous year’s version, amounted to turning your life over, bit by bit and moment by moment, to artificial intelligence technology that would do ever larger amounts of that living for you”.

BETT seemed to suggest, repetitively, that more and more of education should be turned over to AI too, to do our teaching and learning for us.

The twentieth-century pioneers of “teaching machines”, BF Skinner and Sidney Pressey, would be nodding approvingly as they admired the latest “personalized learning” and “automated teaching” innovations, if they had lived long enough to see their mechanical dreams on digital display at BETT.

Move fast and break schools

Naturally, the transformative potential of tech and AI is supposed to be realized as fast as possible. And this also means a rapid de-institutionalization of education, represented repeatedly as a shift from “schooling” to “learning”. Demonstrations of new technical developments are accompanied by plenary discussions with sector and industry experts who insist we must not only “shift from schooling to learning” but also do so at speed, immediately, moving in the “agile” manner of a tech startup rather than at the slower pace of conventional educational institutions.

Edtech exhibitions, then, depict a kind of accelerated, nimble startup-ification of the historically conditioned structures of education. The message is “move fast and break schools” following the logic of social media and the venture capital industry.

Edtech shows are also spaces where various forms of expertise are invoked to add authority to industry expectations. For example, psychology-lite slogans about “learning” are mobilized to suggest that technology has the power to affect students’ minds. The brain, cognition, and mind are invoked in presentations and imagery. Brains “light up” like light bulbs, amenable to electrical stimulation. Minds are presented as electrical circuits to be activated. Learning futures are to be “engineered”.

Product stand at BETT 2025. Author photo.

Edtech and AI, it is claimed, are in touch with how learning really happens – while schooling is in need of upgrading if not smashing down completely to be “restarted from scratch”. The sense I got was of a kind of scientization of edtech and AI to support a techno-deschooling agenda, though one that rarely gets at the massively complex social, cultural, economic and political factors involved in school reform.

The discursive, semiotic, scientized and assertive repetitions of edtech exhibitions are, however, also repetitive of much longer lines of educational criticism by industry figures. Throughout the history of computers in schools, entrepreneurs have asserted the superiority of new technologies for “speeding up” or “personalizing” learning or “saving time” for teachers. When the desired transformation does not happen, recalcitrant school structures and reluctant teachers are singled out as impediments to change, failing to adapt to industrial expectations of how education could or should be. 

Exhibiting the political economy of edtech

Besides the industry exhibits, edtech shows are also sites of political work and policy activity. They are spaces of educational diplomacy, where national ministries go to showcase, demo, and share innovations.

Travelling trade delegations consisting of government reps and selected industry partners set up stands to demonstrate their national industrial prowess in edtech. They aim to show other countries how the future should be done, by claiming they are already there, as Michael Forsman and colleagues have recently shown.

Edtech exhibitions are like travelling caravans of enormous trade stands and sales diplomats, doing for edtech what the OECD’s PISA does for assessments – encouraging competition and the lending and borrowing of ideas. As Catarina Player-Koro, Annika Bergviken Rensfeldt and Neil Selwyn observed from a similar edtech exhibition, “these events function as sites of policy interpretation – ‘sharing’ (or more accurately ‘selling’) global ideas and imperatives to local schools and teachers”.

Edtech exhibitions also provide spaces for industry, investors and governments to encounter each other. At BETT, the “Government and Investors” meeting room was positioned next to the edtech startup “Innovation theatre”. Edtech startups, investors, and government are put into contact at these shows. Market opportunities are imagined, promoted, and made to appear plausible and worth investing in for all parties.

Beyond these cross-sector encounters, edtech shows also function as political forums for government ministers to make epochal claims about the tech transformations of schooling they hope to oversee. At BETT, the Education Secretary, Bridget Phillipson, made keynote pronouncements about how the AI-powered future is going to unfold that would not have been out of place at an edtech product launch in the “AI Zone” of the exhibition.

Image from Bett 2025. Author photo.

“So here’s my vision for the future”, Phillipson announced. “A system in which each and every child gets a top class education, backed by evidence based tech and nurtured by inspiring teachers. A system in which teachers are set free by AI and other technologies, less marking, less planning, less form filling.”

She added, “We’re deploying AI to make that vision a reality, recognising it as the game changer that it is”. 

Edtech exhibitions, then, are not only industry parties and sales pitch venues, but also spaces where political commitments to the technofuture of education are made, in the highly speculative language of “game-changing” that mixes policy ambitions with industry projections and market aspirations.

These encounters between government, industry and other sectoral experts reveal that edtech is not merely a technical matter but entangled in a political economy of public-private relations and reciprocities. In particular, as governments seek to capitalize on the opportunities of AI as technosolutions to problems like teacher overwork, they also offer reassurances to industry actors that the education sector is open for business, paving the way for industry to sign lucrative public sector contracts and sell licenses to schools and universities.

The hidden business of edtech exhibitions

Behind every product demo and sales pitch at BETT is a business plan intended to secure the vendor economic benefits, particularly from long-term subscription agreements and contracts. Many such products are also designed to extract data as an additional source of value, and are capitalized by investors who expect lucrative returns when those public data are transformed into privately owned and monetizable products.

Both the big tech and edtech industries are seeking a bigger share of the education market in a political context that often implicitly favours school spending on private techno-solutions over other forms of public spending. Though such political economy considerations are not explicitly exhibited at edtech shows, they run through them as a kind of energizing current of electricity, invisibly infusing every handshake, sales demo, slidedeck, and lounge-bar after-party.

Edtech exhibitions like BETT, then, offer a seductive and glossy representation of the digital future of education. They are contemporary Great Expos for envisaging and naturalizing desirable technofutures and making other forms of education seem unimaginable. In this sense the show reflects how the AI industry itself seeks to make AI seem like the only possible path.

What is exhibited far less, but underpinning it all, are the business models and market-making activities that are intended to fuse the education sector with the edtech industry. I left wondering what an alternative edtech exhibition, informed by different sets of values and other ways of imagining the future, might look like.


Piloting turbocharged fast AI policy experiments in education

Government plans for AI in education are locking schools into a trajectory of inevitable adoption of AI to address sectoral problems. Photo by Joshua Hoehne on Unsplash

The UK Labour government has announced plans to “turbocharge” the deployment of artificial intelligence in public services. Part of the AI Opportunities Action Plan drawn up by investor Matt Clifford, and agreed in full by the government, this “unleashing” of AI will impact on education specifically, it is claimed, by supporting teachers to plan lessons and mark students’ work. But we are not going to need to wait months or years to see how this policy approach will play out in practice, as the Department for Education, supported by the Department of Science, Innovation and Technology, is already funding and piloting prototypes and promoting live experiments with AI in schools.

AI is already being (in the political discourse surrounding the plan) “turbocharged” in education. The DfE announced a £4 million fund last summer to support the development of an “educational content store” of official educational materials – curriculum guidance, standardized lesson plans, and exemplars of student assignments – to enable edtech companies to train or finetune AI models for automated planning and marking.

Most of the fund – £3m – has been awarded to the company Faculty AI. A much-favoured government contractor on AI since its involvement in the Vote Leave campaign back in 2016, Faculty had already been tasked by the DfE with helping to run teacher hackathons on AI, and had also completed a proof-of-concept study using large language models to automatically mark primary school literacy tests.

The findings were intended “for reference by the EdTech sector.” It also specified the need for the content store of training data for which it has now been awarded the £3m contract. The remaining £1m has been allocated to 16 companies to build working prototypes based on the store.
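To illustrate what such prototypes typically involve, here is a hedged sketch under my own assumptions; the DfE and Faculty AI materials do not publish their implementations in this form. Automated marking with a large language model generally means packaging a pupil’s response together with a mark scheme (of the kind a “content store” would supply) into a prompt. The `call_llm` function is a hypothetical placeholder, not a real vendor API.

```python
# Hypothetical sketch of LLM-based marking: the mark scheme and pupil response
# are combined into a prompt and sent to a model. `call_llm` is a placeholder
# for whatever hosted or fine-tuned model a vendor would actually use.
def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call; not implemented here."""
    raise NotImplementedError("Connect to a language model of your choice.")

def mark_response(pupil_response: str, mark_scheme: str) -> str:
    """Ask the model for a band and a short justification against the mark scheme."""
    prompt = (
        "You are marking a primary school literacy task.\n\n"
        f"Mark scheme:\n{mark_scheme}\n\n"
        f"Pupil response:\n{pupil_response}\n\n"
        "Return a band from 1 to 5 and a one-sentence justification."
    )
    return call_llm(prompt)
```

Everything in such a pipeline hinges on the quality and framing of the mark scheme fed into the prompt, which is precisely why a curated store of official curriculum materials is so central to the government’s plans.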

The new AI Plan reaffirms the government’s commitment to automated lesson planning and marking to alleviate teachers’ workload. It is, in effect, constructing a lock-in mechanism whereby schools are given no option other than to embrace AI, adopt it into practices, and integrate it into existing systems of pedagogy and administration, or risk being left behind.

The effects here are not just on schools: they signify a shift in policy practice to rapidly piloting and testing technologies in public sector settings.

Turbocharged techno-solutionism

Together, these current efforts are prototyping the future of AI-enabled automation of key teaching tasks. They are first steps towards the far grander vision of AI being integrated into the schooling system produced by the Tony Blair Institute, the political think tank of former prime minister Tony Blair that is backed by funding from Oracle’s Larry Ellison and is reported to have significant influence on the Labour government. The TBI’s manifesto for “governing in the age of AI” was co-produced with (again) Faculty AI.

Previously I’ve suggested that such efforts constitute a kind of technological solutionism, whereby “decisionmakers in positions of political authority may reach for technological solutions out of expedience — and the desire to be seen to be doing something — rather than addressing the causes of the problem they are trying to solve.”

The result of such solutionism may be “automated austerity schooling,” with the existing austerity conditions of schools – exemplified by persistent teacher shortages, retention problems, under-recruitment, and classroom overcrowding – left unresolved while funds flow to building AI solutions. Such solutions are said to address issues like reducing teacher workload by a certain number of hours per week. As the AI Plan indicates, this approach to AI solutions for schools is now being turbocharged.

The current desire to turbocharge AI in schools needs to be properly understood as a particular policy approach. Regardless of whether AI is viewed as highly promising for education, or as a public problem, there can be no doubt now that it is a major education policy preoccupation and therefore needs to be examined through the lens of critical policy analysis.

Critical policy studies of AI in education

Contemporary critical education policy analyses often focus in particular on the contexts and conditions of policy development and diffusion, and on the ways that varied actors, materials, technologies and discourses have to be assembled together and sustained in order to make any policy actionable and sustainable. One enduring concept in such research is “fast policy.”

Fast policy, according to political geographers Jamie Peck and Nik Theodore, is how much 21st century social and public policy is done. Policies, they argue, are made and diffuse at speed, supported by sprawling webs of actors that include consultants, think tanks, entrepreneurial “gurus,” and other thought leaders and thinkers-and-doers with “ideas that work.” These figures have the political connections, industry contacts, multisectoral knowledge and social capital to shape policy priorities, diffuse discourses, and get things done. Peck and Theodore describe fast policy as a form of “experimental statecraft.”

In education, we can see recent developments like the rapid diffusion of policy ideas about “learning to code” into official school curricula as the result of fast policy processes and practices. It’s accelerated, experimental statecraft in the sense of mobilizing ideas about technologies as a route to modernizing education systems and upskilling students for assumed digital futures.

From a fast policy perspective, we can view efforts to test and prototype AI in the English schooling sector as the result of such socially connected networks of ideational visionaries. They are building prototypes, undertaking technical pilots, constructing visionary reports, and circulating discourses to naturalize AI as a taken-for-granted aspect of the future of teaching and learning.

This is now fast policy being turbocharged in order to rush AI into schools. It involves tightly interconnected networks such as the TBI, Faculty AI, DSIT, and DfE, along with other organizations including AI for Schools and multi-academy trusts involved in the hackathons and pilots, as well as the edtech companies now enrolled into this program of live experimentation.

The turbocharged approach to fast AI policy in education can also be seen as an example of what sociologists Marion Fourcade and Jeff Gordon have termed “digital statecraft,” and a process of public authorities “cyberdelegating” their social responsibilities to private technology firms. In what they describe as a “dataist state,”

when the state defines itself as a statistical authority, an open data portal, or a provider of digital services, it opens itself up to claims by private parties eager to piggy-back on its data-minting functions and to challenge its supremacy on the grounds that they, the private technologists, are better equipped technically, more trustworthy, or both. … We suggest that the private appropriation of public data, the downgrading of the state as the legitimate producer of informational truth, and the takeover of traditional state functions by a small corporate elite may all go hand in hand.

Turbocharged AI policy involving the cyberdelegation of authority to mobile networks of private technology expertise is a concrete instance of dataist, digital statecraft. It means not only the work of the state being enacted by new fast policy networks, but outsourced to private automated technologies to accomplish public service tasks.

Indeed, if the current DfE work on AI for schools is influenced by the TBI, then it is significant that the TBI has explicitly positioned AI as a modernizing and transformative technology for the future of the British state at large. Current experiments in education may be understood as indicators of what is to come for state-run services across the public sector.

This reflects the ways UK politicians have even begun treating AI companies. As critical computing scholar Dan McQuillan has put it, “the secretary of state for science, innovation and technology, has repeatedly stated that the UK should deal with Big Tech via “statecraft”; in other words, rather than treating AI companies like any other business that needs taxing and regulating, the government should treat the relationships as a matter of diplomatic liaison, as if these entities were on a par with the UK state.”

“Blitzscaling” AI in education

The kind of fast, cyberdelegated AI policy being developed in education is not just concerned with the production of policy texts and discourses. In line with the AI Plan’s emphasis on fast-paced piloting and scaling of AI in public services, it exemplifies a form of live experimentation, prototyping and beta-testing of new tools within the schools system itself.

Science and technology studies scholar Stephen Hilgartner has written recently that the release of large language models constitutes a “global real-world experiment” which “casts societies throughout the world as test beds for LLM technology.”

The new government’s plans around generative AI likewise cast the schooling sector as a test bed for large-scale experimentation, piloting and testing of new prototypes and products, helping support what Hilgartner calls “an experiment in ‘blitzscaling’” this technology into everyday practices.

The AI Plan’s emphasis on rapid piloting and scaling is extremely industry-friendly. It’s hard not to imagine technology and edtech industry companies and their investors being very excited about the market prospects of schools becoming test beds for their AI innovations, ripe for their blitzscaling aspirations.

The current political discourse of “unleashing” and “turbocharging” AI in public services such as education, then, resembles the blitzscaling strategies of the technology industry, which rapidly roll out new technologies and treat users as live testing subjects. In other words, schools may become networks of AI testing labs, where the technology being live-tested is intended to actively intervene in professional processes and pedagogic practices like lesson planning and assessment.

Critics of both the government’s AI plan and the TBI vision underpinning it argue that public sector technology projects cannot and should not be rushed. For example, the director of the British Academy, Hetan Shah, has argued that “Tony Blair is wrong”:

Public services are complex systems, and rushing to bolt on unproved technology is unlikely to work. The UK does not have a good track record here and the danger is we will see the same kinds of IT transformation project failures that have been commonplace over the years in the public sector. In any nascent technology, and especially one as expensive as AI, the government will need to ask for much higher-quality evidence of costs and benefits. There is a lot of snake oil for sale.

The risks of rushing out AI snake oil into schools are very real. Yet in the English schools sector there is now a very powerful network of fast policy actors seeking cyberdelegated authority to turbocharge technology testing of AI solutions. They are already prototyping tools and publishing use cases, specifying the benefits of AI for teachers, and awarding funds to the edtech industry to build and test new products.

Whether you see AI as potentially beneficial for schools or not, it’s clear that AI in education is now a significant policy preoccupation – a preoccupation that largely prioritizes rapid innovation rather than foregrounding critical issues with the technology and its social, pedagogic and epistemic implications. It is locking in the English schooling sector to a trajectory of seemingly inevitable AI adoption and integration.

But viewing it as a policy process shows precisely that AI in schools is not inevitable, but a political choice that is now being supported and driven by highly influential fast policy networks. In fact, what we are observing with AI in English schools is not only pilot projects but a piloting of turbocharged fast education policy processes that may prove hard to slow down.


Critical keywords of AI in education

Photo by Giu Vicente on Unsplash

Notes for a keynote talk presented at the event Digital Autonomy in Education: a public responsibility, convened by the Governing the Digital Society initiative at Utrecht University and Kennisnet, 7 November 2024, for an audience of school leaders, teachers, teacher educators, academics, and school sector organizations.


The development of AI for education has a long history, but has only become a matter of mainstream excitement and anxiety since so-called “generative AI” arrived. If you want a flavour of the excitement about generative AI in education there are by now plenty of conferences, opinion articles, guidebooks, showcase events and so on about the most recent technical developments, applications, best practices, and forecasts of the future of AI for schools. My approach is different, because while generative AI is undoubtedly impressive in many ways – and may prove to have specific use cases for teaching and learning – it’s also a big problem for education.  

To take one example – last month the US press reported that wealthy parents had launched a legal case against a school where a teacher had penalized their child for using AI to complete an assignment. The school, they and their lawyer argued, had no AI policy in place at the time. It’s a compelling example of the problems with AI in education. At issue here is not whether the technology ‘works’, but – as the family’s lawyer has put it – whether using AI is plagiarism at all, or just ‘an output from a machine’. 

It also reveals the difficult position schools are in when trying to mitigate its use while AI remains ‘underregulated, especially in a school setting’. It shows how AI is running up against expectations of academic integrity, which are historically central to education systems and systems of credentialing and qualification. And it surfaces the unintended consequences of AI in educational settings, with schools now potentially pitted against students and parents and their lawyers because of it.

Maybe this will prove to be an edge case, or it could set a ‘legal precedent’. Whatever the outcome, it clearly demonstrates that treating AI simply as a bundle of innovative technologies with beneficial effects, to which schools need to bend themselves, is highly naïve.   

As I have argued before, we should instead see AI in education as a public problem.

The sociologist Mike Ananny wrote an essay earlier this year suggesting that AI needs to be understood as a public concern in the same ways we treat the climate, the environment and – in fact – childhood education itself as public problems. These are issues that affect us all, even if indirectly. Generative AI, Ananny argues, is fast emerging as a medium through which people are learning, making sense of their worlds, and communicating. That makes it a public problem that requires collective debate, accountability, and management.

‘Truly public problems,’ Ananny argues, ‘are never outsourced to private interests or charismatic authorities’. Instead we should convene around AI as a public problem in education, deliberate on its consequences and discuss creative, well-informed responses. 

Keywords of AI in education

My approach is to highlight some ‘keywords’ for engaging in discussion and deliberation about the contemporary public problem of AI in education. I take inspiration from other efforts to define and discuss the keywords that help us describe, interpret, conceptualize and critique dominant features of our cultures, societies and technologies. Keywords provide vocabularies for engaging with current issues and problems.

So my aim with the following keywords is not to offer technical definitions or describe AI features, but to offer a critical vocabulary centring AI as a public problem that might help provoke further discussions about the practical applications of AI in schools and the AI futures that are said to be coming. 

Speculation. The first critical keyword about AI in education is ‘speculation’. This is to do with hype, visions, imaginaries and expectations of AI futures. AI speculation related to education did not appear with ChatGPT, but has certainly been a significant feature of edtech marketing, education press coverage, consultancies’ client pitches and more over the last two years. The significance here is that such speculative claims are mobilized to catalyse actions in the present, as if the future is already known.

At the Centre for Sociodigital Futures, Susan Halford and Kirsten Cater have recently written about how speculative ‘futures in the making’ are often actively mobilized to produce conviction in others and incite them to act. But, they argue, the futures being claimed and made about AI and related technologies are often characterized by taken for granted technological inevitability and determinism that erases expertise in the social aspects of any technology, and by thin evidence and linear assumptions that simply take current technical R&D trends as straightforward signals of what is to come. 

This is also the case with many speculative claims about the future of AI in education. They erase the long history of research showing that technologies are rarely as transformative as some make out, and are based on conjecture rather than evidence.

Intensification. While speculation might be one issue, another is that actually-existing technologies are already interweaving with school settings and practices. Rather than speculation about teacherbots coming to save education systems in the future, we have things like data analytics and generative AI interfaces helping to intensify and amplify existing trends and problems in schools. We can detect this in current demands for teachers to dedicate their labour to integrate AI into pedagogy and curriculum content, with the implicit threat that they will be ‘left behind’ and fail to educate their students appropriately for the ‘AI future’ unless they ‘upskill’. This demand on teachers, leaders and administrators to undertake professional upskilling represents an intensification of teachers’ work, with consequences including even more teachers leaving the profession.

It also intensifies the role of external experts, consultants and various edu-influencers in setting goals for schools and determining teachers’ professional development. External influence in schools isn’t new of course, but AI has proven to be a big opportunity for consultants and tech experts to sell their expertise and guidance to schools. As Wayne Holmes has argued in a recent report for Education International, a failure to anticipate the unintended consequences of introducing AI into education by such external authorities can lead to a further intensification of workload demands as new challenges and problems have to be addressed in schools. 

As such, we should be examining how AI does not transform schooling in the beneficial ways often imagined, but interweaves with and intensifies trends and logics that are already well in train.

Contextlessness. As intensification already indicates, how AI is actually used and its effects will be highly context-sensitive. Sociological studies of tech have long insisted that technologies are not just technical but ‘sociotechnical’ – they are socially produced, and socially adopted, used, adapted, and sometimes refused in specific settings. 

But the majority of commentary about AI in education tends towards context-free assertions of AI benefits. This glosses over how technologies actually get taken up (or not) in social settings. It also ignores how AI can be politically appropriated for potentially regressive purposes – one example being US schools using AI to identify books to ban from libraries in the context of conservative mandates to ban books with any sexual content.         

Additionally, many AI advocates tend to pick evidence and data that suit their narratives and interests without considering whether they would apply in other contexts. The best example here is tech entrepreneurs like Sal Khan, Bill Gates and Sam Altman routinely citing Benjamin Bloom’s ‘2 sigma achievement effect’ study of one-to-one tutoring to support AI in schools. Despite this original research from 40 years ago never having been fully replicated, and applying only to human tutoring in very specific curricular areas, ‘2 sigma’ is repeatedly invoked to support the contextless ideal of personalized learning chatbots.
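As a rough gloss of the statistical shorthand (my own illustration of what the label implies, not Bloom’s formulation): a ‘two sigma’ gain means the average tutored student scores about two standard deviations above the mean of conventionally taught students, which places them above roughly 98% of that comparison group if scores are normally distributed.

```latex
% Gloss of the "2 sigma" claim, assuming normally distributed scores
% with comparison-group mean \mu and standard deviation \sigma:
\[
\bar{x}_{\text{tutored}} \approx \mu + 2\sigma,
\qquad
P(X_{\text{comparison}} < \mu + 2\sigma) = \Phi(2) \approx 0.977 .
\]
```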

Likewise, it’s common to see modest evidence from highly controlled studies exaggerated to support generalized claims of AI benefits for learning, part of a widespread evidence problem in relation to edtech products. And more broadly, AI research itself tends towards over-optimism, is often not reproducible, can’t be verified, and focuses on engineering problems rather than highly context-specific social factors and implications.

Standardization. Related to contextlessness is the significant risk that AI amplifies further standardization. Standardization, of course, seeks to make contexts irrelevant – the idea is the standard model can work everywhere. This again isn’t a new trend in education, but the issue is AI reinforcing it through the reproduction of highly standardized formats of teaching and learning. Such formats and templates might include partially scripted lessons, bite-sized tutorials, multiple-choice quizzes, or standardized assignments – all things that AI can do quite easily.

But, as Philippa Hardman has recently observed, there is also an increasing move with AI towards the ‘buttonification’ of pedagogic design and curriculum content creation. You can design a course or plan a lesson ‘at the push of a button’ with new AI functions that are built into education platforms. This is accelerating automated standardization. This AI-enabled standardization, argues Marc Watkins, risks ‘offloading instructional skills uncritically to AI’, leaving us with ‘watered-down, decontextualized “lessons”’ that are devoid of a teacher’s knowledge and give students a ‘disjointed collection of tasks’ to complete rather than a pedagogically ‘structured experience’.

Buttonified education may be a streamlined, efficient, time-saving and cost-saving approach, but such standardization risks degrading teachers’ autonomy in planning and students’ experience of a coherent curriculum.

Outsourcing. Indeed, this standardization works in concert with the next keyword – outsourcing. Not only does AI involve outsourcing to external technology vendors. As Carlo Perrotta argues in his new book, Plug and Play Education, AI implies the outsourcing of teachers’ professional pedagogic autonomy itself. 

It means, for example, delegating professional judgment to AI’s mechanisms for measuring, clustering and classifying students – for example if we allow it to perform assessment tasks or to measure a student’s progress and then generate ‘personalized’ recommendations about the next steps to take. As Perrotta argues, in ‘a best-case scenario’ these ‘automated classifications may prove to be erroneous or biased and require constant oversight’. This is outsourcing where the role of the teacher is reduced to a quality assurance assistant.

But in ‘a worst-case scenario’, Perrotta adds, ‘teachers may become unable to exercise judgment [at all], as multiple automated systems operate synchronously behind the scenes … leading to a fragmentation of responsibility’. In this sense, then, outsourcing should be understood not simply in terms of vendor contracts but in terms of the offloading of professional discretion, judgment, decision-making and, potentially, control over the processes by which students are assessed, ranked and rewarded. 

Bias. The example of outsourcing already indicates the next problem with AI – ‘bias’. AI biases may manifest in several ways. One is the use of historic data in analytics systems discriminating against students in the present, as Perrotta indicates – because the past data tells us that students clustered in this or that group tend towards underachievement, automated discriminations can be made about what content or tasks to personalize, or prescribe, or proscribe them from accessing. The real risk here is excluding some students from access to material due to latent biases in the systems.
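A minimal sketch of the mechanism at stake may help here. The data, features and ‘remedial’ rule below are invented purely for illustration and are not drawn from any real system: students are clustered on historic attainment data, and what a new student is offered depends on which historical cluster they fall into.

```python
# Invented illustration of how historic data can drive automated discriminations:
# cluster past students, then key content recommendations to cluster membership.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical historic features: [prior attainment score, attendance rate]
group_a = rng.normal([55, 0.80], [5, 0.03], size=(100, 2))  # historically lower-attaining
group_b = rng.normal([75, 0.95], [5, 0.03], size=(100, 2))  # historically higher-attaining
historic = np.vstack([group_a, group_b])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(historic)
low_cluster = int(np.argmin(model.cluster_centers_[:, 0]))

# A new student inherits their cluster's historical profile, and the platform
# "personalizes" what they can access on that basis.
new_student = np.array([[58, 0.82]])
cluster = int(model.predict(new_student)[0])
recommendation = "remedial worksheet pack" if cluster == low_cluster else "extension project"
print(f"cluster {cluster} -> {recommendation}")
```

The point is not that clustering is inherently harmful, but that the categories are built entirely from past patterns, so without constant oversight such a system can quietly prescribe or proscribe material along lines that reproduce previous inequities.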

An interesting study from the Stanford Human-AI Interaction lab recently also found that generative AI produces ‘representational harms’. The researchers tested how generative AI systems represent diverse student populations, and found substantial biases in how they do so. This is because of the ways such groups are represented, or underrepresented, in AI training data. They reported that such representational biases can lead to the erasure of underrepresented groups, reinforcement of harmful stereotypes, and the triggering of various psychosocial harms. The headline issue here is that a chatbot tutoring application built on top of an AI model with these training data might be biased against already marginalized groups and individuals.

Pollution. Besides the bias in the training data is also the possibility that data reproduced by generative AI systems are already polluted by automatically generated text. A couple of weeks ago it turned out, for example, that Wikipedia editors had been forced to identify and remove AI-generated material that could endanger the veracity of its content.

Last year Matthew Kirschenbaum memorably wrote that the ‘textpocalypse’ is coming – by which he meant that the internet itself could become overrun with ‘synthetic text devoid of human agency or intent’. It could contain outright hoaxes and misinformation, or just AI-generated summaries that misrepresent their original sources.

If this textpocalypse is now unfolding, then AI could exert degenerative effects on the information environment of schools too – as teachers come to rely on AI-generated teaching resources whose content has not been vetted or evaluated. Students’ processes of knowledge construction could be undermined by encountering synthetic text that’s polluted with plausible-sounding falsehoods.

As some of you might have noticed, all language models come with some ‘small text’ disclaimers that you should always independently verify the information provided. The implication is that the role of students now is not to synthesize material from well-selected authoritative sources, but merely to check the plausibility of the automated summaries produced by AI, and for teachers to spend their time ‘cleaning up’ any polluted text.

Experimentation. Perhaps the best way to characterize the last couple of years is as a global technoscientific experiment in schools. Schools have been treated as petri dishes with squirts of AI injected into them, then left to see what happens as it spreads and mutates. As a keyword, ‘experimentation’ captures a number of developments and issues.

One is the idea that we are witnessing a kind of experiment in educational governance, as government departments have contracted with AI firms to run hackathons and build prototypes, often as a kind of live experiment involving teachers and schools.  The sociologists Marion Fourcade and Jeff Gordon have called this kind of public-private arrangement ‘cyberdelegation’ of governance authority to tech firms. It’s experimental ‘digital statecraft’ that often results in the private sector profiting from public sector contracts. 

An example here is the tech firm Faculty AI, which has run a hackathon and produced an AI marking prototype for the Department for Education in England. It was awarded a further £3 million contract last month to build a ‘content store’ of official educational materials for AI model training and use by edtech companies. As such we now have an AI firm doing the work of government – cyberdelegated to perform digital statecraft on behalf of the Department for Education.

One aspect of this work by Faculty AI, it has suggested, is the need for a ‘codification of the curriculum’ to fit the demands of AI. What this means is that for the AI to work as intended, the materials it is trained on need to ‘incorporate AI-friendly structures … that AI tools can recognize and interpret’. So what we have here is a live experiment in AI-enabled schooling that requires the adaptation of official curriculum documents, learning outcomes and so on to be machine-readable. It’s making education AI-ready.
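To make the idea tangible (a hypothetical illustration; the actual specification of the content store is not public in this form), an ‘AI-friendly’ curriculum item might look less like a prose document and more like a tagged record that can be filtered, chunked and fed into model training or retrieval pipelines. All field names, codes and values below are invented.

```python
# Hypothetical example of a machine-readable lesson record; all field names,
# codes and values are invented for illustration only.
import json

lesson_record = {
    "key_stage": 2,
    "subject": "English",
    "curriculum_code": "EN-KS2-READ-3",  # invented identifier scheme
    "learning_objective": "Identify the main idea of a non-fiction text",
    "activities": [
        {"type": "starter", "duration_min": 10, "resource_id": "res-001"},
        {"type": "guided_reading", "duration_min": 25, "resource_id": "res-002"},
    ],
    "assessment": {"format": "multiple_choice", "mark_scheme_id": "ms-014"},
}

print(json.dumps(lesson_record, indent=2))
```

Codifying the curriculum in structures like this is what makes it legible to AI systems, and it is also what commits schools and teachers to the formats those systems can parse.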

This initiative is also part of efforts by the UK government to reduce teacher workload – by reducing their lesson planning and marking demands. But you could see this as a kind of experiment in what I’ve previously called ‘automated austerity schooling’. By this I mean that common problems in schools, like teacher shortages, overwork, and classroom crowding, all of which are results of more than a decade of austerity funding, are now being treated as problems that AI can solve.

It’s an experiment in techno-solutionism, through publicly-funded investments in private tech actors, rather than investment in the public schooling sector itself, perpetuating austerity through automation.

Infrastructuring. This kind of experimentation is also assisting the entry of Big Tech and Big AI companies into education. If we are embedding AI into education, then we are embedding it into the existing digital systems of schooling – the edtech platforms, the learning management systems, the apps, and all the practices that go with them.

These edtech systems and platforms in turn depend on the ‘stack’ of services provided by the ‘Big AI’ companies like Amazon Web Services, Microsoft and Google. This means the digital systems of schooling become nested in Big Tech and AI infrastructures, potentially enabling these companies to exert influence in everyday school routines and processes while schools lose autonomy and control over critical systems, operations and processes.

So the keyword of ‘infrastructuring’ here refers to an ongoing techno-economic structural transformation in the digital substratum of schooling. It will integrate AI ever-more tightly into pedagogic practices, learning processes and administrative and leadership practices with unknown consequences.

Habituation. Laying down the infrastructural conditions for AI to operate in schools also necessitates accustoming users to the systems so that they function smoothly. This is what in infrastructure studies is termed ‘habituation’ – getting systems to work by getting users to synchronize their practices with them. This is why we might view many efforts to make teachers, leaders and students ‘AI literate’ or ‘AI skilled’ as infrastructure habituation programs. If you’re a Big AI vendor like Google looking to ensure your new AI applications are widely used in schools, then you need to invest in training habitual users.

Radhika Gorur and Joyeeta Dey have described this as ‘making the user friendly’ to what the technology offers so that they use it as its proprietor hopes. It involves seeking ‘alliances’ with educators and institutions, ‘making friends’ and changing the habitual ways they work. As Gorur and Dey note, ‘systems and products carry scripts for the ways users are expected to engage with them’. But these expected uses also depend on teachers and students having the right AI skills and literacies to do so habitually, as companies like Google know well enough to be investing millions in AI training for schools.

Assetization. Why would companies like Google be spending so lavishly on this kind of habituation of users? It’s because AI is a tremendous value proposition. The language of financial ‘assetization’ is useful here. Simply put, a product or a platform can be understood as a financial asset when processes are in place to ensure it returns economic benefits into the future. 

Almost all big tech and edtech companies can be understood to be engaged in assetization processes. Big tech, venture capital investors and edtech companies are all seeking asset value from AI-driven platforms and products, if only they can unlock continuous income streams, as Janja Komljenovic and colleagues have shown in recent research on assetization in education. There are two main routes to financial returns from owning assets.

First, by collecting ‘monetary’ payments as license fees and subscriptions from schools for access to services – where the platform or product is the asset being monetized. Second, by collecting data about institutions’, staff and students’ interactions with the platform for future feature design, upgrades and products that can be re-sold to schools – where the data is an asset that can be monetized in the future.

Through these dual income processes, schools may be locked-in to long term subscriptions and licensing contracts. Such long-term lock-ins serve as a business model for AI in education as companies can generate income streams from increasing the scale of their user base and extracting value from the data.

Non-accountability. A significant risk of all this infrastructuring, habituation and assetization is that we end up with AI-driven schooling systems that lack accountability. As Dan McQuillan has argued, most commercial AI is opaque, black boxed, nontransparent, uninterpretable and unaccountable, and its decisions/outputs are hard or impossible to understand or challenge.

If this is the case, then embedding AI in schools means that neither teachers nor administrators might be able to understand, explain, or justify the conclusions the programs reach, or audit or document their validity. School leaders and teachers may be unable to exercise judgment, provide a rationale for what the AI has done, or take responsibility for classroom and institutional decisions if black box AI is integrated into administrative systems and processes. 

School as a Service?

So what does all of this mean? What kind of schooling systems lie ahead of us if the current trajectory of AI integration into education continues? A recent article by Matthew Kirschenbaum and Rita Raley on AI in the higher education sector may offer a warning here. They have suggested that ‘AI may ruin the university as we know it’ – and their argument may stand for schooling too.

With the newest wave of edtech, they argue, learning becomes ‘autosummary on demand, made possible by a vast undifferentiated pool of content that every successive use of the service helps to grow’. And they suggest that the university itself is now becoming a ‘service’.

‘The idea of the University as a Service extends the model of Software as a Service to education,’ they argue, where ‘Software as a Service refers to the practice of businesses licensing software and paying to renew the license rather than owning and maintaining the software for themselves. For the University as a Service, traditional academic institutions provide the lecturers, content, and degrees (for now). In return, the technological infrastructure, instructional delivery, and support services are all outsourced to third-party vendors and digital platforms’.

We could see the ‘School as a Service’ in similar terms. School as a Service refers to institutions providing the steady flow of users and data that AI demands. It requires well-habituated, friendly users. It extracts data from every interaction, and treats those aggregated data as assets with future value prospects. It also integrates schools into continuous forms of experimentation, which might include the successive introduction of polluted or biased information into educational materials and systems. The School as a Service is a system of outsourcing, of context-free standardization, and of an intensification of some of the most troubling aspects of contemporary schooling. The school could become a service for AI.

Some might say these conclusions are too speculative, and too critical, but I think it’s important to develop a critically speculative orientation to AI in education to counter the futures that are already being imagined and built by industry, entrepreneurs, investors, and solutionist policy authorities.

I hope these critical keywords have helped offer a vocabulary for contending with AI in education as a public problem that urgently requires our deliberation if we want to build other AI futures for the sector. Could we come up with other keywords, informed by other visions, and underpinned by different values, to orient our approach to AI in schools, and build other kinds of AI futures for education?


Critical edtech gets a conference

Photo by Arthur Lambillotte on Unsplash

Critical perspectives on educational technology (edtech) are more important than ever. Just in recent days, I’ve seen a lengthy post about the “buttonification” of AI in education, as new interfaces make it possible for educators to design and create lessons “at the push of a button”. And I’ve heard of wealthy parents getting litigious with a school that gave their son a bad grade for using AI in his assignment, with the parents concerned it would prevent him getting into a prestigious university.

Despite frequent insistence in the press and on social media, AI is clearly not a straightforwardly transformative force for education, because technology never is. Buttonification is the most reductive, semi-automated, efficiency-driven approach to incorporating AI into education it’s possible to imagine. Push-button pedagogy isn’t even a new imaginary – Audrey Watters did the historical homework on this idea of robotized schooling nearly a decade ago. As for parents slapping the law down on schools – here AI is just another wildly proliferating problem with serious, unexpected, real-world consequences for educational institutions.

Critical edtech studies

While up-to-the-minute critical commentary on these developments is extremely welcome, what the last couple of years have really demonstrated is the need for detailed, sustained and critical research on the complex interactions between education, technology and society. Over the last decade there has been a rapid growth in critical edtech research. But what has been lacking is dedicated space for sharing knowledge and building the relationships necessary to form what we might think of as an emerging field of critical edtech studies.

Critical edtech research sits at the intersection of education studies, critical data studies, digital sociology, platform studies, history of technology, and other disciplinary and interdisciplinary approaches. It has powerful potential to generate insights and intervene in the ongoing digitalization of schools, universities, and nonformal sites and practices of learning. And now it has a forthcoming conference for knowledge sharing and building a community of scholarship.

European Conference on Critical Edtech Studies

I’m really pleased to have been asked to team up with Mathias Decuypere, Sigrid Hartong and Jeremy Knox to co-organize a European Conference on Critical Edtech Studies (ECCES) in Zurich, 18-20 June 2025. ECCES is intended to help build a field of critical scholarship on edtech by bringing together researchers and students from Europe and internationally. While it certainly won’t slow the rapid flow of hype and controversy around contemporary technologies in education, our hope is it will help support the development of a collective identity for critical edtech scholarship, catalyze new research, and lay the foundations to reshape how edtech is understood and treated in our education systems.

If we want to contend with edtech, AI, or whatever comes next in our education systems, we need thoughtful, creative, theory-informed and critical researchers to take up the ongoing challenge of conducting painstaking studies – and then to challenge persistent waves of technological hype and expectation with actual research-informed insights.

The conference is aimed at established, early career, and doctoral researchers alike, and we’ve sought funding to keep fees as low as possible, particularly for PhD students. Here is the call text.

ECCES call for abstracts

The rapid evolution of educational technologies (edtech) has transformed, and continues to transform, the landscape of education, particularly through the ongoing growth of digital networks, data-based and, more recently, AI-driven technologies. As these technologies become ubiquitous, a critical examination of their implications for teaching, learning, and society has become increasingly imperative. Responding to this need, over the last decades, a growing number of studies dedicated to the critical analysis, evaluation, and (re)design of educational technologies has emerged. More specifically, by examining the pedagogical, social, technical, political, economic and cultural dimensions of edtech, Critical EdTech Studies have sought to uncover the underlying power dynamics, biases, and unintended consequences that often accompany the introduction of technological innovations into educational policy and practice.

Despite their growth in number, however, Critical EdTech Studies have remained dispersed and lack a dedicated space for debate, networking, knowledge building, and agenda-setting – practices vital to the establishment, identity, and maturing of the field. To address this need, we invite junior and senior scholars, as well as educational practitioners and edtech developers, to participate in the inaugural European Conference on Critical Edtech Studies (ECCES). Open to contributors from anywhere in the world, the first edition of ECCES aims to establish a foundational understanding of Critical Edtech Studies, but also to provide a forum for intense discussions around potential futures for the field. The conference invites participants to share in this agenda, through engagement in an informal and supportive community that can stimulate debate and further research in Critical EdTech Studies.

The ECCES conference is particularly dedicated to critical scholarship around the following areas:

  • Technological Artifacts: Educational platforms, apps, AI, VR, data visualizations, and other digital tools.
  • Policy and Governance: The role of governments, institutions, actor networks, and particular discourses in shaping edtech development and adoption.
  • Political Economy: Business practices, capitalization, assets, value creation, corporations, EdTech industry, startups, edu-businesses.
  • Social Justice and Diversity: The impact of edtech on marginalized communities, the (re-)production of inequalities, and how edtech is (not) addressing heterogeneous or postcolonial audiences.
  • Learning, Pedagogy and Assessment: Types and visions of learning, teaching, pedagogy and assessment enhanced or inhibited by interfaces, data analytics, and algorithmic modelling.
  • Ethical Considerations: Privacy, surveillance, and the ethical implications of data-driven education.
  • Methodological Approaches: The various ways in which Critical Edtech Studies can investigate and contribute to (re-)shaping edtech, including evolutions towards more participatory and co-design approaches.
  • Sustainability and Planetary Futures: The environmental impact of edtech, how it matters, and how it can be mitigated.
  • Histories of EdTech: Patterns and repetitions, hype cycles, persistent discourses, antecedents and early traces, hidden histories.
  • Future Visions: Speculative futures, utopian and dystopian scenarios, alternative pathways for edtech development and education policy, literacy frameworks for professionalization.

We hope the event will help showcase and stimulate critical edtech research, and are especially keen to attract early career and doctoral students to share their work and help build the field of critical edtech studies. Check out the call for full abstract submission details.


Automated austerity schooling

The UK Labour government has launched a project to transform education with AI. Photo by Feliphe Schiarolli on Unsplash

The Department for Education for England has announced a £4 million plan to create a “content store” for education companies to train generative AI models. The project is intended to help reduce teacher workload by enabling edtech companies to build applications for marking, planning, preparing materials, and routine admin. It is highly illustrative of how AI is now being channelled into schools through government-industry partnerships as a solution to education problems, and strongly indicates how AI will be promoted in English education under the new Labour government.

Most of the investment, allocated by the Department for Science, Innovation and Technology as part of its wider remit to deploy AI in the public sector, will be used to create the content store.

The £3m project will, according to the education department’s press release, “pool government documents including curriculum guidance, lesson plans and anonymised pupil assessments which will then be used by AI companies to train their tools so they generate accurate, high-quality content, like tailored, creative lesson plans and workbooks, that can be reliably used in schools.”
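To make the idea of ‘pooling’ documents a little more concrete, here is a minimal sketch of what assembling heterogeneous materials into uniform, reusable records might involve. It is a guess at the general shape of such a store rather than a description of the actual DfE/Faculty AI system; the record structure, field names and file name are all hypothetical.

```python
from dataclasses import dataclass, asdict
import json

# Invented record structure: the real content store's format has not been made public.
@dataclass
class StoreRecord:
    doc_type: str   # e.g. "curriculum_guidance", "lesson_plan", "anonymised_assessment"
    subject: str
    key_stage: int
    text: str       # the document body an AI developer might train or ground a model on

records = [
    StoreRecord("curriculum_guidance", "english", 2, "Pupils should be taught to ..."),
    StoreRecord("lesson_plan", "maths", 3, "Starter activity: recap equivalent fractions ..."),
]

# Write the pooled records out as JSON lines, a format commonly used for model-training corpora.
with open("content_store.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(asdict(record)) + "\n")
```

Whatever the eventual technical form, the significant move is that official documents and anonymised pupil work become a standing corpus for commercial model development.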

A further £1m “catalyst fund” will be awarded to education companies to use the store to build “an AI tool to help teachers specifically with feedback and marking.”

As Schools Week pointed out, none of the money is going to schools. Instead, the early education minister Stephen Morgan claims it will “allow us to safely harness the power of tech to make it work for our hard-working teachers, easing the pressures and workload burdens we know are facing the profession and freeing up time, allowing them to focus on face-to-face teaching.”  [Update: on 3 October 2024 it was announced that FacultyAI – long a favoured government contractor on AI – was awarded the £3 million contract for the content store, having been responsible for the “use cases” and technical research on behalf of the DfE that led to it.]

Whether £4m is enough to achieve those aims is debatable, although it appears to signify how the government would rather allocate a few million to tech development than to other ways of addressing teacher workload.

Edtech solutionism

Putting aside for a moment the question of the reliability of language models for marking, feedback, planning and preparation — or the appropriateness of offloading core pedagogic tasks from professional judgment to language-processing technologies and edtech firms, or all the many other problems and hazards — the project exemplifies what the technology and politics critic Evgeny Morozov has termed “technological solutionism.”   

Technological solutionism is the idea that technology can solve society’s most complex problems with maximum efficiency. This idea, Morozov argues, privileges tech companies to turn public problems into private ones, to produce “micro-solutions to macro-problems,” from which they often stand to gain financially.

One consequence of this is that many decisionmakers in positions of political authority may reach for technological solutions out of expedience — and the desire to be seen to be doing something — rather than addressing the causes of the problem they are trying to solve.

The DfE/DSIT project can be seen as edtech solutionism in this sense. Rather than addressing the long-running political problem of teacher workload — and its many causes: sector underfunding, political undermining of the teaching profession… — the government is proposing teachers use AI to achieve maximum efficiency in the pedagogic tasks of planning and preparation, marking and feedback. A similar approach was previously prototyped when the DfE, under the prior government, funded Oak National Academy to produce an AI lesson planner.

The trend represented by these projects is towards automated austerity schooling.

Schools in the UK have experienced the effects of austerity politics and economics for almost 15 years. The consequences have been severe. According to UK Parliament records, the overall number of teachers in state schools has failed to keep pace with student numbers, resulting in an increase in the student to teacher ratio and some of the highest working hours for teachers in the world, compounded by continued failure to meet teacher recruitment targets.

The government is investing in a low-cost technological solution to that problem, but in a way that will also reproduce it. School austerity will not be solved by automated grading; rather, automated grading will sustain austerity by obviating the need for political investment in the state schooling system.

Algorithmic Thatcherism

The UK’s finances are pretty parlous, and the government is warning the country of further economic pain to come, so it may seem naïve to imply we should simply allocate public funds to schools instead of investing them in AI. But failure to address the underlying problems of the state schooling sector is likely to lead to layering more technological solutions onto pedagogic and administrative processes and practices with few regulatory safeguards, and to continued automation of austerity schooling with unknown long-term effects.

The critic Dan McQuillan argued last year that efforts under the Conservative government to deploy AI across public services represented a form of “algorithmic Thatcherism.” “Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations,” McQuillan argued. “AI is Thatcherism in computational form.”

Algorithmic Thatcherism prioritizes technical fixes for social structures and institutions that have themselves been eroded by privatization and austerity, privileging the transfer of control over public services to private tech firms. “The goal isn’t to ‘support’ teachers and healthcare workers,” concluded McQuillan, “but to plug the gaps with AI instead of with the desperately needed staff and resources.”

Automated Blairism

Despite a change of government, automated austerity schooling looks like being supercharged. The new Labour government is strongly influenced by the Tony Blair Institute, the former Prime Minister’s organization that has been termed a “McKinsey’s for world leaders.”

The TBI has heavily promoted the idea of AI in UK government, including education. Blair’s New Labour administration in the late 1990s and early 2000s was characterized by investment in public services, often through public-private partnerships and increasing private sector influence; a push for data-driven accountability measures in education; increased choice and competition; and a significant boost for technology in schools.

The TBI recently announced a vision for a “reimagined state” that would use “the opportunities technology presents – and AI in particular – to transform society, giving government the tools it needs to do more with less, make better public services a reality and free up capital for other priorities. … More, better, cheaper, faster.”

This is a vision the Labour party is now acting on hastily. According to a report in Politico, the “plans have tech firms — some of whom have partnerships with Blair’s institute — swarming, lured by the tantalizing prospect of millions of pounds of public contracts.”

Doing “more, better, cheaper, faster” with AI in government services represents the triumph of automated Blairism. It combines political technology solutionism with privatized influence over the public sector, all under the guidance of policy and technology consultants bankrolled by tech billionaires.

The TBI’s latest manifesto for AI governance in the UK was co-produced with Faculty AI, the private tech firm that previously worked with the Conservative government on, among other things, plans for AI in education. Ideals of putting AI into our governance institutions and processes are not intrusions from the commercial realm; they are already embedded in contemporary forms of computational political thinking in the UK.   

Real-time environments

Under the Blairist imaginary of technological transformation of the UK state, the TBI’s visionary prospectus for “tech-enabled” UK education invokes the promise of “personalized learning” – formerly a favoured education slogan under Blair’s New Labour administration in the early 2000s – and AI “to revolutionise the experience of pupils and teachers.”

Among five key priorities for technology in education, the TBI set the stage for the current DfE project on AI and teaching with its claim that “Technology and AI can provide new ways of organising the classroom and working days, and supporting marking, lesson planning and coordination.”

But the TBI vision goes beyond the immediate aims and constraints of the £4m DfE fund. It imagines “expanding the use of edtech” and giving all school children a “digital learner ID.” The digital ID would contain all their educational records and data, enabling real-time automated “analysis of students’ strengths and weaknesses,” which it argues would “simplify coordination between teachers and allow them to use AI tools to plan lessons that are engaging and challenging for all pupils.”

“Most importantly,” the TBI insists, “it would allow teachers to shift repetitive marking tasks to adaptive-learning systems. A move to a real-time data environment would mean AI could mark a class’s work in less than a second and provide personalised feedback. This wouldn’t replace teachers but instead free them up to do what they do best: teach.”

In keeping with the Blairist approach of old, the TBI envisages a greater role for the private sector in building the digital ID system; sharing student data with the edtech industry for innovation and development; and a wholly data-driven approach to school accountability by using the digital ID data for school performance measurement and enhancing parent choice.

The current £4m DfE project to use AI to help teachers, then, looks like just the first step in a possible longer program of technosolutionist policy — focused on turning schools into real-time data analysis and adaptive environments — that will sustain and reinforce automated austerity schooling as the new normal.

Whether the other TBI proposals on AI and “real-time” data analysis and adaptive technologies in education come to fruition is a matter for speculation just now (they’re not new ideas, and haven’t come to much yet despite many industry and investor efforts). But with the international clout of Blair, and the influence of the TBI in the fresh Labour government, the vision will certainly have considerable visibility and circulation within the government departments responsible for public services.

The edtech industry will certainly be queuing up for potential future contracts to participate in the proposed transformation of English state schooling. [Update: in the 9th September 2024 call for the competition, it was stated that: “You must demonstrate a credible and practical route to market, so your application must include a plan to commercialise your results” and “Only the successful applicants from this competition will be invited to potential future funding rounds.”]

AI fails

Going all-in on AI in education is a risky move. The recent case of the Los Angeles school district is cautionary. After $6m of public investment in an edtech company to build a chatbot for the district’s entire schooling system, the contractor folded only months later, potentially imperilling masses of student data that it had failed to adequately protect.

Observers suggested the technological vision was far too ambitious to be achievable, and that the company’s senior management had little idea of the realities of the schooling system. The CEO is said to have claimed that they were solving “all the problems” facing large schooling systems just before the company collapsed.

Public opinion may not support AI in schools either. When the new education minister Stephen Morgan announced the DfE project, he did so from South Korea, at a major event on AI in education, favourably comparing the two countries’ technological prowess and approach to educational innovation. But only a few days before, the Financial Times reported a significant public backlash against the South Korean government’s own plans for AI in education. It was only a few years ago that school students in London chanted “fuck the algorithm” to protest their grades being adjusted by an automated system. Failure to consider public opinion can be disastrous for big tech projects in the public sector.

More broadly, investment organizations and businesses are beginning to sense that the “AI bubble” of the last two years may be about to burst, with companies reporting a lack of productivity benefits and investors spooked by weak financial returns. The prospects for AI-based edtech are unclear, but could equally be affected by investor flight and waning customer interest. Building public services on AI infrastructure despite industry volatility and public concern may not be the wisest allocation of public funds.

It is understandable that teachers may be using AI in the preparation of materials, and to automate away administrative tasks under current conditions. But the risks of automated austerity schooling — eroding pedagogic autonomy, garbling information, privacy and data protection threats, enhancing classroom surveillance, and far more — remain significant and underaddressed. Letting AI in unobstructed now will likely lead to layering further automation onto pedagogic and administrative practices, and locking schools into technological processes that will be hard and costly to undo.

Rather than seeing AI as a public problem that requires deliberation and democratic oversight, it is now being pushed as a magical public-private partnership solution, while both old problems with school structures and the many new problems AI raises in public service provision remain neglected. The DfE’s AI content store project is a first concrete sign of the solutionism that looks set to characterize automated austerity schooling in England under the new government.


Genetic IQ tests are bad science and big business

A consumer genetics company plans to launch a genetic IQ testing service, raising scientific and ethical concerns. Photo by National Cancer Institute on Unsplash

A personal genomics startup company has announced plans to launch a genetic intelligence testing service. Backed by technology investors, Nucleus Genomics released a disease screening product in March 2024, followed by a beta version of the “Nucleus IQ” test in late June – a product it eventually aims to roll out for all customers. News of the test is resurfacing controversies over the accuracy and ethics of using genetic data to identify and rate “innate” human abilities.

The company makes a big pitch about its “whole genome” sequencing and screening services. The Nucleus Genomics founder and CEO Kian Sadeghi announced on Twitter it was “launching a closed beta for Nucleus IQ — the first intelligence score based on your DNA”, with founding partner and chief operating officer Caio Hachem adding that its “analysis offers an unprecedented insight into the genetic factors that contribute to our cognitive abilities”.

The startup’s claims to innovation and novelty are backed by investment, scientific and industry partnerships. Nucleus has received almost $18 million in funding from tech investors including Peter Thiel’s Founders Fund and Reddit co-founder Alexis Ohanian’s venture capital firm 776. Ohanian urged his Twitter followers to join the waitlist for the Nucleus IQ test.

Scientifically and technically, the Nucleus Genomics service is built on the foundations of impute.me, a free open source website allowing users to upload their consumer genetics data for polygenic risk calculation, which Nucleus acquired in 2022. A partnership with Illumina, a global biotech firm, gives Nucleus access to advanced genomic technologies, while the analysis is undertaken by a genomics laboratory in Massachusetts and an informatics lab in North Carolina.

Like other investor-driven consumer genetics companies, such as 23andme, Nucleus is capitalizing on the promises of “precision medicine” and “personalized healthcare”, as well as the commercialization of previously not-for-profit scientific enterprises. In precision medicine approaches, the individual becomes treated as a “data transmitter” whose personal bioinformation is a valuable commodity. Nucleus offers customers the opportunity to share their data for future third party research and also advertises the benefits of “upgrading” from a basic to a premium plan for “more accurate assessment of your genetic risk”. Like 23andme, Nucleus has applied the platform business model of commercial datafication to personal health.

Though details about the genetic IQ test itself haven’t been made public (Hachem’s tweet suggested the “tech is still in its early stages” and they would be “rolling this out slowly”), available information about its other tests show that Nucleus Genomics produces polygenic scores for various traits and conditions. Describing polygenic scores as “common genetic scores”, it suggests that its “state-of-the-art algorithms unlock previously unavailable insights into your diseases and traits” to provide “personalized reports tailored to you”.

Polygenic scores underpin claims about a DNA revolution in intelligence research, and the prospects of genetic intelligence testing (including genetic IQ testing of children). Nucleus’s genetic IQ test therefore translates the promise of algorithmic precision into seemingly precise cognitive screening. The test will treat users as biodata transmitters for algorithmic analysis, with their intelligence ratings serving as a source of value for the company.

Bad science?

The Nucleus IQ test is framed in the language and imagery of high-tech algorithmic accuracy and innovation, but it rests on a controversial history of intelligence testing – particularly the proliferation of the IQ test – that has its roots in twentieth-century eugenics. It’s for this reason other consumer genetics companies, like 23andme, have steered clear of producing intelligence ratings — although it has been possible for users to upload their data to other providers to do so instead.

In 2018 the behaviour geneticist Robert Plomin suggested that genetic IQ tests for children were likely to be developed in the future, with parents using direct-to-consumer tests to predict their children’s mental abilities and make educational choices. Plomin termed this “precision education”, but critics saw it as a sign of the arrival of “personal eugenics” and a forthcoming “genotocracy” where wealthy families could afford IQ-test tech services to maximize their children’s chances, while poorer families could not. More prosaically, there remain significant questions over the underpinning theories, measurement instruments and construct validity of IQ tests, and particularly claims that genetic data can be used to discover the “biological reality” of IQ.

Given existing controversies over genetic intelligence testing, the announcement of Nucleus IQ surfaces once more these longstanding concerns about both the scientific validity of such tests and the ethical implications of using genetic data to calculate complex human capacities.

On the scientific side, critics point out that polygenic scores for things like intelligence are highly confounded by social and environmental factors, making genetic prediction of IQ little better than “junk science” or “snake oil”. This is because polygenic scores can only account for around 5% of the variance in intelligence, often measured in proxies like educational attainment, which suggests that marketing and publicity claims of calculating “genetic IQ” are wildly overinflated.    

Arguments about the value of calculating genetic IQ are characterized by hard hereditarianism, where assertions are made of the innate biological processes that shape qualities like intelligence. However, polygenic scores do not simply capture causal genetic effects; they also capture a wide range of complex environmental effects that may be mistakenly interpreted as biological in origin. This is because complex social traits like intelligence or educational attainment are not purely biological states, but, as Callie Burt argues, social constructions “based on social distinctions inevitably layered on top of other social forces that exist irreducibly in a social matrix”.

On the ethical side, too, genetic IQ tests – like other social and behaviour genetics findings – raise the real dangers of biological fatalism, stigmatization, discrimination, distraction from other ways of understanding or addressing a phenomenon, and the reification of race as a biological category. Findings from educational genomics studies have already been appropriated and misused to support racist arguments about the heritability of intelligence, and there are serious ethical debates about the possibility of using polygenic IQ tests for embryo screening. Even those scientists supportive of using polygenic scores for research into complex traits and outcomes regard the idea of “DNA tests for IQ” as overstated and misleading. The general consensus seems to be that genetic IQ tests are bad science.

But those scientific and ethical shortcomings are not stopping companies like Nucleus Genomics from claiming to provide a world-first commercial DNA test for intelligence – and they are politically bullish about doing so.

Democratizing genetics?

In response to criticisms of the Nucleus IQ test on Twitter, CEO Kian Sadeghi wrote a 350-word tweet defending it against accusations that it was a eugenic technology:

Yesterday, @nucleusgenomics announced a closed beta for the first genetic IQ score, Nucleus IQ. Lots of people were curious. Some people said genetic analyses for intelligence will devolve into new eugenics.

We disagree. Eugenics is antithetical to my vision for @nucleusgenomics

Instead of eugenics, he argued, Nucleus Genomics was “democratizing” access to genetic data.

To some extent, describing genetic IQ tests as eugenic may be over-dramatic, compared to the appalling historical record of eugenic extermination and reproductive control in the twentieth century. Consumer IQ tests are clearly not in the same terrain. Nonetheless, there certainly is family resemblance with broadly eugenic forms of hereditarianism, genetic determinism and reductionism, evaluations and ratings of desirability, and actions intended to improve individual capacities.

And if genetic IQ testing for embryo screening or precision-targeted educational interventions followed from innovations like Nucleus IQ, then it would be even harder not to view such technologies as at least bordering on the territory of eugenics — a kind of “flexible eugenics” that mobilizes genetic technologies for individualized interventions and improvements. Nucleus clearly sees big business opportunities in the biotechnological improvement of human health and cognition.

But Sadeghi’s response to criticism also indicated the company taking a particular political position in relation to ongoing ethical concerns about the mis-use of genetic data. Rather than restricting genetic science on the grounds of ethical concern, Sadeghi argued that:

This is about information access and liberty. … Suppressing controversial genetic insights that are prone to abuse and misinterpretation doesn’t prevent that information from being abused and misinterpreted. … We believe history and ideology should not outweigh your right to benefit from technological progress.

In an earlier blog post, he also suggested that “ideological battles have led the public health and medical elite to restrict access to genomic insights and their utility”.

Genetic IQ testing, then, has become linked by Nucleus Genomics to current contests over scientific freedom, in contrast to supposed elite ideological control, which have become heated in some areas of social and behavioural genetics. Here the argument is that science is being censored by scientific elites due to an overemphasis on ethical practice and control over “forbidden topics” and “stigmatizing research”, with scientists having their access to genetic data restricted at the expense of innovation and knowledge.

Nucleus Genomics has therefore positioned itself as a defender of scientific freedom, and a source of democratized genetic knowledge, as a way of deflecting from existing and well-founded concerns over the dangers of hereditarian genetic IQ testing. This political defensiveness around scientific freedom to conduct controversial research is mobilized to make genetic IQ testing technologies seem desirable, acceptable, and non-ideological. Additionally, big tech investors see potential value in them, and Nucleus clearly anticipates a market opportunity for consumer genetic IQ testing. Left unsaid is the actual value of genetic IQ tests for users and customers, or the potential longer-term implications of such (contested) technologies being introduced into other sectors and industries.

This political positioning, backed by investor dollars, raises the danger that ethically risky genetic technologies may become normalized and used to quantify and evaluate human capabilities, despite their documented shortcomings. The example of Nucleus Genomics may also anticipate the expanding use of genetic technologies in sectors like education, as using biological signals to predict outcomes is argued to be scientifically viable, accurate, and objective. Some researchers have already argued that data from direct-to-consumer genetics companies could be used in the future to construct polygenic scores and inform educational policy and teaching.

All of this indicates how the highly contested science of genetic IQ testing is now being brought into the mainstream thanks to tech startups, biotech firms and investors seeking valuable market opportunities, twinned with researchers engaging in ethically-risky experiments under the banner of democratizing access to genetics, in a context where frameworks of scientific and regulatory control are increasingly viewed as ideological impositions on scientific freedom.


Polygenic scores as political technologies in educational genomics

Genomic technologies are being used to study the genetic basis of educational outcomes, and generate proposals for genetically-informed education policy. Photo by National Cancer Institute on Unsplash

Polygenic scores are summary statistics invented in biomedical genetics research to estimate a person’s risk of developing a disease or medical condition, and are often envisaged as the basis for “personalized” or “stratified medicine”. In recent years, social and behavioural genetics researchers have begun suggesting polygenic scores could be used in education too, raising significant concerns along scientific, ethical and political lines.

The publication in June 2024 of a research article titled “Exploring the genetic prediction of academic underachievement and overachievement” shows that polygenic scoring remains a popular methodology in studies of genetics and education. Its authors argue that school achievement can be “genomically predicted” using “genome-wide polygenic scores”. The paper is part of a long-running series of studies by a team mostly associated with the Twins Early Development Study (TEDS, established in 1994 as a longitudinal study of around 15,000 pairs of twins in the UK). Over the past decade, the team has increasingly used polygenic scores (as an earlier paper is titled) for “Predicting educational achievement from DNA”.

In this post I approach polygenic scores for predicting educational achievement as technologies with political properties. Part of our ongoing BioEduDataSci research project funded by the Leverhulme Trust, it follows up from a previous post outlining how “educational genomics” research may be paving the way for the use of genetic samples in educational policy and practice, and another highlighting the severe ethical problems and scientific controversies associated with educational genomics.1 Here I use the new predictive genetic achievement paper to foreground some of the political implications of educational genomics.

Biomarker methodologies

Understanding the political aspects of polygenic scores2 requires some engagement with their construction as methodological technologies. Polygenic scores are artefacts of a complex technoscientific infrastructure of statistical genetics, molecular genomic databases, bioinformatics instruments, analytics algorithms, and the institutions that orchestrate them, which together function to make valued social outcomes—such as educational outcomes—appear legible at a molecular level of analysis.

To construct polygenic scores, researchers require genotyped DNA data, which they then analyze through genome-wide association study methods and technologies. These identify minute genetic differences—genetic biomarkers known as single nucleotide polymorphisms, or SNPs—that are associated with a phenotype (an observable behaviour, trait, or social outcome). One aim of such studies is to identify the “genetic architecture” of a trait or outcome – such as the genetic architecture of educational attainment.

The SNPs associated with the phenotype can then be added up, each weighted by its estimated effect size, into a genetic composite known as a polygenic score. In education, the most common polygenic scores are for educational attainment (years of schooling), said to predict around 11% of the variance. Individuals can ultimately be ranked on a scale of the genetic probability of success at school.
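Expressed schematically (and glossing over the many construction choices discussed below), a polygenic score for person i is a weighted sum across the M measured SNPs:

```latex
\mathrm{PGS}_i \;=\; \sum_{j=1}^{M} \hat{\beta}_j \, x_{ij}
```

where x_ij is the number of effect alleles (0, 1 or 2) that person i carries at SNP j, and β̂_j is the effect size estimated for that SNP in the genome-wide association study, typically adjusted in various ways before use.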

The use of polygenic scores and associated methods and measures represents the data-centric “biomarkerization” of education, where biological signals are taken as objective evidence of the embodied substrates of academic attainment and achievement. This has only become possible with the development of an infrastructure of biobanks of genetic information and bioinformatics technologies, which can be used to generate and analyze genetic data for markers associated with educational outcomes.

In the latest genetic prediction of academic achievement paper, for example, the authors claim a “DNA revolution has made it possible to predict individual differences in educational achievement from DNA rather than from measures of ability or previous achievement”.3 Their basic claim is that technologies to calculate polygenic scores can operate as “early warning systems” to predict school achievement from infancy. The latest study design used TEDS data collected from children at age 7 to construct polygenic scores, based on a previous study of the educational attainment of a 3 million sample (which I’ve discussed before).

The paper introduces “the concept of genomically predicted achievement delta (GPAΔ), which reflects the difference between children’s observed academic achievement and their expected achievement”, where the former are standardized test achievements and the latter are polygenic predictions. So, the methodological invention of the paper—the measure of genomically predicted achievement—is ultimately a way of comparing a child’s observed academic achievement, as assessed by school test results, with a polygenic score predicting “genomically expected achievement” from DNA samples collected in childhood.
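On the paper’s description, and stated schematically (the notation here is mine, not the authors’), the new measure is simply a difference between two standardized quantities:

```latex
\mathrm{GPA}\Delta_i \;=\; z\!\left(A_i^{\text{observed}}\right) \;-\; z\!\left(\hat{A}_i^{\text{genomic}}\right)
```

where the first term is the child’s standardized test achievement and the second is the achievement predicted for them from their polygenic score; on this definition, a negative value labels a child a genomic ‘underachiever’ and a positive value an ‘overachiever’.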

Biosocial sorting

These conceptual inventions and large-scale statistics certainly lend the study the quality of digital objectivity. But the critical point here is that the polygenic scores used in the study, and the genomically predicted achievement measures, are the results of social, technical and scientific practices, each of which can affect the results. As Callie Burt has noted in a detailed critical examination of how polygenic scores are made (and their limitations), there are multiple ways to create polygenic scores, each involving different assumptions and goals, measurement instruments, technical adjustments, calculation methods, and analysis specifications, which can introduce further technical biases.

A detailed analysis of the shortcomings of the methodology and findings of the genomic achievement study was posted on Twitter by statistical geneticist Sasha Gusev, questioning its causal claims and predictive accuracy. He also showed how methodological choices and limitations in the research (particularly insufficient acknowledgement of social factors) meant that the “underachievers” it identified were actually individuals with high socioeconomic status and high early years achievement, who subsequently underperform at school.4

The study risks labelling and lowering expectations of “underachievers” as having lower education-related “genetic propensity” (as the TEDS team terms it) for achievement, while also privileging well-off kids by directing additional resources their way. And as Gusev points out, any allocation of resources from the study findings would therefore be targeted at “students from high-SES/high-edu backgrounds, while telling ‘overachievers’ (poor kids with good grades) that they’re swimming upstream”, seemingly against the genetic currents determining their achievement prospects.

The implication, then, is that polygenic methods could be used to classify children into groups defined and labelled in terms of genomically predicted levels of achievement. This would amount to a strategy of biosocial classification of children. By biosocial classification is meant the categorization of social groupings as defined by biological measures. In this case, it means sorting children into polygenic biosocial categories through the analysis of SNP biomarkers corresponding with school achievement, in ways that appear to reproduce and reinforce socioeconomic categories and biases.

What this indicates, then, is that despite the seeming objectivity imputed to genomic technologies, polygenic scores and associated measures remain methodologically problematic and potentially skewed in their results. Such studies can harden social biases and inequalities even as major claims are made that they could inform decisions about the just allocation of resources in schools.

Promissory politics

Beyond its biosocial sorting, this kind of polygenic scoring project can also exert other kinds of political effects. The political allure of genetic objectivity and biological authority in polygenic scoring studies appears to be growing, supported by promissory claims of the future potential of genomic technologies to further reveal genetic insights at even larger scale.

As already noted, one political implication of educational genomics research is that the results—predictions of educational outcomes from DNA—could be used as the basis for political interventions targeting children genomically predicted as at risk of underachievement. As discussed elsewhere, some authors of the study were involved in a report for the Early Intervention Foundation (a UK government “what works” centre), which made the case for genetic “screen and intervene” programs in the early years.

The collection of TEDS data from 7-year-olds in the 1990s has given these researchers a tremendous bioinformational advantage in making claims to policy relevance. A main claim of the latest genetic achievement paper is that “screening for GPAΔ could eventually be a valuable early warning system, helping educators identify underachievers early in development”. From such genetic early warning signals, it seems, should flow early interventions “targeting students underachieving genomically”.

The seeming relevance of this work to policy and practice needs to be understood as deriving from political interest in the potential and promise of data-driven science, supported by the development of genomics technologies by major biotech firms. The methods section of the genomically predicted achievement paper, for example, details how “DNA for 12,500 individuals in the TEDS sample was extracted from saliva and buccal cheek swab samples and hybridized to one of two SNP microarrays (Affymetrix GeneChip 6.0 or Illumina HumanOmniExpressExome chips)”. It also involved use of the application LDPred2 to “compute GPS for all genotyped participants”, and “training” a “model to maximize prediction”. 
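To show only the final aggregation step that this describes (the computation of a score per genotyped participant), here is a toy sketch with invented numbers. It is not LDPred2, which first re-weights the GWAS effect sizes to account for linkage disequilibrium; it simply illustrates that a polygenic score is, in the end, a single standardized number per person derived from thousands of small statistical weights.

```python
import numpy as np

# Toy illustration only: genotypes and weights are randomly invented, and the real
# pipeline (LDPred2) adjusts the GWAS effect sizes for linkage disequilibrium first.
rng = np.random.default_rng(0)

n_people, n_snps = 5, 100
genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 copies of each effect allele
weights = rng.normal(0.0, 0.01, size=n_snps)             # stand-in for adjusted GWAS effect sizes

raw_scores = genotypes @ weights                          # one weighted sum per person
z_scores = (raw_scores - raw_scores.mean()) / raw_scores.std()  # scores are usually standardized

print(z_scores)  # individuals can then be ranked on this scale, as discussed above
```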

This existing apparatus of technologies, however, is presented as just the first step necessary to fully compute genomically expected achievement across the whole population of children, which will only become possible with increased DNA data.

GPAΔ seems impractical now because it requires DNA, genotyping, and the creation of GPS. However, the rise in direct-to-consumer DNA testing suggests a future where GPAΔ becomes more accessible. At least 27 million people have paid direct-to-consumer DNA testing companies for this service, and these companies are increasingly marketing their product to encourage parents to test their children. … Once genotyping is available by whatever means, it will be possible to create GPS for educationally relevant traits, a process that is becoming routinized.

Educational genomics articles like this one routinely invoke promissory claims of future potential, once the existing infrastructure of mass biodata storage, genotyping platforms and polygenic scoring software has been sufficiently upgraded. As this excerpt indicates, the biological authority of educational genomics depends to a significant degree on biotech firms and consumer genetics companies.

It is this promissory quality associated with technological advances that enables researchers involved in educational genomics studies to claim moral and political authority to not only understand but to improve social institutions like schooling—and likewise to criticize forms of social science and policy that do not incorporate genetic measures as ideologically irresponsible.

In other words, genomic technologies are invoked to support the political project of advancing the power and authority of the genetic sciences in social policy areas like education. A recent report by the UK Government Office for Science, for example, asked “What could genomics mean for wider government?” It highlighted how existing infrastructures of medical genomics could be capitalized on for other social policy areas, and proposed education as one key area of potential application.

Educational genomics studies, enabled by new genetic technologies, therefore support visions of future policy possibilities. The idea is that genetic testing and screening could become policy technologies, if only the necessary infrastructure upgrades are put in place.

Genoeconomic policy

The idea of genetic testing as a policy approach is obviously controversial, given the history of eugenic interventions in education. It does, however, appear to link neatly with current mainstream policy approaches. Critics have pointed out that educational genomics proposals often reinforce “technocratic” or “neoliberal” policy models that treat education as a kind of laboratory for boosting economic outcomes and social mobility, and which promise to reduce costs and save money for government agencies and taxpayers. Such promises may reduce the seeming controversy associated with the science by appealing to political expedience.

Along these lines, in the genomic achievement paper, the authors claim that “Targeting GPAΔ might also prove cost-effective because such interventions seem more likely to succeed by going with the genetic flow rather than swimming upstream, helping GPAΔ underachievers to reach their genetic potential”. Later in the paper, they add that the “findings suggest that GPAΔ can help identify underachievers in the early school years, with the rationale of maximizing their achievement by personalizing their education”.

So the policy relevance of the paper appears again to be “cost-effective” interventions in early school years, driven by the aim to increase individual achievement through “personalized” learning. Such proposals certainly look like biomedicalized neoliberal policy, where measurable individual achievement might be bumped up through the efficient genomically-targeted allocation of resources. The cost-saving argument for using genetic data for decision-making in education has also been made in the popular science book The Genetic Lottery.

As the opening sentence of the paper reads, “Underachievement in school is costly to society and to the children who fail to maximize their potential”—with a citation to a paper about the “economics of investing in disadvantaged children” by economist James Heckman. Heckman is well known for his work calculating the economic payoffs of investment in early years child development – the “economization of early life” as Zach Griffen describes it – which is central to the model of “human capital development” he promotes to policymakers.

Other papers by the same TEDS team and their collaborators invoke studies by the OECD similarly citing the importance of education to economic outcomes, in ways that appear to amount to a program of hunting for biological signals of human capital in the genome. Many other educational genomics studies are, in fact, led by economists—or self-described “genoeconomists”—who first latched on to the idea that genetic data about educational outcomes could be used to understand the genetic basis of other downstream socioeconomic outcomes. Ultimately, this work suggests political investments in genetic testing as an investment in economic outcomes, potentially diverting resources from other forms of intervention based on non-genetic analyses.

Educational genomics research and advocacy therefore suggests the emergence of genoeconometric education policy, buttressing and fortifying existing econometric tendencies in international education policy with seemingly objective data about the genetic substrates of outcomes. Whether there is genuine political (or public) appetite for this remains to be seen, but clearly the data and the proposals are being presented and circulated in ways that are intended to promote genoeconometric solutions—such as early years screen and intervene programs—to address the relationship between children’s outcomes, human capital development and economic prospects.  

Biopolitical technologies

There are several reasons to question the assertion that genomic or genoeconometric education policy based on polygenic scores would be a good idea socially, politically or ethically. They include risks that the use of genetic information may lead to biological reductionism, discrimination, stigmatization, racism and self-fulfilling prophecies, or may distract from other forms of intervention.

Even if a genomic prediction of achievement outcomes can be made reliably, as the TEDS paper claims, it remains unclear exactly what causal biological mechanisms are associated with it. Although educational genomics research studies are increasingly high-powered in computational and data processing terms, they have very partial explanatory power and remain far from specifying the genetic mechanisms that underpin educational outcomes like achievement or attainment. Statistically speaking, the “genetic architecture” of educational outcomes may have become legible–as thousands of SNP associations–but the actual biology remains unknown.

Another major problem is the thorny issue of race and ethnicity in social and behavioural genetics research, and the eugenic legacy underpinning such science. As the TEDS authors themselves acknowledge, polygenic scores are affected by “cultural bias” because existing datasets over-represent healthy, white, well-educated, and wealthier than average individuals of European ancestry. Any intervention based on genomic data would necessarily exclude all other groups, since the data do not exist to support polygenic prediction beyond European population groups, and would therefore be politically untenable on equity grounds. The findings from such studies can also be appropriated to support racist assertions of biological superiority and inferiority in intelligence, or “function normatively to reinforce conceptions of race as an innate and immutable fact that produces racial inequalities”.

A final issue, for now, is that educational genomics studies persistently obscure the social and environmental factors that shape educational achievement, while overplaying the influence of genetic transmission. Even where social and environmental factors are considered, they may be simplified into reductive measures of socioeconomic status or family factors, rather than taking account of complex social and political structures, dynamics and their impacts. As in other studies of gene-environment interactions, social factors may even be “re-defined in terms of their molecular components”, shifting away “from efforts to understand social and environmental exposures outside the body, to quantifying their effects inside the body”.

Given these issues—unknown biology, non-representativeness, spectre of race science, and obscuring social factors—it is hard to see how the genomically predicted educational achievement findings could translate into genomically targeted educational interventions.

The study does, though, show how polygenic scores and associated genomic methods and measures can function as political technologies. They enable social and behavioural genomics scientists to claim objective, data-based biological authority, despite methodological limitations, while criticizing other forms of non-genetic investigation into the social determinants of school achievement as morally and ideologically irresponsible. The use of genomic technologies also supports particular kinds of political interventions that prioritize cost efficiency and achievement maximization according to economic “human capital” conceptions of educational purpose.

Polygenic scores support a biomarkerized model of schooling that centres the idea of genetic testing and predicting academic achievement in order to target interventions on genetic groupings of students to boost economic metrics, rather than alternative kinds of reform. They help support the solidification of economic models of schooling that have dominated education policy and politics for decades, albeit with a genetic twist that treats societal progress and human capital as embodied in the human genome.

Perhaps it is more accurate, therefore, to call polygenic scores “biopolitical” technologies–that is, techniques that enable knowledge about living processes to be produced and used as the basis for governing interventions. As biopolitical technologies used in educational genomics research, polygenic scores now support the production of knowledge about the genetic correlates of learning achievements and the potential biosocial sorting of children.

That genetic knowledge is now being promoted as the basis for proposing genetically-informed education policy interventions targeting children’s school achievement. But there remain many important reasons to question whether biopolitical technologies of early years mass genetic testing and screening should ever make the leap from the lab to school systems.

Notes

  1. To be clear, “educational genomics” is not a distinct scientific field, but our name for a body of research on the genetic underpinnings of educational outcomes–and gene-environment interactions–largely carried out by scientists in fields of behaviour genetics and social science genomics (sociogenomics). Different groups and individuals do not always agree about findings, and there is particular controversy among them about the policy relevance (or not) of such work. ↩︎
  2. Polygenic scores (PGS) are also sometimes referred to as genome-wide polygenic scores (GPS), polygenic risk scores (PRS), or more recently polygenic indices (PGI). Callie Burt critically discusses the recent proposal to term them PGIs, convincingly noting that ‘the shift to index potentially obscures the fact these are “rankings” (i.e., positions on a scale) of genetic associations with socially valued outcomes, whether we call them scores or indices’. ↩︎
  3. A distinction is often made between “prediction” in the biostatistical sense–that a genetic measure is strongly correlated with an outcome or trait–and prediction as a way of making forecasts about the future. In the study discussed here, and elsewhere, that distinction dissolves, and genetic prediction through polygenic scores becomes “fortune telling”. ↩︎
  4. Gusev has also written a thorough technical analysis of the heritability of educational attainment, where he argues that “Cultural transmission and environment is much more important than genetic transmission”, though this is often under-reported in published studies and particularly in press coverage. ↩︎


Oblongification of education

Photo by Kelly Sikkema on Unsplash

According to Microsoft and Google, artificial intelligence is going to be fully integrated into teaching and learning in the very near future. In the space of just a few days, Google announced its LearnLM automated tutor running on the Gemini model, and Microsoft announced it was partnering with Khan Academy to make its Khanmigo tutorbot available for free to US schools by donating access to the Microsoft Azure OpenAI Service. But it remains very hard to know from these announcements what the integration of AI into classrooms will actually look like in practice.

The promotional videos released to support both announcements are not especially instructive. Google’s LearnLM promo video doesn’t show students interacting with the tutor at all, and the main message is about preserving the “human connection” of education.

The Microsoft promo for Khanmigo doesn’t really reveal the AI in action either, though it does feature a self-confessed “defeated” teacher watching the “miracle” bot automatically produce a lesson plan, with Khan Academy’s director of engineering suggesting it will remove some of the “stuff off of their plate to really actually humanize the classroom”.

You’re unlikely to see many more idealized representations of “humanized” school classrooms than these two videos, not least because you barely see any computers in them—except the odd glimpse of a laptop—and the AI stuff is practically invisible.

A better indication of what AI will look like when it hits schools is a promotional video released just a week earlier, in which Sal Khan showcases the OpenAI GPT-4o model’s capacity for math tutoring. Now, this isn’t a great representation of a typical school either – it’s Sal and his son in a softly-lit lounge with an OpenAI mug on the desk, not 30 students packed into 100 square metres of classroom.

But it is revealing of how entrepreneurs like Khan—and presumably the big tech boys at Microsoft and OpenAI who are supporting and enabling his bot—envisage AI being used in schools. Sal Khan’s son interacts with an iPad, at dad’s vocal prompting, to work out a mathematical problem, with the bot making encouraging noises and prompting Khan jr when he seems to be faltering.

Sal Khan’s video clearly illustrates how AI in classrooms means students in one-to-one dialogue with a portable device, a tablet or laptop, to work on very tightly constrained tasks. Khan himself has frequently talked up the idea of every student having a “Socratic tutor” (invoking Bloom’s 2-sigma achievement effect of 1:1 tutoring in a weird mashup of classical philosophy and debunked edu-stats).

Beyond the lofty Socratic rhetoric and cherrypicked evidence, however, it’s clearly a kind of pristine “showhome” demo rather than any indication whatsoever of how such an automated tutor could operate in the actual social context of a classroom. Marc Watkins sees it exemplifying a kind of automation of learning that is imagined by its promoters to be as “frictionless” as possible, based on a highly “transactional” view of learning.

“When you reduce education to a transactional relationship and start treating learning as a commodity”, Watkins argues, “you risk turning education into a customer-service problem for AI to solve instead of a public good for society”.

Oblong professors

AI tutors are a vision of the impending “oblongification” of education (if you can forgive yet another suffixification). In Kazuo Ishiguro’s novel Klara and the Sun, a minor feature is “screen professors” who deliver lessons via “oblongs”—these are instructors who appear on a child’s portable device to offer “oblong lessons” at a distance rather than in person, in a near future where home-schooling is the norm for many children.

The oblong professors of the novel are embodied educators—one is described as perspiring heavily—but I found myself thinking of Ishiguro’s depiction of oblong professors while watching the Khan/OpenAI demo. Here, AI tutors appear to students from the oblong of a tablet or laptop—they are automated oblong professors that are imagined as always-available personal pedagogues.

Characterizing them as oblongs, after Ishiguro, rightly robs them of their promotional rhetoric. Oblong tutors aren’t “magic” or a “miracle” but mathematically defined flat 2D objects that can only operate in the idealized environment of a quiet studio space where every student has an oblong to hand.     

The Khan demo also arrived about the same time as Apple released a controversial advertisement for its new iPad. The ad, called “Crush!”, depicted all of human creativity and cultural production—musical instruments, books, art supplies, cameras—being squished into the “thinnest” iPad that Apple has ever made by a giant industrial vice. It’s a representation of the oblongification of culture itself, accurately (if inadvertently on Apple’s part) capturing the threat that many feel AI poses to any kind of cultural or knowledge production.

The ideal of the AI tutor is very similar to the Apple Crush! ad—it crushes teaching down into its flattest possible form, as a kind of transaction between the student and the tutor that can be modelled in a big computer. And enacted on an oblong.

The recent long paper released by Google DeepMind to support the LearnLM tutor similarly flattens teaching. The report aims to identify models of “good pedagogy” and use the relevant datasets for “fine-tuning” the Gemini-based tutor. Page 11 features a striking graphic, with the text caption:

Hypothetically all pedagogical behaviour can be visualised as a complex manifold lying within a high-dimensional space of all possible learning contexts (e.g. subject type, learner preferences) and pedagogical strategies and interventions.

The manifold image is a multidimensional (incomprehensible) representation of what it terms the “pedagogical value” of different “pedagogical behaviours”. In the same report the authors acknowledge that “we have not come even close to fully exploring the search space of optimal pedagogical strategies, let alone operationalising excellent pedagogy beyond the surface level into a prompt”.

Despite that, they then suggest using AI techniques of “fine-tuning” and “backpropagation to search the vast space of pedagogical possibilities” for “building high-quality gen AI tutors”. But this involved creating their own datasets, since little data on good pedagogy exists, so it’s not even a model based on actual teaching.

The “ultimate goal may not be the creation of a new pedagogical model”, the Google DeepMind team writes, “but to enable future versions of Gemini to excel at pedagogy under the right circumstances”.

Despite the surface complexity of the report and its manifold graphic of good pedagogy, it still represents the oblongification of teaching insofar as it seeks to crush “optimal pedagogy” into a measurable model that can then be reproduced by Gemini. This is a model built from a small set of datasets constructed by the Google DeepMind team itself that it intends to place in schools, no doubt to compete with Khan/Microsoft/OpenAI.

But much about teaching and pedagogy remains outside of this flat model, and beyond the capacity of any tutor that can only interact with a student via the surface of an oblong device. Like Apple crushing culture into an iPad, Google has tried to crush good pedagogy into its device, except all it could find to put in the vice were some very limited datasets that it had created for itself.

Oblong students

As for the “humanizing” aspects of the AI tutorbots promoted by Microsoft and Google, it is worth considering what image of the “human” appears here. Their promo videos are full of humans, with a very purposeful emphasis on showing teachers interacting with students in physical classroom environments, unmediated by machines.

In a recent essay, Shannon Vallor has suggested that big AI companies and scientists have shifted conceptions of the “human” alongside their representations of “artificial general intelligence” (AGI). Vallor notes that OpenAI has recently redefined AGI as “highly autonomous systems that outperform humans at most economically valuable work”, which she argues “wipes anything that does not count as economically valuable work from the definition of intelligence”.

Such shifts, Vallor argues, not only narrow the definition of artificial intelligence, but reduce “the concept of human intelligence to what the markets will pay for”, treating humans as nothing more than “task machines executing computational scripts”. In the field of education, Vallor suggests, the “ideal of a humane process of moral and intellectual formation” is now overshadowed by AI imaginaries of “superhuman tutors” which position the student as “an underperforming machine”.  

Deficit assumptions of students as underperforming machines, which require prompting by AI to perform future economically valuable work, seem at odds with the rosy rhetoric of humanizing education with AI. AI tutors, as well as being oblongified teachers, also oblongify students—treating them as flattened-out, task-completing machines. Like iPads, but with fingers and eyes.

Oblong education

My metaphorical labouring of the “oblong” as a model of education is a fairly light way of trying to illuminate some of the limitations and constraints of current approaches to AI in education. Most obviously, despite the rhetoric of transformation, all these AI tutors really seem to promise is a one-to-one transactional model of learning where the student interacts with a device.

It’s an approach that might work OK in the staged setting of a promo video recording studio, but is likely to run up hard against the reality of busy classrooms.

AI tutors are also just models that, as the Google DeepMind report illuminates, are highly constrained because there’s simply not good enough data to build an “optimal pedagogy” engine. And that’s before you even start assessing how well a language model like Gemini performs.

These limitations and constraints are important to consider as Microsoft and Google—among many, many others—are now making concerted efforts to build flattened model teachers inside computers, then set them free in classrooms at significant scale.

Ishiguro’s notion of the “oblong professor” is useful because it helps to deflate all of the magical thinking that accompanies AI in education. It’s hard to get excited about an oblong.

Sure, AI might be useful for certain purposes, but a lot of the current promises could also lead to real problems that need serious consideration before activating autopedagogic tutors in classrooms. Currently, AI is being promoted to solve a huge range of complex issues in education.

But AI tutors are simplified models of the very complex, situated work of pedagogy. We shouldn’t expect so much from oblongs.


Edtech has an evidence problem

Edtech brokers have begun producing new evidence and measurements of the impact of technologies in schools. Photo by Alexander Grey on Unsplash

Schools spend a lot of money on edtech, and most of the time it’s a waste of their limited funds. According to the Edtech Evidence Exchange, educators estimate that “85% of edtech tools are poor fits or poorly implemented”, indicating very weak returns for the $25 billion or more annually spent on edtech in the US alone. The problem is that school procurement of edtech is rarely based on rigorous or independent evidence. The Edtech Evidence Exchange is one example of a new type of organization in education that is aiming to address this problem, by constructing an evidence base to support edtech spending decisions.

In a new paper just published in Research in Education, Carlos Ortegon, Matthias Decuypere and I conceptualize these new edtech evidence intermediary organizations as edtech brokers. Edtech brokers perform roles such as guiding local schools in “evidence-based” procurement, adoption, and pedagogical use of edtech, with a mission to support teachers and school authorities in modernizing in safe, reliable, and cost-effective ways. Edtech brokers are appearing around the world, yet they have so far captured little critical attention. We kicked off our project on edtech brokers a couple of years ago, with Carlos Ortegon taking the lead for his doctoral research and lead-authoring the paper entitled “Mediating educational technologies: Edtech brokering between schools, academia, governance and industry” as the first major output.

Edtech brokers are significant emerging actors in education because they are gaining the authority and capacity to shape the future direction of edtech in schools, at a time of rapid digitalization of the schooling sector in many countries around the world. They can also be powerful catalysts of the edtech market. As expenditure on edtech from governments, companies, and consumers has increased in the past decade, and as the edtech industry continues to seek new market opportunities, such as the application of AI, edtech brokers play a role by connecting technical products to the specific social and political contingencies of different local settings.

Edtech brokers

In the paper we identify three distinctive kinds of edtech brokers:

Edtech ambassador brokers, which act as representatives (or ambassadors) of specific edtech brands. Edtech ambassador brokers encourage the procurement of their products and promote their educational potential. Ambassador brokers are a global phenomenon, as the growing number of specialized Google and Microsoft partner organizations across different countries makes clear, and they usually offer services such as streamlined procurement and professional development for teachers.

Edtech search engine brokers operate as search portals that focus on providing on-demand evidence about “what works” in edtech, thereby shaping procurement and usage from a wide range of market providers. They place strong emphasis on providing “bias-free advice” and “evidence-based recommendations” that can prevent problems of over-expenditure, as the Edtech Evidence Exchange puts it. Edtech search engine brokers often combine multi-sector mixtures of academic, industry, policy, and philanthropic expertise, though some are commercial companies and others are directly government-funded.

Edtech data brokers support schools in managing, regulating, and analyzing their digital data. Edtech data brokers are gatekeepers of the data produced by schools when using edtech, whose core activity is securing data flows between schools and vendors. Data brokers offer distinct tools for schools to analyze their data, facilitating school-level educational decisions. 

Though they are relatively unknown in the digital education landscape, edtech brokers are therefore becoming important figures that make claims to expertise in edtech effectiveness, filter purchasing options, shape edtech procurement decisions, manage data flows, and lead the professional development of teachers in schools.

Beyond this seemingly straightforward definition of their role, we also see edtech brokers as strategically mediating between schools, industry, evidence and policy settings. In this mediating role, edtech brokers construct relations between a variety of different constituents. For example, they connect vendors to schools, act as relays of evidence produced in research centres, and strengthen policy agendas on evidence-based edtech. They also act as transmitters and brokers of normative ideas about tech-enabled transformation and reform, helping to circulate powerful imaginaries and expectations of educational futures into the attention of school decision makers. One initiative even brokers relations between startups, learning scientists and investors for evidence-based edtech financing.

But this means edtech brokers also have some capacity to affect each of the constituents they connect. First and foremost, edtech brokers take up powerful positions in determining which edtech is used in schools, and how, according to particular standards of evidence. This means, second, that edtech brokers can influence edtech markets, shaping the financial prospects of startups and incumbents, as they either promote or devalue specific products, and thus affect the procurement decisions of schools. And third, they can influence policy settings and priorities, by positioning themselves as arbiters of “what works” and thus amplifying policy attention on certain affordances and functionalities.

Mediating edtech

In the paper we highlight the mediating practices of edtech brokers and their implications. The first set of mediating practices we refer to as infrastructure building. In their documents and promotional discourse, edtech brokers frequently invoke the idea of school modernization, and of using evidence-based edtech to update and upgrade schools’ digital infrastructures for teaching and learning. In the case of ambassador brokers, this updating of digital infrastructure also involves synchronizing schools and teachers’ pedagogic practices with the broader digital ecosystems of big companies like Google and Microsoft. Edtech data brokers emphasize interoperability and the synchronization of student data flows across different edtech applications. More than merely offering technical products and support, these efforts shape the digital architecture of the school through the promotion of rapid, easy, and safe processes of transformation.

The second key brokering practice is evidence making. Edtech brokers use different evidentiary mechanisms and instruments to produce evidence of “impact” and “efficacy”. By doing so, edtech search engine brokers in particular guide the adoption and usage of edtech in schools, ultimately mediating and shaping the production of “what works” evidence and its circulation into school decision-making sites. One edtech search engine broker studied in the paper, for example, operates as a kind of database of edtech products that are ranked and promoted in terms of online reviews provided by teachers. The broker calls this kind of evidence “social proof”, with its legitimacy derived from front-line teachers’ active participation in its production, though it is also shaped and constrained by a series of specific criteria the organization has derived for “assessing impact”.

Another search broker, by contrast, rates edtech according to specific variables and measurement instruments, enabling schools to define their needs and receive contextualized recommendations through a “matching” program. As such, edtech brokers reinforce the political ideal that “what works” can be repeated in diverse settings, by incorporating educators themselves into the evidence-making process and by producing locally contextualized guidance via new instruments. Edtech brokers’ evidence is not neutral but imprinted by specific assumptions and interests.

The final practice of brokers is professionality shaping through professional development and training programs. By mediating between edtech vendors and pedagogic practice, brokers aim to transform teachers into knowledgeable edtech users, while simultaneously extending edtech vendors’ reach into everyday professional routines. Edtech brokers therefore project a particular normative image of the digitally-competent teacher who, armed with evidence and training, can capably choose the right edtech for the job at hand and deploy it to beneficial effect in the classroom.

Examining edtech brokers

The article is now the basis for ongoing empirical work with edtech brokers across Europe. They are mediating edtech into schools, and while doing so laying claim to expertise in edtech evidence and practice. This makes them powerful yet little-studied actors in shaping which digital technologies are promoted to schools and how, how schools make procurement decisions, and how teachers incorporate edtech into their routine pedagogic practices.

In turn, these brokering practices open up important questions about the nature and production of evidence of edtech impact, about the role of little-known intermediary organizations in shaping the future of edtech use in classrooms, about the interests, assumptions, and financial and industrial support underpinning their judgements, and about their capacity to affect the market prospects of edtech startups. Edtech brokers may be putting efforts into solving the evidence problem in edtech, but in doing so they are also positioning themselves as powerful influences on the digital future of schooling.

The full paper, “Mediating educational technologies: Edtech brokering between schools, academia, governance and industry”, is available (paywalled) from Research in Education, or as an open access version.
