As generative AI tools continue to be pushed into education—most recently, with OpenAI partnering with Khan Academy, GPT-4o being cheerfully touted as a tutor in their promotional videos, Google launching LearnLM, and MagicSchool.ai claiming to be “the most used and loved AI platform for educators worldwide”—there has also been mounting critical resistance. Speaking of which: if you’re looking for consistent (much more consistent/frequent than me!), thoughtful, articulate critical takes on AI in education, my favorites to chew on these days are Helen Beetham’s Imperfect Offerings; Marc Watkins’ Rhetorica (particularly his recent Beyond ChatGPT series); and Ben Williamson’s Code Acts in Education. (I also recommend following Leon Furze, Jon Ippolito, Jane Rosenzweig, and Charles W. Logan on various socials for critical takes. There are many more, of course—please feel free to boost yourself or your faves in the comments.)
Since the recent GPT-4o drop, and in response to Mira Murati’s comment therein that “we’re always trying to find ways to reduce friction” (quoted here by Jane Rosenzweig), several writers about AI and education have commented on how AI is being pushed to students as a “frictionless” shortcut to learning. The first reflections I heard about AI, “friction,” and education in fact came from Stanford’s Michele Elam in a panel last fall, later published, which I reposted as part of a brief reflection on X: “
a critical AI literacy approach (h/t @Bali_Maha) adds friction to systems intended to be frictionless. This, to me, is a strength of humanities education, to "[turn] our attention to those seams we are seduced into not seeing." (Michele Elam, from https://read.dukeupress.edu/american-literature/article/95/2/281/344231/Poetry-Will-Not-Optimize-or-What-Is-Literature-to).”
OpenAI and other makers of generative models have not only tried to remove friction and smooth the seams of generative systems, but have also added anthropomorphic elements to suggest that those “seams” are just quirks similar to those that humans have; it’s no accident, as many have noted since its release, that ChatGPT generates responses in the first person. As someone who is deeply interested in form (I’m a modernist by training, after all), I find the forms these technologies take to be deeply important and themselves worthy of study.
In technology products, of course, we generally think of good UI/UX as frictionless, as making its seams invisible. When you’re looking to submit your taxes or order a pizza, after all, speed and seamlessness are pretty desirable, right?
In education, however, “friction” is a feature, not a bug.
As Jane Rosenzweig’s recent article in the Boston Globe argues, “our students are not products to be moved down a frictionless assembly line, and the hard work of reading, writing, and thinking is not a problem to be solved.” Marc Watkins’s post “We Need to Reclaim Slowness” similarly notes that “friction matters in learning”; and in “Why Are We in a Rush to Replace Teachers?” he writes that
One way we learn is through friction. Contending with experiences that require multiple steps, time between, and applying previous knowledge with new knowledge helps ensure students learn material and reflection asks them to pause and take account of what that learning meant to them.
Looking back again to fall 2023, Michael Gonzales, in “Artificial Mediocrity: The Hazards of AI in Education,” asks whether “the purported obstacles that chatbots help us circumvent [are] just bumps in the road, or might the difficulties we encounter be a vital part of what is required for us to learn in a serious, lasting way?” He concludes that “a bit of friction—what educational technology attempts to overcome—is exactly what the mind needs.”
If we agree that friction is necessary for learning and we want students to slow down, what about teachers?
A frictionless experience is being pushed to educators, too, with the promise of saving us time. Khanmigo is now available for free to teachers, as are parts of MagicSchool.ai. (I imagine that providing free access to Khanmigo is in part a move to allow Microsoft and OpenAI to take back some of the market share from platforms like MagicSchool.ai in its long game to make OpenAI profitable. Education is a big market, and educational data—from teachers and students both—is valuable.)
In the wake of pandemic pivots and in the context of continued under-resourcing and burnout, one-stop AI shops like Khanmigo and MagicSchool.ai are certainly seductive.
And, you might rightly say, teachers are experts in their fields. We aren’t students; the issues aren’t the same. Where’s the harm in us using AI?
Well, first of all, there is no shortage of information about the ethical issues and harms that emerge from the creation, training, and deployment of commercial generative AI systems. When I give workshops in which I touch on how teachers might use select generative AI tools, I do so after a lengthy recounting of generative AI harms—e.g. data and privacy concerns, labor exploitation, incorporation of CSAM into datasets, environmental impacts, misinformation and factual inaccuracies—with the question “is there a way to use an unethical system ethically?” always in the mix. Of course, I am not naïve, nor am I a purist: there are ethical issues at this point with most if not all technological systems. (Do you chat with colleagues on X? Do you use Microsoft Word or Google Docs? Do you use cloud computing? There are ethical questions to consider in all of those use cases and few of us are in a position to avoid all of them.) But to me, a critical AI literacy approach involves giving people the opportunity to step back and ask those questions, to weigh harms and benefits before using, or asking students to use, any given generative technology, and to move ahead with transparency and consent.
This critical AI approach is at odds with “time saving,” at least at the front end. But we teachers are all also students of a new technology. We, too, would benefit from slowing down.
Yet MagicSchool tells us that “AI is a vast and complex field, but we make it so that you can take advantage of this new technology and immediately apply it to all kinds of tasks on your plate.” They are invested in us embedding it into our prep and teaching now, not later. Like stage magicians or the Wizard of Oz, they hand-wave away the “vast and complex field” of generative AI and ask us not to look too closely behind the curtain, to trust it to them. But trying to employ these tools ethically and effectively would actually mean taking a deep dive into their privacy policies and terms of use and ensuring that all of the components are, for instance, FERPA or GDPR compliant.
It would mean testing the limits of their tendency to “hallucinate” (i.e., produce plausible but inaccurate text predictions). It would mean exploring and exposing the more or less subtle biases that emerge in generated feedback, as folks like Leon Furze and the team of Melissa Warr, Punya Mishra, and Nicole Oster have recently done. It would mean weighing the value of automating feedback against the message it sends students about the value of their work, as, for instance, in Marc Watkins’ post about automating feedback, where he notes that “once we normalize offloading human relationships, it’s not too hard to imagine automating the truly meaningful aspects of teaching and learning that form the core of human connection,” or Peter Greene’s post, where he asks “what happens to a student's writing process when they know that their ‘audience’ is computer software? What does it mean when we undo the fundamental function of writing, which is to communicate our thoughts and feelings to other human beings?” Again, embarking on these explorations would be the opposite of time saving—but I’d argue this is the work we must do before we use these tools.
Even if we take the time now to slow down and learn something about that “vast and complex field,” the promise persists: that using these tools will save time and free teachers for “more important” tasks. “We’re here to help lighten the load, so teachers can save their energy for where they shine best—in the classroom, in front of students,” claims MagicSchool. But what is it exactly that we’re supposed to do “in the classroom, in front of students”? Run our AI-generated quizzes? Show our AI-generated slides? Sit with our students and watch the chatbot generate funny poems? Tell AI-generated teacher jokes (yes, an actual “tool” on MagicSchool)?
It is safe to say most technologies of automation have made similar claims: that they will handle some element more quickly to make it easier to do something more important than what the tech automates. They’re never quite as clear about what that “more important” element is, though. Audrey Watters’ ever-important Teaching Machines: The History of Personalized Learning (2021) shows that the desire for automation of teaching and learning, framed at least in part as a way to help teachers but also as a way to make education more “efficient,” goes back at least to the turn of the last century. Sidney Pressey, Ben Wood, B. F. Skinner, and others promised to save teachers from “drudgery”—even while they seemed to imply that teachers couldn’t be trusted with that same “drudgery” (e.g. preparing questions, testing, grading) which, ironically, they thought comprised the real work of teaching and learning.
As Watters writes of Wood, for instance,
to ‘learn’ students was not [so much] a matter of cultivating interpersonal rapport with each one as it was a matter of developing a scientific profile and a statistical analysis of them. To know students, for Wood, meant to test students—via content examinations, psychological analyses, personality assessments, and intelligence and aptitude tests. (69)
We might well disagree that the latter are the real work of teaching and learning. But even as ed tech gurus are selling the “magic” of time-saving to teachers, the quiet logic that animates the development of these tools is that what is being automated is actually what matters. If we believe otherwise—that is, if we believe that human relationship is essential to the teaching and learning process—we had better be very clear about it. That may mean rethinking the “drudgery” so that it becomes a more effective means to the ends we actually value, rather than simply ensuring we’re more “efficient” or that we’ve “saved time” in some process that we don’t see as actually valuable. And it may mean rethinking those ends themselves, which might mean, for instance, pushing back against a transactional view of education, as Watkins suggests.
Watters’ book is part of a technocritical tradition that examines who really gains from the deployment of particular technologies; as she writes, “it’s a story of how education became a technocracy, and it’s a story about how education technology became big business” (9). She’s not the first to notice. Back in 2018, in “The Tech Industry’s War on Kids,” psychologist Richard Freed warned,
There are few industries as cutthroat and unregulated as Silicon Valley. Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival. Children’s well-being is not part of the decision calculus.
We would all benefit from exploring this longer history of tech, including ed tech, and who it serves. We would benefit from taking the time to ask hard questions about what constitutes effective teaching and learning in our disciplines and how tech might or might not play into it. And our students would benefit from us slowing down, providing some friction, and asking the tough questions about the value and harms of technologies before deploying them.
In other news….
I’m looking forward to our upcoming AI & Digital Literacy Institute, which will bring a group of KC-area secondary and postsecondary educators together to build critical AI literacy through lectures, conversation, and dedicated time to work on resources and assignments. My colleague Sean Kamperman and I are also working with the National Humanities Center to bring a similar program to the Tulsa area. In keeping with the theme of this post, and as I’ve suggested elsewhere, educators need the space and time to build this kind of literacy; it needs to be supported materially as well as theoretically. We’re lucky to have two major organizations in the Kansas City area dedicated to supporting education and human flourishing: the Hall Family Foundation and the William T. Kemper Foundation (Commerce Bank, Trustee). The Hall Family Foundation has underwritten our whole institute, and the William T. Kemper Foundation has ensured that every attending educator gets a stipend for the week. This kind of support is essential if teachers are to make informed decisions about whether and how to engage with new technologies.
As a former ed-tech evangelist with startup capital and dreams of making the world so "easy" for learning - I profess, we need a helluva lot more friction involved in teaching. Even if we have to artificially inject it. Nobody ever climbed Everest without first going up a lot of smaller hills, day after day after day. As Kundera in "Slowness" relates - speed is about forgetting and slowness about memory. The first is necessary at times but the latter is crucial to intelligence and learning - for experience itself is of necessity but the slow recall of the past.
Katie, nicely done. Two thoughts. Of course you are right, but I have a sense of King Canute with his feet wet. Second, if anything, I think the metaphor of "friction" gives too much away, because it leaves the underlying imaginary in place, that is, education as a transport, or transaction, to be accomplished as "smoothly" and efficiently as possible. Contrast Bildung, which is about the student, as you know. Or, for another metaphor, track athletes run in circles. The weights return to the rack, or their position on the machine. No "work" in the mechanical sense is done (Work = Force x Displacement). Instead, the work is done on the athlete. I almost never care about a student's interpretation or argument as such, except to indicate whether the student is becoming better. Anyway, keep up the good work.