[Note: updated June 6, 2025: see the link below for the full article.]
I thank my readers for their patience as I worked on the following piece for the journal Critical AI this summer, and I’m honored to be featured as their Sneak Preview piece for the second issue of the journal, a special issue on LLMs, which will be published in February 2024. I hope you will find some value in it; if so, please share! Ultimately I hope that it will help us develop policies that protect us and our students as we engage with these technologies.
A Blueprint for an AI Bill of Rights for Education, Critical AI.
The link above takes you to the full article in the journal now, rather than the blog post. For those who cannot access the journal, the article is below.
April 2024
A Blueprint for an AI Bill of Rights for Education
Kathryn Conrad
Critical AI (2024) 2 (1)
https://doi.org/10.1215/2834703X-11205245
Abstract
In the wake of the introduction of ChatGPT, educators have been faced with pressure to adapt to the disruptive technology of AI chatbots. But these tools were not developed with educational applications in mind, and they come with many potential risks and harms to students. As educators decide how to address generative systems in their classrooms in the context of an ever-changing technological landscape, this essay offers a starting point for conversations about policy and protections. It begins with the rights articulated by the US Office of Science and Technology Policy and goes on to elaborate rights for educators and students, including institutional support for critical AI literacy professional development; educator collaboration on AI policy and on the purchase and implementation of generative systems; protection of student privacy and creative control; and consultation, notice, guidance, and appeal structures for students.
Keywords: privacy, data, surveillance, ethics, educators
Generative AI1 fully came to the attention of the public in 2022, first through coverage of improvements in large image models such as DALL-E 2, Stable Diffusion, and Midjourney and, since late November, through OpenAI's introduction of ChatGPT, a large language model (LLM) engineered for question answering and dialogue. Almost immediately (as the editors of this special issue elaborate further in their introduction), LLMs were hailed as the end of the high school and college essay while educators were urged by technophiles and technodeterminists inside and outside the educational domain to “teach with it.”2
There is some truth to the media's obsessive focus on plagiarism or violations of academic integrity: the ease with which students can create ostensibly passable work on a range of assignments, combined with the unreliability of AI-detection software, has compelled educators to reassess assignments, rubrics, and statements of academic integrity to ensure that students can consistently meet learning goals.3 They have met this challenge, moreover, in the wake of pandemic-driven pedagogical disruptions: shifts to (and from) online learning, often in tandem with layoffs, austerity, and heightened workloads. From K–12 schools to elite research universities, educators have managed this technology-driven turbulence with minimal training, support, or guidance—all while contending with clickbait articles portraying teachers as pearl-clutching technophobes.
Since ChatGPT's debut, the pressure to “teach with” so-called generative AI has begun to mount, driven partly by technology companies that have long perceived education as a lucrative market. The designers of these commercial projects did not consult with educators or students or, indeed, engage in dialogue with domain experts outside the AI industry and its preferred research partners.4 These models were not only designed without consideration of educational goals, practices, or principles; they also emerge from a landscape in which some elite technocrats actively oppose higher education, imagine education to be largely automatable, conceive of human learning primarily as the acquisition of monetizable skills, and regard students and teachers as founts of free training data.5
As this special issue elaborates at some length, today's AI entails a host of ethical problems, including the nonconsensual “scraping” of human creative work for private gain,6 amplification of stereotypes and bias, perpetuation of surveillance, exploitation of human crowdworkers, exacerbation of environmental harms, and unprecedented concentration of power in the hands of a few corporations that have already proven themselves poor stewards of the public interest.7 The impacts on education extend beyond the classroom; the potential for harm in the deployment of these systems has led the EU to place “AI systems intended to be used for the purposes of assessing students” and “participants in tests commonly required for admission to educational institutions” in its highest category of risk, alongside those used for law enforcement and the administration of justice (EU 2023). Teaching critical AI literacy (Bali 2023) includes making this larger context visible to students, but advancing such literacy does not preclude the possibility of envisioning forms of AI that might work for educational purposes, such as LLMs that have been trained on ethically obtained data sets and designed in collaboration with educators, students, and community stakeholders with careful attention to access, equity, and learning goals.
Ultimately, as the anthropologist Kate Crawford (2021: 8) argues, AI is not simply a technology; it is also “a registry of power.” Law professor Frank Pasquale (2020: 229) has contended that contemporary technological development has so far been governed by neoliberal principles and argues instead for an approach to continued development and deployment framed by principles that are “diverse, co-developed with domain experts, and responsive to community values.” As we move to consider whether and how so-called generative AI has a place in our classrooms, it is time to place such principles first rather than creating reactive policies to contend with every new technological rollout.8
As the title suggests, my blueprint is modeled on a document that the Biden administration's Office of Science and Technology Policy released in 2022: “Blueprint for an AI Bill of Rights.” I believe that the following principles, quoted verbatim from each of the blueprint's five sections, should be enforced rather than serving merely as an aspirational guide.9
Safe and Effective Systems
You should be protected from unsafe or ineffective systems.
Algorithmic Discrimination Protections
You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
Data Privacy
You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
Notice and Explanation
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
Human Alternatives, Consideration, and Fallback
You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.10
With these principles as a starting point, I propose a supplemental set of rights for educators and students.
These are intended as the beginning rather than the end of the conversation, a foundation on which policies and protections can be based. Ultimately, however, educators must lead this conversation, guided by our aspirations for our students rather than driven by tech companies whose goals are not our own.
Rights for Educators
Input on Purchasing and Implementation
You should have input into institutional decisions about purchasing and implementation of any automated and/or generative system (“AI”) that affects the educational mission broadly conceived. Domain experts in the relevant fields should be informed and enabled to query any consultants, vendors, or experts who have promoted the systems before such systems are adopted. Institutions should also set up opportunities for students to participate in and advise on policies that involve the mandatory use of any such applications. Institutions interested in exploring coursework devoted to the use of automated and/or generative tools should enable instructors to work with developers and vendors to ensure that any adopted tools are appropriate for educational contexts and do not subject students or educators to surveillance or data theft.
Input on Policies
You (or your representative in the appropriate body for governance) should have input into institutional policies concerning “AI” (including automated and/or generative systems that affect faculty, students, and staff). By definition, educators are at the heart of the educational mission and must be given the opportunity to lead the development of “AI”-related policies.
Professional Development
You should have institutional support for training around critical AI literacy. Critical AI literacy includes understanding how automated and/or generative systems work, the limitations to which they are subject, the affordances and opportunities they present, and the full range of known harms (environmental as well as social). Such literacy is essential, but educators cannot be expected to add gaining critical AI literacy to their workloads without such support.
Autonomy
So long as you respect student rights (as elaborated below), you should decide whether and how to use automated and/or generative systems (“AI”) in your courses. Teaching about “AI” is increasingly important to educating students, but commitment to teaching critical AI literacy (as elaborated above) does not imply any mandatory student use of an automated system. Educators should not be pressured into adopting new systems or penalized for opting out. Educators should be given resources to evaluate best practices for teaching in consultation with other domain experts and peer-reviewed research on pedagogy.
Protection of Legal Rights
You should never be subjected to any automated and/or generative system that impinges on your legal rights (including but not limited to those stated above).
Rights for Students
Guidance
You should be able to expect clear guidance from your instructor on whether and how automated and/or generative systems are being used in any of your work for a course. These guidelines should make clear which specific systems or tools are appropriate for any given assignment.
Consultation
You should be able to ask questions of your instructor and administration about the use of automated and/or generative systems prior to submitting assignments without fear of reprisal or assumption of wrongdoing. Critical AI literacy, especially in an environment of rapid technological development, requires honest conversations among all stakeholders. This includes students being able to ask why any given system is required for a given assignment. Students who have been using AI tools in other courses or in their private lives should be treated respectfully on this as on any other matter.
Privacy and Creative Control
You should be able to opt out of assignments that may put your own creative work at risk for data surveillance and use without compensation or that might put your privacy at risk. Educational institutions have an obligation to protect students from privacy breaches and exploitation.
Appeal
You should be able to appeal academic misconduct charges if you are falsely accused of using any AI system inappropriately. If you are accused of using technology inappropriately, you should be invited to a conversation and allowed to show your work. Punitive responses to student abuse of generative technologies must be based on the same standard of evidence as any other academic misconduct charges. Critical AI literacy means that all parties recognize that detection tools are at present fallible and subject to false positives.
Notice
You should be informed when an instructor or institution is using an automated process to assess your assignments, and you should be able to assume that a qualified human will be making final evaluative decisions about your work. You should always have the ability to choose to be assessed by a human and to appeal automated assessments.
Protection of Legal Rights
You should never be subjected to any automated and/or generative system that impinges on your legal rights (including but not limited to those stated above).
Acknowledgments
I especially want to thank the Critical AI editorial team for extensive and thoughtful feedback on several drafts of this essay; Anna Mills, Maha Bali, Autumm Caines, Lisa Hermsen, and Perry Shane for their input on my earliest draft of this framework; and the Kansas and Missouri educators who attended the June 2023 AI and Digital Literacy Summit at the Hall Center for the Humanities at the University of Kansas, whose concerns and dialogue helped sharpen my thinking about this work.
Notes
1.
I use “generative AI” while recognizing that “artificial intelligence” (AI) is a loaded term with a complicated history. Though the term “generative AI” is increasingly common, I concur with Emily Bender (2023) that a more apt term for this cluster of technologies might be “synthetic media machines.” Other options might include “media generation from extracted information,” “generation from mined data,” “data-mining-based predictive model,” or, following the novelist Ted Chiang, simply “applied statistics” (Murgia 2023).
2.
For “end of the essay” predictions, see Herman 2022; Marche 2022. For urging educators to “teach with” AI, see, for instance, Heaven 2023; Roose 2023; Rim 2023. Lauren M. E. Goodlad and Samuel Baker (2023) note that the much-criticized New York City public school ban on ChatGPT enabled the city's educators to become “early role models in teaching students the limitations of this much-hyped technology.”
3.
On flawed efforts to detect AI-generated work, see Wiggers 2023; Fowler 2023. Evidence already suggests that vulnerable populations are more likely to be accused of cheating (e.g., Liang et al. 2023). See Klee 2023 on one notable case of a false accusation by an instructor who assumed that the LLM could detect its own outputs.
4.
Notably, the release of OpenAI's GPT-4, supported by Microsoft, coincided with the firing of Microsoft's AI Ethics and Society team in 2023. In this, Microsoft follows in the footsteps of Google's firing of ethicists Timnit Gebru and Margaret Mitchell in late 2020 and early 2021, respectively (see, e.g., Metz and Wakabayashi 2020; Schiffer 2021). To be sure, some educators have been in productive, collaborative conversations with tech companies and fellow educators about the use of generative technologies in education (e.g., Mills, Bali, and Eaton 2023).
5.
See, for example, Pasquale 2020: 6–88; Goodlad and Baker 2023. While Pasquale's (2020: 3) first proposed law specifies that “robotic systems and AI should complement professionals, not replace them,” the public release of GPT-4 was accompanied by widely hailed claims that falsely implied that chatbots capable of passing, say, the LSAT are thereby equipped to practice law. As Goodlad and Stone suggest in the introduction to this special issue, the same mindset that finds tech companies eager to portray LLMs as gifted educators also finds them keen to push chatbots as the ideal replacement for counselors, lawyers, doctors, and other professionals. For the recent case of one lawyer's disastrous reliance on ChatGPT for legal research, see Armstrong 2023; Milmo 2023.
6.
Describing human creative work as training data is already reductive (Conrad 2023) in stripping such work “of the critical essence by which it avails itself of copyright protection: its expressive value and human creativity” (Kupferschmid 2023).
7.
Important work on ethical issues includes Bender et al. 2021; Crawford 2021; Weidinger et al. 2021; Caines 2023; D'Agostino 2023; Fergusson et al. 2023; Furze 2023; Gal 2023; Hendricks 2023; Luccioni et al. 2023; Perrigo 2023; Sweetman and Djerbal 2023; Turkewitz 2017, 2023; van Rooij 2023. On the concentration of power, see also Whittaker 2021; Acemoglu and Johnson 2023.
8.
Adopting a principles-first approach helps to avoid situations similar to those encountered by copyright law, which emerged in reaction to the printing press (ARL, n.d.) and whose “fair use” stipulations (Turkewitz 2023) have been exploited by large corporations. While the Russell Group of UK universities has, at the time of this writing, articulated policies supposedly based on principles, the impact of those principles is attenuated by the assumption that AI provides a “transformative opportunity” that the “universities are determined to grasp” and by lacunae around the notion of “appropriate” use in the policies themselves (Russell Group 2023).
9.
This blueprint is not enforceable (as its extensive legal disclaimer page makes clear [OSTP 2022a]), and indeed the US White House and Congress have both provided a platform for and actively chosen to financially support companies that have already violated this bill (see White House 2023a, 2023b; Krishan 2023).
10.
The quoted text comprises each of the headers and the statement of principles in the blueprint as of July 2023. For the full text, see OSTP 2022b.
Works Cited
Acemoglu, Daron, and Simon Johnson. 2023. “Big Tech Is Bad. Big A.I. Will Be Worse.” New York Times, June 9. https://www.nytimes.com/2023/06/09/opinion/ai-big-tech-microsoft-google-duopoly.html
ARL (Association of Research Libraries). n.d. “Copyright Timeline: A History of Copyright in the United States.” https://www.arl.org/copyright-timeline/ (accessed July 4, 2023).
Armstrong, Kathryn. 2023. “ChatGPT: US Lawyer Admits Using AI for Case Research.” BBC News, May 27. https://www.bbc.com/news/world-us-canada-65735769
Bali, Maha. 2023. “What I Mean When I Say Critical AI Literacy.” Reflecting Allowed (blog), April 1. https://blog.mahabali.me/educational-technology-2/what-i-mean-when-i-say-critical-ai-literacy/
Bender, Emily. 2023. “I think that having one cover term for everything that gets called ‘AI’ is part of the problem.” Twitter, June 17, 8:47 a.m. https://twitter.com/emilymbender/status/1670065739196420096?s=20
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. New York: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Caines, Autumm. 2023. “Prior to (or Instead of) Using ChatGPT with Your Students.” Is a Liminal Space (blog), January 18. https://autumm.edtech.fm/2023/01/18/prior-to-or-instead-of-using-chatgpt-with-your-students/
Conrad, Kathryn (Katie). 2023. “Data, Text, Image: How We Describe Creative Work Matters.” Pandora's Bot (blog), May 4. https://kconrad.substack.com/p/data-text-image
Crawford, Kate. 2021. Atlas of AI. New Haven, CT: Yale University Press.
D'Agostino, Susan. 2023. “How AI Tools Both Help and Hinder Equity.” Inside Higher Ed, June 5. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity
European Union. 2023. “Artificial Intelligence Act.” https://artificialintelligenceact.com/ (accessed July 4, 2023).
Fergusson, Grant, Calli Schroeder, Ben Winters, and Enid Zhou, eds. 2023. Generating Harms: Generative AI's Impact and Paths Forward. EPIC.org. https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf
Fowler, Geoffrey A. 2023. “We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student.” Washington Post, April 3. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
Furze, Leon. 2023. “Teaching AI Ethics.” Blog, January 26. https://leonfurze.com/2023/01/26/teaching-ai-ethics/
Gal, Uri. 2023. “ChatGPT Is a Data Privacy Nightmare, and We Ought to Be Concerned.” ArsTechnica, February 8. https://arstechnica.com/information-technology/2023/02/chatgpt-is-a-data-privacy-nightmare-and-you-ought-to-be-concerned/
Goodlad, Lauren M. E., and Samuel Baker. 2023. “Now the Humanities Can Disrupt ‘AI.’” Public Books, February 20. https://www.publicbooks.org/now-the-humanities-can-disrupt-ai/
Heaven, Will Douglas. 2023. “ChatGPT Is Going to Change Education, Not Destroy It.” MIT Technology Review, April 6. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/
Hendricks, Christina. 2023. “Some Ethical Considerations in ChatGPT and Other LLMs.” You're the Teacher, February 2. https://blogs.ubc.ca/chendricks/2023/02/02/ethical-considerations-chatgpt-llms/
Herman, Daniel. 2022. “The End of High-School English.” Atlantic, December 9. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/
Klee, Miles. 2023. “Professor Flunks All His Students after ChatGPT Falsely Claims It Wrote Their Papers.” Rolling Stone, May 17. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
Krishan, Nihal. 2023. “Congress Gets Forty ChatGPT Plus Licenses to Start Experimenting with Generative AI.” Fedscoop, April 24. https://fedscoop.com/congress-gets-40-chatgpt-plus-licenses/
Kupferschmid, Keith. 2023. “Copyright Alliance, AI Accountability Policy Request for Comment.” Docket No. 230407–0093, June 12. https://copyrightalliance.org/wp-content/uploads/2023/06/NTIA-AI-Comments-FINAL.pdf
Liang, Weixin, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou. 2023. “GPT Detectors Are Biased against Non-native English Writers.” Preprint, submitted April 6. https://doi.org/10.48550/arXiv.2304.02819
Luccioni, Alexandra Sasha, Christopher Akiki, Margaret Mitchell, and Yacine Jernite. 2023. “Stable Bias: Analyzing Societal Representations in Diffusion Models.” Preprint, submitted March 20. https://doi.org/10.48550/arXiv.2303.11408
Marche, Stephen. 2022. “The College Essay Is Dead.” Atlantic, December 6. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/
Metz, Cade, and Daisuke Wakabayashi. 2020. “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” New York Times, December 3. https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html
Mills, Anna, Maha Bali, and Lance Eaton. 2023. “How Do We Respond to Generative AI in Education? Open Educational Practices Give Us a Framework for an Ongoing Process.” Journal of Applied Learning and Teaching 6, no. 1. https://doi.org/10.37074/jalt.2023.6.1.34
Milmo, Dan, et al. 2023. “Two US Lawyers Fined for Submitting Fake Court Citations from ChatGPT.” Guardian, June 23. https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt
Murgia, Madhumita. 2023. “Sci-Fi Writer Ted Chiang: ‘The Machines We Have Now Are Not Conscious.’” Financial Times, June 2. https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84
OSTP (US Office of Science and Technology Policy). 2022a. “About This Document (Blueprint for an AI Bill of Rights).” October 4. https://www.whitehouse.gov/ostp/ai-bill-of-rights/about-this-document/
OSTP (US Office of Science and Technology Policy). 2022b. “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” October 4. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Pasquale, Frank. 2020. New Laws of Robotics: Defending Human Expertise in the Age of AI. Cambridge, MA: Harvard University Press.
Perrigo, Billy. 2023. “OpenAI Used Kenyan Workers on Less than $2 Per Hour to Make ChatGPT Less Toxic.” Time, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/
Rim, Christopher. 2023. “Don't Ban ChatGPT—Teach Students How to Use It.” Forbes, May 3. https://www.forbes.com/sites/christopherrim/2023/05/03/dont-ban-chatgpt-teach-students-how-to-use-it/?sh=581ea1b8245b
Roose, Kevin. 2023. “Don't Ban ChatGPT in Schools. Teach with It.” New York Times, January 12. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html
Russell Group. 2023. “Russell Group Principles on the Use of Generative AI Tools in Education.” July 4. https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/
Schiffer, Zoe. 2021. “Google Fires Second AI Ethics Researcher after Internal Investigation.” Verge, February 19. https://www.theverge.com/2021/2/19/22292011/google-second-ethical-ai-researcher-fired
Sweetman, Rebecca, and Yasmine Djerbal. 2023. “ChatGPT? We Need to Talk about LLMs.” University Affairs, May 25. https://www.universityaffairs.ca/opinion/in-my-opinion/chatgpt-we-need-to-talk-about-llms/
Turkewitz, Neil. 2017. “Fair Use, Fairness, and the Public Interest.” Blog, February 20. https://medium.com/@nturkewitz_56674/fair-use-fairness-and-the-public-interest-27e0745bee86
Turkewitz, Neil. 2023. “The Fair Use Tango: A Dangerous Dance with [Re]Generative AI Models.” Blog, February 22. https://medium.com/@nturkewitz_56674/the-fair-use-tango-a-dangerous-dance-with-re-generative-ai-models-f045b4d4196e
van Rooij, Iris. 2023. “Stop Feeding the Hype and Start Resisting.” Blog, January 14. https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/
Weidinger, Laura, et al. 2021. “Ethical and Social Risks of Harm from Language Models.” Preprint, submitted December 8. https://doi.org/10.48550/arXiv.2112.04359
White House. 2023a. “FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety.” May 4. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/
White House. 2023b. “FACT SHEET: Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment.” May 23. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/23/fact-sheet-biden-harris-administration-takes-new-steps-to-advance-responsible-artificial-intelligence-research-development-and-deployment/
Whittaker, Meredith. 2021. “The Steep Cost of Capture.” Interactions 28, no. 6: 50–55. https://doi.org/10.1145/3488666
Wiggers, Kyle. 2023. “Most Sites Claiming to Catch AI-Written Text Fail Spectacularly.” TechCrunch, February 16. https://techcrunch.com/2023/02/16/most-sites-claiming-to-catch-ai-written-text-fail-spectacularly/
Copyright © 2024 Kathryn Conrad