Note: This is a preprint of a review forthcoming in Computers and Composition: An International Journal of TextGenEd: Teaching with Text Generation Technologies, edited by Annette Vee, Tim Laquintano, and Carly Schnitzler, published by The WAC Clearinghouse.
Let me come right out and say that TextGenEd: Teaching with Text Generation Technologies has shaped my sense of what’s possible and what I want to try when it comes to AI and the teaching of writing. This is above all a useful book, one I have returned to over and over since its publication and referred to in myriad presentations on AI in Fall 2023. It is useful first because it describes many compelling teaching practices. I wish I could try most of the assignments described! Even though the reflections are about classroom activities that took place in Fall 2022 into early 2023, they have not lost relevance. Adapting them to the updated language models of late 2023 is for the most part trivial.
Second, the collection is useful because it frames discussion of the place of AI in the higher ed classroom. It models both critique and curiosity around AI as well as supplying historical and technical context. It invites us to our own explorations without pressuring us to be for or against AI or to resolve all the uncertainties about the future of writing in a time of rapid technological change.
The Introduction
I have listened to synthesized voices reading the collection’s introduction to me at least three times now, in addition to reading and annotating it. It stands on its own as an introduction to AI and writing in higher education. What’s most striking is the way it bridges divides in the discourse. Over the last year, as higher education has grappled with AI since the release of ChatGPT, there has been a lot of pressure to either “embrace” or resist student use of AI text generators or large language models (LLMs). We might expect a collection titled “Teaching with Text Generation Technologies” to fall in the “embrace” camp, but in fact the collection isn’t promoting the technology or recommending indiscriminate use. Rather, it promotes a hybrid response of critical engagement. The authors “believe that these tools are likely to be adopted rapidly in certain sectors of the writing economy in the coming months and years, and fostering student understanding of them is important.” They bring home this sense of a coming sea change when they describe the prototype of text generation in Google Docs, where the user “clicks a ‘Help me write’ button and has the tone of their paragraph changed.” “To the first generation of AI users, it might feel like magic. To the second generation, it might feel ordinary.” According to the editors, the kind of “instructional experimentation” showcased in the collection “will collectively put us in a much better position to determine, to the extent that we are able, how these tools should be adopted, and how we might resist them when necessary.”
Many teachers have not yet learned much about text generation AI, and the editors hope the collection “offers something for teachers with all levels of comfort with technologies.” The introduction provides just enough historical context and technical information to jumpstart explorations. The editors give a brief, readable history of automated writing that belies the current sense that ChatGPT is sui generis. Without getting sidetracked by detail, they explain how the advances that preceded ChatGPT’s release lifted automated writing into a new realm of sophistication.
If teachers incorporate text-generation technologies into class activities, which ones should we use? How should we vet these systems around student privacy and use of student data? The authors draw attention to open-source models like those from Hugging Face as the only option to fully protect student privacy. However, they suggest that using systems like ChatGPT while mitigating their harms is also a defensible approach. In the near future, they suggest, corporate models with privacy protections may well be available through educational institutions. In the meantime, they do “recommend that instructors provide alternate assignments if a student objects to using a commercial application.”
If we are not wholly embracing AI and not banning it, how do we determine how much and in what ways it should be used in pedagogy? The authors acknowledge that some limits will likely be needed, asking, “[w]hich parts of the writing process can we cede to AI while retaining what we value about writing? We will soon learn if it is tenable to allow students to use AI for some parts of the writing process (e.g., brainstorming and grammar/style checkers) but not for others (e.g., text generation). We may want to embed constraints in our assignments so as not to offload too much of students’ cognitive work to AI.” Yet will we be able to maintain such constraints? And even if we can, will the tension between AI constraint in students’ academic writing and AI use practices in other writing contexts be too great?
The authors gently warn writing teachers that it may become increasingly difficult for us to establish norms around writing in the classroom that promote our vision of writing for inquiry, given how differently writing is seen and practiced outside the academy and given the difficulty of constraining student use of AI. They highlight the divides between norms around student authorship, norms around authorship in academic journals, and norms in the workplace, and ask if AI will “expose the artificiality of writing practices in the academy.” I was a little surprised that the warning wasn’t paired with any more hopeful speculation. The first impulse of many writing teachers, like me, will be toward the hopeful and evangelistic. Many of us, surely, are already preaching that teaching about text generation is an opportunity to make a yet more compelling case to our students that writing helps us think, both in the classroom and beyond. Professional writing tasks can be both process and product; writing a memo can be drudgery, but it can also help us figure out the nuances of how to handle a situation in a way that aligns with our values. But I am glad that instead of preaching to the choir, the authors have alerted us to a way in which our pedagogy may be fundamentally challenged by AI. I am left wondering if my optimism about the writing-as-thinking gospel is more reflexive than considered.
The Assignments
Many educators feel teaching about or with AI must involve introducing new material into courses that are already overstuffed. Yet the assignments in this collection show that in many cases, teaching about or with AI can support existing learning goals. For example, Kyle Booten builds students’ awareness of genre conventions by inviting them to prompt an LLM to produce an example of podcast dialogue. In the process of iteratively nudging the LLM to improve its outputs, the students become more aware of the genre conventions and also learn prompt-writing techniques. Booten takes care to point out that such near-instant responsive revisions of a writing sample are not possible without an LLM. The AI-enabled exercise prepares students to write their own example in the genre.
Those of us relatively new to thinking about AI might assume that it would be preferable to use the most advanced language model for every pedagogical application. However, several contributors to the collection, such as Kyle Booten, Nick Montfort, and Alan Knowles, emphasize the benefits of asking students to work with a clunkier model, one that is older or smaller, such as the open-source models available on Hugging Face. These less capable models can make the predictive nature of the systems, and the way they break text into tokens, more apparent. Their reliable failures are opportunities for students to articulate what is wrong and what is needed.
AI Literacy
The AI Literacy collection of assignments shows a spectrum of ways that teachers can “help students understand how the machines work,” all of which are likely to remain relevant even as AI models and interfaces evolve. Many of these assignments support other goals like reading comprehension and summary writing as well. Some are immediately useful in any course that assigns writing. These involve little to no jargon and require little to no technical expertise on the part of the educator, yet they do teach students about the sophistication as well as the inaccuracies of autogenerated text. For example, Daniel Hutchinson and Erin Jensen show students how to prompt a language model to explain a difficult reading by inventing an example in terms of a popular culture reference of the student’s choice. The students then check whether the example really reflects the meaning of the passage and discuss what any inaccuracies reveal about AI. Nathan Murray and Elisa Tersigni suggest a blinded exercise where students critique three instructor-chosen texts. Later, these are revealed to be one accurate, authoritative human-written text; one authoritative-sounding, inaccurate human-written text; and one authoritative-sounding, inaccurate AI-generated text. Alan Knowles’ activity has students use a set of social media posts to prime a language model to generate new posts in the same style (he chooses posts about the January 6 attack on the U.S. Capitol). Then students analyze the rhetorical features of the generated posts and their implications about language models’ training data and societal impacts.
Other, more technical assignments in this section are designed to give students experience under the hood. Students work with Markov chains, find gender bias in word embeddings, and even build their own neural networks. The assignment descriptions educate teachers who are not familiar with these building blocks of large language models. They build our AI literacy even if we do not end up assigning the activities in our classes.
Creative Explorations
The collection’s introduction celebrates creative experiments with text generation, including a history of “a small number of artists and programmers” who “were going against the grain of what computation was generally designed to be used for.” It affirms that “The spirit of early creative computational writing… is still very much alive.” The “Creative Explorations” assignments show us what that looks like through a variety of playful experiments that teach students how to use AI and encourage them to reflect on its limitations. Brandee Easter asks students to use generated text and images to create a children’s book, a process that spurs discussion of biases in the models. Mark Marino’s “Grand Exhibition of the Prompts” frames prompting image generators as a genre in itself, one that demands concentrated, evocative language. Dana LeTriece Calhoun has students get ChatGPT to generate spells and then compare these with their own spells inspired by the Black Southern American practice of conjuring.
Other activities include more technical exploration. Nick Montfort asks students to use a rules-based AI system to generate multiple versions of a story and analyze the differences. Mark Sample asks students to be the ones to define rules for automatic content generation in a specific genre. Jason Boyd invites students to explore cyborg authorship using a range of lightweight, specialized text-generation programs, and kathy wu invites students to use Markov chains to create found writing inspired by Dadaism. Pretty much all of these activities sound fun and intriguing. They will likely inspire teachers to try inviting students to prompt AI in creative genres even in classes without a creative focus.
Ethical Considerations
I was relieved to find that TextGenEd addresses the ethics of large language models without attempting to be comprehensive. The ethical concerns around AI text generation are legion; they demand a book of their own, like Leon Furze’s Teaching AI Ethics. The ethics-related assignments we do find in TextGenEd are various and creatively designed. For example, Christopher D. Jimenez invites students to share information about their interests with a language model, then ask it to guess their race, class, gender, and sexual orientation, and then reflect on the bias revealed in the process.
Several assignments ask students to develop their own informed opinions about ethical AI use in college. Marc Watkins develops an academic integrity policy around AI with students. Jentery Sayers prompts students to consider whether and how a university should use AI to evaluate essays. Paul Fyfe invites students to test the instructor’s ability to distinguish an essay they have generated from one they have written. Perhaps future iterations of TextGenEd will include assignments that deal with a broader range of issues, such as intellectual property, privacy, energy use, labor, and linguistic justice.
Professional Writing
The assignments from teachers of professional writing introduce students to the prompting and critical evaluation skills needed to work with text generation in professional settings. As the authors point out, these teachers may consider AI in a different light in part because “unlike academic discourse, professional writing is not grounded in an ethos of truth-seeking and critical inquiry; it tends to be grounded in an ethos of efficacy as well as constraints of legality and workplace ethics.” Douglas Eyman’s assignment, for example, asks students to review AI summarization tools and reflect on how they might choose to use such apps in the workplace. Nupoor Ranade has students prompt text generators and then manually edit their outputs. In her class, this led to heated debate on whether professional human editors will still have an important role in the future.
I was intrigued to see that many activities in this section seem suited for a composition classroom as well. For example, Heidi McKee has students reflect on a hybrid process where they write a summary of a text and then compare it with an AI-generated one. Many of her students noted “the need, always, for human decision-making and human agency in the writing process.” Tim Laquintano asks students to translate a policy document for different age audiences and compare results against LLM translations.
Rhetorical Engagements
As the authors put it, the assignments in this section “help students build out the new rhetorical competencies enabled by LLMs and also” reflect “the possibility of using them to enhance more traditional literacies.” John J Silvestro invites students to define their own writing styles by comparing their paragraphs to stylistically different versions generated by a language model. Justin Lewis and Ted Wayland ask students to consider AI-generated counterarguments early in the research paper process so they are more likely to make fundamental improvements to their theses. Anthony Byrd has students compare human peer reviews with language model feedback on their writing and reflect critically on the AI suggestions. He frames the activity in part as a way to discourage overreliance on AI. Juan Pablo Pardo-Guerra offers students substandard auto-generated responses to his essay prompts and asks students to annotate and improve them. He sees his assignment as a kind of answer to academic integrity concerns around AI, writing “The solution to this puzzling situation is not avoiding LLMs but accepting them as extensions of our analytical and pedagogic toolkits.”
TextGenEd as a Living Collection
“Continuing Experiments” is the last section of TextGenEd, and I was delighted to discover that it consists of a call for submissions and a promise to publish at least two new collections. The authors write, “The work of teaching writing is a constant process of experimentation, revision, and collaboration, particularly when teaching with technologies that shift and evolve quickly.” Even as we anticipate future additions to TextGenEd, they invite us to participate on our own, treating TextGenEd as “a living collection, adapting the assignments to local conditions and new technologies as they evolve.”
Teachers and the Future of Text Generation
In the acknowledgments for TextGenEd, there’s a comment I’ve been puzzling over. The authors write, “A backhanded thank you to OpenAI for releasing ChatGPT and instigating an AI arms race with little understanding of how LLMs will be exploited by malicious actors or weaponized against the poor. Keep on believing in that future of techno utopianism! You made this collection necessary.”
“Necessary” suggests that the collection is remediating a problem, yet we know the teaching experiments here do much more than that. The complaint is exaggerated, perhaps out of playfulness and anger. But on the other hand, there's an exhortation to the techies to keep going. That reminds us of the excitement, curiosity, and playfulness found throughout the introduction and the assignment descriptions. Teachers can and do enjoy exploring this terrain, engaging with these powerful language systems made on others’ terms.
Like many commentators, the authors are angry about the ethical failings of large language models. But they are also outraged at the exclusion of writing teachers and historians of writing technologies from the responsibilities and joys of working on systems that forever change the writing environment. They write, “As Big Tech rushes ahead in its AI arms race with the intention of having large language models (LLMs) mediate most of our written communication, writers and teachers are forced to consider…” the workings of this technology. They are indignant about the passive role into which writers and teachers have been “forced.”
Are we not needed to help shape the technology? We have had glimpses of the bafflement of the tech world as it attempts to steer text-based AI. Soon after the release of ChatGPT, OpenAI president Greg Brockman tweeted, “Big takeaway from the GPT paradigm is that the world of text is a far more complete description of the human experience than almost anyone anticipated.” He seems unaware of those of us who have found the world of text rich enough to dedicate our careers to it.
In “Now the Humanities Can Disrupt ‘AI,’” Lauren Goodlad and Samuel Baker have called on educators to “claim a seat at the table where tech entrepreneurs are already making their pitch for the future.” Will teachers continue to be angry, excited, and largely reactive as others develop text generation AI further? Or will we find more active roles in the regulation and creation of LLMs? Vee, Laquintano, and Schnitzler are not quite so assertive as Goodlad and Baker, but even in this practical, pedagogy-oriented collection, they too send a message about teachers’ possible role in LLM development. They write, “writing teachers are poised to help steer the discourse and paths of generative AI technology. This collection serves to orient writing teachers in that essential work.” In addition to expanded explorations of classroom practice, I would love to see a TextGenEd collection of ways teachers are participating to help shape the future of text generation.
However we feel about large language models, writing teachers may find this moment in the history of writing to be energizing. Immediate pedagogical questions and societal conversations about writing and technology are so closely linked. Though TextGenEd’s emphasis is critical engagement, it also feels like a joyful collection. It encourages us to turn toward other educators and celebrate our reflections on teaching as timely and essential. Those of us who lean in to “the world of text,” as Greg Brockman puts it, have something to say to shape an era where both humans and machines put words together.
Works Cited
Brockman, G. [@gdb]. (2023, January 6). Big takeaway from the GPT paradigm is that the world of text is a far more complete description of the human experience than almost anyone anticipated [Tweet]. X. https://x.com/gdb/status/1611429677218004992?s=20
Furze, L. (2023, January 26). Teaching AI Ethics. https://leonfurze.com/2023/01/26/teaching-ai-ethics/
Goodlad, L. M. E., & Baker, S. (2023, February 20). Now the Humanities Can Disrupt ‘AI’. Public Books. http://www.publicbooks.org/now-the-humanities-can-disrupt-ai/
Vee, A., Laquintano, T., & Schnitzler, C. (Eds.). (2023). TextGenEd: Teaching with Text Generation Technologies. The WAC Clearinghouse. https://doi.org/10.37514/TWR-J.2023.1.1.02