A brief update: Why I'm still using AI detection after all, alongside many other strategies
I'm incorporating Turnitin alongside process tracking, writing process assignments, social annotation, lots of student choice, peer review and tutoring, video assignments, and more.
One year ago, I shared why I had started using AI detection after opposing it (I’ve included the original post below). It was wonderful to have dialogue with a number of educators—including Jason Gulya, Laura Dumin, and Nick Potkalitsky—who disagreed with me so thoughtfully and graciously. Others—like Chris Ostro, Tricia Bertram Gallant, and Derek Newton—agreed and appreciated that I risked backlash for the unpopular stance.
Since then, I have continued to use AI detection in online asynchronous composition courses. For context, my institution pays for Turnitin, and many instructors in the English department use it while recognizing that we must be cautious because it can be wrong.
I do want to clarify that I consider designing for intrinsic motivation the most powerful approach to reducing AI misuse, and I use AI detection as one strategy among many (see a recent presentation on my approaches).
Here’s how I frame AI detection in the academic integrity section of my syllabus:
AI detection to deter AI misuse
With each major essay, I’ll ask you to submit the work to Turnitin for plagiarism and AI detection review. AI detection is a kind of statistical guess as to whether the text was generated by AI or not. I won’t entirely trust what the AI detector says because I don’t want to make false accusations. I am aware that AI detection software occasionally labels human writing as AI. It also fails to identify some AI text as AI. Still, detectors can offer useful information alongside other indicators and also serve as a way to discourage AI misuse. (See this Turnitin student guide for more).
What if I suspect you have turned in work that is not your own?
I’m not too interested in punishing students. But I do want to make sure that I am assessing your learning accurately and encouraging your learning. So though I’m an understanding teacher, I do hold firm! I can’t give a passing grade to work that doesn’t show a student’s own thinking and learning.
If Turnitin labels some of your writing as “likely AI” or if I have a question about whether work is your own, I will look into it in the following ways:
I’ll take a close look at what you’ve shared about your process, and
I may ask to meet with you to discuss your writing and writing process before I assign a grade.
If you’re concerned about any of these approaches, please let me know. I won’t assume that means you are dishonest (really!). See also How can I protect myself against false accusations of using AI?
In Fall 2025, to my surprise, I saw significantly more student essays flagged as likely AI. I also heard anecdotes that other teachers were seeing the same, to the point where the instructional designer in charge of our Turnitin contract reached out to the company. I am still wondering whether this increase in false positives could be due to some change in the algorithm, perhaps aimed at improving the “paraphrase detection” Turnitin launched in 2024.
What I’d really like to see is required independent testing of detection software, including its updates. How accurate is it? Is it biased? It’s frustrating that the companies are constantly changing their systems, so any peer-reviewed studies are likely to lag behind the versions actually in use.
I was frustrated and a little upset that Turnitin’s detection seemed less reliable, and I was having more stressful meetings with students. Still, students seemed fairly good-natured about it, and I had several good discussions of their essays and process that I wouldn’t have had otherwise.
Given how many times I met with students and ended up apologizing and agreeing with them that their work had been wrongly flagged, it is striking to me that in an end-of-semester anonymous survey, not one student came out against detection. They were either fine with it or indicated mixed feelings.
I recognize that it’s possible some of them didn’t trust the anonymity of the survey. They knew I chose to use detection, so that may have predisposed them a little, though I described my own reluctance, made it clear that I knew it sometimes wrongly flags student writing as AI, and said I wouldn’t trust its assessment on its own.
Here are the comments on AI detection they gave permission to share:
“I think it is helpful but sometimes difficult due to words that ive actually wrote that have caught onto AI. “
"I'm fine with them I don't see a problem."
"I think that in online classes is definitley hard to have to deal with students using AI, but I also think teachers could run into similar problems with in person classes as well. I think that turnitin.com and grammerly authorship seem to be working super well and are good at preventing students from using AI. "
"love it"
"Do we students have access to Turnitin AI detector software, or is it only available to teachers?"
"I don't think the AI detection is always accurate."
"I think it's beneficial given how easy it is to just copy and paste from AI"
"I write my own papers so I'm not worried about that. However, on my last Turnitin, the software suggested I didn't quote a couple of sources. Sources I did not even read and had never heard about, so it was bothersome that this detection software thought my own writing was from anything but that of my own writing."
"I have no problems with the use of AI detection software, especially with how understanding you are towards it. I think if a teacher were to use something such as Turnitin, it's important to recognize that it is not always accurate and sometimes simply talking with students can give you a better idea of their intentions towards the class policies. "
"It helps us uphold our integrity and respect for the writing process, as well as holding us all accountable."
After learning about new, possibly more accurate detection software called Pangram from Christopher Ostro, I asked my institution, College of Marin, to vet it. My understanding is that they’re not moving forward with the request at this time.
I’ll be curious to see if the incidence of false positives with Turnitin continues to be high this semester. Since I’m teaching in person and students will be doing a fair amount of in-class writing, I’ll have more of a basis to judge.
Here is my original post from February 2025:
I argued against the use of AI detection in college classrooms for two years, but my perspective has shifted. I ran into the limits of my current approaches last semester, when a first-year writing student persisted in submitting work that was clearly not his own, presenting document history that showed him typing the work (maybe he typed it, and maybe he used an autotyper). He only admitted to the AI use and apologized for wasting my time when he realized that I was not going to give him credit and that if he initiated an appeals process, the college would run his writing through detection software. I liked this student, had met with him previously, encouraged him to build confidence in his own voice, and helped him find a research topic that interested him. I didn't think he would be well served by an A on an autopaper, so I was glad that AI detection existed.
I'm also influenced by recent research that suggests detection is likely not biased against English language learners after all, that educators are not as good as we think we are at distinguishing AI from student writing on our own, and that some detection systems are pretty accurate when it comes to naive copy/paste AI use. Christopher Ostro's slide deck lit review of recent research on detection has been invaluable. He discusses this research in an engaging episode of Bonni Stachowiak's wonderful podcast Teaching in Higher Ed. Dr. Tricia Bertram Gallant's leadership on academic integrity and AI has influenced me over the last two years as well. She has shown levelheaded willingness to consider a possible role for detection even as she promotes other approaches as more important and effective.
As I teach composition online asynchronously this semester, I'm incorporating Turnitin alongside process tracking, writing process assignments, social annotation, lots of student choice, peer review and tutoring, video assignments, and clear messages about the purpose of each activity and the value of the writing process. I love that Phillip Dawson has described this kind of layering of strategies as a "Swiss cheese" approach, and others have used the mosaic metaphor (I'm having trouble tracking down who). I've described my combination of approaches in slides for a recent presentation on Academic Integrity and AI.
What about the risk that AI detection will lead to false accusations? It's real, and I let students know I'm aware that detection yields some false positives. I will never trust AI detection as firm evidence, and I am not punishing students. If a student discusses an essay with me, shows process history, and denies AI use, I will give them the benefit of the doubt even if the detector says "AI." If they aren't able to discuss their writing, I ask students to rewrite.
Christopher Ostro put it in a way that resonates for me: "I think AI detection has a place, but its place is limited." In most human endeavors, some accountability structures are important even when we design for intrinsic motivation. And we don't have perfect options here. The options are fewer in online asynchronous classes that allow many working-class students and parents to access college.
I know so many colleagues I respect are highly critical of AI detection, seeing it as signaling antagonism toward students. Christopher Ostro clarifies that his purpose is not to punish students but to provide some accountability that, in the end, encourages learning and shows that we care. He says, "I am not a cop teacher. I am not someone who likes catching cheaters. I’m not someone who wants that to be a big part of my job. Honestly, it’s the least fun part of teaching, but it is still a part of the job."
I'm a member of the MLA/CCCC Joint Task Force on Writing and AI, a group that has put out strong cautions about it in our working paper on Generative AI and Policy Development. There, we argue that "Tools for detection and authorship verification in GAI use should be used with caution and discernment or not at all." We write, "For those who decide to use AI detectors, please consider the following questions: What steps have you taken to substantiate a positive detection? What other kinds of engagement with the student’s writing affirms your decision to assign a failing grade outside the AI detector’s claim that the text was AI generated?" We also emphasize that "any technological approaches to academic integrity should respect legal, privacy, nondiscrimination, and data rights of students."
I have tried to use detection and process tracking in ways that I hope address those concerns. I invite students to comment frankly on syllabus policies. If students don't want to share process history or if they object to AI detection, I invite them to meet with me to chat about the essay instead. My approach is a work in progress in a changing landscape. Next up: an anonymous survey to see what more I can find out about what students think.


Excellent piece, really appreciate the full range of thoughts, opinions, and advice. We were using Turnitin; however, as of November '25, the university has recommended either turning the AI detection off or being very skeptical of its results. The reasoning was a combination of high false positives and disagreement among faculty concerning other AI "grey" areas (grammar/punctuation correction, slide creation, etc.). We have now pulled a 180 from a little over a year ago: we require AI assignment integration, and all new TAs must use AI tools and pass an online course on using AI to create assignments. 🙆 🤷
Thanks for sharing this, Anna. And I definitely see your point and understand the reasoning behind it, even if I don’t use AI detection myself!