16 Comments
Stephen Badalamente's avatar

I hope you will find this works well for you: "Since I’m teaching in person and students will be doing a fair amount of in-class writing, I’ll have more of a basis to judge." Given your posts on agentic AI I don't see another way we can certify our students' learning.

Nick's avatar

Excellent piece, really appreciate the full range of thoughts, opinions, and advice. We were using Turnitin; however, as of November '25, the university has recommended either turning the AI detection off or being very skeptical of its results. The reasoning was a combination of high false positives and disagreement among faculty concerning other AI "grey" areas (grammar/punctuation correction, slide creation, etc.). We have pulled a 180 from a little over a year ago and now require AI assignment integration, along with all new TAs being required to use AI tools and pass an online course on using AI to create assignments. 🙆 🤷

Anna Mills's avatar

Thanks, it's really helpful to know that your university was also seeing high false positive rates. The difficulty I see with AI integration as a solution is that we still need to know what is the student's and what is the AI's...

Michael G Wagner's avatar

False positives in AI detection are impossible to avoid, since genAI, as well as any system aimed at detecting genAI, is non-deterministic by nature. And these false positive rates will likely increase substantially over time, primarily because human writing is adapting to the style and structure of genAI writing. However, my main grievance with AI detection is that it reverses the burden of proof. Students are considered guilty until proven innocent. The system says "likely AI," and the students need to prove that they did not cheat. They are constantly under pressure to prove their innocence. I appreciate your thoughts. I really do. But I strongly believe you are not acting in the interest of your students' mental health.

Anna Mills's avatar

Thank you for your response and good will! I understand the concern about putting the burden of proof on students. However, note that in my approach I am not reporting students to academic integrity offices. I am just meeting with them as I would if I had an intuitive concern about possible AI misuse. If they can explain the paper well, I give them the benefit of the doubt.

I don't think there's a way to avoid asking students to explain their work at times so we can have more understanding of whether or not they have met the learning outcomes.

Michael G Wagner's avatar

Fair point, but I think you are underestimating the power differential. From the student’s perspective, you are judge and jury in one person. You are the one giving the grade. You can ask students to explain their work and that is, in my opinion, the correct way to do this. But I strongly believe that you need to do this for everybody, regardless of the TurnItIn “AI likelihood” result. Students can bypass AI detection quite easily. In addition to the false positive issue, there is also a substantial amount of false negatives.

Larry Till's avatar

A very thoughtful and timely piece. I, too, am struggling with whether or not to use AI detection software. I teach a lot of international students, and there's notable research indicating that these programs return an abnormally high number of false positives in work by students who don't speak English as a first language. A colleague who specializes in the area also notes that international students tend to lack the self-efficacy that domestic students enjoy, and as a result, can reach for these supports when others might not. I maintain that the very best thing we can do for our students, and for our society, is to teach our students good critical thinking skills so they know how to validate what AI gives them. It's also a good tool for better citizenship, as it turns out.

Anna Mills's avatar

Thanks for the thoughtful response. Do you know of more than one study showing higher false positive rates for English language learners? My understanding is that there are two main studies with opposite findings...here's my slide on that: https://docs.google.com/presentation/d/1CvxPBOTbQx53SOEmCqoQ8em2IAp0Tx4Z0HNjKmVdMS4/edit?slide=id.g3bc2d7697e2_0_3039#slide=id.g3bc2d7697e2_0_3039

Larry Till's avatar

Here’s one article I found that offers some additional evidence of bias. If you’re not already familiar with her, I recommend following Sarah Eaton, who’s doing stellar work on the intersection of AI, assessment, and academic integrity. I don’t think she’s on this platform. She’s on pretty much all the others. https://www.edweek.org/technology/black-students-are-more-likely-to-be-falsely-accused-of-using-ai-to-cheat/2024/09

Anna Mills's avatar

Agreed that Sarah Elaine Eaton is an important voice on this subject!

Thanks for that article. I do remember seeing it now... what's interesting is that it's not clear if the teachers' intuitive detection is more or less biased than AI detection software. I'm all for requiring systematic bias testing of detection software!

Larry Till's avatar

Machines have bias because humans have bias. Dismissing something (or someone) because of it is a cheat. We have to learn to deal with these things openly and transparently.

Jason Gulya's avatar

Thanks for sharing this, Anna. And I definitely see your point and understand the reasoning behind it, even if I don’t use AI detection myself!

Anna Mills's avatar

Thanks for engaging with my reflections and being so generous... it gives me hope we can all keep talking to each other across these differences.

Jason Gulya's avatar

I think we need to! And I always try to stay open-minded about this stuff.

Anna Mills's avatar

I'm trying too!

Maha Bali's avatar

I think your particular, very understanding and AI-literate approach to using AI detection software is different from how the majority of educators would use it, which is why I still never advise using it. But you're always thoughtful about every step you take, and I understand where you're coming from when you explain. It seems your students understand, too.

P.S. The "mosaic approach" is Chris Ostro's :) Funny you cite him a lot but don't remember this particular aspect.