12 Comments
Larry Till:

A very thoughtful and timely piece. I, too, am struggling with whether or not to use AI detection software. I teach a lot of international students, and there's notable research indicating that these programs return an abnormally high number of false positives in work by students who don't speak English as a first language. A colleague who specializes in the area also notes that international students tend to lack the self-efficacy that domestic students enjoy, and as a result, can reach for these supports when others might not. I maintain that the very best thing we can do for our students, and for our society, is to teach our students good critical thinking skills so they know how to validate what AI gives them. It's also a good tool for better citizenship, as it turns out.

Larry Till:

Here’s one article I found that offers some additional evidence of bias. If you’re not already familiar with her, I recommend following Sarah Eaton, who’s doing stellar work on the intersection of AI, assessment, and academic integrity. I don’t think she’s on this platform. She’s on pretty much all the others. https://www.edweek.org/technology/black-students-are-more-likely-to-be-falsely-accused-of-using-ai-to-cheat/2024/09

Anna Mills:

Agreed that Sarah Elaine Eaton is an important voice on this subject!

Thanks for that article. I do remember seeing it now... what's interesting is that it's not clear if the teachers' intuitive detection is more or less biased than AI detection software. I'm all for requiring systematic bias testing of detection software!

Larry Till:

Machines have bias because humans have bias. Dismissing something (or someone) because of it is a cheat. We have to learn to deal with these things openly and transparently.

Anna Mills:

Thanks for the thoughtful response. Do you know of more than one study showing higher false positive rates for English language learners? My understanding is that there are two main studies with opposite findings...here's my slide on that: https://docs.google.com/presentation/d/1CvxPBOTbQx53SOEmCqoQ8em2IAp0Tx4Z0HNjKmVdMS4/edit?slide=id.g3bc2d7697e2_0_3039#slide=id.g3bc2d7697e2_0_3039

Stephen Badalamente:

I hope you will find this works well for you: "Since I’m teaching in person and students will be doing a fair amount of in-class writing, I’ll have more of a basis to judge." Given your posts on agentic AI, I don't see another way we can certify our students' learning.

Nick:

Excellent piece; I really appreciate the full range of thoughts, opinions, and advice. We were using Turnitin; however, as of November '25, the university has recommended either turning the AI detection off or being very skeptical of its results. The reasoning was a combination of high false-positive rates and disagreement among faculty over other AI "grey" areas (grammar/punctuation correction, slide creation, etc.). We've pulled a 180 from a little over a year ago: AI integration in assignments is now required, and all new TAs must use AI tools and pass an online course on using AI to create assignments. 🙆 🤷

Anna Mills:

Thanks, it's really helpful to know that your university was also seeing high false-positive rates. The difficulty I see with AI integration as a solution is that we still need to know what is the student's work and what is AI's...

Jason Gulya:

Thanks for sharing this, Anna. And I definitely see your point and understand the reasoning behind it, even if I don’t use AI detection myself!

Anna Mills:

Thanks for engaging with my reflections and being so generous... it gives me hope we can all keep talking to each other across these differences.

Jason Gulya:

I think we need to! And I always try to stay open-minded about this stuff.

Anna Mills:

I'm trying too!