The time to reckon with AI agents in digital learning spaces is now
We need ed tech and AI companies collaborating to prevent widespread fraud
Note: I imagine you, like me, might be suffering from call-to-action burnout. So I apologize in advance for yet another call! I think this is a big threat, and I also think some things can be done in the near term if we raise awareness.
Student use of AI agents
Imagine a student saying to their browser, “Please log in to Canvas for me and complete all the assignments due today.” Imagine if the browser then moves the cursor around, clicks, generates, and submits discussion posts, quizzes, or pretty much anything else.
That’s possible today.
Perplexity just released the Comet AI agent browser and made it free for students. This makes the functionality widely available, but it was already possible with ChatGPT Agent (at the $20/month level) and with the ManusAI agent (limited availability). Anthropic is testing an agentic Claude for Chrome system with a few premium users. Every major AI company has been talking about and building AI agents for a while now; we’re just seeing them reach the point where they’re ready for broad use.
As far as I’m aware, none of these companies has shared anything they are doing to stop the fraud these systems can facilitate so easily in learning management systems. As Marc Watkins points out in his powerful Open Letter to Perplexity AI, Perplexity is even explicitly advertising the ease of cheating.
You can give any of these systems your Canvas login and ask them to complete tasks for you in Canvas as if it were a human user, with or without your supervision. (Side note: when you’ve suffered from clunky Canvas design for years, it’s oddly satisfying watching these agents narrate their attempts to get around a course.)
I recently tested Perplexity Comet in student mode, using a free Canvas account not associated with my college or any student data. It jumped right into completing quizzes.
The first mainstream media coverage I’ve seen of the issue is Dr. Aviva Legatt’s forceful Forbes article Colleges And Schools Must Block And Ban Agentic AI Browsers Now. Here’s Why. But I’m just one of a host of educators who have been trying to draw attention to this issue. (Let me know about articles or videos I’ve missed, and I’ll add to the list):
David Wiley: An “AI Student Agent” Takes an Asynchronous Online Course
Stavros Hadjisolomou: AI Agents Can Now Navigate and Complete LMS Tasks: A Call for Pedagogical Innovation
Josh Brake: The Moral Hazards Of AI Are Closer Than You Realize
Note: This is a separate issue from how learning management systems choose to integrate AI. For example, Instructure and OpenAI just announced a partnership that will give instructors various options for incorporating AI into assignments and using it for teaching assistance. But incorporating AI doesn’t mean you have a handle on preventing misuse. A teacher might ask students to chat with a custom bot for an assignment in Canvas. The student might tell an AI agent to go engage in that interaction and “reflect” on it.
Of course, there’s also the question of teacher use of AI agents. Institutions need to determine whether to allow faculty to use AI agents for teaching assistance in the LMS. As Marc Watkins points out in The Dangers of using AI to Grade: Nobody Learns, Nobody Gains, working conditions in higher ed make use of AI for grading extremely tempting, and that holds for other forms of automated teaching assistance. But this is a discussion for another piece.
We can’t just give up on online learning
In discussions of student use of AI agents, I’ve heard many educators suggest a return to in-class writing. That’s a reasonable move in some contexts. But we shouldn’t discount how much learning happens today in online spaces or the reasons we put learning activities online.
I’d like to see students encouraged to take in-person classes where possible. But demand for online is high; 57% of all California community college course offerings were fully online in 2023 (Results From a Comprehensive California Community Colleges Online Education Study). Many students taking those online classes can’t take in-person ones. Often, job and family obligations prevent working-class and returning students from regular campus visits. And many students do better learning online, either because of disabilities or preference. There is an art and science to doing online learning well; it can be highly connected and social. See Karen Costa’s post, Flower Darby’s post, and Michelle Pacansky-Brock’s post on online learning.
Some increase in in-person instruction and proctoring may be needed going forward. But that will take time and major institutional shifts. We’ll need to weigh it carefully because any requirement for in-person work means some would-be students are excluded.
It would be a shame to abandon online learning because companies decided to release software without guardrails. Before we do that, let’s ask the companies to help.
What can educators do?
Certainly, we should always be trying to make our courses more intrinsically motivating. That will help. Dr. Philippa Hardman argues for course redesign as the best response to agents in The Great Online Learning Reset? How agentic AI is forcing us to reimagine online learning -- for the better.
But I don’t think course redesign solves the problem, especially since agentic AI means students can get their assignments completed without ever finding out what the assignments are. If they ask the agent to go do their homework, they might never know if the assignments were motivating or not.
Tim Mousel and Anna Haney-Withrow started a group of instructors and instructional designers looking for instructional design approaches that make it harder for AI agents to complete work in LMSes. Tim Mousel has put up a form where educators can share ideas for solutions, and he has made the ideas shared so far public.

I admire the collaborative search for practical steps. However, it seems to me that many of the strategies may not work consistently, or they depend on in-person class time or on audio and video, which can now be autogenerated as well. I imagine that a student using an AI agent could get it to complete much of my online asynchronous Academic Reading and Writing course, even though the course includes lots of process assignments, student choice, connection to personal experience, and social annotation.
It really doesn’t make sense to ask individual teachers to try to solve this unaided.
What can companies do? Surely something, if the will is there
What would it take to block AI agents from completing work in a student’s name in an LMS? After several months of asking around and getting wildly varying responses, I’m still not clear what’s possible or strategic. But I do know that no players have really tried yet. Here are some possibilities:
AI companies could tell their agents not to complete work while logged into student accounts in Canvas, Moodle, Blackboard, or D2L. That could be hardwired or inserted in the agents’ system prompt. Or companies could label agents so that websites can choose to block them or limit their activity.
Learning management systems could block agents, either by identifying them in cooperation with AI companies or by developing software to detect their patterns of activity.
Educational institutions could try to block AI agents on their Wi-Fi networks. Forbes journalist Dr. Aviva Legatt says, “There, unfortunately, is no perfect blocking solution. However, I am cautiously optimistic that LMSes and campuses will take steps to block these browsers to the best of their ability. It’s quite feasible, for example, that a campus IT department could block certain sites from being accessed on the campus WiFi network - so this logic could be applied to known agentic browsers.”
It might not be possible to stop people from running an agent locally on their computer, but that requires some expertise and will hopefully remain rare.
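To make the first two possibilities concrete: if AI companies voluntarily labeled their agents, say with a distinctive token in the browser’s User-Agent header, an LMS could refuse automated logins with a few lines of server-side code. Here is a minimal sketch in Python. The token names and function names are my own assumptions for illustration, not anything the companies or LMS vendors have announced.

```python
# Hypothetical sketch of User-Agent-based blocking. The tokens below are
# invented for illustration; real enforcement would require AI companies
# to publish stable, honest identifiers for their agent browsers.

KNOWN_AGENT_TOKENS = [
    "comet-agent",    # hypothetical label for an agentic browser
    "chatgpt-agent",  # hypothetical label for an agent mode
]

def is_known_agent(user_agent: str) -> bool:
    """Return True if the User-Agent header contains a known agent token."""
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_AGENT_TOKENS)

def handle_login_request(user_agent: str) -> str:
    """Minimal routing sketch: refuse logins from labeled agent browsers."""
    if is_known_agent(user_agent):
        return "403 Forbidden: automated agent browsers may not log in"
    return "200 OK: proceed to login"
```

The obvious limitation is that this only works if agents identify themselves honestly; an unlabeled or locally run agent would sail right through. That is exactly why cooperation between the AI companies and the LMS vendors matters more than any one-sided fix.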
In sum, it doesn’t make sense to give up on technical barriers to fraud when we haven’t begun to try.
Let’s ask ed tech and AI companies to work together to help
We haven’t heard much yet from the big companies on what is possible. It seems more pressure and press coverage are in order. There’s a precedent. Recently, Ian Linkletter contacted Geoffrey Fowler of The Washington Post and urged him to cover Google’s “Homework help” button in Chrome. Fowler’s reporting may well be what led to Google’s suspension of the feature: Teachers got mad about a cheat button in Chrome. Now Google’s pausing it.
OpenAI and Instructure just announced a partnership to bring AI into Canvas. Couldn’t they collaborate on the agent issue as well?
Tim Mousel writes, “After I posted this video 4 months ago of an agent completing a course in Canvas, several connections sent it to the top LMS companies. They replied back stating they would work on some solutions. They’ve not released one yet as far as I’m aware.” Yun Moh put in a request to Instructure in the Canvas Community Forums to “Block AI Agents from Logging in on Behalf of Students.” The company has marked the current status of this item “Will not consider.” I hope they’ll reconsider.
Ways educators can advocate
Here are a few concrete ways to advocate for the support educators need to keep helping students learn online. Let me know if you have other suggestions!
Raise the issue with colleagues and your academic integrity office
Ask your IT department what can be done
Ask the person who manages your LMS contract to raise the issue
Post on LinkedIn asking your LMS company and executives to take action
Put in a feature request to your LMS company to block agents
Urge professional organizations to make a statement
Contact ed tech or tech reporters to cover the issue
AI Use Statement
I did not use AI to generate text for this piece. I did use ChatGPT to generate the two cartoon images, and I reflected on Claude feedback. Claude gave me two ideas that I added to the list of possible actions (see the chat transcript).





I don’t think Google removed Homework Help - just buried it a menu or two deeper. Google Lens still does the same thing, I think?
And I’m not sure blocking AI agents makes sense as a possibility. If they just control the mouse and keyboard, it’s not like there is metadata about who is controlling it. I assume a login screen gets the exact same text whether a human types/pastes it in or an agent does (or the browser autofills, etc.). I suppose you could detect inhuman mouse movement the way automatic captcha systems use it as one signal among tons of others (browser fingerprint, etc.)…but that doesn’t seem reliable enough for Canvas to start refusing logins.
I don’t know…maybe there’s a way *if* all the big companies play nice on both the AI side and the LMS side, but that seems really implausible.
And we’ve seen a million “companies”/apps pop up to put a wrapper on ChatGPT to create a cheat website/app, so I imagine if the big AI companies block it all, the open source models will be taken up by some entrepreneurial cheat facilitator who wants to make bank from desperate students clicking on TikTok ads.
Similar issue with the main AI players adding system card instructions to not do homework on an LMS. Some other AI will quickly fill in the gap if they are not competing with the big companies. Plus the companies obviously want to sell AI as relevant to the new workforce and scientific endeavor and have 0% chance of budging on the idea that many university classes will *want*/need to integrate and use AI as part of college training. So they won’t want it to be blockable, I imagine, even if PR about cheating did somehow hurt them a bit (it doesn’t seem to have touched their bottom line yet…)
I know what you mean about online classes being important for many students, but I also worry about the exploitation of those students if colleges keep accepting their money (i.e., making students take out loans!) and then many students use AI and graduate without useful skills and have a worthless paper + a bunch of debt. It’s hard to blame students who are busy, have imposter syndrome (and the AI makes their writing sound soo much better!), issues with procrastinating, work full time on top of school…like, it would take inhuman self-control not to use AI sometimes. (In my data, 60% of the 700+ undergrads I sampled at an R2 had cheated with AI)
And once you use it to escape/avoid those bad feelings (from procrastination, being overwhelmed, imposter syndrome), doing the behavior of pressing that AI button gets negatively reinforced (i.e., becomes more likely in that situation in the future because it helped in the moment this time). So they get in a bad habit and soon are screwing themselves over, not because they are bad people, but because they are human. And they see their peers do it anyway, and jobs don’t care about things like how the work gets done, yada yada.
But those students end up with a useless piece of paper, can’t get hired because the hiring manager can also just press the ChatGPT button without having to pay a human to do it, and they still have this college debt for an online degree that sounded so helpful for their life as a full-time worker or rural resident but ended up sucking them (some of them, anyway) into the temptation of having AI do all the work while racking up debt.
I don’t know, I worry about online. A lot. And that’s as someone who teaches some sections online and has also tried to get creative with adapting assignments (e.g., https://teachpsych.org/E-xcellence-in-Teaching-Blog/13504139).
Anyway, thanks for the great post. We need to get everyone talking about this soon. I showed examples of agentic AI in Canvas at a couple talks this past week and people had their minds blown. Too few profs know that we’re at this stage already.
Hey, great read as always. You’re totally on point about the immediate threat these agents pose, though it also makes me wonder if we need to rethink assessment design more broadly than just trying to block the tech.