Your Students' Secret Agent
Agentic AI browsers are yet another AI disruption, with immediate implications for online tests and quizzes

In a recent Substack post, Anna Mills—writing instructor at Cañada College and prolific writer on AI in higher ed—wrote about Comet Assistant, Perplexity’s new AI-powered, multimodal agentic browser that can autonomously publish discussion posts and complete quizzes inside learning management systems (LMSs) like Blackboard and Canvas in a matter of seconds. Once the student relinquishes their LMS username and password, the browser can log in, complete assignments, and submit on the student’s behalf.
If you haven’t seen it, it’s truly unbelievable—well, maybe this evolution of generative AI was predictable to some, but to actually see it in action is quite startling. Below is a video example of Mills demonstrating how the browser can complete multiple Canvas quizzes with just a simple prompt.
Like a celestial comet hurtling toward the world of teaching and learning, Comet Assistant is a major disruption amid a shower of blinding and dangerous tools in higher ed. In the above example, Comet isn’t assisting the student as the name implies—it’s becoming the student in ways that common chatbots like Copilot and ChatGPT 5.0 haven’t before.
An Agentic AI Timeline
Comet Assistant is just the tip of the agentic AI iceberg. Here’s a brief timeline of AI agents:
July 9, 2025 - Perplexity AI’s Limited Release of Comet
Perplexity AI Comet becomes the first multimodal agentic browser available to the public. OpenAI had launched a browser extension infused with its generative AI technology months prior, and Opera, a Norwegian software company, had established a waitlist in May for premium subscribers to access its agentic browser, but Comet was the first to make an AI browser with web-based task execution publicly available. At the time, all the other mainstream players had AI tools that restricted users to task completion within the LLM (large language model) interface. When Comet first launched, it was only available to Perplexity Max subscribers willing to pay the $200 monthly fee.
October 2, 2025 - Free Comet Browser Launched
Perplexity announces the full release of its free browser, throwing its hat in the browser ring among the current giants, Chrome and Safari.
October 3 and 9, 2025 - Opera Neon and Dia Launch as Free AI Browsers
Opera and The Browser Company launch Opera Neon and Dia, respectively, beating bigger players like OpenAI and Microsoft to the punch.
October 21, 2025 - OpenAI Launches ChatGPT Atlas
OpenAI tosses ChatGPT Search to the wayside and slams its shiny new agentic browser, ChatGPT Atlas, on the table to rival Comet as the new browser in town.
Higher ed practitioners, like Ray Schroeder, have been talking about agents since February, but the launches of Comet and Atlas have sparked a flurry of deep dives and analyses. I’ve tried to contextualize these thoughts by figuring out whether students have actually started using these browsers in the ways that scholars have predicted. Finding such evidence has been challenging. Even TikTok, my go-to source for getting a student pulse on teaching and learning, failed to offer the viral how-tos and student think pieces I expected to see. Perhaps it’s too early; I’m eager to see student survey data and anecdotal profiles about how agents are impacting students and instructors alike.
According to Dr. Aviva Legatt, faculty are already putting Comet to the test, uncovering its capabilities for students and higher ed instructors. Below is an eye-opening quote from one of Legatt’s pieces:
“Other faculty who tested agentic tools describe scenarios that border on the surreal:
One professor watched as an agent logged into Canvas, graded multiple assignments, and posted written feedback under their name—without being asked to write feedback at all.
Another reported the tool bypassing Duo two-factor authentication, entering a restricted site even though the human user had to manually approve access on their phone.
Several pointed out that once a browser is duplicated, the agent may inherit all saved passwords and stored sessions—potentially reaching beyond the LMS into banking, email, or medical portals.”
What it All Means for Online Assessments
We Need Better Assessments…ASAP
Marc Watkins talked about the “generally broken” nature of online assessments in a hot-off-the-presses Inside Higher Ed piece, and how assigning students process-oriented work, such as “process notebooks,” in which students upload photos of handwritten journal entries about their process for completing a project or their experience engaging with course material, can combat the use of agents.
But Watkins’s comments are a reminder that online assessment in higher ed was in many ways already broken. As instructors, which will we end up prioritizing: ensuring students ‘do the work’, or ensuring students do valuable work? True-or-false questions, whether answered by an agent or by a student sitting in a room of 40 peers all staring at bluebooks, are largely unhelpful. Though in-person assessments prove that students are completing the tasks we assign, their relationship to the work is still disembodied. They know the task isn’t reflective of the type of critical thinking they’ll actually have to do post-graduation, and that impulse to disengage and offload will still be there.
“Define ____”, “Summarize ____”; the days of these questions are finished. I got into a respectful disagreement about this in a workshop I recently facilitated. The faculty member’s premise: synthesizing information is an important skill. I challenged them to think about why this skill is needed and in which contexts. Why would a student need to summarize something for someone? To sell a product? To persuade a congressperson? To give a hard diagnosis to a patient? To introduce an important concept before facilitating an activity for elementary school children?
One potential way to combat students’ use of agents for completing online tests and quizzes is to skip the remedial questions and create the context in which students will use these fundamental skills. For example, an online nursing instructor can ask a student to record themselves—either on a Zoom call with a role-playing classmate or engaging with an AI avatar—delivering a difficult diagnosis to a patient. They may have to explain a complicated condition or answer an unexpected question from the patient about how the condition developed in their body. Synthesizing information is necessary in this scenario, and the instructor could measure students’ ability to summarize and communicate to an audience without explicitly asking for a summary.
I have thoughts about how assignments like this might be implemented at scale and in large classes (potentially the subject of an upcoming article). One of my radical visions for student assessment is a (non-writing) class with nothing but projects, presentations, and scenarios: the three things students will need to manage and deliver well as citizens, regardless of the specific learning goals. Even more radical: have students come up with all the projects and assessments for a course. Doing so makes them responsible for the inputs and outputs of their learning from start to finish, and ensures that each major element of the course is built around their interests and goals. This approach has been studied with promising results, though evidence of instructors trying this in their courses seems limited.
As a final thought, let us be reminded of the top two reasons students cheat, according to Inside Higher Ed’s most recent student voice survey on AI: pressure to get good grades and limited time. My sense is that de-emphasizing grades (low-stakes assignments, even not grading at all) and giving students more time to complete assignments without penalty can slow down the agentic wave. Let these two areas of teaching be our required reading this holiday season.