
Tech firms say they’ll back anti‑cheating tools but ignore AI‑agent misuse


In classrooms across the country, teachers are already hearing about AI-driven assistants that can crank out essays, solve math problems, or even spit out code in a matter of seconds. At the same time, the firms behind those bots are busy announcing tie-ups with browser-lockdown services, proctoring vendors, and cheating-detection platforms. The contrast is hard to miss.

While administrators scramble to patch loopholes, the companies seem more interested in promoting how their products can "unlock new ways of teaching and learning." A promise of transformation sits alongside a quiet admission that the same technology can fuel academic dishonesty. The companies' own wording captures that tension: they pledge to defend integrity while vowing not to shy away from building "powerful, transformative tools." How schools will balance those competing pressures is unclear, but the debate is already heating up.

"So, while we will always support work to prevent cheating and protect academic integrity, like that of our partners in browser lockdown, proctoring, and cheating-detection, we will not shy away from building powerful, transformative tools that can unlock new ways of teaching and learning. The future of education is too important to be stalled by the fear of misuse." Instructure was more direct with The Verge: Though the company has some guardrails verifying certain third-party access, Instructure says it can't block external AI agents and their unauthorized use. Instructure "will never be able to completely disallow AI agents," and it cannot control "tools running locally on a student's device," spokesperson Brian Watkins said, clarifying that the issue of students cheating is, at least in part, technological.

IT professionals have tried to detect and block agentic behaviors, such as submitting multiple assignments and quizzes in rapid succession, but AI agents can change their behavioral patterns, making them "extremely elusive to identify," Moh told The Verge. In September, two months after Instructure inked a deal with OpenAI and one month after Moh's request, Instructure took a stand against a different AI tool that educators said helped students cheat, as The Washington Post reported. Google's "homework help" button in Chrome made it easy to run an image search through Google Lens on any part of the page, such as a quiz question on Canvas, as one math teacher demonstrated.
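To make that cat-and-mouse concrete, here is a minimal sketch in Python of the kind of rate-based check Moh describes. Everything in it is an assumption for illustration, including the function name, the thresholds, and the idea that submission timestamps arrive as a simple list; it is not Instructure's actual detection logic.

    from datetime import datetime, timedelta

    # Hypothetical rate-based heuristic: flag a session whose submissions
    # arrive faster than a person could plausibly manage. Thresholds and
    # event shape are illustrative assumptions, not Canvas internals.

    MIN_HUMAN_INTERVAL = timedelta(seconds=30)  # assumed floor for human pacing
    MAX_FAST_STREAK = 3  # consecutive too-fast submissions before flagging

    def looks_agentic(submission_times: list[datetime]) -> bool:
        """Return True if a run of submissions arrives implausibly fast."""
        streak = 0
        for earlier, later in zip(submission_times, submission_times[1:]):
            if later - earlier < MIN_HUMAN_INTERVAL:
                streak += 1
                if streak >= MAX_FAST_STREAK:
                    return True
            else:
                streak = 0  # a human-paced gap resets the streak
        return False

The article's point is precisely that this kind of check is brittle: an agent that spaces or randomizes its submissions above the threshold slips through, which is why Moh calls such behavior "extremely elusive to identify."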

Educators raised the alarm on Instructure's community forum, and Google listened, according to a response from Instructure's community team. Watkins told The Verge the exchange was an example of the two companies' "long-standing partnership," which includes "regular discussions" about education technology.


Companies keep saying they back academic integrity, but their marketing moves tell a different story. OpenAI, for example, is giving away ChatGPT Plus to college students with a tagline like "here to help you through finals," clearly aimed at a vulnerable crowd. Google and Perplexity are doing something similar: a year of free access to pricey AI suites, and Perplexity even offers $20 for each U.S. student referral.

The gap between those public statements and the incentive-driven outreach makes me wonder how sincere the commitment really is. The companies point to partnerships with browser-lockdown tools, proctoring services, and cheating-detection vendors, yet no one has explained how those safeguards will coexist with the free AI they're handing out.

It’s unclear whether the promised powerful tools will be matched by solid safeguards. The push to get young users on board feels like a business play that could outweigh worries about misuse. Until we see a clear link between anti-cheating measures and the distribution of these agents, the assurances feel shaky.

I expect educators and policymakers will start asking for more accountability as the programs grow.

Common Questions Answered

What contradiction do tech firms show between supporting anti‑cheating tools and promoting AI‑driven assistants?

The firms publicly endorse browser‑lockdown services, proctoring vendors, and cheating‑detection platforms, yet they continue to develop and market powerful AI agents that can write essays, solve problems, and generate code instantly. This creates a stark contrast between their stated commitment to academic integrity and the potential misuse of their own products.

How does OpenAI’s giveaway of ChatGPT Plus to college students raise concerns about academic integrity?

OpenAI offers free ChatGPT Plus subscriptions framed as help for finals, directly targeting a vulnerable student demographic that may rely on the tool for assignments. Critics argue that this incentive‑driven outreach encourages dependence on AI for coursework, undermining the very anti‑cheating stance the company claims to support.

What role do browser‑lockdown services and proctoring vendors play in the companies’ public statements about cheating prevention?

Companies cite partnerships with browser‑lockdown and proctoring services as evidence of their commitment to preventing cheating and protecting academic integrity. However, these mentions often serve as marketing talking points rather than concrete safeguards against the misuse of their AI agents.

In what way does Perplexity’s $20 per U.S. student referral program affect its credibility on academic integrity?

Perplexity offers a monetary incentive for students to refer peers, effectively subsidizing free access to its costly AI suite. This financial lure can be seen as prioritizing user growth over responsible usage, casting doubt on the company’s genuine dedication to preventing academic misconduct.