The AI Cheating Panic Is Loud — But Students Tell a Different Story
A fascinating new study from Cal Poly is challenging the dominant narrative about AI in education. While headlines scream about an AI-powered cheating epidemic, student Parker Jones decided to find out what’s actually happening on campus. He interviewed more than 50 students, and what he found is both reassuring and illuminating.
What Students Are Actually Doing with AI
Nearly every student Jones interviewed uses ChatGPT regularly — most of them weekly or daily. But here’s the thing: they’re not using it to cheat. They’re using it for the most mundane purposes imaginable.
As one junior studying electrical engineering put it: “The best way I’ve heard Chat described for school is like 24-7 office hours.”
Students are asking follow-up questions from lectures, getting help unpacking assignment instructions, reviewing their own writing, organizing study plans, and generating practice materials before exams. In other words, they’re using AI the way they’d use a good tutor: to understand the material better, not to bypass learning.
The Fear of Overreliance
Perhaps the most surprising finding is that students’ biggest concern about AI isn’t getting caught — it’s becoming too dependent on it. A freshman in computer science told Jones: “I don’t use it in my Comp Sci 101 class to code; I think it’s important to understand the fundamentals when you’re learning something brand-new.”
Students are self-regulating. They care about actually learning the material. Cal Poly’s “learn by doing” philosophy runs deep, and students understand that outsourcing the doing means missing the learning.
The Perception Gap
There’s an interesting paradox in the data: students regard their own AI use as responsible, yet they’re suspicious of how everyone else is using it. This perception gap mirrors the broader cultural conversation, where the loudest voices dominate the narrative while the quiet, responsible majority stays invisible.
As Jones writes: “Students don’t tell their friends that ChatGPT helped them understand a confusing lecture at 2 a.m. So responsible use stays hidden, while the stereotype of the AI zombie spreads.”
What This Means for Education
The study points to a real opportunity for universities. Instead of treating AI as a threat to be policed, institutions could be helping students develop frameworks for using these tools effectively and ethically. The students are already doing this on their own — imagine what could happen with institutional support.
Jones concludes with a powerful observation: universities face a choice. They can double down on being “degree factories” where AI is always a threat to the business model. Or they can embrace the moment and become spaces that teach students how to work alongside powerful new tools — preparing them for the world that actually exists.
At BrainStream, we believe the second path is the only one that makes sense. AI isn’t going away, and the students who learn to use it thoughtfully will have an enormous advantage.
Source: “The AI cheating panic is loud. The way students actually use ChatGPT is much quieter.” by Parker Jones, guest post on OpenAI’s ChatGPT for Education newsletter, March 26, 2026.
