Half of Students Say They Use AI Too Much — and Can't Stop
A 7,000-student Harvard survey reveals teens know AI is undermining their learning. 40% tried to cut back and failed.
Here's a stat that should make every parent and teacher pause: in a survey of 7,000 high school students, nearly half said they feel they're relying on AI too much for their learning. Over 40 percent said they tried to cut back — and couldn't.
That's not researchers raising the alarm from the outside. That's teenagers telling us, unprompted, that they know something is off.
The findings come from Harvard's Graduate School of Education, shared this week as part of a broader conversation about what happens when the tools that make schoolwork easier also make learning harder. And they arrive alongside a wave of new research all pointing in the same direction: AI is changing how students think, and not always for the better.
The Cognitive Offloading Problem
The term researchers keep using is "cognitive offloading" — the idea that when you hand a mental task to AI, your brain simply doesn't engage the same way.
A study published this month tracked 52 professional programmers learning a new coding skill. Half could use AI. Half couldn't. When tested afterward, the AI group scored 17 percent lower on comprehension — even though they weren't any faster at completing the task. Beginners, intermediates, and experts all showed the same drop.
The key insight: it wasn't that AI gave bad answers. The answers were fine. But the programmers who used AI simply didn't learn as much, because their brains weren't doing the heavy lifting.
Think of it like GPS navigation. You get to the destination, but try to draw the route from memory afterward and you're lost. The task got completed. The learning didn't happen.
A Brookings Institution report released in January, drawing on over 400 research papers and interviews across 50 countries, put it bluntly: AI's risks to students currently overshadow its benefits. The ease of getting good grades with minimal effort, combined with our natural preference for shortcuts, is "atrophying students' learning — particularly their mastery of foundational knowledge and critical thinking."
The OECD's Digital Education Outlook 2026 echoed the same concern, warning that offloading cognitive work to chatbots leads to "metacognitive laziness" — students stop thinking about their own thinking. And when the AI crutch gets removed, like during an exam, they struggle.
The Self-Regulation Gap
What makes the Harvard survey so striking is the self-awareness. These students aren't oblivious. They know AI is doing too much of the work. They just can't stop.
Ying Xu, an assistant professor at Harvard's Graduate School of Education who studies how AI affects children's development, identified self-regulation as the critical missing skill. Students need to make a plan: "I'm going to do the thinking myself and only use AI for scaffolding." But resisting the temptation to let AI handle everything — especially when it's fast, fluent, and gets you a good grade — requires a kind of discipline that many teenagers (and, honestly, many adults) haven't developed.
The same survey asked students whether learning math and English still feels as important now that AI exists. The result: a major drop in motivation for both subjects.
That's a deeper problem than cheating on homework. It's a shift in how young people see the purpose of learning itself.
The Paradox: Ban It or Embrace It?
Here's where it gets complicated. The answer isn't just "keep AI away from students."
Michael Brenner, a Harvard applied math professor who also works as a research scientist at Google, made this point directly: anyone who doesn't embrace AI in their learning and career is going to fall behind. The tools are too powerful to ignore.
But his solution was striking. When he discovered that ChatGPT could solve his entire graduate problem set, he didn't ban it. He flipped the assignment. Instead of asking students to solve problems, he told them to invent problems that AI couldn't solve — and prove the solutions were correct.
By the end of the semester, 60 students had created 600 original problems that the best AI models couldn't handle. They published a paper together. And Brenner said these students knew more than any class he'd ever taught — because they had to push past what AI could do in order to create something original.
Tina Grotzer, a cognitive scientist at Harvard's Graduate School of Education, noticed something similar. When she gave students a traditional assignment, the AI-assisted versions came back as "60 pages of glop." But when students used AI more thoughtfully — having it quiz them, generate different perspectives on their work, or provide feedback on specific aspects — the quality improved.
The pattern is the same in both cases: AI as a partner in harder work, not a replacement for thinking.
What's Actually Working
Across the research, a few approaches are showing real promise.
Ask harder questions. When AI can solve the easy stuff, the classroom needs to move to problems that require genuine human thinking — creativity, judgment, synthesis across messy real-world contexts. Brenner's "invent problems AI can't solve" approach is one version of this.

Make the thinking visible. Oral exams, explain-your-reasoning assignments, and peer discussions all force students to demonstrate understanding, not just produce answers. It's harder to fake comprehension when you're standing at a blackboard.

Teach the skill of using AI well. The coding study found that programmers who asked AI to generate code and explain it performed better than those who just copied the output. The explanation step kept their brains in the game.

Start with foundations. The research consistently suggests students need to build basic skills before AI enters the picture. You need to understand the chain rule before you can use AI to push past it.

The Bigger Question
What the Harvard survey really reveals is a crisis of purpose.
If the point of school is to get good grades with minimal effort, AI has already won that game. Students know it. They're using it. And they can feel the hollowness of completing tasks without understanding them.
But if the point of learning is to develop a mind that can think, adapt, create, and handle problems nobody's seen before — then AI becomes a tool that extends what you can do, not a shortcut that does it for you.
We've been here before. Calculators didn't kill math. Wikipedia didn't kill research. But both forced education to evolve — to stop testing things machines could do and start valuing things only humans could.
AI is asking the same question, at a much larger scale: What are schools actually for?
The students already know something is wrong. Over 40 percent tried to fix it themselves and failed. That's not a willpower problem. That's a design problem — in the tools, in the classrooms, and in the way we think about what learning means.
The encouraging part? Some educators are already figuring it out. The classrooms that lean into harder work, more creativity, and genuine understanding aren't just surviving the AI era. They're producing students who know more than ever.
The race isn't between students and AI. It's between two versions of education: one that lets AI do the thinking, and one that uses AI to think harder.