New National AI-in-Edu Reports: Time Saved Isn’t Learning Gained
Last year, I wrote about a wave of national reports on AI in K-12 education. These studies, stretching back to 2023, confirmed what many of us suspected: teachers were experimenting with generative AI, some students were turning to ChatGPT for homework help, and administrators were largely unprepared. I titled that piece “New National Studies Tell Us a Lot (And Nothing At All) About AI in Schools” because while the reports gave us usage rates, they largely sidestepped the questions we most needed answered.
We’re nearing a new school year, and a fresh crop of reports has arrived. For instance, a few weeks back, Gallup and the Walton Family Foundation published an in-depth survey of 2,700 teachers. Microsoft released its own findings on educator use of AI and Copilot. Pew and Common Sense Media gave us fresh insights into young people’s digital behaviors. RAND offered updates on AI infrastructure and training. So, all this considered, do we now better understand AI use in the classroom?
Yes and No.
These new reports do confirm AI is saving teachers time. They also reveal growing AI use among young people. However, while these new reports reinforce certain known trends, many important questions about teaching and learning with AI remain unanswered.
That said, if we look at the data, we can uncover more insights than headlines suggest. I wouldn’t say the insights are uplifting, but they must be considered.
Teacher Use of AI: Beyond Time Savings?
Let’s start with the most widely promoted finding from this year’s studies: teachers who use AI save time. According to the 2025 Gallup-Walton report, educators who regularly use AI tools reclaim nearly 6 hours per week, a full six weeks per school year. (This time-saving narrative is echoed in Microsoft’s 2025 Education Report, where 55% of teachers say AI helps them work more efficiently, and 51% say it reduces repetitive tasks.) Teachers are using AI to prepare lessons, create worksheets, draft messages to parents, and automate rubrics.
Saving 6 hours a week is a big win. I get it.
But here’s where the conversation seems to stop. Press reports don’t tell us what happens after teachers save time. Are they using it to personalize instruction? Offer AI-enriched one-on-one tutoring? Leverage AI for creativity and deep thinking? Design more culturally responsive units? (Hint: Not really.)
Nor do we know who’s using AI most effectively. We still lack a fine-grained portrait of usage across content areas, instructional styles, or teaching philosophies. Are science teachers using AI differently than ELA teachers? How does a project-based educator leverage AI compared to someone focused on direct instruction? (No help here.)
What is clear is that AI isn’t changing pedagogy. Consider: Of the 2,700 Gallup-Walton respondents, only 8% strongly agreed that AI improves self-paced or personalized learning. Just 5% strongly agreed it helps them teach in a way that promotes deeper thinking. Furthermore, only 13% believe AI use increases students’ independent thinking, while 57% believe it decreases it. Likewise, just 15% think AI increases student critical thinking, while 52% report it decreases it. And I could go on. (Like, how teachers also believe AI decreases student resilience and persistence.)
If that’s not sobering enough, 67% say they never use AI to analyze patterns in student learning and another 67% say they never use AI during one-on-one tutoring or instruction. AI for feedback or revision support, you ask? Nearly invisible.
Furthermore, 60% reported that they never ask their students to use AI tools. Another 24% reported that they “rarely” ask their students to use AI tools.
All told, 84% of teachers rarely or never put AI in students’ hands.
The data make clear that AI remains peripheral to instruction, not embedded within it. Teachers use AI to draft lesson plans, but few use it to drive formative assessment, scaffold student thinking, or enable adaptive pacing. When asked if AI helps tailor instruction to different learner needs, the most common response was (shrugging shoulders). A fundamental and glaring disconnect persists between AI’s potential and its current classroom application.
AI is still overwhelmingly seen as a tool to do something for teachers, rather than with students. Yet the same tools that automate lesson planning can generate exemplars, spark class discussions, or simulate historical or scientific scenarios. Without a shift in mindset, most educators will struggle to imagine AI as a pedagogical co-teacher rather than a clerical assistant.
“AI is mine. Not yours.”
To be clear, I am not laying this at the feet of teachers. Gallup-Walton reports that only 31% of teachers have received any type of AI training. Only 19% report that their school has a policy about how AI can or cannot be used. And a full 37% of respondents don’t even know whether their school has an AI policy at all(!)
Hey, I spent years helping teachers think of the iPad as a student creativity device, not a laptop without a keyboard. I know vision takes time. Without clear AI guidance, it is not surprising to me that 42% of educators responded that they “don’t know” whether AI use could affect student achievement and 38% “don’t know” if the pace of learning in their classroom could be improved by weekly student use of AI tools.
In any event, the implications are serious. If AI remains relegated to background prep work, its use is unlikely to reshape student experiences or improve learning outcomes. Worse, it may exacerbate existing divides, where better-resourced schools with instructional leadership explore novel uses, and others remain stuck in the mode of time-saving, low-impact automation. The promise of AI for differentiation, formative feedback, and creative exploration will remain a promise, unless schools and systems invest in deeper, more sustained professional learning and support.
Student Use of AI: Habits, Gaps, and Missed Opportunities
The student side of this story is also incomplete. Pew reports that the number of teens using ChatGPT has doubled since 2023, rising to about one in four. Common Sense Media finds that three in four teens have used generative AI, mostly for homework. And an eSchool News roundup adds a bit more texture: students are asking AI to explain difficult concepts, brainstorm ideas, and yes, sometimes to write entire assignments. (And act as a companion; more on that below.)
But the big picture remains fuzzy. Most of these studies focus on whether and how often students are using AI. They don’t explore how, where, or to what effect. For instance: Are they learning to revise and reflect on AI-generated drafts, or just copying and pasting? Are some student groups using AI in a fundamentally different way, and what does that say about equity or access?
What little we do know raises important questions. For example, both Pew and Common Sense highlight racial and demographic disparities: Black and Latino teens are more frequent AI users, and their parents are more optimistic about AI’s educational value. But what’s driving that difference? And what might it mean for resource allocation, tech support, or culturally responsive pedagogy?
MIT recently released a study suggesting that regular ChatGPT use may lead to "cognitive offloading" (letting your brain coast). The Gallup-Walton survey responses indicate that cognitive offloading is of deep concern to educators. Clearly, if students stop practicing synthesis, critique, and reflection, their long-term development can suffer. Are we working to avoid this?
I’ll be introducing two free AI-in-teaching-and-learning guidebooks in August. Stay tuned!
Meanwhile, most national reports completely overlook the rise of AI companions. Common Sense found that about a third of teens view conversations with AI as equally or more satisfying than conversations with friends. That revelation may come as a surprise, and it has implications for student well-being, motivation, and the future of social-emotional learning.
In short, student AI use is not only growing, but it’s growing in diverse ways that educators may not have anticipated. And without structured guidance, curriculum alignment, or dedicated support, that use risks becoming performative, inequitable, or even harmful.
Time Saved Isn’t Learning Gained
So, in short, what do these new national AI reports tell us?
Teachers are saving time, but are likely not transforming practice. Students are experimenting, but often without guidance or guardrails.
AI is not embedded within instruction, but lies on the periphery.
Meanwhile, educators are being challenged by social developments, such as youth AI companionship.
AI in education remains an open question. And if we want the answer to be engagement and deeper learning, we’ll need more than headlines to inform us. We’ll need insight, intention, and vision.
Any suggestions to improve this newsletter? Please message me or leave a comment below!