AI in EdTech: Teaching, Not Shortcuts
I'm a parent of a 4-year-old and a 6-year-old. They're not in classrooms with AI yet, but they will be soon. I think about this a lot. Not in an abstract "future of education" way, but in a very concrete "will my kids actually learn to think for themselves?" way.
The thing I value most in my own career is the drive to keep learning. New technologies, better ways to lead, deeper understanding of how systems work. I want my kids to grow up with that same instinct. So when I look at AI in education, I'm not asking whether it belongs there. I'm asking whether it strengthens or weakens a student's desire to learn.
AI is already in classrooms. It's becoming part of how students approach homework and how teachers think about lesson planning. As a parent, I'm watching closely, wondering whether the next generation is learning to think or learning to skip the thinking entirely.
The debate about whether AI belongs in education is over. It's here. What matters now is whether we use it to make learning better or let it quietly erode the skills we're supposed to be building.
I believe AI can genuinely help. But only if the edtech industry is thoughtful about how it builds these tools. That means rethinking assessment, rethinking how AI interacts with students, and building real guardrails into the products.
AI Should Teach, Not Answer
Right now, most students interact with AI the same way. Paste the question, copy the answer. That's not learning.
But AI doesn't have to work that way.
A teacher with 30 students can't give each one detailed feedback on every writing draft. An AI tutor can. It can point out where an argument falls apart, ask "what evidence supports this claim?", or nudge a student to reconsider a weak conclusion. Not write the essay for them, but coach them through making it better.
AI can also meet students where they are. Every classroom has kids at different levels. AI can offer harder problems to students who are ready and break things into smaller steps for students who are struggling. That's not replacing the teacher. It's giving them help that scales.
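As an illustration of the kind of logic an adaptive system might use (the thresholds, window size, and function name here are mine, not any real product's), the core idea fits in a few lines: look at a student's recent answers and step the difficulty up or down.

```python
def next_difficulty(current: int, recent_correct: list[bool], window: int = 5) -> int:
    """Pick the next difficulty level (1-10) from the last few answers.

    Illustrative sketch only: the 80%/40% thresholds are arbitrary
    placeholders, not tuned values from a real adaptive-learning system.
    """
    recent = recent_correct[-window:]
    if not recent:
        return current
    accuracy = sum(recent) / len(recent)
    if accuracy >= 0.8:               # student is ready for harder problems
        return min(current + 1, 10)
    if accuracy <= 0.4:               # break things into smaller steps
        return max(current - 1, 1)
    return current                    # stay at the current level
```

Real systems model far more than a rolling accuracy window, but the shape is the same: the tool adjusts to the student, not the other way around.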
AI is also good at making thinking visible. It can ask a student, "You got the right answer, but can you walk me through how?" or "You picked B; what made you rule out C?" Good teachers do this all the time. They just don't have enough minutes in the day to do it with every kid.
The core idea is simple. AI should ask questions, not give answers. The best edtech tools I've seen treat AI as a thinking partner. It prompts reflection, surfaces gaps, and pushes students to wrestle with ideas instead of bypassing them.
Assessment Needs to Change
If students can get answers from AI, then assessments that only test answers are broken. This isn't even AI's fault. It's exposing a weakness that has existed for a long time: too much assessment measures recall instead of understanding.
AI gives us a reason to finally fix that.
Focus on the Process
When a student submits an essay, the final product tells you less than you'd think. What actually shows learning is the messy stuff. Drafts, revisions, the evolution of their thinking. Can they explain why they made certain choices? Can they stand up and talk through their reasoning?
AI can produce a polished final product easily. It can't fake the process of getting there. Assessment should reward that process.
Test Understanding, Not Memory
Multiple-choice questions like "What year did X happen?" were never great assessments, and AI makes them pointless. Better to ask students to apply concepts to new situations, analyze something critically, or build an argument from scratch.
One approach I find compelling is having students use AI to generate a first draft, then asking them to evaluate and improve it. What did the AI get right? What did it miss? Why? This teaches students to think alongside AI instead of just accepting whatever it produces. That's a skill they'll use for the rest of their lives.
Spread Assessment Out
When one exam determines the grade, the pressure to use AI as a shortcut is enormous. But when learning is measured continuously through projects, presentations, discussions, and portfolios, there's less to gain from cutting corners and more to gain from actually engaging.
Edtech platforms can support this by building tools that track progress over time, support conversations between students and AI, and measure growth rather than one-time performance. It's harder to build than a quiz engine, but it's also harder to game.
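To make "measure growth rather than one-time performance" concrete, here's a deliberately simple sketch (my own illustration, not a real platform's metric): compare the early portion of a student's scored work against the most recent portion, instead of looking at any single score.

```python
def growth_score(scores: list[float]) -> float:
    """Growth as the change from the first third of a student's scored
    work to the last third. A single exam can be gamed; a trend across
    many pieces of work is much harder to fake.

    Toy metric for illustration; real growth models are more careful
    about task difficulty, time spacing, and score scales.
    """
    if len(scores) < 3:
        return 0.0                     # not enough work to see a trend
    third = len(scores) // 3
    early = sum(scores[:third]) / third
    late = sum(scores[-third:]) / third
    return late - early
```

A quiz engine reports `scores[-1]`. A growth-oriented tool reports something like `growth_score(scores)`, which rewards engagement over the whole term.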
Guardrails Matter
You wouldn't hand a kid a power tool without safety instructions. AI is no different. The tool is useful, but it needs boundaries, especially for young people.
Protect Wellbeing
AI systems built for students should actively support healthy habits. That means reminding them to take breaks after sustained use. It means recognizing when a student seems frustrated and suggesting they step back or try a different approach. It means an AI tutor probably shouldn't be chatting with a 12-year-old at midnight.
These aren't nice-to-haves. If we're putting AI in front of kids, we have a responsibility to think about how it affects them beyond academics.
Protect the Learning
The AI itself needs rules about how it helps. It shouldn't just hand over answers to assigned work. Instead, it should start with hints, then guiding questions, and only offer more direct help if a student is genuinely stuck after trying. The goal is productive struggle. Not frustration, but not effortless answers either.
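That escalation policy is simple enough to sketch. In this illustration (the ladder names and the gating rule are mine, not a standard), the help level is capped by how many genuine attempts the student has made, so repeatedly asking for help without trying never jumps straight to a direct explanation:

```python
HELP_LADDER = ["hint", "guiding_question", "worked_step", "direct_explanation"]

def next_help_level(attempts: int, help_requests: int) -> str:
    """Choose how much help to give, as a toy model of scaffolded tutoring.

    attempts:      genuine tries the student has made on the problem
    help_requests: times the student has asked for help so far

    Escalation is capped by attempts, so the only path to a direct
    explanation runs through productive struggle.
    """
    level = min(help_requests, attempts, len(HELP_LADDER) - 1)
    return HELP_LADDER[level]
```

So a student who pastes the question and hammers the help button (`attempts=0`) only ever gets a hint, while one who has tried several times and is genuinely stuck works up to a full explanation.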
AI should also be honest about its limits. "I'm not sure about this, you should check with your teacher or your textbook." Teaching kids to question AI is just as important as teaching them to use it.
Build Products That Earn Trust
Banning AI from classrooms won't work. Students will find workarounds, and we'll have missed the chance to teach them how to use it well. The better path is building edtech products that are trustworthy from the start.
That means optimizing for learning outcomes, not time-on-platform. Engagement without learning is just entertainment.
It means making guardrails visible and intentional. Break reminders, scaffolded help, clear data practices. Parents and teachers need to see these and trust them. These aren't limitations on the product. They're the reason people will choose it.
It means supporting teachers instead of sidelining them. The best AI tools help teachers see which students need attention, save time on repetitive feedback, and handle the kind of individualized pacing that one person simply can't do for thirty kids. The teacher is still the relationship, the mentor, the person in the room.
And it means listening to students. They're already the most experienced AI users in most classrooms. Products that bring students into the design process will be better for it.
This Matters
Every generation of technology brings the same worry. Calculators will make kids forget arithmetic. Wikipedia will kill research skills. The internet will destroy attention spans. Each time, the tools that were used thoughtfully made learning better. The ones that got ignored or banned just went underground.
AI is the most capable educational technology we've ever had. It can personalize learning in ways that weren't possible before. It can give teachers more time for the work that actually requires a human. Inspiring students, mentoring them, building real relationships.
But it can just as easily become the ultimate shortcut. It can weaken critical thinking and make intellectual passivity the default. The difference comes down to how we build and deploy these tools.
When edtech gets this right, students don't just learn how to use AI. They learn to think for themselves, ask good questions, and keep learning on their own. That's always been the point of education. AI just makes the stakes clearer.
The technology is here. The guardrails are ours to build.