AI in Education Is No Longer a Pilot Program
Every conversation about AI in education since late 2022 has been dominated by one question: what do we do about students using ChatGPT to cheat? That question was understandable in 2023. In 2026, it is the wrong question. AI is now embedded in educational infrastructure — in tutoring platforms, assessment systems, curriculum design, administrative operations, and teacher support tools. The question is no longer whether AI belongs in education but how to deploy it in ways that actually improve learning outcomes.
The evidence is still catching up with the deployment speed, but the early data is compelling enough that most major school systems and universities have moved from restriction to integration.
AI Tutoring: The Most Promising Application
One-on-one tutoring is the most effective form of instruction. Benjamin Bloom's 1984 "2 Sigma Problem" showed that students who received individual tutoring performed two standard deviations better than those in conventional classrooms. The problem has always been cost — you cannot hire a personal tutor for every student.
AI tutoring is the closest anyone has come to solving this at scale.
Khanmigo, Khan Academy's AI tutor, is deployed in over 20,000 US schools. It asks Socratic questions, walks through problem-solving steps, and adapts explanations to each student's level. Students who use Khanmigo for 30 or more minutes per week show measurable math proficiency gains — roughly equivalent to moving from the 50th to the 65th percentile.
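These effect-size claims can be made concrete with the standard normal model that Bloom's "2 Sigma" framing assumes: a gain measured in standard deviations maps to a percentile shift through the normal CDF. The sketch below (function names are illustrative, not from any of the studies cited) shows that a 2-SD gain moves a median student to roughly the 98th percentile, and that a 50th-to-65th percentile move corresponds to an effect size of about 0.39 SD.

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def percentile_after_gain(start_percentile: float, effect_size_sd: float) -> float:
    """Percentile a student starting at `start_percentile` reaches after a gain
    of `effect_size_sd` standard deviations, assuming normally distributed scores."""
    # Invert the CDF by bisection to find the starting z-score.
    lo, hi = -6.0, 6.0
    target = start_percentile / 100.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0
    return 100.0 * normal_cdf(z + effect_size_sd)

# Bloom's 2-sigma result: a median student moves to roughly the 98th percentile.
print(round(percentile_after_gain(50, 2.0), 1))    # 97.7
# A 50th-to-65th percentile move is an effect size of about 0.39 SD.
print(round(percentile_after_gain(50, 0.385), 1))  # 65.0
```

The conversion assumes a normal score distribution, which real proficiency scales only approximate, so treat the mapping as back-of-the-envelope.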
Duolingo Max uses GPT-powered roleplay for language learning. Over 100 million monthly users access AI conversation partners that adapt to proficiency, correct mistakes in context, and simulate real scenarios.
Carnegie Learning's MATHia models individual mathematical understanding. A 2025 RAND Corporation study found statistically significant algebra gains for MATHia students compared to control groups.
The key insight across all of these: effective AI tutoring is not about replacing teachers. It is about providing the individualized practice and feedback that teachers cannot deliver to 30 students simultaneously.
Personalized Learning at Scale
Beyond tutoring, AI enables adaptive learning pathways that adjust content, pace, and difficulty to individual students.
DreamBox (Discovery Education) adjusts math instruction in real time, analyzing not just right answers but approach patterns — where students hesitate, what strategies they try, what errors recur.
Century Tech, widely deployed in UK schools, creates personalized learning paths and gives teachers dashboards showing exactly where each student is struggling.
Arizona State University uses adaptive courseware to personalize general education courses. Retention rates in adaptive sections run 5-8 percentage points higher than in traditional sections.
Personalization works best for well-structured domains — math, language learning, introductory sciences — where learning progressions are clear. It is harder to apply to humanities and advanced critical thinking.
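The core mechanic behind adaptive pathways can be illustrated with a classic staircase rule. This is a deliberately minimal sketch — none of the products above publish their algorithms, and real systems weigh far richer signals (hesitation, strategy choice, error patterns) — but it shows the basic shape of difficulty adaptation:

```python
def next_difficulty(current: int, recent_correct: list[bool],
                    up_streak: int = 3, floor: int = 1, ceiling: int = 10) -> int:
    """Minimal '3-up / 1-down' staircase: raise difficulty after a streak of
    correct answers, lower it immediately after a miss."""
    if recent_correct and not recent_correct[-1]:
        return max(floor, current - 1)
    if len(recent_correct) >= up_streak and all(recent_correct[-up_streak:]):
        return min(ceiling, current + 1)
    return current

# A student answers three in a row correctly at level 4, then misses one.
level = next_difficulty(4, [True, True, True])  # steps up to 5
level = next_difficulty(level, [True, False])   # steps back down to 4
```

The asymmetry (slow to raise, quick to lower) keeps students near the edge of their ability without prolonged frustration — the same design intuition, in miniature, that adaptive platforms apply with much richer data.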
Automated Assessment: More Than Grading
AI grading systems have moved well beyond simple multiple-choice scoring.
Gradescope (Turnitin) uses AI to grade handwritten math and science work, grouping similar solutions and allowing instructors to grade by rubric across hundreds of submissions simultaneously. Instructors report 70% time savings on grading.
ETS and College Board use AI scoring for standardized essay assessments, with AI scores calibrated against human raters. The AI achieves inter-rater reliability comparable to human-human agreement. This is not a replacement for human evaluation of student writing but a scaling tool for high-volume assessment.
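"Inter-rater reliability comparable to human-human agreement" is typically measured on ordinal essay scores with quadratic weighted kappa, which penalizes large disagreements more than adjacent ones. A self-contained sketch (the rater data below is invented for illustration):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a: list[int], rater_b: list[int],
                             min_score: int, max_score: int) -> float:
    """Quadratic weighted kappa between two raters on an ordinal scale.
    1.0 means perfect agreement; 0.0 means chance-level agreement."""
    n = len(rater_a)
    span = max_score - min_score

    def weight(a: int, b: int) -> float:
        return ((a - b) / span) ** 2

    # Observed disagreement, summed over actual rating pairs.
    observed = sum(weight(a, b) for a, b in zip(rater_a, rater_b))
    # Expected disagreement if the raters' score distributions were independent.
    hist_a, hist_b = Counter(rater_a), Counter(rater_b)
    expected = sum(hist_a[a] * hist_b[b] * weight(a, b)
                   for a in range(min_score, max_score + 1)
                   for b in range(min_score, max_score + 1)) / n
    return 1.0 - observed / expected

human = [3, 4, 2, 5, 4, 3]  # hypothetical essay scores on a 1-6 scale
ai    = [3, 4, 3, 5, 4, 2]
print(round(quadratic_weighted_kappa(human, ai, 1, 6), 2))  # 0.82
```

Calibration in practice means checking that the AI-vs-human kappa matches the human-vs-human kappa on the same essays; if AI-human agreement lags, the model is drifting from the rubric.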
Automated feedback is arguably more valuable than automated grading. AI systems can provide immediate, detailed feedback on student writing — identifying structural weaknesses, unclear arguments, and areas for development. The speed of feedback matters: research consistently shows that immediate feedback produces better learning outcomes than feedback delivered days later.
The limitation is real: AI assessment struggles with creativity, originality, and nuanced argumentation. It is excellent at evaluating whether a student has mastered defined competencies. It is poor at evaluating whether a student has produced genuinely original thought.
The Academic Integrity Question
Yes, students use AI to cheat. A Stanford survey in late 2025 found that 68% of undergraduates had used AI tools on assignments at least once. Detection tools like Turnitin's AI writing detector have improved but remain unreliable — false positive rates of 5-10% mean that a significant number of honest students get flagged, which creates problems of its own.
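The base-rate arithmetic makes the false-positive problem concrete. The submission counts below are hypothetical; only the 5-10% false positive rates come from the discussion above:

```python
def falsely_flagged(honest_submissions: int, false_positive_rate: float) -> int:
    """Expected number of honest submissions incorrectly flagged as AI-written."""
    return round(honest_submissions * false_positive_rate)

# Suppose a large course load includes 5,000 fully human-written submissions.
# Even the low end of the detector's error range flags hundreds of honest students.
print(falsely_flagged(5_000, 0.05))  # 250
print(falsely_flagged(5_000, 0.10))  # 500
```

At institutional scale, every percentage point of false positives translates into real students facing misconduct accusations, which is why detection-only policies generate backlash.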
The institutions handling this best have moved beyond detection to redesign.
Georgia Tech restructured its CS curriculum to incorporate AI explicitly — students use AI assistants but demonstrate understanding through oral exams and live coding. The University of Sydney implemented a tiered policy: some assignments allow unrestricted AI use, some require disclosure, some prohibit it. The International Baccalaureate allows AI in research and drafting stages, provided students document their process.
The pattern: institutions that integrate AI with guardrails fare better than those trying to ban it. Students who graduate without knowing how to work with AI tools will be at a disadvantage.
What Teachers Actually Think
Teacher attitudes toward AI have shifted substantially. A 2026 OECD survey across 30 countries found that 62% of teachers view AI tools positively, up from 38% in 2024. The biggest driver of positive sentiment: AI handling administrative tasks that consume teacher time.
Lesson planning, progress reports, parent communication drafts, IEP documentation, differentiated worksheet creation — these administrative burdens eat hours from every teacher's week. AI tools that handle this work are genuinely welcomed.
But teachers are also clear about what they do not want: AI making decisions about student placement, disciplinary actions, or evaluations without human oversight. And they express consistent concern about equity — the schools with the best AI tools tend to be the schools that already have the most resources.
The Equity Problem
AI in education risks widening existing gaps. Wealthier school districts adopt AI tools faster, train teachers better, and integrate technology more effectively. Students with home internet access and personal devices get more practice with AI tools. Students in under-resourced schools fall further behind.
Some initiatives are attempting to address this. Khan Academy's Khanmigo is offered free to low-income school districts through philanthropic partnerships. The Gates Foundation has invested $160 million in AI-powered educational tools for underserved schools. But the gap remains real.
There is also a language gap. Most AI tutoring systems work best in English. Students learning in Swahili, Bengali, or Tagalog have access to significantly less capable tools. Meta's multilingual Llama models and Google's efforts to expand language coverage are helping, but parity is years away.
Where This Goes
The trajectory is toward AI as a standard component of educational infrastructure — as normal as textbooks or whiteboards. The institutions that will get the most value are those investing not just in technology but in teacher training, curriculum redesign, and equity-focused deployment.
The technology is good enough to help. The question is whether the systems around it — policy, training, funding, access — can keep up.
Further Reading
- AI Applications: 20 Real-World Examples — education in the broader AI landscape
- Generative Artificial Intelligence: A Deep Dive — the technology powering AI tutors
- AI Solutions: What Businesses Actually Use — enterprise parallels to education adoption
- AI Weekly Newsletter — 3x/week briefings on what matters
Last updated: April 2026