Most educational technology is designed to reduce friction. Prof. Rene Kizilcec, information science, thinks that is precisely the problem.
“Learning is not about feeling good,” said Kizilcec, founder of the Future of Learning Lab at Cornell. “The emotion of learning is frustration. That's the emotion that's most predictive of learning.”
It is a deceptively simple observation with far-reaching implications — for how universities use AI tools, how teachers design their courses and how an industry came to confuse convenience with learning — and it is the driving principle behind the work of the Future of Learning Lab.
‘The Emotion of Learning is Frustration’
Founded by Kizilcec roughly seven and a half years ago, the Future of Learning Lab studies the intersection of technology, education and learning science across all age groups, from primary through post-secondary. The lab’s projects span a variety of application areas, from a national database of tutoring interactions to artificial intelligence-powered clinical training tools deployed at medical schools across the country to a language-learning platform used in Cornell’s own classrooms.
Driving these projects is a single question: what does good teaching actually look like, and how can technology be incorporated effectively? The answer, Kizilcec argues, begins with an uncomfortable truth about learning itself: “the emotion of learning is frustration.”
That statement runs counter to the priorities of industries that prize seamlessness. Platforms such as Microsoft Copilot are engineered for “low friction” — designed to give consumers what they want, quickly — and are misapplied when universities treat them as educational infrastructure, Kizilcec argued.
“Copilot happily gives you the answers to all the assignments. It does not hold back,” Kizilcec said, referring to Microsoft’s AI assistant, which Cornell and many peer institutions have made freely available to students. “That is not a good tool in the education space.”
Kizilcec’s argument is not that AI has no place in learning, but that universities and institutions have an obligation to provide tools based on research and learning science. Deploying general-purpose chatbots in an educational context, he suggested, is an abandonment of that responsibility.
“What we should be giving to students is tools that are designed to support their learning,” he said. “And empower teachers to refine [these tools] so that they're aligned with the specific learning objectives of a course.”
One tool Kizilcec recommended is HiTA.ai, a specialized AI platform for education that can assist students and faculty by providing tailored, conversational support on demand. This platform has been incorporated into courses at Cornell such as INFO 4100: “Learning Analytics” and HADM 4205: “Real Estate Financial Modeling.” HiTA helps facilitate student learning by guiding the students’ thought process with appropriate hints instead of providing direct answers to questions.
From Tutors to Tongue Twisters
The Future of Learning Lab’s projects reflect the philosophy of empowering teachers in practice. One of its efforts is the National Tutoring Observatory, an attempt to build the world’s largest repository of video and transcript data about tutoring interactions.
“Right now there is a lack of good data about what good teaching looks like,” Kizilcec said. “And if we don’t know what good teaching looks like, how can we train models to be like good teachers? How can we advance the sciences of teaching?”
Working with seven providers — ranging from expert human tutors to AI-driven voice systems — the lab is creating the Million Tutor Moves dataset, targeting at least one million interactions between teachers and students across a range of subjects, grade levels and educational contexts.
Doctoral students in the Future of Learning Lab have developed tools that put those principles to work. For example, the lab created MedSimAI, which gives medical students at institutions including Weill Cornell, University of California San Francisco and Yale School of Medicine practice conducting clinical conversations with AI-generated patients, allowing them to refine communication skills in a controlled environment before entering clinical settings with real patients.
Another program called ChitterChatter pairs language learners with AI conversation partners, reducing the social anxiety that can accompany speaking a new language in front of a peer.
Running through all of them, Kizilcec said, is the same foundational question: what does good teaching actually look like, and how do we build technology around that, rather than the other way around?
The Human Teacher Isn’t Going Anywhere
For all the lab’s work on AI tools, Kizilcec is certain about one thing: human teachers remain essential.
“Teaching is really, really complex,” Kizilcec said. “Anyone who tries to break it down to something simple is missing the mark.”
Kizilcec elaborated that children are on developmental trajectories with emotional lives that shape how they engage with educational material. Teachers respond to those things in compassionate ways that systems, however sophisticated, cannot replicate.
What technology can do instead, Kizilcec argued, is supplement the teacher — providing the kind of immediate, individualized feedback that human teachers cannot always offer every student in every moment. The key is anchoring those tools in what learning science already knows: that productive struggle and feedback work.
"It is useful to start from the core principles," Kizilcec said, "and then think about how these tools enhance them — and where they create risk."