Will learners trust AI systems? Should they?
Innovation moves at the speed of trust, especially during complex times of change. So for innovation to flourish – for the changes that we’re proposing to schools and systems of learning to take hold – we need to build in authentic signals and measures that instill trust.
“Black boxes,” or systems that we’re told cannot be understood, are mysteries by design. We may be told to trust them by advocates or spokespeople, but that confidence is deservedly shallow. We build trust in systems we understand. So the core of building trust-inspiring AI systems is making them transparent: open to questioning, critique, and ultimately reshaping by the people who use them.
The chatty familiarity of generative AI apps gives them a veneer of personalization: They address us by name, remember our comments, and weave those points into follow-up questions. But too often the chattiness of commercial AI systems is the chattiness of a stranger. Especially in learning environments, chattiness isn’t true personalization.
The best teachers know that true “personalization” of learning braids together learning objectives, quality curriculum, an understanding of what the learner does – and doesn’t – understand, and challenges that encourage learners to do more. When teachers can build those kinds of environments for their students, using elements that time has shown lead to better outcomes, trust soars.
Over the past two years, as Playlab has built out its AI infrastructure, educators have told us what foundational elements they need to build effective AI learning apps. They want:
- The technical means to fortify their apps with content they know is high quality and objective;
- Time to develop the practical know-how around app building, supported by many examples of similar, proven apps;
- Support in collecting and analyzing where learners have gaps in their understanding of content;
- AI tools that can review whether the apps they build achieve their goals.
When those elements come together, we see powerful results.
In New York City, we’ve seen instructional designers fuse high-quality curriculum, such as Illustrative Mathematics, with the richness of student-chosen projects to build responsive, finely tuned AI apps that help students and teachers apply math concepts to real-world problems. The lead coach there says she has iterated on her tool more than 80 times, continually evaluating its relevance and effectiveness.
Her tool, in turn, has become an exemplar for other math coaches and teachers. With a few clicks, they can see all the instructions and background materials she has used – and then decide whether their students need different resources.
Similarly, the chief academic officer of a network of charter schools in Texas spent a summer writing 260 distinct AI apps for the teachers in her schools, drawing on her decades of experience. Into the apps she built, she embedded detailed curriculum, rubrics and scores of instructional steps. Once school was in session, her teaching staff first saved hours of work by using those apps to create lesson plans. Then they began building their own apps, using hers as models. A science teacher described how she built an app that queried her 80 ninth-grade students about their interests and then suggested potential topics for science fair projects. Instead of getting 80 projects about volcanoes and elephant toothpaste, her students began investigating questions that interested them: how much bacteria accumulates on gym equipment, and how different natural disasters might affect their school buildings. “I’m honestly blown away; these projects are a huge step up from last year. Our kids really leveled up,” she said.
Another education leader felt his confidence in the apps he built soar when he integrated what’s called a “knowledge graph,” a detailed map of how concepts are connected. As a result, when he introduced students to new skills, the embedded knowledge graph helped pinpoint gaps in their understanding and shape exercises to develop the skills they needed. The knowledge graph made the AI apps uniquely relevant to individual students (and kept the AI from hallucinating “possible” next steps untethered to reality).
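To make the mechanism concrete, here is a minimal sketch of how a tiny prerequisite graph might be used to surface gaps before a new skill is introduced. The concept names, graph structure and `find_gaps` helper are illustrative assumptions for this sketch – not Playlab’s implementation or this leader’s actual knowledge graph.

```python
# Illustrative sketch only: a toy "knowledge graph" of math concepts,
# where each concept lists the concepts it depends on. The names and
# edges are hypothetical, not a real curriculum map.
PREREQUISITES = {
    "systems_of_equations": ["linear_equations", "graphing"],
    "linear_equations": ["arithmetic", "variables"],
    "graphing": ["coordinate_plane"],
    "coordinate_plane": [],
    "variables": ["arithmetic"],
    "arithmetic": [],
}

def find_gaps(target_skill, mastered, graph=PREREQUISITES):
    """Return prerequisites the learner has not yet demonstrated,
    with foundational gaps listed before the concepts that build on them."""
    gaps, visited = [], set()

    def visit(skill):
        if skill in visited:
            return
        visited.add(skill)
        for prereq in graph.get(skill, []):
            visit(prereq)                      # check foundations first
        if skill != target_skill and skill not in mastered:
            gaps.append(skill)                 # record an unmet prerequisite

    visit(target_skill)
    return gaps

# Example: before introducing systems of equations, check a student
# who has so far demonstrated only arithmetic and variables.
print(find_gaps("systems_of_equations", mastered={"arithmetic", "variables"}))
# -> ['linear_equations', 'coordinate_plane', 'graphing']
```

In a real app the mastered set might come from formative assessment data and the graph from curriculum standards; the point is that the logic is plainly inspectable, so an educator can see and change what the app treats as a prerequisite.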
In these and many more cases, educators are purposefully building “guidance” into their applications. They are not relying on black boxes to do the work. Instead, they are deliberately choosing the content, the knowledge graphs and the assessment tools to build applications that respond to the unique needs and context of their students. They can do this only in a transparent environment, one where they can peer into applications and make changes. That environment builds trust.
Teaching and learning are inherently human activities: Students are motivated by teachers who are interested in them. When teachers and their students know how to manage and build AI systems, when they have access to quality content and tools, they can build systems that are responsive to individual learners. Those systems are truly personalized and will become the building blocks of a quality learning ecosystem.
