Stanford study finds AI improves the performance of teachers and students

The AI gives feedback based on transcripts of lessons to help teachers improve student uptake.

Stanford researchers have found that machine learning AI can help improve the performance of teachers, as well as student learning and satisfaction.

Their study, published in Educational Evaluation and Policy Analysis, used the AI as something of a mentor for educators, helping them improve their use of a technique called student uptake, wherein teachers note — then build on — students’ responses.

“When teachers take up student contributions by, for example, revoicing them, elaborating on them, or asking a follow-up question, they amplify student voices and give students agency in the learning process,” the researchers wrote in their paper. 

Because uptake is associated with positive gains in student learning and achievement — who doesn’t like feeling seen? — many experts “consider uptake a core teaching strategy,” the authors noted. 

There’s a problem, though: it’s also considered a very tricky skill to improve.

The team set out to determine whether an AI could adequately help teachers improve their uptake skills.

“We wanted to see whether an automated tool could support teachers’ professional development in a scalable and cost-effective way, and this is the first study to show that it does,” lead author Dora Demszky, an assistant professor in Stanford’s Graduate School of Education, said.

Getting feedback: Previous research has shown that specific and timely feedback can help improve a teacher’s performance, Demszky said.

That feedback usually comes from observation, where experienced educators sit in on class, observe, and offer actionable feedback. The most commonly used observation tools in the US, like the Framework for Teaching and CLASS, have components that measure uptake.

Unfortunately, teachers in the US say that they have little access to the kind of feedback that academic researchers have found to be so valuable. Most of their feedback comes from school administrators, who have other jobs to attend to, and may be of low quality — not to mention stressful.

The AI program, called M-Powering Teachers, looks to provide high-quality feedback quickly and in a form that’s accessible, scalable, and non-intimidating.

“We make such a big deal in education about the importance of timely feedback for students, but when do teachers get that kind of feedback?” Chris Piech, an assistant professor of computer science education at Stanford and a study co-author, said in a statement.

“Maybe the principal will come in and sit in on your class, which seems terrifying. It’s much more comfortable to engage with feedback that’s not coming from your principal, and you can get it not just after years of practice but from your first day on the job.”

Machine learning mentor: M-Powering Teachers is based on natural language processing, a subset of AI that focuses on comprehension and analysis of human speech.

The AI analyzes transcripts of classes, finding patterns in class conversations and turning them into actionable insights. M-Powering Teachers measures uptake by tracking how often what the teacher says builds on student contributions; it also highlights which questions drew the most responses and breaks down the ratio of student to teacher talking time.
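To make these measures concrete, here is a toy sketch. The study's actual system relies on a trained natural language model, so the `uptake_proxy` and `talk_ratio` functions below are hypothetical simplifications: they approximate uptake with simple word overlap between a student's utterance and the teacher's reply, and talk time with word counts.

```python
# Toy illustration only: the real M-Powering Teachers system uses a
# trained NLP model. A transcript here is a list of (speaker, utterance) pairs.

def tokenize(text):
    """Lowercase an utterance and split it into a set of words."""
    return set(text.lower().split())

def uptake_proxy(student_utterance, teacher_reply):
    """Hypothetical uptake proxy: the fraction of the student's words
    that the teacher's next turn reuses."""
    student_words = tokenize(student_utterance)
    if not student_words:
        return 0.0
    return len(student_words & tokenize(teacher_reply)) / len(student_words)

def talk_ratio(transcript):
    """Share of all spoken words contributed by students."""
    student = sum(len(t.split()) for s, t in transcript if s == "student")
    teacher = sum(len(t.split()) for s, t in transcript if s == "teacher")
    total = student + teacher
    return student / total if total else 0.0

transcript = [
    ("teacher", "What does this loop print"),
    ("student", "It prints the numbers one through five"),
    ("teacher", "Exactly and why does it stop at five"),
]

print(round(uptake_proxy(transcript[1][1], transcript[2][1]), 2))  # 0.29
print(round(talk_ratio(transcript), 2))  # 0.35
```

In this example the teacher reuses two of the student's seven words ("it" and "five"), so the proxy scores the reply at roughly 0.29, and students account for 35% of the words spoken.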

That feedback is then given to the instructor via an app within a few days of the class, using positive, non-judgmental language and providing specific transcript examples.

Putting it to the test: To test M-Powering Teachers, Demszky and colleagues rolled it out in the spring 2021 session of Stanford’s Code in Place program. A free, five-week, online coding course, Code in Place relies on volunteer coding instructors.

Many of these volunteers have little to no training in education — good candidates for an accessible, scalable mentor. Instructors were all given basic training, course outlines, and lesson goals, and were divided into two groups, one with M-Powering Teachers and the other without.

The team found that, on average, instructors who received the AI’s feedback increased their use of uptake by 13% compared to the control group. Based on course surveys and completion rates of optional assignments, the team also concluded that students in the M-Powering Teachers group had improved learning outcomes and higher satisfaction with the course.

To test the AI in other environments, Demszky and study co-author Jing Liu, currently at the University of Maryland, gave it to instructors in a mentoring program called Polygence, where they work one-on-one with high school students. They found that the tool increased mentor uptake by 10% on average, while reducing their talking time by 5%. The researchers will present their full data at a conference in July.

Demszky is now working on evaluating M-Powering Teachers in perhaps its most vital use case: in-person, K-12 education. There, she will need to figure out a way around a thorny problem.

“The audio quality from the classroom is not great, and separating voices is not easy,” Demszky said. “Natural language processing can do so much once you have the transcripts – but you need good transcripts.”
