Letting Robots Guide The Learning Experience
THINK ABOUT HOW you learned when you were a child. Many of us were schooled in colorful rooms full of visual stimuli and packed with other kids. We were asked to be quiet, stop moving, concentrate, pay attention and listen. These days, however, especially in a time of widespread virtual learning, children sit in front of a screen filled with stimuli and are told to focus.
Maja Matarić, a renowned roboticist and computer scientist, envisions a different world, one in which learners of all ages and abilities can learn together with the help of socially assistive robots in a mixed-reality environment.
“Learning is a multisensory experience,” said Matarić, the Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience and Pediatrics. “Kids especially learn through their bodies. They want to see, hear [and] touch, and they want to do it all at once. If we want to maximize learning, we need to find ways to bring multiple ways of perceiving together.”
In Matarić’s vision, immersive learning experiences that engage sensing and moving would augment learning in a way that traditional lectures or two-dimensional onscreen demonstrations cannot. Imagine putting on a pair of glasses and stepping into a world of your imagination, with all sorts of wonderful objects floating around. And you have a resourceful learning companion, your very own robot, along for the journey, one that is just as aware of the virtual world as you are, if not more so. Together, you bump the planets and toss the moon around while learning about the solar system.
Your robotic pal is learning not just about a particular subject, but also about you. “It’s learning in a way that is personalized to the learner,” Matarić explained. “The robot knows if you tend to smile more or move around more under a certain set of conditions. We bring together the kinesthetic and the social experience.”
To test what this might look like, Matarić and her team at the Interaction Lab at USC Viterbi ran a pilot study in which children completed kinesthetic mixed-reality coding exercises alongside a curious robot tutor.
“We wanted to see if we could stir up a learner’s curiosity about coding inside an augmented-reality environment,” said Thomas Groechel, a Ph.D. candidate in Matarić’s Interaction Lab and the study’s lead author. “Can we then model that curiosity and encourage it using a robot?”
Groechel created a mixed-reality visual programming language called MoveToCode, built for the Microsoft HoloLens 2, in which learners move lines of code with their hand motions and watch the code execute in augmented reality. Meanwhile, a rolling, talking, video-capturing robot continually piques their curiosity by encouraging them to play with the code.
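The real MoveToCode runs on the HoloLens, but the core idea, snapping movable code blocks into a sequence and then executing the result, can be sketched in a few lines. The block types, names and tiny interpreter below are illustrative assumptions for this article, not MoveToCode's actual API:

```python
# Illustrative sketch of block-based programming: each "block" is a
# movable unit the learner snaps into a sequence, which is then run.
# Block kinds and structure here are hypothetical, not MoveToCode's API.
from dataclasses import dataclass, field


@dataclass
class Block:
    kind: str                           # "set", "add", or "repeat"
    args: dict = field(default_factory=dict)


def run(blocks, env=None):
    """Execute a snapped-together sequence of blocks; return the variables."""
    env = {} if env is None else env
    for b in blocks:
        if b.kind == "set":             # create/assign a variable
            env[b.args["name"]] = b.args["value"]
        elif b.kind == "add":           # add to an existing variable
            env[b.args["name"]] += b.args["value"]
        elif b.kind == "repeat":        # run nested blocks n times
            for _ in range(b.args["times"]):
                run(b.args["body"], env)
    return env


# A learner "snaps" these blocks together by moving them in space:
program = [
    Block("set", {"name": "moons", "value": 0}),
    Block("repeat", {"times": 3,
                     "body": [Block("add", {"name": "moons", "value": 1})]}),
]
print(run(program))  # {'moons': 3}
```

In the headset, the same structure is manipulated with hand gestures rather than typed, and the robot comments on the learner's choices as the blocks run.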
“We found that, at least in the short term, this way of learning does stir up curiosity in a way that can be measured and optimized,” Groechel said. “This has promise in changing the learner’s perceptions about what they believe they can do in coding, and that curiosity eventually leads to mastery.”
Matarić believes this can also benefit two other groups whose lives she’s been working for years to improve: children with autism and the elderly.
“Not many parents have the time and resources to create a multimodal immersive learning environment for their children, and not many elderly people have opportunities for rich, interactive experiences,” Matarić said. “With this new approach that combines socially assistive robots and mixed reality, we can create new worlds of learning and enjoyment for various populations of users that can benefit from such rich, supportive and personalized interactions.”