The Case for Sending Robots to Day Care, Like Toddlers

Human babies don’t seem to make much sense, evolutionarily speaking. They’re helpless for many years, and not particularly helpful either; they can’t pitch in around the house or get a job. But in reality, these formative years are critical for training nature’s most remarkable brain: With the simple act of play, children explore their world, adapting themselves to a universe of chaos.

Kids can run circles around even the most advanced robots on Earth, which still function well only in strictly controlled environments like factories, where they perform regimented tasks. But as the machines slowly become more capable and creep deeper into our daily lives, perhaps we’d do well to let them grow up, in a way, argues UC Berkeley psychologist Alison Gopnik.

“It may be that what we really need is robots that have childhoods,” she says. “What you need is kind of a little, helpless, not-very-strong robot that can’t break things very much, and it’s actually being taken care of by somebody else. And then have that turn into a system that is capable of actually going out in the world and doing things.”

Gopnik’s proposal is a radical departure from how researchers typically get a robot to learn. One common method, known as learning from demonstration, involves a human taking a robot through its paces, move by move, so that it learns how to, say, pick up a toy. Another approach, reinforcement learning, has the robot try random movements and earn rewards for the ones that succeed. Neither option makes a robot particularly flexible: you can’t train it to pick up one kind of toy and expect it to easily figure out how to grasp another.
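To make that brittleness concrete, here is a minimal sketch of the reward-driven approach: an epsilon-greedy learner trying grasp angles on a single toy. Everything in it (the eight candidate angles, the one angle that works, the rates) is invented for illustration, not drawn from any particular lab’s system.

```python
import random

# Toy sketch of reward-driven trial and error: sample grasp angles and
# reinforce whichever one earns a reward. All values are hypothetical.

ACTIONS = list(range(8))       # eight candidate grasp angles
GOOD_ANGLE = 5                 # the one angle that picks up *this* toy
EPSILON, ALPHA = 0.1, 0.5      # exploration rate, learning rate

q = {a: 0.0 for a in ACTIONS}  # learned value estimate for each angle

def reward(action: int) -> float:
    """+1 for the grasp that works on this specific toy, 0 otherwise."""
    return 1.0 if action == GOOD_ANGLE else 0.0

for _ in range(500):
    # Mostly exploit the best-known grasp; occasionally try a random one.
    if random.random() < EPSILON:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    q[a] += ALPHA * (reward(a) - q[a])

print(max(q, key=q.get))  # almost always 5, but only for this toy
```

The table the loop builds answers exactly one question, namely which grasp works on this toy, so handing the learner a differently shaped toy means starting over from nothing. That is the inflexibility Gopnik is pointing at.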

Children, by contrast, react with ease to new environments and challenges. “Not only do they go out and explore to find information that’s relevant to the problems they’re trying to solve,” says Gopnik, “but they also do this rather remarkable thing—playing—where they just go out and do things apparently for no reason.”

There is a method to their madness: They’re curiosity-driven agents building a complex model of the world in their brains, allowing them to easily generalize what they learn. When robots are programmed to learn from a strictly scored goal, with points for good behaviors and demerits for bad ones, they’re not encouraged to do anything out of the ordinary. “They’re kind of like kids that have helicopter-type parents, who are hovering over them and checking everything that they do,” says Gopnik.

That kind of close attention might get the kids into Harvard, but it won’t prepare them for what follows. “When they actually get there and they have to do something else, they fall apart and don’t know what to do next,” Gopnik adds. Giving robots a sense of curiosity, a drive to play without any real purpose, could also help them deal with the unknown.
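In reinforcement-learning research, one common way to approximate purposeless play is an intrinsic “curiosity bonus” that pays the agent for novelty itself, on top of whatever the task pays. The sketch below is a generic count-based version; the states, the bonus formula, and the 0.1 weighting are illustrative assumptions, not a description of Gopnik’s work.

```python
from collections import Counter

# Generic count-based curiosity bonus: rarely visited states earn extra
# reward, and the bonus fades as they become familiar.
visits = Counter()

def curiosity_bonus(state: str) -> float:
    """Intrinsic reward that decays with repeat visits to a state."""
    visits[state] += 1
    return 1.0 / visits[state] ** 0.5

def shaped_reward(task_reward: float, state: str) -> float:
    # Total reward = the task's score plus a small bonus for novelty,
    # so exploring "for no reason" is still worth something.
    return task_reward + 0.1 * curiosity_bonus(state)

for state in ["blocks", "blocks", "pony", "blocks"]:
    print(state, round(shaped_reward(0.0, state), 3))
# The never-seen "pony" earns a bigger bonus on first sight (0.1) than
# the already-familiar "blocks" does on its third visit (~0.058).
```

Because the bonus shrinks with familiarity, the agent drifts toward whatever it hasn’t figured out yet, a crude mechanical stand-in for a toddler’s roving attention.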

In the lab, Gopnik and her colleagues have been figuring out how this might work in practice. They need to somehow quantify how kids go about solving problems with play, so … they let the kids play. And things get tricky immediately. “Because, you know, they’re little kids,” Gopnik says. “We ask them what they think about something, and they’ll give you a beautiful monologue about ponies and birthdays, but not anything that sounds very sensible.”

One solution they’ve found is to use custom-designed toys that, for instance, only work when a kid stacks blocks on them. “Since we’re designing the toy, we know what the problem is that the children are having to solve, and we know what kinds of data they’re getting about that problem, because we are the ones who are controlling what the toy does,” Gopnik says. What inferences, for instance, are the kids making about how the toy works?
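Because the researchers author the toy’s rule, a child’s play turns into interpretable data. Here is a hypothetical sketch of what that buys: a made-up rule (the toy lights up only with at least two blocks stacked) and a few candidate rules a child might entertain, where each play action eliminates the hypotheses it contradicts. The rule, the hypotheses, and the trials are all invented for illustration.

```python
# Hypothetical designed toy: the experimenters wrote this rule, so they
# know exactly what evidence each play action gives the child.
def toy_lights_up(blocks_stacked: int) -> bool:
    return blocks_stacked >= 2  # the secret rule

# Candidate rules a child might entertain: "it needs at least k blocks."
hypotheses = {k: f"needs >= {k} blocks" for k in range(1, 5)}

# Two trials of play: stack one block (nothing happens), then two (light).
observations = [(1, toy_lights_up(1)), (2, toy_lights_up(2))]

# Keep only the hypotheses consistent with everything the toy did.
consistent = {
    k: desc for k, desc in hypotheses.items()
    if all((n >= k) == lit for n, lit in observations)
}
print(consistent)  # {2: 'needs >= 2 blocks'}: two trials pin down the rule
```

With real children, of course, the interest lies in which trials they choose to run on their own; the designed toy just guarantees the researchers can read the evidence each trial provides.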
