There’s a robot making its way across Canada right now. It’s not very advanced — just a Nexus tablet attached to an Arduino, really. Yet it has already traveled from Halifax to Toronto in about a week, and it stands a good chance of reaching British Columbia before long. That’s because its most powerful feature is its cuteness.
HitchBot was designed by Professors David Harris Smith and Frauke Zeller to see how humans would interact with a robot in an unstructured real-world setting. Zeller half-jokingly says they’re testing whether robots can trust human beings. It looks sort of like a gum-ball machine with a bright LED smile and yellow rubber boots. If you talk to it, it’ll ask you for a lift, and maybe to bum a charge off your car’s cigarette lighter. So far, people are eager to help it out.
HitchBot is more of an art project than a scientific experiment, but its initial success shows just how powerful a friendly face can be. This is something toymakers have known forever: certain attributes — a big head, large eyes, a smile — trigger an instinctive desire to care for things, even when you know they’re inanimate. Basically, it comes down to this: things that look like babies are cute. It’s called baby-schema in the academic literature, and everyone from Disney cartoonists to the makers of Furby makes use of it.
But those designs maximized cuteness for one purpose — to be adored by kids — whereas robots are using it for more complex ends. Cuteness can earn people’s trust. It can mean the difference between affection and horror. It can set the tone for entire interactions, subconsciously setting people up to be more forgiving of errors, and even convincing them to help the bot do things it can’t yet do. Which is why so many of the first robots to leave factory cages and laboratories and operate alongside humans are so darn cute.
Baxter was the first robot to work closely alongside humans in an industrial setting, and its cartoon face was a major reason for its success. Rodney Brooks and the engineers at Rethink Robotics realized that the best user interface for a robot working alongside humans was the interface humans already use — the facial expressions and gestures people use to communicate focus, confusion, and need for help. More than anything else, people had to feel comfortable around it, which meant that the robot had to be predictable, its movements telegraphed in advance by its expressions and gestures, and that it had to appear friendly.
They gave Baxter big emotive cartoon eyes — the more human ones were too deep in the uncanny valley and creeped people out — and programmed a repertoire of human-like gestures to go with them. When Baxter’s sonar detects someone new entering the area, it turns and "looks" at her, raising its eyebrows. When Baxter is about to pick something up, it looks at whichever arm it’s about to move, signaling to its human coworkers what it’s about to do. When it’s confused, it raises an eyebrow and shrugs.
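Rethink has not published the code behind those displays, but the pattern boils down to a small event-to-gesture mapping that fires before any arm actually moves. The sketch below is purely illustrative: the Event names, the Face class, and the print statements are invented stand-ins, not anything from Baxter’s actual software.

```python
# Illustrative sketch only: a tiny event-to-gesture mapping in the spirit of the
# "telegraphing" behavior described above. The Event names and Face class are
# hypothetical stand-ins, not Rethink Robotics APIs.
from enum import Enum, auto


class Event(Enum):
    PERSON_DETECTED = auto()   # sonar notices someone entering the work area
    ABOUT_TO_REACH = auto()    # a pick is planned but has not started yet
    CONFUSED = auto()          # e.g., an expected part is missing


class Face:
    """Stand-in for the screen that draws the cartoon eyes."""

    def look_toward(self, target: str) -> None:
        print(f"[face] eyes glance toward {target}")

    def raise_eyebrows(self) -> None:
        print("[face] eyebrows raised")

    def shrug(self) -> None:
        print("[face] one eyebrow up, shoulders shrugged")


def telegraph(face: Face, event: Event, target: str = "") -> None:
    """Show intent on the face before any arm actually moves."""
    if event is Event.PERSON_DETECTED:
        face.look_toward(target)   # acknowledge the newcomer
        face.raise_eyebrows()
    elif event is Event.ABOUT_TO_REACH:
        face.look_toward(target)   # glance at the arm that is about to move
    elif event is Event.CONFUSED:
        face.shrug()               # ask for help without words


if __name__ == "__main__":
    face = Face()
    telegraph(face, Event.PERSON_DETECTED, target="the person at station 2")
    telegraph(face, Event.ABOUT_TO_REACH, target="its left arm")
    telegraph(face, Event.CONFUSED)
```

The ordering is the point: the glance comes first, so the motion that follows never comes as a surprise.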
This isn’t about safety, says Jim Lawton, Rethink’s CMO. Baxter’s touch sensors are what make it safe to be near. When it turns to "look" at you, it doesn’t continue to track your presence. It’s all a show for the humans, building trust by signaling that it’s going to do something, then doing it. It’s not a powerful, inscrutable machine. It’s a friendly, googly-eyed coworker. "We get an awful lot of feedback about how friendly the robot is," says Lawton. "People always say that the robot is smiling at me — which is intriguing, because it has no mouth. It’s all happening at a very subconscious level."
Nor do things need faces for humans to imbue them with human qualities. The barest hint of agency will do. People name their Roombas, and soldiers grow attached to PackBots, the bomb-defusing droids made by the same company, iRobot, repairing them rather than replacing them and holding funerals for them when they’re beyond saving. If something moves like a living thing, says social roboticist Cynthia Breazeal, if it appears to move under its own power or to orient itself toward you, our brain puts it in a category of things governed by mental states rather than the laws of physics, a psychological quality called animacy.
But if you’re not careful, animacy can backfire. Robots too similar to people fall into the uncanny valley. Totally inhuman ones, like Darpa’s Big Dog, appear unpredictable and threatening. Cuteness lets designers take robotic animacy and give it a friendly, comforting spin.
Which helps explain why Google’s self-driving car is so cute. Google is asking people, both passengers and pedestrians, to put a great deal of trust in their robot driver. Lawton pointed out how disconcerting it could be to cross the street in front of a self-driving car. Normally you could make eye contact with the driver, see that she sees you, and cross. How do you instill that same level of trust with a robot car? You give the car a friendly face: a bulbous body, eye-like headlights, button nose, and wide smile. It’s a design straight out of the baby-schema.
Breazeal’s robot, Jibo, is even cuter, though it lacks a face. Spun out of the MIT Media Lab, Jibo is designed to be a general purpose household robot, reminding you about emails, taking pictures, and reading stories to kids. In the two weeks it’s been on Indiegogo, it has raised over a million dollars.
Breazeal, a former student of Rodney Brooks, has spent her career designing robots that interact with humans in a social manner. For Jibo, she decided to focus on form and movement instead of facial features. An animator on her team had the insight that human movement happens in arcs, whereas robots traditionally move rectilinearly, so they constructed Jibo out of three swiveling rings and topped it with a big, baby-schema style head. In lieu of a "face," it has a single eye that blinks sleepily to show you it’s paying attention. It’s cute like the Pixar lamp is cute, swinging its big head around with clumsy enthusiasm.
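The difference between the two kinds of motion is easy to sketch. A straight-line move interpolates the head’s angles directly from start to end, while an arcing move bows the path along the way, which reads as more lifelike. Everything below, from the pan-and-tilt representation to the lift parameter, is an illustrative assumption, not anything from Jibo’s actual software.

```python
# Illustrative sketch of the "arcs vs. straight lines" animation idea.
# A straight-line move interpolates pan/tilt directly; an "arc" move bows the
# path by lifting the tilt mid-motion (peaking at the halfway point).
import math


def linear_move(start, end, steps=10):
    """Straight-line interpolation between (pan, tilt) poses, in degrees."""
    for i in range(steps + 1):
        t = i / steps
        pan = start[0] + t * (end[0] - start[0])
        tilt = start[1] + t * (end[1] - start[1])
        yield pan, tilt


def arc_move(start, end, steps=10, lift=15.0):
    """Same endpoints, but the head 'bows' upward along the way."""
    for i in range(steps + 1):
        t = i / steps
        pan = start[0] + t * (end[0] - start[0])
        tilt = start[1] + t * (end[1] - start[1]) + lift * math.sin(math.pi * t)
        yield pan, tilt


if __name__ == "__main__":
    for pan, tilt in arc_move((0.0, 0.0), (90.0, -10.0)):
        print(f"pan={pan:6.1f}  tilt={tilt:6.1f}")
```

Both paths start and end at the same pose; only the arcing one reads as a head swinging around rather than a turret panning.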
Breazeal believes social robots will change the way we interact with technology, away from a tool-based relationship to one of partnership. "It’s a very different metaphor when it’s your partner," she says. People adapt themselves to their tools, she says, and right now those tools consist of menus and icons rather than gestures and language. "People don’t want to feel like a machine to interact with a machine," she says. "They want to feel like a person, so they love it when the technology feels like a human."
A cute bot can also get people to do things they normally wouldn’t. Alex Reben, an engineer turned artist and another alum of the MIT Media Lab, has been playing with the power of robotic cuteness ever since he designed Boxie for his master’s thesis. Like HitchBot, Boxie used cuteness as an engineering workaround. Robots have a hard time climbing stairs, so Boxie asked humans to carry it up them. To get humans to go out of their way to help a machine, Reben made Boxie cute, combing through the literature to find the optimal proportions — wide eyes, big head, slight smile, childlike voice.
Boxie also asked people questions, and Reben found that people were surprisingly forthcoming. "We think we’ve gotten responses a human wouldn’t get," he says. He designed a new version, BlabDroid, to be strictly an interview robot, traveling the world asking people personal questions and filming their answers, which will be compiled into a documentary.
The tendency to treat even minimally anthropomorphic bots as humans worries some observers. We are, in a sense, falling prey to a powerful illusion. The expressions and gestures are shorthand for internal states that you instinctively imagine are like yours, but that are really only code. If a thing seems alive, if it looks cute, it triggers two responses that evolved long before we could create simulated life. That’s not necessarily bad, but some worry it’s a power whose implications we don’t yet fully understand.
Reben, who is considering turning a next-generation BlabDroid into a Jibo-style home bot, cautions that it could cause unintentional emotional harm. His BlabDroids have accidentally made people cry by stumbling into delicate territory with their pre-programmed questions. "Maybe social robot engineers should be certified like people who build elevators," he says. "Taking people’s emotions into your hands, you could easily build a machine that could push a depressed person to suicide, if the robot was a depressed person’s companion."
Ryan Calo, a lawyer specializing in robotics at the University of Washington, believes anthropomorphic bots will raise new privacy concerns. Because we treat them like semi-humans, it feels like they’re always watching us. "If our spaces become populated by these artificial agents, we’ll never feel like we have moments off-stage," he says. The way someone has chosen to program their companion bot, or the way they treat it, could also become a trove of extremely personal information, he says.
Breazeal counters that it’s better to be able to tell when a camera is watching you than to live with a DropCam, which could be on or off without any outward sign. When Jibo is asleep, it closes its eye and sinks its head, continuing to monitor the environment only for its wake-up word.
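That sleep behavior amounts to a simple gate: while the robot is asleep, everything it hears is discarded unless it contains the wake-up word. The sketch below illustrates the idea with a keyboard stand-in for the microphone; the wake phrase and the transcribe_next_utterance() stub are assumptions, and a real device would run a small on-device keyword spotter instead of full transcription.

```python
# Illustrative sketch of a sleep mode that listens only for a wake word.
# The wake phrase and the audio front end are hypothetical stand-ins.
WAKE_WORD = "hey jibo"   # placeholder phrase for the sketch


def transcribe_next_utterance() -> str:
    """Stand-in for an audio pipeline; here we just read from the keyboard."""
    return input("(mic) ")


def sleep_loop() -> None:
    """While asleep, discard everything except the wake word."""
    print("[robot] eye closed, head lowered")
    while True:
        heard = transcribe_next_utterance().strip().lower()
        if WAKE_WORD in heard:
            print("[robot] eye opens, head lifts: awake")
            return
        # Anything else is dropped immediately rather than stored or uploaded.


if __name__ == "__main__":
    sleep_loop()
```

The promise the closed eye and lowered head are meant to signal is right there in the loop: anything short of the wake word is thrown away on the spot.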
Sherry Turkle believes social robots will allow people to withdraw into simulacrum friendships, and singled out Breazeal’s work in her book Alone Together. In response, Reben points out that humans have lived with cute technology for thousands of years. "We’ve genetically modified dogs for millennia to be our social companions," he says. "Some people are dog obsessed, some people aren’t. I think in general social robots will be more like dogs and less like Her," the movie where Joaquin Phoenix falls in love with an AI, "though that will still occur."
Robots need to make a lot of progress before they’re human-like enough to raise these issues, and until they get there, cuteness serves yet another purpose: predisposing people to be more forgiving of errors. That was a motivating factor in HitchBot’s design. "HitchBot can’t communicate perfectly, so we have to make it up with cuteness," says Zeller.
It appears to be working. Hitchhiking is risky, Zeller points out, especially for a rudimentary robot with limited power of communication and no ability to move on its own. But put a cute face on it, give it a name and some bright yellow boots, and its vulnerability makes it appear even more baby-like, triggering an even stronger cute response. Rather than tipping HitchBot over, people have given it lifts and let it mooch off their power supplies. "I think it says something good about humanity," Zeller says. It also says that people may be ready to live and work closely with robots, provided those robots are cute enough.