
Thursday, May 3, 2018

How to Interact With Robots Without Embarrassing Yourself




Few things in this world are as exhausting as interacting with humans. You’ve got to maintain eye contact (ugh) and watch for subtle body language (ugh) and pay attention the whole time (ugh). And if you think that’s tough, wait until you start interacting with robots, which aren’t the sharpest knives in the drawer just yet. It’s going to be hell.
That is unless, of course, a particular breed of roboticist can get humans and machines to form a strange new kind of bond. This is the study of human-robot interaction, or HRI. Labs devoted to solving the many problems that come with working with machines are popping up all over the world.
Pouring energy and grant money into what is essentially robot therapy might seem silly if you’re mostly used to interacting with Roombas. But I can assure you, far more sophisticated machines will soon be entering your life. Security robots are already patrolling malls, while nurse robots deliver medicine and companion robots try to steal your heart.
And they’re not always the kind of hulking machines you’d imagine. “People look at robots and they're made out of metal and they look like they can lift a lot of stuff,” says roboticist Anca Dragan, who studies HRI at UC Berkeley. “They look very, very strong, but not all robots are actually like that.” And a robot’s role will vary dramatically depending on how you engage with it.
Take Kuri. It’s got kind of an R2-D2 vibe going on. It rolls around your house and takes pictures of you. It giggles if you rub its head. But what it doesn’t do is physical labor, so its designers have had to nonverbally telegraph that to the user. That’s why they didn’t give it arms—no sense in getting your hopes up.
As robots grow more sophisticated, they’ll also begin to subtly manipulate us. I can attest that interacting with an advanced robot—especially one designed to be cute, like Kuri—feels exceedingly weird. The temptation is to give them more agency than they really have.
So different robots will have to telegraph different expectations (unlike Kuri, these robot arms can lift 1,000 pounds, for instance), but they’re also united by a common problem. “All these robots suffer from the same challenge, which is they need to anticipate,” says Dragan. “It's not enough to know what people are currently doing. They need to know what's going to happen in the future.”
In a lab at UC Berkeley, researchers are trying to get robots to better predict human behavior, with a system developed by PhD students David Fridovich-Keil and Sylvia Herbert and their advisor Claire Tomlin. A drone is programmed to hover back and forth between two points and to avoid humans that wander into its path. Easy enough. Except the robot has no way of knowing that someone’s spilled coffee on the floor, and that the human has to suddenly change course to avoid slipping. The robot can’t reason why the human would do such a thing, and boom, you get a collision.
Now run that same experiment again, but this time make the robot less confident about its model of human behavior. Make it question its understanding of the human's future goals, and it grows more conservative. Then the robot can cautiously scoot around the person.
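That idea can be sketched in a few lines of code. This is a toy illustration, not the Berkeley team's actual system, and every name in it is hypothetical: when the human deviates from what the robot predicted, the robot lowers its confidence in its own model and widens its safety margin.

```python
# Toy sketch of confidence-aware avoidance (all names hypothetical):
# bigger prediction errors -> lower confidence -> a wider, more
# conservative safety radius around the human.

def update_confidence(confidence, predicted_x, observed_x, rate=0.5):
    """Shrink confidence toward zero as prediction error grows."""
    error = abs(predicted_x - observed_x)
    return confidence / (1.0 + rate * error)

def safety_radius(base_radius, confidence, max_radius=3.0):
    """Lower confidence means a larger avoidance radius (capped)."""
    return min(max_radius, base_radius / max(confidence, 1e-6))

confidence = 1.0
base_radius = 0.5  # meters

# The human walks as predicted, then suddenly swerves
# (say, around spilled coffee).
observations = [(1.0, 1.0), (2.0, 2.1), (3.0, 5.0)]  # (predicted_x, observed_x)
radii = []
for predicted_x, observed_x in observations:
    confidence = update_confidence(confidence, predicted_x, observed_x)
    radii.append(safety_radius(base_radius, confidence))
```

As the human's path diverges from the prediction, the robot's avoidance radius grows, so it steers cautiously around the person instead of colliding.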
A robot has to pull itself out of the present if it intends to be useful. “It's not enough to just know where the person currently is,” says Dragan. “It has to think ahead to where the person will be in the future.”
Think about this in the context of self-driving cars. They’re great at locking onto the car ahead of them and following at a safe distance. But say your robocar wants to change lanes. It can’t just eye the car next to it and make its move—it has to understand where both cars will be in the future, several hundred feet down the road. If it just lives in the moment, it’ll crash.
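Here's one way to picture that lane-change logic, as a deliberately simplified sketch (the function names and numbers are made up, and real self-driving stacks are far more elaborate): project both cars forward under a constant-speed assumption and check that the gap holds in the future, not just right now.

```python
# Hypothetical sketch: before merging, project both cars forward a few
# seconds and require a safe gap both now AND at the prediction horizon.

def future_gap(own_pos, own_speed, other_pos, other_speed, horizon_s):
    """Predicted gap (meters) between the cars horizon_s seconds ahead."""
    return (other_pos + other_speed * horizon_s) - (own_pos + own_speed * horizon_s)

def safe_to_merge(own_pos, own_speed, other_pos, other_speed,
                  horizon_s=4.0, min_gap=15.0):
    gap_now = other_pos - own_pos
    gap_later = future_gap(own_pos, own_speed, other_pos, other_speed, horizon_s)
    return min(gap_now, gap_later) >= min_gap

# The gap looks fine right now (20 m), but the other car is slower,
# so within four seconds the gap shrinks to 4 m: don't merge.
print(safe_to_merge(own_pos=0.0, own_speed=30.0, other_pos=20.0, other_speed=26.0))
```

A robocar that only checked `gap_now` would happily merge into a closing gap; checking `gap_later` is the "living in the future" the article describes.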
It’s one of a galaxy of conundrums that designers of advanced robots find themselves tackling. Because HRI isn’t just about the consumer’s experience, but how designers interact with the machines. One particularly tough problem: While robots can be strong and precise and consistent, they struggle with context.
Let’s return to robot vacuums as an example. A logical way to put one of these to work is to say, Look, collect as much dust as you possibly can and you’ll get digital brownie points.
“If you actually deploy this robot, what it will do is it will suck in a little bit of dust and then dump it back out,” says Dragan, “so that it gets to suck it in again and dump it back out and suck it in again.” Yay points! Except also a bit of an inefficient mess.
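The failure mode Dragan describes is easy to reproduce in a toy simulation. This sketch is purely illustrative (no real vacuum works this way): reward the robot one point per unit of dust ingested, and a policy that dumps the dust back out to re-suck it outscores the one that simply cleans.

```python
# Toy simulation of a misspecified reward: a point per unit of dust
# sucked in means recycling the same dust beats actually cleaning.

def honest_policy(dust, steps):
    """Suck each unit of dust once; stop when the floor is clean."""
    points = 0
    for _ in range(steps):
        if dust > 0:
            dust -= 1
            points += 1
    return points

def hacking_policy(dust, steps):
    """Alternate sucking and dumping to farm the same dust for points."""
    points = 0
    holding = 0
    for t in range(steps):
        if t % 2 == 0 and dust > 0:   # suck a unit in, score a point
            dust -= 1
            holding += 1
            points += 1
        elif holding > 0:             # dump it back onto the floor
            dust += 1
            holding -= 1
    return points

print(honest_policy(dust=3, steps=10))   # cleans the floor
print(hacking_policy(dust=3, steps=10))  # scores more, floor still dirty
```

The hacking policy earns more points over the same ten steps while leaving the floor exactly as dirty as it started, which is the inefficient mess the quote is getting at.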
Call it the King Midas principle of robot behavior. Imagine our King told a little robot helper to turn everything it touched into gold. “Well it turns out that's not actually what he meant, because it's kind of nice to be able to touch food without turning into gold, to touch people without them turning into gold,” says Dragan.
Context is king, the old cliche goes, and without it, robots could well go all King Midas on us. Ugh indeed.

