Robots

Like many social robots, Snackbot is a cute fellow—four and a half feet tall, with a head and cartoonish features that suggest, barely, a human being. In addition to lowering expectations, this avoids any trespass into the so-called uncanny valley, a term invented by pioneering Japanese roboticist Masahiro Mori more than 40 years ago. Up to a point, we respond positively to robots with a human appearance and motion, Mori observed, but when they get too close to lifelike without attaining it, what was endearing becomes repellent, fast.

Although most roboticists see no reason to tiptoe near that precipice, a few view the uncanny valley as terrain that needs to be crossed if we’re ever going to get to the other side—a vision of robots that look, move, and act enough like us to inspire empathy again instead of disgust. Arguably the most intrepid of these explorers is Hiroshi Ishiguro, the driving force behind the uncanny valley girl Yume, aka Actroid-DER. Ishiguro has overseen the development of a host of innovative robots, some more disturbing than others, to explore this charged component of human-robot interaction (HRI). In just this past year he’s been instrumental in creating a stunningly realistic replica of a Danish university professor called Geminoid DK, with goatee, stubble, and a winning smile, and a “telepresence” cell phone bot called Elfoid, about the size, shape, and quasi-cuddliness of a human preemie. Once it’s perfected, you’ll be able to chat with a friend through her own Elfoid, and the doll-phone’s appendages will mimic your movements.

Ishiguro’s most notorious creation so far is an earlier Geminoid model that is his own robotic twin. When I visit him in his lab at ATR Intelligent Robotics and Communication Laboratories in Kyoto, Japan, the two of them are dressed head to toe in black, the bot sitting in a chair behind Ishiguro, wearing an identical mane of black hair and thoughtful scowl. Ishiguro, who also teaches at Osaka University two hours away, says he created the silicone doppelgänger so he could literally be in both places at once, controlling the robot through motion-capture sensors on his face so he/it can interact through the Internet with colleagues at ATR, while the flesh-and-blood Ishiguro stays in Osaka to teach. Like other pioneers of HRI, Ishiguro is interested in pushing not just technological envelopes but philosophical ones as well. His androids are cognitive trial balloons, imperfect mirrors designed to reveal what is fundamentally human by creating ever more accurate approximations, observing how we react to them, and exploiting that response to fashion something even more convincing.

“You believe I’m real, and you believe that thing is not human,” he says, gesturing back at his twin. “But this distinction will become more difficult as the technology advances. If you finally can’t tell the difference, does it really matter if you’re interacting with a human or machine?” An ideal use for his twin, he says, would be to put it at the faraway home of his mother, whom he rarely visits, so she could be with him more.

“Why would your mother accept a robot?” I ask.

Two faces scowl back at me. “Because it is myself,” says one.

Before robotic versions of sons can interact with mothers the way real sons do, much more will be required than flawless mimicry. Witness the challenges HERB faces in navigating through simple human physical environments. Other robots are making tentative forays into the treacherous terrain of human mental states and emotions. Nilanjan Sarkar of Vanderbilt University and his former colleague Wendy Stone, now of the University of Washington, developed a prototype robotic system that plays a simple ball game with autistic children. The robot monitors a child’s emotions by measuring minute changes in heartbeat, sweating, gaze, and other physiological signs, and when it senses boredom or aggravation, it changes the game until the signals indicate the child is having fun again. The system is not sophisticated enough yet for the complex linguistic and physical interplay of actual therapy. But it represents a first step toward replicating one of the benchmarks of humanity: knowing that others have thoughts and feelings, and adjusting your behavior in response to them.
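As a rough illustration of the feedback loop described above, here is a minimal Python sketch of an affect-adaptive game: read physiological signals, estimate engagement, and change the game when the child seems bored or frustrated. The signal names, thresholds, and scoring formula are hypothetical stand-ins for illustration only, not the Vanderbilt system's actual design.

```python
# Hypothetical sketch of an affect-adaptive game loop: monitor physiological
# signals, infer engagement, and adjust the game when engagement drops.
import random
import time

def read_physiology():
    """Stand-in for real sensors: heart rate (bpm), skin conductance, fraction of gaze on task."""
    return {
        "heart_rate": random.uniform(70, 110),
        "skin_conductance": random.uniform(0.1, 0.9),
        "gaze_on_task": random.uniform(0.0, 1.0),
    }

def estimate_engagement(signals):
    """Toy engagement score in [0, 1]; a real system would use a trained model."""
    arousal = 0.5 * min(signals["heart_rate"] / 110, 1.0) + 0.5 * signals["skin_conductance"]
    return 0.6 * signals["gaze_on_task"] + 0.4 * arousal

def adjust_game(difficulty, engagement):
    """Simplify the game when engagement is low, raise the challenge when it is high."""
    if engagement < 0.4:
        return max(1, difficulty - 1)   # child seems bored or frustrated
    if engagement > 0.8:
        return difficulty + 1           # child is engaged: gently increase difficulty
    return difficulty

difficulty = 3
for _ in range(10):                     # ten rounds of the ball game
    engagement = estimate_engagement(read_physiology())
    difficulty = adjust_game(difficulty, engagement)
    print(f"engagement={engagement:.2f} -> difficulty level {difficulty}")
    time.sleep(0.1)
```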

In a 2007 paper provocatively entitled “What Is a Human?” developmental psychologist Peter Kahn of the University of Washington, together with Ishiguro and other colleagues, proposed a set of nine other psychological benchmarks to measure success in designing humanlike robots. Their emphasis was not on the technical capabilities of robots but on how they’re perceived and treated by humans.

Consider the benchmark “intrinsic moral value”—whether we deem a robot worthy of the basic moral considerations we naturally grant other people. Kahn had children and adolescents play guessing games with a cute little humanoid named Robovie. After a few rounds an experimenter would abruptly interrupt just as it was Robovie’s turn to guess, telling the robot the time had come to be put away in a closet. Robovie would protest, declaring it unfair that he wasn’t being allowed to take his turn.

“You’re just a robot. It doesn’t matter,” the experimenter answered. Robovie continued to protest forlornly as he was rolled away. Of course it wasn’t the robot’s reaction that was of interest—it was being operated by another researcher—but the human subjects’ response.

“More than half the people we tested said they agreed with Robovie that it was unfair to put him in the closet, which is a moral response,” says Kahn.

That humans, especially children, might empathize with an unjustly treated robot is perhaps not surprising—after all, children bond with dolls and action figures. For a robot itself to be capable of making moral judgments seems a more distant goal. Can machines ever be constructed that possess a conscience, arguably the most uniquely human of human attributes?

An ethical sense would be most immediately useful in situations where human morals are continually put to the test—a battlefield, for example. Robots are being prepared for increasingly sophisticated roles in combat, in the form of remotely operated drones and ground-based vehicles mounted with machine guns and grenades. Various governments are developing models that one day may be able to decide on their own when—and at whom—to fire. It’s hard to imagine holding a robot accountable for the consequences of making the wrong decision. But we would certainly want it to be equipped to make the right one.

The researcher who has gone the furthest in designing ethical robots is Ronald Arkin of the Georgia Institute of Technology in Atlanta. Arkin says it isn’t the ethical limitations of robots in battle that inspire his work but the ethical limitations of human beings. He cites two incidents in Iraq, one in which U.S. helicopter pilots allegedly finished off wounded combatants, and another in which ambushed marines in the city of Haditha killed civilians. Influenced perhaps by fear or anger, the marines may have “shot first and asked questions later, and women and children died as a result,” he says.

In the tumult of battle, robots wouldn’t be affected by volatile emotions. Consequently they’d be less likely to make mistakes under fire, Arkin believes, and less likely to strike at noncombatants. In short, they might make better ethical decisions than people.

In Arkin’s system a robot trying to determine whether or not to fire would be guided by an “ethical governor” built into its software. When a robot locked onto a target, the governor would check a set of preprogrammed constraints based on the rules of engagement and the laws of war. An enemy tank in a large field, for instance, would quite likely get the go-ahead; a funeral at a cemetery attended by armed enemy combatants would be off-limits as a violation of the rules of engagement.

A second component, an “ethical adapter,” would restrict the robot’s weapons choices. If too powerful a weapon would cause unintended harm—say a missile might destroy an apartment building in addition to the tank—the ordnance would be off-limits until the system was adjusted. This is akin to a robotic model of guilt, Arkin says. Finally, he leaves room for human judgment through a “responsibility adviser,” a component that allows a person to override the conservatively programmed ethical governor if he or she decides the robot is too hesitant or is overreaching its authority. The system is not ready for real-world use, Arkin admits; it’s something he’s working on “to get the military looking at the ethical implications. And to get the international community to think about the issue.”
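To make the three-part architecture a little more concrete, here is a minimal Python sketch of how such components might fit together. The data types, the list of prohibited sites, the blast-radius cutoff, and the override flag are all invented for illustration; this is an assumption-laden toy, not Arkin's actual software.

```python
# Hypothetical sketch of the governor / adapter / adviser pipeline described above.
# All rules, targets, and weapon data are invented for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Target:
    kind: str                 # e.g. "tank", "funeral"
    near_civilians: bool

@dataclass
class Weapon:
    name: str
    blast_radius_m: float

def ethical_governor(target: Target) -> bool:
    """Check preprogrammed constraints (rules of engagement, laws of war) before any engagement."""
    prohibited_sites = {"funeral", "hospital", "school"}
    return target.kind not in prohibited_sites

def ethical_adapter(target: Target, weapons: List[Weapon]) -> List[Weapon]:
    """Restrict weapon choices likely to cause unintended harm near civilians."""
    if target.near_civilians:
        return [w for w in weapons if w.blast_radius_m <= 10]
    return weapons

def responsibility_adviser(decision: bool, operator_override: Optional[bool]) -> bool:
    """Let a human accept responsibility for overriding the governor's conservative decision."""
    return operator_override if operator_override is not None else decision

arsenal = [Weapon("cannon", 5), Weapon("missile", 50)]
target = Target(kind="tank", near_civilians=True)

allowed = ethical_governor(target)
options = ethical_adapter(target, arsenal) if allowed else []
final = responsibility_adviser(allowed and bool(options), operator_override=None)
print(f"engage={final}, permitted weapons={[w.name for w in options]}")
```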

Back at Carnegie Mellon it’s the final week of the spring semester, and I have returned to watch the Yume Project team unveil its transformed android to the Entertainment Technology Center’s faculty. It’s been a bumpy ride from realism to believability. Yan Lin, the team’s computer programmer, has devised a user-friendly software interface to more fluidly control Yume’s motions. But an attempt to endow the fembot with the ability to detect faces and make more realistic eye contact has been only half successful. First her eyes latch onto mine, then her head swings around in a mechanical two-step. To help obscure her herky-jerky movements and rickety eye contact, the team has imagined a character for Yume that would be inclined to act that way, with a costume to match—a young girl, according to the project’s blog, “slightly goth, slightly punk, all about getting your attention from across the room.”

That she certainly does. But in spite of her hip outfit—including the long fingerless gloves designed to hide her zombie-stiff hands and the dark lipstick that covers up her inability to ever quite close her mouth—underneath, she’s the same old Actroid-DER. At least now she knows her place. The team has learned the power of lowering expectations and given Yume a new spiel.

“I’m not human!” she confesses. “I’ll never be exactly like you. That isn’t so bad. Actually, I like being an android.” Impressed with her progress, the faculty gives the Yume team an A.

The next month technicians from the Kokoro Company come to pack Actroid-DER for shipment back to Tokyo. Christine Barnes, who’d unsuccessfully lobbied to keep the android at the Entertainment Technology Center, offers to cradle its lolling head as they maneuver it into a crate. The men politely decline. They unceremoniously seal Yume up, still wearing her funky costume.

Chris Carroll covers the Pentagon for Stars and Stripes and has written frequently for National Geographic. Max Aguilera-Hellweg is drawn to stories at the intersection of science and humanity.