"Golly, golly, golly," says Jim, who is wearing one of his endless variations of vests with a button-down shirt. They'd finally got Basil's internal compass working, and now this. They haven't even begun to teach him what a bar is or how, in order to obtain beer there, he first needs to get money, know how many beers to order, get himself to the bar and know what kind of beer to ask for (the Gundersons prefer stout). It looks like there won't be time now to add a voice interface before the show, so they'll have to make do with commanding Basil via a wireless keyboard. And don't even get them started on the fact that Basil's wheels keep falling off.

Still, all things considered, they're relatively calm. "We're those evil science types," jokes Jim. "We don't have feelings."

Or maybe it's because Basil, despite the current setback, already has the ability to identify, reason about and then interact with items he may find in a bar — a feat that his creators believe is the hardest problem of all. To figure out how to make Basil do this, the couple pondered some of the most advanced robots around, like unmanned military airplanes and the Mars rovers. These machines handle complex tasks with ease because they rely on a human — someone watching the video feeds, telling the robot which objects are relevant to its mission, deciding how to handle unexpected developments, and so on. The Gundersons were very familiar with this sort of tele-operated robot, having strapped a video camera to a remote-control car and remotely chased cats around their back yard for kicks. But what, exactly, did they, the humans, bring to this person-robot relationship?

Their contribution, the Gundersons decided, was helping the robot simplify and understand all the miscellaneous data with which it's bombarded at any given moment.

"It occurred to us that the key thing that we are doing is taking the little dots on the video screens and turning them into 'chair legs' and 'doorways' and 'cats' and then coming up with a plan about them," says Jim.

It's as if people are living in simplified virtual realities, where they filter out the vast majority of information around them — light gradients and subtle odors and ambient sounds — and just focus on basic abstract concepts. The Gundersons found a quote from twentieth-century philosopher C.I. Lewis that put it well: "We do not see patches of color, but trees and houses; we hear, not indescribable sound, but voices and violins."

The Gundersons call this process "reification," a term they borrowed from philosophy, where it means treating an abstract idea as if it were a concrete thing. They believed they could model it mathematically. If they could program a robot to symbolically identify objects by focusing on just a few key attributes, like basic shapes and sizes, and ignore everything else — just as people do — the machine would be much more adept at navigating its complex and dynamic world. Furthermore, since the robot would be able to recognize objects in its surroundings, the Gundersons could teach it basic attributes of those objects, so it wouldn't see them as generic obstacles or targets but as abstract concepts like people and chairs — abstract concepts that computers are good at reasoning about. Finally, such a robot would be able to store in its memory a basic symbolic mock-up of what these objects look like and where they're located, so it wouldn't have to continuously rebuild its concept of the world every time it moved or interacted with it.
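In rough code terms, reification might look something like the sketch below (the names and thresholds are hypothetical illustrations, not the Gundersons' actual software): a cluster of sonar returns gets boiled down to a couple of attributes, those attributes get mapped to a symbol, and the symbol gets filed away in a map of the room that persists as the robot moves.

# Hypothetical sketch of reification: reduce raw readings to a few attributes,
# map them to an abstract symbol, and remember the symbol and its location
# instead of the raw data. All names and numbers here are invented.

def reify(cluster):
    """cluster: list of (x, y, z) sonar return points, in meters."""
    height = max(z for _, _, z in cluster)              # tallest return
    xs = [x for x, _, _ in cluster]
    ys = [y for _, y, _ in cluster]
    footprint = max(max(xs) - min(xs), max(ys) - min(ys))
    # A few key attributes are enough to pick a symbol; everything else is ignored.
    if 0.6 <= height <= 1.1 and footprint <= 0.6:
        return "chair"
    if height >= 1.4 and footprint <= 0.6:
        return "person"
    return "obstacle"

world_model = {}                                         # (x, y) grid cell -> symbol

def observe(cell, cluster):
    """Store the concept and where it is, rather than the readings themselves."""
    world_model[cell] = reify(cluster)

Because the map holds symbols rather than raw returns, the robot can answer a question like "where did I last see a chair?" without re-scanning the whole room.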

Reification, the two believed, was the missing piece between advanced robotics technologies and artificial intelligence. They wrote their new book all about it, but they still had to prove it worked. How do you code something humans do without thinking? How do you figure out which aspects of a chair a robot should focus on to determine that it is, in fact, a chair?

The answer was in teaching the robot to look for the simplest clues imaginable — that a lamp emits light, for example, or that a person has two legs. The Gundersons purposely designed Basil as primitively (and inexpensively) as possible, opting for sonars over video cameras, because they figured that if they could get reification working on a system this basic, they could get it working anywhere.
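A toy version of that clue-driven approach (again, hypothetical names and thresholds, not their code) ties each concept to one or two blunt tests rather than a detailed model:

# Hypothetical illustration of the "simplest clues" idea: one or two blunt
# tests per concept, no detailed 3-D model required.

def looks_like_lamp(brightness, ambient):
    return brightness > ambient * 2          # a lamp emits light

def looks_like_person(leg_like_echoes):
    return leg_like_echoes == 2              # a person has two legs

print(looks_like_lamp(brightness=300.0, ambient=80.0))   # True
print(looks_like_person(leg_like_echoes=2))              # True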

The first trials, however, failed miserably. Jim, using drawing software, sketched up a beautiful three-dimensional model of a chair and uploaded it into Basil's brain — but the robot couldn't, for the artificial life of him, identify chairs in the lab. The problem, they discovered, was that the vague, variable images captured by the sonars never looked like the perfectly designed chair model. So, Louise says, they decided, "Why don't we just have the robot record what it sees?" They instructed him to take sonar image after sonar image of a wooden lab chair, capturing how it appeared from every angle. Then they spent days poring over the data, identifying basic characteristic patterns, like how the chair is waist high and always has legs and a straight back — patterns Basil could use to determine whether a given object is a wooden chair.
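In spirit, that pattern hunt resembles the sketch below (a hypothetical illustration, not their actual process): measure a few attributes of the chair in every recorded scan, keep the range each attribute spans, and later check whether an unknown object's measurements fall inside every range.

# Hypothetical sketch of deriving a chair "pattern" from recorded sonar scans:
# keep the range each attribute spans across the scans, then ask whether an
# unknown object fits inside every range.

def attribute_ranges(scans):
    """scans: list of dicts like {'height': 0.95, 'legs': 4, 'back_flat': 1}."""
    keys = scans[0].keys()
    return {k: (min(s[k] for s in scans), max(s[k] for s in scans)) for k in keys}

def matches(obj, ranges):
    return all(lo <= obj[k] <= hi for k, (lo, hi) in ranges.items())

# Recorded views of the lab's wooden chair from different angles.
chair_scans = [
    {"height": 0.92, "legs": 4, "back_flat": 1},
    {"height": 0.95, "legs": 4, "back_flat": 1},
    {"height": 0.90, "legs": 3, "back_flat": 1},   # one leg hidden from this angle
]
chair_pattern = attribute_ranges(chair_scans)

unknown = {"height": 0.93, "legs": 4, "back_flat": 1}
print(matches(unknown, chair_pattern))              # True -> call it a chair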
