New System Allows Robots To Continuously Map Their Environment

Friday, February 17, 2012 1:21

From NextBigFuture.com

Robots could one day navigate through constantly changing surroundings with virtually no input from humans, thanks to a system that allows them to build and continuously update a three-dimensional map of their environment using a low-cost camera such as Microsoft’s Kinect.
 

The system, being developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), could also allow blind people to make their way unaided through crowded buildings such as hospitals and shopping malls.

To explore unknown environments, robots need to be able to map them as they move around — estimating the distance between themselves and nearby walls, for example — and to plan a route around any obstacles, says Maurice Fallon, a research scientist at CSAIL who is developing these systems alongside John J. Leonard, professor of mechanical and ocean engineering, and graduate student Hordur Johannsson.


The researchers used a PR2 robot, developed by Willow Garage, with a Microsoft Kinect sensor to test their system.
Image: Hordur Johannsson


 

The new approach, based on a technique called Simultaneous Localization and Mapping (SLAM), will allow robots to constantly update a map as they learn new information over time, he says. The team has previously tested the approach on robots equipped with expensive laser-scanners, but in a paper to be presented this May at the International Conference on Robotics and Automation in St. Paul, Minn., they have now shown how a robot can locate itself in such a map with just a low-cost Kinect-like camera.

As the robot travels through an unexplored area, the Kinect sensor’s visible-light video camera and infrared depth sensor scan the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system compares the features of the new image it has created — including details such as the edges of walls, for example — with all the previous images it has taken until it finds a match.
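
The place-recognition step described here can be pictured with a small feature-matching sketch. The Python snippet below uses OpenCV's ORB features and a brute-force matcher to find which stored keyframe best matches a new camera image. It is an illustrative stand-in for the CSAIL system, not the published method, and the keyframe list, ratio threshold, and minimum match count are assumptions rather than details from the paper.

```python
# Illustrative sketch (not the CSAIL implementation): recognising a previously
# visited place by matching ORB features of a new frame against stored keyframes.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def describe(gray_image):
    """Detect keypoints and compute binary descriptors for one frame."""
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors

def find_matching_keyframe(new_descriptors, keyframe_descriptors, min_good=40):
    """Return the index of the stored keyframe that best matches the new frame,
    or None if no keyframe yields enough good matches (threshold is assumed)."""
    best_index, best_count = None, 0
    for i, old_descriptors in enumerate(keyframe_descriptors):
        pairs = matcher.knnMatch(new_descriptors, old_descriptors, k=2)
        # Lowe-style ratio test to keep only distinctive matches.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_count:
            best_index, best_count = i, len(good)
    return best_index if best_count >= min_good else None
```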

At the same time, the system constantly estimates the robot’s motion, using on-board sensors that measure the distance its wheels have rotated. By combining the visual information with this motion data, it can determine where within the building the robot is positioned. Combining the two sources of information allows the system to eliminate errors that might creep in if it relied on the robot’s on-board sensors alone, Fallon says.
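
A minimal sketch of that fusion idea is shown below, assuming a planar (x, y, heading) pose: wheel odometry predicts the next pose, and a camera-based estimate then pulls it back toward the map, removing accumulated drift. The fixed blending gain is an assumption for illustration; the actual system would weight each source by its uncertainty, for example with a Kalman-style filter or a pose-graph optimizer.

```python
# Hedged sketch of fusing wheel odometry with a camera-based pose estimate.
import numpy as np

def predict_from_odometry(pose, wheel_delta):
    """pose and wheel_delta are (x, y, heading); advance the pose by the
    motion the wheel encoders report, expressed in the robot's own frame."""
    x, y, theta = pose
    dx, dy, dtheta = wheel_delta
    x += dx * np.cos(theta) - dy * np.sin(theta)
    y += dx * np.sin(theta) + dy * np.cos(theta)
    return np.array([x, y, theta + dtheta])

def correct_with_vision(predicted_pose, visual_pose, gain=0.3):
    """Pull the odometry prediction toward the camera-based estimate,
    cancelling the drift that wheel sensors accumulate on their own."""
    residual = visual_pose - predicted_pose
    # Wrap the heading error into (-pi, pi] before blending.
    residual[2] = np.arctan2(np.sin(residual[2]), np.cos(residual[2]))
    return predicted_pose + gain * residual
```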

Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene, Fallon says.
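
That map-update step can be sketched as follows, again under assumed simplifications: once the pose is trusted, points observed from the robot are transformed into the map's coordinate frame and only those that are genuinely new are added. The 2-D point representation and the 5 cm novelty radius are illustrative choices, not values from the paper.

```python
# Hedged sketch of incorporating newly observed points into an existing map.
import numpy as np

def update_map(map_points, new_points_robot_frame, pose, novelty_radius=0.05):
    """map_points: (N, 2) array in world coordinates.
    new_points_robot_frame: (M, 2) points observed from the current pose."""
    x, y, theta = pose
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    # Transform observations from the robot frame into the map (world) frame.
    world_points = new_points_robot_frame @ rotation.T + np.array([x, y])

    kept = []
    for point in world_points:
        # Keep only points that are not already represented in the map.
        if map_points.size == 0 or \
           np.min(np.linalg.norm(map_points - point, axis=1)) > novelty_radius:
            kept.append(point)
    if kept:
        map_points = np.vstack([map_points, kept])
    return map_points
```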

The team tested the system on a robotic wheelchair, on a PR2 robot developed by Willow Garage in Menlo Park, Calif., and on a portable sensor suite worn by a human volunteer. They found it could locate itself within a 3-D map of its surroundings while traveling at up to 1.5 meters per second.

Ultimately, the algorithm could allow robots to travel around office or hospital buildings, planning their own routes with little or no input from humans, Fallon says.

It could also be used as a wearable visual aid for blind people, allowing them to move around even large and crowded buildings independently, says Seth Teller, head of the Robotics, Vision and Sensor Networks group at CSAIL and principal investigator of the human-portable mapping project. “There are also a lot of military applications, like mapping a bunker or cave network to enable a quick exit or re-entry when needed,” he says. “Or a HazMat team could enter a biological or chemical weapons site and quickly map it on foot, while marking any hazardous spots or objects for handling by a remediation team coming later. These teams wear so much equipment that time is of the essence, making efficient mapping and navigation critical.”

While a great deal of research is focused on developing algorithms to allow robots to create maps of places they have visited, the work of Fallon and his colleagues takes these efforts to a new level, says Radu Rusu, a research scientist at Willow Garage who was not involved in this project. That is because the team is using the Microsoft Kinect sensor to map the entire 3-D space, not just viewing everything in two dimensions.

“This opens up exciting new possibilities in robot research and engineering, as the old-school ‘flatland’ assumption that the scientific community has been using for many years is fundamentally flawed,” he says. “Robots that fly or navigate in environments with stairs, ramps and all sorts of other indoor architectural elements are getting one step closer to actually doing something useful. And it all starts with being able to navigate.”
