Robots could one day navigate through constantly changing surroundings with virtually no input from humans, or blind people could make their way unaided through crowded buildings.

That hope comes with the development of a hardware and software system that lets robots build and continuously update a three-dimensional map of their environment using a low-cost camera and on-board algorithms. Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory are developing the system.

To explore unknown environments, robots need to be able to map them as they move around—estimating the distance between themselves and nearby walls, for example—and to plan a route around any obstacles, said Maurice Fallon, a researcher at the lab. He’s developing the system with John Leonard, professor of mechanical and ocean engineering at the university, and Hordur Johannsson, a graduate student. Seth Teller, head of the robotics, vision, and sensor networks group at the MIT lab, is principal investigator.

Although much research has been devoted to building maps that robots can use to navigate an area, those systems cannot adjust to changes in the surroundings, Fallon said. “If you see objects that were not there previously, it is difficult for a robot to incorporate that into its map,” he said.

The new approach, based on a technique called simultaneous localization and mapping, or SLAM for short, will allow robots to constantly update a map as they learn new information over time, he said.
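
At the heart of SLAM is a chain of robot poses: odometry links each pose to the next, and recognizing a previously visited place adds a "loop closure" link that exposes the drift that has accumulated along the chain. The one-dimensional toy below is only a sketch of that idea, with made-up numbers and a naive correction in place of a real graph optimizer:

```python
# A minimal sketch of the pose-graph idea behind SLAM, assuming a robot
# moving along one axis. Odometry edges chain the poses; a loop-closure
# edge (recognizing a previously seen place) exposes accumulated drift,
# which is spread evenly back along the chain. Real systems optimize a
# full pose graph in six degrees of freedom; this toy only shows the shape.

def correct_chain(poses, closure_index, observed_offset):
    """Distribute the loop-closure error evenly over the pose chain."""
    drift = poses[-1] - (poses[closure_index] + observed_offset)
    n = len(poses) - 1
    return [p - drift * (i / n) for i, p in enumerate(poses)]

# Dead-reckoned poses with drift: the robot returned to its start,
# but odometry claims it ended up at 0.8 m.
poses = [0.0, 1.1, 2.0, 1.2, 0.8]
# Loop closure: the camera recognizes the starting location (offset 0.0).
corrected = correct_chain(poses, closure_index=0, observed_offset=0.0)
print(corrected)  # final pose pulled back to 0.0, earlier poses adjusted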

As the robot travels through an unexplored area, a low-cost sensor that pairs a visible-light video camera with an infrared depth sensor scans the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system’s software compares the features of the new image it has created, including details such as the edges of walls, with those of all the previous images it has taken until it finds a match, Fallon said.
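
That matching step can be sketched as a nearest-neighbor search over stored image descriptors. The vectors and threshold below are toy values standing in for real image features, not anything from the MIT system:

```python
# A sketch of the place-recognition step: compare a new image's feature
# descriptor against those stored for earlier keyframes and pick the
# best match above a confidence threshold.
import math

def similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(new_desc, keyframes, threshold=0.9):
    """Return the index of the most similar stored keyframe, if any."""
    scored = [(similarity(new_desc, kf), i) for i, kf in enumerate(keyframes)]
    score, index = max(scored)
    return index if score >= threshold else None

keyframes = [[0.9, 0.1, 0.3], [0.2, 0.8, 0.5], [0.4, 0.4, 0.9]]
print(best_match([0.88, 0.12, 0.31], keyframes))  # -> 0, a revisited place
```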

At the same time, the system constantly estimates the robot’s motion, using on-board sensors that measure distance traveled from the rotation of the wheels. By combining the visual information with this motion data, the system can determine where within the building the robot is positioned, and it can cancel out errors that would creep in if it relied on the wheel sensors alone, he said.
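
In its simplest form, that fusion is a variance-weighted average of the two estimates, the scalar version of a Kalman filter update. The variances below are illustrative, not measured values:

```python
# A sketch of fusing the wheel-odometry position estimate with the
# camera-based one, weighting each by its uncertainty (the scalar form
# of a Kalman update). Variances here are assumed for illustration.

def fuse(odom_pos, odom_var, visual_pos, visual_var):
    """Variance-weighted average of two position estimates."""
    gain = odom_var / (odom_var + visual_var)
    fused_pos = odom_pos + gain * (visual_pos - odom_pos)
    fused_var = (1.0 - gain) * odom_var
    return fused_pos, fused_var

# Wheel odometry drifts (high variance); the visual match is tighter.
pos, var = fuse(odom_pos=5.4, odom_var=0.5, visual_pos=5.0, visual_var=0.1)
print(pos, var)  # pulled most of the way toward the visual estimate
```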

Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene, he said.
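
One common way to fold fresh observations into an existing map, shown here only as a sketch, is a log-odds occupancy grid: each cell's belief grows more confident with repeated sightings, and a cell that has emptied out (a chair that was moved, say) is gradually marked free again. The sensor-model probabilities are assumed values:

```python
# A sketch of updating a map cell with new observations via the
# log-odds occupancy-grid rule. HIT/MISS are assumed sensor-model
# probabilities, not parameters from the MIT system.
import math

HIT, MISS = 0.9, 0.3  # assumed P(occupied) after a hit / a miss

def update_cell(log_odds, hit):
    """Accumulate one observation into the cell's log-odds belief."""
    p = HIT if hit else MISS
    return log_odds + math.log(p / (1.0 - p))

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

cell = 0.0  # unknown: p = 0.5
for observation in [True, True, False]:  # object seen twice, then gone
    cell = update_cell(cell, observation)
    print(round(probability(cell), 2))
```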

Ultimately, the algorithms could allow robots, or blind people, to plan their own routes through buildings and other unknown environments, even as obstacles around them shift.
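
Once the map is current, route planning itself is well-trodden ground. The sketch below uses breadth-first search over a tiny occupancy grid (1 marks an obstacle) to find a shortest collision-free path; real planners work in continuous space and replan as the map changes:

```python
# A sketch of planning a route through the updated map: breadth-first
# search over a small occupancy grid, returning a shortest 4-connected
# path around obstacles. The grid here is an arbitrary example.
from collections import deque

def plan(grid, start, goal):
    """Shortest 4-connected path from start to goal, or None."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],  # a newly detected obstacle row
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))  # routes around the blocked cells
```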