
Algorithm lets robots make faster sense of our chaotic world

A new algorithm helps robots sense their environment faster, which could help them better navigate our messy human workplaces.

Nicole Casal Moore, University of Michigan • Futurity
May 24, 2019


Researchers have developed an algorithm that lets machines perceive their environments orders of magnitude faster than previous similar approaches.

The new work takes a step toward home-helper robots that can quickly navigate unpredictable and disordered spaces.

“Robot perception is one of the biggest bottlenecks in providing capable assistive robots that can be deployed into people’s homes,” says Karthik Desingh, a graduate student in computer science and engineering at the University of Michigan and lead author of the paper in Science Robotics.

“In industrial settings, where there is structure, robots can complete tasks like building cars very quickly. But we live in unstructured environments, and we want robots to be able to deal with our clutter.”

Odd Job, one of two robots in Chad Jenkins’ lab, reaches for an object in the Beyster Building on October 27, 2016. Odd Job and its double, Cookie, can currently grab objects based on depth and color perception. Jenkins’ lab aims to discover methods for computational reasoning and perception that will enable robots to effectively assist people in common human environments. (Credit: Joseph Xu/Michigan Engineering Multimedia Content Producer/U. Michigan)

Historically, robots have operated most effectively in structured environments, behind guard rails or cages that keep humans safe and the robot’s workspace clean and orderly. A human’s environment, at work or home, is typically a jumble of objects in various states: papers across a keyboard, a bag hiding car keys, or an apron hiding half-open cupboards.

The researchers call the new algorithm Pull Message Passing for Nonparametric Belief Propagation. In 10 minutes it can compute an accurate understanding of an object’s pose—or position and orientation—to a level of accuracy that takes previous approaches more than an hour and a half.

The team demonstrated this with a Fetch robot. They showed that their algorithm can correctly perceive and use a set of drawers, even when half-covered with a blanket, when a drawer is half-open, or when the robot’s arm itself is hiding a full sensor view of the drawers. The algorithm can also scale beyond a simple dresser to an object with multiple complicated joints. They showed that the robot can accurately perceive its own body and gripper arm.

“The concepts behind our algorithm, such as Nonparametric Belief Propagation, are already used in computer vision and perform very well in capturing the uncertainties of our world. But these models have had limited impact in robotics as they are very expensive computationally, requiring more time than practical for an interactive robot to help in everyday tasks,” says Chad Jenkins, a professor of computer science and engineering and a core faculty member at the Robotics Institute.

‘Push messaging’

Researchers first published the Nonparametric Belief Propagation technique, along with the similar Particle Message Passing technique, in 2003. They’re effective in computer vision, which attempts to gain a thorough understanding of a scene through images and video. That’s because two-dimensional images and video require less computational power and time than the three-dimensional scenes involved in robot perception.

These earlier approaches understand a scene by translating it into a graph model of nodes and edges, which represent each component of an object and the relationships between them. The algorithms then hypothesize, or create beliefs about, component locations and orientations when given a set of constraints. These beliefs, which the researchers call particles, vary across a range of probabilities.
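As a rough illustration, the graph-and-particles representation described above could be sketched as follows. Everything here is hypothetical for the sake of the example: the class names, the 2D pose range, and the particle count are not taken from the paper’s implementation.

```python
import random

class Node:
    """One component of an object (e.g. a drawer), with a particle belief."""

    def __init__(self, name, num_particles=100):
        self.name = name
        # Each particle is one hypothesis of the component's pose
        # (x, y, orientation), carrying a probability weight.
        self.particles = [
            {"pose": (random.uniform(0.0, 1.0),
                      random.uniform(0.0, 1.0),
                      random.uniform(0.0, 3.14)),
             "weight": 1.0 / num_particles}
            for _ in range(num_particles)
        ]

    def estimate(self):
        # A point estimate of the belief: the weighted mean position.
        x = sum(p["weight"] * p["pose"][0] for p in self.particles)
        y = sum(p["weight"] * p["pose"][1] for p in self.particles)
        return (x, y)

# The dresser example from the article: one node per component.
dresser = Node("dresser")
drawers = [Node(f"drawer_{i}") for i in range(3)]

# Edges encode the constraints between components, e.g. each drawer
# is attached to (and must stay within) the dresser.
edges = [(dresser, d) for d in drawers]
```

The belief propagation algorithms then refine each node’s particle weights until the estimates agree with both the constraints on the edges and the sensor data.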

To narrow down the most likely locations and orientations, the components use “push messaging” to send probable location information across nodes and back. The system then compares that location information with sensor data. This process takes several iterations to ultimately arrive at an accurate belief of a scene.

For example, given a dresser with three drawers, each component of the object—in this case, each drawer and the dresser itself—would be a node. Constraints would be that the drawers must be within the dresser, and the drawers move laterally but not vertically.

The system then compares that information, which gets passed among the nodes, with real observations from sensors, such as a 2D image and 3D point cloud. The messages repeat through iterations until there is an agreement between the beliefs and sensor data.
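The push-style loop described above can be sketched on a toy model in which each component’s “pose” is a single number, so the iteration is easy to follow. The update rule, the averaging, and the 0.5 observation weight are invented purely for illustration; they are not the paper’s algorithm.

```python
def push_iteration(beliefs, edges, observations, obs_weight=0.5):
    """One round of toy push-style message passing.

    Every node pushes its current belief to its neighbors; each node
    then fuses the pushed messages with its own sensor observation.
    """
    # 1. Every node pushes its belief along every edge.
    inbox = {node: [] for node in beliefs}
    for a, b in edges:
        inbox[b].append(beliefs[a])
        inbox[a].append(beliefs[b])

    # 2. Each node blends the incoming messages with its observation.
    updated = {}
    for node, belief in beliefs.items():
        msgs = inbox[node] or [belief]
        neighbor_mean = sum(msgs) / len(msgs)
        updated[node] = ((1 - obs_weight) * neighbor_mean
                         + obs_weight * observations[node])
    return updated

# Toy scene: a dresser and one drawer, starting from poor guesses.
beliefs = {"dresser": 0.0, "drawer": 1.0}
edges = [("dresser", "drawer")]
observations = {"dresser": 0.2, "drawer": 0.8}

# Repeated iterations pull the beliefs into agreement with the sensors.
for _ in range(5):
    beliefs = push_iteration(beliefs, edges, observations)
```

After a few iterations the two beliefs settle near a fixed point that balances the neighbor messages against the observations, mirroring the convergence behavior described in the article.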

‘Pull messaging’

To simplify the demands on computing, the research team used what is called “pull messaging.” Their approach turns the cacophony of back-and-forth, information-dense messages into a concise conversation between an object’s components.

In this example, instead of the dresser sending location information to a drawer only after computing information from the other drawers, the dresser checks with the drawers first. It asks each drawer for its own belief of its location, then, for accuracy, weighs that belief against information from the other drawers. It converges on an accurate understanding of a scene through iterations, just as the push approach does.
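The pull-style alternative can be sketched on the same kind of toy one-number model: rather than waiting for precomputed messages, each node queries its neighbors’ current beliefs on demand and does the weighting locally. Again, the names, update rule, and weights are illustrative inventions, not the paper’s implementation.

```python
def pull_iteration(beliefs, neighbors, observations, obs_weight=0.5):
    """One round of toy pull-style message passing.

    Each node pulls its neighbors' current beliefs and fuses them with
    its own observation locally, instead of receiving pushed messages.
    """
    updated = {}
    for node, belief in beliefs.items():
        # The node asks each neighbor for its belief on demand...
        pulled = [beliefs[n] for n in neighbors[node]] or [belief]
        # ...and performs the weighting itself.
        updated[node] = ((1 - obs_weight) * (sum(pulled) / len(pulled))
                         + obs_weight * observations[node])
    return updated

# The same toy scene as before, expressed as adjacency lists.
beliefs = {"dresser": 0.0, "drawer": 1.0}
neighbors = {"dresser": ["drawer"], "drawer": ["dresser"]}
observations = {"dresser": 0.2, "drawer": 0.8}

for _ in range(5):
    beliefs = pull_iteration(beliefs, neighbors, observations)
```

In this toy form the two styles converge to the same answer; the practical difference the researchers exploit is where the expensive belief computation happens, which in their setting makes the pull formulation far cheaper.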

To directly compare their new approach with previous approaches, the researchers tested it on a simple 2D scene of a circle with four rectangular arms hidden among a pattern of similar circles and rectangles.

The previous approaches required more than 20 minutes of processing time per iteration to pass messages, while the team’s new method took fewer than two minutes, and the speed advantage grew as the number of beliefs, or particles, increased.

In these trials, it took five iterations of the new algorithm to achieve less than a 3.5-inch average error in the location estimate of the drawers and dresser, and less than an 8-inch average error when a blanket partly obscured the dresser.

This is on par with previous approaches, and varies depending on an object’s size, number of parts, and how much is visible to sensors. Most importantly, continued iterations improve the accuracy enough for a robot to successfully manipulate objects.

“This is just the start of what we can do with belief propagation in robot perception,” Desingh says. “We want to scale our work up to multiple objects and tracking them during action execution, and even if the robot is not currently looking at an object. Then, the robot can use this ability to continually observe the world for goal-oriented manipulation and successfully complete tasks.”

The National Science Foundation supported the research.

Source: University of Michigan



This article uses material from the Futurity article, and is licensed under a CC BY-SA 4.0 International License. Images, videos, and audio are available under their respective licenses.
