A new approach to infuse spatial notions into robotics systems

a) 1% of the 2500 exploratory arm configurations mi. b) Two 3D projections of 1% of the sets Mi embedded in the 4D motor space. c) Schematic of the projected manifold and the capture of external parameters. d) 3D projection of the 2500 manifolds Mi (gray points), with surfaces corresponding to translations in the working space for different retinal orientations. Credit: Laflaquière et al.

Researchers at Sorbonne Universités and CNRS have recently investigated the prerequisites for the emergence of simplified spatial notions in robotic systems, based on a robot's sensorimotor flow. Their study, pre-published on arXiv, is part of a larger project exploring how fundamental perceptual notions (e.g. body, space, object, color) could be instilled in biological or artificial systems.

So far, the design of robots has mainly reflected the way in which human beings perceive the world. Designing robots guided solely by human intuition, however, could limit their perceptions to those experienced by humans.

To design fully autonomous robots, researchers might thus need to step away from human-related constructs, allowing robotic agents to develop their own way of perceiving the world. According to the team of researchers at Sorbonne Universités and CNRS, a robot should gradually develop its own perceptual notions exclusively by analyzing its sensorimotor experiences and identifying meaningful patterns.

"The general hypothesis is that no one gives perceptual notions to biological organisms," Alexander Terekhov, one of the researchers who carried out the study, told TechXplore. "These concepts are instead developed over time, as useful tools that help them to make sense of the vast sensorimotor data they are constantly exposed to. As a consequence, a frog's notion of space will most likely differ from that of a bat, which will in turn differ from that of humans. So when building a robot, what notion of space should we give it? Probably none of these. If we want robots to be truly intelligent, we should not build them using abstract notions, but instead, provide them with algorithms that will allow them to develop such notions themselves."

Terekhov and his colleagues showed that an environment-independent notion of space cannot be deduced from exteroceptive information alone, as this information varies greatly depending on what happens to be in the environment. Such a notion is better captured by the functions that link motor commands to the resulting changes in the stimuli that are external to the agent.

"Important insight came from an old study by famous French mathematician Henri Poincare, who was interested in how mathematics in general and geometry in particular could emerge from human perception," Terekhov said. "He suggested that the coincidence in the sensory input may play a crucial role."

The agent can move its sensors in external space using its motors. Although the external agent configuration x can be the same, its sensory experience varies greatly depending on the structure of the environment. Credit: Laflaquière et al.

The ideas introduced by Poincaré can be better explained with a simple example. When we look at a given object, our eyes capture a particular image, which will change if the object moves 10 cm to the left. However, if we also move 10 cm to the left, the image we see will remain almost exactly the same.

"This property seems miraculous if you think about how many receptors the human body has," Terekhov said. "It is nearly impossible to have the same input twice in a lifetime, yet we constantly experience it. These low-probability events may be used by the brain to construct general perceptual notions."

To apply these ideas to the design of robotic systems, the researchers programmed a virtual robotic arm with a camera at its tip. The robot noted the measurements coming from the arm's joints every time it received the same visual input. "By associating all these measurements, the robot builds an abstraction that is mathematically equivalent to the position and orientation of its camera, even though it has no explicit access to this information," Terekhov said. "The most important thing is that even though this abstract notion is learned based on the environment, it ends up being independent from it, and thus works for all environments; the same way our notion of space does not depend on the particular scene we see."
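
A minimal sketch of that idea is given below. It is a hypothetical illustration, not the authors' code: the planar two-link arm, the point-light "camera" and all numerical values are assumptions chosen only to make the coincidence grouping concrete.

```python
# Toy sketch (not the authors' implementation) of building a pose-like
# abstraction from sensorimotor coincidences, in the spirit of the study.
# Hypothetical setup: a planar two-link arm with a point sensor at its tip.
# Two different joint configurations that place the tip at the same point
# produce identical sensor readings in every environment; these
# coincidences are what the agent groups together.

import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.7                       # link lengths (arbitrary toy values)

def ik_solutions(x, y):
    """Both inverse-kinematics solutions (elbow up / elbow down) for a target."""
    c2 = np.clip((x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
    sols = []
    for q2 in (np.arccos(c2), -np.arccos(c2)):
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        sols.append((q1, q2))
    return sols

def tip(q1, q2):
    """Forward kinematics: tip position of the planar arm."""
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def sensor(p, lights):
    """Toy 'camera': intensity received at tip p from each point light."""
    return 1.0 / (1.0 + np.sum((lights - p) ** 2, axis=1))

# Exploration: sample reachable tip targets, keep both motor solutions for
# each, and record the sensor reading of every configuration in several
# randomly generated environments.
targets = rng.uniform(-1.2, 1.2, size=(200, 2))
targets = targets[np.abs(np.linalg.norm(targets, axis=1) - 1.0) < 0.6]
environments = [rng.uniform(-3, 3, size=(15, 2)) for _ in range(5)]

configs, signatures = [], []
for (x, y) in targets:
    for (q1, q2) in ik_solutions(x, y):
        configs.append((q1, q2))
        p = tip(q1, q2)
        signatures.append(np.concatenate([sensor(p, env) for env in environments]))
signatures = np.array(signatures)

# Coincidence detection: group motor configurations whose sensor signatures
# match across *all* environments. In this toy world that happens exactly
# when the tips coincide, so each group stands for one external camera pose
# that the agent never measured directly.
groups = []
for i, s in enumerate(signatures):
    for g in groups:
        if np.allclose(s, signatures[g[0]], atol=1e-9):
            g.append(i)
            break
    else:
        groups.append([i])

print(f"{len(configs)} motor configurations collapse into {len(groups)} "
      f"coincidence groups (distinct external poses).")
```

Here the two elbow-up/elbow-down solutions of every target collapse into one coincidence group, so the set of groups, rather than any individual reading, plays the role of the environment-independent abstraction described above.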

Applying the same principle in another study, the researchers successfully prompted a robot to compensate for an optical distortion caused by a lens placed in front of its camera. Typically, this would be attained by training algorithms on pairs of distorted and undistorted images.
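
For contrast, a hypothetical sketch of that conventional, supervised route is shown below; the radial distortion model, the polynomial correction and all values are assumptions, and this is precisely the kind of paired supervision the researchers set out to avoid.

```python
# Conventional baseline (hypothetical, not from the study): fit an
# undistortion map from pairs of distorted and undistorted image points.

import numpy as np

rng = np.random.default_rng(1)

def distort(p, k1=-0.25):
    """Simple radial ('barrel') distortion of normalized image points."""
    r2 = np.sum(p**2, axis=1, keepdims=True)
    return p * (1.0 + k1 * r2)

# Supervised pairs: undistorted points and their distorted observations.
undistorted = rng.uniform(-0.5, 0.5, size=(500, 2))
distorted = distort(undistorted)

# Fit a radial polynomial correction x_undist ~ f(x_dist) by least squares.
r2 = np.sum(distorted**2, axis=1, keepdims=True)
features = np.hstack([distorted, distorted * r2])   # [x, y, x*r2, y*r2]
coeffs, *_ = np.linalg.lstsq(features, undistorted, rcond=None)

corrected = features @ coeffs
print("mean residual after correction:",
      np.mean(np.linalg.norm(corrected - undistorted, axis=1)))
```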

"The tricky part of our study was that the had to complete this task by looking into the distorted images only, just like humans learn to compensate for the distortion introduced by eye glasses," Terekhov said. "We believe that the principles introduced by Poincare, which are the basis of our algorithms, could be more general and are utilized by the brain at multiple levels. We are currently exploring the possibility of using these principles to build neural networks that do not suffer from catastrophic forgetting and can gradually accumulate knowledge."

More information: Learning agent's spatial configuration from sensorimotor invariants. arXiv:1810.01872v1 [cs.LG]. arxiv.org/abs/1810.01872

Henri Poincaré, Science and Hypothesis. www.gutenberg.org/files/37157/37157-pdf.pdf

Unsupervised model-free camera calibration algorithm for robotic applications. ieeexplore.ieee.org/document/7353799

© 2018 Tech Xplore

