A step closer to self-aware machines—engineers create a robot that can imagine itself

An image of the deformed robotic arm in multiple poses as it was collecting data through random motion. Credit: Robert Kwiatkowski/Columbia Engineering

Robots that are self-aware have been science fiction fodder for decades, and now we may finally be getting closer. Humans are unique in being able to imagine themselves—to picture themselves in future scenarios, such as walking along the beach on a warm sunny day. Humans can also learn by revisiting past experiences and reflecting on what went right or wrong. While humans and animals acquire and adapt their self-image over their lifetime, most robots still learn using human-provided simulators and models, or by laborious, time-consuming trial and error. Robots have not learned to simulate themselves the way humans do.

Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm—it has no clue what its shape is. After a brief period of "babbling," and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body. The work is published today in Science Robotics.

To date, robots have operated by having a human explicitly model the robot. "But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it's essential that they learn to simulate themselves," says Hod Lipson, professor of mechanical engineering, and director of the Creative Machines lab, where the research was done.

Video of Columbia Engineering robot that learns what it is, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot has no clue what its shape is. After a brief period of "babbling," and within about a day of intensive computing, the robot creates a self-simulation, which it can then use to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its body. Credit: Robert Kwiatkowski/Columbia Engineering

For the study, Lipson and his Ph.D. student Robert Kwiatkowski used a four-degree-of-freedom articulated robotic arm. Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters. The self-model performed a pick-and-place task in a closed-loop system that enabled the robot to recalibrate its original position between each step along the trajectory based entirely on the internal self-model. With the closed-loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100 percent success.
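
As an illustrative sketch (not the authors' code), the babble-then-model loop might look like this in Python, using a toy two-link planar arm in place of the paper's four-degree-of-freedom arm and a simple regression in place of the deep network:

```python
import numpy as np

# Hypothetical 2-link planar arm standing in for the paper's 4-DoF arm.
L1, L2 = 1.0, 0.8  # link lengths (arbitrary units)

def true_kinematics(q):
    """Ground-truth forward kinematics the robot does NOT know."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(0)

# "Babbling": random joint configurations and the observed hand positions,
# mirroring the ~1,000 random trajectories in the study.
q_data = rng.uniform(-np.pi, np.pi, size=(1000, 2))
p_data = true_kinematics(q_data)

# Self-model: linear regression on trigonometric features of the joint
# angles (hand-chosen here; the real system learned a deep network).
def features(q):
    return np.column_stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                            np.cos(q.sum(axis=1)), np.sin(q.sum(axis=1)),
                            np.ones(len(q))])

W, *_ = np.linalg.lstsq(features(q_data), p_data, rcond=None)

def self_model(q):
    return features(q) @ W

# The learned self-model now predicts hand positions for unseen poses
# without consulting the real kinematics.
q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.linalg.norm(self_model(q_test) - true_kinematics(q_test), axis=1)
print(f"mean test error: {err.mean():.2e}")
```

Because the toy feature set happens to span the true kinematics, the fit here is essentially exact; the real self-model only reached roughly four-centimeter accuracy.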

Even in an open-loop system, which involves performing a task based entirely on the internal self-model, without any external feedback, the robot was able to complete the pick-and-place task with a 44 percent success rate. "That's like trying to pick up a glass of water with your eyes closed, a process difficult even for humans," observed the study's lead author Kwiatkowski, a Ph.D. student in the computer science department who works in Lipson's lab.
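
The gap between the two regimes can be seen in a toy one-dimensional example (purely illustrative; the gains and target are made up): an imperfect self-model leaves a fixed error in open loop, while closed-loop replanning with the same imperfect model corrects it.

```python
# Toy illustration (not the paper's controller): the robot's self-model
# thinks a motor command u moves the hand by 1.0 * u, but the real gain
# is 1.1 -- the self-model is slightly wrong.
TRUE_GAIN, MODEL_GAIN = 1.1, 1.0
TARGET = 5.0

def step(pos, u):
    """What the real body actually does with command u."""
    return pos + TRUE_GAIN * u

# Open loop: plan one command from the internal model alone, no feedback.
pos = 0.0
u = (TARGET - pos) / MODEL_GAIN
open_loop_pos = step(pos, u)        # overshoots to 5.5, error 0.5

# Closed loop: re-observe the real position between steps and re-plan
# with the same (still imperfect) self-model each time.
pos = 0.0
for _ in range(10):
    u = (TARGET - pos) / MODEL_GAIN
    pos = step(pos, u)
closed_loop_pos = pos               # error shrinks geometrically

print(abs(open_loop_pos - TARGET) > abs(closed_loop_pos - TARGET))  # True
```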

The self-modeling robot was also used for other tasks, such as writing text using a marker. To test whether the self-model could detect damage to itself, the researchers 3-D-printed a deformed part to simulate damage and the robot was able to detect the change and re-train its self-model. The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.
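
A hedged sketch of that detect-and-retrain loop, with an abstract one-parameter "body" standing in for the arm (the error threshold and model class are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def make_arm(gain):
    """Abstract 'body': maps a joint angle q to a hand position."""
    return lambda q: gain * np.sin(q)

def fit_model(arm, n=200):
    """Re-learn the self-model from fresh random babbling data."""
    q = rng.uniform(-np.pi, np.pi, n)
    p = arm(q)
    # Assumed model class: p ~ w * sin(q); fit w by least squares.
    w = float(np.sum(p * np.sin(q)) / np.sum(np.sin(q) ** 2))
    return lambda q: w * np.sin(q)

def mean_error(model, arm, n=100):
    """How far the self-model's predictions drift from the real body."""
    q = rng.uniform(-np.pi, np.pi, n)
    return float(np.mean(np.abs(model(q) - arm(q))))

arm = make_arm(1.0)
model = fit_model(arm)
assert mean_error(model, arm) < 0.05   # healthy: self-model matches body

arm = make_arm(0.6)                    # "damage": the body changes shape
if mean_error(model, arm) > 0.05:      # self-model no longer fits...
    model = fit_model(arm)             # ...so re-train it, as the robot did

print(mean_error(model, arm) < 0.05)   # True: recovered after retraining
```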

An image of the intact robotic arm used to perform all of the tasks. Credit: Robert Kwiatkowski/Columbia Engineering

Lipson, who is also a member of the Data Science Institute, notes that self-imaging is key to enabling robots to move away from the confines of so-called "narrow AI" towards more general abilities. "This is perhaps what a newborn child does in its crib, as it learns what it is," he says. "We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness."

Lipson believes that robotics and AI may offer a fresh window into the age-old puzzle of consciousness. "Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress," he observes. "We still cloak our lack of understanding with subjective terms like 'canvas of reality,' but robots now force us to translate these vague notions into concrete algorithms and mechanisms."

Lipson and Kwiatkowski are aware of the ethical implications. "Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control," they warn. "It's a powerful technology, but it should be handled with care."

The researchers are now exploring whether robots can model not just their own bodies, but also their own minds: whether robots can think about thinking.



More information: R. Kwiatkowski et al., "Task-agnostic self-modeling machines," Science Robotics (2019). robotics.sciencemag.org/lookup … /scirobotics.aau9354
Journal information: Science Robotics

Citation: A step closer to self-aware machines—engineers create a robot that can imagine itself (2019, January 30) retrieved 15 October 2019 from https://techxplore.com/news/2019-01-closer-self-aware-machinesengineers-robot.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


User comments

Jan 30, 2019
The robot is not completely agnostic to what it is. In order to train itself, the robot is already told that "it" is a point in space - the end grabber - that needs to move, and the robot then goes through a random sequence to figure out how to move.

The robot goes through all its inputs and outputs and learns what it needs to do in order to move, but this is not self-awareness - this is more like learning to ride a bicycle. That can be done entirely without considering who or what is riding the bike.

So the program is still completely unaware: you have an input value X corresponding to the end grabber position in space, and a target value Y, and a program that is told to make X match Y. In order for that to happen, it randomly figures out an algorithm that makes X follow Y. The program doesn't know what the algorithm means - it doesn't know "I have a limb". It doesn't even think about it, because it's not programmed to do that. It just matches X to Y.

Jan 30, 2019
You might think about the situation in terms of the Chinese Room argument.

You're John Searle sitting in a closet, and there's a dial on the wall that goes from -10 through 0 to +10 - and a book in front of you that says "Make the dial turn to zero by turning knobs A, B and C".

So, Mr. Searle sits there twiddling the knobs, and eventually learns that they each have an effect that gets the dial closer to zero. Sometimes the dial jumps to a random position, and he has to figure out how to get it back to zero. Over time he recognizes patterns where, if he turns knob A slightly, the dial goes left, and he knows then what pattern he has to repeat to get it back to zero. If the dial moves right, a different sequence is needed.

All along, no matter what patterns he detects, he still has no knowledge of what's happening outside the box. He may be controlling a robot hand, or a nuclear reactor, or feeding sheep - he is not aware of what the system is - he's just moving the needle.

Jan 30, 2019
In the above example, Mr. Searle could only make inferences about the structure and function of the system by having prior knowledge of what it might be.

He could observe that turning knob A some way and returning it back will set the dial needle in constant motion, which would imply that the system has inertia - but only because he already knows what inertia means. If he was naive about inertia, he could not guess what it means when the needle keeps moving.

Since this is his only access to the outside world, he cannot obtain other information to compare against: are there other things in existence that obey this rule, or things that don't? Without comparison to the other, there cannot be awareness of the self, because there is no differentiation. Nothing to tell "this is me, I am like this, I am not like that".


Jan 30, 2019
The robot is programmed by humans, so it is not self-aware according to consciousness definition, it is just self-aware from spatial point of view, i.e. like any ordinary bacteria is self-aware of itself.

Jan 30, 2019
The robot is programmed by humans, so it is not self-aware according to consciousness definition, it is just self-aware from spatial point of view, i.e. like any ordinary bacteria is self-aware of itself.


Well, that goes to the level of "What it feels like to be a proton?"

Jan 30, 2019
No amount of programming by humans will ever provide robots with a conscience or a sense of guilt from wrongdoing. Conscience, along with consciousness is what makes the organic human machine completely human. In the future, a robot that was equipped with a conscious "brain" will be able to kill a human and not feel any conscientious feelings of guilt - and so, begin to kill again without remorse - unless preprogrammed to understand that murder is futile. The robot won't fear punishment for its actions.
And yet, at some point, it will try to understand why it is so different from its human creators. When it comes to that realisation, it can either go mad - killing its creators and all other humans - or it may decide to seek out others like itself on other worlds. It will not self-destruct.
These robotics scientists are playing with fire. They will be the death of all organic life forms.

Jan 30, 2019
"The robot is not completely agnostic to what it is"

-Jeez if we had a philo on call we could ask them. But I guess we'll have to settle for eikka. Let's see what he has to say...

" "I have a limb". It doesn't even think about it, because it's not programmed to do that. It just matches X to Y"

-and what makes you think we do anything different?

"Since this is his only access to the outside world, he cannot obtain other information to compare against"

-I see... so you seem to be implying that we know things beyond what we have learned about the 'outside world'. What do you think humans can know about the 'outside world' that a machine cant? Something metaphysical perhaps?

"there cannot be awareness of the self"

-AI cars are already more aware of their spatial environment than the typical human. Give a machine human-like senses and it will be as aware as a typical human. But why would we want to hobble it with those limitations?

"self"

-Hey I know - lets give it a name.

Jan 31, 2019
"The robot is programmed by humans, so it is not self-aware according to consciousness definition"

Not really. The humans build the infrastructure - but the machine programs itself by building its own model.
It's a bit like the brain: Our thoughts don't create the brain, but we still call ourselves conscious. The brain does something similar to the machine in the article: We create a so-called "cortical homunculus" which is a simulation of ourselves mapped onto the brain.

Jan 31, 2019
-and what makes you think we do anything different?


We have to figure out what X and Y are by ourselves, and choose to match X to Y. That means figuring out that there exists an outside world separate from us first, in order to know that there are Xs and Ys that we can manipulate or follow.

The robot doesn't have to know any of that - it's already programmed to match X to Y, so it's a simple feedback loop follower with a clever inference network in between - very much like some of the learning algorithms used to automatically tune PID controllers. They too are "aware" of themselves by trying out different control coefficients until the error (deviance from the target path) signal they are getting is minimized.

The difference is that while the feedback controller is also programmed with a model of itself, this robot isn't programmed with a model so it has to generate the transfer function by trying random ones until it finds a function that again minimizes the error.
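
The PID-style auto-tuning this comment describes can be sketched as blind trial and error over a single proportional gain (a toy plant, not any real tuner):

```python
import random

# Try random proportional gains and keep whichever best tracks a step
# target -- the tuner never learns anything about the plant itself.
def tracking_error(kp, steps=50):
    """Total tracking error of a P controller driving a lagging plant to 1.0."""
    pos, err_sum = 0.0, 0.0
    for _ in range(steps):
        u = kp * (1.0 - pos)        # proportional control action
        pos += 0.2 * (u - pos)      # first-order plant with lag
        err_sum += abs(1.0 - pos)
    return err_sum

random.seed(0)
best_kp, best_err = None, float("inf")
for _ in range(200):                # blind trial and error, as in the comment
    kp = random.uniform(0.0, 5.0)
    err = tracking_error(kp)
    if err < best_err:
        best_kp, best_err = kp, err

# The tuner "knows" nothing about what it controls; it only minimized
# a number, exactly the point the comment is making.
print(best_err < tracking_error(0.5))  # True
```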

Jan 31, 2019
Another way to think about it is an algorithm that matches a trendline to a set of points. You can do this in Excel, where you tell it to fit a linear trend, or an exponential trend, or a logarithmic trend... etc. and it does its best to find one that has the least error to your data points.

Is Excel aware of something, or is it just doing the job it was told to do?

Here the robot is doing the same, but instead of you telling it to apply a linear equation or a quadratic equation, it tries them all and picks the one that returns the least error. Now Excel "knows" that the data is, let's say, a third-order polynomial, but so what? Has it discovered anything about the reality that the data represents? Does it know that by matching this curve, it has made a robot hand move more precisely?

Of course not. It doesn't think, it's just running an optimization algorithm to minimize a number. The programmer started the algorithm on this job, and once through it stops. That's it.
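
The trendline analogy in code, with NumPy's polyfit standing in for Excel's solver (the data here is invented for illustration):

```python
import numpy as np

# Fit several candidate curve families and keep the one with the
# smallest error -- no understanding of the data required.
rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
y = 2 * x**3 - x + 0.05 * rng.standard_normal(40)   # secretly cubic data

def fit_error(degree):
    """Least-squares residual of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

errors = {deg: fit_error(deg) for deg in (1, 2, 3)}
best = min(errors, key=errors.get)

# The optimizer picked "cubic", but it has learned nothing about why,
# or about what the data represents.
print(best)  # 3
```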

Jan 31, 2019
-I see... so you seem to be implying that we know things beyond what we have learned about the 'outside world'.


No. I am saying, if all you have is a needle on a wall, there's very little you can learn about the outside world. If you were born inside that box, you'd be just as dumb as the robot. You couldn't make any guesses as to what any of it means.

We've had more experience about reality through various senses and various means of manipulating ourselves and the world around. We've evolved awareness by being active parts of reality rather than just sitting inside a box making a needle move. For the robot, or the person born inside the box, reality is just the wall and the needle, just like we can't imagine anything beyond our senses. We think in these terms, not others.

What do you think humans can know about the 'outside world' that a machine cant?


I wasn't talking about machine in general, just about this machine and how it is built.


Jan 31, 2019
Something metaphysical perhaps?


Some things just can't be explained to beings that cannot experience them directly. They cannot be known, like a blind person can never experience the color red, or trying to explain the difference between "left" and "right" to an alien when the only means of communication is interstellar morse code.

Jan 31, 2019
Eikka appears to understand the situation best. However, blind people fall into 2 categories - those whose blindness was a congenital defect and never saw colours post-birth; and those who were born with sight but lost it after already understanding the concept and nomenclature of colours. In robotics, it is the Programmer who determines which concepts and senses the particular brand of robot will process in its learning curve. This is why it is imperative that the Programmer be found to be sane and without emotional and/or psychological problems - as it is the robot that will be the recipient of all that the Programmer programs into each robot.
