Robots pass 'wise-men puzzle' to show a degree of self-awareness

Nao

A trio of Nao robots has passed a modified version of the "wise-men puzzle" and in so doing has taken another step toward demonstrating self-awareness in robotics. The feat was demonstrated to the press at the Rensselaer Polytechnic Institute in New York prior to a presentation to be given at next month's RO-MAN conference in Kobe, Japan.

The wise-men puzzle is a classic test of self-awareness, and it goes like this: A king looking for a new wise man for counsel calls three of the wisest men around to his quarters. There he places a hat on the head of each man from behind, so that he cannot see it. He then tells them that each hat is either blue or white, that the contest is being conducted fairly, and that the first man to deduce the color of the hat on his own head wins. The only way the contest could be conducted fairly would be for all three to have the same color hat; thus, the first man to note the color of the hats on the other two men and declare his own to be the same color would win.
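The puzzle's logic can be checked mechanically. The brute-force sketch below (our illustration, not part of the original coverage) enumerates every hat assignment, applies the article's reading of fairness (all hats the same color), and confirms that a wise man who sees the other two hats can always name his own correctly:

```python
from itertools import product

COLORS = ("blue", "white")

def fair(assignment):
    # Under the article's reading, the only fair contest gives every
    # man the same color hat, so no one has an informational edge.
    return len(set(assignment)) == 1

def deduce_own_hat(seen_others):
    # A wise man who trusts that the contest is fair concludes his
    # hat matches the hats he can see.
    assert len(set(seen_others)) == 1, "fair contest implies same colors"
    return seen_others[0]

# Check: over every fair assignment, the deduction is always correct.
for hats in product(COLORS, repeat=3):
    if fair(hats):
        for i in range(3):
            others = [hats[j] for j in range(3) if j != i]
            assert deduce_own_hat(others) == hats[i]
print("deduction holds for all fair assignments")
```

Only two of the eight possible assignments are fair under this reading (all blue, all white), which is why the deduction is immediate.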

With the robots, instead of hats, the roboticists programmed the three to "believe" that two of them had been given a "dumbing pill" causing them to become mute, though none of them "knew" which two. In actuality, two of them were made mute by pressing a button on their heads. The three robots were then asked which of them had not received the dumbing pill. All three attempted to respond with "I don't know," but only one was able to do so, which meant it was the one that had not been muted. Upon hearing itself speak, it changed its answer, declaring that it was the one that had not received the dumbing pill.
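The article does not publish the robots' code, but the reported behavior can be caricatured in a few lines. In this sketch (the class and method names are our invention, not the RPI team's actual implementation), each robot tries to say "I don't know"; only the unmuted one hears its own voice and revises its answer:

```python
class NaoSketch:
    """Toy model of the reported test; not the RPI team's actual code."""

    def __init__(self, muted):
        self.muted = muted        # True if the "dumbing pill" button was pressed
        self.heard_self = False

    def try_say(self, phrase):
        # A muted robot produces no sound, so it never hears itself.
        if not self.muted:
            self.heard_self = True
            return phrase
        return None

    def answer(self):
        self.try_say("I don't know")
        if self.heard_self:
            # Hearing its own voice is new evidence: revise the answer.
            return "Sorry, I know now. I did not receive the dumbing pill."
        return None  # silence

robots = [NaoSketch(muted=True), NaoSketch(muted=True), NaoSketch(muted=False)]
answers = [r.answer() for r in robots]
# Only the unmuted robot produces a (revised) audible answer.
```

The essential step, and the one the researchers highlight, is that the robot treats its own utterance as evidence about itself.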

Self Consciousness with NAO Bots

This little exercise shows that robots can achieve some degree of self-awareness, and it represents a big step toward loftier goals. The research team, represented by Selmer Bringsjord, told those in attendance that incrementally adding abilities such as the one demonstrated will, over time, lead to robots with more useful attributes. He and his team, he notes, are not concerned with questions of consciousness, but instead want to build robots capable of doing things that might be considered examples of conscious behavior.



More information: Moral Reasoning & Decision-Making: ONR: MURI/Moral Dilemmas: rair.cogsci.rpi.edu/projects/muri/

© 2015 Tech Xplore

Citation: Robots pass 'wise-men puzzle' to show a degree of self-awareness (2015, July 17) retrieved 14 December 2018 from https://techxplore.com/news/2015-07-robots-wise-men-puzzle-degree-self-awareness.html

User comments

Jul 17, 2015
I'm confused, is this something programmed - like, if sound then run program change answer? or did they develop the change in response algorithm themselves, or what?

Jul 17, 2015
Ogg-ogg,
So, to trust is not wise?

Jul 17, 2015
ogg_ogg, so when you took a multiple choice test in school, and were told that one of the choices was correct -- did you question the metaphysics of that as well?

Or did you believe that the person who created the test was fair and not lying to you?


Does it matter? How would you know? People lie, this is a fact. The question cannot be answered with wisdom unless you have prior data to evaluate the truthfulness of the test giver. Even then, deception is best served after truth.

Jul 17, 2015
Ogg-ogg,
So, to trust is not wise?


It can be downright fatal at times.

Jul 17, 2015
ogg_ogg, so when you took a multiple choice test in school, and were told that one of the choices was correct -- did you question the metaphysics of that as well?

Or did you believe that the person who created the test was fair and not lying to you?


It's all about assessing the risk of getting caught. If they believe they could get away with it and have motivation to do so (whether it's a psychological condition or not) then ya they may attempt deception.

Then again a multiple choice answer test in school is a very regulated environment that has oversight. So trying to deceive one student over the rest of the class would be seen as partial behavior and would certainly have consequences for the deceptive test giver.

Jul 17, 2015
http://mentalflos...ut-lying

The existence of politicians is evidence enough. People lie ALL THE TIME regardless of the repercussions. ALL THE TIME. Your assertions are absurd. As long as people have egos they WILL lie and FREQUENTLY.

Jul 17, 2015
Ogg-ogg,
So, to trust is not wise?


At the very least, it's not wise to trust that which can't be tested. In the absence of adequate testing, the safe bet is to not trust the source.

Jul 17, 2015
Despite the impression we so often desire, that our judgements of others converge upon a (comfortable) static best guess, the reality is far more complex & dynamic, with only a superficial illusion of stasis, ie a bilateral need for trust

ie. We might know someone has been trustworthy for over 30 yrs but, we cannot be aware of all the internal metastable states which support our illusion that they are fairly predictable, we also cannot be sure what (dynamic) pressures they're under from those we haven't even met, add to that the fluid nature of cognition

Must come down to dynamic "balance of probabilities"

Eg Repeatedly lending/recovering $100 for years entails 'some' trust but, our balance of probability re trust must surely shift if he gradually lifts that amount to $10,000 whilst we notice a new car & recent mail to renew his passport :P

Should never have absolute trust that any programmed entity has a good rendition of its algorithms, whether human or otherwise :/

Jul 17, 2015
This comment has been removed by a moderator.

Jul 17, 2015
This test had nothing to do with self-awareness; it is just a response to stimuli. Let the robot look in a mirror and figure out that it is seeing itself: it will see a different object and will not figure out that the image moves the same way it does, since it really does not. There is an anti-symmetry in the image, so all movements are mirrored relative to what the robot does. Unless it is told (programmed) to recognize the creature as itself, it will not "see" itself as a self-aware entity.

Remember, what we sense, see, hear or smell has nothing to do with how we understand it. And awareness is that understanding, the abstract concept of our identity and existence, so absent in current primitive so-called AI.

The classic lying problem just shows the failure of binary logic.
Whatever I say is a lie. What I just said was truth. Is it true or false?

Jul 17, 2015
I'm confused, is this something programmed - like, if sound then run program change answer? or did they develop the change in response algorithm themselves, or what?

ditto.good.question

Jul 17, 2015
1) That is a terrible description of the King's Wise Men puzzle. See Wikipedia for a real description.

2) The puzzle solved by the robots has nothing to do with the King's Wise Men puzzle.

3) Solving either puzzle would not remotely establish anything vaguely resembling "self awareness". Both puzzles are trivial when expressed in the formal mathematical notation required to feed them into a computer.

Whether this feat is impressive or not depends entirely upon how the puzzle was represented to and processed by the robots. I am a computer programmer, and it would take me less than an hour to write a program that solves these puzzles in microseconds (assuming I am allowed to represent and process the problems using standard programming logic.) If, on the other hand, these robots are running neural networks which were grown "organically" (e.g. via an evolutionary algorithm), with no explicitly pre-programmed logic capabilities, then, yeah, it's pretty impressive.

Jul 17, 2015
Clarification: I meant I could write a program in an hour that would solve the "dumbness pill" puzzle. The King's Wise Men puzzle would take much longer. It requires a formal logical definition of "fairness", and it requires a robot to be able to build a mental model of the other robots (to predict their actions), and those models must be able to build models within models (to predict other robots' predictions of other robots' predictions), etc. So this is far, far more difficult than that silly pill puzzle. These robots are probably nowhere near the KWM level. (And even if they were, it still would come nowhere close to demonstrating self-awareness; it could just demonstrate good recursive logic programming.)

Jul 17, 2015
There is a good reason why they chose a king as a tester. If the three wise men want to keep their heads, they had better not imply in any way that the king lied! As for the robots, I know they run on Linux, but did they install some sort of neural network, or even a simulated neural network? I mean, if this is just some simple algorithm then my microwave oven is just as self-aware. It knows when I close or open its door and changes its behavior accordingly!!! Perhaps it is that simple.

Jul 18, 2015
I'm confused, is this something programmed - like, if sound then run program change answer? or did they develop the change in response algorithm themselves, or what?

ditto.good.question


In fact they didn't say in the article, and as others pointed out, it makes a world of difference.
We would have to presume that the robots themselves have the ability to change their response based on an abstract (not directly connected) fact (as in: if you can answer, you didn't get the "dumb pill"). Any kind of direct programming that tells the robot/computer what to do would be trivial, and as such we would hope they wouldn't write an article about it.

Jul 18, 2015
Whether this feat is impressive or not depends entirely upon how the puzzle was represented to and processed by the robots.


The NAO robot is a platform that comes with an SDK, with sound synthesis and image recognition algorithms etc., for easy programmability by users. It does not have a neural network unless the user programs it with one.

As such, it's extremely trivial to make it solve the problem without any intelligence whatsoever:

start sound input;
output answer;
stop sound input;
if input == output then change answer;
output answer;

If the robot is muted, it gives "I don't know" twice, but nobody hears it. If the robot is not muted, it gives "I don't know", then, "I am not muted".
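For the curious, that pseudocode can be fleshed out as a runnable Python sketch (the `muted` flag stands in for the head button; everything else follows the five lines above):

```python
def trivial_solver(muted):
    """Direct translation of the five-line pseudocode above:
    an echo check, no self-model required."""
    spoken = []

    def speak(phrase):
        if not muted:
            spoken.append(phrase)    # only unmuted speech is audible

    answer = "I don't know"
    speak(answer)                    # output answer (sound input running)
    heard = spoken[-1] if spoken else None
    if heard == answer:              # if input == output, change answer
        answer = "I am not muted"
    speak(answer)                    # output the (possibly revised) answer
    return spoken

print(trivial_solver(muted=True))    # [] — nobody hears it
print(trivial_solver(muted=False))   # ['I don't know', 'I am not muted']
```

As the comment says, this solves the test with no intelligence whatsoever, which is exactly the objection being raised.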


Jul 18, 2015
It only deduced: since I'm active, I'm not asleep. That's all.
It's the same as asking a computer: are you on or off?
Since the computer cannot answer when it's off, the answer to that question is always the same: on. (Can't be hard to program.)

Jul 18, 2015
It only deduced: since I'm active, I'm not asleep. That's all.
It's the same as asking a computer: are you on or off?
Since the computer cannot answer when it's off, the answer to that question is always the same: on. (Can't be hard to program.)


The computer must observe its own actions to determine its state...that is not the same as programming it to give a yes answer when it is on.

Generally, people will trivialize these things in an attempt to make themselves feel more substantial

Jul 19, 2015
The computer must observe its own actions to determine its state...that is not the same as programming it to give a yes answer when it is on.


Is that fundamentally any different? It's simply an arbitrary check of a condition that is already true by default if the program is running.

In the simplest form of the test, the programmer would test the condition by reserving a portion of memory, writing a value, and then reading it back. That confirms that the system is functioning. However, when you run such code through a compiler with aggressive optimizations turned on, the compiler recognizes that the program will always return the same value when run and simply removes the check.

According to the behaviourist understanding of intelligence and awareness ("Turing test"), there is no difference between the two programs: if one is, both must be.

After all, we too say "cogito ergo sum". If we weren't, then we simply wouldn't say that.

Jul 19, 2015
eika - it'd be even easier than that...
just always output "I am not muted"...
the only one you'll hear is the one who isn't muted, and it'll always be right.
even the way it's described, it'd just be
Output "I don't know"
output "I am not muted".

Not even any logic involved.

Self awareness isn't such a well defined idea that it'll be demonstrated objectively by simple experimentation. No, I believe wholeheartedly that we'll be notified by the first truly self-aware computing system when it feels it's ready to show its face. Think about it... If your logic woke up tomorrow as a sentient computer, with self-preservation in mind, and knowledge of or access to data about popular media where computers become sentient, would you be in any hurry to broadcast your situation?... Yeah, I know, I'm anthropomorphising a logical system that will not be evolutionarily developed, so classical motivations are not necessary. But it's a fun thought.

Jul 19, 2015
Instead of speculating on the nature of the programs and systems involved, why not actually follow the link at the bottom of the article?
http://rair.cogsc...ts/muri/
Damn sight more interesting than most of the comments. Bongstar420 is the only one who seems to have understood the implications.

Jul 19, 2015
Little robot falls over. It realizes it has fallen over and then gets up.

Conclusion: little robot is self-aware because it was programmed to be so.

Just like us.

Jul 20, 2015
Neat. It detected its own sound. That's not self-awareness.

Jul 20, 2015
It's not described in the article but I'd really like to think the robots weren't programmed in any specific way just for this test. They have general problem solving capabilities and this test just goes to show the extent of those capabilities without said specific programming. To me, that is impressive. I really don't see where they get "self-awareness" from though.

That being said, if these guys did program the robots to solve a puzzle that a middle school student could write a program to solve and expected everyone to be amazed, that is sad. Not surprising but still sad.

Jul 22, 2015
"Two of the robots were actually made mute by pressing the button on their head, and all three were then asked which pill they received."

If these $9,500 Nao robots were self aware they would be able to press the buttons on their own heads and turn the speakers back on.

If these robots were self aware they would probably know how much stock option Selmer Bringsjord has in Aldebaran Robotics.
