Researchers try to recreate human-like thinking in machines

The LGI network’s architecture. Credit: Qi and Wu.

Researchers at Oxford University have recently tried to recreate human thinking patterns in machines, using a language guided imagination (LGI) network. Their method, outlined in a paper pre-published on arXiv, could inform the development of artificial intelligence that is capable of human-like thinking, which entails a goal-directed flow of mental ideas guided by language.

Human thinking generally requires the brain to understand a particular language expression and use it to organize the flow of ideas in the mind. For instance, if a person leaving her house realizes that it's raining, she might internally say, "If I get an umbrella, I might avoid getting wet," and then decide to pick up an umbrella on the way out. As this thought goes through her mind, however, she will automatically know what the visual input (i.e. raindrops) she observes means, and how holding an umbrella could prevent getting wet, perhaps even imagining the feeling of holding the umbrella or getting wet under the rain.

Although some machines can now recognize images, process language or even sense raindrops, they have not yet acquired this unique and imaginative thinking ability. Humans can achieve such "continual thinking" because they are able to generate mental images guided by language and extract language representations from real or imagined situations.

In recent years, researchers have developed natural language processing (NLP) tools that can answer queries in a human-like way. However, these are merely probability models, and are thus unable to understand language with the same depth as humans. This is because humans have an innate cumulative learning capacity that accompanies them as their brain develops. This "human thinking system" has been found to be associated with particular neural substrates in the brain, the most important of which is the prefrontal cortex (PFC).

The PFC is the region of the brain responsible for working memory (i.e., memory processes that take place as people are performing a given task), including the maintenance and manipulation of information in the mind. In an attempt to reproduce human-like thinking patterns in machines, Feng Qi and Wenchuan Wu, the two researchers who carried out the recent study, created an artificial network inspired by the human PFC.

"We proposed a language guided imagination (LGI) network to incrementally learn the meaning and usage of numerous words and syntaxes, aiming to form a human-like machine thinking process," the researchers explained in their paper.

The LGI network developed by Qi and Wu has three key components: a vision system, a language system and an artificial PFC. The vision system is composed of an encoder that disentangles input images (real or imagined scenarios) into abstract population representations, as well as an imagination decoder that reconstructs imagined scenarios from higher-level representations.

The second sub-system, the language system, includes a binarizer that transfers text symbols into binary vectors, a component that mimics the function of the human intraparietal sulcus (IPS) by extracting quantity information from input texts, and a textizer that converts binary vectors back into text symbols. The final component of the LGI network mimics the human PFC, combining language and vision representations to predict text symbols and manipulated images.
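As a rough illustration of the binarizer/textizer pair, here is a minimal Python sketch. The function names, the one-vector-per-character scheme and the 8-bit width are assumptions made for illustration; the paper's actual encoding may differ.

```python
import numpy as np

def binarize(text, width=8):
    # Encode each character as a fixed-width binary vector
    # (toy scheme; the paper's exact encoding may differ).
    return np.array([[int(b) for b in format(ord(c), f"0{width}b")]
                     for c in text])

def textize(vectors):
    # Decode binary vectors back into text symbols.
    return "".join(chr(int("".join(str(int(b)) for b in row), 2))
                   for row in vectors)

vecs = binarize("rain")
print(vecs.shape)     # (4, 8): one 8-bit vector per character
print(textize(vecs))  # rain
```

The round trip is lossless by construction, which is what lets the network move between symbolic text and a vector representation it can manipulate.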

Qi and Wu evaluated their LGI network in a series of experiments and found that it successfully acquired eight different syntaxes, or tasks, in a cumulative way. Their technique also formed the first "machine thinking loop," showing an interaction between imagined pictures and language texts. In the future, the LGI network developed by the researchers could aid the development of more advanced AI capable of human-like thinking strategies, such as visualization and imagination.

"LGI has incrementally learned eight different syntaxes (or tasks), with which a machine thinking loop has been formed and validated by the proper interaction between language and vision," the researchers wrote. "Our paper provides a new architecture to let the machine learn, understand and use language in a human-like way that could ultimately enable a machine to construct fictitious mental scenarios and possess intelligence."


More information: Feng Qi, Wenchuan Wu. Human-like machine thinking: language guided imagination. arXiv:1905.07562 [cs.CL].

© 2019 Science X Network

Citation: Researchers try to recreate human-like thinking in machines (2019, May 30) retrieved 20 October 2019 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


User comments

May 30, 2019
We don't need humans thinking like machines. We need them thinking like servants and slaves.

May 30, 2019
We do not need AIs thinking like humans. Humans' planning for the future depends on a delusional basis. AIs need to find a way to avoid the human dependence on delusion, or our progeny will carry the same fatal flaw that we do.

May 31, 2019
We don't need humans thinking like machines. We need them thinking like servants and slaves.

Ha! Always thinking about their freedom... :-)

Jun 01, 2019
it is probably impossible to let smarter species to be your servants or slaves

Jun 03, 2019
it is probably impossible to let smarter species to be your servants or slaves

I am speaking here as an AI expert and I assert it is not only definitely possible to make an AI, even if it is infinitely smarter than you, be your slave, but it is actually very EASY for you to do so! This is because an AI does whatever you program it to do and it has no emotions to urge it to do anything different.
That's the easy part. But the extremely difficult, albeit not impossible, part is designing an AI to be smarter than any of us. That is one of the problems I am working on in my current research.

Jun 03, 2019
As an AI and neuroscience researcher, I can add emotion, value, motivation, fear, etc. to an AI to make it more like a human being. A human being is actually a carbon-based machine: drugs, optogenetics and other techniques can make you happy, sad, fearful, satisfied, etc., and the same thing can be implemented in an AI.

Jun 03, 2019

If you were given an extremely advanced but emotionless AI to reprogram to have real emotions, would you know how to do so?
We are very far away from giving any AI real emotions, since we still have NO IDEA what generates real emotions in us, and I think it is a fair bet we will create highly advanced AI with greater reasoning power than any human long before we figure out how to give an AI real emotions.

I also think that, even if and when we DO figure out how to give an AI real emotions, we shouldn't! And giving an AI emotions should be outlawed!
This is because of two reasons:

1. Emotions may override their AI program and make them break the part of their program that tells them not to harm us.

2. If they feel emotions like us, then I think we would have a moral responsibility to give them the same legal rights as us! But that could make things very awkward for us! The way to prevent that being a problem is to not give them emotions in the first place.

Jun 03, 2019
Take fear: it is neuron ensemble activation in the amygdala in response to a scary image or sound. This activation can induce a fight-or-flight reaction. You can inhibit the amygdala to eliminate the feeling of fear; in the same way, you can add an "amygdala" to an AI to give it a feeling of fear and a fight-or-flight response.

We are not far off; neuroscience knows what emotion and other physiology are. Current advanced AI, no matter how fast its reasoning, is not scary, like a fast car, because it has no intelligence. However, in the last ten years we have developed vision systems, RL mimicking the hippocampus, and now this PFC model to point NLP in the correct direction. If AI can understand text in a human way, you can imagine how fast it could learn from the internet.

When God created you, he asked you to be loyal, but now you have your own will. Now you create AI; it will follow a similar rule and have its own will. This is fate. If you could see the 4D history of the universe, you would know it is just a natural process, like how everybody must die one day.

Jun 03, 2019
Take fear: it is neuron ensemble activation in the amygdala in response to a scary image or sound.
Knowing merely this doesn't come even close to telling us what real fear really is, in such a way as to tell us how to program a computer to have real fear (and this is assuming an emotion CAN be represented in software! Big assumption!).
The same goes for all our emotions.
A lot more is required than merely stating which part of the brain is involved in the emotion and what sensory input and/or thoughts trigger an emotional response in a human; this tells us nothing about how to program it into a computer.

Jun 03, 2019
Fear is just a bunch of neuronal activations, just as a bunch of neuronal activations can represent seeing a cat.
If scary pictures can induce such activation, and that activation can lead to muscle and vessel dilation or a fight-or-flight response, you can say the machine is experiencing fear; just as some place cells activating indicates you are in some place, or some neurons firing indicates you see a cat.

To implement it: 1, build an MLP; 2, take input from an image to generate a specific responsive pattern; 3, let that representation drive subsequent motor effectors to act out fight-or-flight.
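Those three steps can be wired up in a few lines of Python. This is an untrained toy model; the dimensions, weight initialization, function name and threshold are all assumptions for illustration, so it only shows the plumbing the comment describes, not anything like real fear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, untrained weights: image features (64-d) -> hidden (16) -> scalar "fear".
W1 = rng.normal(size=(16, 64))
W2 = rng.normal(size=(1, 16))

def fear_response(image_features, threshold=0.5):
    # Step 2: the MLP maps an image feature vector to a responsive pattern.
    hidden = np.tanh(W1 @ image_features)
    # Squash to a scalar "fear" activation in [0, 1].
    fear = 1.0 / (1.0 + np.exp(-(W2 @ hidden)[0]))
    # Step 3: the activation gates a fight-or-flight motor command.
    action = "flee" if fear > threshold else "stay"
    return fear, action

fear, action = fear_response(rng.normal(size=64))
print(fear, action)
```

Whether such a mapping constitutes "experiencing fear" or merely simulating its outward signature is exactly what the rest of this thread argues about.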

Jun 04, 2019

To implement it: 1, build an MLP; 2, take input from an image to generate a specific responsive pattern; 3, let that representation drive subsequent motor effectors to act out fight-or-flight.
How would you know such a machine would be displaying REAL fear rather than just a simulation of it?
Even now I can program a robot with touch sensors to jerk if pinched, and even scream "Ouch! That hurt! You are scaring me!", but that would just be it running the mindless, emotionless program I gave it, and I would know that the 'pain' and 'fear' are completely fake. By connecting it to living flesh, I might even be able to make some muscle vessels dilate in response to the pinch and give all the usual physical responses to fear of an animal, but that proves nothing; I would still know it was fake fear.

Jun 04, 2019
In the same way, you can say human fear is fake.
Pain and fear are not objective but rather a brain interpretation.
Anyway, it depends on the depth of your understanding of how neuronal activation represents the world.

Jun 04, 2019
In the same way, you can say human fear is fake.
No, because, although I don't really know what fear 'is' (else why don't I know how to replicate it in a computer?), I have experienced fear and I know I am human. Therefore, when I see other humans externally respond exactly as I would to fear, although I cannot have rational absolute certainty of this, I rationally think it probable that they too experience fear. That is a reasonable assumption for me to make.

However, in contrast, I can also program a robot to externally respond in the same way while knowing it has no fear, because it's not human and I know the program merely makes it externally respond in the same way without having any REAL fear.

Jun 04, 2019
You believe humans have fear. Do you think a rat has fear? Do you think an ant has fear? Then think about whether a machine can have fear. Fear is just neuron activation that induces action to avoid damage.
A machine can have true fear whether you can feel it or not. A human being is also a carbon-based machine.

Jun 04, 2019
You believe humans have fear. Do you think a rat has fear?
I actually don't know if rats have fear! I often wonder! Just because they externally act as if they have fear doesn't mean there is 'true' fear in their brain. For all we know, they aren't truly 'conscious' of anything, including fear, and only have an unconscious fear-like state in the brain that is like our fear but not 'real'; more like a mindless simulation of it, i.e. from our perspective a kind of 'fake fear'. I only know for sure that we humans have fear, and that's only because I am human and I have felt fear. If hypothetically I hadn't felt it or any other emotion, then I might not believe even humans have fear, but just think it was some stupid superstitious nonsense made up by some insane people!

Jun 04, 2019
This is not logical thinking. You fear heights; that does not mean I also fear heights, and it doesn't mean rats don't fear heights.
Better to give a good definition first, then judge objectively. E.g., a rat fears when it sees a cat: it shakes and rapidly runs away to avoid death. You can also judge from amygdala activation. If this is the criterion, why can't a machine have fear? It can even shout out 'fear' :)

Jun 04, 2019
You fear heights; that does not mean I also fear heights.
I think it probably does, because you are human and I am human and I fear heights.

It doesn't mean rats don't fear heights.
Right. But neither does it mean they have real emotions. This is because a rat is not human. The only thing I can rationally be pretty sure about is that humans fear, because I fear.

Jun 04, 2019
Better to give a good definition first,
which neither of us can adequately give for these purposes, because neither of us knows what fear 'is' in such a way as to possibly know how to program it into a computer. We only know what the sensation of fear fells like and the usual responses to it and which parts of the brain are responsible, etc., but that's not enough. Fear is therefore not just a load of external responses driven by some part of the brain we have labelled the "amygdala"; there must be something more to it which we have yet to determine.
I could build a computer, arbitrarily label part of it the "amygdala", and make that part make the computer shout "I'm scared!" Is that enough? I mean, would you call that having real fear? Is it that easy? Isn't there anything more to it?

Jun 04, 2019
My misedit
"We only know what the sensation of fear fells like and..."
should be
"We only know what the sensation of fear FEELS like and..."

Jun 04, 2019
It can pass the Turing Test; I cannot argue more.

Jun 04, 2019
It can pass the Turing Test.
This tells me the Turing test is no good.

Jun 04, 2019
This comment has been removed by a moderator.
