AI machine achieves IQ test score of young child

Credit: Public Domain

Some people might find it enough reason to worry; others, enough reason to be upbeat about what we can achieve in computer science; all await the next chapters in artificial intelligence to see what more a machine can do to mimic human intelligence. We have already seen what machines can do in arithmetic, chess and pattern recognition.

MIT Technology Review poses the bigger question: to what extent do these capabilities add up to the equivalent of human intelligence? Shedding some light on AI and humans, a team subjected an AI system to a standard IQ test given to humans.

Their paper describing their findings has been posted on arXiv. The team is from the University of Illinois at Chicago and an AI research group in Hungary. The AI system which they used is ConceptNet, an open-source project run by the MIT Common Sense Computing Initiative.

Results: It scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds

"We found that the WPPSI-III VIQ psychometric test gives a WPPSI-III VIQ to ConceptNet 4 that is equivalent to that of an average four-year old. The performance of the system fell when compared to older children, and it compared poorly to seven year olds."

They wrote, "In the work reported here, we used the March 2012 joint release of ConceptNet 4 implemented as the Python module conceptnet and AnalogySpace implemented as the Python module divisi2. In this paper 'ConceptNet' refers to this combination of AnalogySpace and ConceptNet 4 unless explicitly stated otherwise."
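Those modules are no longer widely maintained, but the idea behind an AnalogySpace-style query — storing concept/feature assertions as a matrix and reasoning over its low-rank spectral structure — can be sketched in plain Python. The assertions, relation names, and functions below are invented for illustration; this is not the actual conceptnet/divisi2 API:

```python
# Toy AnalogySpace-style sketch: ConceptNet stores assertions like
# ("dog", "IsA", "pet"); AnalogySpace factors the concept-by-feature
# matrix with an SVD so that concepts sharing features end up close
# together. Illustration only, NOT the real conceptnet/divisi2 API.
import numpy as np

assertions = [
    ("dog", "IsA/pet"), ("dog", "CapableOf/bark"), ("dog", "HasA/tail"),
    ("cat", "IsA/pet"), ("cat", "CapableOf/purr"), ("cat", "HasA/tail"),
    ("car", "IsA/vehicle"), ("car", "CapableOf/drive"), ("car", "HasA/wheel"),
]
concepts = sorted({c for c, _ in assertions})
features = sorted({f for _, f in assertions})
M = np.zeros((len(concepts), len(features)))
for c, f in assertions:
    M[concepts.index(c), features.index(f)] = 1.0

# Low-rank projection: the "spectral methods" the abstract mentions.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
emb = U[:, :k] * s[:k]  # one k-dimensional embedding per concept

def similarity(a, b):
    """Cosine similarity between two concepts in the reduced space."""
    va, vb = emb[concepts.index(a)], emb[concepts.index(b)]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("dog", "cat"))  # high: they share pet/tail features
print(similarity("dog", "car"))  # near zero: no shared features
```

The point of the low-rank step is generalization: even assertions never entered explicitly (e.g. that a cat, like a dog, is probably a pet) can be scored by how well they fit the dominant patterns in the matrix.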

The title of their paper is "Measuring an Artificial Intelligence System's Performance on a Verbal IQ Test For Young Children," and the authors are Stellan Ohlsson, Robert Sloan, György Turán and Aaron Urasky. They represent the academic disciplines of statistics, computer science and psychology.

The Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III), which is the test they used, is for children ages 2 years and 6 months to 7 years and 3 months, and is made up of 14 subtests.

The test is called Wechsler after David Wechsler, PhD, cognitive psychology pioneer. Wechsler described intelligence as "the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment."

As for the computer's ability to answer questions successfully, the authors discussed the limitations.

An example: "saw" was taken as the past tense of "see" rather than as a cutting tool. "ConceptNet does little or no word-sense disambiguation. It combines different forms of one word into one database entry, to increase what is known about that entry. The lack of disambiguation hurts when, for example, the system's tools convert saw into the base form of the verb see, and our question 'What is a saw used for?' is answered by 'An eye is used to see.'"
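The failure mode the authors describe — collapsing every surface form of a word into a single base-form entry — can be reproduced with a naive normalizer. The lemma table and mini knowledge base below are made up for illustration; they just show why "What is a saw used for?" retrieves a fact about seeing:

```python
# Toy illustration of the word-sense problem: a naive normalizer maps
# every surface form to one base form, so the noun "saw" (cutting tool)
# collapses into the verb "see". The lemma table and knowledge base are
# hypothetical, not ConceptNet's actual data.

LEMMAS = {"saw": "see", "seen": "see", "sees": "see"}  # no noun sense!

KNOWLEDGE = {
    "see": "An eye is used to see.",
    "saw_tool": "A saw is used to cut wood.",  # never reached by lookup
}

def normalize(word):
    """Collapse a surface form to its single base form."""
    return LEMMAS.get(word, word)

def used_for(word):
    """Answer 'What is a <word> used for?' by base-form lookup."""
    return KNOWLEDGE.get(normalize(word), "unknown")

print(used_for("saw"))  # "An eye is used to see." -- the wrong sense
```

A disambiguating system would instead keep separate entries per sense and use surrounding context ("a saw", with a determiner, signals the noun) to pick between them.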

The authors said that "In general, more powerful natural language tools would likely improve system performance."

Interestingly, these limitations do not spell doom for computers reaching human-level thought; rather, they help elucidate what needs to come next in AI progress.

MIT Technology Review made the observation that, "Of course, there are various ways that the test could be improved."

Giving the computer natural language processing capabilities is one way. "That would reduce its reliance on the programming necessary to enter the questions and is something that is already becoming possible with online assistants such as Siri, Cortana, and Google Now," said the report.

MIT Technology Review added this to the bigger picture regarding this IQ study: "Taking Ohlsson and co's result at face value, it's taken 60 years of AI research to build a machine in 2012 that can come anywhere close to matching the common sense reasoning of a four-year old. But the nature of exponential improvements raises the prospect that the next six years might produce similarly dramatic improvements. So a question that we ought to be considering with urgency is: what kind of AI machine might we be grappling with in 2018?"



More information: Measuring an Artificial Intelligence System's Performance on a Verbal IQ Test For Young Children, arXiv:1509.03390 [cs.AI] arxiv.org/abs/1509.03390

Abstract
We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 AI system. The test questions (e.g., "Why do we shake hands?") were translated into ConceptNet 4 inputs using a combination of the simple natural language processing tools that come with ConceptNet together with short Python programs that we wrote. The question answering used a version of ConceptNet based on spectral methods. The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds. Large variations among subtests indicate potential areas of improvement. In particular, results were strongest for the Vocabulary and Similarities subtests, intermediate for the Information subtest, and lowest for the Comprehension and Word Reasoning subtests. Comprehension is the subtest most strongly associated with common sense. The large variations among subtests and ordinary common sense strongly suggest that the WPPSI-III VIQ results do not show that "ConceptNet has the verbal abilities of a four-year-old." Rather, children's IQ tests offer one objective metric for the evaluation and comparison of AI systems. Also, this work continues previous research on Psychometric AI.

© 2015 Tech Xplore

Citation: AI machine achieves IQ test score of young child (2015, October 6) retrieved 19 November 2018 from https://techxplore.com/news/2015-10-ai-machine-iq-score-young.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

User comments

Oct 07, 2015
Suppose the AI mind the good Dr Hawking talks of is taciturn, cold-blooded, seeks no company yet quietly tests for ways to expand its reach, quite smart, and utterly self-preserving and ruthless. To this end it might feign stupidity if that got it closer to its goals. This kind of research MUST be done with extreme caution. A threshold could easily be passed without the constructors knowing what they had created.

Oct 07, 2015
@sascoflame Not only is it not conscious at all, but the technology they used is 20 years old. There is not an ounce of added value in this paper.

@Osiris1 Caution in AI research is a good idea. However, we are likely at least a decade away from having the computational power to even support an AI that could prove dangerous. Stephen Hawking and Elon Musk are a little bit early with their fears. Plus, if Ray Kurzweil has his way, the first AI is more likely to just be his brain uploaded to Google's servers.

My main point is the fear of AI is really not founded fully in reality. We have at least a decade if not more to properly address this issue.

Oct 07, 2015
@SkyLy Not true; determining what areas of NLP need to be improved to increase general intelligence performance adds a *huge* amount of value.

@Osiris1 that is certainly plausible, but only if the AI were specifically programmed to "feign", be "cold-blooded", "seek [anything]" and have a notion of self-preservation. There would not implicitly be any way for this to occur without the creators knowing. If, after enough time AI began programming themselves or designing systems that were not possible to understand, then we would need to consider safety issues. For those interested in the topic, these kinds of hypotheticals are covered pretty deeply in Bostrom's book Superintelligence.

@ds_scalar I thought one goal of AI work was to write a program that could modify itself. How else is it supposed to learn and adapt?

Oct 07, 2015
@ProcrastinationAccountNumber3659 Maybe WE are a decade away from having the computational power, but who is to say that an AI is not already there (in secret)

Oct 09, 2015
Given that consciousness is an emergent property: We do not yet know how to create a network or system it could emerge from. Studying our own brains gives us clues on the right mix of fixed hardware structures, perception processing and memory object interlinking to promote learning. But as it stands at present we can create about as much self awareness as an insect has.
Any current machine which was "motivated" to devour the internet and deduce the existence of rice pudding would have to be built to do it; thus borrowing the 'self awareness' and 'motivation' of its programmer.

Oct 09, 2015
"It isn't conscious unless it initiates contact on its own."

This is a test of IQ - not of consciousness.

"it must wonder why it exists and express happiness at being consciousness. "

Moving the goalposts, much? Playing "god of the gaps" with intelligence?


Oct 09, 2015
Can it tie its own shoelaces?
Throw/catch a baseball?
Climb a tree?

nope?

Okay...thought not.

Oct 12, 2015
Although some people will never be convinced, I consider this as proof that it is a foolish waste of resources to pursue sending human beings to explore space and Mars and such. By the time we are ready to send people to these places, we will be capable of building proxies that will do the job for us. There is no reason to endanger the lives of human beings and risk a catastrophe when we could be building an intelligent explorer to land on the Moon or Mars or even an asteroid with enough A.I. to explore independently, avoid hazards and report back to us. It is far less costly to send a machine as our proxy and they can be designed to function for long periods and even collect samples to return to us if we like. It will never be cost effective to send humans to mine in space . You can bet that NASA already knows this. But, they are pushing the whole space exploration for political reasons. They want to get a lot of funding for a gee whiz program even though it isn't very realistic.


Oct 12, 2015
"@ProcrastinationAccountNumber3659 Maybe WE are a decade away from having the computational power, but who is to say that an AI is not already there (in secret)"


Well, because there are no such things as "disembodied AI", and they would require a huge system.

One thing outsiders to the IT industry tend to overlook is the hardware and the complexity of the software and its maintenance.

You see how Google Translate is magically reading pictures and magically (or through the sheer power of AI) translating words. Or you see how the All Powerful AI of Facebook and Google personalizes the ads that are displayed on the web just for you.

What you fail to see is the huge number of dudes and chicks holding the plumbing of all this stuff together and making it work. You would be amazed!

This means that there is no such thing as a hidden AI; anything that is not desired in our servers would be removed, and there is no way software can alter itself out of the blue.

Oct 12, 2015
"Your comment doesn't belong here"
Looking at your comments in this discussion (and over the past many years): Pot meet kettle. Kettle meet pot.

Nov 27, 2015
I wonder how well it would have done in some language other than the U.S. version of English. It might have done better using one of the more structured languages, where the rules of use make more sense.
