Researchers discover machines can learn by simply observing

Credit: Piotr Siedlecki/Public Domain

It is now possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for, according to researchers at the University of Sheffield.

This could mean advances in the world of technology, with machines able to predict, among other things, how the systems they observe will behave.

The discovery takes inspiration from the work of pioneering computer scientist Alan Turing, who proposed a test that a machine could pass if it behaved indistinguishably from a human. In this test, an interrogator exchanges messages with two players in a different room: one human, the other a machine.

The interrogator has to find out which of the two players is human. If they consistently fail to do so - meaning that they are no more successful than if they had chosen one player at random - the machine has passed the test, and is considered to have human-level intelligence.

Dr Roderich Gross from the Department of Automatic Control and Systems Engineering at the University of Sheffield, said: "Our study uses the Turing test to reveal how a given system - not necessarily a human - works. In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm - made of learning robots - under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators."

He added: "Unlike in the original Turing test, however, our interrogators are not human but rather computer programs that learn by themselves. Their task is to distinguish between robots from either swarm. They are rewarded for correctly categorising the motion data from the original swarm as genuine, and those from the other swarm as counterfeit. The learning robots that succeed in fooling an interrogator - making it believe their motion data were genuine - receive a reward."

Dr Gross explained that the advantage of the approach, called 'Turing Learning', is that humans no longer need to tell machines what to look for.

"Imagine you want a robot to paint like Picasso. Conventional machine algorithms would rate the robot's paintings for how closely they resembled a Picasso. But someone would have to tell the algorithms what is considered similar to a Picasso to begin with. Turing Learning does not require such prior knowledge. It would simply reward the robot if it painted something that was considered genuine by the interrogators. Turing Learning would simultaneously learn how to interrogate and how to paint."

Dr Gross said he believed Turing Learning could lead to advances in science and technology.

"Scientists could use it to discover the rules governing natural or artificial systems, especially where behaviour cannot be easily characterised using similarity metrics," he said.

"Computer games, for example, could gain in realism as virtual players could observe and assume characteristic traits of their human counterparts. They would not simply copy the observed behaviour, but rather reveal what makes human players distinctive from the rest."

The discovery could also be used to create algorithms that detect abnormalities in behaviour. This could prove useful for the health monitoring of livestock and for the preventive maintenance of machines, cars and airplanes.
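One plausible reading of this application: an interrogator trained this way has, in effect, learned what 'normal' behaviour looks like, so it can double as an anomaly detector. A minimal sketch, reusing the convention from the earlier example that an interrogator returns True for genuine-looking data (the names are assumptions, not from the study):

```python
def flag_abnormal(tracks, trained_interrogator):
    """Return the behaviour records a trained interrogator refuses to call
    genuine - e.g. sensor data from livestock, machines, cars or airplanes."""
    return [track for track in tracks if not trained_interrogator(track)]
```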

Turing Learning could also be used in security applications, such as for lie detection or online identity verification.

So far, Dr Gross and his team have tested Turing Learning in robot swarms but the next step is to reveal the workings of some animal collectives such as schools of fish or colonies of bees. This could lead to a better understanding of what factors influence the behaviour of these animals, and eventually inform policy for their protection.



More information: Wei Li et al. Turing learning: a metric-free approach to inferring behavior and its application to swarms, Swarm Intelligence (2016). DOI: 10.1007/s11721-016-0126-1
Provided by University of Sheffield
Citation: Researchers discover machines can learn by simply observing (2016, August 30) retrieved 23 January 2019 from https://techxplore.com/news/2016-08-machines-simply.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

User comments

Aug 30, 2016
"The interrogator has to find out which of the two players is human. If they consistently fail to do so - meaning that they are no more successful than if they had chosen one player at random - the machine has passed the test, and is considered to have human-level intelligence."


Wrong. Turing never proposed that the machine would be intelligent - simply that it can fake intelligence. It was NOT a question of the machine's intelligence, but a question of whether we can even tell what IS intelligent.

The point of the Turing test is that a sufficiently complex non-intelligent machine - an automaton - will be indistinguishable from a person, because it exhausts our ability to test it. In fact it's easier to fake being a person than fake being intelligent because many aren't.

Modern AI research has simply taken the argument backwards, adopting the behaviourist view of 'walks like a duck, is a duck' - which is obviously not always the case.

Aug 30, 2016
'In fact it's easier to fake being a person than fake being intelligent because many aren't.' Haha, so true. I agree it is all about approaching the limit of what we can meaningfully test. Past that, the difference between 'genuine' and 'faked' intelligence simply disappears.

What I don't understand is the following: "They are rewarded for correctly categorising the motion data from the original swarm as genuine, and those from the other swarm as counterfeit. The learning robots that succeed in fooling an interrogator - making it believe their motion data were genuine - receive a reward." That is not the same as "But someone would have to tell the algorithms what is considered similar to a Picasso to begin with", but it is similar, isn't it? At least in principle? How would the 'interrogator' successfully judge whether a painting was close to a Picasso to begin with?

Aug 30, 2016
Eikka, quite right. Also, even in the primitive Turing test formulation, it depends on what counts as intelligence in a conversation. For example, if a Nobel laureate geneticist and a physicist talk to Joe Sixpack, or even to each other, about their favorite subjects, all three of them will likely be under the impression that they are talking to a non-human.

Getting away with appearing human in small talk doesn't prove the intelligence of either an AI or a human.

Sep 11, 2016
The Friendship Cube Group is simplifying the back-end of machine learning via 22-bit fiber optic cables designed by Graeme Kilshaw. Robots have learned to transfer visual and auditory information into a simple 22-bit visual binary code called #FriendshipCube. In a visual binary system, a one is a line and a zero is a space; the lines are binary visual space-holders. Visual binary languages are recognized both by the human eye, as in the case of the I Ching, and by machine, as in the case of QR codes. Research with EEG inputs has revealed possibilities for human-machine interfaces. The friendship cube code brings together people of diverse linguistic backgrounds and nationalities to form the 22-bit visual binary brain trust envisioned by the pioneering architects of the Friendship Cube Group. The evolution of the Turing test is now going in the opposite direction: humans are further merging with robotic intelligence.
