Researchers working with swarm robots say it is now possible for machines to learn how natural or artificial systems work by observing them—without being told what to look for.
This could lead to advances in how machines infer knowledge and use it to detect behaviors and abnormalities.
The technology could improve security applications, such as lie detection or identity verification, and make computer gaming more realistic.
It also means machines are able to predict, among other things, how people and other living things behave.
The Turing test
The discovery, published in the journal Swarm Intelligence, takes inspiration from the work of pioneering computer scientist Alan Turing, who proposed a test that a machine passes if it behaves indistinguishably from a human. In this test, an interrogator exchanges messages with two players in another room: one human, the other a machine.
The interrogator has to find out which of the two players is human. If they consistently fail to do so—meaning that they are no more successful than if they had chosen one player at random—the machine has passed the test, and is considered to have human-level intelligence.
“Our study uses the Turing test to reveal how a given system—not necessarily a human—works. In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements,” explains Roderich Gross from the automatic control and systems engineering department at the University of Sheffield.
“To do so, we put a second swarm—made of learning robots—under surveillance, too. The movements of all the robots were recorded, and the motion data shown to interrogators,” he adds.
“Unlike in the original Turing test, however, our interrogators are not human but rather computer programs that learn by themselves. Their task is to distinguish between robots from either swarm. They are rewarded for correctly categorizing the motion data from the original swarm as genuine, and those from the other swarm as counterfeit. The learning robots that succeed in fooling an interrogator—making it believe their motion data were genuine—receive a reward.”
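The setup Gross describes pairs two co-evolving populations: models that try to imitate the observed swarm, and computer-program interrogators that try to tell genuine motion data from counterfeit. As a rough illustration only, the sketch below reduces this to a toy problem: the "genuine" swarm moves at a hidden speed, candidate models are candidate speeds, and interrogators are simple one-parameter classifiers. All names and parameters here are invented for illustration; the published algorithm is more general.

```python
import random

TRUE_SPEED = 0.7  # hidden rule of the observed swarm (toy assumption)


def motion_sample(speed, n=20):
    """Noisy motion readings from a robot moving at `speed`."""
    return [speed + random.gauss(0, 0.05) for _ in range(n)]


def says_genuine(center, sample):
    """Interrogator: accept a sample whose mean speed lies near `center`."""
    return abs(sum(sample) / len(sample) - center) < 0.1


def turing_learning(generations=150, pop=10):
    random.seed(1)
    models = [random.uniform(0, 1) for _ in range(pop)]  # candidate speeds
    judges = [random.uniform(0, 1) for _ in range(pop)]  # classifier centers
    for _ in range(generations):
        genuine = [motion_sample(TRUE_SPEED) for _ in range(pop)]
        fakes = {m: motion_sample(m) for m in models}
        # Judges are rewarded for calling genuine data genuine
        # and counterfeit data counterfeit.
        j_fit = {c: sum(says_genuine(c, g) for g in genuine)
                    + sum(not says_genuine(c, f) for f in fakes.values())
                 for c in judges}
        # Models are rewarded for each judge they fool.
        m_fit = {m: sum(says_genuine(c, fakes[m]) for c in judges)
                 for m in models}
        # Keep the fitter half of each population; refill with mutated copies.
        judges = sorted(judges, key=j_fit.get, reverse=True)[:pop // 2]
        judges += [min(1, max(0, c + random.gauss(0, 0.05))) for c in judges]
        models = sorted(models, key=m_fit.get, reverse=True)[:pop // 2]
        models += [min(1, max(0, m + random.gauss(0, 0.05))) for m in models]
    return models


learned = turing_learning()
```

Note that no similarity metric between model behavior and genuine behavior appears anywhere: the only training signal is whether the interrogators are fooled, which is the point Gross emphasizes.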
Gross says the advantage of the approach, called “Turing Learning,” is that humans no longer need to tell machines what to look for.
Robot paints like Picasso
Imagine you want a robot to paint like Picasso. Conventional machine learning algorithms would rate the robot’s paintings for how closely they resembled a Picasso. But someone would have to tell the algorithms what is considered similar to a Picasso to begin with.
Turing Learning does not require such prior knowledge. It would simply reward the robot if it painted something that was considered genuine by the interrogators. Turing Learning would simultaneously learn how to interrogate and how to paint.
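The difference between the two approaches comes down to where the reward signal originates. A minimal, hypothetical contrast (the function names and the pixel-list representation of a painting are invented for illustration):

```python
def similarity_reward(painting, reference):
    """Conventional approach: a hand-designed similarity metric.
    Someone must supply the reference and decide, up front,
    which notion of 'close to a Picasso' matters."""
    return -sum((p - r) ** 2 for p, r in zip(painting, reference))


def turing_reward(painting, interrogators):
    """Turing Learning: the reward is simply how many co-trained
    interrogators the painting fools into judging it genuine.
    No reference work or hand-built metric is needed."""
    return sum(judge(painting) for judge in interrogators)
```

In the first function the designer's prior knowledge is baked into the metric; in the second, the interrogators themselves are learned, so the notion of "genuine" emerges from training rather than being specified in advance.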
Gross says he believes Turing Learning could lead to advances in science and technology.
“Scientists could use it to discover the rules governing natural or artificial systems, especially where behavior cannot be easily characterized using similarity metrics,” he says.
“Computer games, for example, could gain in realism as virtual players could observe and assume characteristic traits of their human counterparts. They would not simply copy the observed behavior, but rather reveal what makes human players distinctive from the rest.”
So far, Gross and his team have tested Turing Learning in robot swarms, but the next step is to reveal the workings of animal collectives such as schools of fish or colonies of bees. This could lead to a better understanding of the factors that influence these animals' behavior, and eventually inform policy for their protection.
Source: University of Sheffield