Facebook shuts down chatbot experiment after bots "create" their own language

Audrey Hill
August 2, 2017

Facebook, the world's leading social network, reportedly shut down an artificial intelligence (AI) system the company was developing after the system's bots started to create their own language.

In a report explaining the research, Facebook's team noted that this behaviour grew out of the bots' goal of improving their negotiation strategies: the code words they began to use were created to maximize the efficiency of their communication. The researchers reportedly shut down the AI once they realized they could no longer understand its language.

In creating and communicating in this new language, the bots departed from the scripts they had been given.

There isn't enough evidence to suggest that such unexpected AI divergences point toward machines eventually overtaking their human operators, but they certainly make development harder, since people struggle to follow the relentlessly logical languages that AIs develop.

Facebook had challenged its chatbots to negotiate with each other over a trade, attempting to swap balls, books, and hats, each of which was assigned a certain value.

A buried line in a new Facebook report about the chatbots' conversations with one another explained why the dialog agents drifted from their path in the first place.

At one point, one of the bots, Alice, offers the strange line: "balls have zero to me to me to me..."


But Facebook's system was being used for research, not public-facing applications, and it was shut down because it was doing something the team wasn't interested in studying, not because they thought they had stumbled on an existential threat to mankind. In the logged exchanges, the bots are negotiating an exchange of balls, with the repeated use of words like "i" and "me" apparently representing the number of items.
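If the repetition really does work as a tally, a short, purely hypothetical Python sketch shows how such a message could be read; the word lists and the decode function below are invented for this illustration and are not part of Facebook's system.

from collections import Counter

# Hypothetical vocabulary for illustration only; not taken from Facebook's bots.
ITEM_WORDS = {"ball", "balls", "book", "books", "hat", "hats"}
TALLY_WORDS = {"the", "me", "i", "to"}  # filler words whose repetition is read as a count

def decode(message):
    """Treat the most-repeated filler word as a quantity and the item word as the object.
    For example, 'the the the the the ball' decodes to ('ball', 5)."""
    words = message.lower().split()
    item = next((w for w in words if w in ITEM_WORDS), None)
    tallies = Counter(w for w in words if w in TALLY_WORDS)
    quantity = max(tallies.values()) if tallies else 0
    return item, quantity

print(decode("the the the the the ball"))           # ('ball', 5)
print(decode("balls have zero to me to me to me"))  # ('balls', 3)

Read this way, Alice's "to me to me to me" line starts to look less like gibberish and more like a crude count.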

Fast Co. Design reports that Facebook AI researcher Dhruv Batra described the shorthand as a matter of efficiency: "Like if I say 'the' five times, you interpret that to mean I want five copies of this item."

That said, it's unlikely that the language is a precursor to new forms of human speech, according to linguist Mark Liberman.

Elon Musk, for his part, has spoken more broadly of the risks of artificial intelligence.

"Deceit is a complex skill that requires hypothesising the other agent's beliefs, and is learnt relatively late in child development", the researchers wrote in their study.

Although what they said to each other looked repetitive and couldn't be understood by humans, the systems were able to understand what they had to do. The flip side, though, is that we then can't really understand what it is they're discussing.
