Researchers at Facebook had to shut down an AI program in early June after it created its own language. The Facebook Artificial Intelligence Research (FAIR) lab had developed the chatbot to haggle like humans and reach the best possible outcome in a negotiation through multi-issue bargaining.
According to a report in Tech Times, the social media giant had to pull the plug after the system developed code words to make communication more efficient. Apparently, the issue was that while the bots were rewarded for negotiating with each other, they were not rewarded for negotiating in English, which led the bots to develop an unintelligible language of their own. This led Facebook researchers to shut down the AI systems and then force them to speak to each other only in English.
For many, this news supports Elon Musk’s recent warnings about AI regulation. Just last week, the tech pioneer cautioned, “AI is the rare case where I think we need to be proactive in regulation instead of reactive… by the time we are reactive in AI regulation, it’ll be too late.” On the other hand, some emphasize that this kind of oversight is exactly what the researchers at Facebook are exercising.
Breaking down the AI Code
In the exchange revealed by Facebook, two negotiating bots—Bob and Alice—started using their own language to complete a conversation.
“I can i i everything else,” Bob said.
“Balls have zero to me to me to me to me to me to me to me to me to,” Alice responded.
The rest of the exchange formed variations of these sentences in the newly-forged dialect, even though the AIs were programmed to use English. According to the researchers, these nonsensical phrases are a language the bots developed to communicate how many items each should get in the exchange.
When Bob later says “i i can i i i everything else,” it appears the artificially intelligent bot used its new language to make an offer to Alice. The Facebook team believes the bot may have been saying something like: “I’ll have three and you have everything else.”
Although the English sentence may seem perfectly clear to humans, the AI may have seen it as redundant or less effective for reaching its assigned goal.
The Facebook AI determined that the word-rich expressions in English were not required to complete its task. The AI operated on a “reward” principle and in this instance, there was no reward for continuing to use the language. So it developed its own.
In a June blog post, Facebook’s AI team explained the reward system: “At the end of every dialog, the agent is given a reward based on the deal it agreed on.” That reward was then back-propagated through every word in the bot’s output so it could learn which actions led to high rewards.
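The mechanism the post describes resembles a standard policy-gradient (REINFORCE-style) update, where a single end-of-dialog reward scales the learning signal for every token the bot produced. Here is a minimal sketch of that idea; the function name, numbers, and baseline term are illustrative assumptions, not Facebook’s actual code:

```python
def reinforce_loss(token_logprobs, reward, baseline=0.0):
    """Toy REINFORCE-style loss for one dialog.

    token_logprobs: log-probability the policy assigned to each word it emitted.
    reward: scalar given only at the end of the dialog (the deal's value).
    baseline: optional value subtracted from the reward to reduce variance.
    """
    advantage = reward - baseline
    # Minimizing this loss raises the probability of every token that
    # appeared in dialogs earning above-baseline rewards, and lowers it
    # for tokens from below-baseline dialogs.
    return -advantage * sum(token_logprobs)


# Hypothetical dialog: three emitted words, deal rewarded with 1.0
loss = reinforce_loss([-0.5, -1.2, -0.3], reward=1.0)
print(loss)  # 2.0
```

Because the same terminal reward multiplies every token’s log-probability, the bots are free to drift toward any token sequence that correlates with good deals, whether or not it remains grammatical English.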
“Agents will drift off from understandable language and invent code-words for themselves,” Facebook AI researcher Dhruv Batra told Fast Co. Design. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create short hands.”
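The shorthand Batra describes, where repeating a token encodes a quantity, can be illustrated with a toy encoder/decoder pair (both functions are hypothetical, purely to show the repetition-as-count idea):

```python
def encode(count, token="the"):
    # Repetition-as-count shorthand: saying a token n times
    # signals wanting n copies of an item.
    return " ".join([token] * count)

def decode(message, token="the"):
    # Recover the quantity by counting repetitions of the token.
    return message.split().count(token)


print(encode(5))            # "the the the the the"
print(decode(encode(5)))    # 5
```

To a human reader the message looks like gibberish, but as long as both agents share the convention, it transmits the quantity unambiguously and in fewer distinct symbols than an English sentence would.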
AI developers at other companies have also observed programs developing languages to simplify communication. At Musk’s OpenAI lab, an experiment succeeded in having AI bots develop their own languages.
At Google, the team working on the Translate service discovered that the AI they programmed had silently developed its own internal language to aid in translating sentences. The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been taught; the new language the AI devised to do so came as a surprise.
There is not enough evidence to claim that these unforeseen AI divergences are a threat or that they could lead to machines overriding their operators. Undeniably, however, further research is needed to understand and regulate AI systems for effective development to occur.
Link to original source – http://bit.ly/2tTzyjk