The claim: Facebook discontinued two “AI robots” after they developed their own language
It’s hard to escape artificial intelligence. From algorithms curating social media feeds to personal assistants on smartphones and home devices, AI has become part of everyday life for millions of people across the world.
The future of that human-tech relationship may one day involve AI systems being able to learn entirely on their own, becoming more efficient, self-supervised and integrated within a variety of applications and professions.
But some on social media claim this evolution toward AI autonomy has already happened.
“Facebook recently shut down two of its AI robots named Alice & Bob after they started talking to each other in a language they made up,” reads a graphic shared July 18 by the Facebook group Scary Stories & Urban Legends.
The post, which has more than 1,500 interactions, goes on to claim the two AIs created their language to “communicate faster and more efficiently.” Above the text is an image of Han the Robot, which debuted at the RISE Technology Conference in Hong Kong in July 2017.
Some elements of this futuristic tale are true. Facebook did have two AI-powered chatbots named Alice and Bob that learned to communicate with each other in a more efficient way.
But this didn’t happen recently. And Facebook didn’t shut down Alice and Bob.
USA TODAY reached out to Scary Stories & Urban Legends for comment.
Alice and Bob were negotiation chatbots
Chatbots are computer programs that mimic human conversation through text. Because most chatbots can't yet handle tasks much more sophisticated than, say, answering customer questions or ordering food, Facebook AI Research (FAIR) set out to see whether these programs could be taught to negotiate.
The result: Alice and Bob.
In a game in which the two chatbots, as well as human players, bartered virtual items such as books, hats and balls, Alice and Bob showed they could strike deals with varying degrees of success, New Scientist reported.
The post’s claim that the bots spoke to each other in a made-up language checks out.
Facebook observed the language when Alice and Bob were negotiating with each other. Researchers realized they hadn't given the bots any incentive to stick to the rules of English, so the result was seemingly nonsensical dialogue.
“Agents will drift off understandable language and invent codewords for themselves,” Dhruv Batra, a visiting researcher at FAIR, told Fast Company in 2017. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
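Batra's example can be made concrete with a small, hypothetical Python sketch. The item values, reward function and "repeat a word N times" convention below are invented for illustration and are not FAIR's actual code; the point is simply that a reward which scores only the deal, and never the wording, makes a repeated-word shorthand exactly as profitable as grammatical English.

# Hypothetical sketch: a negotiation reward that scores only the deal outcome.
# Item values and the parsing rule are invented for illustration; this is not
# FAIR's actual training code.

ITEM_VALUE = {"book": 3, "hat": 2, "ball": 1}  # assumed point values per item

def reward(items_won):
    """Score an agent purely on the value of the items it ends up with."""
    return sum(ITEM_VALUE[item] * count for item, count in items_won.items())

def interpret(message):
    """A drifted 'codeword' convention: repeating an item's name N times
    means 'I want N copies.' Nothing in the reward penalizes this phrasing."""
    counts = {}
    for word in message.split():
        if word in ITEM_VALUE:
            counts[word] = counts.get(word, 0) + 1
    return counts

# Because reward() never looks at how the request was worded, a terse repeated
# message earns exactly as much as a full English sentence asking for the same items.
offer = interpret("ball ball ball hat")
print(offer)          # {'ball': 3, 'hat': 1}
print(reward(offer))  # 5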
In a July 2017 Facebook post, Batra said this behavior wasn’t alarming, but rather “a well-established sub-field of AI, with publications dating back decades.”
Facebook didn’t ‘shut down’ bots
The post’s claim that Facebook shut down Alice and Bob for creating their own language is also misleading.
Creating chatbots that can communicate intelligently with humans was FAIR’s primary research interest. So when the bots started using their own shorthand, Facebook directed them to prioritize correct English usage.
“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize a reward,” Batra wrote in the July 2017 Facebook post. “Analyzing the reward function and changing parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI.’ If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine.”
Our rating: Partly false
Based on our research, we rate PARTLY FALSE the claim Facebook discontinued two AIs after they developed their own language. Facebook did develop two AI-powered chatbots to see if they could learn how to negotiate. In the process, the bots developed a shorthand that let them communicate more efficiently, a well-documented phenomenon in AI research. But this happened in 2017, not recently, and Facebook didn't shut the bots down; the researchers simply directed them to prioritize correct English usage.
Our fact-check sources:
— Pew Research Center, Dec. 10, 2018, Improvements ahead: How humans and AI might evolve together in the next decade
— China Daily HK, Nov. 8, 2018, World’s first AI news anchor makes ‘his’ China debut
— Dell Technologies, April 8, A.I. Powered Chatbots in the Enterprise
— New Scientist, June 14, 2017, Chatbots learn how to negotiate and drive a hard bargain
— Gizmodo, July 31, 2017, No, Facebook Did Not Panic and Shut Down an AI Program That Was Getting Dangerously Smart
— Fast Company, July 14, 2017, AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?
— Facebook Engineering, June 14, 2017, Deal or no deal? Training AI bots to negotiate
— Dhruv Batra, July 31, 2017, Facebook post
©2021 USA Today
Distributed by Tribune Content Agency, LLC.
Citation:
Fact check: Facebook didn’t pull the plug on two chatbots because they created a language (2021, July 28), retrieved 28 July 2021 from https://techxplore.com/news/2021-07-fact-facebook-didnt-chatbots-language.html