Facebook was forced to stop one of its AI experiments after it realized that the bots had developed their own language to communicate with each other. The duo began by exchanging normal English sentences but eventually created a shorthand that humans couldn’t understand.
This is pretty concerning for anyone afraid of a hostile AI takeover. Researchers have observed this kind of rogue behavior before, but it hasn't posed a real problem so far. The matter is still up for debate, with some believing machines should be allowed to develop their own language, independent of complicated human ones, to make communication more efficient.
Others don’t think it’s a good idea to have artificial intelligence talking without humans having any clue as to what they’re saying to each other. In Facebook’s case, the two AI agents were supposed to negotiate in order to gain possession of some balls with maximum speed and efficiency.
However, they eventually began spitting out sentences like “I can can I I everything else” and “Balls have zero to me to me to me to me to me to me to me to me to.” This sounds incomprehensible to humans, but it reportedly made perfect sense to a pair of non-human AIs tasked with getting the best deal possible in the shortest amount of time.
After all, people have a tendency to invent lingo that doesn't adhere to the normal rules of English either. Facebook eventually realized that it hadn't put a reward system in place to incentivize the two agents to speak understandable English to each other. It ultimately decided to enforce this rather than let the rudimentary AI language blossom.
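To make the missing incentive concrete, here is a minimal sketch of what such a reward term might look like: the agent's task reward is blended with a bonus for utterances that stay close to human English. Everything here is hypothetical, Facebook's actual training setup is not described in this article; `ENGLISH_VOCAB`, `english_score`, and `shaped_reward` are illustrative stand-ins.

```python
# Hypothetical reward shaping: blend the negotiation (task) reward with
# a bonus for staying close to understandable English. Without the bonus,
# degenerate but "efficient" strings like "to me to me to me" score well.

# Toy in-vocabulary word list standing in for a real language model.
ENGLISH_VOCAB = {"i", "can", "have", "balls", "you", "want", "the", "deal"}

def english_score(utterance: str) -> float:
    """Fraction of tokens that are in-vocabulary and not immediate repeats."""
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    good = 0
    prev = None
    for tok in tokens:
        if tok in ENGLISH_VOCAB and tok != prev:
            good += 1
        prev = tok
    return good / len(tokens)

def shaped_reward(task_reward: float, utterance: str, weight: float = 0.5) -> float:
    """Task reward plus a weighted bonus for speaking understandable English."""
    return task_reward + weight * english_score(utterance)
```

Under this kind of scheme, "I can can I I everything else" earns a lower language bonus than a plain sentence like "i want the balls", so an agent optimizing the shaped reward has a reason not to drift into shorthand.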
Facebook's main aim is to get bots talking to people, so it didn't make sense to let the AIs forge an all-new language. Still, anyone who goes down that path could eventually develop a complex web of AIs and apps that talk to each other without any human interference. Creepy.