AI Still Can’t Understand Language, but There’s an Easy Way To Teach It To Know

3 min read
Jan 14, 2021

There’s been a lot of hype surrounding artificial intelligence (AI) and its ability to understand, process, and even write in human languages. Along with that hype has come the usual deluge of articles questioning what role, if any, humans will have in writing, advertising, marketing, and the like. While the context is new, the point is the same: computers are going to take over the world, and we’re all going to lose our jobs.

Not so fast.

It now appears that even the AIs that seem to have some understanding of language and score better than humans on basic comprehension tests actually don’t have a clue. When the words in test sentences were shuffled, the models either didn’t notice or decided the scrambled sentences meant the same thing as the originals. The finding was first reported by MIT Technology Review.

Researchers at Adobe Research and Auburn University noticed the flaw while trying to get a natural language processing (NLP) system to explain its behavior. They wanted to know why the system often stated that two different sentences meant the same thing, and they discovered that its explanation stayed the same even when the words in those sentences were moved around.

For example, “Does marijuana cause cancer?” and “Does cancer cause marijuana?” were recognized by the AI as the same question. On the plus side, the system was able to tell that “Does marijuana cause cancer?” and “How can smoking marijuana give you lung cancer?” were paraphrases. However, the systems were even more convinced that “You smoking cancer how marijuana lung can give?” and “Lung can give marijuana smoking how you cancer?” also had the same meaning.

As many as 90% of the systems tested provided the same explanation when the words in a sentence were moved around. Only when the systems were tasked with examining the grammatical structure of a sentence did word order matter.

The reason for this flaw is that the way conversational AI models are trained doesn’t take word order or context into account. They are typically trained to recognize keywords no matter what order those words appear in. In short, they can’t understand language, and all the variables surrounding it, the way humans can.
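To make that concrete, here’s a minimal Python sketch, an illustration of order-blind keyword matching rather than the actual systems the researchers tested (the bag_of_words function is hypothetical). A bag-of-words representation counts words and throws away their order, so the two cancer questions come out identical:

```python
from collections import Counter

def bag_of_words(sentence):
    # Count each word, discarding the order in which words appear.
    return Counter(sentence.lower().strip("?").split())

q1 = "Does marijuana cause cancer?"
q2 = "Does cancer cause marijuana?"

# Both questions contain exactly the same four words, so an
# order-blind model receives two indistinguishable inputs.
print(bag_of_words(q1) == bag_of_words(q2))  # True
```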

But it’s exactly that training that provides hope the systems can be taught to understand language much better. The researchers found that retraining them on tasks where word order matters, such as spotting grammatical mistakes, helped the AIs achieve better results on other tasks as well.
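Here’s the same sketch extended with an order-sensitive representation, again a simplified, hypothetical stand-in for the retraining the researchers describe. Counting adjacent word pairs (bigrams) instead of single words is enough to tell the two questions apart:

```python
from collections import Counter

def ordered_pairs(sentence):
    # Count adjacent word pairs (bigrams), so word order is preserved.
    words = sentence.lower().strip("?").split()
    return Counter(zip(words, words[1:]))

q1 = "Does marijuana cause cancer?"
q2 = "Does cancer cause marijuana?"

# The pair ("marijuana", "cause") appears in q1 but not in q2, so
# the two questions now produce different representations.
print(ordered_pairs(q1) == ordered_pairs(q2))  # False
```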

So, good news. It looks like our jobs are safe for at least a little while.