Yep, I said it. Modern AI is dumber than we tend to think.

But so are humans.

This week I had an interesting conversation about intelligence. It originated from a thought expressed in Nassim Taleb’s book “Fooled By Randomness”.

In his book, Taleb proposes the introduction of a “Reverse Turing Test”. The Turing test labels a machine “intelligent” if it can be mistaken for a human. The reverse Turing test labels a human “unintelligent” if a machine can replicate his or her writing.

The nuances of human speech have never been AI’s cup of tea. How many times has a chatbot or virtual assistant frustrated you?

This sad reality started to crumble a few months ago, when OpenAI introduced a model named GPT-2. This model takes some text as input and keeps writing, maintaining the same context and style. [You can try GPT-2 at this link, and read more about it here].

GPT-2 is, hands down, the best generative text model out there. It worked so well that OpenAI decided against releasing the full trained model. A first in the history of the non-profit organisation (a company that has “open” in its own name).

The conversation around GPT-2 has focused on its implications for AI and society. I want to flip the conversation and look at what it means for humans instead.

First, we need to understand how OpenAI researchers managed to make GPT-2 so powerful. Let’s make it clear: GPT-2 does nothing more than extract patterns from text. Its “secrets” are nothing more than bigger neural networks and more data. I don’t want to discredit OpenAI researchers - training bigger NNs and using more data is NOT trivial - but that’s the gist of what they’ve done.
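If you want to see that “take text as input and keep writing” behaviour for yourself, here is a minimal sketch. It assumes the small `gpt2` checkpoint that was eventually released publicly and the Hugging Face `transformers` library - both my choices for illustration, not part of OpenAI’s original release:

```python
# Minimal sketch: load the small, publicly released GPT-2 checkpoint
# and let it continue a prompt. Requires `pip install transformers torch`.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The nuances of human speech have never been AI's cup of tea."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation is just repeated next-token prediction, sampled from the
# patterns the model extracted during training.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```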

More data and bigger neural networks mean that GPT-2 can extract patterns from text better than any other model. It can learn that “dog” and “play” often come together, which words are simple and which are fancy, and that exclamation marks call for bolder language. It gets as far as picking up on abstractions like rhymes, sarcasm and humor.

Let’s be clear: GPT-2 hasn’t understood anything at all about the concepts behind the words. All it does is pattern recognition: identify patterns in data, learn them, replicate them.
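To see what “identify, learn, replicate” means at toy scale, here is a bigram sketch of my own (the corpus is made up; GPT-2 does conceptually the same thing with a huge neural network instead of a lookup table):

```python
import random
from collections import defaultdict

# A made-up toy corpus. Every "pattern" the model can ever produce
# is already in here.
corpus = ("the dog likes to play . the dog likes to sleep . "
          "the cat likes to play").split()

# Learn: record which word follows which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Replicate: keep sampling the next word from the observed patterns.
word = "the"
text = [word]
for _ in range(8):
    word = random.choice(follows[word])
    text.append(word)
print(" ".join(text))
```

Everything this toy prints is a recombination of what it has already seen; it has no idea what a dog is. Scale the lookup table up to a neural network with over a billion parameters trained on tens of gigabytes of text, and you get GPT-2.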

If you look at GPT-2 demos, you’ll have to admit that it’s pretty damn good. I myself struggle to believe that a machine wrote some of these texts. The scary thought is what this means for us. If all it took for GPT-2 to simulate human speech was recognizing patterns, then that’s all we do as well.

How many “interesting” blog posts have you read that are just a rearrangement of some pattern? How many journalists write content that is 99% seen elsewhere and 1% original? Am I being pretentious by thinking that this blog post is original? Its thoughts and style are probably not novel either.

GPT-2 taught me that most of our words have already been said and most of our thoughts have already been thought. That is scary. Most of what we think is smart is nothing more than pattern exploitation.

But this thought is not just scary. It’s also a stimulus to be bold. Once you accept that most of your thinking and work is not original, you’re free to look for outliers. Which thoughts have never been thought? What work has never been done?

If most of our thoughts are correlated in some way, outlier thinking is all that matters. And that’s what we should focus on.

So here’s one for you: by teaching machines to be human, machines taught me something back. I think this is a fascinating thought, maybe even original. Or maybe not.