Why I think everyone lost in the OpenAI drama

OpenAI is the key player in the industry, and while I don’t know what actually happened in their office there are just two possible macro outcomes...

The only topic in tech for the past week has been the OpenAI drama. My team at AI Academy did a great job at covering all the steps of this chaos in yesterday’s newsletter, but in short, this is what happened:

  • On Friday, OpenAI’s board fired Sam Altman (CEO), accusing him of lying to them

  • Over the weekend, Microsoft hired Sam Altman (with announcements from Satya Nadella himself)

  • The vast majority of OpenAI’s employees threatened to quit

  • Yesterday, five days after being fired, Sam Altman got his job at OpenAI back

What the hell happened?

There are plenty of funky hypotheses floating around the internet. Here are a few (which, by the way, I think are all wrong):

  • OpenAI has some form of super-intelligent AI ready to take over and there’s disagreement over what to do with it

  • OpenAI trained GPT-4 on data scraped by Chinese hackers, Sam Altman didn’t disclose it to the board, and after the NSA figured it out they fired him

  • Sam Altman is Jewish and the interim CEO Mira Murati is Muslim

And so on. As you can see, people on the internet got really creative.

Honestly, I have no idea who’s right, and we’ll probably never know. But one thing we do know for sure: everyone is losing because of this chaos.

The number one worry I hear from customers looking into adopting AI is whether they can trust the technology and the companies building and selling it. OpenAI is the key player in the industry, and while I don’t know what actually happened in their office, there are only two possible macro outcomes:

  • The board was right: Sam Altman lied to them → OpenAI is run by a man who lied to his own board.

  • The board was wrong: Sam Altman did not lie to them → OpenAI’s board invented a fact about their CEO (that he lied) to get rid of him.

Either way, you don’t need to know what actually happened to see that this is bad news.

I believe AI is already good enough to completely change the way we work, helping us do our best work in the shortest time possible. I want to see that world, and the biggest hurdle now is successfully managing the migration from a “raw” technology (LLMs) to real products people can use. In other words, we have electricity; now we need to invent lightbulbs, dishwashers, TV screens, and so on.

And we can’t do it without trusting each other.

P.S.: My company AI Academy has opened early bird registrations for the 4th edition of the Master in Prompt Engineering. I’ll be teaching again after the previous sold-out editions. You should reserve your spot now.