The future of AI businesses

Competing in AI vs Competing ON AI

Disclaimer: This article is a long one, but imho worth a read - I hope you’ll enjoy it.

Before we start, next Tuesday we’ll close the early bird offer on the 5th edition of my favorite program: the Master in Prompt Engineering. I hope I’ll see you in class.

What comes to mind if I mention “Apple”? If you’re a normal person, probably a fruit, but since you’re following my crazy newsletter you’re probably thinking about a technology company that is focused on making beautiful products. A company focused on design, user experience, and profitability.

Now, what comes to mind if I tell you “MIT computer science research lab”? Probably something different. You’re probably imagining an office full of scientists typing on large screens and writing math on whiteboards, with no commercial interest and a very long-term agenda.

Now, what comes to mind when I say OpenAI? This is a tricky one. If I had asked you this question three or four years ago, you probably would’ve said “Open who?”. If I had asked AI practitioners, they would’ve said “oh, definitely a research company”, something like the MIT computer science lab I mentioned before.

But if I ask this question now, it’s unclear how we should answer it. Is it still a research company focused on foundational research, or a product company focused on profit? Is it possible to be both? Obviously, Apple has research teams and an MIT research lab may occasionally ship a product, but they are fundamentally different organizations with different setups, different goals, and different cultures.

Why are we even asking this question? Well, I think it can help us answer what a company should focus on to be successful in the AI era.

And oh boy, do I have some opinions on this.

Let’s start with a little bit of history. OpenAI started as a non-profit, but at some point it became evident that it would be impossible to raise the kind of money they needed to train huge AI models, because non-profits aren’t attractive to investors. So they switched to a “capped-profit” model (investors can’t make unlimited returns from them, “just” up to 100x their investment). The bet was that they could remain a research lab, but with investors who expected to make some money at some point in the future.

Then ChatGPT came out. Most people have forgotten this, or maybe never knew it, but ChatGPT was initially a research preview. It was just a demo to show people what the company was capable of doing.

OpenAI didn’t expect the success they got. This is how their co-founder Greg Brockman described ChatGPT’s launch one year later.

Fast-forward to this week, in which two things happened:

  • CEO Sam Altman said they are “extremely focused on AGI” (artificial general intelligence)

  • They made a major release for ChatGPT, adding new “memory” features that allow users to easily ask it to remember specific preferences and facts, so that their experience is always tailored and personalized

Let’s talk about the latter. That’s pretty much a product announcement focused on user experience and on creating a “lock-in” for users. Imagine you’ve spent months personalizing ChatGPT to make it perfect for you, and Google releases a new AI model that is technically better, but that you have to personalize from scratch. Are you going to switch? Probably not.

The fun part about this announcement is that it’s product-startup strategy 101. The kind of strategy they’d teach you if your startup got selected for an accelerator, something like Y Combinator.

Which, by the way, was led by Sam Altman.

So by now, I should have laid out a few elements that make the question “What kind of company is OpenAI?” much easier to answer:

  • They are led by Sam Altman, someone who has advised and funded high-growth product startups for years.

  • They stumbled upon the most viral product in history, ChatGPT.

  • They’re releasing features that are 100% about UX and “product lock-in”, like memory and personalization.

This sounds like a pivot to a product company to me. Which leads to another interesting question: why?

The obvious answer is that if you stumble upon the most viral product in history, money is kinda nice. But there’s a more subtle take.

As LLMs like ChatGPT or Gemini get closer to “almost perfect”, it becomes increasingly hard to compete with others on AI - 99% of people don’t care about a slightly larger context window or a 0.1% difference in accuracy on tech benchmarks.

If it becomes impossible to compete with others on research, you can do two things:

  • Drop prices → make your tech a commodity

  • Create artificial barriers and compete in the “classic” startup product game → become a “normal” startup company like Airbnb or Dropbox.

OpenAI is doing both while trying to do research on the side. We talked about the product strategy; now let’s cover price drops. I went through the pricing for GPT-3 and GPT-3.5-turbo over the last two years. OpenAI went from a cost of $60 per 1M tokens to something between $0.50 and $1.50 (they now differentiate between input and output tokens).

So basically a ~100x reduction in costs in 2 years.
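If you want to sanity-check that number, here’s a quick back-of-the-envelope calculation using the approximate list prices above (the exact factor depends on which new price tier you compare against):

```python
# Back-of-the-envelope check on the price drop described above.
# Figures are approximate list prices per 1M tokens, not an official comparison.
old_price = 60.0                             # GPT-3 (davinci) era, $ per 1M tokens
new_price_low, new_price_high = 0.50, 1.50   # GPT-3.5-turbo era, $ per 1M tokens

print(f"{old_price / new_price_high:.0f}x")  # ~40x  (vs. the higher new price)
print(f"{old_price / new_price_low:.0f}x")   # ~120x (vs. the lower new price)
# Somewhere in that range sits the "~100x in two years" ballpark.
```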

At the beginning of this article, I promised there was going to be some form of practical advice for businesses. What can we learn from this analysis of OpenAI’s strategy?

My take is that we’re entering a phase in which AI is good enough to be useful in the real world, and the technology itself is already so commoditized that what makes a business successful is the product around the technology, not the technology itself.

I remember pitching investors months ago, and some of the least sophisticated ones were wondering whether we had some “proprietary technology” or were “just a wrapper around OpenAI’s API”. Spoiler: everything will be a “wrapper around an API”. ChatGPT itself is a wrapper around OpenAI’s own API.
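To make that concrete, here’s a minimal sketch of what “a wrapper around the API” can look like in practice. It uses the official openai Python client; the model name, prompt, and “memory” handling are illustrative assumptions on my part, not how ChatGPT actually works under the hood.

```python
# A minimal sketch of a product that is "just a wrapper around OpenAI's API".
# Assumes the official `openai` Python client (v1.x); the model name, prompt,
# and "memory" handling are illustrative, not OpenAI's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assistant_reply(user_message: str, remembered_facts: list[str]) -> str:
    """The 'AI' is one API call; memory, UX, and lock-in are product work around it."""
    system_prompt = (
        "You are a helpful assistant. Known user preferences:\n"
        + "\n".join(remembered_facts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

The moat, if there is one, lives in everything around that one function: the preferences you remember, the workflows you build, and the experience you design.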

We’re going back into a world where the companies who win are the ones who execute best on the “classic” product startup playbook: focus on UX, build network effects, and create lock-in. OpenAI is doing the same with ChatGPT (UX = personalization, network effects = GPTs, lock-in = memory).

Is your business doing it?

Thank you for reading this far! If you’re interested in building useful AI products, I think you’ll love the Master in Prompt Engineering. The next edition starts on the 4th of March, but early bird pricing expires on Tuesday. Join now before the price increases.