How OpenAI sees the future vs how I see it

Previously known as Tech Pizza 🍕

Hi all,

This is the first week without Tech Pizza! We miss it, but we’re proud of this next iteration. You should have received the AI Academy newsletter yesterday (if you didn’t, check your inbox and let me know by replying to this email). I would love to hear your feedback! Personally, I love the glossary and the educational pill section.

For this first edition of my own newsletter (“Diaries of an AI Troublemaker”), I want to share my thoughts on OpenAI’s announcements from Dev Day. We covered them in the AI Academy newsletter; here I want to give you some personal takes.

I’ve had so many emotions about these announcements because I sit in a bit of a special spot. I run two AI companies: AI Academy, which helps people embrace AI, and a new AI product startup (still in stealth mode, can’t wait to share more).

Lots of people commented on the announcements saying that “OpenAI killed thousands of startups” and I have to admit: initially I agreed. These are the two things that scared me:

  • GPTs: custom versions of ChatGPT that anyone can build without writing code, distributed and sold through some sort of “OpenAI App Store”. GPTs create a new distribution channel that may make it easier for startups to reach users (yay!), but I suspect most of the revenue will go to OpenAI (boo!), plus OpenAI would own the customer relationship, and that’s a huge liability (extra boo!).

  • The Assistants API lowers the barrier for developers to build AI products even further. It’s 10x easier now to keep conversation memory (letting the AI remember users’ previous messages) and upload custom documents (see the quick sketch after this list). These were features I had built for my new startup, and now OpenAI promises that anyone will be able to replicate them without my “AI skills”. Cool for democratization (AI Academy is happy!), but a lower barrier to entry means more competition for my new startup.
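
To make that concrete, here’s a minimal sketch of what this looks like with the beta Assistants API as announced at Dev Day: upload a document, attach it to an assistant with retrieval enabled, and let a thread carry the conversation memory for you. Exact method names and parameters may have changed since the beta, and the file name and model below are placeholders.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a custom document the assistant can search over ("knowledge.pdf" is a placeholder).
doc = client.files.create(file=open("knowledge.pdf", "rb"), purpose="assistants")

# Create an assistant with the retrieval tool and the uploaded file attached.
assistant = client.beta.assistants.create(
    name="Docs helper",
    instructions="Answer questions using the attached document.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[doc.id],
)

# A thread stores the whole conversation, so "memory" comes for free.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What does the document say about refunds?",
)

# Run the assistant on the thread and poll (naively) until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Print the assistant's reply (the newest message in the thread).
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

Before this, you had to chunk documents, manage embeddings and a vector store, and stitch the conversation history back into every prompt yourself. That’s the barrier that just got lowered.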

After the initial “wow” factor of the demo, I started thinking about what these things mean for the future of AI and tech in general.

I’ll focus mostly on GPTs. GPTs are not particularly new if you think about it: they are a new, more customizable version of plugins (with a marketplace economy). However, plugins were a failure. I think Sam Altman himself said they failed because “people want the ChatGPT experience in other products, rather than other products in the ChatGPT experience”, and I don’t think GPTs solve that problem.

The other big bet OpenAI is making is that the future of AI interactions will be chat-based. I disagree with that. Not everything needs to be a chat: I don’t want to talk to a computer, I want computers to solve my problems, period. And most of the time I’d argue the most efficient way to get something done is not through a chat.

Why did they release GPTs then? I suspect there are two reasons:

  1. People struggle to find use cases for ChatGPT. It may seem weird to you, but that’s my experience from training thousands of people in companies. You can do ANYTHING with that tool, which is overwhelming for many people who just need 2-3 things done. By creating narrower, more specialized experiences, OpenAI may boost adoption.

  2. To create lock-in and network effects. There was something odd in the keynote: while OpenAI bragged about 92% of Fortune 500 companies using their technology and 100M weekly active users for ChatGPT (without specifying how many are paying), they also cut their API prices by up to 3x. Why the hell do you aggressively cut your pricing if you’re winning? Maybe you’re not winning? My thesis is that they’re facing competition from new AI products and other LLM providers, and they need to create lock-in for customers.

So all of that being said, what do I think will happen?

I think GPTs will be a lukewarm success: they’ll help some people get more out of ChatGPT, but they won’t be a disruptive force in the industry and won’t radically change the landscape. However, they’ll still kill thousands of “startups” that were trying to market very simple experiences with very little value added compared to ChatGPT.

This brings me to the most pressing point for me: what should a startup do to succeed, and what’s the future of tech and AI? I think there’s no more space for simple, incremental improvements. The ease with which people can now create their own GPT or build new products with the Assistants API means that we’ll see another explosion of simple, tiny products. If that’s what you want to do, good luck with that! I don’t want to compete with thousands of other hard-to-distinguish products, fighting for customer attention and competing on cost and market share.

I think from now on it’s go big or go home: either you completely rethink how an industry works through generative AI, or you might as well give up. In some sense, there’s more risk in being conservative and making small bets than in aiming high and shooting for the stars.

And that’s what I’ll do ✨

Upcoming events

Upcoming opportunities to chat about AI or join an educational program with me:

  1. Analyzing the new OpenAI Products - What is the future of AI? (open webinar, Friday 10th, 5pm CET) 👉 Join here

  2. Building a Gen-AI prototype in 1 hour without code (open Live Masterclass, November 21st, 6pm CET) 👉 Register here

  3. AI Productivity Workshop (99€ fee, November 18th, 3:30pm CET) 👉 Lock your seat

  4. Get Early Access to AI Academy’s 4th edition of the Master in Prompt Engineering and design an AI product. 👉 Join the waiting list