🍕 Is AI profitable? 🤔
OpenAI's new AI-generated image detection tool, Amazon's AI image generation tool, Airbnb uses AI to detect Halloween parties, and more!
Hey there,
I’ve already announced that a few changes are coming to this newsletter soon. You’ll find out what they are in the next edition, so stay tuned 🎉
In the meantime, we’ve kickstarted the new edition of the Master in Prompt Engineering with 30 amazing new students. The waiting list for the fourth edition is open - I recommend you join it because seats sell out fast and people on the waiting list get to join the program first (with a discount).
This week’s news:
Is AI profitable? 🤔
Pizza Bytes 🍕: OpenAI's new AI-generated image detection tool, Amazon's AI image generation tool, Airbnb uses AI to detect Halloween parties, and more!
Now let’s get started 🕺
Is AI profitable? 🤔
We’ve seen how hundreds of companies have been rushing to integrate AI into their products. However, with great power comes great financial responsibility.
Most companies have integrated AI into their products by leveraging providers like OpenAI. These providers charge per usage: every time a user does something that requires an AI call, your company gets billed. In the case of OpenAI, pricing depends on the model. The most expensive model (GPT-4) costs around $0.045 for every 750 words (the actual calculation is a bit more complex; this is an average).
Let’s put ourselves in the shoes of a startup selling a “writing copilot” tool that charges users $10/month. Our startup leverages GPT-4 and has built an amazing user experience around it, with features built specifically for writing. Let’s say one of our users writes a 100,000-word book with our tool. That means we’d have to pay roughly $6 to OpenAI to use GPT-4 and plug its outputs into our product, leaving us a $4 margin. What if that user writes 5 books in a month? We’ve lost $20 on them that month.
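If you like to see the math spelled out, here’s a minimal back-of-the-envelope sketch in Python. The constants are just the illustrative numbers from the example above (roughly $0.045 per 750 words and a $10/month subscription), not real pricing:

```python
# Back-of-the-envelope unit economics for the "writing copilot" example above.
# Assumptions (illustrative, not real invoices): GPT-4 averages ~$0.045 per
# 750 words generated, and the product charges a flat $10/month.

COST_PER_750_WORDS = 0.045   # average API cost for ~750 words
SUBSCRIPTION_PRICE = 10.00   # what one user pays per month

def monthly_margin(words_generated: int) -> float:
    """Profit (or loss) for one user in one month."""
    api_cost = (words_generated / 750) * COST_PER_750_WORDS
    return SUBSCRIPTION_PRICE - api_cost

# One 100,000-word book: ~$6 of API cost, ~$4 of margin.
print(f"1 book:  {monthly_margin(100_000):+.2f} $/month")
# Five books in a month: ~$30 of API cost, ~$20 of loss.
print(f"5 books: {monthly_margin(500_000):+.2f} $/month")
```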
This isn’t just a made-up scenario: GitHub Copilot (a code-writing assistant) costs $10/month, and rumors say that in its first few months the company was losing on average more than $20 a month per user, with some users costing the company as much as $80 a month.
The classic startup playbook is to “focus on growth now, think about profitability later” - if you have enough cash. Some of this cash may be running out, however. We’re already seeing many startups move away from OpenAI to cheaper alternatives.
Does this mean that AI is a big bubble that will burst soon? Yes and no. On one side, lots of hype has led to startups that will never be profitable because they’re using expensive tech to deliver marginal value to users (think “delivering a pizza with a Lamborghini”).
On the other side, I believe AI has the potential to truly redefine industries, and the companies that pull this off will deliver so much value (and customers will pay so much for it) that the cost of running these large models will be totally sustainable.
In the meantime, startups should focus on a few things to be successful in this AI craze:
Focus on delivering real value (as mentioned above)
Stay on top of technological innovation, looking at new AI technologies that are more cost-efficient
I believe that competition will bring costs down and make AI close to free (I’ve written about it before). It will take time, though. Try to survive long enough :)
Pizza Bytes 🍕
Investment firm Thrive Capital is set to acquire OpenAI shares from employees, raising the company's valuation to at least $80 billion. This deal marks a significant increase from a transaction made six months ago and further strengthens Thrive's investment in OpenAI.
OpenAI is debating whether to release a tool that can determine if an image was generated by their AI art model, DALL-E 3. The tool's accuracy is high, but OpenAI is hesitant due to concerns of mislabeling images and the ongoing philosophical question of what qualifies as an AI-generated image.
Reddit is considering blocking Google and Bing's web crawlers if AI companies like OpenAI don't pay for scraping Reddit content to train AI models. This could result in Reddit posts and comments no longer showing up in search engine results, potentially impacting how people find information online.
A tool called Nightshade allows artists to add invisible changes to their artwork that poison AI models trained on it, making their behavior chaotic and unpredictable - for example, turning dogs into cats or cars into cows in generated images. This tool could be a game-changer in protecting artists' rights and making AI companies think twice before using their work without permission.
Amazon Ads has launched a generative AI tool for image generation, making it easier for advertisers to create engaging product listings with lifestyle and brand-themed images.
Motorola unveiled AI features at Lenovo Tech World '23, including a flexible device with an adaptive display that can be molded and AI concepts like generative theming and an on-device personal assistant named MotoAI.
Airbnb is using AI to deploy an anti-party system for Halloween in the US and Canada. The system analyzes various signals to identify higher-risk bookings and prevent disruptive or unauthorized parties.
Scientists from the University of Science and Technology of China and Tencent's YouTu Lab have developed a tool called "Woodpecker" to combat hallucination in AI models.
Direct, a startup initially focused on chatbot monetization, now offers a platform for publishers to implement their own chatbots and engage directly with readers, using generative AI to bridge the gap between Big Tech and the news industry.
Microsoft CEO Satya Nadella discussed the potential of an AI-powered Copilot being the new Start button for Windows 12.
UK Prime Minister Rishi Sunak acknowledges the "new opportunities" for economic growth offered by AI but warns of the "new dangers" including risks of cybercrime, designing bioweapons, disinformation, and job displacement.
The Internet Watch Foundation (IWF) has expressed concerns about the surge of AI-generated child sexual abuse imagery.
A recent study conducted by German and Ecuadorian scientists shows that AI, combined with acoustic and DNA-based surveys, effectively monitors biodiversity recovery in tropical forests, as demonstrated in the Chocó rainforest over 25 years.
LexisNexis, known for its legal research and business information services, has released Lexis+ AI, an AI-powered tool that provides accurate and trustworthy legal information while minimizing the risk of hallucinations or invented content.
You reached the end of this edition 😢
I’d LOVE to know if you liked it. Let me know by simply clicking on one of those links!
I’ll talk to you next week.
Ciao 👋