Rob Bensinger: If you’re an AI developer who’s fine with AI wiping out humanity, the thing that should terrify you is AI wiping out AI.
The wrong starting seed for the future can permanently lock in AIs that fill the universe with non-sentient matter, pain, or stagnant repetition.
For those interested in this angle (how AI outcomes without humans could still go a number of ways, and which variables could make them go better or worse), I recently brainstormed some things that might matter here and here.
Well done finding the direct contradiction. (I also thought the claims seemed fishy, but it didn't occur to me to check whether the cost of running the models exceeds subscription revenue.)
Two other themes in the article seem to be in some tension with each other:

1. The models have little potential / don't provide much value.
2. People use their subscriptions so heavily that the company loses money on them.
It feels like if people are maxing out their subscriptions, then the models must be providing some kind of value (which makes it worthwhile to keep working on them, if only to make inference cheaper). Conversely, if people don't use them much, the company should at least be able to turn a profit on existing subscriptions (even if user retention and growth rates remain a worry).
All of that said, I also get the impression that "OpenAI is struggling." I just think this has more to do with their specific situation than with the industry as a whole (and I'm not as confident in this take as the author seems to be).