Well done finding the direct contradiction. (I also thought the claims seemed fishy but didn’t think of checking whether model running costs are bigger than revenue from subscriptions.)
Two other themes in the article seem to be in some tension with each other:
1. The models have little potential / don't provide much value.
2. People use their subscriptions so heavily that the company loses money on them.
It feels like if people max out their subscription usage, the models must be providing some kind of value (which makes it promising to keep working on them, even if just to make inference cheaper). By contrast, if people don't use them much, you should at least be able to make a profit on existing subscriptions (even if you might be worried about user retention and growth rates).
All of that said, I also get the impression that "OpenAI is struggling." I just think it has more to do with their specific situation than with the industry as a whole (plus I'm not as confident in this take as the author seems to be).
If you're not elderly or otherwise at risk of irreversible harms in the near future, then pausing for a decade (say) to reduce the chance of AI ruin by even just a few percentage points still seems good. So the crux is still "can we do better by pausing?" (This assumes pauses on the order of 2–20 years; the argument changes for longer pauses.)
Maybe people think the background level of x-risk is higher than it has been over the past decades because the world situation seems to be deteriorating. But IMO this also increases the selfishness of pushing AI forward, because if you're that desperate for a deus ex machina, surely you also have to think there's a good chance things will get worse when you push technology forward.
(Lastly, I also want to note that for people who care less about living forever and more about near-term achievable goals like "enjoy life with loved ones," the selfish thing would be to delay AI indefinitely, since rolling the dice for a longer future is then less obviously worth it.)