I previously did research for MIRI and what’s now the Center on Long-Term Risk; I’m now making a living as an emotion coach and Substack writer.
Most of my content becomes free eventually, but if you’d like to get a paid subscription to my Substack, you’ll get it a week early and make it possible for me to write more.
So I read another take on OpenAI’s finances and was wondering: does anyone know why Altman is making such a gamble, raising enormous investments to fund new models in the hope that they’ll generate profits insane enough to make it all worthwhile? Even setting aside the concerns around alignment etc., there’s still the straightforward issue of “maybe the models are good and work fine, but aren’t good enough to pay back the investment”.
Even if you did expect scaling to probably bring in huge profits, it would naively still seem wiser to pick a growth strategy that didn’t require your company to either become the most profitable company in history or go bankrupt.
The obvious answer is something like “he believes they’re on the way to ASI, and whoever gets there first wins the game”. But I’m not sure it makes sense even under that assumption: his strategy requires not only reaching ASI first, but never once faltering on the path there. Even if ASI really is imminent, its arrival taking just two years longer than he expected might alone be enough to finish OpenAI. He could have raised much more conservative investment and still been in the game, especially since much of the current arms race is plausibly a response to the sums OpenAI has been raising.
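To make the “never once faltering” point concrete, here’s a toy calculation (every number in it is a made-up assumption for illustration, not an estimate about OpenAI): if an all-in strategy has to clear every funding cycle on the way to ASI, even a modest per-cycle chance of a fatal stumble compounds quickly.

```python
# Toy model, all numbers hypothetical: a strategy that must succeed at
# every funding cycle compounds its per-cycle failure risk.

def survival_probability(p_stumble_per_cycle: float, cycles: int) -> float:
    """Probability of never stumbling across `cycles` independent funding cycles."""
    return (1 - p_stumble_per_cycle) ** cycles

# Suppose (purely for illustration) that each cycle carries a 15% chance
# of a fatal stumble: a delayed model, a failed raise, and so on.
for cycles in [3, 5, 8]:
    p = survival_probability(0.15, cycles)
    print(f"{cycles} cycles at 15% risk each -> {p:.0%} chance of never faltering")
```

Under these made-up numbers, surviving eight cycles happens barely a quarter of the time, which is the sense in which a more conservative raise keeps you in the game even if you expect scaling to pay off.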