It’s not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity’s wings. As soon as he was fired and the “what did Ilya see” narrative emerged (I don’t even think it was all serious at the beginning), the e/acc crowd elevated him to martyr status within minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.
Honestly even without the doom stuff I’d be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.
It doesn’t seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly his general popularity + ability to gain control of a situation.
At least that seems likely if Paul Graham’s assessment of him as a master persuader is to be believed (and why wouldn’t it?).
I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I’d be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.
I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft—I guess I assumed the folks at OpenAI valued not working for big tech (which is arguably more likely to disregard safety) more than it appears they actually did.
My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to an essentially random Board overlord), plus a large dose of self-interest once the exodus gains steam and you know many of your coworkers will leave.