It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren’t loyal to The Charter, and they certainly weren’t loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.
They were not loyal to the board, but it is not clear whether they were loyal to The Charter, since they were never given any concrete evidence of a conflict between Sam and the Charter.
Feels like an apt comparison, given that what we're finding out now is what happens when a Senate of sorts tries to cut the upstart general down to size and the latter basically goes "you and what army?".
Another key difference is that the growth is currently capped at 10x. From your last link: "Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value."
As the company has been doing well recently, with ongoing talks about an investment implying a valuation of $90B, many employees might have hit their 10x already. That would be the highest payout they could ever get, so there is every incentive to cash out now (or as soon as the 2-year lock allows) and zero financial incentive to care about long-term value.
This seems even worse at aligning employee interests with the long-term interests of the company than regular (uncapped) equity, where each employee might at least hope that the valuation could climb still higher.
Also:
"It's important to reiterate that the PPUs inherently are not redeemable for value if OpenAI does not turn a profit."
So it seems the growth cap actually encourages short-term thinking, which runs against their long-term mission. Do you also understand these incentives this way?
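The capped-payout argument can be sketched with toy numbers (all values here are hypothetical, purely for illustration; the only detail taken from the source is the 10x cap itself):

```python
# Toy model of the incentive difference between capped PPUs and
# uncapped equity. All dollar amounts are hypothetical illustrations.

def ppu_payout(grant_value: float, growth_multiple: float, cap: float = 10.0) -> float:
    """Payout of a capped PPU grant: growth is clipped at `cap`x."""
    return grant_value * min(growth_multiple, cap)

def equity_payout(grant_value: float, growth_multiple: float) -> float:
    """Payout of ordinary (uncapped) equity."""
    return grant_value * growth_multiple

grant = 100_000  # hypothetical grant value in dollars

# An employee whose grant has already appreciated 10x gains nothing
# from further growth under the cap, unlike with ordinary equity:
print(ppu_payout(grant, 10))     # 1,000,000
print(ppu_payout(grant, 30))     # still 1,000,000: no upside beyond the cap
print(equity_payout(grant, 30))  # 3,000,000: uncapped equity keeps paying
```

Once the cap binds, the PPU payoff curve is flat, so none of the marginal company value flows to the employee, which is the short-term-incentive point being made above.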
It’s not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity’s wings. As soon as he was fired and the “what did Ilya see” narrative emerged (I don’t even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.
Honestly even without the doom stuff I’d be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.
It doesn’t seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly his general popularity + ability to gain control of a situation.
At least that seems likely if Paul Graham’s assessment of him as a master persuader is to be believed (and why wouldn’t it?).
I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I’d be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.
I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft—I guess I assumed the folks at OpenAI valued not working for big tech (that’s more(?) likely to disregard safety) more than it appears they actually did.
My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to a random Board overlord), but there is also a large dose of self-interest once the exodus gains steam and you know many of your coworkers will leave.
Whatever else happened, there were likely mistakes on the board's side, but man does the personality cult around Altman make me uncomfortable.