It seemed like a classic case of prisoner’s dilemma, so (5) and (7). The more of your company that signs the petition, the lower the value of your PPUs, making it more attractive to sign. It reached a point where employees likely felt that OpenAI’s value, and their PPUs with it, would go to nothing if a critical mass joined Microsoft. In fact, if MS was willing to match compensation, everyone “cooperating” by not signing the petition would be a worse outcome for everyone than just joining MS, because they had already seen other players move first (Altman, Brockman, other resignations) - that is, looking purely at compensation, and not even taking into account the possibility that the PPU-equivalent at MS would not be profit-capped. In a textbook prisoner’s dilemma, cooperation leads to the best overall outcome for everyone, yet the best move is to defect if you are unable to coordinate; here, even coordinated “cooperation” isn’t the best joint outcome, so it is not really a prisoner’s dilemma at all.
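To make that concrete, here is a rough sketch with made-up payoff numbers (the numbers and the “sign”/“hold” framing are illustrative assumptions, not anything known about the actual situation); it just contrasts the textbook payoff structure with the one described above:

```python
# Illustrative sketch (made-up payoffs) contrasting a textbook prisoner's
# dilemma with the situation described above, where signing ("defecting")
# looks better even if everyone holds out.
#
# Each entry is my payoff given (my action, what the rest of the company does),
# simplified to a 2x2 game.

# Textbook prisoner's dilemma: mutual cooperation beats mutual defection,
# but defecting is individually dominant.
textbook_pd = {
    ("hold", "hold"): 3,   # everyone cooperates: best joint outcome
    ("hold", "sign"): 0,   # I cooperate, others defect: sucker's payoff
    ("sign", "hold"): 5,   # I defect alone: temptation payoff
    ("sign", "sign"): 1,   # everyone defects: worse than mutual cooperation
}

# Hypothesized OAI situation (assumptions: MS matches compensation, and PPU
# value tracks how much of the company stays): once key people have already
# left, holding out together is no longer the best joint outcome.
oai_situation = {
    ("hold", "hold"): 2,   # PPUs already impaired by earlier departures
    ("hold", "sign"): 0,   # I stay while a critical mass leaves: PPUs ~worthless
    ("sign", "hold"): 3,   # matched compensation at MS either way
    ("sign", "sign"): 3,   # ...and possibly no profit cap on the MS equivalent
}

def dominant_action(payoffs):
    """Return 'sign' if signing is at least as good regardless of what others do."""
    sign_dominates = all(
        payoffs[("sign", others)] >= payoffs[("hold", others)]
        for others in ("hold", "sign")
    )
    return "sign" if sign_dominates else "no dominant action"

for name, game in [("textbook PD", textbook_pd), ("OAI situation", oai_situation)]:
    print(f"{name}: dominant action = {dominant_action(game)}, "
          f"mutual hold = {game[('hold', 'hold')]}, "
          f"mutual sign = {game[('sign', 'sign')]}")

# In the textbook PD, mutual hold (3) beats mutual sign (1), so coordination
# would help. In the sketched OAI situation, mutual sign (3) is at least as
# good as mutual hold (2), which is why it isn't really a dilemma.
```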
Further, even if an OAI employee did not care about PPUs at all, and all they cared about was the non-profit mission of AI for the betterment of all humanity, they might have felt there was a greater likelihood of achieving that mission at Microsoft than at the empty shell of OAI (the safety teams, for example - might as well do your best to help safety at the new “leading” organisation, and get paid too).
Not sure if this page is broken or I’m technically inept, but I can’t figure out how to reply to qualiia’s comment directly:
My gut reaction was primarily #5 and #7, but qualiia’s post articulates the rationale better than I could.
One useful piece of information that would influence my weights: what were OAI’s general hiring criteria? If they sought solely the “best and brightest” on technical skills and enticed talent primarily with premiere pay packages, I’d lean harder on #5. If they sought cultural/mission fits in some meaningful way, I might update lower on #5/#7 and higher on others. I read the external blog post about the bulk of OAI compensation being in PPUs, but that’s not necessarily incompatible with mission fit.
Well done on the list overall; it seems pretty complete, though aphyer provides a good unique reason (albeit adjacent to #2).