re: accumulating status in hope of future counterfactual impact.
I model status-qua-status (as opposed to status as a side effect of something real) as something like a score for “how good are you at cooperating with this particular machine?”. The more you demonstrate cooperation, the more the machine will trust and reward you. But you can’t leverage that into getting the machine to do something different; that would immediately zero out your status/cooperation score.
There are exceptions. If you’re exceptionally strategic you might make good use of that status by e.g. changing what the machine thinks it wants, or co-opting the resources and splintering. It is also pretty useful to accumulate evidence that you’re a generally responsible adult before you go off and do something weird. But this isn’t the vibe I get from people I talk to with the ‘status then impact’ plan, or from any of 80k’s advice. Their plans only make sense if either that status is a fungible resource like money, or if you plan on cooperating with the machine indefinitely.
So I don’t think people should pursue status as a goal in and of itself, especially if there isn’t a clear sign for when they’d stop and prioritize something else.
Thank you for this. As you note, this seems like a very important insight/clarification for power-accrual / status-accrual based plans. In general, I observe people thinking only very vaguely about these kinds of plans, and this post gives me a sense of the kind of crisp modeling that is possible here.
I agree with your overall point re: 80k hours, but I think my model of how this works differs somewhat from yours.
“But you can’t leverage that into getting the machine to do something different; that would immediately zero out your status/cooperation score.”
The machines are groups of humans, so the degree to which you can change the overall behaviour depends on a few things.
1) The type of status (which, as you hint, is not always fungible). If you’re widely considered to be someone who is great at predicting future trends and risks, other humans in the organisation will be more willing to follow when you suggest a new course of action. If you’ve acquired status by being very good at one particular niche task, people won’t necessarily value your bold suggestion for changing the organisation’s direction.
2) Strategic congruence. Some companies in history have successfully pivoted their business model (the example that comes to mind is Nokia). Such a transition is possible because, while the machine is now operating in a new way, its end goal remains the same (make money). If your suggested course of action conflicts with the overall goals of the machine, you will have more trouble changing it.
3) Structure of the machine. Some decision-making structures give specific individuals a high degree of autonomy over the direction of the machine. In those instances, having a lot of status among a small group may be enough for you to exercise a high degree of control (or to get yourself placed in a decision-making role).
Of course, these variables all interact with each other in complex ways.
Sam Altman’s high personal status as an excellent leader and decision-maker, combined with his strategic alignment with the goal of making lots of money, meant that he was able to out-manoeuvre a more safety-focused board when he came into apparent conflict with the machine.