(One could reasonably say that the upside of removing Sam Altman from OpenAI is not high enough to be worth dishonor over the matter, in which case the percent chance of success doesn’t matter that much.
Indeed, success probabilities in practice range over only 1–2 orders of magnitude before they stop being worth tracking at all, so the value one assigns to removing Sam Altman at all probably dominates the whole question, and success probabilities just aren’t that relevant.)
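To make the arithmetic behind that explicit, as a rough sketch with made-up numbers: the expected value of attempting the removal is roughly

$$\mathbb{E}[\text{value}] \approx p \cdot V,$$

where $p$ is the probability the attempt succeeds and $V$ is the value of success. If the $p$ worth tracking lives between roughly $0.01$ and $1$ (two orders of magnitude), while $V$ can plausibly differ by many more orders of magnitude depending on how much one thinks removal matters, then disagreements about $V$ swamp disagreements about $p$.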
Ok. Then you can least-convenient-possible-world the question. What’s the version of the situation where removing the guy is important enough that the success probabilities start mattering in the calculus?
Also, to be clear, I think my answer to this is: “It’s just fine for some things to be sacred, especially features like honor or honesty, whose strength comes from their being reliable under all circumstances (or at least, which have strength in proportion to the range of circumstances in which they hold).”
My intuition (not rigorous) is that there are multiple levels in the consequentialist/deontological/consequentialist dealio.
I believe that unconditional friendship is approximately something one can enter into, but one enters into it for contingent reasons (perhaps in a Newcomb-like way – I’ll unconditionally be your friend because I’m betting that you’ll unconditionally be my friend). Your ability to credibly enter such relationships (at least in my conception of them) depends on your not becoming more “conditional” just because you doubt the other person is holding up their end. This, I think, is related to not being a “fair-weather” friend: I continue to be your friend even when it’s not fun (you’re sick, need taking care of, whatever) even though I wouldn’t have become your friend in order to do that. And vice versa. Kind of a mutual insurance policy.
The same thing could hold for contracts, agreements, and other collaborations. In a Newcomb-like way, I commit to being honest, being cooperative, etc., to a very high degree even in the face of doubts about you. (Maybe you stop by the time someone is threatening your family; I’m not sure what Ben et al. think about that.) But the fact that I entered into this commitment was based on the probabilities I assigned to your behavior at the start.
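One rough way to formalize that entry decision (my sketch, with assumed payoffs, not anything Ben has endorsed): let $p$ be my credence, at the start, that you will also hold the commitment unconditionally, $G$ the value to me of the collaboration if we both do, and $L$ the cost I eat if I stay honest and cooperative while you defect. Then entering is worth it when

$$p \cdot G - (1 - p) \cdot L > 0, \quad\text{i.e.}\quad p > \frac{L}{G + L}.$$

The probabilities matter at this threshold, at entry; the Newcomb-like value of the commitment comes precisely from not re-running the calculation every time my estimate of $p$ wobbles afterward.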