I think you’re missing at least one key element in your model: uncertainty about future predictions. Commitments have a very high cost in terms of future consequence-affecting decision space. Consequentialism does _not_ imply a very high discount rate, and we’re allowed to recognize the limits of our prediction and to give up some power in the short term to preserve our flexibility for the future.
Also, one of the reasons that this kind of interaction is rare among humans is that truly binding commitment is impossible for humans. We can change our minds even after making an oath—often with some reputational consequences, but still possible if we deem it worthwhile. Even so, we’re rightly reluctant to make serious commitments. An agent who can actually enforce its self-limitations is going to be orders of magnitude more hesitant to do so.
All that said, it’s worth recognizing that an agent that’s significantly better at predicting the consequences of potential commitments will pay a lower cost for the best of them, and so has a material advantage over those who need flexibility because they lack information. This isn’t a race in time; it’s a race in knowledge and understanding. I don’t think there’s any way out of that race—more powerful agents are going to beat weaker ones most of the time.
I don’t think I was missing that element. The way I think about it is: There is some balance that must be struck between making commitments sooner (risking making foolish decisions due to ignorance) and later (risking not having the right commitments in place when a situation arises in which they would be handy). A commitment race is a collective action problem: individuals benefit from going far toward the “sooner” end of the spectrum relative to the point that would be optimal for everyone if they could coordinate.
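The collective-action structure here can be sketched as a toy 2×2 game with hypothetical payoffs (the numbers below are mine, chosen purely for illustration): committing sooner extracts concessions from a flexible opponent, but mutual early commitment locks everyone into rash commitments.

```python
# Toy "sooner vs. later" commitment game (hypothetical payoffs).
# (row move, col move) -> (row payoff, col payoff)
PAYOFFS = {
    ("later",  "later"):  (3, 3),  # coordinated optimum: both stay flexible
    ("later",  "sooner"): (0, 4),  # the early committer extracts concessions
    ("sooner", "later"):  (4, 0),
    ("sooner", "sooner"): (1, 1),  # mutual rash commitments: bad for both
}

def best_response(opponent_move):
    """Pick the move maximizing one's own payoff against a fixed opponent move."""
    return max(["later", "sooner"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# "sooner" is the best response to either opponent move, so both players
# land on (1, 1) even though (3, 3) was available with coordination.
print(best_response("later"), best_response("sooner"))  # sooner sooner
```

With these payoffs the game has the familiar prisoner's-dilemma shape: the individually dominant move ("sooner") produces the collectively worse outcome.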
I agree about humans not being able to make commitments—at least, not arbitrary commitments. (Arguably, getting angry and seeking revenge when someone murders your family is a commitment you made when you were born.) I think we should investigate whether this inability is something evolution “chose” or not.
I agree it’s a race in knowledge/understanding as well as time. (The two are related.) But I don’t think more knowledge = more power. For example: suppose I know almost nothing and decide to commit to plan X, which benefits me, with war as the alternative. You know more than me—in particular, you know enough about me to predict what I will commit to—and you are cowardly. Then you’ll go along with my plan.
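The scenario above can be made concrete with a toy sequential game (payoffs are hypothetical, chosen only so that war is the worst outcome for both sides): the ignorant agent A commits blindly to a greedy demand, and the better-informed agent B, able to predict A's commitment, still finds capitulating the best response.

```python
# Toy commitment game: ignorance can beat knowledge (hypothetical payoffs).
# (A's committed move, B's response) -> (A payoff, B payoff)
PAYOFFS = {
    ("demand", "capitulate"): (9, 1),      # A gets almost everything
    ("demand", "fight"):      (-10, -10),  # war hurts both
    ("fair",   "capitulate"): (5, 5),
    ("fair",   "fight"):      (-10, -10),
}

def best_response(a_commitment):
    """B observes A's commitment and picks the move maximizing B's payoff."""
    return max(["capitulate", "fight"],
               key=lambda b: PAYOFFS[(a_commitment, b)][1])

# A commits without knowing anything about B; B, despite knowing more
# (enough to predict A exactly), rationally goes along with A's plan.
a_move = "demand"
b_move = best_response(a_move)
print(a_move, b_move, PAYOFFS[(a_move, b_move)])  # demand capitulate (9, 1)
```

The point the example carries: B's extra knowledge just lets B compute that capitulation beats war, so the knowledge advantage translates into going along with the ignorant committer's plan rather than into power.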