Idk, I have a bad feeling about this, for reasons I attempted to articulate in this post.
I’m not sure how commitment races are relevant here. We’re not committing against each other; we’re just considering the set of all possible mutual commitments in order to compute the Pareto frontier. If you apply this principle to Chicken, the result is: flip a coin to determine who goes first, and let them go first; there are no “throwing out the steering wheel” dynamics. Or do you mean commitment races between us and other agents? The intent here is to make decision-theoretic commitments towards each other, not necessarily to commit to any decision theory towards the outside more than we normally would.
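To make the Chicken claim concrete, here is a minimal sketch in Python (the payoff numbers and all names are illustrative assumptions, not taken from the post): enumerate a few candidate mutual commitments, including the correlated “flip a coin for who goes first” commitment, and check which ones lie on the Pareto frontier.

```python
import itertools

# Illustrative Chicken payoffs (row, col); these numbers are assumptions.
# Going while the other yields is best for the goer, a crash is worst for both.
PAYOFFS = {
    ("go", "go"): (-10, -10),    # crash
    ("go", "yield"): (3, 0),     # row goes first
    ("yield", "go"): (0, 3),     # col goes first
    ("yield", "yield"): (1, 1),  # both hang back
}

def expected_payoffs(joint_dist):
    """Expected payoff pair under a (possibly correlated) joint distribution."""
    ex = ey = 0.0
    for outcome, prob in joint_dist.items():
        ux, uy = PAYOFFS[outcome]
        ex += prob * ux
        ey += prob * uy
    return (ex, ey)

# Candidate mutual commitments: every pure joint strategy, plus the symmetric
# commitment "flip a fair coin to decide who goes first".
candidates = {"coin flip for who goes": {("go", "yield"): 0.5, ("yield", "go"): 0.5}}
for a, b in itertools.product(["go", "yield"], repeat=2):
    candidates[f"always ({a}, {b})"] = {(a, b): 1.0}

points = {name: expected_payoffs(dist) for name, dist in candidates.items()}

def pareto_dominated(p, others):
    return any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in others)

frontier = [name for name, p in points.items()
            if not pareto_dominated(p, points.values())]
print(sorted(frontier))
# ['always (go, yield)', 'always (yield, go)', 'coin flip for who goes']
```

With these payoffs the coin flip, at expected payoffs (1.5, 1.5), is the symmetric point on the frontier; no steering-wheel-throwing is needed to reach it.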
Why use all this specific, concrete decision-theory jargon when you could just say “I promise to take my partner’s interests (as they judge them, not as I judge them) into account to a significant extent,” or something like that?
Well, we could, but this formal specification shows just how significant the extent is (which is very significant).
> Or do you mean commitment races between us and other agents? The intent here is to make decision-theoretic commitments towards each other, not necessarily to commit to any decision theory towards the outside more than we normally would.
Ah, good, that negates most of my concern. If you didn’t already, you should specify that this only applies to your actions and commitments “towards each other.” This is perhaps an awkward source of vagueness, since many actions and commitments affect both your spouse and other entities in the world, and are thus hard to classify.
Re: the usefulness of precision: perhaps you could put a line at the end of the policy that says “We aren’t actually committing to all of the preceding. However, we do commit to take each other’s interests into account to an extent similar to that implied by the preceding text.”
To phrase my intent more precisely: whatever decision theory we come to believe in[1], we vow to behave in the way that is the closest analogue, in that decision theory, of the formal specification we gave here in the framework of ordinary Bayesian sequential decision making.
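One way to make the “closest analogue” clause precise (this is only a sketch, with the distance measure deliberately left unspecified): writing $\Pi_D$ for the set of policies admissible under the decision theory $D$ we eventually come to believe in, and $\pi^*_{\mathrm{Bayes}}$ for the policy our formal specification selects in the ordinary Bayesian framework, the vow is to follow

$$\pi^*_D \in \operatorname*{arg\,min}_{\pi \in \Pi_D} d\left(\pi, \pi^*_{\mathrm{Bayes}}\right),$$

where $d$ is some notion of distance between behaviors.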
It is also possible we will disagree about decision theory. In that case, I guess we need to defer to whatever is the most concrete “metadecision theory” we can agree upon.
I like where you are going with this. One issue with that phrasing is that it may be hard to fulfill that vow, since you don’t yet know what decision theory you will come to believe in.
Also: Congratulations, by the way! I’m happy for you! And I think it’s really cool that you’re putting this much thought into your vows. :)
Thank you :)
Well, at any given moment we will use the best-guess decision theory we have at the time.
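A minimal sketch of that policy (everything here is hypothetical scaffolding, not from the post; “CDT” and “FDT” are just placeholder labels): keep credences over candidate decision theories and, at each decision point, act on whichever theory is currently the best guess.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A "decision theory" is modeled, very crudely, as a rule mapping a decision
# problem to a recommended action.
DecisionTheory = Callable[[dict], str]

@dataclass
class BestGuessAgent:
    """At each decision point, act on the currently highest-credence theory."""
    theories: Dict[str, DecisionTheory]
    credences: Dict[str, float]  # should sum to 1

    def act(self, problem: dict) -> str:
        best = max(self.credences, key=self.credences.get)
        return self.theories[best](problem)

    def revise(self, name: str, new_credence: float) -> None:
        # Revise the credence in one theory, then re-normalize all credences
        # so they sum to 1 again.
        self.credences[name] = new_credence
        total = sum(self.credences.values())
        self.credences = {k: v / total for k, v in self.credences.items()}

# Toy usage with two placeholder theories that disagree on a problem.
agent = BestGuessAgent(
    theories={
        "CDT": lambda p: p["causal_best"],
        "FDT": lambda p: p["policy_best"],
    },
    credences={"CDT": 0.4, "FDT": 0.6},
)
problem = {"causal_best": "defect", "policy_best": "cooperate"}
print(agent.act(problem))  # "cooperate": FDT is the current best guess
agent.revise("CDT", 0.8)   # our view of decision theory shifts
print(agent.act(problem))  # "defect": the best guess has changed
```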