Everything I do will be according to the policy which is the Kalai-Smorodinsky solution to the bargaining problem defined by my [spouse]’s and my own priors and utility functions, with the disagreement point set at the counterfactual in which we did not marry. This policy is deemed to be determined a priori and not a posteriori. That is, it requires us to act as if we made all precommitments that would a priori be beneficial from a Kalai-Smorodinsky bargaining point of view[6]. Moreover, if I deviate from this policy for any reason then I will return to optimal behavior as soon as possible, while preserving my [spouse]’s a priori expected utility if at all possible.
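For reference, here is the standard two-player construction the vow invokes (just the textbook definition spelled out; nothing in it adds to the vow itself): given a feasible set $S \subseteq \mathbb{R}^2$ of achievable expected-utility pairs and a disagreement point $d \in S$, let $a_i = \max\{u_i : u \in S,\ u \ge d\}$ be each party’s best individually rational payoff. The Kalai-Smorodinsky solution is the Pareto-maximal point $u^* \in S$ on the segment from $d$ to $a = (a_1, a_2)$, i.e. the feasible point at which

$$\frac{u^*_1 - d_1}{a_1 - d_1} = \frac{u^*_2 - d_2}{a_2 - d_2}$$

holds with both sides as large as possible: each party realizes the same fraction of their maximum possible gain over the disagreement point.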
Idk, I have a bad feeling about this, for reasons I attempted to articulate in this post. The notion of optimal behavior you are using here may in fact be bad, and I question whether the benefits outweigh the costs. What are the benefits exactly? Why use all this specific, concrete decision theory jargon when you can just say “I promise to take my partner’s interest (as they judge it, not as I judge it) into account to a significant extent” or something like that? Much more vague, but I think that’s a feature, not a bug, since you have the good faith clause and since you are both nice people who presumably don’t have really fucked up notions of good faith.
Idk, I have a bad feeling about this, for reasons I attempted to articulate in this post.
I’m not sure how commitment races are relevant here? We’re not committing against each other here; we’re just considering the set of all possible mutual commitments to compute the Pareto frontier. If you apply this principle to Chicken, the result is: flip a coin to determine who goes first and let them go first; there are no “throwing out the steering wheel” dynamics. Or, you mean commitment races between us and other agents? The intent here is making decision theoretic commitments towards each other, not necessarily committing to any decision theory towards the outside more than we normally would be.
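To make the Chicken claim concrete, here is a small sketch with made-up payoff numbers (the numbers and the scipy-based computation are purely illustrative, not part of anything we are committing to). It finds the Kalai-Smorodinsky point of the toy game by pushing a lottery over joint outcomes as far as possible along the ray from the disagreement point (the crash) to the ideal point:

```python
import numpy as np
from scipy.optimize import linprog

# Joint pure outcomes of Chicken and illustrative payoffs (player 1, player 2).
outcomes = ["Swerve/Swerve", "Swerve/Straight", "Straight/Swerve", "Straight/Straight"]
U = np.array([
    [2.0, 2.0],   # both yield
    [1.0, 4.0],   # player 1 yields, player 2 goes straight
    [4.0, 1.0],   # player 1 goes straight, player 2 yields
    [0.0, 0.0],   # crash -- used here as the disagreement point
])

d = U[3]                 # disagreement point
ideal = U.max(axis=0)    # each player's best feasible payoff

# Variables: 4 lottery weights w over joint outcomes, plus a ray parameter t.
# Maximize t subject to w >= 0, sum(w) = 1, and U^T w = d + t * (ideal - d),
# i.e. push the expected payoff pair as far as possible along the ray from
# the disagreement point to the ideal point -- the Kalai-Smorodinsky point.
c = np.array([0.0, 0.0, 0.0, 0.0, -1.0])   # linprog minimizes, so minimize -t
A_eq = np.zeros((3, 5))
A_eq[0, :4] = 1.0                          # lottery weights sum to one
A_eq[1:, :4] = U.T                         # expected payoffs of the lottery...
A_eq[1:, 4] = -(ideal - d)                 # ...must equal d + t * (ideal - d)
b_eq = np.array([1.0, d[0], d[1]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4 + [(0, None)])

w = res.x[:4]
print("KS payoff pair:", U.T @ w)                       # ~[2.5, 2.5]
print("lottery:", dict(zip(outcomes, np.round(w, 3))))  # 50/50 over the asymmetric outcomes
```

With these illustrative numbers the unique maximizer is the 50/50 lottery over “player 1 yields” and “player 2 yields”: exactly the coin-flip outcome described above.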
Why use all this specific, concrete decision theory jargon when you can just say “I promise to take my partner’s interest (as they judge it, not as I judge it) into account to a significant extent” or something like that?
Well, we could, but this formal specification shows just how significant the extent is (which is very significant).
Or, you mean commitment races between us and other agents? The intent here is making decision theoretic commitments towards each other, not necessarily committing to any decision theory towards the outside more than we normally would be.
Ah, good, that negates most of my concern. If you didn’t already, you should specify that this only applies to your actions and commitments “towards each other.” This is perhaps an awkward source of vagueness, since many actions and commitments affect both your spouse and other entities in the world and are thus hard to classify.
Re: the usefulness of precision: Perhaps you could put a line at the end of the policy that says “We aren’t actually committing to all that preceding stuff. However, we do commit to take each other’s interests into account to an extent similar to that implied by the preceding text.”
Also: Congratulations by the way! I’m happy for you! Also, I think it’s really cool that you are putting this much thought into your vows. :)
Thank you :)
To phrase my intent more precisely: whatever decision theory we come to believe in[1], we vow to behave in the way that is, within that decision theory, the closest analogue of the formal specification we gave here in the framework of ordinary Bayesian sequential decision making.
It is also possible we will disagree about decision theory. In that case, I guess we need to defer to whatever is the most concrete “metadecision theory” we can agree upon.
I like where you are going with this. One issue with that phrasing is that it may be hard to fulfill that vow, since you don’t yet know what decision theory you will come to believe in.
Well, at any given moment we will use the best-guess decision theory we have at the time.