Or, you mean commitment races between us and other agents? The intent here is making decision-theoretic commitments towards each other, not necessarily committing to any decision theory towards the outside world any more than we normally would.
Ah, good, that negates most of my concern. If you didn’t already, you should specify that this only applies to your actions and commitments “towards each other.” Perhaps this is an awkward source of vagueness, since many actions and commitments affect both your spouse and other entities in the world, and are thus hard to classify.
Re: the usefulness of precision: perhaps you could put a line at the end of the policy that says “We aren’t actually committing to all of the preceding. However, we do commit to take each other’s interests into account to roughly the extent implied by the preceding text.”
To phrase my intent more precisely: whatever decision theory we come to believe in[1], we vow to behave in the way that is, within that decision theory, the closest analogue of the formal specification we gave here in the framework of ordinary Bayesian sequential decision making.
It is also possible that we will disagree about decision theory. In that case, I guess we need to defer to the most concrete “metadecision theory” we can agree upon.
I like where you are going with this. One issue with that phrasing is that it may be hard to fulfill that vow, since you don’t yet know what decision theory you will come to believe in.
Also: congratulations, by the way! I’m happy for you! And I think it’s really cool that you are putting this much thought into your vows. :)
Thank you :)
Well, at any given moment we will use the best-guess decision theory we have at the time.