What I do like about the post is its suggestion that paying Omega for the ride is not simply utility-maximizing behavior, but acceptance of a constraint (filter).
I dislike complicating the theory by using two kinds of entities (utilities and constraints). That strikes me as going one entity “beyond necessity.” Furthermore, how do we find out what the constraints are? We have “revealed preference” theory for the utilities. Do you think you can construct a “revealed constraint” algorithm?
Robert Nozick used the term “side constraint”. That seems descriptively accurate for typical refusals to break promises—more so than anything that can be stated non-tortuously in goal-seeking terms.
My opinion is exactly the opposite. I have rarely encountered a person whose promise wouldn’t be broken if the stakes were high enough. It is not a constraint. It is a (finite) disutility.