I dislike the suggestion that natural selection fine-tuned (or filtered) our decision theory to the optimal degree of irrationality.
Or for that matter, the (globally) optimal degree of anything. For all we know, much of human morality may be an evolutionary spandrel. Perhaps, like the technological marvel of condoms, parts of morality are fitness-reducing byproducts of generally fitness-enhancing characteristics.
What I do like about the post is its suggestion that paying Omega for the ride is not simply utility-maximizing behavior, but acceptance of a constraint (filter). Robert Nozick used the term “side constraint”. That seems descriptively accurate for typical refusals to break promises—more so than anything that can be stated non-tortuously in goal-seeking terms.
As a normative thesis, on the other hand, utility-maximization … also isn’t convincing. YMMV.
What I do like about the post is its suggestion that paying Omega for the ride is not simply utility-maximizing behavior, but acceptance of a constraint (filter).
I dislike complicating the theory by using two kinds of entities (utilities and constraints). That strikes me as going one entity “beyond necessity”. Furthermore, how do we find out what the constraints are? We have “revealed preference” theory for the utilities. Do you think you can construct a “revealed constraint” algorithm?
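To make the question concrete, here is a minimal sketch of one possible “revealed constraint” heuristic (Python; the payoff data, option names, and the dominance test are my own hypothetical choices, not anything from the post): infer a constraint against an option if it goes unchosen even in observations where it strictly dominates every alternative on payoff.

```python
# Hypothetical "revealed constraint" sketch: an option is inferred to be
# constrained (rather than merely low-utility) if it goes unchosen even
# in observations where its payoff strictly dominates every alternative.

def revealed_constraints(observations):
    """observations: list of (payoffs: dict option -> float, chosen: str)."""
    candidates = set()
    for payoffs, _chosen in observations:
        candidates |= set(payoffs)
    constrained = set()
    for option in candidates:
        dominant_cases = [
            chosen
            for payoffs, chosen in observations
            if option in payoffs
            and all(payoffs[option] > v for o, v in payoffs.items() if o != option)
        ]
        # Never chosen despite strictly dominating payoffs -> infer a constraint.
        if dominant_cases and all(chosen != option for chosen in dominant_cases):
            constrained.add(option)
    return constrained

# Toy data: "break_promise" pays best in both cases but is never chosen.
obs = [
    ({"keep_promise": 1.0, "break_promise": 5.0}, "keep_promise"),
    ({"keep_promise": 0.0, "break_promise": 9.0}, "keep_promise"),
]
print(revealed_constraints(obs))  # {'break_promise'}
```

Of course, as the next reply argues, a large but finite disutility would generate the same choice data up to some stakes threshold, so an inference like this is only as good as the range of stakes actually observed.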
Robert Nozick used the term “side constraint”. That seems descriptively accurate for typical refusals to break promises—more so than anything that can be stated non-tortuously in goal-seeking terms.
My opinion is exactly the opposite. I have rarely encountered a person who had made a promise they wouldn’t break if the stakes were high enough. It is not a constraint. It is a (finite) disutility.
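A minimal sketch of the contrast (all numbers hypothetical): a side-constraint agent filters the forbidden option out before any utility comparison ever happens, while a finite-disutility agent prices it in and defects once the stakes exceed the penalty.

```python
# Side constraint vs. finite disutility, on the same decision.
PROMISE_PENALTY = 100.0  # hypothetical finite disutility for promise-breaking

def constraint_agent(stakes):
    # Breaking the promise is filtered out before any utility comparison,
    # so the stakes never even enter the choice.
    options = {"keep": 0.0}  # "break" is simply not admissible
    return max(options, key=options.get)

def disutility_agent(stakes):
    # Breaking the promise is admissible but carries a finite penalty.
    options = {"keep": 0.0, "break": stakes - PROMISE_PENALTY}
    return max(options, key=options.get)

for stakes in (10.0, 1000.0):
    print(stakes, constraint_agent(stakes), disutility_agent(stakes))
# 10.0   keep keep
# 1000.0 keep break   <- the finite-disutility agent defects at high stakes
```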