Sure. Morals = the part of our utility function that benefits our genes more than us. But is this telling us anything we didn’t already know from reading The Selfish Gene? Or revealing any problems with standard decision theory? There’s no need to invoke Omega or a new decision theory. Instead of recognizing that you can use standard decision theory and simply measure utility in gene copies rather than in the human carrier’s qualia, you seem to be trying to find a decision theory for the human that will implement the gene’s utility function.
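To make that point concrete, here is a minimal sketch (mine, not from the exchange) of the claim that the decision machinery needn’t change: the same expected-utility argmax is used whether “utility” is measured in gene copies propagated or in the carrier’s own welfare. The actions, outcomes, and numbers below are invented purely for illustration.

```python
# Toy illustration only: standard expected-utility choice, parameterized by
# which utility measure you plug in. Nothing here is from the original
# discussion; the actions, outcomes, and numbers are made up.

from typing import Callable, Dict

# Each (hypothetical) action leads to a single outcome here, for simplicity;
# an outcome records gene copies propagated and the carrier's own welfare.
Outcome = Dict[str, float]
OUTCOMES: Dict[str, Outcome] = {
    "help_kin_at_personal_cost": {"gene_copies": 3.0, "carrier_welfare": -1.0},
    "defect_for_personal_gain":  {"gene_copies": 1.0, "carrier_welfare":  2.0},
}

def best_action(utility: Callable[[Outcome], float]) -> str:
    """Ordinary decision theory: pick the action maximizing utility."""
    return max(OUTCOMES, key=lambda action: utility(OUTCOMES[action]))

# Same decision rule, two different utility measures:
print(best_action(lambda o: o["gene_copies"]))      # "help_kin_at_personal_cost"
print(best_action(lambda o: o["carrier_welfare"]))  # "defect_for_personal_gain"
```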
What I added to decision theory beyond The Selfish Gene’s arguments is:
1. An explanation for the psychological mechanisms of moral intuitions—i.e. why reasoning about moral issues feels different, and why we have such a category.
2. Why you shouldn’t take existing and ideal utility functions as being peppered with numerous terminal values (like “honor” and “gratefulness” and “non-backstabbing”), but rather can view them as having only a few terminal values, held by agents who pursue those values by acting on SAMELs. Thus you have a simpler explanation for existing utility functions, and a simpler constraint to satisfy when identifying your own (or forming your own decision theory, given what you regard as your values).
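A minimal sketch of the contrast just described, under my own toy assumptions (the value names, weights, and dispositions below are illustrative, not part of the original argument): one specification writes every moral behavior into the utility function as its own terminal value, while the sparser one keeps a few terminal values and treats honor-like behavior as dispositions the agent acts on because doing so serves those values, not because the dispositions are valued terminally.

```python
# Toy illustration only (my assumptions, not the author's formalism): two ways
# to specify an agent's utility function. All names and weights are invented.

# "Peppered" specification: every moral behavior is its own terminal value.
PEPPERED_TERMINAL_VALUES = {
    "survival": 1.0,
    "offspring": 1.0,
    "honor": 0.7,
    "gratefulness": 0.5,
    "non_backstabbing": 0.6,
}

# Sparser specification: few terminal values, plus dispositions the agent acts
# on (in the spirit of the SAMEL point above) because doing so serves those
# values, not because the dispositions are terminally valued.
SPARSE_TERMINAL_VALUES = {
    "survival": 1.0,
    "offspring": 1.0,
}
DISPOSITIONS = ["keep_commitments", "repay_benefactors", "refuse_to_backstab"]

def utility(outcome: dict, terminal_values: dict) -> float:
    """Score an outcome against terminal values only; dispositions constrain
    which actions the agent considers, but never enter the score."""
    return sum(weight * outcome.get(value, 0.0)
               for value, weight in terminal_values.items())

# The sparser specification explains the same behavior with fewer free
# parameters in the utility function itself:
print(len(PEPPERED_TERMINAL_VALUES), "terminal values vs", len(SPARSE_TERMINAL_VALUES))
```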