What I added to decision theory, beyond The Selfish Gene's arguments, is the following:
An explanation of the psychological mechanisms behind moral intuitions—i.e. why reasoning about moral issues feels different from other reasoning, and why we have such a category in the first place.
Why you shouldn't model existing and ideal utility functions as being peppered with numerous terminal values (like "honor", "gratefulness", and "non-backstabbing"), but can instead view them as having few terminal values, held by agents who pursue those values by acting on SAMELs. This gives you a simpler explanation of existing utility functions, and a simpler constraint to satisfy when identifying your own (or when forming your own decision theory, given what you regard as your values). A sketch of this contrast follows below.
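To make the contrast concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from the post itself: the class, function names, and weights are illustrative assumptions. It shows one agent modeled with many terminal values baked into its utility function, and another modeled with a single terminal value whose cooperative behavior lives in a SAMEL-like disposition in the decision procedure instead.

```python
# Hypothetical sketch: two ways of modeling the same cooperative behavior.
# All names and numbers here are illustrative, not taken from the post.

from dataclasses import dataclass


@dataclass
class Outcome:
    own_welfare: float   # payoff to the agent itself
    kept_promise: bool   # did the agent keep its promise this round?
    reciprocated: bool   # did the agent return a favor?


# Model A: a "peppered" utility function with many terminal values.
def utility_many_terminals(o: Outcome) -> float:
    return (1.0 * o.own_welfare
            + 0.5 * (1.0 if o.kept_promise else 0.0)    # "honor" as a terminal value
            + 0.5 * (1.0 if o.reciprocated else 0.0))   # "gratefulness" as a terminal value


# Model B: one terminal value; the moral behavior lives in the decision
# procedure as a SAMEL-like disposition, not in the utility function.
def utility_few_terminals(o: Outcome) -> float:
    return o.own_welfare


def choose_action_with_samel(candidate_outcomes):
    # Disposition: prefer actions that keep promises and reciprocate
    # (agents known to act this way do better over repeated interactions),
    # then maximize the simple utility among what remains.
    admissible = [o for o in candidate_outcomes if o.kept_promise and o.reciprocated]
    pool = admissible or candidate_outcomes
    return max(pool, key=utility_few_terminals)


if __name__ == "__main__":
    options = [
        Outcome(own_welfare=3.0, kept_promise=False, reciprocated=False),  # defect now
        Outcome(own_welfare=2.0, kept_promise=True, reciprocated=True),    # cooperate
    ]
    # Both models can pick the cooperative option; Model B explains it with
    # one terminal value plus a disposition rather than extra terminal values.
    print(choose_action_with_samel(options))
```

Under these assumptions, the two models predict the same choice here, but Model B is the simpler hypothesis about what the utility function contains, which is the point about having a simpler constraint to satisfy when identifying your own values.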