I think the way in which moral behaviour gradually emerges out of ‘enlightened self-interest’ is profoundly relevant to anyone interested in the intersection of ethics and AI.
I agree, with the caveat that what applies to ethics might not apply naturally to Friendliness.
Newcomb’s problem is not required for that. The usual story says that moral behavior emerges from repeated games.
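To make "repeated games" concrete, here is a minimal illustrative sketch (my own, not from the discussion) of an iterated prisoner's dilemma. A reciprocal strategy like tit-for-tat sustains mutual cooperation with its own kind, while still punishing a defector after the first round; that dynamic is the usual mechanism behind the "enlightened self-interest" story.

```python
# Illustrative payoffs: both cooperate 3/3, both defect 1/1,
# lone defector gets 5 and the exploited cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=100):
    """Total scores for strategies a and b over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): TFT loses only round 1
```

The point of the sketch is that cooperative pairs far outscore defecting pairs over repetition, so self-interested agents who expect future interactions have reason to behave "morally" without Newcomb-style reasoning.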