An ordinary utility maximiser calculates its expected future utility conditional on choosing to defect, and does the same conditional on choosing to cooperate. If it knows it is playing the Prisoner's Dilemma against its clone, it will expect the clone to make the same deterministic decision that it does. So it will choose to cooperate, since that maximises its own utility. That is the behaviour to expect from a standard utility maximiser.
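A minimal sketch of that reasoning, assuming standard Prisoner's Dilemma payoffs (the specific numbers are illustrative, not taken from the video): a maximiser that knows its opponent is its clone evaluates each of its own choices under the assumption that the clone's choice will match it.

```python
# Illustrative Prisoner's Dilemma payoffs for the deciding agent:
# (my_move, their_move) -> my utility. The numbers are assumptions,
# chosen only to satisfy the usual PD ordering T > R > P > S.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # reward R
    ("cooperate", "defect"): 0,     # sucker's payoff S
    ("defect", "cooperate"): 5,     # temptation T
    ("defect", "defect"): 1,        # punishment P
}

def utility_against_clone(my_move):
    """Utility of my_move, given that a deterministic clone facing the
    same situation will make the same choice that I do."""
    return PAYOFF[(my_move, my_move)]

best = max(["cooperate", "defect"], key=utility_against_clone)
print(best)  # -> "cooperate", since R (3) beats P (1)
```

The conditioning step is the whole point: the agent never evaluates the off-diagonal outcomes, because against a deterministic copy of itself those outcomes cannot occur.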
...and...
05:55 - What about the well-known list of informational harms? E.g. see the Bostrom “Information Hazards” paper.
I notice that multiple critical comments have been incorrectly flagged as spam on this video. Some fans have a pretty infantile way of expressing disagreement.
My comments on the video: Eliezer Yudkowsky: Open Problems in Friendly Artificial Intelligence
...and...