@Caledonian: So if an AI wants to wipe out the human race, we should be happy about it? What if it wants to treat us as cattle? Which (or whose) preferences should it follow? (Notice the weasel words?)
When I was a teenager I used to think just like you: a superintelligence would have better goals than ordinary humans, because it is superintelligent. Then I grew up and realized that minds are not blank slates, and you can't just create a "value-free" AI and wait to see what terminal values it chooses for itself.