It’s a curiosity stopper in the sense that people no longer worry about risks from AI once they assume that intelligence correlates with doing the right thing, and that a superintelligence would therefore do the right thing all the time.
Stuart is trying to answer a different question, which is “Given that we think that’s probably false, what are some good examples that help people to see its falsity?”