one question i get every once in a while, and wish i had a canonical answer to, is (it can probably be worded more pithily):
“humans have always thought their minds are equivalent to whatever their latest technological achievement is—e.g., steam engines. computers are just the latest fad that we currently compare our minds to, so it’s silly to think they somehow pose a threat. move on, nothing to see here.”
note that the canonical answer has to work for people whose ontology does not include the concepts of “computation” or “simulation”. they have seen increasingly universal smartphones and increasingly realistic computer games (things i’ve been gesturing at in my poor attempts to answer) but have no idea how they work.
Agents are the real problem. Intelligent, goal-directed, adversarial behavior is something almost everyone understands, whether it comes from other humans, ants, or crop-destroying pests.
We’re close to being able to create new, faster, more intelligent agents out of computers.
I think the technical answer comes down to the Church–Turing thesis and the computability of the physical universe, but obviously that’s not a great answer for the CS-degree-less among us.
looks great, thanks for doing this!
Most of the threat comes from the space of possible super-capable minds that are not human.
(This does not mean that human-like AIs would be less dangerous, only that they are a small part of the space of possibilities.)
yup, i tried invoking church–turing once, too. worked about as well as you’d expect :)