Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.
Actually, your scenario already happened: the Fukushima reactor failure. They used computer modelling to simulate the tsunami; it was the 1960s, computers were science woo, and if the computer said so, then it was true.
For more subtle cases, though, the problem is the substitution of an 'intellectually omnipotent, omniscient entity' for the AI. If the AI tells people to assassinate a foreign official, nobody's going to do that; it would have to start the nuclear war via butterfly effect, and that's pretty much intractable.
I would prefer that our only line of defense not be "most stupid solutions are going to look stupid". It's harder to recognize stupid solutions in, say, medicine (although there we can at least verify against empirical data).
It is unclear to me, though, that artificial intelligence adds any risk there that isn't already present from natural stupidity.
Right now, look at all the plastics around us, the food additives, and the other novel substances. Cancer rates are rising even after controlling for age; with all the testing we do, when you have a hundred random things, a few bad ones will slip through. Or obesity. This (idiotic solutions) is a problem with technological progress in general.
edit: actually, our all-natural intelligence is very prone to quite odd solutions. Say, reproductive drive plus secondary sex characteristics, yadda yadda, end result: cosmetic implants. Desire to sell more product, end result: overconsumption. Etc., etc.
Which are even more prone to outputting crap solutions, even without being superintelligent.
Sorry you’re being downvoted. It’s not me.