Even a strongly superhuman type 1 by itself is entirely harmless, even if very general within the problem space of type 1.
Type 1 intelligence is dangerous as soon as you try to use it for anything practical simply because it is powerful. If you ask it “how can we reduce global temperatures” and “causing a nuclear winter” is in its solution space, it may return that. Powerful tools must be wielded precisely.
See, that’s what is so incredibly irritating about dealing with people who lack any domain-specific knowledge. You can’t ask it “how can we reduce global temperatures” in the real world.
You can ask it how to make a model out of data; you can ask it what to do to the model so that such-and-such function decreases; it may try nuking this model (inside the model) and generate such a solution. You have to actually put in a lot of effort, like replicating its in-model actions in the real world in a mindless manner, for this nuking to happen in the real world. (And you’ll also have the model visualization to examine, by the way.)
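Roughly, the workflow I have in mind looks like this; a toy sketch, with every action name and number made up purely for illustration:

```python
# A minimal sketch of the workflow described above: fit a model to data,
# then ask an optimizer which action minimizes some quantity *inside that
# model*. All action names and numbers here are hypothetical.

# Toy stand-in for a model learned from data: predicted temperature change
# (degrees C) for each candidate intervention.
predicted_temp_change = {
    "paint roofs white": -0.1,
    "plant forests": -0.3,
    "solar shade in orbit": -0.8,
    "trigger nuclear winter": -5.0,   # absurd, but it does minimize the objective
}

def objective(action: str) -> float:
    """The quantity the optimizer is told to minimize: in-model temperature change."""
    return predicted_temp_change[action]

# The "optimizer" is just an argmin over the model's action space.
best_action = min(predicted_temp_change, key=objective)
print("In-model minimizer:", best_action)   # -> 'trigger nuclear winter'
```

The point being: the argmin is a statement about the model, not an event in the world. Somebody still has to mindlessly carry it out, and the in-model scenario is sitting right there to examine before anyone does.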
What if, instead of giving the solution “cause nuclear war”, it simply returns a seemingly innocuous solution expected to cause nuclear war? I’m assuming that the modelling portion is a black box so you can’t look inside and see why that solution is expected to lead to a reduction in global temperatures.
If the software is using models we can understand and check ourselves, then it isn’t nearly so dangerous.
I’m assuming that the modelling portion is a black box so you can’t look inside and see why that solution is expected to lead to a reduction in global temperatures.
Let’s just assume that Mister President sits on the nuclear launch button by accident, shall we?
It isn’t an amazing novel philosophical insight that type-1 agents ‘love’ to solve problems in the wrong way. It is a fact of life, apparent even in the simplest automated software of that kind. You also, of course, have some pretty visualization of the scenario in which the parameter was minimized or maximized.
edit: also, the answers could be really funny. How do we solve global warming? Okay, just abduct the Prime Minister of China! That should cool the planet off.
It isn’t an amazing novel philosophical insight that type-1 agents ‘love’ to solve problems in the wrong way. It is a fact of life, apparent even in the simplest automated software of that kind.
Of course it isn’t.
Let’s just assume that Mister President sits on the nuclear launch button by accident, shall we?
There are machine learning techniques like genetic programming that can result in black-box models. As I stated earlier, I’m not sure humans will ever combine black-box problem solving techniques with self-optimization and attempt to use the product to solve practical problems; I just think it is dangerous to do so once the techniques become powerful enough.
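To make “black-box” concrete, here is a rough, mutation-only sketch of genetic programming over expression trees; the operators, depths, and population sizes are all arbitrary illustration, not anyone’s actual system:

```python
# A rough, mutation-only sketch of genetic programming over expression trees.
# Everything here (operators, depths, population sizes) is picked arbitrarily
# for illustration; the point is that the evolved model is hard to read.
import random

OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def random_expr(depth=3):
    """Build a random expression tree over the variable x and constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", round(random.uniform(-2, 2), 2)])
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Recursively evaluate an expression tree at the point x."""
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return x if expr == "x" else expr

def mutate(expr):
    """Swap the whole expression, or one of its subtrees, for a fresh random one."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(depth=2)
    op, left, right = expr
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

def fitness(expr, data):
    """Mean squared error against the data; lower is better."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data) / len(data)

# Hypothetical target behaviour the evolved model should imitate.
data = [(float(x), 3.0 * x * x + 1.0) for x in range(-5, 6)]

population = [random_expr() for _ in range(200)]
for _ in range(30):                      # generations of select-and-mutate
    population.sort(key=lambda e: fitness(e, data))
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=lambda e: fitness(e, data))
print("mean squared error:", fitness(best, data))
print("evolved model:", best)            # typically an unreadable nest of operations
```

The evolved expression tends to fit the data while being an unreadable nest of operations; nothing in the selection pressure rewards leaving behind something a human can inspect.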
Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.
Actually, your scenario already happened: the Fukushima reactor failure. They used computer modelling to simulate the tsunami; it was the 1960s, computers were science woo, and if the computer said so, then it was true.
For more subtle cases, though: see, the problem is the substitution of an ‘intellectually omnipotent, omniscient entity’ for the AI. If the AI tells you to assassinate a foreign official, nobody is going to do that; it would have to start the nuclear war via the butterfly effect, and that is pretty much intractable.
I would prefer that our only line of defense not be “most stupid solutions are going to look stupid”. It’s harder to recognize stupid solutions in, say, medicine (although there we can verify with empirical data).
It is unclear to me, though, that artificial intelligence adds any risk there that isn’t already present from natural stupidity.
Right now, look: so many plastics around us, food additives, and other novel substances. Rising cancer rates even after controlling for age. With all the testing, when you have a hundred random things, a few bad ones will slip through. Or obesity. This (idiotic solutions) is a problem with technological progress in general.
edit: actually, our all-natural intelligence is very prone to quite odd solutions. Say, reproductive drive, secondary sex characteristics, yadda yadda; end result: cosmetic implants. Desire to sell more product; end result: overconsumption. Etc., etc.
There are machine learning techniques like genetic programming that can result in black-box models. As I stated earlier, I’m not sure humans will ever combine black-box problem solving techniques with self-optimization and attempt to use the product to solve practical problems; I just think it is dangerous to do so once the techniques become powerful enough.
Which are even more prone to outputting crap solutions even without being superintelligent.