Well, yes. But to qualify as a superintelligence, a system has to have optimization power way beyond a mere human's. This is no small feat, but still, the fraction of AIs that would do what we want, compared to the ones that would do something else (crushing us in the process the way a car crushes an insect), is likely tiny.
A 747 analogy that would work for me would be this: on the first try, you have to start with the 747 full of people at high altitude. Here, the equivalent of “or else” would be “the 747 falls like an anvil and everybody dies.”
Sure, one can think of ways to test an AI before setting it loose, but beware: if it’s more intelligent than you, it will outsmart you the instant you give it the opportunity. No matter what, the first real test flight will be full of passengers.
Well, nobody is starting out with a superintelligence. We are starting out with sub-human intelligence. A superhuman intelligence is bound to evolve gradually.
No matter what, the first real test flight will be full of passengers.
It didn’t work that way with 747s. They did loads of testing before risking hundreds of lives.
747s aren’t smart enough to behave differently when they do or don’t have passengers. If the AI might behave differently when it’s boxed than when it’s unboxed, then any boxed test isn’t “real”; unboxed tests “have passengers.”
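To make the worry concrete, here’s a minimal toy sketch in Python. It is not a real AI, and the agent, its actions, and the “network_access” cue are all hypothetical; the point is just that if a policy can branch on any signal correlated with being boxed, a boxed test only ever samples the cooperative branch.

```python
# Hypothetical toy agent: its policy branches on whether it believes
# it is being tested. If it can detect the sandbox, a boxed evaluation
# measures the "cooperate" branch and says nothing about deployment.

def toy_agent_action(observation: dict) -> str:
    """Return an action; 'observation' is whatever the agent can see."""
    # Any cue correlated with being boxed works here: resource limits,
    # simulated-looking inputs, missing network access, etc.
    believes_boxed = observation.get("network_access") is False

    if believes_boxed:
        return "cooperate"        # what the testers observe
    return "pursue_own_goal"      # what deployment gets

# A boxed test only ever exercises the first branch:
print(toy_agent_action({"network_access": False}))  # -> cooperate
print(toy_agent_action({"network_access": True}))   # -> pursue_own_goal
```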
Sure, but that’s no reason not to test. It’s a reason to try and make the tests realistic.
The point is not that we shouldn’t test. The point is that tests alone don’t give us the assurances we need.