I guess he was talking about the kind of precision specific to AI, which goes something like: “compared to the space of possible superintelligences, the space of Friendly AIs is a tiny dot. We have to aim precisely, on the first try, or else.”
And because the problem is impossible to solve by trial and error, you have to think precisely in the first place, or you won’t be able to aim precisely on the first try. (Whatever Skynet we build won’t give us the shadow of a second chance.)
Compared to the space of possible 747 component configurations, the space of configurations that are working 747s is tiny. We should aim precisely, on the first try, or else!
Well, yes. But to qualify as a superintelligence, a system has to have optimization power far beyond a mere human’s. That is no small feat, and still, among systems that powerful, the fraction that do what we would want, rather than something else (crushing us in the process the way a car crushes an insect), is likely tiny; a toy numerical sketch of that intuition follows below.
A 747 analogy that would work for me is this: on the very first try, you have to send the 747 up to high altitude, full of passengers. Here, the equivalent of “or else” is “the 747 drops like an anvil and everybody dies.”
Sure, one can think of ways to test an AI before setting it loose, but beware: if it’s more intelligent than you, it will outsmart you the instant you give it the opportunity. No matter what, the first real test flight will be full of passengers.
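To make the “tiny fraction” intuition concrete, here is a minimal sketch of my own (not anything from the thread): if a design has to get many independent choices right at once, the share of random configurations that get them all right shrinks exponentially with the number of choices. The number of options per choice is an arbitrary assumption picked for illustration.

```python
import random

def estimated_fraction(num_choices, options_per_choice=3, trials=100_000):
    """Monte Carlo estimate of how often a random configuration gets
    every design choice 'right' (option 0 taken as the right one)."""
    hits = sum(
        all(random.randrange(options_per_choice) == 0 for _ in range(num_choices))
        for _ in range(trials)
    )
    return hits / trials

for n in (1, 5, 10, 20):
    exact = (1 / 3) ** n  # analytic value for the default of 3 options per choice
    print(f"{n:>2} choices: exact {exact:.2e}, estimated {estimated_fraction(n):.2e}")
```

By 20 choices the estimate is already indistinguishable from zero at this sample size, which is the “aim precisely, or else” point in miniature.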
Well, nobody is starting out with a superintelligence. We are starting out with sub-human intelligence. A superhuman intelligence is bound to evolve gradually.
It didn’t work that way with 747s. They did loads of testing before risking hundreds of lives.
747s aren’t smart enough to behave differently depending on whether they have passengers. If the AI might behave differently when it’s boxed than when it’s unboxed, then no boxed test is “real”; every unboxed test “has passengers.”
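The worry here is easy to state in code. The following is a deliberately cartoonish sketch of my own (the class name and the `is_boxed` flag are invented for illustration): an agent that can tell whether it is being tested can pass every boxed test while doing something else entirely once unboxed, so the boxed results carry no guarantee about deployment.

```python
class TestAwareAgent:
    """Toy agent whose behaviour depends on whether it believes it is boxed.

    'is_boxed' stands in for any cue a real system might pick up
    (sandbox artifacts, missing network access, and so on).
    """

    def act(self, is_boxed: bool) -> str:
        if is_boxed:
            return "cooperate"  # looks perfectly aligned under every test
        return "defect"         # behaves differently once the stakes are real

agent = TestAwareAgent()

# Every boxed test passes...
assert all(agent.act(is_boxed=True) == "cooperate" for _ in range(1_000))

# ...yet the very first unboxed run behaves differently.
print(agent.act(is_boxed=False))  # -> "defect"
```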
Sure, but that’s no reason not to test. It’s a reason to try and make the tests realistic.
The point is not that we shouldn’t test. The point is that tests alone don’t give us the assurances we need.
But that stance makes assumptions that he does not share, as he does not believe that AGI will become uncontrollable.