So, I think that the formalization will lead to two conclusions:
“we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity”
and
“we also can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity”
I agree with both those statements, but think the more relevant question would be:
“conditional on it turning out, to the enormous surprise of most everyone in AI, that this AGI design is actually very close to producing an ‘artificial toddler’, what is the sign of the expected effect on the probability of an OK outcome for the world, long-term and taking into account both benefits and risks?”