I understand the scenario to say it isn’t, because the demonstrations are incomprehensible?
Yes, if demonstrations are comprehensible, then I don’t think you need much explicit AI conflict in order to whistleblow, since we will train some systems to explain risks to us.
why/how?
The global camera grab must involve plans that aren’t clearly bad to humans even when all the potential gotchas are pointed out. For example, they may involve dynamics that humans just don’t understand, or where a brute-force simulation or experiment would be prohibitively expensive without leaps of intuition that machines can make but humans cannot. Maybe that’s about tiny machines behaving in complicated ways or being created covertly, or crazy complicated dynamics of interacting computer systems that humans can’t figure out. It might involve the construction of new AI-designed AI systems which operate in different ways, whose function we can’t really constrain except by seeing predictions of their behavior from an even greater distance (machines which are predicted to lead to good-looking outcomes, and which would have been able to exhibit failures to us if so incentivized, but which are even harder to control).
(There is obviously a lot you could say about all the tools at humans’ disposal to circumvent this kind of problem.)
This is one of the big ways in which the story is more pessimistic than my default, and perhaps the highlighted assumptions rule out the most plausible failures, especially (i) a multi-year takeoff, (ii) reasonable competence on the part of the civilization, and (iii) “correct” generalization.
Even under those assumptions I do expect events to eventually become incomprehensible in the necessary ways, but it feels more likely that there will be enough intervening time for ML systems to e.g. solve alignment or help us shift to a new world order or whatever. (And as I mention, in the worlds where the ML systems can’t solve alignment well enough in the intervening time, I do agree that it’s unlikely we can solve it in advance.)