In this story, I’m not imagining that we limited the strategy space or reduced the search quality. I’m imagining that we just scaled up capabilities, used debate without any bells and whistles like interpretability, and the empirical situation just happened to be that the AI systems didn’t develop #4-style “trying” (but did develop #2-style “trying”) before they became capable enough to e.g. establish a stable governance regime that regulates AI development, or do alignment research, better than any existing human alignment researcher, that leads to a solution we can be justifiably confident in.
You (a human) already exhibit #2-style trying. Despite this, you are not capable of “establishing a stable governance regime that regulates AI development” or “doing alignment research better than any existing human alignment researchers” (the latter is tautologically true, even).
So it seems reasonable to conclude that this level of “trying” is not enough to enact the pivotal acts you described (or, indeed, most any pivotal act that we might recognize as “pivotal”). It then follows that if a system is capable enough to enact some such pivotal act, some part of that system must have been running a stronger search than the kind of search described in “#2-style trying”. And if you buy Eliezer’s/Nate’s argument that it’s the search itself that’s dangerous, rather than the fact that you (maybe) wrapped up the search in an outer shell you happen to call “oracle AI” (or something), then it’s not a large jump from there to “maybe the search decides ‘killing all humans’ rates highly according to its search criteria”.
But perhaps you’re conceptualizing this whole “trying” thing differently, because you go on to say:
Idk, I’m also worried about sufficiently scaled-up reflex-like things, in the sense that I think sufficiently scaled-up reflex-like things are capable both of pivotal acts and causing human extinction. But on my prediction of what actually happens I expect at least #2-style reasoning before reducing x-risk to ~zero (because that’s more efficient than scaled-up reflex-like things).
which actually just does not parse in my native ontology. Like, in my ontology “sufficiently scaled-up reflex-like things” stop behaving reflexively. It’s not that you have this abstract label “reflex-like” that you can slap onto some system, such that if you then scale that system up, the label stays stuck to it indefinitely; in my model reflexiveness is a property of actions, not of systems, and if you make a system sufficiently powerful it leaves the regime where reflex-like behavior is its default. It automatically goes from #1 to #2 to #3 to #4 in the limit of sufficient scaling; this is, from my perspective, what is meant by the claim “these things exist on a continuum” (which claim it seems like you agreed with in a parallel comment thread, which simply furthers my confusion).
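(I endorse dxu’s entire reply.)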
So it seems reasonable to conclude that this level of “trying” is not enough to enact the pivotal acts you described
Stated differently than how I’d say it, but I agree that a single human performing human-level reasoning is not enough to enact those pivotal acts.
in my model reflexiveness is a property of actions,
Yeah, in my ontology (and in this context) reflexiveness is a property of cognitions, not of actions. I can reflexively reach into a transparent pipe to pick up a sandwich, without searching over possible plans for getting the sandwich (or at least, without any conscious search, and without any search via trying different plans and seeing if they work); one random video I’ve seen suggests that (some kind of) monkeys struggle to do this and may have to experiment with different plans to get the food. (I use this anecdote as an illustration; I don’t know if it is actually true.)
See also the first few sections of Argument, intuition, and recursion; in the language of that post I’m thinking of “explicit argument” as “trying”, and “intuition” as “reflex-like”, even though they output the same thing.
Within my ontology, you could define behavioral-reflexivity as those behaviors / actions that a human could do with reflexive cognition, and then more competent actions are behavioral-trying. These concepts might match yours. In that case I’m saying that it’s plausible that there’s a wide gap between behavioral-trying-2 and behavioral-trying-3, but really my intuition is coming much more from finding the trying-2 cognitions significantly more likely than the trying-3 cognitions, and thinking that the trying-2 cognitions could scale without becoming trying-3 cognitions.
Or, to try to say things a bit more concretely, I find it plausible that more of the scaling comes from improving the efficiency of the search (e.g. by having better-tuned heuristics and intuitions) than from expanding the domain of possible plans considered by the search. The 4 styles of trying that Rob mentioned lie along the “domain of possible plans” continuum; my claim is that, instead of walking up that continuum, we mostly walk up the continuum of “efficiency / competence of search within the domain”.
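To make that distinction concrete, here is a toy sketch (purely illustrative; the function and variable names below are made up for this example and are not anything from the discussion): scaling “efficiency / competence of search within the domain” corresponds to a larger or better-spent evaluation budget over the same space of plans, while scaling the “domain of possible plans” corresponds to searching over a qualitatively larger space.

```python
# Toy illustration only (a hypothetical example, not anything from the
# discussion above): two different axes along which a plan search can scale.

import random

def search(plan_domain, score, budget):
    """Sample up to `budget` plans from `plan_domain` and return the highest-scoring one."""
    candidates = random.sample(plan_domain, min(budget, len(plan_domain)))
    return max(candidates, key=score)

# A fixed, narrow domain of plans (a stand-in for the plans available to
# #2-style trying).
narrow_domain = [f"tweak_{i}" for i in range(1_000)]

# A qualitatively wider domain that adds new kinds of plans (a stand-in for
# walking up Rob's #1-#4 continuum).
wide_domain = narrow_domain + [f"novel_strategy_{i}" for i in range(1_000)]

score = lambda plan: hash(plan) % 100  # placeholder objective

# Axis 1 -- "efficiency / competence of search within the domain":
# same plan space, but a larger (or better-spent) search budget.
plan_low_effort = search(narrow_domain, score, budget=10)
plan_high_effort = search(narrow_domain, score, budget=500)

# Axis 2 -- "domain of possible plans":
# same search budget, but a qualitatively larger space of plans.
plan_wide_domain = search(wide_domain, score, budget=10)
```

The suggestion in the paragraph above is that, empirically, most of the capability gain comes from the first axis rather than the second.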
(The resulting world looks more like CAIS than like a singular superintelligence with a DSA.)
(And I’ll reiterate, because I anticipate being misunderstood, that this is not a prediction of how the world must be, and thus that we are obviously safe; it is instead a story that I think is not ruled out by our current understanding, and thus one to which I assign non-trivial probability.)