Consider the following pivotal act: “launch a nuclear weapon at every semiconductor fab on earth”.
Any human of even average intelligence could have thought of this.
And by that very same token, the described plan would not actually work.
We do not need a smarter-than-all-humans-ever AI to achieve a pivotal act.
Unless we want the AI in question to output a plan that has a chance of actually working.
A boxed AI should be able to think of pivotal acts and describe them to humans without being so smart that it by necessity escapes the box and destroys all humans.
If an actually workable pivotal act existed that did not require better-than-human intelligence to come up with, we would already be in the process of implementing said pivotal act, because someone would have thought of it already. The fact that this is obviously not the case should therefore cause a substantial update against the antecedent.
This is an incredibly bad argument. Saying something cannot possibly work because no one has done it yet would mean that literally all innovation is impossible.
Saying something cannot possibly work because no one has done it yet would mean that literally all innovation is impossible.
You are attempting to generalize conclusions from an extremely loose class of achievements (“innovation”) to an extremely tight class of achievements (“commit, using our current level of knowledge and resources, a pivotal act”). That this generalization is invalid ought to go without saying, but in the interest of constructiveness I will point out one (relevant) aspect of the disanalogy:
“Innovation”, at least as applied to technology, is incremental; new innovations are allowed to build on past knowledge in ways that (in principle) place no upper limit on the technological improvements thus achieved (except whatever limits are imposed by the hard laws of physics and mathematics). There is also no time limit on innovation; by default, anything that is possible at all is assumed to be realized eventually, but there are no guarantees as to when that will happen for any specific technology.
“Commit a pivotal act using the knowledge and resources currently available to us”, on the other hand, is the opposite of incremental: it demands that we execute a series of actions leading to some end goal (such as “take over the world”) while holding fixed our level of background knowledge/acumen. Moreover, whereas there is no time limit on technological “innovation”, there is certainly a time limit on successfully committing a pivotal act; and that time limit is imposed precisely by however long it takes before humanity “innovates” its way to AGI.
In summary, your analogy leaks, and consequently so does your generalization. Your reasoning has a further flaw, however: even if the analogy were tight, it would not suffice to establish what you need to establish. Recall your initial claim:
We do not need a smarter-than-all-humans-ever AI to achieve a pivotal act.
This claim does not, in fact, become more plausible if we replace “achieve a pivotal act” with e.g. “vastly increase the pace of technological innovation”. This is true even though technological innovation is, as a human endeavor, far more tractable than saving/taking over the world. The reason is that the load-bearing part of the argument is that the AI must produce relevant insights (whether related to “innovation” or “pivotal acts”) at a rate vastly superior to that of humans, in order for it to be able to reliably produce innovations/world-saving plans. (I take it as not needing argument that humans do not reliably do either of these things.) In other words, reliably doing either certainly requires an AI whose ability in the relevant domains exceeds that of “all humans ever”, because “all humans ever” empirically do not (reliably) accomplish these tasks.
For your argument to go through, in other words, you cannot get away with arguing merely that something is “possible” (though in fact you have not even established this much, because the analogy with technological innovation does not hold). Your argument actually requires you to defend the (extremely strong) claim that the ambient probability with which humans successfully generate world-saving plans is sufficient to the task of generating a successful world-saving plan before unaligned AGI is built. And this claim is clearly false, since (once again)
If an actually workable pivotal act existed that did not require better-than-human intelligence to come up with, we would already be in the process of implementing said pivotal act, because someone would have thought of it already. The fact that this is obviously not the case should therefore cause a substantial update against the antecedent.
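To spell out the shape of this “update against the antecedent” in symbols (purely a sketch; the notation below is arbitrary and not something either of us has committed to): let $W$ stand for “a workable pivotal act discoverable at current human intelligence exists” and $D$ for “someone has already discovered such an act and begun implementing it”. Bayes’ rule gives

$$P(W \mid \neg D) \;=\; \frac{P(\neg D \mid W)\,P(W)}{P(\neg D \mid W)\,P(W) + P(\neg D \mid \neg W)\,P(\neg W)}.$$

Since $P(\neg D \mid \neg W) \approx 1$ while $P(\neg D \mid W)$ is appreciably below 1 (if such an act existed and were human-discoverable, someone would plausibly have found it by now), observing $\neg D$ drags $P(W)$ downward. How far it drags it depends on how strongly one expects a discoverable act to have already been discovered, which is exactly the point in dispute.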
the AI must produce relevant insights (whether related to “innovation” or “pivotal acts”) at a rate vastly superior to that of humans, in order for it to be able to reliably produce innovations/world-saving plans
This is precisely the claim we are arguing about! I disagree that the AI needs to produce insights “at a rate vastly superior to all humans”.
On the contrary, I claim that there is one borderline act (start a catastrophe that sets back AI progress by decades) that can be done with current human knowledge. And I furthermore claim that there is one pivotal act (design an aligned AI) that may well be achieved via incremental progress.
If the AI does not need to produce relevant insights at a faster rate than humans, then that implies the rate at which humans produce relevant insights is sufficiently fast already. And if that’s your claim, then you—again—need to explain why no humans have been able to come up with a workable pivotal act to date.
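To put a toy quantitative shape on that burden (an illustrative model only; neither of us is committed to these symbols): suppose humanity hits upon workable pivotal-act plans at some effective rate $\lambda$ per year, and suppose unaligned AGI arrives in $T$ years. Treating discoveries as a Poisson process gives

$$P(\text{at least one workable plan before AGI}) \;=\; 1 - e^{-\lambda T}.$$

For this probability to be high, $\lambda T$ must be of order one or larger; my contention is that the historical record puts $\lambda$ very low, so relying on unaided human insight amounts to betting on a small $\lambda T$.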
On the contrary, I claim that there is one borderline act (start a catastrophe that sets back AI progress by decades) that can be done with current human knowledge.
How do you propose to accomplish this? Your initial suggestion, “launch nukes at every semiconductor fab”, is not workable. If all of the candidate solutions you have in mind are of similar quality to that, then I reiterate: humans cannot, with their current knowledge and resources, execute a pivotal act in the real world.
And I furthermore claim that there is one pivotal act (design an aligned AI) that may well be achieved via incremental progress.
This is the hope, yes. Note, however, that this path routes directly through smarter-than-human AI, the necessity of which is precisely what you are disputing. So the existence of this path does not particularly strengthen your case.
Your initial suggestion, “launch nukes at every semiconductor fab”, is not workable.
In what way is it not workable? Perhaps we have different intuitions about how difficult it is to build a cutting-edge semiconductor facility? Or perhaps you disagree with me that AI progress is largely hardware-bound, and thus that cutting off the supply of new compute would prevent the rise of superhuman AI?
Do you also think that “the US president launches every nuclear weapon at his command, causing nuclear winter” would fail to prevent the rise of superhuman AGI?