If the AI does not need to produce relevant insights at a faster rate than humans, then that implies the rate at which humans produce relevant insights is sufficiently fast already. And if that’s your claim, then you—again—need to explain why no humans have been able to come up with a workable pivotal act to date.
On the contrary, I claim that there is one borderline pivotal act (start a catastrophe that sets back AI progress by decades) that can be done with current human knowledge.
How do you propose to accomplish this? Your initial suggestion, “launch nukes at every semiconductor fab”, is not workable. If all of the candidate solutions you have in mind are of similar quality to that, then I reiterate: humans cannot, with their current knowledge and resources, execute a pivotal act in the real world.
Furthermore, I claim that there is one pivotal act (design an aligned AI) that may well be achieved via incremental progress.
This is the hope, yes. Note, however, that this path routes directly through smarter-than-human AI, whose necessity is precisely what you are disputing. So the existence of this path does not particularly strengthen your case.
Your initial suggestion, “launch nukes at every semiconductor fab”, is not workable.
In what way is it not workable? Perhaps we have different intuitions about how difficult it is to build a cutting-edge semiconductor facility? Alternatively, you may disagree with me that AI is largely hardware-bound, and thus that cutting off the supply of new compute would prevent the rise of superhuman AI?
Do you also think that “the US president launches every nuclear weapon at his command, causing nuclear winter” would fail to prevent the rise of superhuman AGI?