Isn’t Stuart Russell an AI doomer as well, separated from Eliezer only by nuances?
I’m only going off of his book and this article, but I think they differ in far more than nuances. Stuart is saying “I don’t want my field of research destroyed”, while Eliezer is suggesting a global treaty to airstrike all GPU clusters, including on nuclear-armed nations. Stuart seems to think the control problem is solvable if enough effort is put into it.
Eliezer’s beliefs are very extreme, and almost every accomplished expert disagrees with him. I’m not saying you should stop listening to his takes, just that you should pay more attention to other people.
You know the expression “hope for the best, prepare for the worst”? A true global ban on advanced AI is “preparing for the worst”—the worst case being (1) sufficiently advanced AI has a high risk of killing us all, unless we know exactly how to make it safe, and (2) we are very close to the threshold of danger.
Regarding (2), we may not know how close we are to the threshold of danger, but we have already passed the point at which we understand our own systems (see the quote in Stuart Russell’s article—“we have no idea” whether GPT-4 forms its own goals), and capabilities are advancing monthly—ChatGPT, then GPT-4, now GPT-4 with reflection. Because performance depends so much on prompt engineering, we are very far from knowing the maximum capabilities of the LLMs we already have. Sufficient reflection applied to prompt engineering may already put us on the threshold of danger. It’s certainly driving us into the unknown.
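To be concrete about what I mean by “reflection”: roughly, you ask the model for an answer, ask it to critique that answer, then ask it to revise, and repeat. Here is a minimal sketch in Python; the call_llm argument is a hypothetical stand-in for whatever chat-completion client you use, not any particular vendor’s API.

```python
# Minimal sketch of "reflection" layered on top of prompting. The call_llm
# argument is a hypothetical stand-in for any chat-completion client; nothing
# here depends on a particular vendor's API.
from typing import Callable

def solve_with_reflection(call_llm: Callable[[str], str], task: str, rounds: int = 3) -> str:
    # First attempt.
    answer = call_llm(f"Task: {task}\nGive your best answer.")
    for _ in range(rounds):
        # Ask the model to critique its own answer...
        critique = call_llm(
            f"Task: {task}\nProposed answer:\n{answer}\n"
            "List the concrete flaws or omissions in this answer."
        )
        # ...then to revise in light of the critique.
        answer = call_llm(
            f"Task: {task}\nPrevious answer:\n{answer}\n"
            f"Critique:\n{critique}\nWrite an improved answer."
        )
    return answer
```

The loop itself is trivial; the open question is how much latent capability this kind of self-critique squeezes out of a model we already have.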
Regarding (1), the attitude of the experts seems to be: let’s hope it’s not that dangerous, and/or that safety is not that hard to figure out, before we arrive at the threshold of danger. That’s not “preparing for the worst”; that’s “hoping for the best”.
Eliezer believes that with overwhelming probability, creating superintelligence will kill us unless we have figured out safety beforehand. I would say the actual risk is unknown, but it really could be huge. The combination of power and unreliability we already see in language models gives us a taste of what that’s like.
Therefore I agree with Eliezer that in a safety-first world, capable of preparing for the worst in a cooperative way, we would see something like a global ban on advanced AI, at least until the theoretical basis of AI safety was more or less ironclad. We live in a very different world, a world of commercial and geopolitical competition that is driving an AI capabilities race. For that reason, and also because I am closer to the technical side than the political side, I prefer to focus on achieving AI safety rather than banning advanced AI. But let’s not kid ourselves; the current path involves taking huge unknown risks, and it should not have required a semi-outsider like Eliezer to forcefully raise, not just the idea of a pause, but the idea of a ban.
Sorry, I should have specified: I am very aware of Eliezer’s beliefs. I think his policy prescriptions are reasonable, if his beliefs are true. I just don’t think his beliefs are true. Established AI experts have heard his arguments with serious consideration and an open mind, and still disagree with them. This is evidence that his arguments are probably flawed, and I don’t find it particularly hard to think of potential flaws in them.
The type of global ban envisioned by Yudkowsky really only makes sense if you agree with his premises. For example, setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce, and only makes sense given certain assumptions about the compute requirement for AGI. The idea that bombing datacentres even in nuclear-armed nations is “worth it” only makes sense if you think that any particular cluster has an extremely high chance of killing everyone, which I don’t think is the case.
The type of global ban envisioned by Yudkowsky really only makes sense if you agree with his premises
I think Eliezer’s current attitude is actually much closer to how an ordinary person thinks or would think about the problem. Most people don’t feel a driving need to create a potential rival to the human race in the first place! It’s only those seduced by the siren call of technology, or who are trying to engage with the harsh realities of political and economic power, who think we just have to keep gambling in our current way. Any politician who seriously tried to talk about this issue would soon be trapped between public pressure to shut it all down, and private pressure to let it keep happening.
setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce
It may be hard to enforce, but what other kind of ban would be meaningful? Consider just GPT-3.5 and GPT-4, embedded in larger systems that give them memory, reflection, and access to the real world, something multiple groups are working on right now. It would require something unusual for that not to lead to “AGI” within a handful of years.
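To illustrate what I mean by “embedded in larger systems”: the wrapper can be almost trivially simple. Below is a rough Python sketch of an agent loop that gives a model persistent memory and access to external tools; call_llm, the tools registry, and the reply format are all hypothetical placeholders rather than any particular project’s design.

```python
# Rough sketch of an LLM wrapped in a loop that provides memory and tool use.
# call_llm, tools, and the "TOOL <name> <input>" convention are illustrative
# placeholders, not any real project's interface.
from typing import Callable

def run_agent(call_llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              goal: str,
              max_steps: int = 10) -> list[str]:
    memory: list[str] = []  # persistent log of everything the agent has done
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            "Memory so far:\n" + "\n".join(memory) + "\n"
            f"Available tools: {', '.join(tools)}\n"
            "Reply with either 'TOOL <name> <input>' or 'DONE <summary>'."
        )
        reply = call_llm(prompt)
        memory.append(f"model: {reply}")
        if reply.startswith("DONE"):
            break
        if reply.startswith("TOOL "):
            parts = reply.split(" ", 2)
            name = parts[1]
            arg = parts[2] if len(parts) > 2 else ""
            # Tool output (web search, code execution, etc.) is fed back into
            # memory, which is what gives the model "access to the real world".
            result = tools.get(name, lambda _: "unknown tool")(arg)
            memory.append(f"tool {name}: {result}")
    return memory
```

Nothing in the scaffolding itself is sophisticated; the capability lives in the model and in whatever tools you hand it.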