I must admit, as an outsider, I am somewhat confused as to why Eliezer’s opinion is given so much weight relative to all the other serious experts who are looking into AI problems. I understand why this was the case a decade ago, when not many people were seriously considering the issues, but now there are AI heavyweights like Stuart Russell on the case, whose expertise and knowledge of AI are greater than Eliezer’s, proven by actual accomplishments in the field. This is not to say Eliezer doesn’t have achievements to his name, but I find his academic work lackluster when compared to his skills in awareness-raising, movement-building, and persuasive writing.
Isn’t Stuart Russell an AI doomer as well, separated from Eliezer only by nuances? Are you asking why Less Wrong favors Eliezer’s takes over his?
Well, it’s more that Eliezer is being loud right now, and so he’s affecting what folks are talking about a lot. Stuart Russell-level shouting is the open letter; then Eliezer shows up, goes “I can be louder than you!”, and says to ban datacenters internationally by treaty as soon as possible, using significant military threat in negotiation.
I’m only going off of his book and this article, but I think they differ in far more than nuances. Stuart is saying “I don’t want my field of research destroyed”, while Eliezer is suggesting a global treaty to airstrike all GPU clusters, including on nuclear-armed nations. Stuart seems to think the control problem is solvable if enough effort is put into it.
Eliezer’s beliefs are very extreme, and almost every accomplished expert disagrees with him. I’m not saying you should stop listening to his takes, just that you should pay more attention to other people.
You know the expression “hope for the best, prepare for the worst”? A true global ban on advanced AI is “preparing for the worst”—the worst case being (1) sufficiently advanced AI has a high risk of killing us all, unless we know exactly how to make it safe, and (2) we are very close to the threshold of danger.
Regarding (2), we may not know how close we are to the threshold of danger, but capabilities have already outrun our understanding (see the quote in Stuart Russell’s article: “we have no idea” whether GPT-4 forms its own goals), and they are advancing monthly: ChatGPT, then GPT-4, now GPT-4 with reflection. Because performance depends so much on prompt engineering, we are very far from knowing the maximum capabilities of the LLMs we already have. Sufficient reflection applied to prompt engineering may already put us on the threshold of danger. It’s certainly driving us into the unknown.
Regarding (1), the attitude of the experts seems to be: let’s hope it isn’t that dangerous, and/or that safety isn’t that hard to figure out, before we arrive at the threshold of danger. That’s not “preparing for the worst”; that’s “hoping for the best”.
Eliezer believes that, with overwhelming probability, creating superintelligence will kill us unless we have figured out safety beforehand. I would say the actual risk is unknown, but it really could be huge. The combination of power and unreliability we already see in language models gives us a taste of what that’s like.
Therefore I agree with Eliezer that in a safety-first world, capable of preparing for the worst in a cooperative way, we would see something like a global ban on advanced AI; at least until the theoretical basis of AI safety was more or less ironclad. We live in a very different world, a world of commercial and geopolitical competition that is driving an AI capabilities race. For that reason, and also because I am closer to the technical side than the political side, I prefer to focus on achieving AI safety rather than banning advanced AI. But let’s not kid ourselves; the current path involves taking huge unknown risks, and it should not have required a semi-outsider like Eliezer to forcefully raise, not just the idea of a pause, but the idea of a ban.
Sorry, I should have specified: I am very aware of Eliezer’s beliefs. I think his policy prescriptions are reasonable if his beliefs are true; I just don’t think his beliefs are true. Established AI experts have heard his arguments with serious consideration and an open mind, and still disagree with them. That is evidence that his arguments are probably flawed, and I don’t find it particularly hard to think of potential flaws in them.
The type of global ban envisioned by Yudkowsky really only makes sense if you agree with his premises. For example, setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce, and only makes sense given certain assumptions about the compute requirements for AGI. The idea that bombing any datacentres in nuclear-armed nations is “worth it” only makes sense if you think that any particular cluster has an extremely high chance of killing everyone, which I don’t think is the case.
The type of global ban envisioned by Yudkowsky really only makes sense if you agree with his premises

I think Eliezer’s current attitude is actually much closer to how an ordinary person thinks, or would think, about the problem. Most people don’t feel a driving need to create a potential rival to the human race in the first place! It’s only those seduced by the siren call of technology, or those trying to engage with the harsh realities of political and economic power, who think we just have to keep gambling in our current way. Any politician who seriously tried to talk about this issue would soon be trapped between public pressure to shut it all down and private pressure to let it keep happening.
setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce

It may be hard to enforce, but what other kind of ban would be meaningful? Consider just GPT-3.5 and GPT-4, embedded in larger systems that give them memory, reflection, and access to the real world, something multiple groups are working on right now. It would require something unusual for that not to lead to “AGI” within a handful of years.
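To make “embedded in larger systems that give them memory, reflection, and access to the real world” concrete, here is a minimal sketch of what such a wrapper looks like, in the spirit of the agent loops those groups are building. Everything here is hypothetical and illustrative: `call_llm` is a stand-in for whatever chat-model API you would actually use, and the example tools are deliberately inert.

```python
# Hypothetical, minimal sketch of an "LLM embedded in a larger system" loop:
# persistent memory, a reflection step, and tool calls standing in for
# "access to the real world". call_llm is a placeholder, not a real API.

from typing import Callable, Dict, List


def call_llm(prompt: str) -> str:
    """Stand-in for a chat-model call (e.g. GPT-3.5 or GPT-4)."""
    raise NotImplementedError("wire this up to a real model API")


def run_agent(goal: str,
              tools: Dict[str, Callable[[str], str]],
              max_steps: int = 10) -> List[str]:
    memory: List[str] = []  # scratchpad that persists across steps
    for _ in range(max_steps):
        recent = "\n".join(memory[-20:])  # re-feed recent memory each step

        # 1. Act: ask the model for its next tool invocation.
        action = call_llm(
            f"Goal: {goal}\nMemory:\n{recent}\n"
            "Reply as '<tool>: <input>' using one of: "
            + ", ".join(tools) + "\nNext action:"
        )
        memory.append(f"ACTION: {action}")

        # 2. Execute: the tool call is the system's contact with the world.
        name, _, arg = action.partition(":")
        if name.strip() in tools:
            memory.append(f"RESULT: {tools[name.strip()](arg.strip())}")

        # 3. Reflect: self-critique is appended to memory and shapes later steps.
        critique = call_llm(
            f"Goal: {goal}\nMemory:\n{recent}\n"
            "Critique progress so far; say DONE if the goal is met:"
        )
        memory.append(f"REFLECTION: {critique}")
        if "DONE" in critique:
            break
    return memory


if __name__ == "__main__":
    # Inert example tools; a real deployment would expose web search, code
    # execution, email, etc., which is exactly the point being argued above.
    demo_tools = {"search": lambda q: f"(pretend search results for {q!r})",
                  "note": lambda text: "noted"}
    # run_agent("summarize today's AI news", demo_tools)  # needs a real call_llm
```

The point of the sketch is only that none of the individual pieces are exotic: the loop, the memory, and the reflection prompt amount to a few dozen lines wrapped around a model that already exists.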
A big part of it is simply that he’s still very good at being loud and sounding intensely spooky. He also doesn’t do a very good job of explaining his reasons: he has leveled up his skill at conveying why it seems spooky to him without ever explaining the mechanics of the threat, because he did a good job of thinking abstractly but did not do a good job of compiling that thinking into a median-human-understandable explanation. Notice how oddly he talks; I suspect it’s related to why he realized there was a problem.
I have seen him on video several times, including the Bankless podcast, and it has never seemed to me that he talks at all “oddly”. What seems “odd” to you?
Talking like a rationalist. I do it too; so do you.
I don’t know what you’re pointing to with that, but I don’t see any “rationalistic” manner that distinguishes him from, say, his interlocutors on Bankless, or from Lex Fridman. (I’ve not seen Eliezer’s conversation with him, but I’ve seen other interviews by Fridman.)
I mean, he’s really smart and articulate, has thought about these things for a long time, and can speak spontaneously and cogently to the subject and field questions he hasn’t seen in advance. Being in the top whatever percentile in these attributes is, by definition, uncommon, but not “odd”, which means more than just uncommon.
The people here, on LessWrong, give EY’s opinion a lot of weight because LW was founded by EY, and functions as a kind of fan club.
https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts