Sorry, I should have specified: I am very aware of Eliezer's beliefs. I think his policy prescriptions would be reasonable if his beliefs were true; I just don't think they are. Established AI experts have given his arguments serious, open-minded consideration and still disagree with them. That is evidence that the arguments are probably flawed, and I don't find it particularly hard to think of potential flaws in them.
The type of global ban envisioned by Yudkowsky really only makes sense if you agree with his premises. For example, setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce, and it only makes sense given certain assumptions about the compute requirements for AGI. Likewise, the idea that bombing datacentres in nuclear-armed nations is “worth it” only makes sense if you think any particular cluster has an extremely high chance of killing everyone, which I don't think is the case.
The type of global ban envisioned by Yudkowsky really only makes sense if you agree with his premises
I think Eliezer’s current attitude is actually much closer to how an ordinary person thinks or would think about the problem. Most people don’t feel a driving need to create a potential rival to the human race in the first place! It’s only those seduced by the siren call of technology, or who are trying to engage with the harsh realities of political and economic power, who think we just have to keep gambling in our current way. Any politician who seriously tried to talk about this issue would soon be trapped between public pressure to shut it all down, and private pressure to let it keep happening.
setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce
It may be hard to enforce, but what other kind of ban would be meaningful? Consider just GPT-3.5 and GPT-4, embedded in larger systems that give them memory, reflection, and access to the real world, something multiple groups are working on right now. It would require something unusual for that not to lead to “AGI” within a handful of years.
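To be concrete about what I mean by “embedded in larger systems”: here is a minimal sketch, in plain Python, of that kind of scaffolding. `call_model` and `run_tool` are hypothetical placeholders (any chat-completion API and any set of tools), not references to a specific project; the point is only how little extra machinery the loop needs.

```python
# Minimal, illustrative agent loop: a chat model wrapped in scaffolding that
# gives it persistent memory, a reflection step, and access to the real world.
# call_model and run_tool are placeholders, not real library APIs.

def call_model(prompt: str) -> str:
    """Placeholder for a call to GPT-3.5/4 or any other chat model."""
    raise NotImplementedError("wire up a model API here")

def run_tool(action: str) -> str:
    """Placeholder for real-world access: web search, shell, email, etc."""
    raise NotImplementedError("wire up tools here")

def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []                      # persistent memory across steps
    for _ in range(max_steps):
        context = "\n".join(memory[-20:])       # feed recent memory back in
        action = call_model(f"Goal: {goal}\nMemory:\n{context}\nNext action?")
        observation = run_tool(action)          # act on the outside world
        reflection = call_model(                # reflection: critique and revise
            f"You did: {action}\nResult: {observation}\nWhat should change next time?"
        )
        memory.extend([action, observation, reflection])
        if "DONE" in action:                    # crude stopping condition
            break
    return memory
```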