It makes me… surprised? sad that I don’t understand you?… to read that you think the floor is 10%, after reading Quintin Pope’s summary of his disagreements with EY. His guess was 5%, and his theories seem much clearer, more predictive, and more articulable than EY’s.
The main answer here is that I hadn’t read Quintin’s post in full detail and didn’t know that. I’ll want to read it more carefully, but I mostly expect to update my statement to “5%”. Thank you for pointing it out.
(I was aware of Scott Aaronson being at something like 3%, but honestly hadn’t been very impressed with his reasoning and understanding, and was explicitly not counting him. Sorry, Scott.)
I have more thoughts on where my own P(Doom) comes from, and how I relate to all this, but I think I should write a top-level post about it and take some time to get it well articulated. I think I already said this, but a quick recap: I don’t think you need particularly Yudkowskian views to think an international shutdown treaty is a good idea. My own P(Doom) is somewhat confused, but I put it at >50% odds. A major reason is the additional disjunctive worries: you don’t just need the first superintelligence to go well, you also need a world with lots of strong-but-narrow AIs interacting to go well, or a multipolar takeoff to go well.
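To gesture at why the disjunctive framing pushes my number up (these are purely illustrative numbers, not my actual estimates): even if each of, say, three such transitions independently goes well with 80% probability, the chance that all of them go well is only about half:

$$P(\text{all go well}) = 0.8^3 \approx 0.51 \quad\Rightarrow\quad P(\text{doom}) \approx 0.49$$

Each added requirement is another conjunct on the survival side, so the failure modes stack disjunctively.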
Sooner or later you definitely need something about as strict as (actually stricter than) the global controls Eliezer advocates here, since compute costs go down, compute itself goes up, and AI models become more accessible and more powerful. Even if alignment is easy, I don’t see how you can expect to survive an AI-heavy world without a level of control and international alignment that feels draconian by today’s standards.
(I don’t know yet whether Quintin argues against all these points, but I’ll give it a read. I haven’t been keeping up with everything, since there’s a lot to read, but it seems important to be familiar with his take.)
But for right now, maybe what I most want to say is: “Yeah man, this is very intense and sad. It sounds like I disagree with your epistemic state, but I don’t think your epistemic state is crazy.”
I hope you do, since that might reveal cruxes about AI safety, and I might agree or disagree with the post you write.