There is a huge difference between the responses to Q1 (“Will AGI cause an existential catastrophe?”) and Q2 (“...without additional intervention from the existing AI Alignment research community”), to a point that seems almost unjustifiable to me. To pick the first matching example I found (and not to purposefully pick on anybody in particular), Daniel Kokotajlo thinks there’s a 93% chance of existential catastrophe without the AI Alignment community’s involvement, but only 53% with it. This implies a ~43% chance of the AI Alignment community solving the problem, conditional on the problem being real and otherwise unsolved, but only a ~7% chance of the catastrophe being averted for any other reason, including the possibility that the researchers building the systems solve it themselves, or that the concern is largely mistaken.
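To make the arithmetic explicit, here is a minimal sketch, treating the two survey answers as conditional probabilities (my reading); the 93% and 53% are Daniel’s stated numbers, and the variable names are just mine:

```python
# Daniel's stated answers, read as conditional probabilities (my assumption about the question's intent)
p_doom_without_community = 0.93  # P(catastrophe | no alignment-community intervention)
p_doom_with_community = 0.53     # P(catastrophe | the community keeps working)

# Chance catastrophe is averted for any other reason (builders solve it, the concern is wrong, ...)
p_averted_otherwise = 1 - p_doom_without_community

# Chance the community averts it, conditional on it otherwise occurring
p_community_solves = (p_doom_without_community - p_doom_with_community) / p_doom_without_community

print(round(p_averted_otherwise, 2), round(p_community_solves, 2))  # 0.07 0.43
```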
What makes people so confident in the AI Alignment research community solving this problem, far above any other alternative?
I also noticed Daniel’s difference in probabilities there, and thought it was substantial. But it doesn’t seem unreasonable to me. The existing AI x-risk community has changed the global conversation on AI and has been responsible for a great deal of funding and direct research on many related technical problems. I could talk about the specific technical work, or the chain of influence from the AI FOOM Debate to Superintelligence to OpenPhil, or from CFAR to FLI to Musk to OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations, and the ways in which this looks to me like clear progress on subproblems. I’m not sure exactly what alternatives you might have in mind.
To emphasize, the clash I’m perceiving is not about the chance assigned to these problems being tractable, but about the relative probability of ‘AI Alignment researchers’ solving them, as compared to everyone else and every other explanation. In particular, people building AI systems intrinsically spend some of their effort trying to make those systems aligned, even if they are completely unconvinced about the merits of AI risk, just because that’s a fundamental part of building a useful AI.
I could talk about the specific technical work, or the chain of influence from the AI FOOM Debate to Superintelligence to OpenPhil, or from CFAR to FLI to Musk to OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations, and the ways in which this looks to me like clear progress on subproblems.
I have a sort of Yudkowskian pessimism towards most of these things (policy won’t actually help; Iterated Amplification won’t actually work), but I’ll try to put that aside here for a bit. What I’m curious about is what makes these sorts of ideas discoverable only within this specific network of people, under these specific institutions, and what makes them particularly more promising than more classical approaches to alignment.
Isn’t Iterated Amplification, with ≥20% probability, in the class of things you’d expect people to try just to get their early systems to work? Not exactly that scheme, to be clear, but fundamentally RL systems that take extra steps to preserve the intent of the optimization process.
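For concreteness, here is the kind of loop I have in mind. This is only a toy sketch of the amplify-then-distill idea as I understand it; `amplify` and `distill` are placeholder names I’m inventing for illustration, not any real API:

```python
def iterated_amplification(model, amplify, distill, rounds):
    """Toy sketch of an amplify-then-distill loop.

    amplify(model): a slower but more capable overseer built from the current
        model (e.g. a human or controller decomposing tasks into subtasks and
        delegating them to copies of the model).
    distill(overseer): train a new fast model to reproduce the overseer's
        behaviour, hopefully preserving its intent.
    """
    for _ in range(rounds):
        overseer = amplify(model)   # build a stronger but slower system from the current model
        model = distill(overseer)   # compress it back into a fast model for the next round
    return model
```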
To rephrase a bit: it seems to me that a worldview in which AI alignment is sufficiently tractable that Iterated Amplification is a huge step towards a solution would also be a worldview in which AI alignment is easy enough (though not necessarily easy in any absolute sense) that there should be a much larger prior belief that it gets solved anyway.
FWIW, I made these judgments quickly and intuitively and thus could easily have just made a silly mistake. Thank you for pointing this out.
So, what do I think now, reflecting a bit more?
--The 7% judgment still seems correct to me. I feel pretty screwed in a world where our entire community stops thinking about this stuff. I think it’s because of Yudkowskian pessimism combined with the heavy-tailed nature of impact and research. A world without this community would still be a world where people put some effort into solving the problem, but there would be less effort, by less capable people, and it would be more half-hearted, not directed at actually solving the problem, and not actually taking the problem seriously.
--The other judgment? Maybe I’m too optimistic about the world where we continue working. But idk, I am rather impressed by our community and I think we’ve been making steady progress on all our goals over the last few years. Moreover, OpenAI and DeepMind seem to be taking safety concerns mildly seriously due to having people in our community working there. This makes me optimistic that if we keep at it, they’ll take it very seriously, and that would be great.
I interpreted the question as something like “if nobody cares about safety and there isn’t a community that takes a special interest in it, will we be safe”. I don’t think it’s specifically this AI Alignment community solving it, it’s just that if nobody tries to solve the problem, the problem will stay unsolved.
Edit: And I do now see that I misinterpreted the question. Updated my second estimate downwards because of that. Thanks for pointing this out!