Will AI See Sudden Progress?
Will advanced AI let some small group of people or AI systems take over the world?
AI X-risk folks and others have accrued lots of arguments about this over the years, but the debate has been disappointing: few minds have been changed and little has been resolved. I still have hopes for sorting this out, though, and I thought a written summary of the evidence we have so far (which often seems to live in personal conversations) would be a good start, for me at least.
To that end, I started a collection of reasons to expect discontinuous progress near the development of AGI.
I do think the world could be taken over without a step change in anything, but it seems less likely, and we can talk about the arguments around that another time.
Paul Christiano had basically the same idea at the same time, so for a slightly different take, here is his account of reasons to expect slow or fast take-off.
Please tell us in the comments or feedback box if your favorite argument for AI Foom is missing or isn’t represented well. Or, if you’d like to represent it well yourself, write a short essay and send it to me here; we will gladly consider posting it as a guest blog post.
I’m also pretty curious to hear which arguments people actually find compelling, even if they are already listed. I don’t actually find any of the ones I have that compelling yet, and I think a lot of people who have thought about it do expect ‘local takeoff’ with at least substantial probability, so I am probably missing things.
Crossposted from AI Impacts.