A summary of the Hanson-Yudkowsky FOOM debate
In late spring this year, Luke tasked me with writing a summary and analysis of the Hanson-Yudkowsky FOOM debate, with the intention of eventually having it published somewhere. Due to other priorities, this project was put on hold. Because it doesn't look like it will be finished in the near future, and because Curiouskid asked to see it, we thought that we might as well share the thing.
I have reorganized the debate, presenting it by topic rather than in chronological order. I start by providing some brief conceptual background that's useful for understanding Eliezer's optimization power argument, and then present the argument itself. Robin's various objections follow, and then a summary of Robin's view of what the Singularity will be like, together with Eliezer's objections to that view. Hopefully, this should make the debate easier to follow. This summary also incorporates material from the 90-minute live debate on the topic that they had in 2011. The full table of contents:
Introduction
Overview
The optimization power argument
Conceptual background
The argument: Yudkowsky
Recursive self-improvement
Hard takeoff
Questioning optimization power: the question of abstractions
Questioning optimization power: the historical record
Questioning optimization power: the UberTool question
Hanson’s Singularity scenario
Architecture vs. content, sharing of information
Modularity of knowledge
Local or global singularity?
Wrap-up
Conclusions
References
Here's the link to the current draft; any feedback is welcome. Feel free to comment if you know of useful references, if you think I've misinterpreted something that was said, or if you think there's any other problem. I'd also be curious to hear whether people find this outline easier to follow than the original debate, or whether it's just as confusing.
As someone who has not yet read through all the sequences, and who found it difficult to follow the Hanson-Yudkowsky FOOM debate on the few occasions I attempted it, I find this summary very helpful.
I find it really helps if you do the voices. Like pretend Yudkowsky sounds like Harry Potter from the movie and is having a conversation with Dumbledore (in the HPMOR universe, obviously).
For anyone following the sequence rerun going on right now, this summary is highly recommended. It is much more manageable than the blog posts, and doesn’t leave out anything important (that I noticed).
In Conceptual background, bullet 2, you should emphasize the importance of the number of goal states compared to the total number of states, e.g. $\frac{\text{all possible molecules with carbon}}{\text{all possible molecules}}$. This should also improve bullets 3 & 4.
Thanks, I implemented that as what is now bullet 3. Do you want to be credited in the acknowledgements, and if so, under what name?
Sure. Tarn Somervell—more comments may follow when I get around to reading it through.
Thank you for making this!
I tried to organize it a bit further by porting it to workflowy and grouping some of the bullets together.
Let me know if you find this summary of a summary useful.
This is very useful indeed—thank you!
This is an absolutely amazing read—somehow it never occurred to me that so many singularities had already happened.
...it all makes sense now.
The bleak grey world turns lucid at once!