Superintelligence reading group
In just over two weeks I will be running an online reading group on Nick Bostrom’s Superintelligence, on behalf of MIRI. It will be here on LessWrong. This is an advance warning, so you can get a copy and get ready for some stimulating discussion. MIRI’s post, appended below, gives the details.
Added: At the bottom of this post is a list of the discussion posts so far.
Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.
The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)
Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.
We welcome both newcomers to the topic and veterans of it. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.
We will follow this preliminary reading guide, produced by MIRI, reading one section per week.
If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.
If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Subsequent meetings will start at 6pm every Monday, so if you’d like to coordinate with others for quick-fire discussion, put that in your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!
Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.
Posts in this sequence
Week 1: Past developments and present capabilities
Week 2: Forecasting AI
Week 3: AI and uploads
Week 4: Biological cognition, BCIs, organizations
Week 5: Forms of superintelligence
Week 6: Intelligence explosion kinetics
Week 7: Decisive strategic advantage
Week 8: Cognitive superpowers
Week 9: The orthogonality of intelligence and goals
Week 10: Instrumentally convergent goals
Week 11: The treacherous turn
Week 12: Malignant failure modes
Week 13: Capability control methods
Week 14: Motivation selection methods
Week 15: Oracles, genies and sovereigns
Week 16: Tool AIs
Week 17: Multipolar scenarios
Week 18: Life in an algorithmic economy
Week 19: Post-transition formation of a singleton
Week 20: The value-loading problem
Week 21: Value learning
Week 22: Emulation modulation and institution design
Week 23: Coherent extrapolated volition
Week 24: Morality models and “do what I mean”
Week 25: Components list for acquiring values
Week 26: Science and technology strategy
Week 27: Pathways and enablers
Week 28: Collaboration
Week 29: Crunch time
WBEs are a worry. They could be used to carry dangerous information that a normal person [suppressed laughter] might recoil from. Worse, if this is carried off, it may also acquire sentient consciousness-awareness just like us: Frankenstein 2.0. Anyway, we already have 7 billion [6 too many] humans. Why would we want to do this? Space exploration by remote control, to get the human feel of alien environments. Again, my only worry is that this construct may become alive and have its own ideas that run counter to the very reason it was crafted. Or it may outsmart its creators. And if controlled by whatever means (insertion of compliant resonant mind-states), it could rebel and become a terrorist. We are mad enough as it is. Personally, as stated initially, this is not the best solution to AI.