For all of the hubbub about trying to elaborate better arguments for AI x-risk, it seems like a lot of people are describing the arguments in Superintelligence as relying on FOOM, agenty AI systems, etc. without actually justifying that description via references to the text.
It’s been a while since I read Superintelligence, but my memory is that it anticipated a lot of counter-arguments quite well. I’m not convinced that it requires such strong premises to make a compelling case. So maybe someone interested in this project of clarifying the arguments should start by establishing that the arguments in Superintelligence really have the weaknesses they are claimed to have?