Oh, I do think Superintelligence was extremely important.
Re: writing books that manage to stay respectable and to “speak accurately and concretely about the future of AI without sounding like a sci-fi weirdo”(?):
I think Superintelligence has an academic tone (and, e.g., hedges a lot), but its actual contents are almost maximally sci-fi weirdo—the vast majority of public AI risk discussion today, especially when it comes to intro resources, is much less willing to blithely discuss crazy sci-fi scenarios.
Overall, I think that Superintelligence’s success is some evidence against the Elon Musk strategy, but it’s weaker evidence inasmuch as it was still a super weird book that mostly ignores the Overton window and just talks about arbitrarily crazy stuff, rather than being as trying-to-be-normal as most other intro resources.
(E.g., “Most Important Century” is a lot weirder than most intro resources, but is still trying a lot harder than Superintelligence to sound normal. I’d say that Stuart Russell’s stuff and “Risks from Learned Optimization” are trying harder still to sound normal, and “Concrete Problems” harder yet.)
(Re my comparison of “Most Important Century” and Superintelligence: I’d say this is true on net, but not true in all respects. “Most Important Century” is trying to be a much more informal, non-academic document than Superintelligence, which I think allows it to be candid and explicit in some ways Superintelligence isn’t.)