I do buy the explanations I listed in the OP (and other, complementary explanations, like the ones in Inadequate Equilibria), and I think they’re sufficient to ~fully make sense of what’s going on. So I don’t feel confused about the situation anymore. By “shocking” I meant something more like “calls for an explanation”, not “calls for an explanation, and I don’t have an explanation that feels adequate”. (With added overtones of “horrifying”.)
Yeah, OK, I think that helps clarify things for me.
As someone who was working at MIRI in 2014 and watched events unfolding, I think the Hawking article had a negligible impact and the Musk stuff had a huge impact. Eliezer might be wrong about why Hawking had so little impact, but I do think it didn’t do much.
Maybe we’re misunderstanding each other here. I don’t really doubt what you’re saying there; i.e., I’m fully willing to believe that the Hawking thing had negligible impact and the Musk tweet had a lot. I’m more pointing to why Musk had a lot rather than why Hawking had little: since Musk was reacting to Superintelligence, one might ask whether he could have had a similar impact without Superintelligence. And so maybe the anecdote could be used as evidence that Superintelligence was really the thing that helped ‘break the silence’. However, Superintelligence feels far less like “being blunt” and “throwing a brick”, and—at least from the outside—looks much more like the “scripts, customs, and established protocols” of “normal science” (i.e., an Oxford philosophy professor writes a book with somewhat tricky ideas in it, published by OUP, reviewed by the NYT, etc.), and it is clearly an attempt to make unusual ideas sound “sober and serious”. So I’m kind of saying that maybe the story doesn’t necessarily argue against the possibility of doing further work like that—i.e., writing books that manage to stay respectable and manage to “speak accurately and concretely about the future of AI without sounding like a sci-fi weirdo”(?)
Oh, I do think Superintelligence was extremely important.
writing books that manage to stay respectable and manage to “speak accurately and concretely about the future of AI without sounding like a sci-fi weirdo”(?)
I think Superintelligence has an academic tone (and, e.g., hedges a lot), but its actual contents are almost maximally sci-fi weirdo—the vast majority of public AI risk discussion today, especially when it comes to intro resources, is much less willing to blithely discuss crazy sci-fi scenarios.
Overall, I think that Superintelligence’s success is some evidence against the Elon Musk strategy, but it’s weaker evidence inasmuch as it was still a super weird book that mostly ignores the Overton window and just talks about arbitrarily crazy stuff, rather than being as trying-to-be-normal as most other intro resources.
(E.g., “Most Important Century” is a lot weirder than most intro resources, but is still trying a lot harder than Superintelligence to sound normal. I’d say that Stuart Russell’s stuff and “Risks from Learned Optimization” are mostly trying a lot harder to sound normal than that, and “Concrete Problems” is trying harder still.)
(Re my comparison of “Most Important Century” and Superintelligence: I’d say this is true on net, but not true in all respects. “Most Important Century” is trying to be a much more informal, non-academic document than Superintelligence, which I think allows it to be candid and explicit in some ways Superintelligence isn’t.)
Thanks for the nice reply.