The risk that it simply ends up being owned by the few who create it, thus leading to a total concentration of humanity's productive power, isn't immaterial; in fact, it looks like the default outcome.
Yes, this is why I’ve been frustrated (and honestly aghast, given timelines) at the popular focus on AI doom and paperclips rather than the fact that this is the default (if not nigh-unavoidable) outcome of AGI/ASI, even if “alignment” gets solved. Comparisons with industrialization and other technological developments are specious because none of them had the potential to do anything close to this.
I think there’s a case to be made for AGI/ASI development and deployment as a “hostis humani generis” act; and others have made the case as well. I am confused (and let’s be honest, increasingly aghast) as to why AI doomers rarely try to press this angle in their debates/public-facing writings.
To me it feels like AI doomers have been asleep on sentry duty, and I’m not exactly sure why. My best guesses look somewhat like “some level of agreement with the possible benefits of AGI/ASI” or “a belief that AGI/ASI is overwhelmingly inevitable and so it’s better not to show any sign of adversariality towards those developing it, so as to best influence them to mind safety”, but this is quite speculative on my part. I think LW/EA stuff inculcates in many a grievous and pervasive fear of upsetting AGI accelerationists/researchers/labs (fear of retaliatory paperclipping? fear of losing mostly illusory leverage and influence? getting memed into the idea that AGI/ASI is inevitable and unstoppable?).
It seems to me like people whose primary tool of action/thinking/orienting is some sort of scientific/truth-finding rational system will inevitably lose against groups of doggedly motivated, strategically and technically competent, cunning unilateralists who gleefully use deceit/misdirection to prevent normies from catching on to what they're doing, and who are motivated by fundamentalist pseudo-religious impulses ("the prospect of immortality, of solving philosophy").
I feel like this foundational dissonance makes AI doomers come across as confused, fawny wordcels or hectoring cultists whenever they face AGI accelerationists / AI-risk deniers (who, in contrast, tend to come across as open/frank/honest/aligned/assertive doers). This vibe is really not conducive to convincing people of the risks/consequences of AGI/ASI.
I do have hopes, but they feel kinda gated on "AI doomers" being many orders of magnitude more honest, unflinchingly open, and unflatteringly frank about the ideologies that motivate AGI/ASI researchers and the intended/likely consequences of their success—even if "alignment/control" gets solved—of total technological unemployment and consequent social/economic human disempowerment, instead of continuing to treat AGI/ASI as some sort of neutral (if not outright necessary) but highly risky technology like rockets or nukes or recombinant DNA technology. Also gated on explicitly countering the contentions that AGI/ASI—even if aligned—is inevitable/necessary/good, or that China is a viable contender in this omnicidal race, or that we need AGI/ASI to fight climate change or asteroids or pandemics or all the other (sorry for being profane) bullshit that gets trotted out to justify AGI/ASI development. And gated on explicitly saying that AGI/ASI accelerationists are transhumanist fundamentalists who are willing to sacrifice the entire human species on the altar of their ideology.
I don't think AGI/ASI is inherently inevitable. But as long as AI doomers shy away from explaining that the AGI/ASI labs are specifically seeking (and will likely soonish succeed) to build systems strong enough to turn the bedrock assumption of human society ("human labor is irreplaceably valuable"), unbroken from hunter-gatherer bands to July 2023, into fine sand, I think there's little hope of stopping AGI/ASI development.