I have read your comments on the EA forum and the points do resonate with me.
As a layman, I have a personal distrust of what I’d call the anti-human ideologies driving the actors you refer to, and I suspect a majority of people share that distrust. It is hard to feel much joy at the prospect of going extinct and being replaced by synthetic beings, probably in a way most would characterize as dumb (Clippy being the extreme case).
I also believe that fundamentally changing human subjective experience (through radical bioengineering or, to an extent, uploading) in order to erase the capacity to suffer in general (not just in medical cases like depression), as I have seen proposed in futurist circles, is akin to death. It could even be a somewhat literal death, in which my conscious experience actually stops if such radical changes occurred, though I am completely uneducated and unqualified on how consciousness works.
I think that a hypothetical me, even one with my memories, who is physically unable to experience any negative emotions would be philosophically dead. It would be unable to learn or reflect, and its subjective experience would be so radically different from mine (and from any future biological me, should I grow older naturally) that I do not think memories alone would be enough to preserve my identity. To my awareness, the majority of people think similarly: there is value ascribed to our human nature, limitations included, and this has been reinforced by our media and culture. Whether this attachment is purely a product of coping, I do not know. What I do know is that it is the current reality for every functional human being and has been for thousands of years. I believe people would prefer sticking with it rather than relinquishing it for vague promises of ascended consciousness. This is somewhat supported by my subjective observation that for many people who want a posthuman existence and what it entails, the end goal often seems to come back to creating simulations they themselves can live in normally.
I’m curious, though, whether you have any hopes regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming “mainstream”. Do you expect to see change, and to see their views challenged? My question is loaded, but it seems you are already invested in its answer.
I think there’s a case to be made for AGI/ASI development and deployment as a “hostis humani generis” act; and others have made the case as well. I am confused (and let’s be honest, increasingly aghast) as to why AI doomers rarely try to press this angle in their debates/public-facing writings.
To me it feels like AI doomers have been asleep on sentry duty, and I’m not exactly sure why. My best guesses look somewhat like “some level of agreement with the possible benefits of AGI/ASI” or “a belief that AGI/ASI is overwhelmingly inevitable and so it’s better not to show any sign of adversariality towards those developing it, so as to best influence them to mind safety”, but this is quite speculative on my part. I think LW/EA stuff inculcates in many a grievous and pervasive fear of upsetting AGI accelerationists/researchers/labs (fear of retaliatory paperclipping? fear of losing mostly illusory leverage and influence? getting memed into the idea that AGI/ASI is inevitable and unstoppable?).
It seems to me like people whose primary tool of action/thinking/orienting is some sort of scientific/truth-finding rational system will inevitably lose against groups of doggedly motivated, strategically and technically competent, cunning unilateralists who gleefully use deceit/misdirection to keep normies from catching on to what they’re doing, and who are motivated by fundamentalist pseudo-religious impulses (“the prospect of immortality, of solving philosophy”).
I feel like this foundational dissonance makes AI doomers come across as confused, fawning wordcels or hectoring cultists whenever they face AGI accelerationists / AI risk deniers (who, in contrast, tend to come across as open/frank/honest/aligned/assertive people of action). This vibe is really not conducive to convincing people of the risks/consequences of AGI/ASI.
I do have hopes, but they feel kinda gated on “AI doomers” being many orders of magnitude more honest, unflinchingly open, and unflatteringly frank about the ideologies that motivate AGI/ASI researchers and about the intended/likely consequences of their success, even if “alignment/control” gets solved: total technological unemployment and the consequent social/economic disempowerment of humans. Gated, that is, on no longer treating AGI/ASI as some sort of neutral (if not outright necessary) but highly risky technology like rockets or nukes or recombinant DNA. Also gated on explicitly countering the contentions that AGI/ASI, even if aligned, is inevitable/necessary/good, that China is a viable contender in this omnicidal race, or that we need AGI/ASI to fight climate change or asteroids or pandemics, or any of the other (sorry for being profane) bullshit that gets trotted out to justify AGI/ASI development. And gated on explicitly saying that AGI/ASI accelerationists are transhumanist fundamentalists who are willing to sacrifice the entire human species on the altar of their ideology.
I don’t think AGI/ASI is inherently inevitable, but as long as AI doomers shy away from explaining that the AGI/ASI labs are specifically seeking (and will likely soon succeed) to build systems strong enough to grind into fine sand the bedrock assumption of human society (“human labor is irreplaceably valuable”), unbroken from hunter-gatherer bands to July 2023, I think there’s little hope of stopping AGI/ASI development.