The child labour example seems potentially hopeful for AI given that fears of AI taking jobs are very real and salient, even if not everyone groks the existential risks. Possible takeaway: rationalists should be a lot more willing to amplify, encourage and give resources to protectionist campaigns to ban AI from taking jobs, even though we are really worried about x-risk not jobs.
Related point: I notice that the human race has not banned gain-of-function research, even though it seems to carry high and theoretically even existential risks. I am trying to think of something that’s banned purely for posing existential risk, and coming up blank[^1].
Also related: are there religious people who could be persuaded to object to AI in the same way they object to eg human gene editing? Can we persuade religious influencers that building AI is ‘playing God’ in some way? (Our very atheist community are probably the wrong people to reach out to the religious—do we know any intermediaries who could be persuaded?)
Or to summarise: if we can’t get AGI banned/regulated for the right reasons (and we should keep trying), can we support or encourage those who want to ban AGI for the wrong reasons? Or at minimum, not stand in their way? (I don’t like advocating Dark Arts, but my p(doom) is high enough that I would encourage any peaceful effort to ban, restrict, or slow AI development, even if it means working with people I disagree with on practically everything else.)
[^1]: European quasi-bans on genetic modification of just about anything are one possibility. But those seem more like reflexive anti-corporatism, plus religious fear of playing God, plus a pre-existing precautionary attitude applied to food items.
> I am trying to think of something that’s banned purely for having existential risk and coming up blank.
Weren’t CFCs banned for existential reasons (although only after an alternative was found, because apparently it would be better to die than to go without refrigerators…)?
OP discusses CFCs in the main post. But yes, that’s the most hopeful precedent. The problem is that CFCs could be replaced by alternatives that were reasonably profitable for the manufacturers, whereas AI can’t be.
The dynamics are not comparable at all. Even before the invention of sufficiently viable refrigerants, physical chemists had already calculated that viable alternatives were guaranteed to exist, because the possibility space is quite finite. The only roadblock was manufacturing them at scale.
I am religious, and I consider AI some blend of soulless monster and perhaps undead creature, sucking up the mental states of humanity to live off our corpses.

So the argument is definitely there. The “playing God” angle does not actually work, imo: none of us actually think we can be God (we lack the ability to be outside time and space).
The soullessness argument is strong. It is also the basis of our/my opposition to mind copying.