Very good question. It is awful that we find ourselves in a situation in which there are only tiny shreds of hope for our species’s surviving “the AI program” (the community pushing AI capabilities as far as they will go).
One tiny shred of hope is a “revolution in political affairs” that allows a small group that understands how dangerous the AI program is to take over the world. One way such a revolution might come about is the creation of technology good enough to “measure loyalty” by scanning the brain somehow: the first group to use such brain-scanning tech to take over the world (by giving police, military and political power only to demonstrably loyal people) could then (hopefully) prevent other groups from using it, yielding a stable regime, and hopefully would use its stable hegemony to shut down the AI labs, make it illegal to teach or publish about AI, and halt “progress” in GPU tech.
The greater the number of “centers of autonomous power” on Earth, the greater the AI extinction risk, for two reasons: the survival of our species basically requires every center of autonomous power to choose to refrain from continuing the AI program, and when there is only one dominant center of power, that center’s motive to use AI to gain an advantage over rival centers is greatly diminished relative to the current situation. Parenthetically, this is why I consider the United States the greatest source of AI extinction risk: Russia and China are arranged so as to make it easy, or at least possible, for a few people in the central government to shut down activities going on in the country, whereas the United States is arranged to keep power as dispersed as practical, à la “the government that governs least governs best.”
Another tiny shred of hope is our making contact with an alien civilization and asking it to help us out of the situation. This need not entail the arrival of an alien ship or probe, because a message from the aliens can contain a computer program, and that program might be (and AFAICT probably would be) an AGI which, after we receive it, we can run or emulate on our computers. Yes, doing that definitely gives an alien civilization we know almost nothing about complete power over us, but the situation around human-created AI is so dire that I’m tepidly in favor of putting our hope in the possibility that the civilization that sent the message will turn out to be nice, or, if it is not intrinsically nice, at least has some extrinsic motive to treat us well. (One possible extrinsic motive would be protecting its reputation among civilizations that might receive the message but aren’t as helpless as we are, and consequently might be worthwhile to trade with. I can expand on this if there is interest.)
IIUC, the Vera C. Rubin Observatory, coming online in the next few years, will increase our civilization’s capacity to search for messages in the form of laser beams by at least three orders of magnitude.
I can imagine other tiny shreds of hope, but they share a property: if the people who don’t understand the severity of AI extinction risk (i.e., most people with power or influence) knew about them, they’d probably try to block them, and blocking them would probably be fairly easy for them to do, so it doesn’t make any sense to discuss them on a public forum.
Unlike most of the technical alignment/safety research going on these days, pursuing these tiny shreds of hope at least doesn’t make the problem worse by providing unintended assistance to the AI project.
Is this an accurate summary of your suggestions?
Realistic actions an AI Safety researcher can take to save the world:
✅ Pray for a global revolution
✅ Pray for an alien invasion
❌ Talk to your representative
Good point: I was planning to amend my comment to say that I also support efforts to stop or hinder the AI project through ordinary political processes and that the “revolution in political affairs” is interesting to think about mainly because it might become apparent (years from now) that working within the political system has failed.
I also regret the choice of phrase, “tiny shred of hope”. I have a regrettable (mostly unconscious) motivation to direct people’s attention to the harsher aspects of the human condition, and I think I let some of that motivation creep into my previous comment. Nevertheless I really do think the global situation is quite alarming because of the AI project.
I’m very supportive of efforts to postpone the day when the AI project kills us all (or deprives us of the ability to influence our future), because that buys more time for us to be saved by some means that seems very unlikely to us now or that we are unable even to imagine now. I’m very skeptical of the policy of looking forward to (or being neutral about) the arrival of human-level AI on the grounds that we can then start more effective efforts at aligning it; I think that approach has much less hope than a lot of people here believe, which is what made me want to describe some alternative veins of hope.