You’ve said before: don’t lobby to ban AI, because that might make you an enemy of the deep state. Here you say: focus on speeding up AI safety rather than slowing down AI capabilities, for the same reason.
“read more about how rapid AI capabilities research is considered a US national security priority”
That is a 2021 document, dating from the era before ChatGPT. In it, AI is treated as a mighty new factor in human affairs, but not as a potentially sovereign factor independent of human agency.
You know, I’m a transhumanist, and my own push is towards more work on what OpenAI has dubbed “superalignment”, because a successful global ban seems difficult, and I do think there is some chance of using rising AI capabilities to solve the unknown number of problems that must be solved before we know how to achieve superalignment. Also, I don’t know what the odds are that superintelligent AI will cause human extinction. There’s clearly a nonzero risk, and it may be a very high risk, but maybe emergent behaviors and something less than superalignment do sometimes add up to a human-friendly outcome.
Nonetheless, I actually think it would be a somewhat healthier situation if there were a movement and a lobby group overtly devoted to banning superhuman AI—to preventing it from ever coming into being. That would be something distinct from MIRI and Eliezer, who in the end are not Luddite human preservationists; their utopia is a transhumanist technocracy that has (through the “pivotal act”) made the world safe for superalignment research.
Looking at existing political forces, I can imagine a Green Party supporting this, and (in America) maybe populist elements of the two main parties. I don’t think it’s politically impossible; maybe RFK Jr. would support it, and he’s been endorsed by Jack Dorsey.
It’s funny, but Perry Metzger, an extropian who is arguably Eliezer’s main Twitter antagonist at the moment, has been tweeting about how EA billionaires have installed anti-AI policy in the EU and might do the same in the US, and how it’s time for pro-AI groups to fight back; and judging by his public statements, Marc Andreessen (for example) might yet end up backing covertly accelerationist lobby groups. The PR problem for accelerationism is that there isn’t much of a political base for making the human race obsolete. Publicly, they’ll just talk about using AI to cure everything and raise quality of life, and they’ll save their cyborgist thoughts for their fellow alts on e/acc Twitter.
Another element of elite opinion not to be dismissed is the outright skeptics. A particular form of this has emerged in the wake of the great AI panic of 2023, according to which the tech CEOs are backing all this talk of dangerous AI gods in order to distract from social justice issues, or to keep the VC money coming in, or to make themselves a regulated monopoly. (If /r/sneerclub were still active, you could read about this there, but they actually suspended operations in protest at Reddit’s new regime and retreated to Mastodon.) This line of thought seems most popular among progressives, who are intellectually equipped for cynical deflationary readings of everything, but not so much for the possibility of a genuine sci-fi scenario actually coming true; and it might get an increasing platform in mass-media reporting on AI—I’m thinking of recent articles by Nitasha Tiku and Kevin Roose.
My advice for “AI deniers” is that, if they truly want to be relevant, they need to support an outright ban—not just snipe at the tech CEOs and cast aspersions on the doomer activists. But then, I guess they just don’t think that superhuman AI has any actual chance of emerging in the near future.
It’s weird to think I’m getting a reputation for that stance! But it makes sense, since I keep making that point. I’m not actually particularly attached to it; I just think that awareness of key factors is undersupplied in the rationalist community, and that more people could easily read up on how rapid AI capabilities research is considered a US national security priority.