Thank you. I’ve always been curious about this point of view, because a lot of people share a view similar to yours.
I do think that alignment success is the most likely avenue, but my argument doesn’t require this assumption.
Your view isn’t just that “alternative paths are more likely to succeed than alignment,” but that “alternative paths are so much more likely to succeed than alignment that the marginal capabilities increase caused by alignment research (or at least by Anthropic) makes it unworthwhile.”
To believe that alignment is that hopeless, one needs stronger evidence than “we tried it for 22 years, and the prior probability of the threshold lying between 22 and 23 years is low.” That argument can just as easily be turned around to argue that more alignment research is equally unlikely to cause harm (and that Anthropic is unlikely to cause harm). I also think multiplying funding can multiply progress (e.g. 4x the funding buying roughly the same progress as 2x the duration).
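To spell out the arithmetic behind “4x funding ≈ 2x duration” (the square-root scaling below is my own assumption, chosen only to make the heuristic concrete): if progress $P$ grows linearly with duration $T$ and with the square root of funding $F$, then

$$P(T, F) \propto T\sqrt{F} \;\Rightarrow\; P(T, 4F) \propto T\sqrt{4F} = 2T\sqrt{F} \propto P(2T, F),$$

i.e. quadrupling the funding buys the same progress as doubling the duration.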
If you really want a singleton controlling the whole world (which I don’t agree with), your most plausible path would be for most people to come to see AI risk as a “desperate” problem, and for governments, out of desperation, to agree on a worldwide military which swears to preserve civilian power structures within each country.[1]
Otherwise, the fact that no country has taken over the world in the last few centuries strongly suggests that none will in the next few years, and this feels more solid than your argument that “no one figured out alignment in the last 22 years, so no one will in the next few years.”
Out of curiosity, would you agree with this being the most plausible path, even if you disagree with the rest of my argument?
The most plausible story I can quickly imagine right now: the US and China fight a war, the US wins, and it uses some of the political capital from that win to slow down the AI project, perhaps through control over the world’s leading-edge semiconductor fabs plus pressure on Beijing to ban teaching and publishing about deep learning (to go with a ban on the same things in the West). I believe that basically all the leading-edge fabs in existence, or that will be built in the next 10 years, are either in countries the US has a lot of influence over or in China. Another story: the technology for “measuring loyalty in humans” gets really good fast, giving the first group to adopt it so great an advantage that within a few years the group gains control over the territories holding all the world’s leading-edge fabs and most of the trained AI researchers.
I want to remind people of the context of this conversation: I’m trying to persuade people to refrain from actions that in expectation make human extinction arrive a little quicker, because most of our (sadly slim) hope for survival IMHO flows from possibilities other than our solving (super-)alignment in time.
I would go one step further and argue that you don’t need to take over territory to shut down the semiconductor supply chain: if enough large countries believed AI risk was a desperate problem, they could negotiate a coordinated shutdown of the supply chain.
Shutting down the supply chain (and thus all leading-edge semiconductor fabs) could slow the AI project for a long time, but probably not “150 years,” since uncooperative countries would eventually build their own supply chains and fabs.
The ruling coalition can disincentivize the development of a semiconductor supply chain outside the territories it controls by selling, worldwide, semiconductors that use “verified boot” technology to make it really hard to run AI workloads on them, similar to how it is really hard even for the best jailbreakers to jailbreak a modern iPhone.
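Here is a toy sketch of the enforcement idea, assuming Ed25519 signatures and the third-party Python cryptography package (my illustration of the mechanism, not a real firmware design): the chip executes only workload images signed by a key it trusts, and refuses anything else at boot.

```python
# Toy "verified boot" sketch: the chip runs only vendor-signed workload
# images, refusing anything else at boot time. Illustrative only; a real
# design would verify every boot stage in hardware.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In real hardware the trusted public key is burned into ROM/fuses at the
# fab; here we just generate a fresh vendor keypair for the demo.
vendor_key = Ed25519PrivateKey.generate()
chip_trusted_pubkey = vendor_key.public_key()

def vendor_sign(image: bytes) -> bytes:
    """Vendor-side: sign an approved workload image."""
    return vendor_key.sign(image)

def chip_boot(image: bytes, signature: bytes) -> bool:
    """Chip-side: execute the image only if its signature verifies."""
    try:
        chip_trusted_pubkey.verify(signature, image)
    except InvalidSignature:
        print("refusing to boot: image not signed by vendor")
        return False
    print("booting approved workload")
    return True

approved = b"approved non-AI workload v1"
chip_boot(approved, vendor_sign(approved))             # boots
chip_boot(b"unauthorized training run", b"\x00" * 64)  # refused
```

The point of the design is that the enforcement lives in the silicon rather than in export paperwork: even a smuggled chip still refuses unsigned workloads.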
That’s a good idea! Even today it may be useful for export controls (depending on how reliable it can be made).
The most powerful chips might be banned from export entirely, and carry “verified boot” technology inside in case they are smuggled out.
The second most powerful chips might be exported only to trusted countries, and also carry this verified-boot technology in case those countries end up reselling them to less trusted countries, who resell them in turn.
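Put together, the tiering might look something like the following sketch (the tier names and rules are purely illustrative, not actual policy):

```python
# Hypothetical chip export tiers (illustrative only).
EXPORT_TIERS = {
    "frontier":  {"export": "banned",                 "verified_boot": True},
    "advanced":  {"export": "trusted countries only", "verified_boot": True},
    "commodity": {"export": "unrestricted",           "verified_boot": False},
}

def may_export(tier: str, destination_trusted: bool) -> bool:
    """Return whether a chip of the given tier may ship to a destination."""
    rule = EXPORT_TIERS[tier]["export"]
    if rule == "banned":
        return False
    if rule == "trusted countries only":
        return destination_trusted
    return True
```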