Great analysis! I’m curious about the disagreement with needing a pivotal act. Is this disagreement more epistemic or normative? That is to say, do you think they assign a very low probability to a pivotal act being needed to prevent misaligned AGI? Or do they have concerns about the potential consequences of this mentality? (people competing with each other to create powerful AGI, accidentally creating a misaligned AGI as a result, public opinion, etc.)
I would say the primary disagreement is epistemic: I think most of us would assign a low probability to the necessity of a pivotal act, defined as “a discrete action by a small group of people that flips the gameboard.” We also disagree on a normative level with the pivotal act framing, e.g. for reasons described in Critch’s post on this topic.
If I had to say why I’d disagree with a pivotal act framing, PR would be the obvious risk, but another is that this kind of action is easy to politicize, and by default there are no guardrails.
It will be extremely tempting to change the world to fit your ideology and politics, and that gets very dangerous very quickly.
Unfortunately, this might really happen: we could end up in the unhappy scenario where pivotal acts are required. If it ever comes to that, they will need to be narrow pivotal acts, and the political system shouldn’t be changed except where it bears directly on AI safety. Narrowness is a virtue, and one of the most important ones during a pivotal act.