On Pivotal Acts

(edit: status: not a crux, instead downstream of different beliefs about what the first safe ASI will look like in predicted futures where it exists. If I instead believed ‘task-aligned superintelligent agents’ were the most feasible form of pivotally useful AI, I would then support their use for pivotal acts.)

I was rereading some of the old literature on alignment research sharing policies after Tamsin Leake’s recent post and came across some discussion of pivotal acts as well.

Hiring people for your pivotal act project is going to be tricky. [...] People on your team will have a low trust and/or adversarial stance towards neighboring institutions and collaborators, and will have a hard time forming good-faith collaboration. This will alienate other institutions and make them not want to work with you or be supportive of you.
This is in a context where the ‘pivotal act’ example is using a safe ASI to shut down all AI labs.[1]
My thought is that I don’t see why a pivotal act needs to be that kind of action: I don’t see why shutting down AI labs or using nanotech to disassemble GPUs on Earth would be necessary. These may be among the ‘most direct’ or ‘simplest to imagine’ possible actions, but in the case of superintelligence, simplicity is not a constraint.
We can instead select for the ‘kindest’ or ‘least adversarial’ actions, or rather the functional-decision-theoretically optimal ones: actions that save the future while minimizing the adversariality this creates in the past (our present).
This can be broadly framed as ‘using ASI for good’, which is what everyone wants, even those being uncareful about its development.
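A minimal sketch of that selection criterion, in my own notation (Adv, A, and ε are illustrative placeholders, not anything from the original discussion): rather than picking the simplest future-securing action, pick the one that imposes the least adversariality on present actors:

$$a^{*} \;=\; \arg\min_{a \in A} \ \mathrm{Adv}(a), \qquad A \;=\; \{\, a : \Pr(\text{future is safe} \mid a) \ge 1 - \varepsilon \,\}$$

Under this framing, both ‘melt all GPUs’ and ‘make the world robust to unsafe projects without shutting them down’ can lie in A for a superintelligence; the claim is that the latter scores far lower on Adv.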
Capabilities orgs would be able to keep working on fun capabilities projects during the days in which the world is saved, because a group following this policy would choose to use ASI to make the world robust to the failure modes of capabilities projects, rather than shutting those projects down. Superintelligence is capable of that, and so much more.
side note: It’s orthogonal to the point of this post, but this example also makes me think: if I were working on a safe ASI project, I wouldn’t mind if another group who had discreetly built safe ASI used it to shut my project down, since my goal is ‘ensure the future lightcone is used in a valuable, tragedy-averse way’ and not ‘gain personal power’ or ‘have a fun time working on AI’ or something. In my morality, it would be naive to be opposed to that shutdown. But to the extent humanity is naive, we can easily do something else in that future to create better present dynamics (as the main text argues).
If there is a group for whom ‘using ASI to make the world robust to risks and free of harm, in a way where its actions don’t infringe on ongoing non-violent activities’ is still problematic, then this post doesn’t apply to them: their issue was never with the character of the pivotal act, but possibly with something like ‘having my personal cosmic significance as a capabilities researcher stripped away by the success of an external alignment project’.
Another disclaimer: This post is about a world in which safely usable superintelligence has been created, but I’m not confident that anyone (myself included) currently has a safe and ready method to create it. This post shouldn’t be read as an endorsement of possible current attempts to do so. I would of course prefer it if this civilization were one which could coordinate such that no groups were presently working on ASI, precluding this discourse.
These may be among the ‘most direct’ or ‘simplest to imagine’ possible actions, but in the case of superintelligence, simplicity is not a constraint.
I think it is considered a constraint by some because they think that it would be easier/safer to use a superintelligent AI to do simpler actions, while alignment is not yet fully solved. In other words, if alignment was fully solved, then you could use it to do complicated things like what you suggest, but there could be an intermediate stage of alignment progress where you could safely use SI to do something simple like “melt GPUs” but not to achieve more complex goals.
it is considered a constraint by some because they think that it would be easier/safer to use a superintelligent AI to do simpler actions, while alignment is not yet fully solved
Agreed that some think this, and agreed that formally specifying a simple action policy is easier than a more complex one.[1]
I have a different model of what the earliest safe ASI will look like, in most futures where one exists. Rather than a ‘task-aligned’ agent, I expect it to be a non-agentic system which can be used to e.g. come up with pivotal actions for the human group to take / information to act on.[2]
although formal ‘task-aligned agency’ seems potentially more complex than the attempted ‘full’ outer alignment solution that I’m aware of (QACI): specifying what a {GPU, AI lab, shutdown of an AI lab} is seems more complex than QACI itself.
I think these systems are more attainable; see this post to possibly infer more (it’s proven very difficult for me to write in a way that I expect will be moving to people whose model is focused on ‘formal inner + formal outer alignment’, but I think evhub has done so well).
Reflecting on this more, I wrote in a Discord server (then edited to post here):
I wasn’t aware the concept of pivotal acts was entangled with the frame of formal inner+outer alignment as the only (or only feasible?) way to cause safe ASI.
I suspect that by default, I and someone operating in that frame might mutually believe each other’s agendas to be probably-doomed. This could make discussion more valuable (as in that case, at least one of us should make a large update).
For anyone interested in trying that discussion, I’d be curious what you think of the post linked above. As a comment on it says:
I found myself coming back to this now, years later, and feeling like it is massively underrated. Idk, it seems like the concept of training stories is great and much better than e.g. “we have to solve inner alignment and also outer alignment” or “we just have to make sure it isn’t scheming.”
In my view, solving formal inner alignment, i.e. devising a general method to create ASI with any specified output-selection policy, is hard enough that I don’t expect it to be done.[1] This is why I’ve been focusing on other approaches which I believe are more likely to succeed.
Though I encourage anyone who understands the problem and thinks they can solve it to try to prove me wrong! I can sure see some directions and I think a very creative human could solve it in principle. But I also think a very creative human might find a different class of solution that can be achieved sooner. (Like I’ve been trying to do :)
Imagining a pivotal act of generating very convincing arguments for things like voting and parliamentary systems that would turn government into 1) a working democracy 2) that’s capable of solving the problem. Citizens and Congress read the arguments, get fired up, problem is solved through proper channels.

See minimality principle:

the least dangerous plan is not the plan that seems to contain the fewest material actions that seem risky in a conventional sense, but rather the plan that requires the least dangerous cognition from the AGI executing it
Okay. Why do you think Eliezer proposed that, then?
(see reply to Wei Dai)