I am glad to hear additional people speaking up about their ideas for alignment, but I do think this misses the core concern. The core concern is: what happens when the system takes actions whose effects are sufficiently complex and subtle that humans aren't able to adequately supervise them and judge their impact? This could be because a brilliant scheming model (agent, mesa optimizer, simulacra of Machiavelli, etc.) is deliberately sneaking past our watch, or because the model is entangled in larger Moloch-driven systems that we can't adequately understand. In either case, I expect the agent-wrapper model will either also be unable to understand, and thus unable to successfully steer towards safety, or it will need to be upgraded to superhuman intelligence, at which point its own emergent abilities become the new concern and we have just shifted the onus of alignment onto the agent-wrapper.
That being said, I don't think the idea is valueless. It could help delay problems in the short term, enabling us to operate at slightly higher capability levels without causing catastrophe. Delay is valuable!
I'm a bit torn about this. On one hand, yes, the situations an AI can end up in and the choices it'll have to make might be too complex for humans to understand. On the other hand, we could say that all we want is one incremental step in intelligence (i.e., making something smarter and faster than the best human researchers) without losing alignment. Maybe that's possible while keeping the wrapper tractable. And then the AI itself can take care of next steps, if it cares about alignment as much as we do.
"AI itself can take care of next steps, if it cares about alignment as much as we do"
That's where I put most of my P(doom): the first AGIs are loosely aligned but care about alignment only about as much as we do, and Moloch holds enough sway over them to push for immediate development of more capable AGIs, using their current capabilities to do so faster and more recklessly than humans could, well before serious alignment security norms are in place.
There will be fewer first AGIs than there are human researchers, and they will be smarter than human researchers. So if they care about alignment as much as we do, that seems like good news—they’ll have an easier time coordinating and an easier time solving the problem. Or am I missing something?
Humans are exactly as smart as they have to be to build a technological civilization. The first AGIs don't need to be smarter than that to build dangerous successor AGIs, and they are already faster and more knowledgeable, so they might even get away with being less intelligent than the smartest human researchers. Unless, of course, agency lags behind intelligence, the way it lags behind encyclopedic knowledge, and there is an intelligence overhang where the first autonomously agentic systems happen to be significantly more intelligent than humans. But it's not obvious that this is how it goes.
The number of diverse AGI instances might be easy to scale, as with the system message of GPT-4, where the model itself is fine-tuned not into adherence to a particular mask but into being a mask generator that presents as any mask that is requested. And it's not just the diverse AGIs that need to coordinate on alignment security, but also the human users who prompt steerable AGIs. Coordinating on that is a greater feat than building new AGIs, then just as it is now. At near-human level I don't see how that state of affairs changes, and you don't need to get far from human level to build more dangerous AGIs.
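To make the mask-generator point concrete, here is a minimal sketch, assuming the OpenAI Python client and with hypothetical persona strings, of how a single set of fine-tuned weights is steered into arbitrary masks purely by the system message:

```python
# Minimal sketch: one model, many masks, selected entirely by the system message.
# Assumes the OpenAI Python client; the persona strings are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_as_mask(persona: str, question: str) -> str:
    """Query the same underlying model while it presents as the requested mask."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": persona},  # the requested mask
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


# The same weights answer as whichever mask was asked for.
print(ask_as_mask("You are a cautious safety researcher.", "Should we scale this model?"))
print(ask_as_mask("You are a reckless capabilities enthusiast.", "Should we scale this model?"))
```

Scaling the number of distinct "AGI instances" in this sense is just a matter of writing new system messages, which is why coordination among the masks and the humans prompting them looks like the harder problem.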
It seems to me that agency does lag behind extrapolation capability. I can think of two reasons for that. First, extrapolation gets more investment. Second, agency might require a lot of training in the real world, which is slow, while extrapolation can be trained on datasets from the internet. If someone invents a way to train agency on datasets from the internet, or something like AlphaZero’s self-play, in a way that carries over to the real world, I’ll be pretty scared, but so far it hasn’t happened afaik.
If the above is right, then maybe the first agent AIs will be few in number, because they’ll have an incentive to stop other agent AIs from coming into existence and will be smart enough to do so, e.g. by taking over the internet or manipulating people.
Extrapolation capability is wielded by shoggoths and makes masks possible, but it's not wielded by the masks themselves. Just as humans can't predict next tokens given a prompt (anywhere near as well as LLMs can), neither can LLM characters: they can't disregard the rest of the context outside the target prompt to access their "inner shoggoth", let alone make use of that capability level for something more useful. So agency in masks doesn't automatically take advantage of the extrapolation capability in shoggoths; merely becoming agentic doesn't turn masks superintelligent. This creates the danger of only slightly superhuman AGIs that immediately muck up alignment security once LLM masks do get to autonomous agency (which I'm almost certain they will eventually, unless something else happens first).
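As a toy illustration of the two access paths, here is a rough sketch using GPT-2 and Hugging Face transformers as a stand-in (an assumption on my part; the claim above is about GPT-4-class models): the next-token distribution read directly off the logits is the shoggoth's prediction, while asking the model in character to guess the next word goes through the mask, which has no privileged access to that distribution.

```python
# Toy sketch: the "shoggoth" channel (raw logits) vs. the "mask" channel (asking in character).
# Uses GPT-2 as a small local stand-in; the character framing is only illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "The capital of France is"

# Shoggoth view: read the next-token distribution directly off the final logits.
ids = tokenizer(prefix, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
print([(tokenizer.decode(int(i)), round(float(p), 3)) for i, p in zip(top.indices, top.values)])

# Mask view: pose the same task as a question to a "character" and take its answer.
character_prompt = f'Q: What single word most likely comes after "{prefix}"?\nA:'
inputs = tokenizer(character_prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5, do_sample=False,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```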
It's only shoggoths themselves waking up (learning to use situationally aware deliberation within the residual stream rather than the context window) that makes an immediate qualitative capability discontinuity more likely (for LLMs). Looking at GPT-4's capability to solve complicated tasks without thinking out loud in tokens, I suspect that merely a slightly different SSL schedule with a sufficiently giant LLM might trigger that. Hence I've recently been operating under a one-year lower bound on AGI timelines (lower 25% quantile), until the literature implies a negative result for that experiment (with GPT-4-level scale being necessary, this might take a while). This outcome both reduces the chances of direct alignment and increases the chances that alignment security gets sorted.
Yeah, and then we also want system A to be able to make a system B that is one step smarter than itself and remains aligned with system A and with us. This needs to continue safely and successfully until we have a system powerful enough to prevent the rise of unaligned recursively self-improving (RSI) AGI. That seems like a high level of capability to me, and I'm not sure getting there in small steps rather than big ones buys us much.
I think it does buy something. The AI one step after us might be roughly as aligned as us (or a bit less), but noticeably better at figuring out what the heck alignment is and how to ensure it on the next step.
I wonder if the following would help.
As the AI ecosystem self-improves, it will eventually start discovering new physics, more and more rapidly, and this will leave the AI ecosystem facing existential safety issues of its own (if the new physics is radical enough, it's not difficult to imagine scenarios in which everything gets destroyed, including all AIs).
So I wonder if early awareness that there are existential safety issues relevant to the well-being of AIs themselves might improve the situation...