It seems to me that agency does lag behind extrapolation capability. I can think of two reasons for that. First, extrapolation gets more investment. Second, agency might require a lot of training in the real world, which is slow, while extrapolation can be trained on datasets from the internet. If someone invents a way to train agency on datasets from the internet, or something like AlphaZero’s self-play, in a way that carries over to the real world, I’ll be pretty scared, but so far it hasn’t happened afaik.
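For concreteness, here is a toy sketch of what training agency purely through self-play looks like, far below AlphaZero scale and entirely my own illustration (the game, the names, and the learning rule are all made up for the example): a tabular learner gets better at Nim using no data beyond games it plays against a copy of itself.

```python
# Toy self-play training loop (illustrative only; every name here is made up
# for the example). Two copies of the same policy play Nim against each other,
# and the shared value table is updated from the game outcome alone -- no
# external dataset, no real-world interaction.
import random
from collections import defaultdict

TAKE = (1, 2, 3)        # legal moves: remove 1-3 stones
START = 10              # starting pile; whoever takes the last stone wins
EPS, ALPHA = 0.1, 0.1   # exploration rate and learning rate

Q = defaultdict(float)  # Q[(pile_size, move)] -> estimated value for the mover

def choose(pile, greedy=False):
    moves = [m for m in TAKE if m <= pile]
    if not greedy and random.random() < EPS:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(pile, m)])

def self_play_episode():
    """One game of the policy against itself, then a Monte Carlo update."""
    history, pile = [], START
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # The side that made the last move won; walking backwards through the
    # game, the outcome alternates sign between the two (identical) players.
    outcome = 1.0
    for pile, move in reversed(history):
        Q[(pile, move)] += ALPHA * (outcome - Q[(pile, move)])
        outcome = -outcome

for _ in range(50000):
    self_play_episode()

# Optimal play from a pile of 10 is to take 2, leaving a multiple of 4;
# after enough self-play games the greedy policy typically finds this.
print(choose(10, greedy=True))
```

The worrying version of this would be the same kind of loop producing agency that transfers to the real world rather than to a toy game.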
If the above is right, then maybe the first agent AIs will be few in number, because they’ll have an incentive to stop other agent AIs from coming into existence and will be smart enough to do so, e.g. by taking over the internet or manipulating people.
Extrapolation capability is wielded by shoggoths and makes masks possible, but it’s not wielded by the masks themselves. Just as humans can’t predict next tokens given a prompt anywhere near as well as LLMs can, neither can LLM characters: they can’t disregard the rest of the context outside the target prompt to access their “inner shoggoth”, let alone put that level of capability to more useful work. So agency in masks doesn’t automatically take advantage of the extrapolation capability in shoggoths; merely becoming agentic doesn’t turn a mask superintelligent. This creates the danger of only slightly superhuman AGIs that immediately muck up alignment security once LLM masks do get to autonomous agency (which I’m almost certain they will eventually, unless something else happens first).
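To make that asymmetry concrete, here is a minimal sketch (my own illustration, using GPT-2 as a small stand-in for a large LLM, and assuming the HuggingFace `transformers` and `torch` packages): the “shoggoth” route reads the next-token distribution straight off the model’s logits, while the “mask” route asks, in natural language, for a guess about the next word and gets whatever the generated character says.

```python
# Minimal illustration of shoggoth-route vs mask-route access to the same model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Eiffel Tower is located in the city of"

# "Shoggoth" route: the raw next-token distribution the predictor computes.
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx.item())!r}: {p.item():.3f}")

# "Mask" route: ask a character, in natural language, to guess the next word.
# The answer is produced by ordinary text generation conditioned on the whole
# question, not by reading off the distribution computed above.
question = (
    "Q: What single word is most likely to come next after the text "
    f"'{prompt}'?\nA:"
)
q_ids = tok(question, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **q_ids, max_new_tokens=5, do_sample=False,
        pad_token_id=tok.eos_token_id,
    )
print(tok.decode(out[0][q_ids["input_ids"].shape[1]:]))
```

Whatever the second route prints is produced token by token by the simulated character, which has no privileged access to the distribution the first route reads off directly; that is the sense in which masks don’t inherit the shoggoth’s extrapolation capability. A real chat model’s persona would be queried through its chat interface, but the asymmetry is the same.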
It’s only shoggoths themselves waking up (learning to use situationally aware deliberation within the residual stream rather than the context window) that makes an immediate qualitative capability discontinuity more likely (for LLMs). Looking at GPT-4’s ability to solve complicated tasks without thinking out loud in tokens, I suspect that merely a slightly different SSL schedule with a sufficiently giant LLM might trigger that. Hence I’ve recently been operating under a one-year lower bound on AGI timelines (lower 25% quantile), until the literature implies a negative result for that experiment (since GPT-4-level scale seems necessary, this might take a while). This outcome would both reduce the chances of direct alignment and increase the chances that alignment security gets sorted.