I’m a little skeptical of your contention that all these properties are more-or-less independent.
Rather, there is a strong intuition that all or most of these properties are downstream of a core of agentic behaviour inherent to the notion of true general intelligence.
I view the fact that LLMs are not agentic as further evidence that it’s a conceptual error to classify them as true general intelligences, not as evidence that AI risk is low. It’s a bit like if, in the 1800s, somebody said flying machines would be dominant weapons of war in the future and got rebutted with ‘hot gas balloons are only used for reconnaissance in war, and they aren’t very lethal, so flying machines won’t be a decisive military technology.’
I don’t know Nate’s views exactly, but I would imagine he holds a similar view (do correct me if I’m wrong). In any case, I imagine you are quite familiar with my position here.
I’d be curious to hear more about where you’re coming from.
It is plausible to me that there’s a core of agentic behavior that causes all of these properties, and for this reason I don’t think they are totally independent in a statistical sense. And of course, if you already assume a utility maximizer, it will tend to satisfy all the properties. But in practice, the burden of proof lies with you here. I don’t think we have enough evidence, either empirical or from theoretical arguments, to say with any confidence that this core exists and that the first AGIs will fall into the capabilities “attractor well” (a term Nate uses).
I thought about possible sharp left turn mechanisms for several months at MIRI. Although some anticipated facts about future AIs seem pretty scary (the novelty and diversity of obstacles requiring agency; most feedback being internal or outcome-oriented rather than provided by humans), the arguments are mostly nonrigorous (like in the linked post), and they left me feeling pretty uncertain. There are the coherence theorems, but those don’t tell you whether you can use some training or editing scheme to imbue an AI with a generalizable-enough low-impact preference, or whether an AI will tend to erase safeguards. Overall, my best guess is that models will be about as consequentialist as humans are, but that we will have greater control over them, which makes some properties like shutdownability more likely.
Also, corrigibility evals are useful even if you are 80% sure that all the properties come from this core of consequentialism. In the hardest worlds, we see the AI become more agentic as it becomes more general, and it starts resisting shutdown at the same time as, and for the same reason that, it discovers other useful problem-solving techniques. Maybe that only takes days to months. This is a sure sign not to deploy! I think having multiple corrigibility metrics will provide value for the same reason we have multiple metrics for anything we are unsure how to measure: they give more specific information even when they’re fairly correlated.
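To make the “correlated metrics still help” point concrete, here’s a toy sketch of my own (made-up numbers, and it assumes each metric is an equally noisy, equally correlated measurement of one underlying quantity, which real eval suites won’t be):

```python
# Toy illustration (assumptions, not any real eval suite): treat each
# corrigibility metric as an equally noisy estimate of one underlying
# "corrigibility" quantity, with pairwise correlation rho between metrics.
# Variance of the average of k such metrics: sigma^2 * (1 + (k - 1) * rho) / k.

def variance_of_mean(k: int, rho: float, sigma2: float = 1.0) -> float:
    """Variance of the mean of k equally correlated measurements."""
    return sigma2 * (1 + (k - 1) * rho) / k

for rho in (0.0, 0.5, 0.8):
    print(rho, [round(variance_of_mean(k, rho), 3) for k in (1, 2, 4, 8)])
# rho=0.8 -> [1.0, 0.9, 0.85, 0.825]: even strongly correlated metrics shave
# off some uncertainty, and in practice each eval also probes failure modes
# the others miss, which this simple model understates.
```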
I wrote earlier about why one should consider the cognitive properties of LLMs relevant to AI risk. In short, the more impressive the cognitive tasks AIs can do, the more we should update; before observing LLMs, I would have given a decent chance that somewhat agentic consequentialism is needed to summarize video, write code, and pass the bar exam, so I should now make a moderately sized update against that view.
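For concreteness, here is the shape of that update with invented numbers (a sketch of my reasoning, not a calibrated estimate):

```python
# Toy Bayes update with invented numbers: H = "tasks like passing the bar
# exam require somewhat agentic consequentialism". E = "LLMs do these tasks
# while showing little agency". The likelihoods below are illustrative only.
prior_H = 0.5
p_E_given_H = 0.2      # under H, non-agentic systems doing this is surprising
p_E_given_not_H = 0.6  # under not-H, it is fairly unsurprising

posterior_H = (prior_H * p_E_given_H) / (
    prior_H * p_E_given_H + (1 - prior_H) * p_E_given_not_H
)
print(posterior_H)  # ~0.25: a moderately sized update against H
```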