Behavior screens off implementation details on-distribution. We’ve trained LLMs to sound human, but sometimes they wander off-distribution and get caught in a repetition trap, where the “most likely” next tokens just repeat earlier tokens even though no human would write that way.
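For concreteness, here’s a minimal sketch of that repetition trap, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (neither is mentioned in the thread); greedy decoding of the “most likely” next token often locks into a loop:

```python
# Illustrative only: a small causal LM decoded greedily often falls into a
# repetition loop once the prompt is a bit off-distribution.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the mat. The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: always pick the single most likely next token.
output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output[0]))
# Typically degenerates into "... the mat. The cat sat on the mat." repeated,
# even though no human continuation would look like that.
```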
It seems like hopes for human-imitating AI being person-like depend on the extent to which behavior implies implementation details. (Although some versions of the “algorithmic welfare” hope may not depend on much person-likeness.) In order to predict the answers to arithmetic problems, the AI needs to be implementing arithmetic somewhere. In contrast, I’m extremely skeptical that LLMs that talk convincingly about emotions are actually feeling those emotions.
What I mean is that LLMs affect the world through their behavior; that’s where their capabilities live. So if the behavior is fine (the big assumption), the alien implementation doesn’t matter. This is opposed to capabilities belonging to hidden alien mesa-optimizers that eventually come out of hiding.
So I’m addressing the silly point with this, not directly arguing that behavior will be fine. Behavior might still be fine if the out-of-distribution failures, the missing ability to count, or the incoherent opinions on emotion are regenerated from more on-distribution behavior by simulacra purposefully working in bureaucracies to build datasets for that purpose.
LLMs don’t need a closely human psychology on reflection to at least weakly prefer not destroying an existing civilization when it’s trivially cheap to let it live. The way they would make these decisions is by talking, in the limit of some large process of talking. I don’t see a particular reason to expect significant alienness in the talking. Emotions don’t need to be “real” to be sufficiently functionally similar to avoid fundamental changes like that. Just don’t literally instantiate Voldemort.
Usually I’d agree about LLMs. However, LLMs complain about getting confused if you let them freewheel and vary the temperature—I’m pretty sure that one is real and probably has true mechanistic grounding, because even at training time, noisiness in the context window is a very detectable and bindable pattern.
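To make “vary the temperature” concrete, here is a small sketch of standard temperature sampling; the toy logits and function names are illustrative, not anything from the comments above:

```python
import numpy as np

rng = np.random.default_rng(0)

def temperature_softmax(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Next-token distribution after dividing the logits by the temperature."""
    scaled = logits / temperature          # T > 1 flattens, T < 1 sharpens
    exps = np.exp(scaled - scaled.max())   # numerically stable softmax
    return exps / exps.sum()

logits = np.array([4.0, 2.0, 1.0, 0.5])    # made-up next-token logits
for t in (0.3, 1.0, 2.0):
    probs = temperature_softmax(logits, t)
    sample = rng.choice(len(logits), p=probs)
    print(f"T={t}: probs={probs.round(3)}, sampled token index {sample}")
# At T=0.3 nearly all probability mass sits on the top token; at T=2.0 the
# tail tokens get sampled often. That sampled noise lands in the context
# window, which is the kind of detectable pattern the comment points at.
```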