Interesting argument. I think your main point is that an AI that is a perfect replacement for an individual human could gradually replace every human in an organization, or in the world, while producing outcomes similar to those of current society, and would therefore be aligned with humanity’s goals. This also reads as an argument in favor of current AI practice: pre-training on a next-word prediction objective over internet text, followed by supervised fine-tuning.
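For concreteness, the pre-training objective referred to here is roughly per-token cross-entropy. A minimal sketch (my own illustration, using plain numpy; array shapes are assumptions, not anything from the argument):

```python
import numpy as np

def next_token_loss(logits: np.ndarray, tokens: np.ndarray) -> float:
    """Average cross-entropy of predicting token t+1 from the logits at position t.
    logits: (seq_len, vocab_size) array; tokens: (seq_len,) integer ids."""
    # Shift by one: the prediction at position t is scored against token t+1.
    pred = logits[:-1]
    target = tokens[1:]
    # Numerically stabilised log-softmax over the vocabulary.
    z = pred - pred.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(target)), target].mean())
```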
That said, I noticed a few limitations of this argument:
- Possibility of deception: As jimrandomh mentioned earlier, a misaligned AI might be incentivized to behave identically to a helpful human until it can safely pursue its true objective. This alignment plan therefore seems to require that AIs not be too prone to deception.
- Generalization: An AI might behave exactly like a human in situations similar to its training data but fail to generalize to out-of-distribution scenarios. For example, AIs might behave similarly to humans in typical situations but diverge from human norms once they become superintelligent.
- Emergent properties: The AIs might be perfect human substitutes individually but, as a group, produce unexpected emergent behavior that can’t easily be foreseen in advance. To use an analogy, adding grains of sand to a pile one by one seems stable until the pile collapses in a mini-avalanche.
a misaligned AI might be incentivized to behave identically to a helpful human until it can safely pursue its true objective
It could, but some humans might also do that. Indeed, humans do that kind of thing all the time.
AIs might behave similarly to humans in typical situations but diverge from human norms once they become superintelligent.
But they wouldn’t ‘become’ superintelligent, because there would be no further training once the AI had finished training. And OOD inputs won’t produce different outputs if the underlying function is the same: given a complexity prior and enough data, ML algorithms will converge on the same function the human brain uses.
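To make the disagreement here concrete, here is a toy sketch (purely illustrative; the models, functions, and numbers are assumptions, not anything from either comment): two models fit to the same training data can match each other in-distribution yet diverge far out of distribution, while an exact functional copy agrees everywhere by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)          # "in-distribution" inputs
y_train = np.sin(2 * np.pi * x_train)         # the underlying function

# Two different hypotheses fit to the same data.
model_a = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)
model_b = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

x_in = np.linspace(0.0, 1.0, 100)             # in-distribution test points
x_out = np.linspace(2.0, 3.0, 100)            # out-of-distribution test points

print("in-dist disagreement :", np.max(np.abs(model_a(x_in) - model_b(x_in))))
print("OOD disagreement     :", np.max(np.abs(model_a(x_out) - model_b(x_out))))

# A literal copy of model_a agrees with model_a everywhere, including OOD.
# The crux is whether training actually recovers the same function the brain
# implements, rather than merely the same behavior on the training distribution.
model_a_copy = model_a.copy()
print("copy OOD disagreement:", np.max(np.abs(model_a(x_out) - model_a_copy(x_out))))
```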
The AIs might be perfect human substitutes individually but, as a group, produce unexpected emergent behavior that can’t easily be foreseen in advance. To use an analogy, adding grains of sand to a pile one by one seems stable until the pile collapses in a mini-avalanche.
The behavior will follow the same probability distribution since the distribution of outputs for a given AI is the same as for the human it is a functional copy of. Think of a thousand piles of sand from the same well-mixed batch—each of them is slightly different, but any one pile falls within the distribution.
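The sand-pile analogy can be made concrete with a toy Bak-Tang-Wiesenfeld-style simulation; this is a minimal sketch with an arbitrary grid size and grain count, not anything from the thread. Each independently built pile differs grain by grain, yet every pile’s avalanche sizes, including the occasional large collapse, are draws from the same distribution.

```python
import numpy as np

def avalanche_sizes(n_grains: int, grid_size: int = 20, seed: int = 0) -> list[int]:
    """Drop grains one by one and record how many sites topple after each drop."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((grid_size, grid_size), dtype=int)
    sizes = []
    for _ in range(n_grains):
        # Drop one grain on a random site.
        i, j = rng.integers(grid_size, size=2)
        grid[i, j] += 1
        toppled = 0
        # Any site holding 4+ grains topples, sending one grain to each
        # neighbour; grains pushed off the edge are lost.
        while True:
            over = np.argwhere(grid >= 4)
            if len(over) == 0:
                break
            for i, j in over:
                grid[i, j] -= 4
                toppled += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < grid_size and 0 <= nj < grid_size:
                        grid[ni, nj] += 1
        sizes.append(toppled)
    return sizes

# Several independently built piles: each differs grain by grain, yet the
# summary statistics of their avalanches land in the same ballpark.
for seed in range(3):
    s = np.array(avalanche_sizes(5000, seed=seed))
    print(f"pile {seed}: mean avalanche {s.mean():.2f}, largest {s.max()}")
```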