All of those definitions involve imperfect modeling assumptions and simplifications. Your analogy also assumes that a normal-mixture-model is capable of perfectly capturing reality; I’m aware that this is provably true asymptotically for an infinite-cluster Dirichlet process mixture, but we don’t live in asymptopia, and in practice it is effectively yet another strong assumption that holds only weakly at best.
This is a critical point; it’s the reason we want to point to the pattern in the territory rather than to a human’s model itself. It may be that the human is using something analogous to a normal-mixture-model, which won’t perfectly match reality. But in order for that to actually be predictive, it has to find some real pattern in the world (which may not be perfectly normal, etc). The goal is to point to that real pattern, not to the human’s approximate representation of that pattern.
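To make that concrete, here is a minimal sketch (an assumed toy setup, not anything from the exchange above; the skewed synthetic clusters and sklearn’s GaussianMixture are just illustrative stand-ins). Even when the normality assumption is flatly wrong, the fitted model only earns its predictive power by latching onto the real clumps in the data:

```python
# Minimal illustrative sketch (assumed setup, not from the discussion above):
# fit a Gaussian mixture to data whose clusters are skewed and non-normal.
# The normality assumption is wrong, yet the fitted component means still
# land near the real cluster centers, i.e. the model is predictive only
# because it latches onto a pattern that is actually in the data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two skewed (log-normal) clumps around illustrative centers.
true_centers = np.array([[0.0, 0.0], [5.0, 5.0]])
clump_a = true_centers[0] + rng.lognormal(0.0, 0.5, size=(500, 2)) - 1.0
clump_b = true_centers[1] + rng.lognormal(0.0, 0.5, size=(500, 2)) - 1.0
data = np.vstack([clump_a, clump_b])

# A two-component normal mixture: a misspecified but useful approximation.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
print("fitted component means:\n", gmm.means_.round(2))
print("true clump centers:\n", true_centers)
# The fitted means sit close to the true centers (up to a small bias from
# the skew), because the two well-separated clumps are really there even
# though "normal" only holds approximately.
```

The approximate representation lives in the model; the clumps live in the territory, and the latter is what we want to point at.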
Now, two natural (and illustrative) objections to this:
1. If the human’s representation is an approximation, then there may not be a unique pattern to which their notions correspond; the “corresponding pattern” may be underdefined.
2. If we’re trying to align an AI to a human, then presumably we want the AI to use the human’s own idea of the human’s values, not some “idealized” version.
The answer to both of these is the same: we humans often update our own notion of what our values are, in response to new information. The reality-pattern we want to point to is the pattern toward which we are updating; it’s the thing our learning-algorithm is learning about. I think this is what coherent extrapolated volition is trying to get at: it asks “what would we want if we knew more, thought faster, …”. Assuming that the human’s learning algorithm is working correctly, and continues working correctly, those are exactly the sort of conditions generally associated with convergence of the human’s model to the true reality-pattern.
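A hedged sketch of that last claim (again just an assumed toy setup, not anything from the original exchange): give the same kind of misspecified learner more and more observations, and its estimate converges toward the stable pattern generating the data, not toward any earlier approximate notion it held.

```python
# Minimal illustrative sketch (assumed toy setup): "knowing more" as taking
# in more observations. The learner's estimate of a clump's center converges
# toward the real center generating the data, even though the observations
# are skewed and the learner's model of them is only approximate.
import numpy as np

rng = np.random.default_rng(1)
true_center = np.array([5.0, 5.0])  # the reality-pattern being learned about

for n in (10, 100, 1_000, 10_000):
    # Skewed, non-normal observations scattered around the true center.
    samples = true_center + rng.lognormal(0.0, 0.5, size=(n, 2)) - 1.13
    estimate = samples.mean(axis=0)  # the learner's current notion of the pattern
    error = np.linalg.norm(estimate - true_center)
    print(f"n={n:>6}: estimate={estimate.round(3)}, error={error:.3f}")
# As n grows, the estimate stabilizes around the true center: conditions like
# "knew more, thought faster" are the conditions under which the learner's
# approximate notion converges to the pattern it was tracking all along.
```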