Recognise that almost all the Kolmogorov complexity of a particular simulacrum goes into specifying the traits, not the valences. The traits — polite, politically liberal, racist, smart, deceitful — are massively K-complex concepts, whereas each valence is a single floating-point number, or maybe even a single bit!
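To make the asymmetry concrete, here is a minimal sketch (my own illustrative encoding, not a formalism from the original argument): represent each trait by some long description string standing in for its K-complex specification, and each valence by one bit. The trait side dominates the description length; the valence side contributes one bit per trait.

```python
# Hypothetical encoding of a simulacrum: trait concepts are long
# specifications (stand-ins for their K-complex definitions), while
# each valence is just a single bit (trait present at +1 or -1 pole).

traits = {
    "polite": "a long world-model-level specification of what politeness is ...",
    "deceitful": "an equally long specification of what deception is ...",
}
valences = {"polite": 1, "deceitful": 0}  # one bit per trait

# Bits needed for the trait specifications vs. the valences.
trait_bits = sum(8 * len(spec.encode()) for spec in traits.values())
valence_bits = len(valences)  # one bit each

print(trait_bits, valence_bits)  # trait side dwarfs the valence side
```

The exact numbers are arbitrary; the point is only that the trait specifications scale with conceptual content while the valences stay at one bit each.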
A bit of a side note, but I have to point out that Kolmogorov complexity in this context is basically a fake framework. There are many notions of complexity, and there’s nothing in your argument that requires Kolmogorov specifically.
People have good intuitions for why the traits (polite, liberal, helpful) will have massive Kolmogorov complexity but the valences won’t.
But the correct mechanistic explanation must actually appeal to what I call “semiotic complexity”.
Now, there is a missing step: formally connecting the two notions of complexity in a quantitative way. However, in the limit they should agree up to an O(1) additive constant, because story-telling is Turing-complete.
Maybe that constant messes up the explanation, but I think that's unlikely.
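One way to gloss the "story-telling is Turing-complete" step (my reading, by analogy with the invariance theorem for Kolmogorov complexity, not something the original argument states formally):

```latex
% If stories can encode programs and programs can encode stories,
% then semiotic and Kolmogorov complexity bound each other:
%   K_sem(x) <= K(x) + c   and   K(x) <= K_sem(x) + c',
% where c, c' depend only on the two description languages, not on x.
\[
  \bigl| K_{\mathrm{sem}}(x) - K(x) \bigr| \le c \quad \text{for all } x .
\]
```

Under this reading, the two complexities differ by at most a constant independent of the simulacrum being described, which is exactly the sense in which the constant could, in principle, "mess up" the explanation only for low-complexity cases.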