various distributions from statistical mechanics turn out to be empirically useful even though our universe isn’t in thermodynamic equilibrium yet, and so there’s some hope that these “idealized” or “convergently instrumentally useful” concepts degrade cleanly into practical real-world concepts like “trees” and “windows”. Which are hopefully so convergently instrumentally useful that the AIs also use them.
I don’t quite understand the turn of phrase ‘degrade cleanly into practical real-world concepts like “trees” and “windows”’ here. Per my understanding of the analogy to statistical mechanics, I would expect there to be an idealized concept of ‘window’ that assumes idealized circumstances never present in real life, but which is nevertheless useful… and that the human concept just is that idealized concept, not a degraded version of it. After all, isn’t the point that the actual real-world concept, the one that doesn’t assume idealized circumstances, is too computationally complex, so that all intelligent agents have to fall back on the idealized version?
Maybe that’s what you mean and I’m just misreading what you wrote.