Your point about physical entropy is noted and a good one.
One reason to think that something like D(u/h) would pick out higher level features of reality is that h encodes those higher-level features. It may be possible to run a simulation of humanity on more efficient physical architecture. But unless that simulation is very close to what we’ve already got, it won’t be selected by g.
You make an interesting point about the inefficiency of physics. I'm not sure exactly what you mean by that, and I'm not in a position of expertise to say otherwise. However, I think there is a way around this problem. Like Kolmogorov complexity, depth has another hidden term in it: the specification of the universal Turing machine that is used, concretely, to measure the depth and size of strings. If depth is defined in terms of a universal machine that is a physics simulator, then there wouldn't be a way to "bypass" physics computationally. That would entail building a computer, within our physics, that is more efficient than our physics itself. Tell me if that's not impossible.
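To make the "hidden term" concrete: logical depth, like Kolmogorov complexity, is always relative to a choice of universal machine. Real logical depth is uncomputable, so the following is only a toy sketch under invented assumptions (finite program tables standing in for universal machines, with made-up step counts), but it shows how the same string can have different depth depending on which machine you measure against:

```python
# Toy illustration only: real logical depth is uncomputable.
# Here "depth of x on machine U" = running time of the shortest
# program that outputs x on U.  The machine is a hidden parameter:
# swap it out and the depth of the very same string changes.

def depth(x, machine):
    """Running time of the shortest program producing x on `machine`.

    `machine` is a toy stand-in for a universal Turing machine:
    a finite table mapping program -> (output, steps).
    """
    candidates = [(len(p), steps)
                  for p, (out, steps) in machine.items() if out == x]
    if not candidates:
        return None  # x is not produced by any program in this table
    shortest = min(length for length, _ in candidates)
    return min(steps for length, steps in candidates if length == shortest)

# Two hypothetical machines with different cost models for the same
# output.  A physics-simulator machine might charge many steps for
# what a generic machine produces cheaply, closing off computational
# shortcuts around physics.
machine_generic = {
    "p1": ("a" * 8, 3),    # a clever short program: fast on this machine
    "p2": ("a" * 8, 100),
}
machine_physics = {
    "q1": ("a" * 8, 50),   # the same string is "deeper" here
}

print(depth("a" * 8, machine_generic))  # 3
print(depth("a" * 8, machine_physics))  # 50
```

The point of the sketch is only that the machine relativizes the measure: fixing the reference machine to be a physics simulator fixes which strings count as deep.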
Re: brains, I’m suggesting that we encode whatever we think is important about brains in the h term. If brains execute a computational process, then that process will be preserved somehow; it need not be preserved on grey matter exactly. Those brains could be uploaded onto more efficient architecture.
I appreciate your intuitions on this but this function is designed rather specifically to challenge them.