The physicists’ definition of entropy is misaligned with the intuitive definition, because it is affected massively more by micro-scale things like temperature and mixing than by macro-scale things like objects and people. This tends to trip people up when they try to take it out of chemistry and physics and apply it anywhere else.
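To put the scale mismatch in concrete terms: statistical-mechanics entropy is a count of microstates,

S = k_B ln Ω,

and for any macroscopic system Ω is dominated by the roughly 10^23 molecular degrees of freedom, so rearranging objects or people changes S by a negligible amount compared with, say, warming the room by a degree.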
When I look at your function g(u), I notice that it has a very similar flavor. While I’m having a very hard time interpreting what it will actually end up optimizing, my intuition is that it will end up dominated by some irrelevant micro-scale property like temperature. That’s the outside view.
On the inside view, I see a few more reasons to worry. This universe’s physics is very computationally inefficient, so the shortest computational path from any state A to some other state B will almost certainly bypass it somehow. Furthermore, human brains are very computationally inefficient, so I would expect the shortest computational path to bypass them, too. I don’t know what that computational shortcut might be, but I wouldn’t expect to like it.
Exploring the properties of logical depth might be interesting, but I don’t expect a good utility function out of it.
Your point about physical entropy is noted, and it is a good one.
One reason to think that something like D(u/h) would pick out higher-level features of reality is that h encodes those higher-level features. It may be possible to run a simulation of humanity on more efficient physical architecture. But unless that simulation is very close to what we’ve already got, it won’t be selected by g.
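For concreteness (and assuming D is meant in roughly Bennett’s sense; the exact convention may differ), conditional depth at significance level s can be written

D_s(u/h) = min { T(p, h) : U(p, h) = u, |p| ≤ K(u/h) + s },

where U is the reference universal machine, T(p, h) is the running time of program p given h as input, |p| is its length, and K(u/h) is the conditional Kolmogorov complexity of u given h. Because h sits on the input tape, I expect the near-minimal programs competing in that minimum to be the ones that lean on whatever higher-level structure h encodes.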
You make an interesting point about the inefficiency of physics. I’m not sure what you mean by that exactly, and I’m not in a position of expertise to say otherwise. However, I think there is a way to get around this problem. Like Kolmogorov complexity, depth has another hidden term in it: the specification of the universal Turing machine that is used, concretely, to measure the depth and size of strings. If we define depth in terms of a universal machine that is a physics simulator, then there wouldn’t be a way to “bypass” physics computationally. That would entail being able to build a computer, within our physics, that is more efficient than our physics. Tell me if that’s not impossible.
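To make the hidden-machine point concrete, here is a toy sketch. Everything in it is invented for illustration (the three-symbol instruction set, the step-costing convention, the brute-force search); real logical depth is defined over a universal machine and is uncomputable. The only point is that what counts as “deep” is relative to the reference machine you fix, which is the knob a physics-simulator machine would turn:

```python
# Toy illustration only: a made-up three-instruction reference machine and a
# bounded, brute-force analogue of depth. Nothing here is part of the proposal.
from itertools import product

ALPHABET = "abD"  # 'a' and 'b' append themselves; 'D' doubles the output so far

def run(program, step_limit=10_000):
    """Execute a program on the toy reference machine.
    Returns (output, steps) or None if the step budget is exceeded."""
    out, steps = "", 0
    for op in program:
        if op == "D":
            steps += len(out)   # doubling costs work proportional to the output
            out += out
        else:
            steps += 1
            out += op
        if steps > step_limit:
            return None
    return out, steps

def toy_depth(target, slack=0, max_len=8):
    """Bounded analogue of depth_s(x): among programs that print `target` and
    are within `slack` symbols of the shortest such program, return the minimum
    number of machine steps. Exhaustive search, so only tiny cases work."""
    hits = []  # (program length, steps taken)
    for length in range(1, max_len + 1):
        for symbols in product(ALPHABET, repeat=length):
            result = run("".join(symbols))
            if result is not None and result[0] == target:
                hits.append((length, result[1]))
    if not hits:
        return None
    shortest = min(length for length, _ in hits)
    return min(steps for length, steps in hits if length <= shortest + slack)

print(toy_depth("abab"))      # 4: "abD" is short and cheap to run
print(toy_depth("abababab"))  # 8: similar description length, but more work
```

On this toy machine the two targets have nearly the same description length but different depths, because the second simply takes more machine work to unfold; swapping in a different reference machine would change which strings count as deep, and choosing a physics simulator as that machine is what I mean by making physics impossible to bypass cheaply.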
Re: brains, I’m suggesting that we encode whatever we think is important about brains in the h term. If brains execute a computational process, then that process will be preserved somehow. It need not be preserved in grey matter exactly. Those brains could be uploaded onto more efficient architecture.
I appreciate your intuitions on this, but this function is designed rather specifically to challenge them.