Thanks for your encouraging comments. They are much appreciated! I was concerned that following the last post with an improvement on it would be seen as redundant, so I’m glad that this process has your approval.
Regarding your first point:
Entropy is not depth. If you do something that increases entropy, then you actually reduce depth, because it becomes easier to get to what you have from an incompressible starting representation: in particular, from the incompressible representation that matches the high-entropy representation you have created. So if you hold humanity steady and superheat the moon, you more or less just keep things at D(u) = D(h), with low D(u/h).
You can do better if you freeze humanity and then create fractal grey goo, which is still in the spirit of your objection. Then you have high D(u), and D(u/h) is something like D(u) - D(h), except when the fractal starts to reproduce human patterns out of the sheer vigor of its complexity, in which case I suppose D(u/h) would begin to drop...though I’m not sure. This may require a more thorough look at the mathematics. What do you think?
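To pin down the relations I’m appealing to, here is a rough sketch in the depth notation above. These are heuristic approximations in the spirit of Bennett’s logical depth (with the significance parameter suppressed), not exact identities:

\[
\text{superheated moon, humanity held steady:}\qquad D(u) \approx D(h), \qquad D(u/h) \text{ low}
\]
\[
\text{frozen humanity plus fractal grey goo:}\qquad D(u) \gg D(h), \qquad D(u/h) \approx D(u) - D(h)
\]

The first line reflects the point that high-entropy, incompressible material is shallow, so it adds essentially nothing to either D(u) or D(u/h); the second is the rough additivity I’m gesturing at, which would presumably break down once the goo starts recapitulating patterns already recorded in h.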
Regarding your second point...
Strictly speaking, I’m not requiring that h abstract away the fleshy bits and capture what is essentially human or transhuman; rather, I am trying to make the objective function agnostic to these questions, so h can include the fleshy bits and all. What’s important is that it includes at least what is valuable, and that can be ensured by including anything that might be valuable. The needle in the haystack can be discovered later, if it’s there at all. Personally, I’m not a transhumanist. I’m an existentialist; I believe our existence precedes our essence.
That said, I think this is a clever point with substance to it. I am, in fact, trying to shift our problem-solving attention to other problems; specifically, to questions that are more tractable and practical.
One simple one is: how can we make better libraries for capturing human existence, so that a supercontroller could make use of as much data as possible as it proceeds?
Another is: given that the proposed objective function is in fact impossible to compute, but (if the argument is ultimately successful) also given that it points in the right direction, what kinds of processes/architectures/algorithms would approximate a g-maximizing supercontroller? Since we have time to steer in the right direction now, how should we go about it?
My real agenda is that I think there are a lot of pressing practical questions regarding machine intelligence and its role in the world, and that the “superintelligence” problem is a distraction except insofar as it can provide clearer guidelines for how we should be acting now.