I enjoyed both this and the previous post. Not the usual computational fare around here, and it’s fun to play with new frameworks. I upvoted particularly for incorporating feedback and engaging with objections.
I have a couple of ways in which I’d like to challenge your ideas.
If I’m not mistaken, there are two routes to maximizing g. Either you can minimize D(u/h), or you can drive D(u) through the roof while not damaging h too badly. Intuitively, the latter seems to give you a better payoff per joule invested. Say our supercontroller grabs a population of humans, puts them in stasis pods of some kind, and then goes about maximizing entropy by superheating the moon. This machine has done a pretty good job of increasing g(u). As long as the supercontroller is careful to keep D(u/h) from approaching D(u), it can easily ignore that term without negotiating the complexity of human civilization, or even human consciousness. That said, I clearly don’t understand relative logical depth very well, so maybe D(u/h) does approach D(u) in the case where D(u) increases while h is held constant?
Another crucial step here is the definition of humanity, and of which processes count as human ones. I’m going to assume that everyone here is a member in good standing of Team Reductionism, so this is not a trivial task. It is called transhumanism, after all, and you are more than willing to abstract away from the fleshy bits when you define ‘human’. So what do you keep? It seems plausible, even likely, that we will not be able to define ‘humanity’ with a precision that satisfies our intuitions until we already have the capacity to create a supercontroller. In this sense your suggestion is hiding the problem it attempts to solve: how to define our values with sufficient rigor that our machines can understand them.
Thanks for your encouraging comments. They are much appreciated! I was concerned that following the last post with an improvement on it would be seen as redundant, so I’m glad that this process has your approval.
Regarding your first point:
Entropy is not depth. If you do something that increases entropy, you actually reduce depth, because it becomes easier to get to what you have from an incompressible starting representation; in particular, from the incompressible representation that matches the high-entropy state you have created. A maximally random string is shallow for exactly this reason: the fastest way to produce it is to print it from a verbatim copy of itself. So if you hold humanity steady and superheat the moon, you more or less just keep things at D(u) = D(h), with low D(u/h).
You can do better if you freeze humanity and then create fractal grey goo, which is still in the spirit of your objection. Then you have high D(u), and D(u/h) is something like D(u) - D(h), except when the fractal starts to reproduce human patterns out of the sheer vigor of its complexity, in which case I guess D(u/h) would begin to drop... though I’m not sure. This may require a more thorough look at the mathematics. What do you think?
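To make the bookkeeping visible, here is a back-of-the-envelope version of both scenarios. I’m writing g schematically as D(u) - D(u/h) purely for illustration; treat that form as an assumption of this sketch rather than the exact objective function:

```latex
% Back-of-the-envelope sketch. Assumption: g(u) is written schematically
% as D(u) - D(u/h); the exact form may differ, so read this as an
% illustration of the bookkeeping, not a derivation.
\begin{align*}
\text{Superheated moon:} \quad & D(u) \approx D(h), \quad D(u/h) \approx 0
  && \Rightarrow \quad g(u) \approx D(h) \\
\text{Fractal grey goo:} \quad & D(u) \gg D(h), \quad D(u/h) \approx D(u) - D(h)
  && \Rightarrow \quad g(u) \approx D(h)
\end{align*}
```

If that reading is right, neither strategy buys much beyond the depth humanity already has; the grey goo only starts to pay off to the extent it begins re-deriving human patterns, which is exactly where D(u/h) falls below D(u) - D(h).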
Regarding your second point...
Strictly speaking, I’m not requiring that h abstract away the fleshy bits and capture what is essentially human or transhuman. I am trying to make the objective function agnostic to these questions: h can include the fleshy bits and all. What’s important is that it includes at least what is valuable, and that can be accomplished by including anything that might be valuable. The needle in the haystack can be discovered later, if it’s there at all. Personally, I’m not a transhumanist. I’m an existentialist; I believe our existence precedes our essence.
That said, I think this is a clever point with substance to it. I am, in fact, trying to shift our problem-solving attention elsewhere; however, I am trying to turn it toward more tractable and practical questions.
One simple question is: how can we make better libraries for capturing human existence, so that a supercontroller could make use of as much data as possible as it proceeds?
Another is: given that the proposed objective function is in fact impossible to compute, but (if the argument is ultimately successful) also given that it points in the right direction, what kinds of processes/architectures/algorithms would approximate a g-maximizing supercontroller? Since we have time to steer in the right direction now, how should we go about it?
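As one very loose illustration of what “approximate” might mean in practice, here is a toy sketch in Python. It leans on the folk heuristic that decompression time from a near-minimal representation is a crude stand-in for logical depth, and uses a zlib preset dictionary as a stand-in for conditioning on h. Every name and number in it is hypothetical, and it is nowhere near a real estimator of D; it only shows that depth-like quantities can at least be gestured at with off-the-shelf tools:

```python
import time
import zlib

def depth_proxy(data: bytes, trials: int = 100) -> float:
    """Toy stand-in for D(x): time to regenerate x from a near-minimal
    description we actually possess (a zlib stream at maximum effort).
    Bennett's logical depth is uncomputable; this is only illustrative."""
    packed = zlib.compress(data, 9)
    start = time.perf_counter()
    for _ in range(trials):
        zlib.decompress(packed)
    return (time.perf_counter() - start) / trials

def relative_depth_proxy(data: bytes, given: bytes, trials: int = 100) -> float:
    """Toy stand-in for D(u/h): regenerate u while h is supplied for free,
    approximated by handing h to zlib as a preset dictionary so structure
    shared with h costs (almost) nothing to describe."""
    comp = zlib.compressobj(9, zdict=given)
    packed = comp.compress(data) + comp.flush()
    start = time.perf_counter()
    for _ in range(trials):
        zlib.decompressobj(zdict=given).decompress(packed)
    return (time.perf_counter() - start) / trials

if __name__ == "__main__":
    # Hypothetical stand-ins: h is an archived record of humanity,
    # u is a candidate future state that reuses and elaborates it.
    h = b"archived human record " * 2000
    u = h + b"new structure grown out of the old " * 2000
    print("D(u)   proxy:", depth_proxy(u))
    print("D(u/h) proxy:", relative_depth_proxy(u, given=h))
```

The real question, what architecture could actually steer by such a signal, is untouched by this; the sketch is only meant to make the “approximation” framing concrete.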
My real agenda is this: I think there are a lot of pressing practical questions regarding machine intelligence and its role in the world, and the “superintelligence” problem is a distraction except insofar as it can provide clearer guidelines for how we should be acting now.