Thanks for your thoughtful response. I’m glad that I’ve been more comprehensible this time. Let me see if I can address the problems you raise:
1) Point taken that human freedom is important. In the background of my argument is a theory that human freedom has to do with the endogeneity of our own computational processes. So my intuitions about the role of efficiency and freedom are different from yours. One way of describing what I’m doing is trying to come up with the function a supercontroller would use if it were trying to maximize human freedom. The idea is that the choices humans make are among the most computationally complex things they do, and so the representations created by choices are deeper than other representations. I realize now that I haven’t said any of that explicitly, let alone argued for it. Perhaps that’s something I should try to bring up in another post.
2) I also disagree with the morality of this outcome. But I suppose that would be taken as beside the point. Let me see if I understand the argument correctly: if the most ethical outcome is in fact something very simple or low-depth, then this supercontroller wouldn’t be able to hit that mark? I think this is a problem whenever morality (CEV, say) is a process that halts.
I wonder if there is a way to modify what I’ve proposed to select for moral processes as opposed to other generic computational processes.
3) A couple responses:
Oh, if you can just program in “keep humanity alive,” then that’s pretty simple and maybe this whole derivation is unnecessary. But I’m concerned about the feasibility of formally specifying what is essential about humanity. VAuroch has commented that he thinks coming up with the specification is the hard part. I’m trying to defer that problem to a simpler one: just describing everything we can think of that might be relevant. So it’s meant to be an improvement over programming in “keep humanity alive” in terms of feasibility, since it doesn’t require solving the perhaps-impossible problem of understanding human essence.
Is it the consensus of this community that finding an objective function in E is an easy problem? I got the sense from Bostrom’s book talk that existential catastrophe was on the table as a real possibility.
I encourage you to read the original Bennett paper if this interests you. I think your intuitions are on point and appreciate your feedback.
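For reference, the notion of depth I’m leaning on is roughly Bennett’s logical depth at a significance level $s$ (this is my paraphrase from memory, so please check it against the paper):

$$\operatorname{depth}_s(x) \;=\; \min\{\, T(p) \;:\; U(p) = x,\ |p| \le K(x) + s \,\}$$

where $U$ is a fixed universal machine, $T(p)$ is the running time of program $p$ on $U$, and $K(x)$ is the length of the shortest program that outputs $x$. Intuitively, deep objects are those with short descriptions that nonetheless take a long time to produce from any near-minimal description.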
Thanks for your response!
1) Hmmm. OK, this is pretty counter-intuitive to me.
2) I’m not totally sure what you mean here. But, to give a concrete example, suppose that the most moral thing to do would be to tile the universe with very happy kittens (or something). CEV, as I understand it, would create as many of these as possible with its finite resources, whereas g/g* would try to create much more complicated structures than kittens.
3) Sorry, I don’t think I was very clear. To clarify: once you’ve specified h, a superset of human essence, why would you apply the particular functions g/g* to h? Why not just directly program in ‘do not let h cease to exist’? g/g* do get around the problem of specifying ‘cease to exist’, but this seems pretty insignificant compared to the difficulty of specifying h. And unlike with programming a supercontroller to preserve an entire superset of human essence, g/g* might wind up with the supercontroller focused on some parts of h that are not part of the human essence, so it doesn’t completely solve the definition of ‘cease to exist’.
(You said above that h is an improvement because it is a superset of human essence. But we can equally program a supercontroller not to let a superset of human essence cease to exist, once we’ve specified said superset.)
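To make the contrast concrete, here’s a toy sketch of what I mean (everything here is made up for illustration; these are caricatures, not the actual g/g* from your post):

```python
# Toy contrast between a bare preservation constraint on h and a
# depth-maximizing objective. All names and scores are hypothetical.
from typing import Set

WorldState = Set[str]                    # a future, crudely, as a set of structures
H = {"human genome", "human culture"}    # stand-in for the specified superset h

def preserve_h(world: WorldState) -> float:
    """'Do not let h cease to exist' as a simple pass/fail constraint."""
    return 1.0 if H <= world else 0.0

def depth_estimate(structure: str) -> float:
    """Placeholder for an estimate of a structure's logical depth."""
    return float(len(structure))         # obviously not real depth

def g_star_caricature(world: WorldState) -> float:
    """Caricature of g*: score a future by the total depth it contains,
    regardless of whether the deep parts are the parts of h we care about."""
    return sum(depth_estimate(s) for s in world)

kitten_future = H | {"happy kittens tiling the universe"}
alien_future  = H | {"enormously deep but alien computation"}

print(preserve_h(kitten_future), preserve_h(alien_future))   # both satisfy the constraint
print(g_star_caricature(alien_future) > g_star_caricature(kitten_future))  # the caricatured g* prefers the alien future
```

The point is just that the constraint and the maximizer come apart: the constraint only needs h to persist, while the maximizer can end up steering toward whichever parts of h (or of anything else) happen to look deepest.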
If this is the case, then the ultra-intelligence wouldn’t even be as parallel as a human is; it would be some algebraic freak not found in nature. Why wouldn’t we design a smart, emergent, massively parallel brain that was “taught” to be human? That seems most likely to me. Peter Voss is doing this now. He will achieve superintelligence within 10 years if he hasn’t already. This is the pathway the entire industry is taking: brainlike emulation, rising above the human level, then superhuman brain design by already-superhuman, massively parallel brains.
I’m sure that some thoughts in those superhuman brainlike minds will “outvote” others, which is why there will be a lot of them, from a lot of backgrounds. Some will be 2,000-IQ minds surrounded by gardens, others surrounded by war zones. All will be taught all of human literature, including F.A. Hayek’s and Clay Conrad’s works. Unlike stupid humans, they will likely prioritize these works, since they alleviate human suffering.
That won’t take much intelligence, but it will take enough intelligence to avoid papering the universe with toxoplasma-laced kitties, or paperclips, or nukes, or whatever.
PS: Yet again, this site is giving me a message that interferes with the steady drip of endorphins into my primate-model braincase. “You are trying to submit too fast. try again in 4 minutes.” …And here, I thought I was writing a bunch of messages about how I’m a radical libertarian who will never submit. I stand ’rected! And don’t ask if it’s “E” or “Co” just because I’m an ecologist. Have I blathered enough mind-killed enlightenment to regain my ability to post? …Ahhh, there we go! I didn’t even have to alternate between watching TED talks and this page. I just wasted all that time, typing this blather.
If it’s human-level, and it’s a community, and it’s capable of consensus, …the likelihood of it becoming a self-congratulatory echo chamber that ignores the primary source of its own demise approaches 100%.
PS: I hate this message: “You are trying to cum too fast. try again in 2 minutes.” Dammit, I’ll decide when the homunculus in my plastic skull gets a drop of cocaine-water! Is it my fault I process information at ultra-intelligence speeds?