Thanks for your response!
1) Hmmm. OK, this is pretty counter-intuitive to me.
2) I’m not totally sure what you mean here. But to give a concrete example: suppose the most moral thing to do would be to tile the universe with very happy kittens (or something). CEV, as I understand it, would create as many of these as possible with its finite resources, whereas g/g* would try to create much more complicated structures than kittens.
3) Sorry, I don’t think I was very clear. To clarify: once you’ve specified h, a superset of human essence, why would you apply the particular functions g/g* to h? Why not just directly program in ‘do not let h cease to exist’? g/g* do get around the problem of specifying ‘cease to exist’, but this seems pretty insignificant compared to the difficulty of specifying h. And unlike directly programming a supercontroller to preserve the entire superset of human essence, g/g* might wind up with the supercontroller focused on some parts of h that are not part of the human essence, so it doesn’t completely solve the definition of ‘cease to exist’ (a toy sketch below makes this concrete).
(You said above that h is an improvement because it is a superset of human essence. But we can equally program a supercontroller not to let a superset of human essence cease to exist, once we’ve specified said superset.)
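To make point 3 concrete, here is a toy sketch in Python. Everything in it is a hypothetical stand-in: g and h are not actually defined in this thread, so this is not the post’s construction, just an illustration of how ‘preserve g(h)’ can come apart from ‘preserve h’:

    # Toy sketch: a "future state" is modeled as the set of properties
    # that still hold in it. All names here are hypothetical stand-ins.

    def preserve_h_directly(future_props, h):
        # "Do not let h cease to exist": every property in h must persist.
        return h <= future_props

    def preserve_via_g(future_props, h, g):
        # Apply g to h first. If g selects only parts of h, this constraint
        # can be satisfied even while essential parts of h are lost.
        return g(h) <= future_props

    # Example: g fixates on a part of h that is not the human essence.
    h = {"human essence", "inessential trait"}
    g = lambda props: {"inessential trait"}
    future_props = {"inessential trait"}

    print(preserve_h_directly(future_props, h))  # False: the essence is gone
    print(preserve_via_g(future_props, h, g))    # True: g's focus survived

The point is just that the gap between h and g(h) is exactly where the ‘cease to exist’ problem reappears.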
If this is the case, then the ultra-intelligence wouldn’t even be as parallel as a human is; it would be some algebraic freak not found in nature. Why wouldn’t we design a smart, emergent, massively parallel brain that was “taught” to be human? That seems the most likely path. Peter Voss is working on this now, and he will achieve superintelligence within ten years if he hasn’t already. This is the pathway the entire industry is taking: brainlike emulation rising above the human level, then superhuman brain design by already-superhuman, massively parallel brains.
I’m sure that some thoughts in those superhuman brainlike minds will “outvote” others, which is why there will be a lot of them, from a lot of backgrounds. Some will be 2,000-IQ minds surrounded by gardens, others by war zones. All will be taught all of human literature, including F.A. Hayek’s and Clay Conrad’s works. Unlike stupid humans, they will likely prioritize these works, since they alleviate human suffering.
That won’t take much intelligence, but it will take enough to avoid papering the universe with toxoplasma-laced kitties, or paperclips, or nukes, or whatever.
PS: Yet again, this site is giving me a message that interferes with the steady drip of endorphins into my primate-model braincase: “You are trying to submit too fast. Try again in 4 minutes.” And here I thought I was writing a bunch of messages about how I’m a radical libertarian who will never submit. I stand ’rected! (And don’t ask if it’s “E” or “Co” just because I’m an ecologist.) Have I blathered enough mind-killed enlightenment to regain my ability to post? …Ahhh, there we go! I didn’t even have to alternate between watching TED talks and this page. I just wasted all that time typing this blather.