I read the essay you linked to. I really don’t know where to start.
Now, we are not currently facing threats from any alien races. However, if we do face such threats in the future, then we probably do not want to have handicapped our entire civilisation.
So we should guard against potential threats from non-human intelligent life by building a non-human superintelligence that doesn’t care about humans?
While dependencies on humans may have the effect of postponing the demise of our species, they also have considerable potential to hamper and slow evolutionary progress.
Postpone? I thought the point of friendly AI was to preserve human values for as long as physically possible. “Evolutionary progress”? Evolution is stupid and doesn’t care about individual organisms. Evolution causes pointless suffering and death, and it produces stupid designs. As Michael Vassar once said: think of all the simple things that evolution never invented. The wheel. The bow and arrow. The axial-flow pump. Evolution spent billions of years creating and destroying organisms, and it couldn’t invent things built by cave men. Is it OK in your book that people die of antibiotic-resistant diseases? MRSA is a product of evolutionary progress.
For example humans have poor space-travel potential, and any tendency to keep humans around will be associated with remaining stuck on the home world.
Who said humans have to live on planets or breathe oxygen or run on neurons? Why do you think a superintelligence will have problems dealing with asteroids when humans today are researching ways to deflect them?
I think your main problem is that you’re valuing the wrong thing. You practically worship evolution while neglecting important things like people, animals, or anything that can suffer. Also, I think you fail to notice the huge first-mover advantage of any superintelligence, even one as “handicapped” as a friendly AI.
Finally, I know the appearance of the arguer doesn’t change the validity of the argument, but I feel compelled to tell you this: You would look much better with a haircut, a shave, and some different glasses.
I don’t advocate building machines that are indifferent to humans. For instance, I think machine builders would be well advised to (and probably mostly will) construct devices that obey the law—which includes all kinds of provisions for preventing harm to humans.
Evolution did produce the wheel and the bow and arrow. If you think otherwise, please state clearly what definition of the term “evolution” you are using.
Regarding space travel—I was talking about wetware humans.
Re: “Why do you think a superintelligence will have problems dealing with asteroids when humans today are researching ways to deflect them?”
...that is a projection on your part—not something I said.
Re: “Also, I think you fail to notice the huge first-mover advantage of any superintelligence”
To quote-mine myself:
“IMHO, it is indeed possible that the first AI will effectively take over the world. I.T. is an environment with dramatic first-mover advantages. It is often a winner-takes-all market – and AI seems likely to exhibit such effects in spades.”
http://www.overcomingbias.com/2008/05/roger-shank-ai.html
“Google was not the first search engine, Microsoft was not the first OS maker—and Diffie–Hellman didn’t invent public key crypto.
Being first does not necessarily make players uncatchable—and there’s a selection process at work in the meantime that weeds out certain classes of failures.”
http://lesswrong.com/lw/1mm/advice_for_ai_makers/1gkg
I have thought and written about this issue quite a bit—and my position is rather more nuanced and realistic than the position you attribute to me.