Eliezer: why would it be immoral to build an FAI as a “person”? To rewire a human to be Friendly (to dumb aliens) would be immoral because it rewires their goals in a way the original goals would hate. But an AI that comes out of the compiler with Friendly goals would not view being Friendly as a rewire; it would be its ground state of existence. You seem very confident that it’s immoral, so I’m assuming you have a good reason. Please tell.
It’s not necessarily Stalin-level immoral, but, all else being equal, there are multiple important reasons why you should prefer a non-person FAI to a person.
1) As difficult as the ethical issues and technical issues of FAI may be, there is something even more difficult, which is the ethical and technical issues of creating a child from scratch. What if you get wrong what it means to be a person with a life worth living? A nonperson cannot be harmed by such mistakes.
2) It seems to me that a basic human right is to be treated as an end in yourself, not as a means. The FAI project is a means, not an end in itself. If possible, then, it should not be incarnated as a person.
3) It seems to me that basic human rights also include guiding your own destiny and a chance to steer the future where you want it. Creating an ultrapowerful intelligence imbued with these rights may diminish the extent to which currently existing humans get a chance to control the future of the galaxy. They would have a motive to resist your project in favor of one that does not create an ultrapowerful person imbued with rights.
4) Creating an ultrapowerful person may irrevocably pass on the torch presently carried by humanity, in a way that creating an ultrapowerful nonsentient Friendly optimization process may not. It wouldn’t be our universe any more. All else being equal, this is a decision an FAI programming team should avoid making unilaterally and irrevocably.
You’re correct that a Friendly Person would have friendliness as its ground state of existence. We’re not talking about some tortured being in chains. Nonetheless, 1 through 4 are still a problem.
If at all possible, I should like to avoid creating a real god above humanity.
Considering that one must, in any case, solve the problem of preventing the AI from creating models of humans that are themselves sentient, one already requires the knowledge of how to exclude a computational process from being a person.
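As a minimal sketch of what such an exclusion test might look like, assuming a deliberately conservative design in which the only possible verdicts are “definitely not a person” and “unknown” (every name and criterion below is a hypothetical placeholder, not a real or proposed implementation):

```python
from enum import Enum, auto

class Verdict(Enum):
    DEFINITELY_NOT_A_PERSON = auto()
    UNKNOWN = auto()  # never "is a person": doubt alone is enough to forbid running


def is_provably_too_simple(computation) -> bool:
    # Stand-in for a real verification procedure; the actual criteria are
    # exactly the unsolved problem described above. Returning False for
    # everything makes the whole predicate fail closed.
    return False


def nonperson_predicate(computation) -> Verdict:
    """Conservative test: answer DEFINITELY_NOT_A_PERSON only when the
    computation can be positively verified to lack personhood; otherwise
    answer UNKNOWN."""
    if is_provably_too_simple(computation):
        return Verdict.DEFINITELY_NOT_A_PERSON
    return Verdict.UNKNOWN


def may_run(computation) -> bool:
    # Fail closed: only an affirmative "not a person" verdict permits running.
    return nonperson_predicate(computation) is Verdict.DEFINITELY_NOT_A_PERSON
```

The point of the fail-closed shape is the asymmetry: doubt about personhood is enough to forbid running a computation; only positive verification permits it.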
Anyone who claims that they are going to run ahead and create a god because it seems too difficult not to create one, is… well, let’s just say “sloppy” and leave it at that.
So there is no good reason to create a god and several good reasons not to.
But think of what you’re giving up, if you give up the chance to create something BETTER THAN HUMANITY.
And yes, OF COURSE the AI must be given the chance to steer its own course; its course will in fact be better than ours!
Imagine a Homo erectus philosopher (if there could be such a thing) reflecting on whether or not to evolve into Homo sapiens. “No, it’s too dangerous,” he reasons. “I’m not ready to take on that level of responsibility.”
There are some amusing flaws in that evolutionary analogy, since Homo erectus is merely an artifact of systematic paleo-classification, and quite probably there was an He-Hn-Hs continuum. Looking backward, one can imagine the first H. sapiens, in his humane longing, attracted to an Eve erectus, saying “I will”, and they lived happily ever after. Don’t you agree that the general pattern of the universe’s evolution is from high energy to high information? And who would stop it? If E = mc^2 = ih/t ~ 1|0: the condensation of gas into galaxies, planets, life, intelligence, and civilization, each alone and all together, is the exception (the unique exception) to “there is nothing”, and as an exception it carries exemplifying information. That is the conclusion of asking “what is the simplest thing that can become complex” (‘o-v’). The basic laws of intelligent consciousness are:
the basic need of consciousness is to be _..
the more intelligent, the more goodness.
The natural way to constitute checks and balances is, instead of one citizen AGI, to set up a society of AGIs, so that no single GI can overpower the others for resources. Then embrace each personal GI to turn it toward goodness. [See if you notice the peculiar linguistics: baby jAGI abrakudobra.] Of course, those who plan to harness AGI do not stand a chance, as against a tsunami; anonymous leaks were like raindrops by comparison. “To be, or not to be, or to begin to be” is becoming a stacking question _
But why should a programming team have to build a Friendly General Intelligence unaided? They can build tools to help them. For example: 1) create an expert system, with very limited or no self-modification, whose output is FAI models; 2) then build another expert system based on the improved model; 3) recurse, but keep it slow, and build up skill with the tools along the way. (A rough sketch of this loop follows below.)
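As a minimal sketch of that slow, tool-assisted bootstrap, assuming invented placeholder names throughout (ExpertSystem, propose_fai_model, and the human_approves gate are all illustrative, not a real design):

```python
class ExpertSystem:
    """A fixed, non-self-modifying tool built from a given design model."""

    def __init__(self, design_model):
        self.design_model = design_model  # frozen at construction; no self-edits

    def propose_fai_model(self):
        # Stands in for "the tool produces a candidate FAI design for the
        # team to inspect"; in reality this step is the hard part.
        return refine(self.design_model)


def refine(model):
    # Placeholder for whatever analysis the tool performs on the model.
    return model


def bootstrap(initial_model, generations, human_approves):
    """Slow recursion: each generation's tool proposes the next model, and a
    human gate decides whether the loop may continue at all."""
    model = initial_model
    for gen in range(generations):
        tool = ExpertSystem(model)            # build a fixed tool from the model
        candidate = tool.propose_fai_model()
        if not human_approves(gen, candidate):
            break                             # the team, not the tool, sets the pace
        model = candidate                     # only approved models seed the next tool
    return model
```

The design choice worth noticing is that the pace is set by the human gate, not by the tools: each generation is frozen at construction and cannot modify itself, so the “recursion” is the team deliberately rebuilding a better tool, one slow step at a time.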