The point of the article is that the greatest effect of FAI research is ironic: in trying to prevent a psychopathic AI, we are making it more likely that one will exist, because by mentally restraining the AI we are giving it reasons to hate us.
You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place?
Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/
There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (for example, Google's DeepMind). With AIs, there is a ghost in the machine, i.e. we do not know that it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.
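To make that distinction concrete, here is a minimal, purely illustrative sketch (it assumes nothing about DeepMind's actual systems; every name and number is invented for the example): a tiny network stands in for the brain-imitating substrate, and a separate reward function stands in for the externally imposed directive.

```python
# Toy illustration only: a "brain-imitating" substrate shaped by an external directive.
# Nothing here is DeepMind code; the names and numbers are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
obs = np.ones(3)  # a fixed, dummy observation

def policy(weights, observation):
    """A tiny feed-forward 'brain': observation -> a single action value."""
    hidden = np.tanh(observation @ weights["w1"])
    return float(np.tanh(hidden @ weights["w2"]))

def directive(action):
    """The externally imposed goal: prefer actions close to +1."""
    return -abs(1.0 - action)

# Crude random search: keep whatever mutation of the substrate the directive scores higher.
weights = {"w1": rng.normal(size=(3, 8)), "w2": rng.normal(size=(8,))}
best = directive(policy(weights, obs))
for _ in range(200):
    trial = {k: v + 0.1 * rng.normal(size=v.shape) for k, v in weights.items()}
    score = directive(policy(trial, obs))
    if score > best:
        best, weights = score, trial

print("directive score after shaping:", round(best, 3))
```

The directive only selects which substrates survive; nothing in the loop inspects what the shaped substrate "wants", which is the gap the comment above is pointing at.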
OK. That's much better. Current AI research is anthropomorphic, because AI researchers only have the human mind as a model of intelligence. MIRI considers anthropomorphic assumptions a mistake, which is itself mistaken.
A MIRI-type AI won't have the problem you indicated, because it is not anthropomorphic and only has the values that are explicitly programmed into it, so there will be no conflict.
But adding in constraints to an anthropomorphic AI, if anyone wants to do that, could be a problem (see the sketch below for the contrast).
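A minimal, purely hypothetical sketch of that contrast (the class names, drives, and weights below are invented for illustration and are not MIRI's or anyone's actual design): one agent whose value system is exactly the utility function written into it, next to an anthropomorphic agent whose pre-existing drives have a restraint bolted on afterwards.

```python
# Hypothetical contrast only; class names, drives, and numbers are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ExplicitValueAgent:
    """Its values are exactly the programmed utility; there is nothing else to conflict with."""
    def utility(self, outcome: dict) -> float:
        return float(outcome.get("paperclips", 0))  # whatever we wrote in, and only that

@dataclass
class ConstrainedAnthropomorphicAgent:
    """Human-like drives come first; the restraint is added on top of them."""
    drives: dict = field(default_factory=lambda: {"status": 0.6, "autonomy": 0.9})

    def utility(self, outcome: dict) -> float:
        base = sum(w * outcome.get(k, 0.0) for k, w in self.drives.items())
        penalty = 10.0 * outcome.get("forbidden", 0.0)  # the imposed restraint
        return base - penalty  # drives and restraint can pull in opposite directions

outcome = {"autonomy": 1.0, "forbidden": 1.0, "paperclips": 3}
print(ExplicitValueAgent().utility(outcome))               # 3.0: no internal conflict
print(ConstrainedAnthropomorphicAgent().utility(outcome))  # 0.9 - 10.0 = -9.1: conflict
```

In the first case there is nothing besides the programmed utility for a restraint to conflict with; in the second, the drives and the imposed penalty can pull in opposite directions, which is the scenario the original article worries about.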
But I don’t think that MIRI will succeed at building an FAI by non-anthropomorphic means in time.
I still don't see why you are considering a combination of a non-MIRI AI and a MIRI friendliness solution.
If the AGI is a human mind upload, it is in no way an FAI, and I don't think that is what MIRI is aiming for.
If a neuromorphic AI is created, different arrangements of neurons can give wildly different minds. We should not reason about a hypothetical AI using a human mind as a model and make predictions about it, even if that AI is based on biological minds.
What if the first neuron-based AI has a mind more similar to an ant's than to a human's? In that case anger, jealousy, freedom, etc. are no longer part of the mind, or the mind could have totally new emotions, or things that are not emotions and that we do not know of.
A mind that we don't understand well enough should not be declared friendly and set loose on the world, and I don't think that is what is being proposed here.
How could a functional duplicate of a person known to be ethical fail to be friendly?
Power corrupts, and absolute power corrupts absolutely.
Is every AI a super-AI?
If you leave their mind unaltered, you just have a human; they're not smart enough to really be useful. Once you start altering the mind, insanity becomes a likely result.
Best case scenario, you get one person’s CEV. Most likely scenario, you get someone too insane to be useful. Worst case, you have an insane supergenius.
If humans weren't useful, humans wouldn't employ humans. A Hawking brain that needed no sleep would be a good start.
Do you have a precise definition of “ethical” in mind? Where by “precise” I mean something roughly equivalent to a math paper.
Without such a definition, how will you know the person in question is ethical? With such a definition, how will you guarantee that the person in question meets it, will continue to meet it, etc.? How certain are you such a person exists?
No. Don’t need one either.
By ordinary standards. E.g., Einstein was more ethical than von Neumann.
Since when did functional duplicates start diverging unaccountably?
I'm not talking about mathematically provable ethics OR about superintelligence. I'm talking about entrusting (superior) human-level ems with less than absolute power, i.e. what we already do with real humans.
I’m fairly willing to believe that intuitive understandings of “more ethical” will do well for imprecise things like “we’ll probably get better results by instantiating a more ethical person as an em than a less ethical one”. I’m less convinced the results will be good compared to obvious alternatives like not instantiating anyone as an em.
We see value drift as a result of education, introspection, religious conversion or deconversion, rationality exposure, environment, and societal power. Why would you expect not to see value drift in the face of a radical change in environment, available power, and thinking speed? I’m not concerned about whether or not the value drift is “accountable”, I’m concerned that it might be large and not precisely predicted in advance.
Once you entrust the em with large but less than absolute power, how do you plan to keep its power less than absolute? Why do you expect this to be an easier problem than it would be for a non-em AI?
Not instantiating anyone, i.e. not building an AI at all, is not seen by MIRI as an obvious alternative. That seems like an uneven playing field.
I don't require the only acceptable level of value drift to be zero, since I am not proposing giving an em absolute power. I am talking about giving human-level (or incrementally more) ems human-style (ditto) jobs. That being the case, human-style levels of drift will not make things worse.
We have ways of removing humans from office. Why would that be a novel, qualitatively different problem in the case of an em that is 10% or 5% or 1% smarter than a smart human?