There are a number of rat and mouse studies that involve sticking electrodes into the pleasure center and gaining complete control over the animal. The researchers never mention any rat or mouse heroically defying the stimulus through sheer force of will, which suggests very bad things for any humans so situated.
Perhaps the sheer-force-of-will meters were malfunctioning in these experiments.
More seriously, let's create a series of thought experiments, all involving actions by a “Friendly” AI. (FAI. Those were scare quotes. I won’t use them again. You have been warned!) In each case, the question is whether the FAI behavior described is prima facie evidence that the FAI has been misprogrammed.
Thought experiment #1: The FAI has been instructed to respect the autonomy of the human will, but also to try to prevent humans from hurting themselves. Therefore, in cases where humans have threatened suicide, the FAI offers the alternative of becoming a Niven wirehead. No tasping; it is strictly voluntary.
Thought experiment #2: The FAI makes the wirehead option available to all of mankind. It also makes available effective, but somewhat unpleasant, addiction treatment programs for those who have tried the wire, but now wish to quit.
Thought experiment #3: The request for addiction treatment is irrevocable: once treated, humans do not have the option of becoming rewired.
Thought experiment #4: Practicing wireheads are prohibited from contributing genetically to the future human population. At least part of the motivation of the FAI in the whole wirehead policy is eugenic. The FAI wishes to make happiness more self-actualized in human nature, and less dependent on the FAI and its supplied technologies.
Thought experiment #5: This eugenic intervention conflicts with various other possible eugenic interventions the FAI is contemplating. In particular, the goal of making mankind more rational seems to be in irreconcilable conflict with the goal of making mankind more happiness-self-actualized. The FAI consults the fine print of its programming and decides in favor of self-actualized happiness and against rationality.
Please, carry on with the scare quotes. Or maybe don’t use a capital F.
Apparently: “Friendly Artificial Intelligence” is a term that was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a term of art distinct from the everyday meaning of the word “friendly”. However, nobody seems to be terribly clear about exactly what it means. If you were hoping to pin that down using a consensus, it looks as though you may be out of luck.
As an aside, I wonder how Eliezer’s FAI is going to decide whether to use eugenics. Using the equivalent of worldwide vote doesn’t look like a good idea to me.
How about purely voluntary choice of ‘designer babies’ for your own reproduction, within guidelines set by worldwide vote? Does that sound any more like a good idea? Frankly, it doesn’t seem all that scary to me, at least not as compared with other directions that the FAI might want to take us.
I agree that eugenics is far from the scariest thing FAI could do.
Not sure about designer babies, I don’t have any gut reaction to the issue, and a serious elicitation effort will likely cause me to just make stuff up.