#1: By a similar logic, you should be happy to feed your child gunk that you made with a chemistry set because you seem to have more control over the gunk. Degrees of freedom represent choices that have to be made correctly. In biology, nearly all choices are made for you, and it’s still hard to raise a child well.
You could create a person that led a better life than a human, but you would have to know how, and that would require more knowledge, and raise more difficult ethical issues, than FAI itself.
As for recovery after the fact, that gives you a whole new set of ethical issues: what if your baby schizophrenic doesn’t want to be changed? A non-person AI, you can just alter; altering a person is… healing? Or mind-rape? And what makes you think an ethical person will or should consent to that rape? A non-person AI, you can ethically build with a shutdown switch, a recovery CD, etc.
#2: Yes, but I’d rather just not have to bother tacking on extra ends-in-themselves for dutiful reasons of ethical obligation. Let it just be a means. I’m not trying to have a baby, here! I’m trying to save the world!
#3-4: Humans are not stuck being humans—nor are chimps stuck being chimps, come the day.
It still shocks me that people read about my Friendly AI work and assume I want humans to stick around in their present form running around on two legs until the end of time, while excluding any more advanced forms of people—that the point of FAI is to keep them dern superminds under control. It shocks me that they assume the only way you get more advanced forms of people is to create powerful minds ab initio, and that the humans are just stuck the way they are. I grew up with a different concept of “growing up”, I guess.
However, this business of intelligence growing up is very deep, and very complicated, and if you build your own superintelligence that is an actual person, you have preempted the entire thing ab initio and possibly screwed it up! Nor can you just say “Oops” and correct it if your newborn baby doesn’t think it is ethically right to be mind-raped by a chimpanzee.
It seems more like the kind of decision that should (1) draw on more mindpower than one programming team’s naked intellect, say via a CEV (which is not itself a person, or there’s no point to the recursion!) or via human-born minds that have increased in intelligence via CEV, and (2) be made by humanity as some kind of whole.