Eliezer: I see what you’re saying, but I don’t agree.
Compare creating a natural child with #1. True, you have 3 billion years of evolution helping you pick a design—but what if the child doesn’t like the design? (Most don’t, it seems.) In creating a child the old-fashioned way, more choices are compelled, there’s less chance for after-the-fact recovery, and there are just plain more degrees of freedom flapping around loose. I know this is a bit of a tu quoque, but it establishes a moral ordering—building a Friendly Person would be LESS immoral.
As to #2, I disagree that being a means cheapens a person—most children are means to their parents’ enjoyment. Few people conceive from grim duty! Means and ends are orthogonal, and what matters for human rights is the “ends” scale, ignoring the “means” scale.
#3 and #4 ask me to be human-centric. Humans may enjoy not being preempted or made to look small, but would a moral observer of the !xyzpkyf species, watching from his UFO, see a moral upside or an opportunity missed? It would be as if a chimp had created a nonperson human to invent cities and moon-shots for them without preempting their chimp nature. Humans are better! I can’t avoid it, even though it sounds Nazi. Humans are genuinely worth more than chimps. Would not Friendly Person AIs be worth more than humans?
Finally, I’m not sure if it isn’t immoral in itself to create a mind that isn’t a person—and I acknowledge that my lack of surety mostly rests on a missing definition (and missing examples!) of “person” as distinct from “really damn smart optimization process that understands us and talks back”. Is personhood really orthogonal to smarts? You obviously aren’t thinking of zombies, so what does a non-person FAI look like?