That is, people say “the measure doesn’t let us do X in this way!”, and they’re right. I then point out a way in which X can be done, but people don’t seem to be satisfied with that.
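For concreteness, a minimal sketch of what “the measure” might look like, assuming it is an attainable-utility-style penalty (an assumption on my part; the thread doesn’t restate it): the agent is penalized according to how much an action shifts its ability to achieve a set of auxiliary goals, relative to doing nothing:

$$\text{Penalty}(s, a) = \sum_{i=1}^{n} \left| Q_{u_i}(s, a) - Q_{u_i}(s, \varnothing) \right|$$

where $Q_{u_i}$ is the action-value function for auxiliary utility $u_i$ and $\varnothing$ is a no-op action; the symbols here are illustrative. Under such a penalty, “doing X in this way” gets blocked when it swings the attainable utilities, even if some lower-impact route to X remains available.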
Going back to this: how do you propose the species-creating goal be accomplished? Say, imposing the constraints that the species has to be basically just human (because we like humans) and you don’t get to program their DNA in advance? My guess at your answer is “create a sub-agent that reliably just does the stern talking-to in the way the original agent would”, but I’m not certain.
My real answer: we probably shouldn’t? Creating sentient life that has even slightly different morals seems like a very morally precarious thing to do without significant thought. (See the cheese post; can’t find it.)
and you don’t get to program their DNA in advance?
Uh, why not?
Make humans that will predictably end up deciding not to colonize the galaxy or build superintelligences.
Creating sentient life that has even slightly different morals seems like a very morally precarious thing to do without significant thought.
I guess I’m more comfortable with procreation than you are :)
I imposed the “you don’t get to program their DNA in advance” constraint since it seems plausible to me that if you want to create a new colony of actual humans, you don’t have sufficient degrees of freedom to make them both actually human-like and docile enough.
You could imagine a similar task of “build a rather powerful AI system that is transparent and can be monitored”, where ongoing supervision might be required, but that’s not an onerous burden.