With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of, most likely, a cadaver brain, legally no different from displaying that brain on a computer, and it doesn't require any sort of FDA-style stringent functionality testing on animals. Not that such testing would help for a brain that is much bigger, with different neuron sizes, and with failure modes that are highly non-obvious in animals. Nor is such regulation even necessary: a scanned upload of a dead person, functional enough to recognize his family, is a definite improvement over being completely dead, and preventing it is like mercy-killing accident victims who have a good prospect of full recovery, to spare them the mere discomfort of being sick.
Gradual progress on humans is pretty much a certainty, if one drops the wide-eyed optimism bias. There are enough people who would bite the bullet, and it is not human experimentation; it is mere data processing. It might only become human experimentation, legally, decades after the first functional uploads.
There’s a consensus here that conscious computer programs have the same moral weight as people, so getting uploading moderately wrong in some directions is worse than getting it completely wrong.
There’s a consensus here that conscious computer programs have the same moral weight as people
No there isn't. I would have remembered something like that happening, what with all the disagreeing I would have been doing.
The mindspace of ‘conscious computer programs’ is unfathomably large and most of those programs are morally worthless. A “some” and/or “could” inserted in there could make the ‘consensus’ correct.
I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.
I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.
I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.
Yes, that ‘have’ vs ‘can have’ distinction changes everything—but most people are less picky with general claims than I.
I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.
Food for thought. I wouldn’t rule this out as a possibility and certainly the proportion of ‘morally relevant’ programs in this group skyrockets over the broader class. I’m not too sure what we are likely to create. How probable is it that we succeed at creating sentience but fail at FAI?
I think creating sentience is a much easier project than FAI, especially proven FAI. We’ve got plenty of examples of sentience.
Creating sentience which isn’t much like the human model seems very difficult—I’m not even sure what that would mean, with the possible exception of basing something on cephalopods. OK, maybe beehives are sentient, too. How about cities?
I think creating sentience is a much easier project than FAI, especially proven FAI. We’ve got plenty of examples of sentience.
This is why I was hesitant to fully agree with your prediction that any sentient programs created by humans have intrinsic moral weight. A sentient UFAI can have neutral or even negative moral weight (although this is a subjective valuation).
The main reason this outcome could be unlikely is that most of the reasons that a created GAI would fail to be an FAI would also obliterate the potential for sentience.
I didn’t think about the case of a sentient UFAI—I should think that self-defense would apply, though I suppose that self-defense becomes a complicated issue if you’re a hard-core utilitarian.
I didn’t think about the case of a sentient UFAI—I should think that self-defense would apply
The problem there is that the UFAI has to specifically value its own sentience. That is, it has to be at least a “Self Sentience Satisficer”. Creating an AI that stably values something so complex and reflective requires defeating most of the same challenges that creating an FAI requires. This ought to make it relatively improbable to create accidentally.
That also goes for valuing real-world paperclips, perhaps even more so. A World → ℝ function that gives the number of paperclips may be even trickier to specify than valuing one's own sentience.
For the AI to be dangerously effective it still needs to be able to optimize its own processes and have sufficient self-understanding. It also needs to understand that its actions are produced from its sensory input, in order to value that input correctly. You have to have a lot of reflectivity to be good at maximizing paperclips. The third alternative, neither friendly nor unfriendly, is an AI that solves formally defined problems. Hook it up to a simulator with a god's-eye view, give it the full specs of the simulator, define what a paperclip is, and it will maximize simulated paperclips there. I have the impression that people mistake this, which doesn't require solving any philosophical problems, for a real-world paperclip maximizer, which is much, much trickier.
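For concreteness, here is a minimal sketch (in Python, with hypothetical names) of that third alternative: the toy world, its dynamics, and the paperclip count are all handed to the optimizer as formal specifications, so the hard philosophical work is done before the search even starts.

```python
# Toy sketch of the "formally defined problem" case: the optimizer gets a
# complete simulator and an exact paperclip-counting function, so it never
# faces the hard problem of locating "paperclips" in the real world from
# sensory data. All names here are hypothetical.
from functools import reduce
from itertools import product


def step(state: int, action: str) -> int:
    """Fully specified toy world: the state is just the paperclip count."""
    if action == "build":
        return state + 1
    if action == "melt":
        return max(0, state - 1)
    return state  # "wait" changes nothing


def count_paperclips(state: int) -> int:
    """God's-eye-view objective: exact by construction, no perception needed."""
    return state


def best_plan(horizon: int) -> tuple:
    """Brute-force search over every action sequence up to `horizon`."""
    actions = ("build", "wait", "melt")
    return max(
        product(actions, repeat=horizon),
        key=lambda plan: count_paperclips(reduce(step, plan, 0)),
    )


print(best_plan(3))  # → ('build', 'build', 'build')
```

The contrast with a real-world maximizer is that nothing like the `count_paperclips` oracle is available there; the agent would have to infer the count from sensory data, which is exactly the tricky part.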
Legally, a mind upload differs from any other medical scan only in quantity, and a simulation of a brain is only quantitatively different from any other data processing, just as cryopreservation is only a form of burial.
Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science-fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.
edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or any other not-completely-stupid mammal. Consent matters.
This is an area that hasn’t been addressed by the law, for the very good reason that it isn’t close to being a problem yet. I don’t know whether people outside LW have been looking at the ethical status of uploads.
I agree with you that there’s no way to have uploading without making mistakes first. And possibly no way to have FAI without it having excellent simulations of people so that it can estimate what to do.
Some of the basic problems will presumably be (partially?) solved with animal research before uploading is tried with humans.
One of the challenges of uploading would be including not just the current record, but also the ability to learn and heal.
That also goes for valuing real-world paperclips, perhaps even more so. A World → ℝ function that gives the number of paperclips may be even trickier to specify than valuing one's own sentience.
I would bet against that. (For reasons including but not limited to the observation that sentience exists in the real world too.)
edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or any other not-completely-stupid mammal. Consent matters.
That’s a good point about consent.