Or what if the ‘mountain people’ are utterly microscopic mites on a tiny ball hurtling through space. Oh, wait, that’s the reality.
sidenote: I doubt mind uploads scale all the way up, and it seems quite likely that amoral mind uploads would be unable to get along with their own copies, so I am not very worried about the first upload having any sort of edge. The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (and that only after massively difficult work to get the upload to be conscious at all rather than just going into a simulated seizure). From that you might progress to sane but stupefied uploads with a very significant IQ drop. Get a whiff of xenon to see what a small alteration to the electrical properties of neurons amounts to. It will take a lot of gradual improvement before there are well-working uploads, and even then I am pretty sure that nearly anyone attempting massive unsupervised self-improvement would just screw themselves into insanity rather than improve in any meaningful way. A sane person shouldn’t even attempt it: if your improvement is making things worse, the next improvement will make things worse still, and one needs external verification.
My initial reaction was shock that a heavier-than-air radioactive gas might go into someone’s lungs on purpose. It triggers a lot of my “scary danger” heuristics for gases. Googling turned up a bunch of fascinating stuff. Thanks for the surprise! For anyone else interested, educational content includes:
Since its 1951 report of use in humans, xenon has been viewed as the closest candidate for an ideal anesthetic gas for its superior hemodynamic stability and swift recovery period
Neuroprotective and neurotoxic properties of the ‘inert’ gas, xenon
First baby given xenon gas to prevent brain injury
Sulfur hexafluoride has a similar speed of sound (and hence a similar effect on the voice) but isn’t mind-altering.
Neat!
Heh. Well, it’s not radioactive; radon is. Xenon is inert, but it dissolves in cell membranes, changing their electrical properties.
Some of the basic problems will presumably be (partially?) solved with animal research before uploading is tried with humans.
One of the challenges of uploading would be capturing not just a static record of the brain, but also its ability to learn and heal.
With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan, most likely of a cadaver brain. Legally it is no different from displaying that brain on a computer, and it doesn’t require FDA-style stringent functionality testing on animals. Not that such testing would help much for a brain that is far bigger, has different neuron sizes, and has failure modes that are highly non-obvious in animals. Nor is such regulation even necessary: a scanned upload of a dead person, functional enough to recognize his family, is a definite improvement over being completely dead, and preventing it would be like mercy-killing accident victims who have a good prospect of full recovery in order to spare them the mere discomfort of being sick.
Gradual progress on humans is pretty much a certainty, if one drops the wide-eyed optimism bias. There are enough people who would bite the bullet, and it is not human experimentation but mere data processing; it might only come to count as human experimentation decades after functional uploads exist.
There’s a consensus here that conscious computer programs have the same moral weight as people, so getting uploading moderately wrong in some directions is worse than getting it completely wrong.
No there isn’t. I would have remembered something like that happening, what with all the disagreeing I would have been doing.
The mindspace of ‘conscious computer programs’ is unfathomably large and most of those programs are morally worthless. A “some” and/or “could” inserted in there could make the ‘consensus’ correct.
I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.
I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.
Yes, that ‘have’ vs ‘can have’ distinction changes everything—but most people are less picky with general claims than I.
Food for thought. I wouldn’t rule this out as a possibility and certainly the proportion of ‘morally relevant’ programs in this group skyrockets over the broader class. I’m not too sure what we are likely to create. How probable is it that we succeed at creating sentience but fail at FAI?
I think creating sentience is a much easier project than FAI, especially proven FAI. We’ve got plenty of examples of sentience.
Creating sentience which isn’t much like the human model seems very difficult—I’m not even sure what that would mean, with the possible exception of basing something on cephalopods. OK, maybe beehives are sentient, too. How about cities?
This is why I was hesitant to fully agree with your prediction that any sentient programs created by humans have intrinsic moral weight. Sentient uFAI can have neutral or even negative moral weight (although this is a subjective value).
The main reason this outcome could be unlikely is that most of the reasons that a created GAI would fail to be an FAI would also obliterate the potential for sentience.
I didn’t think about the case of a sentient UFAI—I should think that self-defense would apply, though I suppose that self-defense becomes a complicated issue if you’re a hard-core utilitarian.
The problem there is that the UFAI has to specifically value its own sentience. That is, it has to be at least a “Self Sentience Satisficer”. Creating an AI that stably values something so complex and reflective requires overcoming most of the same challenges as creating an FAI. That ought to make it relatively improbable to create one accidentally.
That also goes for valuing real-world paperclips, perhaps even more so. A function from world-states to the reals that gives the number of paperclips may be even trickier to specify than valuing one’s own sentience.
I would bet against that. (For reasons including but not limited to the observation that sentience exists in the real world too.)
For the AI to be dangerously effective it still needs to be able to optimize its own processes and have sufficient self-understanding. It also needs to understand that the actions it takes are computed from its sensory input, in order to value that sensory input correctly. You have to have a lot of reflectivity to be good at maximizing paperclips. The third alternative, neither friendly nor unfriendly, is an AI that solves formally defined problems: hook it up to a simulator with a god’s-eye view, give it the full specs of the simulator, define what counts as a paperclip, and it will maximize simulated paperclips there. I have the impression that people mistake this, which doesn’t require solving any philosophical problems, for a real-world paperclip maximizer, which is much, much trickier.
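To make that contrast concrete, here is a minimal sketch of the “formally defined problem” case. Everything in it is invented for illustration (the toy gridworld, the `step` transition function, the `count_paperclips` objective, the brute-force `best_plan` search): the optimizer is simply handed a complete specification of the simulator plus a formal definition of “paperclip”, and maximizing the simulated count requires no philosophical work at all.

```python
# Toy illustration (all names and the gridworld itself are hypothetical): an
# optimizer for a *formally defined* problem. It gets a god's-eye view of a
# fully specified simulator and a formal definition of "paperclip" (a cell
# containing 'P'), and it searches for the action sequence that maximizes the
# simulated paperclip count. Nothing here touches real-world paperclips.

from itertools import product

ACTIONS = ["left", "right", "make"]  # "make" turns wire ('W') into a paperclip ('P')


def step(state, action):
    """Fully known transition function of the toy world."""
    grid, pos = state
    grid = list(grid)
    if action == "left":
        pos = max(0, pos - 1)
    elif action == "right":
        pos = min(len(grid) - 1, pos + 1)
    elif action == "make" and grid[pos] == "W":
        grid[pos] = "P"
    return tuple(grid), pos


def count_paperclips(state):
    """The formal objective: how many cells contain a paperclip."""
    grid, _ = state
    return grid.count("P")


def best_plan(state, horizon):
    """Brute-force search over every action sequence up to the horizon."""
    best_score, best_actions = -1, ()
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = step(s, a)
        score = count_paperclips(s)
        if score > best_score:
            best_score, best_actions = score, plan
    return best_actions, best_score


if __name__ == "__main__":
    start = (("W", "_", "W", "W"), 0)  # wire at cells 0, 2, 3; agent at cell 0
    plan, score = best_plan(start, horizon=6)
    print(plan, score)  # a plan that converts all reachable wire into paperclips
```

The real-world maximizer is much harder precisely because none of these ingredients, not the transition function, not the ontology, not the counting function, is handed to the agent; it would have to construct them from its own sensory input.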
Legally, a mind upload differs from any other medical scan only in quantity, and a simulation of a brain is only quantitatively different from any other processing of such a scan. Just as cryopreservation is legally only a form of burial.
Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren’t writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.
edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or any other not-completely-stupid mammal. Consent matters.
This is an area that hasn’t been addressed by the law, for the very good reason that it isn’t close to being a problem yet. I don’t know whether people outside LW have been looking at the ethical status of uploads.
I agree with you that there’s no way to have uploading without making mistakes first. And possibly no way to have FAI without it having excellent simulations of people so that it can estimate what to do.
That’s a good point about consent.
Well, yes, I am aware that my scenario is not literally descriptive of the world right now. The purpose is to give an intuitive understanding of why the economic reality of a society with strong upload technology would encourage destroying the carbon originals of people who have been uploaded.
I am not worried either. Nothing I said assumes a first-mover advantage or hard takeoff from the first mind upload. I’m describing society after upload technology has matured.
I’m certainly not assuming uploads will be self-improving, so it seems you are pretty comprehensively misunderstanding my point. I do assume uploads will become faster, due to hardware improvements. After some time, the ease and low cost of copying uploads will likely make them far more numerous than physical humans, and their economic advantages (being able to do orders of magnitude more work per year than physical humans) will drive wages far below human subsistence standards (even if the wages allow a great lifestyle for the uploads).
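For what it’s worth, here is the back-of-the-envelope version of that wage argument. Every number is a placeholder made up to show the mechanism, not an estimate:

```python
# Illustrative arithmetic only; all figures below are made-up assumptions.
hardware_cost_per_copy_year = 2_000.0  # $/year to run one upload copy (assumed)
speedup = 100                          # human-equivalent work-years per copy-year (assumed)
human_subsistence_wage = 15_000.0      # $/year a biological human needs (assumed)

# With cheap copying, copies keep being added until the wage per
# human-equivalent work-year is competed down toward the marginal cost
# of running one more copy.
competitive_wage = hardware_cost_per_copy_year / speedup

print(f"wage floor per human-equivalent work-year: ${competitive_wage:,.0f}")
print(f"as a fraction of human subsistence: {competitive_wage / human_subsistence_wage:.2%}")
# -> $20 per human-equivalent work-year, about 0.13% of subsistence:
#    fine for the copy (it covers its own hardware), unlivable for a biological worker.
```

The exact figures don’t matter; the point is that the equilibrium wage tracks the copy’s running cost rather than a biological human’s cost of living.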
That was more a note on Dr_Manhattan’s comment.
With regard to ‘economic advantage’: the advantage has to outgrow overall economic growth for the condition of the carbon originals to decline. Also, you may want to read Accelerando by Charles Stross.
There is no reason why this would be true. The economy can grow enormously while per-capita income and standard of living fall. This has happened before: the global economy and population grew enormously after the transition to agriculture, but living standards probably fell, and farmers were shorter, poorer, harder-working and more malnourished than their forager ancestors. It is not inevitable (or even very likely, IMO) that the economy will perpetually outgrow population.
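A tiny numerical illustration of that point, with arbitrary growth factors: if total output multiplies tenfold while the population of workers (humans plus copies) multiplies fiftyfold, the economy has grown enormously and per-capita income has still fallen by 80%.

```python
# Arbitrary illustrative numbers: a booming economy with falling per-capita income.
output_growth = 10.0      # total economic output multiplies 10x (assumed)
population_growth = 50.0  # population of workers multiplies 50x (assumed)

per_capita_change = output_growth / population_growth
print(f"per-capita income becomes {per_capita_change:.0%} of its old value")  # 20%
```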
I read it years ago, and wasn’t impressed. Why is that relevant?