What if the future is hellish and I won’t be able to die? (Current objection)
I realize there are lots of interesting technologies coming our way, but there are a lot of problems, too. I don’t know which will win. Will it be environmental collapse or green technology? FAI or the political/other issues created by AI? Will we have a world full of wonders or grey goo? Space colonies or alien invasions? As our power to solve problems grows, so does our ability to destroy everything we know. I do not believe in the future any more than I believe in heaven. I recognize it as a potential utopia / dystopia / neither. I do not assume that the ability to revive preserved people would make us utopia-creating demigods any more than our current abilities to do CPR or fly make our world carefree.
A new twist to waking up in such a world would be that I might not be able to die. The horrors I could experience in a technologically advanced dystopia might be much worse than the ones we have now: dictators with uFAI armies, mind-control brain implants, massive environmental and/or technological catastrophes.
There is one thing worse than dying, and that’s living an unnaturally long time in a hellish existence. If I sign up for cryo, I’ll be taking a risk with that, too.
From this post (which is a great source of insight on many particular cryonics objections):

(5) You somehow know that a singularity-causing intelligence explosion will occur tomorrow. You also know that the building you are currently in is on fire. You pull an alarm and observe everyone else safely leaving the building. You realize that if you don't leave you will fall unconscious, painlessly die, and have your brain incinerated. Do you leave the building?

Answering yes to (5) means you probably shouldn't abstain from cryonics because you fear being revived and then tortured.
(This comment, including copying over text and links, was composed entirely without the mouse due to the Pentadactyl Firefox extension.)
That scenario is full of fail in terms of helping someone weigh the issue in an ecologically valid way. Answers to the trolley problem empirically hinge on all kinds of consequentially irrelevant details, like whether you have to physically push the person to be sacrificed. The details that matter are hints about your true rejection, and handling them in a sloppy way is less like grounded wisdom and more like a high-pressure sales tactic.
In this case, for example, “leaving the building” stands in for signing up for cryonics, and “everyone else safely leaving the building” is the reason your unconscious body won’t be dragged out to safety… but that means you’d be doing a socially weird thing to not do the action that functions as a proxy for signing up for cryonics, which is the reverse of the actual state of affairs in the real world.
A more accurate scenario might be that your local witch doctor has diagnosed you with a theoretically curable degenerative disorder that will kill you in a few months, but you live in a shanty town by the sea where the cure is not available. The cure is probably available, but only in a distant and seemingly benevolent country across the sea where you don’t speak the language or understand the economy or politics very well. The sea has really strong currents and you could float downstream to the distant civilization, but they can’t come to you. You have heard from some people that you have a claim on something called “government assistance checks” in that far nation that will be given initially to whoever is taking care of you and helping you settle in while you are still sick.
You will almost certainly be made a ward of some entity or other while there, but you don't understand the details. It could be a complex organization or a person. There might be some time spent waiting for details of the cure to be worked out, and there is a possibility that you could be completely cured, but this might cost an unknown amount of extra money. It's a real decision that reasonable people could go different ways on depending on their informed preferences, but the details and decisions will be made by whoever your benefactor ends up being.
That benefactor might have incentives to leave you the equivalent of “a hospital bed in the attic” for decades with lingering pain and some children’s books and audio tapes from the 1980′s for entertainment, pocketing some of the assistance checks for personal use, with your actual continued consciousness functioning as the basis of their moral claim to the assistance checks, and your continued ignorance being the basis of their claim to control the checks.
If you get bored/unhappy with your situation, especially over time, they might forcibly inject you with heroin or periodically erase your memory as a palliative. This is certainly within their means and is somewhat consistent with some of the professed values of some people who plan to take the raft trip themselves at some point, so there might actually be political cover for this to happen even if you don’t want that. Given the drugs and control over your information sources, they might trick you into nominally agreeing to the treatments.
You don't get to pick your benefactor in advance, you don't know the details of the actual options well enough to do the cost/benefit analysis yourself in advance, and you don't know what kind of larger political institutions will exist to oversee their decision making. You'd have to build your own raft and show up on their shores as a sort of refugee. Your family is aware of roughly the same things as you, and they could use the raft-making materials to build part of a new shack for your sister, or perhaps a new outhouse for the family. Do you get on the raft and rely on the kindness of strangers, or accept your fate and prepare to die in a way that leaves less painful memories for your loved ones than the average death?
Also, human nature being what it is, if you talk about it too much but then decide to not build the raft and make the attempt, then your family may feel worse than average about your death, because there will be lingering guilt over the possibility that they shamed or frightened you into sacrificing your chances at survival so that they could have a new outhouse instead. And knowing all of these additional local/emotional issues as well as you do, they might resent the subject being brought up in a way that destabilizes the status quo “common knowledge” of how family resources will be allocated. And your cousin got sick from an overflowing outhouse last year, so even though it sounds banal, the outhouse is a real thing that really verifiably matters.
That is an awesome metaphor :)
Each of the questions in that post was meant to address one argument against cryo. The argument ‘hardly anyone I know will be alive when I’m revived’ is addressed by the second question. (Which I would answer “I don’t know, I’d have to think about it”, BTW.)
[realizes he has been rationalizing] Oh...
Here's the reason I don't find this very scary. As a frozen person, you have very little of value to offer people, and will probably consume some resources. Thus, if someone wants to bring you back, it will most likely be for your benefit, rather than because they want to enslave you or something. If the universe just has people who don't care about you, then they simply won't revive you, and it will be the same as if you had died.
In order for you to be revived in a hellish world, the people who brought you back have to be actively malicious, which doesn’t seem very likely to me.
What do you think?
Many among us will spend the better part of a million dollars to preserve the life of children born so deformed and disabled that they actually will spend a significant amount of their lives in pain and the rest of it not being able to do much of what gives the rest of us pleasure or status. You don’t have to be actively malicious to think that life at any cost is a Good Thing (tm).
There's also the theoretical possibility that the world you are revived into is perceived as a good one by the people born into it, but is too hard to adjust to for a very old person from a very different world. I doubt the majority of slaves would have preferred death to the lives they had, but someone who had lived 80 years in freedom with the best the 21st century could offer in terms of material comforts might not be as blasé about a very different status quo in the future.
If the people reviving you are not malicious, then you would expect to have the option of dying again, unless they don't believe you when you tell them your life sucks too much.
Also, the psychology of happiness seems to suggest that people adjust pretty well to big life changes.
Unless you are defining malicious to mean "lets me kill myself if I want to," being revived into a society with laws and values similar to those of the current U.S. would certainly make it illegal for you to kill yourself. Most of us realize we could do it anyway if we wanted, but a society that can revive you probably has more effective means of enforcing prohibitions. Even now, we already have "chemical castration" for some sex criminals.
Okay, that’s a good point. (I assume you meant “defining ‘not malicious’ to mean ‘lets me kill myself...’”)
They might also be high-functioning but insane, given the very many ways that tech capable of mucking around with physical human brains to the degree of successfully reanimating cryonics patients can go wrong: the original imperative to revive cryonics patients intact, the ability to do so also somehow intact, but things very, very wrong otherwise.
I think "you might wake up in hell" is actually one of the better arguments for opting out of cryonics, since some of the tech you need to revive cryonics patients is also tech you could use to build inescapable virtual hells.
Although the hellish-world scenario seems unlikely, it might be important to consider. At least according to my own values, things like being confined to children's books and being injected with heroin would contribute very little negative utility (if negative at all) compared to even a 1-in-1000 chance of enduring the worst psychologically possible torture for, say, a billion years.
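To make that comparison concrete, here is a minimal expected-utility sketch. All of the utility numbers and durations below are illustrative assumptions invented for the arithmetic (only the 1-in-1000 probability and the "billion years" figure come from the comment above):

```python
# Hypothetical, illustrative numbers only: a crude expected-utility comparison
# between a merely unpleasant revival and a tiny chance of an astronomically bad one.

u_boredom_per_year = -1.0          # assumed disutility of the "children's books" scenario
years_of_boredom = 50              # assumed duration of that scenario

u_torture_per_year = -1_000_000.0  # assumed disutility of worst-case torture
years_of_torture = 1_000_000_000   # "a billion years", as in the comment
p_torture = 1 / 1000               # the 1-in-1000 chance mentioned above

ev_boredom = u_boredom_per_year * years_of_boredom
ev_torture = p_torture * u_torture_per_year * years_of_torture

print(f"expected disutility of the boredom scenario: {ev_boredom:.2e}")
print(f"expected disutility of the torture tail risk: {ev_torture:.2e}")
# Even at 1-in-1000, the torture term dominates by many orders of magnitude,
# which is the point the comment is making.
```

Under these assumed numbers the tail risk dominates by roughly ten orders of magnitude; the qualitative conclusion is insensitive to the exact (made-up) utilities as long as the torture scenario is vastly worse and not vastly less probable.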
Ok, the cost-benefit ratio between reviving someone and profiting off of their slavery might be worth considering. I'm not sure how many resources it would take to revive me, or whether it would be safe to assume that my brain's abilities (or whatever was valued) would not outweigh the resources required to revive me, but now that I think of it, that assumption seems likely to hold, especially considering that all my skills would be out of date and they'd probably have eugenics or intelligence enhancers by then which would outdo my brain.
Also, the people who enslaved me would not have to be the same ones as the people who revive me. They would not be subject to the cost-benefit ratio. The people who revive me could be well-meaning, but if the world has gone to hell, there might be nothing they can do about bad entities doing horrible things.
The reviver may only revive me because they're required to, because the company storing me has a legal agreement and can be prosecuted if they don't. The timing of my revival may be totally arbitrary in the grand scheme of things. It might have more to do with the limit on how long a person can stay in cryo (whether a physical limit, or my account running out of money with which to stay frozen, or some legal deadline by which they're forced to honor my contract) than with the state of the world at that time.
I don’t assume that there would be a benevolent person waiting for me. There’s just too much time between here and there and you never know what is going to happen. Maybe none of my friends sign up for cryo. Maybe there’s only a 1 in 10 chance of successful revival and I’m the only one of my group who makes it.
So, I’m not convinced that the world will not have gone to hell or that I’ll be revived by friends, but I think slavery is less likely.
Consider that you might reach such a future in your natural lifespan, without cryonics. Does this cause you to spend resources on maintaining a suicide button that would ensure information-theoretical erasure of yourself, so no sudden UFAI foom could get hold of you? If not, what is the difference?
It’s not quite information-theoretical, but does a snub nose .357 count? I carry because statistically the safest thing to do as the attempted victim of a violent crime is to resist using a firearm.
[EDITED to add: oops, I completely misinterpreted what Decius wrote. What follows is therefore approximately 100% irrelevant. I’ll leave it there, though, because I don’t believe in trying to erase one’s errors from history :-). Also: I fixed a small typo.]
Assuming this isn’t a statistical joke like the one about always taking a bomb with you when you fly (because it’s very unlikely that there’ll be two bombs on a single plane) … do you have reason to think that having-but-deliberately-not-using the firearm actually causes this alleged improved safety?
It seems like there are some very obvious ways in which that association could exist without the causal link: e.g., people are more likely to be able to resist when the danger is less, and people who are concerned enough about their safety to carry for that reason but sensible enough not to shoot are also more likely to take other measures that improve their safety, etc.
Who said anything about not using? I have never seen statistics regarding outcomes of victims of violent crime having a firearm but never drawing it.
There could be other confounding factors as well, like underreporting by people who are mugged, cooperate, and experience no injury; or a tendency among people who carry legally to know how to use their weapons better than criminals and typical people; or difficulty determining whether a dead victim resisted or not. But the statistics aren’t even remotely vague: Among reported victims of violent crime, a larger percentage of those who cooperated with the criminal died than those who resisted the crime using a firearm.
Not that something already known would be able to prevent a post-singularity hostile AI from accomplishing the goals it has, much less a firearm that has about as long an effective range when fired as when performing a lunging swing.
D’oh. I completely misinterpreted what you wrote: “to resist-using a firearm”, rather than “to resist, using a firearm”.
Sorry, my original phrasing is ambiguous to someone who doesn't already know what I'm saying.
Interesting. Do you have a source on that?
Kleck, G. Point Blank: Guns and Violence in America. New York, NY: Aldine de Gruyter, 1991.
Tangent: Do you have a link to a study that backs this up? I’m very interested in it. EDIT: Arg, serves me right for not reading more downthread.
Read this hypothetical objection:

Some optimistic future scenarios speculate that we might be able to revive even those who don't cryopreserve (current cloning techniques on preserved remnants can recreate genetic phenotypes; some sort of simulation on records of your behaviour might be able to recreate your behavioural phenotype, and so on for every part that makes up you). That applies to the pessimistic future scenarios too: if you don't sign up for cryo, you'll be taking a risk that the future is hellish as well.

It would be extremely surprising if our current or traditional death ceremonies are the optimal minimisation of that risk. Almost certainly, we should be trying to minimise the risk further. Cremation, destruction of records pertaining to ourselves, erasure of your Facebook profile, planting deliberately false information, and other such tactics should be considered.
Does this objection strike you as reasonable, or unreasonable?
If a copy of me were made, would this instance of me experience the next instance's experiences? I don't think so. As for whether I could suffer from being re-created, I doubt it. However, I'd be very concerned about future instances of me being abused, if I thought there were an interest in reviving me. If I were famous, I'd be concerned that fans might want to make a bunch of clones of me, and I'd be concerned about how the clones were treated. Unless I had reason to think that A. people are going to reconstruct me against my will, and B. the people reconstructing me would do something unethical with the clones, I wouldn't worry about it.
Why do you ask?
From the perspective that you are your instances, it matters because if you fear being abused, you would fear any instance of you being abused. You wouldn’t want to walk into an atomically precise copying machine with the knowledge that the copy is going to be used in cruel experiments.
The question becomes: where do you draw the line? Is a rough copy of you, based on your Facebook posts and whatever an advanced AI can extrapolate from them, just as worthy of you anticipating its experiences? Or perhaps you should fear ending up as that person on a sliding scale depending on how similar it is: if it is 50% similar, have 50% of the fear, etc. Fear and other emotions don't have to be a simple binary relationship, after all.
Empathy is an emotion that seems to be distinctly different (meaning, it feels different and probably happens differently on a biochemical and neurological level) from the emotion of actually anticipating being the individual. So while, yes, I would feel empathy for any abused clones regardless of how much I fear waking up as them, empathy would not be the only emotion, because I believe I would be the clones. Any information indicating that clones might be abused in the future becomes much more near to me, and I am more likely to act on it, if I think it likely that I will actually be one of them.
Thus if you think the future is bad enough to seriously undermine wanting to wake up from cryonics, it might be smart to be concerned for the safety of clones who could be you anyway. Since you haven't stated a desire to be cloned, being cloned against your will is more likely to be carried out by unethical people than by ethical ones, so even if the prospect is fairly remote, it is more worrying than the prospect with cryonics, where caring people must keep you frozen and do have your consent to bring you back.
I fear a rough copy of myself made from my facebook posts (and lesswrong comments) being tortured about as much as I fear an intricate 3d sculpture of me being made and then being used as a target in a gun range. Is that really just me?
Nope, I’d feel the same. I think I would like to hang out with a rough copy of myself made from my internet behaviour, though.
Hmm. How do you feel about the prospect of an atomically precise copy of yourself being used as a living target at a gun range?
Is my corpse an atomically precise copy of myself? I wouldn’t care much about that.
If you mean the classic sci-fi picture of an exact and recent clone of myself, I would certainly prefer that a copy of myself be used at a gun range than that a copy of my daughters or a few of my relatives be used. And certainly prefer that a copy of myself be used than that the single original of any of my relatives be used.
It is an ironic thing that a rationalist discussion of values comes down to questions like “how do you feel about...” Personally, much of my rational effort around values is to make choices that go against some or even many of my feelings, presumably to get at values that I think are more important. I highly value not being fooled by appearances, I highly value minimizing the extent to which I succumb to “cargo cult” reasoning. I’m not sure how much identifying myself with a copy of myself is valid (whatever that means in this context) and how much is cargo cult. But I’m pretty sure identifying myself with my corpse or a caricature of myself is cargo cult.
If you undergo dementia or some other neurodegenerative condition for a few years, it will turn you into a very different person. A "rough" copy made from information mined from the internet could perhaps be much closer to the healthy version of the person than the version kept alive in a nursing home in their later years is. Because of this argument, I don't see how you can come to the conclusion that identifying with a "caricature" is cargo-cult by definition.
Your corpse is definitely not an atomically precise copy of yourself. Corpses are subject to extensive structural damage, which makes their state of unconsciousness irreversible. If this were not the case, we would neither call them corpses nor consider it unreasonable to identify with them.
A more interesting grey area would be if you were subjected to cryonics or plastination, copied while in a completely ametabolic and unconscious state, and then reanimated. You could look across at a plastic-embedded or frozen copy of yourself and not even know if they are the original. In fact, there could be many of them, implying that you are probably not the original unless you can obtain information otherwise.
If you value your original self sufficiently, that seems to imply that if, say, you wake up in a room with 99 other versions of you still in stasis and have a choice to a) destroy them all and live, or b) commit suicide and have them all reanimated, you should commit in advance to suicide, so that whichever instance wakes up (99% likely a copy) will pick that option and the original survives.
On the other hand if you don’t care whether you are the original or a copy you can destroy all those nonsentient copies (99% chance of including the original) without worrying about it.
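A quick way to see the 99% figure is to compare the original's survival probability under each precommitted policy, assuming the awake instance is equally likely to be any of the 100 versions. The small simulation below is only a sketch of that reasoning; the labeling of "index 0" as the original and the trial count are arbitrary illustrative choices, not anything from the thread:

```python
import random

N = 100  # one original plus 99 copies, as in the scenario above

def original_survives(policy: str) -> bool:
    """Simulate one run: a uniformly random instance wakes up and applies the policy.

    "destroy": the awake instance destroys the 99 in stasis and lives.
    "suicide": the awake instance suicides and the other 99 are reanimated.
    """
    awake = random.randrange(N)  # index 0 stands for the original (arbitrary labeling)
    if policy == "destroy":
        return awake == 0        # original survives only if it happened to be the one awake
    return awake != 0            # "suicide": original survives unless it was the one awake

def survival_rate(policy: str, trials: int = 100_000) -> float:
    return sum(original_survives(policy) for _ in range(trials)) / trials

print("destroy:", survival_rate("destroy"))  # ~0.01
print("suicide:", survival_rate("suicide"))  # ~0.99
```

Under the "destroy" policy the original survives with probability 1/100; under the precommitted "suicide" policy it survives with probability 99/100, which is the 99% in the comment above.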
I’ve had success explaining cryonics to people by using the “reconstruct” (succinct term, thank you!) spectrum—on one end, maybe reconstruction is easy, and we’ll all get to live forever. On the other end, maybe it’s impossible, and you simply cannot spend more than a few days de-animated before being lost forever. In the future, there will be scientists who do research and experiments and actually determine where on the spectrum the technology actually is. Cryonics is just a particular corpse preservation method that prepares for reconstruction being difficult.
More succinctly, cryonics is trying to reach the future, and this hypothetical objection is trying to avoid the future.
I asked because it seemed that, if a fear of a bad future is a reason not to try harder to reach the future, it should also be a reason to try harder to avoid the future, and I was curious to examine this fear of the future.