I’m not sure that’s a universal feeling. I would certainly put it the other way around.
What are your exact (or, I suppose, approximated) probabilities? Maybe I don’t have a good sense of this.
Many human moral systems allow one to kill others to save a life.
Well, yes, though this brings up a whole host of issues. The best analogy to the point I’d like to make is abortion. As with fetuses, human moral systems generally do not view corpsicles as “lives”. That is why the moral systems you mentioned wouldn’t approve of murdering someone to save a corpsicle, just as it is widely considered “wrong” to shoot an abortion doctor. Whether or not it actually is wrong is not my point; my point is only that you can’t adopt the “many human moral systems allow this” argument while ignoring the fact that those same moral systems wouldn’t see a corpsicle as a life in the first place.
In the same vein, I don’t think your decision-theory argument pans out either, because in the view of, for all intents and purposes, everyone, following the rule “kill people who are trying to murder corpsicles” does not equate to following the rule “kill people who are murdering other people”, and so it doesn’t have the intended deterrent effect. In fact, the actual effect would be to get you jailed, to generate a newspaper story about the “crazy transhumanist who shot someone to save a corpsicle”, and, well, the corpsicle would just be killed anyway once you’re imprisoned.
What are your exact (or, I suppose, approximated) probabilities
I’d estimate around a 5% chance for cryonics to work in some form, and a 1% chance of a Singularity-type event, broadly construed, in the next 40 years.
The best analogy to the point I’d like to make is abortion. As with fetuses, human moral systems generally do not view corpsicles as “lives”.
Huh? Many moral systems do see fetuses as lives; that’s part of why abortion is so controversial. Moreover, what matters is not whether those systems normally see cryonics patients as alive, but that they have a general rule about saving lives. So if one takes one of those systems and expands the set of people considered to have moral weight to include cryopatients, then the result follows. One shouldn’t be surprised if, here, where a lot of people take cryonics seriously, they have modified pre-existing moral systems to give weight to those already cryonically preserved.
I don’t think many moral systems truly see them as lives on the same level as adult humans. And some don’t see them as lives at all.
One shouldn’t be surprised if, here, where a lot of people take cryonics seriously, they have modified pre-existing moral systems to give weight to those already cryonically preserved.
Well, of course. I’m not saying one should be. But you’re offering “many moral systems” as if the populace at large were any real expert on morality. You’re appealing to the authority of the populace, which is an authority that many of us think is, well… pretty dumb.
Sorry if I’ve been unclear. I’m not saying they are correct in general, nor am I even defending wedrifid’s views. You seemed to ask where those views came from, and I was trying to answer. Explaining where a set of ethical/moral values comes from is not the same as saying that they are correct.
Well it seems like an insult to wedrifid to offer an explanation for his actions that you think is wrong.

I don’t know if his attitude is wrong or not. I really haven’t given the question enough thought to answer it either way. Moreover, it shouldn’t be an insult to explain how a given ethical attitude can develop, whether or not one thinks the view is correct. I’m not sure why you think that would be an insult. Is it because there’s a common approach of dismissing the views of those one disagrees with by giving psychological explanations for why someone would want to think that? Or is there something more subtle that I’m missing here?
Well, perhaps not an insult. But it seems like what you are saying is, “This is why I think he might think that, but I think he’s wrong.” If you already think he believes something for a reason you believe is wrong, you don’t have a very high opinion of his rationality.
If you already think he believes something for a reason you believe is wrong, you don’t have a very high opinion of his rationality.
If your goal is to improve, it’s more important to notice and correct errors than to deceive people about their absence. I believe it’s insulting, not respectful, to attribute to a rationalist the attitude that they would prefer to have knowledge of a flaw withheld.
(You might want to take a precaution of asking first if Crocker’s rules apply, and communicate the bug report privately.)
I’m very confused. I wasn’t talking about a bug report. Unless you mean bug in rationality.
Furthermore, I never attributed that attitude to JoshuaZ. JoshuaZ had no evidence that the flaw he proposed reflects wedrifid’s actual thinking; he’s just selecting one potential reason out of the whole set of potential reasons.
Ah, I see. So to say “I’m not defending claim X” sounds more like “I disagree with X” than “I feel confused about X”. I don’t know how universal that is.
If you already think he believes something for a reason you believe is wrong, you’re not putting much faith in his rationality.
Really? You seem to be radically overestimating human rationality in general. We all likely believe things for reasons that are too weak to justify our levels of belief, or believe things due to cultural upbringing and other causes that carry zero actual evidentiary weight. Part of the task of becoming more rational is identifying those issues and dealing with them, especially the higher-priority ones that impact a lot of other beliefs. Everyone here, including myself, likely believes things for bad reasons. In that context, discussing where beliefs come from seems natural.
I think that wedrifid is one of the more careful, rational, and thought-provoking people here. That doesn’t mean that he’s a perfect rationalist.