From a consequentialist perspective, the value of not saving a life is the same as the value of killing someone. In that light, the title of your post becomes “Is every person really worth not killing?” Try re-reading the argument with this framing in mind.
(Avoiding measures that save lives with certain probability is then equivalent to introducing the corresponding risk of death.)
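To make the equivalence concrete: under a purely outcome-based valuation in which each life counts for the same amount V and only the expected outcome matters (the assumption this parenthetical relies on),

$$\mathbb{E}[U \mid \text{decline a measure that saves with probability } p] \;-\; \mathbb{E}[U \mid \text{take it}] \;=\; -pV,$$

$$\mathbb{E}[U \mid \text{take an action that kills with probability } p] \;-\; \mathbb{E}[U \mid \text{refrain}] \;=\; -pV.$$

Both choices lower expected value by the same amount, which is the sense in which “not saving” and “killing” come out identical in the calculation.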
If the value of not saving a life is the same as the value of killing someone, that’s fine. We can do that exercise and re-frame in terms of killing, and do the consequentialist calculation from there. The math is the same. If the goal is to bring ourselves to calculate from the heightened emotional perspective associated with killing, though, it is time to drop that frame and just get back to the math.
In terms of the opening post, the math is going to be similar even for the creation of all possible minds. If we have a good reason to restore every mind that has lived, it seems very probable that we have the exact same reason to create every mind that has not lived.
I’m not sure I see what that value is, though. Even if I want to live forever—and continue to want to live forever right up to the point that I am dead… One second after that point, I no longer care. At that point, only other living minds can find value in having me alive. It’s up to them whether they want to invest their resources in preserving and re-animating me, or would prefer to invest those resources in keeping themselves alive and creating novel new minds through reproduction.
Well spotted. I was wondering if anyone was going to notice that Vladimir’s (absurdly highly upvoted) comment was basically just a dark arts exploit trying to harness (largely deontological) moral judgements outside their intended context.
If that was an observation that you had already thought of, and you believed it good to be mentioned, why didn’t you mention it yourself—instead of waiting to see if anyone else said it? I can conceive of some comments that are best made only by specific individuals, given specific contexts—but I don’t see this as one of them.
I find the attitude of “waiting to see if anyone else does this”, and afterwards condemning or praising people collectively for failing or succeeding at what one did not do oneself, extremely distasteful.
I did write a reply when Vladimir first wrote the comment. But I deleted it, since I decided I couldn’t be bothered getting into a potential flamewar about a subject that I know from experience is easy to spin for cheap moral-high-ground points (“you’re a murderer!”, etc.). I long ago realized that it is not (always) my responsibility to fix people who are wrong on the internet.
Since smijer is (as of the time of this comment) a user with 9 votes, while Vladimir is among the top 20 contributors and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.
Okay, I think I find this a good reason. Thank you for explaining.
You find this a good reason for what?
(1) For supporting smijer’s comment
(2) For not chiming in when he first had the idea
If you mean the first...why? That wasn’t the issue. The issue was why wedrifid hadn’t chimed in. As for the second, wouldn’t this imply that wedrifid was holding out because he expected someone with low karma to speak up first?
For the seeming inconsistency I had noticed between (1) and (2).
Not wanting to get into a flamewar is, of course, reasonable. But daring to be the first to dissent is a valuable service, too.
I appreciate the support.
Off topic:
If I remember correctly, you have taken a quite derogatory stance toward people who complained about the voting behavior on this site. In any case, here are some snippets from comments made by you in the past 30 days:

“Note: I am at least as shocked by the current downvote of this comment...”

“Ok, me getting downvoted I can understand—someone has been mass downvoting me across the board.”

“I’m actually getting concerned here. [...] he has not only been taken seriously but received upvotes while ridicule of the assumptions gets downvotes.”

“I was wondering if anyone was going to notice that Vladimir’s (absurdly highly upvoted) comment was basically just a dark arts exploit...”
I predict that within 5 years you will become frequently appalled by the voting behavior on this site, and that in another 10 years you’ll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong, because it doesn’t refine what you deem rational, nor does it provide valuable feedback; instead it lends credence to the arguments of trolls (as you would call them).
I doubt I ever took such a broad stance. You seem to have generalized to a large category so that you can fit me into it. In fact one of those artfully trimmed quotes you make there should have, if parsed for meaning rather than scanned for quotable keywords, given a far more reasonable impression of where my preferences lie on that subject.
Quite possible. A few years after that I may well start telling kids to get off my lawn and tell stories about “When I was your age”.
Money. Make the prediction with money. Because I want to take it.
Counter-prediction: In ten years’ time you will not have changed your mind (on this subject) at all.
At least for myself, I’m happy to give that a low probability. Even with the lowered quality since Eliezer stopped writing, LW is still much better—thanks to karma—than OB or SL4 were.
How do you know this? Would a reputation system cause the Tea Party movement to become less wrong?
Neither the n-Category Café nor Timothy Gowers’s blog employs a reputation system like Less Wrong’s. It’s the people who make some places better than others.
It is trivially true that the lesswrong reputation system would fail if there were more irrational people here than rational people, where ‘rational’ is defined according to your criteria (not implying that your criteria are wrong).
I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because there are a lot of people who don’t like the idea of being voted down according to unknown criteria rather than being engaged in argumentative discourse.
And as I wrote before, the current reputation system favors non-technical posts. More technical posts often don’t receive as many upvotes as non-technical posts, and technical posts that turn out to be wrong are downvoted more heavily. This discourages rigor and gives an incentive to write posts about basic rationality rather than to tackle important problems collaboratively.
A reputation system necessarily favors the status quo.
This community consists mostly of aspiring rationalists, not professionals in philosophy/decision theory/psychology, though there are a number of experts around. The accuracy of technical posts is hard to judge, so people probably go by post quality, their gut feeling, and how well a post conforms to what has previously been agreed upon as correct. Plus the usual points for humor, minus a penalty for poor spelling/grammar/style.
An example of a reputation system that works for a technical forum is MathOverflow, though partly because the mods are quite ruthless there about off-topic posts.
...which likely means that this forum is not the right one for them. LW is open enough to resist “evaporative cooling”, and rapid downvoting inhibits all but expert trolling.
I think that is the idea. Educating people “about basic rationality” is a much more viable goal than doing basic research collaboratively. LW is often used as a sounding board for research write-ups, but that is probably as far as it can go. Anything more would require excluding amateurs from the discussion, to reduce the noise level. I have yet to see a public forum where “important problems” are solved “collaboratively”. Feel free to provide counterexamples.
Yes. They would still have their major shibboleths like Obama being a Muslim born in Kenya, but reputation systems would at least reduce the most mouth-breathing comments.
People are a factor, but they are not the sole determining factor. Code is Law.
And that is why LW has orders of magnitude fewer comments and posts than OB or SL4 did. Wait, never mind, I meant ‘more’.
Or it discourages attempts to bamboozle with rigor. I don’t remember terribly many rigorous proofs on LW, but then, I don’t remember terribly many on OB or SL4 either.
I retracted the comment. Not sure why I made it or why I didn’t use my brain more; sorry.
Likely, because I hate reputation systems. Peer pressure is already bad enough as it is. But if a reliable study is conducted that shows that reputation systems cause groups to become more rational, I will of course change my mind.
Betting money seems to be a pretty bad idea if the bet depends on the decision of someone participating in the bet.
If you found someone in the process of killing another, what actions would you be willing to undertake to stop them? Would you be willing to undertake those same actions every time you found someone whose non-subsistence expenditures exceeded $X, the minimum expenditure necessary to [buy enough malaria nets, etc… to] have an expected outcome of one life saved?
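As a sketch of how that threshold X might be computed (the figures below are invented placeholders, not real cost-effectiveness numbers):

```python
# Hypothetical illustration of the "$X per expected life saved" threshold.
# Both inputs are invented placeholders, not real charity cost-effectiveness data.
cost_per_net = 5.00              # assumed dollars per malaria net
deaths_averted_per_net = 0.002   # assumed expected deaths averted per net distributed

# Nets needed for an expected outcome of one life saved, and the spending that implies.
nets_per_expected_life = 1 / deaths_averted_per_net
x = nets_per_expected_life * cost_per_net
print(f"X = ${x:,.0f} of expenditure per expected life saved")
```

With these placeholder numbers X comes out to $2,500; the argument in the comment is about what follows once some such number exists, not about its exact value.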
Even consequentialism is supposed to acknowledge that ethical rules need to be evaluated in terms of their long-term consequences rather than just their immediate outcomes.
That’s just very poor consequentialism in my eyes. Instead of me pointing out the most abominable scenarios that I believe immediately follow from such a consequentialism, why don’t you supply one that you think would be objectionable to others, but which you’d be willing to defend?
As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it. Some people are worth killing. That’s not to say there isn’t something of value in them, but choice is about tradeoffs, and I don’t expect that to change with greater technology. The particular tradeoffs will change, but that there are tradeoffs will not.
And in the same way, a great many more people are not worth saving either.
Sure, assuming we’re clear on what the question means.
The reframed version gets much of its psychological strength from 1) intuitions that say killing is bad on top of its bad consequences and 2) intuitions that say killing has bad consequences that letting die does not have. You’re taking both of those intuitions as invalid (as you have to for the framing to be equivalent), so you can’t rely on conclusions largely caused by them.
I think you mean “uncertain probability”?
“Certain” as a figure of speech, as in “ice cream of a certain flavor”, not an indication of precision. (Although probabilities can well be precise...)
Taking this argument ad absurdum: Roe vs Wade is a crime against humanity, since a fetus is potentially a person.
The alternatives I’m comparing are a living person dying vs. not dying. Living vs. never having lived is different and harder to evaluate.
No, the alternatives you are comparing are reviving a frozen brain vs doing something potentially more useful, once the revival technology is available.
For example, if creating a new mind has positive utility some day, it’s a matter of calculating what to spend (potentially still limited) resources on: creating a new happy mind (trivially easy even now, except for the “happy” part) or reviving/rejuvenating/curing/uploading/rehabilitating a grandpa stiff in a cryo tank (impossible now, but still probably much harder than the alternative even in the future).
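A toy version of that calculation, with every number below invented purely for illustration (neither the values nor the costs are actual estimates of either option):

```python
# Toy resource-allocation comparison: create a new mind vs. revive a cryopreserved one.
# All figures are made-up placeholders; only the shape of the calculation matters.
options = {
    "create a new happy mind": {"value": 1.0, "cost": 1.0},         # assumed cheap
    "revive a cryopreserved person": {"value": 1.0, "cost": 50.0},  # assumed much harder
}

# Under a simple "value per unit of resources" rule, spend on the higher ratio first.
ranked = sorted(options.items(), key=lambda kv: kv[1]["value"] / kv[1]["cost"], reverse=True)
for name, o in ranked:
    print(f"{name}: {o['value'] / o['cost']:.2f} value per unit of resources")
```

With equal value and very unequal cost, the cheaper option wins; the disagreement in the thread is over whether the values really are comparable and the costs really are that lopsided.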
My comment is unrelated to cryonics; I posted it as a reminder of the framing effects of saying “not saving lives” as compared to “killing”. (Part of my motivation for posting it is that I find the mention of Eliezer’s dead brother in the context of an argument for killing people distasteful.)
As I said, harder to evaluate. I’m uncertain about which of these particular alternatives is better (considering a hypothetical tradeoff), particularly where a new mind can be made better in some respects in a more resource-efficient way.
Ah, OK. I thought you were commenting on the merits of cryopreservation.
What exactly makes it absurd?
I am not sure what units are best for measuring the value of a human life, so let’s just say that the life of an average adult person has value 1. What would be your estimate of the value of a 3-month fetus, a 6-month fetus, a 9-month fetus, a newborn child, a ½-year-old child, a 1-year-old child, etc.?
If you say that a fetus has less value than an adult person but still a nonzero value, say 0.01, then killing 100 fetuses is like killing 1 adult, and killing 100,000 fetuses is like killing 1,000 adults. Calling the killing of 1,000 adults a “crime against humanity” would perhaps be exaggerated, but not exactly absurd.
If you have strong opinions on this topic, I would like to see your best attempt to estimate the shape of the “human life value” curve for fetuses and small children. At what age does killing a human organism become worse than the proverbial dust speck in a rationalist’s eye?
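Spelling out the arithmetic this relies on (it assumes that value aggregates linearly across lives):

$$v_{\text{fetus}} = 0.01 \times v_{\text{adult}} \;\Rightarrow\; 100 \times 0.01 = 1 \text{ adult-equivalent}, \qquad 100{,}000 \times 0.01 = 1{,}000 \text{ adult-equivalents}.$$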
Thousands of adults are in fact killed in auto accidents every year, and yet it seems to me very strange indeed to call auto accidents a crime against humanity.
Thousands of adults are killed in street crimes, and it seems very strange to me to call street crime a crime against humanity.
Etc., etc., etc.
I conclude that my intuitions about whether something counts as a “crime against humanity” aren’t especially well calibrated, and therefore that I should be reluctant to use those intuitions as evidence when thinking about scales way outside my normal experience.
And of course, the value-to-me of an individual can vary by many orders of magnitude, depending on the individual. I would likely have chosen to allow my nephew’s fetal development to continue rather than preserve the life of a randomly chosen adult, for example, but I don’t generally value the development of a fetus more highly than the life of an adult.
But leaving the “crimes against humanity” labeling business aside, and assuming some typical value for a fetus and an adult, then sure, if I value a developing fetus 1/N as much as I value a living adult, then I prefer to allow 1 adult to die rather than allow the development of N fetuses to be terminated.
Actually, much worse: Roe vs Wade effectively enables serial genocide.