If the value of not saving a life is the same as the value of killing someone, that’s fine. We can do that exercise and re-frame in terms of killing, and do the consequentialist calculation from there. The math is the same. If the goal is to bring ourselves to calculate from the heightened emotional perspective associated with killing, though, it is time to drop that frame and just get back to the math.
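The point that "the math is the same" under either frame can be sketched with a toy expected-utility calculation. The numbers below are hypothetical placeholders, not anyone's actual values; the sketch only illustrates that relabeling an option does not change the sum.

```python
# Toy sketch (hypothetical utilities, not a real moral calculus): the
# consequentialist sum depends only on the probability-weighted outcomes,
# not on whether an option is framed as "killing" or as "not saving".

def expected_utility(outcomes):
    """Sum p * u over (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# The same outcome set (one life lost for certain) under two framings.
framed_as_killing = [(1.0, -1.0)]
framed_as_not_saving = [(1.0, -1.0)]

print(expected_utility(framed_as_killing) == expected_utility(framed_as_not_saving))  # True
```

The framing changes only the label on the list, never its contents, so any calculation over the outcomes is identical.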
In terms of the opening post, the math is going to be similar even for the creation of all possible minds. If we have a good reason to restore every mind that has lived, it seems very probable that we have the exact same reason to create every mind that has not lived.
I’m not sure I see what that value is, though. Even if I want to live forever—and continue to want to live forever right up to the point that I am dead… One second after that point, I no longer care. At that point, only other living minds can find value in having me alive. It’s up to them if they want to invest their resources in preserving and re-animating me or prefer to invest more of their resources in keeping themselves alive and creating more novel new minds through reproduction.
If the goal is to bring ourselves to calculate from the heightened emotional perspective associated with killing, though, it is time to drop that frame and just get back to the math.
Well spotted. I was wondering if anyone was going to notice that Vladimir’s (absurdly highly upvoted) comment was basically just a dark arts exploit trying to harness (largely deontological) moral judgements outside their intended context.
If that was an observation you had already made, and you believed it worth mentioning, why didn’t you mention it yourself instead of waiting to see if anyone else said it? I can conceive of some comments that are best made only by specific individuals, given specific contexts, but I don’t see this being one of them.
I find the attitude of “waiting to see if anyone else does this”, and afterwards condemning or praising people collectively for failing or succeeding at what one did not do oneself, extremely distasteful.
If that was an observation you had already made, and you believed it worth mentioning, why didn’t you mention it yourself instead of waiting to see if anyone else said it?
I did write a reply when Vladimir first wrote the comment, but I deleted it, since I decided I couldn’t be bothered getting into a potential flamewar about a subject that I know from experience is easy to spin for cheap moral-high-ground points (“you’re a murderer!”, etc.). I long ago realized that it is not (always) my responsibility to fix people who are wrong on the internet.
Since smijer is (as of the time of this comment) a user with 9 votes, while Vladimir is in the top 20 of the top contributors and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.
Okay, I think I find this a good reason. Thank you for explaining.
You find this a good reason for what?
(1) For supporting smijer’s comment
(2) For not chiming in when he first had the idea
If you mean the first...why? That wasn’t the issue. The issue was why wedrifid hadn’t chimed in. As for the second, wouldn’t this imply that wedrifid was holding out because he expected someone with low karma to speak up first?
For the seeming inconsistency I had noticed between (1) and (2).
Not wanting to get into a flamewar is, of course, reasonable. But daring to be the first to dissent is a valuable service, too.
I appreciate the support.
Off topic: If I remember correctly, you have been taking a quite derogatory stance with respect to people who complained about the voting behavior on this site. In any case, here are some snippets from comments made by you in the past 30 days:
Note: I am at least as shocked by the current downvote of this comment...
Ok, me getting downvoted I can understand—someone has been mass-downvoting me across the board.
I’m actually getting concerned here. [...] he has not only been taken seriously but received upvotes while ridicule of the assumptions gets downvotes.
I was wondering if anyone was going to notice that Vladimir’s (absurdly highly upvoted) comment was basically just a dark arts exploit...
I predict that within 5 years you will become frequently appalled by the voting behavior on this site, and in another 10 years you’ll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong: it doesn’t refine what you deem rational, nor does it provide valuable feedback; instead, it lends credence to the arguments of trolls (as you would call them).
If I remember correctly, you have been taking a quite derogatory stance with respect to people who complained about the voting behavior on this site.
I doubt I ever took such a broad stance. You seem to have generalized to a large category so that you can fit me into it. In fact, one of those artfully trimmed quotes should, if parsed for meaning rather than scanned for quotable keywords, have given a far more reasonable impression of where my preferences lie on that subject.
I predict that within 5 years you will become frequently appalled by the voting behavior on this site
Quite possible. A few years after that I may well start telling kids to get off my lawn and tell stories about “When I was your age”.
and in another 10 years you’ll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong
Money. Make the prediction with money. Because I want to take it.
Counter-prediction: In ten years time you will not have changed your mind (on this subject) at all.
At least for myself, I’m happy to give that a low probability. Even with the lowered quality since Eliezer stopped writing, LW is still much better—thanks to karma—than OB or SL4 were.
LW is still much better—thanks to karma—than OB or SL4 were.
How do you know this? Would a reputation system cause the Tea Party movement to become less wrong?
The n-Category Café and Timothy Gowers’s blog do not employ a reputation system like Less Wrong’s. It’s the people who make some places better than others.
It is trivially true that the lesswrong reputation system would fail if there were more irrational people here than rational people, where ‘rational’ is defined according to your criteria (not implying that your criteria are wrong).
I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because there are a lot of people who don’t like the idea of being voted down according to unknown criteria rather than engaged in argumentative discourse.
And as I wrote before, the current reputation system favors non-technical posts. More technical posts often don’t receive the same number of upvotes as non-technical posts, and technical posts that turn out to be wrong are downvoted more extensively. This does discourage rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.
A reputation system necessarily favors the status quo.
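The "trivially true" claim above, that the scoring merely amplifies whatever criteria the voting majority applies, can be sketched as a toy simulation. All numbers here are hypothetical; nothing is drawn from actual Less Wrong voting data.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def net_karma(p_upvote, n_voters=1000):
    """Net score of a comment when each voter independently upvotes
    with probability p_upvote and downvotes otherwise."""
    return sum(1 if random.random() < p_upvote else -1 for _ in range(n_voters))

# If most voters judge a comment favorably (p=0.7) it scores strongly
# positive; flip the majority (p=0.3) and the same mechanism buries it.
# The system tracks the median voter's criteria, whatever those are.
print(net_karma(0.7), net_karma(0.3))
```

The mechanism is neutral: it refines toward whatever the voter population already believes, which is the crux of the disagreement in this thread.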
This community consists mostly of aspiring rationalists, not professionals in philosophy, decision theory, or psychology, though there are a number of experts around. The accuracy of technical posts is hard to judge, so people probably go by a post’s quality, their gut feeling, and how well it conforms to what has been agreed upon as correct before. Plus the usual points for humor, minus a penalty for poor spelling/grammar/style.
An example of a reputation system that works for a technical forum is MathOverflow, though partly because the mods are quite ruthless there about off-topic posts.
I am quite sure that a lot of valuable opinions are lost due to the current reputation system
...which likely means that this forum is not the right one for them. LW is open enough to resist “evaporative cooling”, and rapid downvoting inhibits all but expert trolling.
gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.
I think that is the idea. Educating people “about basic rationality” is a much more viable goal than doing basic research collaboratively. LW is often used as a sounding board for research write-ups, but that is probably as far as it can go. Anything more would require excluding amateurs from the discussion, to reduce the noise level. I have yet to see a public forum where “important problems” are solved “collaboratively”. Feel free to provide counterexamples.
Would a reputation system cause the Tea Party movement to become less wrong?
Yes. They would still have their major shibboleths like Obama being a Muslim born in Kenya, but reputation systems would at least reduce the most mouth-breathing comments.
The n-Category Café and Timothy Gowers’s blog do not employ a reputation system like Less Wrong’s. It’s the people who make some places better than others.
People are a factor, but they are not the sole determining factor. Code is Law.
I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because there are a lot of people who don’t like the idea of being voted down according to unknown criteria rather than engaged in argumentative discourse.
And that is why LW has orders of magnitude fewer comments and posts than OB or SL4 did. Wait, never mind, I meant ‘more’.
This does discourage rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.
Or it discourages attempts to bamboozle with rigor. I don’t remember terribly many rigorous proofs on LW, but then, I don’t remember terribly many on OB or SL4 either.
I retracted the comment. Not sure why I made it, or why I didn’t use my brain more. Sorry.
Counter-prediction: In ten years time you will not have changed your mind (on this subject) at all.
Likely, because I hate reputation systems; peer pressure is already bad enough as it is. But if a reliable study shows that reputation systems cause groups to become more rational, I will of course change my mind.
Money. Make the prediction with money. Because I want to take it.
Betting money seems to be a pretty bad idea if the bet depends on the decision of someone participating in the bet.