Whatever your values are, suicide will not help you achieve them. Stay alive, give 5% of your income to charity, and spend the rest on whatever makes you happy. You end up doing more good than 99% of the rest of humanity.
Well, the return on term life insurance can be pretty big, too. You can multiply your wealth by a factor of 500 by buying a 10-year term life insurance policy, making payments for two years (at which point U.S. law obligates the insurer to pay out even in the event of suicide), and then dying.
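As a sanity check on the "factor of 500" figure, here is a minimal sketch of the arithmetic. The premium and payout numbers are hypothetical assumptions chosen to be plausible for a term policy, not quotes from any insurer:

```python
# Illustrative check of the "multiply your wealth by 500" claim.
# All dollar figures below are assumptions for the sake of the arithmetic.
annual_premium = 600      # hypothetical yearly premium on a 10-year term policy
payout = 600_000          # hypothetical death benefit
years_paid = 2            # the point at which suicide exclusions are claimed to lapse

total_paid = annual_premium * years_paid   # $1,200 paid in
multiplier = payout / total_paid           # payout relative to money paid in
print(multiplier)  # 500.0
```

Any pair of numbers with a payout 500 times the two years of premiums yields the same multiplier; the point is only that cheap term premiums against a large death benefit make the ratio enormous.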
The blood’s on your hands if they actually do this.
But what about all the people we’re letting die by not donating to charity?
There’s a taboo against encouraging suicide, and it’s probably there for the same reason we have other deontological taboos.
I can’t even do it semi-ironically? :(
Uhm, no?
....Ha! Guys, I should have made this clearer. I don’t need counseling. I more-or-less fixed my problem for myself. By which I mean I could do with having it expressed for myself a bit more succinctly in a snappy, catchy sentence or two, but essentially, I got it already.
My point in bringing it to this audience was, “Hey, pretty sure generalizing fundamental techniques of human rationality shouldn’t cause existential angst. Seems like a problem that comes from an incomplete application of rationality to the issue. I think I figured out how to solve it for myself, but has anyone else ever had this problem, and how did you solve it?”
And we’re talking about a situation in which a being discovered that its values were internally inconsistent, and the same logic that identifies wireheading as “not what I actually want” extended to everything, leaving the being with nothing ‘worth’ living for, but still capable of feeling pain.
So it wouldn’t make any sense for it to care at all how its death affected the state of the universe after it was gone. The point is that there are NO states that it actually values over others, other than ending its own subjective experience of pain.
If it had any reason to value killing itself to save the world over killing itself with a world destroying bomb (so long as both methods were somehow equally quick, easy, and painless to itself), then the whole reason it was killing itself in the first place wouldn’t be true.
The questions I mean to raise here are: is it even possible for a being to have a value system that logically eats itself from the inside out like that? And even if it were, I don’t think human values would fit into that class. But what’s the simplest, clearest way of proving that?
Well, I recognized that… :P
This needn’t be ironic. If I’m willing to die to give my beneficiary a comfortable living, this might be a viable strategy.
Yous missin’ da point dere.
It seems that if you enjoyed doing good, or at least having done good, the problem wouldn’t occur anyway. And then, why suffer the conflict?
Because even if you don’t enjoy doing good, you can still value it.
This usually falls in the “enjoying having done good” category. If you do not enjoy knowing that you are in a “better” world more, why say that you value it?
Yes, exactly. I’m glad I was at least clear enough for someone to get that point. =]