One thing to note is that the man would probably harm, not help, his chosen charity (in expectation).
One would hope that in the two years between signing up for the insurance policy and offing himself he took the time to figure out how to make the donation suitably indirect and manage appearances. All it would take is one person you can trust.
I don’t know if “trust” is a sufficiently boolean property for this. One would need an executor trustworthy enough to:
- Handle large amounts of money with no oversight
- Deal with the legal system
- Maintain absolute discretion on the subject, basically forever
- Deal with the knowledge that a close, trusting friend is going to commit suicide for unconventional reasons
A good lawyer fits some of those criteria, but not all, and is difficult for the unemployed to retain. Frankly, I think that most people who could inspire that kind of loyalty in others could do more good alive.
“Deal with the knowledge that a close, trusting friend is going to commit suicide for unconventional reasons”
They do not need to know this. Their role is to execute your will. That is all.
“Frankly, I think that most people who could inspire that kind of loyalty in others could do more good alive.”
Will the money to someone else who is obsessed with the cause. In that case you don’t need personal trust, just game theory.
Saying “this will do more harm than good” sounds wise and sends the desired message of ‘suicide is bad and I do not encourage it’, but it isn’t actually accurate under examination.
“This will do more harm than good” may not be accurate under examination, but I think it is accurate in reality.
What you’re talking about is a flimsy, elaborate plan that requires some people to do exactly what they are supposed to do and nobody else to seriously interfere. The probability of such a plan working the first time is small enough to be ignored. Something will go wrong that you didn’t think of.
In many contexts, that’s not a showstopper: you wait until something does go wrong, then you fix it. But if step two of the plan was “you die”, it’s going to be a bit hard to fix what goes wrong in step three.
I disagree, especially with the way ‘flimsy’, ‘elaborate’, and ‘reality’ are used (or misused), and with the straightforward complications of will-execution being raised as though this were some sort of special case.
I would consider an argument of the form “This is a f@$%@ing terrible idea because if you kill yourself you DIE” far more persuasive than anything that relies on technical difficulties. Flip. This is two years’ worth of preparation time. How long does it take to google “suicide look like accident”? The technical problem is utterly trivial; it is just one that you are better off not implementing, on account of life being better than death.
Well, I agree with you that “if you kill yourself you die” is a sufficient and primary argument against the proposal. I was merely following this subthread’s implied question: what arguments are there against the feasibility of the proposal on its own terms, for somebody who is in a suicidal mood and therefore not convinced by the primary argument?