-Voltaire, Cato
I think this quote unfairly trivializes the subjectively (and often objectively) harsh lives suicidal people go through.
As a 911 Operator, I have spoken to hundreds of suicidal people at their very lowest moment (often with a weapon in hand). In my professional judgment, the quote is accurate for a large number of cases (obviously, there are exceptions).
There are many people who want to die. There are few who are willing to commit suicide to do it.
The point is that whether and how much one wants to die tends to fluctuate a lot, and the willingness to commit suicide depends a lot on the availability of means to do so easily and painlessly. A large percentage of suicide attempts are opportunistic rather than planned. The planned ones probably succeed more often, but that does not necessarily mean that those people really wanted to die more—just that their will to die was over a certain threshold for a certain time.
The so-called ‘psychotically depressed’ person who tries to kill herself doesn’t do so out of quote ‘hopelessness’ or any abstract conviction that life’s assets and debits do not square. And surely not because death seems suddenly appealing. The person in whom Its invisible agony reaches a certain unendurable level will kill herself the same way a trapped person will eventually jump from the window of a burning high-rise.

DFW, Infinite Jest
I have read that a majority of people who survive suicide attempts end up glad that they did not succeed (although I can no longer remember the source and thus cannot vouch for it). A somewhat alarming proportion of my own acquaintances have attempted suicide, though, and all except one so far have attested that this is the case for them.
I think this quote is objectively accurate:

“of all would-be jumpers who were thwarted from leaping off the Golden Gate between 1937 and 1971 — an astonishing 515 individuals in all — he painstakingly culled death-certificate records to see how many had subsequently “completed.” His report, “Where Are They Now?” remains a landmark in the study of suicide, for what he found was that just 6 percent of those pulled off the bridge went on to kill themselves. Even allowing for suicides that might have been mislabeled as accidents only raised the total to 10 percent.”
In other words, if you ever think you want to kill yourself, there’s a 90% chance you’re wrong. Behave accordingly.
That isn’t what the quote tells you. It is evidence that you could be wrong, but it certainly doesn’t make you 90% likely to be wrong.
Well, yes, it just establishes a prior. But a remarkably hard prior to update, don’t you think? “I’m probably in worse shape than all those people who tried to jump off the Golden Gate Bridge” would demand some exceptional new information.
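For what it’s worth, here is a toy Bayesian sketch of that point; the framing and numbers are mine, not from the study:

```python
# A toy sketch (my numbers, not anything from the thread) of how hard a 90%
# prior is to shift. If "you're wrong about wanting to die" starts at 90%,
# then just to get down to 50/50 the new evidence has to be nine times more
# likely under "I'm the exception" than under "I'm not".

prior_wrong = 0.90
prior_odds_wrong = prior_wrong / (1 - prior_wrong)  # 9:1 odds of being wrong

# Posterior odds (wrong : exception) = prior odds / likelihood ratio, where
# the likelihood ratio is P(evidence | exception) / P(evidence | wrong).
# Reaching even odds therefore requires a ratio equal to the prior odds.
required_likelihood_ratio = prior_odds_wrong

print(f"Prior odds of being wrong: {prior_odds_wrong:.0f}:1")
print(f"Likelihood ratio needed just to reach 50/50: {required_likelihood_ratio:.0f}:1")
```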
All this data says is that between 90% and 94% of people who were convinced not to jump did not go on to successfully commit suicide at a later date. It would be a big mistake to assume that whether you would come to regret your choice is completely independent of whether you can be convinced not to jump, and that the fraction of thwarted jumpers who came to regret the attempt is therefore the same as the fraction of all attempters who would have come to regret it had their attempt failed.
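To make that objection concrete, here is a minimal sketch with invented numbers (only the 90% figure echoes the thread), showing how the regret rate among the deterrable can overstate the regret rate for would-be jumpers as a whole:

```python
# A minimal sketch of how that independence assumption can fail. All numbers
# are invented for illustration; only the 90% figure echoes the thread.
# Suppose most would-be jumpers can be talked down ("deterrable") and the
# rest cannot, and suppose regret-if-they-survive is rarer in the second
# group. Then the regret rate measured on the deterrable group overstates
# the regret rate for would-be jumpers as a whole.

p_deterrable = 0.80                 # assumed share who can be convinced not to jump
p_regret_given_deterrable = 0.90    # the thread's 90% figure, read as "regret"
p_regret_given_undeterrable = 0.40  # assumed, lower for those who can't be talked down

p_regret_overall = (p_deterrable * p_regret_given_deterrable
                    + (1 - p_deterrable) * p_regret_given_undeterrable)

print(f"Regret rate among the deterrable: {p_regret_given_deterrable:.0%}")
print(f"Regret rate over all would-be jumpers: {p_regret_overall:.0%}")  # 80%, not 90%
```

Under those made-up numbers the overall regret rate drops to 80% even though it is 90% among the people who could be talked down.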
“Apprehended” isn’t synonymous with “convinced not to jump”, but there does seem to be a sampling bias here, yes. (And can I say how refreshing it is to hear someone point that out and not be ignorantly insulted for it by dozens of people? Hyperlink to a “More Wrong” website omitted in the name of internet civility, but take my word for it that I’m describing an actual event.)
I think even “convinced not to jump” wouldn’t necessarily change the decision calculus here, though. To the extent there is a selection bias it’s because some subset of suicidal people behaved in ways which caused them to avoid opportunities to have their minds changed. That’s so irrational you could practically write a book about it.
One old study about one bridge is not the whole body of evidence regarding suicide, either. Read a few more bits from just that one news article.
Suicide rates reduced by a third in Britain merely because one easy method became unavailable? In other words, a large minority of would-be suicides didn’t even need to be convinced by someone else; they just needed less time to convince themselves than it would have taken them to find a slightly less convenient way of killing themselves. Even “very slightly less convenient” can provide enough time: one new barrier at the Ellington bridge deterred about 4 jumpers per year, the local suicide rate went down by 4 jumpers per year, and the suicide rate at the unprotected, easily visible neighboring bridge went up by only 0.3 per year?
I personally wouldn’t have predicted any of this, but I don’t think there are any major flaws in the data now that I’ve seen it. The biggest selection bias here may be among those of us who naturally try to predict how people will rationally respond to changing incentives: applying such predictions to a tiny fraction of the population that has already self-selected for irrationality is not going to work well.
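For reference, the rough arithmetic on those Ellington figures, simply taking the quoted numbers at face value, looks like this:

```python
# Back-of-the-envelope reading of the Ellington bridge figures quoted above,
# taking the 4-per-year and 0.3-per-year numbers at face value.

deterred_per_year = 4.0    # jumpers per year stopped by the new barrier
displaced_per_year = 0.3   # extra jumpers per year at the neighboring bridge

substitution_rate = displaced_per_year / deterred_per_year
net_reduction_per_year = deterred_per_year - displaced_per_year

print(f"Apparent substitution rate: {substitution_rate:.1%}")        # 7.5%
print(f"Net reduction: {net_reduction_per_year:.1f} suicides/year")  # 3.7
```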
Not that I don’t think that most people who plan to kill themselves will tend to think better of it as time passes, but it’s a mistake to assume that trivial inconveniences only prevent people from doing things they don’t really want or believe are good for them.
If you ever think you want to kill yourself, there’s a 90% chance that either you’re wrong, or you will be after surviving the attempt.
From what I understand, it’s accurate. Whether waiting a week would result in a more or less responsible decision is an open question.
When it is subjectively and not objectively harsh, what needs to happen is that their malfunctioning brain be fixed.
And what needs to happen, for the others, is that their objective reality be fixed.
Relevant discussion.
But which of these more accurately represents his “actual preferences”, to the extent that such a thing even exists?
Not only is “actual preferences” ill-defined, but so is “accurately represent.” So let me try and operationalize this a bit.
We have someone with a set of preferences that turn out to be mutually exclusive in the world they live in.
We can in principle create a procedure for sorting their preferences into categories such that each preference falls into at least one category and all the preferences in a category can (at least in principle) be realized in that world at the same time.
So suppose we’ve done this, and it turns out they have two categories A and B, where A includes those preferences Cato describes as “a fit of melancholy.”
I would say that their “actual” preferences = (A + B). It’s not realizable in the world, but it’s nevertheless their preference. So your question can be restated: does A or B more accurately represent (A + B)?
There doesn’t seem to be any nonarbitrary way to measure the extent of A, B, and (A+B) to determine this directly. I mean, what would you measure? The amount of brain matter devoted to representing all three? The number of lines of code required to represent them in some suitably powerful language?
One common approach is to look at their revealed preferences as demonstrated by the choices they make. Given an A-satisfying and a B-satisfying choice that are otherwise equivalent (constructing such a pair of choices is left as an exercise for the class), which do they choose? This is tricky in this case, since the whole premise here is that their revealed preferences are inconsistent over time, but you could in principle measure their revealed preferences at multiple different times and weight the results accordingly (assuming for simplicity that all preference-moments carry identical weight).
When you were done doing all of that, you’d know whether A > B, B > A, or A = B.
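As a toy illustration of that bookkeeping (with made-up observations, and the equal-weight simplification above):

```python
# A toy version of the tally described above. The observations are made up;
# each one records whether a choice at some moment satisfied category A or
# category B, and the equal weights encode the simplifying assumption that
# all preference-moments count the same.

observations = ["B", "B", "A", "B", "A", "B", "B"]  # hypothetical preference-moments
weights = [1.0] * len(observations)

score_a = sum(w for obs, w in zip(observations, weights) if obs == "A")
score_b = sum(w for obs, w in zip(observations, weights) if obs == "B")

if score_a > score_b:
    verdict = "A > B"
elif score_b > score_a:
    verdict = "B > A"
else:
    verdict = "A = B"

print(f"A: {score_a}, B: {score_b} -> {verdict}")
```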
It’s not in the least clear to me what good knowing that would do you. I suspect that this sort of analysis is not actually what you had in mind.
A more common approach is to decide which of A and B I endorse, and to assert that the one I endorse is his actual preference. E.g., if I endorse choosing to live over choosing to die, then I endorse B, and I therefore assert that B is his actual preference. But this is not emotionally satisfying when I say it baldly like that. Fortunately, there are all kinds of ways to conceal the question-begging nature of this approach, even from oneself.
I would instead ask “What preferences would this agent have, in a counterfactual universe in which they were fully informed and rational but otherwise identical?”
Quoting a forum post from a couple years ago...
“The problem with trying to extrapolate what a person would want with perfect information is, perfect information is a lot of fucking information. The human brain can’t handle that much information, so if you want your extrapolatory homunculus to do anything but scream and die like someone put into the Total Perspective Vortex, you need to enhance its information processing capabilities. And once you’ve reached that point, why not improve its general intelligence too, so it can make better decisions? Maybe teach it a little bit about heuristics and biases, to help it make more rational choices. And you know it wouldn’t really hate blacks except for those pesky emotions that get in the way, so lets throw those out the window. You know what, let’s just replace it with a copy of me, I want all the cool things anyway.
Truly, the path of a utilitarian is a thorny one. That’s why I prefer a whimsicalist moral philosophy. Whimsicalism is a humanism!”
The sophisticated reader presented with a slippery slope argument like that one checks, first, whether there really is a force driving us in a particular direction, something that makes the metaphorical terrain a slippery slope rather than just a slippery field, and second, whether there are any defensible points of cleavage in that terrain that could be used to build a fence and stop the slide.
The slippery slope argument you are quoting, when uprooted and placed in this context, seems to me to fail both tests. There’s no reason at all to descend progressively into the problems described, and even if there were, you could draw a line and say “we’re just going to inform our mental model of any relevant facts we know that it doesn’t, and fix any mental processes our construct has that are clearly highly irrational”.
You haven’t given us a link, but going by the principle of charity I imagine that what you’ve done here is take a genuine problem with building a weakly God-like friendly AI and transplant the argument into the context of intervening in a suicide attempt, where it doesn’t belong.
Thanks to all the pushback against my initial complaint, I’ve retracted my downvote. I announce this here so that I can signal what a wonderful rationalist I am.