I’m kicking myself for not realizing this, but you’re right. A probability of zero on their left arm being paralyzed only comes from people with anosognosia and people who are not paralyzed. Therefore, a non-zero probability only comes from people who are paralyzed and do not have anosognosia, in which case their probability is 1.
Estimating between zero and 1 by definition means you cannot be anosognosic. However, it also means you are not paralyzed, because only anosognosic paralytics place a non-1 probability on their condition. Therefore, if you are not certain you are paralyzed, you must be certain you are not. I am consequently forced to place a probability of zero on my left arm currently being paralyzed.
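For concreteness, here is a minimal sketch of the stylized model behind that reasoning. The “absolute denial” rule, and the assumption that everyone else reports their true state exactly, are taken from this thread’s framing, not from the clinical literature:

```python
from itertools import product

# Toy model of the category logic above. The "absolute denial" rule is an
# assumption from this thread, not a clinical claim: anosognosics report
# probability 0, and everyone else reports their true state exactly.

def reported_probability(paralyzed: bool, anosognosic: bool) -> float:
    """Probability a person assigns to 'my left arm is paralyzed'."""
    if anosognosic:
        return 0.0                     # absolute denial
    return 1.0 if paralyzed else 0.0   # accurate self-report

for paralyzed, anosognosic in product([False, True], repeat=2):
    p = reported_probability(paralyzed, anosognosic)
    print(f"paralyzed={paralyzed}, anosognosic={anosognosic} -> reports {p}")

# Every category reports exactly 0 or exactly 1, so an intermediate
# estimate is unreachable inside the model -- which is what generates the
# uncomfortable conclusion above.
```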
I strongly suspect that a Less Wrong reader with anosognosia would at first reply, “Well, of course I’m not certain that my left arm isn’t paralyzed—you can’t be infinitely certain about anything.” And they might well go on to say, “But the fact that I don’t feel any absolute certainty along these lines is, in fact, evidence that I don’t have anosognosia”.
I have seen many would-be rationalists who say “I’m not certain” and then, secure in having proven their rationality as much as anyone could possibly ask, forge straight on without a second glance. See The Proper Use of Humility.
I’m still having trouble with this one. I don’t know why this particularly morbid example popped into my head, but here it is: we have very strong survival instincts. These can be overcome by overdosing on pills or jumping off a bridge. However, they cannot be overcome by holding your head in a bucket of water and trying to drown. You may be determined to kill yourself, but every time you try, as soon as you start breathing water you’re going to pull your head back out. Now, I can say “I’m not certain” I can’t kill myself this way—but in reality, I know it’s not possible. My brain has a very real physical process that just won’t let me. Therefore, I don’t feel it is honest to say “I’m not certain” in this case. In fact, saying “I’m not certain” here feels very much like saying “the sky is green” or “I like to eat glass”, i.e., it feels like bullshit. Is that something a rationalist needs to overcome, if only so he can admit, “I could say I am not certain, but I am”? The argument against this seems akin to saying “I am not certain I am not a butterfly,” even though it is not possible for a butterfly to have such a thought.
The Proper Use of Humility is one of my favorite articles of yours, by the way, and I do feel like I’m making progress in a worthy direction, even if it can seem like I’m random-walking on the way there.
How do you know that? It doesn’t seem intuitively obvious to me that you can’t train yourself to successfully drown this way. It’ll take more than intuition, and I can’t think of a way this could’ve been reliably studied, so I don’t believe you can have a good reason for this belief.
Anecdotal evidence: When I swim for distance underwater, and really push myself, I will often experience a strong compulsion to surface, even when I believe I can hold out for a few more feet and reach my goal. I am not even afraid of drowning, yet I consistently follow the compulsion to surface.
I can’t think of a way to study this in an ethical controlled experiment, but data can be gathered from suicides and attempted suicides that would be relevant to the theory.
I have similar experiences when swimming underwater. I used to see if I could swim the length of the pool in one breath, and often would surface seemingly prematurely out of a sudden strong desire to take a breath.
My old roommate reported having lots of trouble letting go of a handle when skydiving. He very much wanted to dive, and was not afraid of an unsafe landing, but instinct was very difficult to overcome.
Which reminds me that there are people who can hold their breath for insane amounts of time, so presumably they have overcome this instinct, and resume breathing only when they intellectually decide they must do so to survive (and they likely know a lot about the properties of this danger).
I am with you in disagreeing with Eirenicon’s assertion that self-drowning in a bucket is impossible with probability 1, though I believe with high probability that it is difficult beyond the ability of most people. I was mainly objecting to your assertion that this couldn’t be studied.
Also, merely holding your breath is not dangerous. You would pass out before suffering any permanent damage, and breathe normally while unconscious. It would only be dangerous in an environment, such as underwater, in which you could not breathe normally after passing out.
It could be apocryphal, and it doesn’t help that it seems like something I heard about a long time ago, but as far as I know, when you start to drown the best of your intentions are overcome by your instinct for self-preservation. However, Google turns up a result from the Telegraph about a recent case in which someone may indeed have drowned himself in a bucket of water, although there seems to be some confusion over the case. Thanks for calling me on it—I really am now, in fact, not certain I couldn’t.
I couldn’t think of a better example at the time, though, so the spirit of the argument will have to stand in for its questionable veracity.
When an inmate is found drowned in a bucket of water in a cell with three other inmates, my first theory is not suicide while the other inmates are sleeping.
In a Macedonian jail, accused of raping and murdering four elderly women? I had the same reaction.
Even if no examples of this were available, it’s not the kind of evidence that is enough to claim that something is impossible.
You’re right, and I won’t argue it. The idea of not impossible is one I have difficulty with, though. In my original post, replace the drowning example with “do a thing my brain will not let me do”, for lack of a better alternative. With anosognosia, that thing is “recognize left-arm paralysis”. The reason I didn’t stick with that is that I don’t know whether I have anosognosia or not, which is another layer of uncertainty. Stripped down, though, this is what I’m saying: it seems I should be uncertain about things I know to be certain, and that seems dishonest. I understand the argument against infinite certainty, and that 0 And 1 Are Not Probabilities. Perhaps it’s because, as EY suggests, people often say “I can’t be certain” simply to establish themselves as rational rather than actually assessing probability. Perhaps it’s simply because I dislike an infinitely uncertain universe. Of course, the universe isn’t interested in what I like. The map, as ever, is not the territory.
You should say that something is impossible, without intending that to mean zero probability, if you can safely antipredict that event. Antiprediction means that you think of an event as if it can’t happen. Intuition resulting from thinking of a sufficiently low-probability event as impossible is more accurate than intuition resulting from thinking of it as still possible.
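A small, purely illustrative sketch of why that can be the better habit for decisions: keep a tiny probability explicit, and check that rounding it to “impossible” recommends the same action. All numbers below are hypothetical:

```python
# All numbers are hypothetical. Compare the decision made with the tiny
# probability kept explicit versus rounded down to "impossible".

p_event = 1e-12         # probability of the freak event
loss_if_event = 1e6     # cost if it happens
cost_of_hedging = 5.0   # cost of guarding against it

expected_loss_unhedged = p_event * loss_if_event  # 1e-6
print(expected_loss_unhedged < cost_of_hedging)   # True: don't pay to hedge

# Rounding p_event to exactly 0 recommends the same action, so for
# decision-making the "it can't happen" intuition tracks the math, while
# vividly imagining the event tends to inflate its felt probability.
```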
Antiprediction is a very interesting suggestion. Your aggressive reasoning in this thread has changed the way I think about a few things. Well done, and thanks!
That was precisely my first reaction, and in fact I originally wrote that admitting you can’t be certain your left arm isn’t paralyzed may seem like a stronger defense against an accusation of anosognosia than claiming infinite certainty. However, I realized that saying you can’t be absolutely certain your left arm is not paralyzed while absolutely denying that it is seems like a pretty obvious contradiction. The very reason we’re talking about anosognosia is that it is unique: you aren’t saying “I’m not certain, but I don’t think I’m paralyzed,” but rather “I’m not certain, but I completely reject any evidence that I’m paralyzed.”
I don’t fully understand the condition, though. Would a Less Wrong reader with anosognosia be able to realize he had it if you confronted him on the notion, not that he is paralyzed, but that he is totally rejecting the evidence instead of exhibiting real uncertainty? Difficult to wrap my head around.
I did study logic for a while, though, and it gave me an unfortunate predilection for resolving to certainty when I should at least be providing reasonable probability bounds.
You are arguing by definition (little can be learned that way), and throwing in infinite certainty. I doubt an anosognosic believes it’s impossible for them to be paralyzed any more strongly than I believe that 2+2=4, or that there is no God. Maybe that belief isn’t even that strong; the only problem with it is that it won’t go away in the face of counterevidence.
I don’t know if that’s correct, actually.
Anosognosia seems to be a symptom of a catastrophic failure of the brain’s ability to reconsider current beliefs in light of new evidence; these systems are apparently localized to the right hemisphere, which is why you won’t find anosognosiacs with paralyzed right arms, only left.
If a god descended from the heavens and spoke to you, personally, declaring existence and providing myriad demonstrations of divine power, I expect you would reconsider your belief at least a little bit. Anosognosiacs routinely deny equally compelling evidence for the paralysis of their arm!
Given what you already know about the world (including the possibility of insanity and simulations), how much evidence should be necessary to convince you in that situation? A subjective year? One hundred? One thousand? More? Once you’ve already decided that you’re insane or in a simulation with probability X, I can’t see how any evidence of anything could be useful if you already assign less than probability X to that thing. It’s a local minimum you can’t escape from, as far as I can see. One reason I’m not especially anti-religion is that I think at least some theists are in the same position: there’s no evidence that is more likely to be real evidence that there is no god than it is to be evidence of testing by fallen angels, or whatever.
But maybe I’ve just missed the excellent discussion about this?
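To put a rough number on the intuition above, here is a hedged Bayesian sketch. The hypothesis names, priors, and likelihoods are all made up for illustration; the only point is the structure of the update:

```python
# Hedged Bayesian sketch; every number here is hypothetical.
# H_god: a god really is demonstrating divine power to me.
# H_mad: I am insane, or in a simulation that fakes such demonstrations.
# If the spectacular evidence E is near-certain under both hypotheses,
# Bayes' rule leaves their relative odds roughly where the priors put them.

p_god, p_mad = 1e-9, 1e-4    # hypothetical priors
lik_god, lik_mad = 1.0, 1.0  # E is fully expected under both hypotheses

# Posterior probability of H_god, considering just these two hypotheses
# (other hypotheses are assumed to make E vanishingly unlikely).
posterior_god = (lik_god * p_god) / (lik_god * p_god + lik_mad * p_mad)
print(posterior_god)  # ~1e-5: still dominated by the insanity/simulation prior

# The observation moves nothing unless it is much *more* likely under
# H_god than under madness or simulation, which is hard to arrange.
```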
The ability to update on evidence is different from level of certainty. If I’m absolutely certain about something, I accept any bet on its being true. If I’m merely unable to update my belief but am not absolutely certain, I will only accept moderate bets, but I’ll keep accepting them long after the evidence should have eliminated any reasonable trust in the statement.
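To make the betting criterion concrete, here is a minimal sketch; the credences and stakes are arbitrary examples, not a claim about what anyone should bet:

```python
# Minimal sketch of the betting criterion above; stakes are arbitrary.
# With credence p in a statement, a bet that wins W and risks S has
# expected value p*W - (1 - p)*S, so it is worth taking iff S/W < p/(1-p).

def max_acceptable_stake(p: float, win: float) -> float:
    """Largest stake worth risking to win `win`, given credence p < 1."""
    return win * p / (1.0 - p)

print(max_acceptable_stake(0.99, 100))      # 9900.0: large but finite
print(max_acceptable_stake(0.999999, 100))  # ~1e8: growing without bound

# Only p = 1 would license accepting literally any bet, which is the
# distinction between absolute certainty and a mere failure to update.
```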
Okay, true. I was thinking about it backwards; absolute certainty does, of course, lead to an inability to update (which is why we don’t use 1 and 0 as probabilities).
Out of curiosity, which proposition do you have higher confidence in: “No being fitting the standard definitions of God exists” or “My left arm is not paralyzed”?
I don’t know: these probabilities are not technically defined, so I’m unable to compute them, and they are too low for my intuition to compare.
To satisfy SoullessAutomaton’s curiosity, I think phrasing it differently would have been better: which one would you bet on (say, $100) if you had to and could only pick one? (Assuming that both questions would get truthfully answered immediately after making the bet. That is just so you wouldn’t pick one of them merely because the question seems more interesting.)
That is a trivial transformation; I don’t see how it could change the interpretation of the question.
Is that to say that, if you had to make such a bet (at gunpoint, if you will), you’d be indifferent and might as well flip a coin to choose? If so, fair enough. If not, what more is there to it? (Assuming you don’t want to get killed for refusing to take the bet.)
I might as well flip a coin.
Funny, I was just reading the arguing-by-definition article, then clicked the red envelope and saw your reply. I looked it up because the post I just made here reminded me of it as well. However, I feel justified in this instance because anosognosia is characterized by absolute denial. As far as I can tell, this is an unusual form of brain damage because it is so black and white: 100% of anosognosics will absolutely deny their left arm is paralyzed. If they do not, then by definition (oops) it is not anosognosia, just as someone with a paralyzed left arm by definition cannot move it. Consequently, I don’t see the fallacy. I genuinely appreciate the criticism, though.
In any case, I avoided arguing the question, which itself is predicated on anosognosics assigning zero probability to their left arm being paralyzed. If they don’t, then there is nothing to base our probability estimate on, and the question is meaningless, like asking “There are a hundred trees, one in ten trees has an apple on it, how many apples are there? (Some apples are oranges)”.
Incidentally, I assign a probability of 1 to 2+2=4 and don’t understand why you would not. Can you explain?
You may find this post helpful.
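For what it’s worth, the core of that argument can be illustrated in a few lines. This is a hedged sketch with made-up numbers (a uniform 10:1 likelihood ratio per observation), not anything taken from the post itself:

```python
# Sketch of the odds-form Bayesian update. Start from even odds and apply
# one hundred strong, independent pieces of evidence, each with a 10:1
# likelihood ratio in favor of the hypothesis (all numbers illustrative).

odds = 1.0               # prior odds of 1:1
likelihood_ratio = 10.0  # each observation is 10x likelier if the claim is true

for _ in range(100):
    odds *= likelihood_ratio   # odds-form Bayes' rule: multiply per update

p = odds / (1.0 + odds)
print(p)  # prints 1.0, but only because of floating-point rounding

# Mathematically p = 10**100 / (10**100 + 1), which is strictly less than 1.
# Finitely many finite updates can never drive the odds to infinity, and
# infinite odds are what a probability of exactly 1 would require.
```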