I have mentioned the Chinese room as an example of how details are included in thought experiments to activate certain intuitive reactions, especially in response to this:
But I just can’t see any motivation I agree with for making both a hypothetical and an alternate world instead of asking the simpler question which is apparently ‘the point’ of the exercise.
I certainly don’t defend Searle’s conclusion.
That’s the way vivid examples are; they don’t yield binary choices. I don’t see why we should be giving answers better suited to simple abstract questions when asked about vivid examples.
No real-world situations yield binary choices. For any question of the form “do you prefer A to B, or vice versa?” you are free to answer “in fact, I prefer C”. Only be aware that some people (me included) find this way of non-answering questions annoying; my experience is that it’s a pattern often used in endless evasive debates where people talk past each other without moving anywhere. There is a certain advantage in binary questions: they may not reflect all aspects of realistic decisions, but they are conducive to efficient communication.
Well, that’s fair enough, but I find thought experiments (like the Chinese room) irritating as well; they typically try to coax you into making some reasoning mistake while you reason visually about unrealistic assumptions, and then it can be quite difficult to vocalize what’s wrong, or even to realize you made a mistake (the Chinese room is a perfect example of this). If one wants me to answer the question “is it moral to kill one person when it is absolutely the only way to save 10 people, all with the same life expectancy?”, they should ask that question. To which I would answer something along these lines: the error rate in this sort of decision leads to far more deaths than it prevents, which makes forbidding such decisions beforehand the correct strategy to precommit to (I am a game programmer, so when I think about a future decision I think about how to decide).
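In that spirit, here is a minimal sketch of the precommitment arithmetic. Every number in it (how often such a dilemma appears to arise, how often the appearance is actually correct) is a made-up assumption for illustration, not an estimate:

```python
# Toy comparison of two precommitted policies over many situations that
# *look* like "killing one person is the only way to save ten".
# All numbers are illustrative assumptions, not estimates.
n_apparent = 1_000_000  # decision points that appear to be the dilemma
p_genuine = 1e-6        # fraction where the appearance is actually correct

genuine = n_apparent * p_genuine  # ~1 genuine case
mistaken = n_apparent - genuine   # the rest are false positives

# Policy "act": kill the one person whenever the situation looks genuine.
# A genuine case nets 10 saved minus 1 killed; a mistake just kills 1.
net_act = genuine * (10 - 1) - mistaken

# Policy "forbid": never act; each genuine case costs the 10 unsaved lives.
net_forbid = -genuine * 10

print(f"act:    {net_act:+,.0f} net lives")     # about -999,990
print(f"forbid: {net_forbid:+,.0f} net lives")  # about -10
```

Unless the agent’s judgement is reliable to a degree no human judgement is, the forbidding rule wins by orders of magnitude, which is the whole point of precommitting.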
I don’t like the following process:
1. You have an abstract moral question.
2. You take time to make up a much less abstract, more verbose, and more vivid example.
3. I am expected to ignore all the vivid detail, ‘get the point’, and answer the abstract moral question. (Plus I can be asked to, so to say, visualize a tiger, and then be told ‘but I said nothing of the stripes’ if my visualization has stripes on the tiger and they matter.)
I share your irritation with the Chinese room experiment, but I don’t have the same objection to the discussed hospital scenario; the level of non-realism is much lower in the latter. The Chinese room tacitly assumes all involved agents are normal people (so that our intuitions about knowing and understanding hold) while also assuming the man in the room is able to learn a vast algorithm which we have not even been able to develop as a computer program yet. In the hospital case, the non-realism is of the sort “this doesn’t usually happen”.
Consider the scenario put in this form:
You have this dialogue with your doctor:
Doctor: “I’ve had nightmares recently because of what I’ve done. I feel I can’t keep it to myself anymore, and it may as well be you whom I tell my secret, if you don’t object, of course.”
You: “Go on.”
“Well, I have killed a man. I did it to save others, but I still suspect I might have done something very wrong. There were ten patients in the hospital, all in need of organ transplants. Each needed a different organ, and each of them was in serious danger of dying if a donor didn’t appear quickly. Then a stranger wandered in. He wanted a routine checkup, but from the blood test I realised that, by sheer accident, he would probably be an ideal donor for all ten patients we had in the hospital. You know, we don’t receive many donors in our hospital, and we had little time. Almost certainly, this was the last chance to save those people.”
“But you couldn’t be sure that the transplants would be successful, just based on a simple blood test.”
“Of course. When I got the idea, I told the man that I needed to do more tests to rule out my suspicion of a serious disease. I also asked him questions about his personal life to find out whether he had children or family who would grieve his death. It turned out not to be the case.”
“Yes, but even with an ideal donor, the quality and length of life of the transplantees are usually poor.”
“Actually, according to my statistics, half of the patients will survive twenty years with modest inconveniences. That’s five people. One or two of the remaining five are going to die within a couple of years, but still, I was buying twenty years of life for five patients who would have died in a few weeks otherwise. The stranger was in his fifties, so he could have lived for thirty years more.”
“But wasn’t there another solution? You could have killed one of the patients and used his organs, for example.”
“Do you think this didn’t occur to me? It couldn’t be done. The patients were closely monitored, and their families would sue the hospital at the slightest suspicion. If the truth came out, the hospital would certainly lose the trust of the public, and perhaps be closed, causing many unnecessary deaths in the future. I was able to kill the stranger and arrange it as a traffic accident with a head injury. I couldn’t have done that with the patients, nor could I come up with an alternative plan to secretly kill any of them.”
“But there have to be thousands of alternative solutions. Literally.”
“Maybe there were. I thought about it for several days, and no alternative solution occurred to me. After a few days, the stranger insisted he couldn’t stay longer. At that point I had no alternative solution available; I was choosing between basically two options. I chose to kill.”
So, what are you going to do? Will you call the police? Will you morally blame the doctor? In this setting you can’t call for alternative solutions. The scenario is not probable, of course, but there are no blatantly absurd assumptions which would allow you to discard it as completely implausible, as in the Chinese room case.
Well, look at how you had to arrive at this example. There had to have been an iteration with a traveller, and the example had to be adjusted so that this traveller is an ideal donor for 10 people, none of whom is a good donor for the remaining 9. We’re down to probabilities easily below 10^-10, meaning ‘not expected to ever have happened in the history of medicine’. (Whereas the number of worldwide cases where someone got killed for organs is easily in the tens of thousands.) The human immune system does not work so conveniently for your argument, so you’ll have to drop the transplant example and come up with something else.
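A back-of-the-envelope sketch of where an estimate like 10^-10 can come from. The per-pair match probability below is an assumption for illustration (real histocompatibility involves blood groups, HLA matching, organ condition, and more):

```python
# Probability that one random stranger is an ideal donor for all n patients,
# treating per-pair compatibility as independent (an illustrative assumption).
p_pair = 0.1  # assumed chance a random donor ideally matches one patient; generous

for n in (5, 10, 15):
    print(f"n={n:2d}: p = {p_pair ** n:.0e}")
# n= 5: p = 1e-05
# n=10: p = 1e-10
# n=15: p = 1e-15
```

Each extra patient multiplies the required coincidence by another factor of p_pair: the gain from acting grows linearly with n while the probability of the setup falls exponentially.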
This should serve as a quite effective demonstration of how extremely rare such circumstances are. So rare that you cannot reason about them without the aid of another person who strikes down your example repeatedly, forcing you to refine it. At the same time, the cases where something like this is done for personal gain and then rationalized as selfless and altruistic are commonplace.
Privileging these exceedingly improbable situations with the same level of consideration as the much, much more probable situations is a case of extreme bias.
The issue with rare situations is that the false positive rate can be dramatically larger than the rate at which the event actually happens, meaning that the majority of detected events are false positives. When you carelessly increase the number of lives saved while taking apart the traveller example, you are linearly increasing the gain but exponentially decreasing the probability of this bizarre histocompatibility coincidence.
If I heard that story from a doctor, I would think: what is the probability of this histocompatibility coincidence? Very, very low; I am guessing below 1E-10 (likely well below). What is the probability that the doctor is beginning to succumb to a mental disorder of some delusionary kind? Far larger, on the order of 1/1000 to 1/10000. Meaning that when you hear such a story, the probability that it is true is still very low, and the most likely explanation is that the doctor is simply nuts (and most likely he just lied to you about the entire thing for the sake of argument or something). Meaning that it would be (from a utilitarian standpoint) more optimal to do nothing (based on the belief that the story was entirely made up) or to call the police (based on the belief that he did actually kill someone, or is planning to). [Of course, I would try to estimate the probabilities as reliably as I could before calling the police, to a far greater degree of confidence than in an argument here.]
edit: a good example, the ambulance story here: http://lesswrong.com/lw/if/your_strength_as_a_rationalist/
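That update, put as a crude Bayesian sketch restricted to three candidate explanations. The priors echo the rough figures above; the likelihoods (how probable such a confession is under each hypothesis) are my assumptions for illustration:

```python
# Crude posterior over explanations for the doctor's confession, restricted
# to three candidate hypotheses, so the results are relative to this set.
hypotheses = {
    # name: (assumed prior, assumed P(tells this story | hypothesis))
    "story is true":       (1e-10, 0.5),
    "delusional disorder": (1e-4,  1e-3),
    "lying or testing me": (1e-3,  1e-3),
}

joint = {h: prior * like for h, (prior, like) in hypotheses.items()}
total = sum(joint.values())
for h, j in joint.items():
    print(f"{h:20s} -> {j / total:.5f}")
# story is true        -> ~0.00005
# delusional disorder  -> ~0.09090
# lying or testing me  -> ~0.90905
```

Even granting the confession a high likelihood if it were true, the coincidence prior is so small that nearly all the posterior mass lands on delusion or lying.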
Basically, thought experiments like that are usually just a way of forcing a person to make a mistake when reasoning non-verbally, in the hope that he won’t be able to vocalize the mistake (or even realize he made one). In this case, the mistake the example tries to trick the reader into is ignoring the error rate of the agent making the decision, in circumstances where that rate is BY FAR (at least by 6 orders of magnitude, I’d say, for the 10 patients) the dominating number in the utility equation.
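To make “dominating number” concrete: taking the 10^-10 coincidence figure and the 1/1000 delusion figure from above (both rough assumptions), the error term swamps the gain term:

```python
# Expected-utility terms for acting on the apparent ten-way coincidence.
# Both probabilities are the rough assumed figures from the discussion above.
p_real = 1e-10   # the ten-way donor coincidence is actually happening
p_error = 1e-3   # the agent is deluded or mistaken about the situation

gain_term = p_real * (10 - 1)  # ten saved minus the one killed
error_term = p_error * 1       # one innocent person killed for nothing

print(f"error/gain = {error_term / gain_term:.0e}")  # ~1e+06
```

The error term dominates by about six orders of magnitude; it is exactly the term the vivid story invites the reader to drop.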
Likewise, the Chinese room tries to trick the reader into making an error of 14 orders of magnitude or so. Such mind-bogglingly huge errors slip past reason; we are not accustomed to being this wrong.