In defense of Sherlock Holmes:
The typical Sherlock Holmes story has Holmes perform twice. First he impresses his client with a seemingly impossible deduction; then he uses another deduction to solve the mystery. Watson or the client convinces Holmes to explain the first deduction, which gives the reader the template Holmes will use for the second (likely inferences from small details). The data that Holmes uses to make the second deduction are in the text and available to the reader—the reader’s challenge is to make Holmes’s inference in advance.
Holmes himself attributes his success to observation, not rationality. (There’s a startling passage in A Study In Scarlet where Holmes tells Watson that he can’t be bothered to remember that the earth orbits the sun! Search for ‘Copernican Theory’ in the full text for the passage.) The Sherlock Holmes stories are intended to be exercises in attention to detail, which is surely a useful skill for a rationalist.
When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.
Beautiful comment, but I’d add that whatever remains of the hypotheses you considered is often more improbable than your having missed an unconsidered alternative.
I just stumbled across this and felt this comment and the one above it were worth reminding everyone of in light of the Knox case discussion. Way too many of our discussions have involved trying to come up with accounts of the crime that make sense of all the evidence. In retrospect I would label such discussions as fun, but unhelpful.
I reject that entirely. The impossible often has a kind of integrity to it which the merely improbable lacks. How often have you been presented with an apparently rational explanation of something which works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, ‘Yes, but he or she simply wouldn’t do that.’
-- Dirk Gently
I view Dirk Gently as a kind of wonderfully effective strawman, and his stories were a great aid to realizing I was an atheist, because at first he seems correct: surely, rather than a “localized meteorological phenomenon”, it makes more sense that the guy who’s been rained on for 14 straight years is some kind of rain god.
And then I think about what would happen in the real world, and realize that no, even if someone had been rained on for 14 years straight, I would not believe that they were a rain god. Because rain gods are actually impossible.
That part hit me like a punch in the gut.
In the real world, these are mostly just games we play with words.
Someone who has been rained on for 14 years straight has an extremely surprising property.
The label we assign that property matters a little, since it affects our subsequent behavior with respect to it. If I call it “rain god” I may be more inclined to worship it; if I label it a “localized meteorological phenomenon” I might be more inclined to study it using the techniques of meteorology; if I label it an extremely unlikely coincidence I might be more inclined not to study it at all; if I label it the work of pranksters with advanced technology I might be more inclined to look for pranksters, etc.
Etc.
But other things matter far more.
Do they have any other equally unlikely observable attributes, for example?
Did anything equally unlikely occur 14 years ago?
Etc.
Worrying overmuch about labels can distract us from actually observing what’s in front of us.
So it wouldn’t be possible to convince you that 2+2=3? No matter the evidence?
Suppose someone claimed to be a rain god, or was credibly claimed to be one based on previous evidence, and tested this by going through an EMP, stripping, and generally removing any plausible way technological means could be associated with them, then being transported, while in a medically induced coma, to a series of desert destinations not disclosed to them in advance. If at all times they were directly under, in, or above rainclouds, defying all meteorological patterns predicted by the best models just before the trip, I find it hard to see how you could reasonably fail to assign significant probability to a model which made the same predictions as “this person is a rain god”.
Where does personal insanity become a factor in your probability estimates?
In some sense, basically everywhere there is a very-low or very-high probability belief, since obviously I can’t be more confident in any belief than I am in the reliability of my system of reasoning. I definitely consider this when I’m evaluating the proper strength of nearly-certain beliefs. In another sense, almost nowhere.
I don’t know exactly how confident I should be in my sanity, except that the probability of insanity is small. Also, I’m not confident there would be any evidence distinguishing ‘sane and rational’ from ‘insane but apparently rational’. I model a logically insane VAuroch as being like the anti-inductors: following different rules which, by their own standards, are self-consistent.
Since I can’t determine how to quantify it, my response has been to treat all other beliefs as conditioned on “my reasoning process is basically sound”, which gives a fair number of my beliefs a tacit probability of 1. If I find reason to question any of these beliefs, I will have to rederive every belief from the original evidence as far as possible, because such a discovery would expose a significant flaw in the means by which I determine what beliefs to hold. Largely this class consists of mathematical proofs, but also things like “there is not currently a flying green elephant in this room” and “an extant rain god is incompatible with reductionism”.
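The conditioning move described above can be put in numbers. A minimal sketch, with all figures purely hypothetical illustrations (nothing in the thread specifies them):

```python
# Illustrative sketch: treat a belief as conditioned on
# "my reasoning process is basically sound".
# All numbers are hypothetical assumptions chosen for the example.
p_sound = 0.99                # credence that my reasoning is sound
p_belief_given_sound = 1.0    # the "tacit probability 1" conditional belief
p_belief_given_unsound = 0.0  # assume a broken process lends no support

# Law of total probability: the unconditional credence is capped
# by the credence in the reasoning process itself.
p_belief = (p_belief_given_sound * p_sound
            + p_belief_given_unsound * (1 - p_sound))
print(p_belief)  # 0.99
```

Conditional on soundness the belief is certain, but the unconditional credence never exceeds the credence in one’s own reasoning, which is the sense in which insanity figures in “basically everywhere” while being ignorable “almost nowhere”.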
This is an amazingly apt description of the mind-state that Robert Anton Wilson called “Chapel Perilous”.
It is interesting that you think so, but I can’t make head or tail of his description of the state, and other descriptions don’t bear any particular resemblance to the state of mind I describe.
My position on the matter boils down to “All my beliefs may be unjustified, but until I have evidence suggesting they are, I should provisionally assume the opposite, because worrying about it is counterproductive.”
It’d be possible, but it would take more evidence than someone having been rained on for 14 years.
If you’re talking about models and predictions you’ve already made the relevant leap, IMO. Even if you’re calling the person a “god”, you’re still taking a fundamentally naturalistic approach; you’re not assuming basic mental entities, you’re not worshiping.
Calling someone a rain god is making the prediction “If I worship this person, rain will occur at the times I need it more often than it would if I did not worship this person.” Worship doesn’t stop being worship just because it works.
This reminds me of a bit in the Illuminatus! trilogy—there was a man who had filing cabinets full of information about the Kennedy assassination. [1]
He kept hoping that he’d find one more piece of information which would make sense of everything he’d accumulated, little realizing that most of what he had was people getting things wrong and covering their asses.
[1] Once upon a time, it was normal to store information in filing cabinets, and there was only one Kennedy assassination.
Sounds like me and my PhD project.
If you don’t mind, what was the subject?
The thing is, it’s usually much easier to solve the mystery by getting a feel for Doyle’s tells than by trying to piece together whatever abstruse chain of deductions Holmes is going to use. Examples:
Watson is an incredibly good judge of character. If he thinks someone seems cold, that person is heartless. If he says someone seems shifty, they are guilty of something (although maybe not the crime under investigation).
The woman never did it. The only two exceptions to this are a story in which he clears one woman to implicate another (who is the only other possible suspect), and one in which an innocent woman is corrupted and manipulated by an evil man.
Just from those two rules you can usually figure out whodunit, at which point you can occupy yourself by figuring out how, a task made relatively simple by conservation of detail.
“Holmes himself attributes his success to observation, not rationality. (There’s a startling passage in A Study In Scarlet where Holmes tells Watson that he can’t be bothered to remember that the earth orbits the sun!)”
He then states that such knowledge can have no influence on the things he’s concerned about, and so he doesn’t bother learning it.
That seems like a starkly rational position.
I appreciate your defense of Holmes. As I mentioned in another comment, I haven’t read much of him, but I do remember one particular passage which annoyed me: I felt I could have figured out the deduction had I been there in person, but because of the way Dr Watson narrates, it eluded me.
Basically, Watson describes the client as wearing some sort of “odd circular jewelry with square holes through which a thin string was passed” (paraphrased from memory). From this, Sherlock deduces that the client has recently been on vacation to China. How? Well, that jewelry is a string of Chinese coins, of course!
I know what Chinese coins look like, but was completely misled by Watson’s description. Furthermore, “recent vacation to China” is something of a lucky guess. Perhaps it was a friend who went to China and brought these coins back as a souvenir gift.
Fortunately you can rely on Conan Doyle and writers in general being parsimonious: unlike reality, stories don’t contain odd details unless they’re important to the plot.
I don’t think that most Holmes stories should even be read as we do modern mystery stories. They are adventure stories, and Conan Doyle is more like an intellectual Raymond Chandler than a precursor to Agatha Christie (although I suppose, in fact, that he is both). It’s simply impossible to solve most of the early ones (including A Study in Scarlet), although the later stories (which postdate Christie’s first stories) were more honest mysteries. (At one point he even has Watson apologise for having been unfair in the past.)