I reject that entirely. The impossible often has a kind of integrity to it which the merely improbable lacks. How often have you been presented with an apparently rational explanation of something which works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, ‘Yes, but he or she simply wouldn’t do that.’
-- Dirk Gently
I view Dirk Gently as a kind of wonderfully effective strawman, and his stories were a great aid to realizing I was an atheist, because at first he seems correct: surely, rather than a “localized meteorological phenomenon”, it makes more sense that the guy who’s been rained on for 14 straight years is some kind of rain god.
And then I thought about what would happen in the real world, and realized that no, even if someone had been rained on for 14 years straight, I would not believe that they were a rain god. Because rain gods are actually impossible.
That part hit me like a punch in the gut.
In the real world, these are mostly just games we play with words.
Someone who has been rained on for 14 years straight has an extremely surprising property.
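(Just how surprising: a minimal back-of-the-envelope sketch, assuming, purely for illustration, an independent 30% chance of being rained on on any given day. Both numbers below are mine, not anything from the story.)

```python
import math

p_rain_per_day = 0.3      # assumed daily chance of being rained on (illustrative only)
days = 14 * 365           # roughly 14 years of days, ignoring leap days

# Log-probability of being rained on every single one of those days,
# treating days as independent.
log10_prob = days * math.log10(p_rain_per_day)
print(f"log10 P(rain every day for 14 years) ≈ {log10_prob:.0f}")
# ≈ -2672, i.e. about 1 in 10^2672 under these naive assumptions.
```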
The label we assign that property matters a little, since it affects our subsequent behavior with respect to it. If I call it “rain god” I may be more inclined to worship it; if I label it a “localized meteorological phenomenon” I might be more inclined to study it using the techniques of meteorology; if I label it an extremely unlikely coincidence I might be more inclined not to study it at all; if I label it the work of pranksters with advanced technology I might be more inclined to look for pranksters, etc.
Etc.
But other things matter far more.
Do they have any other equally unlikely observable attributes, for example?
Did anything equally unlikely occur 14 years ago?
Etc.
Worrying overmuch about labels can distract us from actually observing what’s in front of us.
So it wouldn’t be possible to convince you that 2+2=3? No matter the evidence?
Suppose someone claimed to be a rain god, or was credibly claimed to be one based on previous evidence, and tested this as follows: they go through an EMP, strip, and generally remove any plausible way technological means could be associated with them; they are then transported, while in a medically induced coma, to a series of destinations in large deserts not disclosed to them in advance; and at all times they are directly under, in, or above rainclouds, defying all the meteorological patterns predicted by the best models just before the trip. In that case I find it hard to see how you could reasonably fail to assign significant probability to a model which made the same predictions as “this person is a rain god”.
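A rough sketch of the odds arithmetic behind that intuition. Every number here (the prior odds, the evidential strength of each undisclosed desert stop, the number of stops) is an illustrative assumption, not a claim about the right values:

```python
prior_odds = 1e-20     # assumed prior odds of a "rain-god-like" model vs. ordinary meteorology
lr_per_stop = 1e3      # assumed likelihood ratio per undisclosed desert stop found under rainclouds
stops = 10             # assumed number of stops in the test

posterior_odds = prior_odds * lr_per_stop ** stops
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior odds ≈ {posterior_odds:.0e}, posterior probability ≈ {posterior_prob:.4f}")
# Under these numbers the posterior odds are about 1e10 to 1: a long chain of
# independently improbable observations swamps even an extremely hostile prior.
```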
Where does personal insanity become a factor in your probability estimates?
In one sense, basically everywhere there is a very-low or very-high probability belief, since obviously I can’t be more confident in any belief than I am in the reliability of my system of reasoning. I definitely consider this when I’m evaluating the proper strength of nearly-certain beliefs. In another sense, almost nowhere.
I don’t know exactly how confident I should be in my sanity, except that the probability of insanity is small. Also, I’m not confident there would be any evidence distinguishing ‘sane and rational’ from ‘insane but apparently rational’. I model a logically insane VAuroch as being like the anti-inductors: following different rules which are, by their own standards, self-consistent.
Since I can’t determine how to quantify it, my response has been to treat all other beliefs as conditioned on “my reasoning process is basically sound”, which gives a fair number of my beliefs a tacit probability of 1. If I find reason to question any of these beliefs, I will have to rederive every belief from the original evidence as far as possible, because such a discovery would expose a significant flaw in the means by which I determine what beliefs to hold. Largely these are mathematical proofs, but also things like “there is not currently a flying green elephant in this room” and “an extant rain god is mutually incompatible with reductionism”.
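Numerically, here is roughly what that conditioning cashes out to; the 0.999 and 0.5 below are assumptions of mine rather than careful estimates:

```python
p_sound = 0.999          # assumed credence that my reasoning process is basically sound
p_b_given_sound = 1.0    # the belief in question, conditional on soundness ("tacit probability 1")
p_b_given_unsound = 0.5  # assumed: a broken reasoning process might still land on the truth by luck

p_b = p_b_given_sound * p_sound + p_b_given_unsound * (1 - p_sound)
print(f"unconditional credence ≈ {p_b:.4f}")
# ≈ 0.9995: the conditional 1 never becomes an unconditional 1; it stays
# capped by my credence in the reasoning process itself.
```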
This is an amazingly apt description of the mind-state that Robert Anton Wilson called “Chapel Perilous”.
It is interesting that you think so, but I can’t make head or tail of his description of the state, and other descriptions don’t bear any particular resemblance to the state of mind I describe.
My position on the matter boils down to “All my beliefs may be unjustified, but until I have evidence suggesting they are, I should provisionally assume the opposite, because worrying about it is counterproductive.”
It’d be possible, but it would take more evidence than someone having been rained on for 14 years.
If you’re talking about models and predictions you’ve already made the relevant leap, IMO. Even if you’re calling the person a “god”, you’re still taking a fundamentally naturalistic approach; you’re not assuming basic mental entities, you’re not worshiping.
Calling someone a rain god is making the prediction “If I worship this person, rain will occur at the times I need it more often than it would if I did not worship this person.” Worship doesn’t stop being worship just because it works.
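A minimal sketch of how that prediction could be scored, with hypothetical counts standing in for real observations:

```python
# Hypothetical counts: days with needed rain while worshipping vs. a control period.
worship_days, worship_rain = 200, 140
control_days, control_rain = 200, 60

rate_worship = worship_rain / worship_days
rate_control = control_rain / control_days
print(f"rain rate while worshipping: {rate_worship:.2f}, while not: {rate_control:.2f}")
# If the first rate reliably exceeds the second across many independent splits,
# the prediction is doing real work; if it does not, the label earned nothing.
```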