So it wouldn’t be possible to convince you that 2+2=3? No matter the evidence?
Suppose someone claimed to be a rain god, or was credibly claimed to be one based on previous evidence, and tested this by going through an EMP, stripping, and generally removing any plausible way technological means could be involved; suppose they were then transported, while in a medically induced coma, to a series of destinations in large deserts that were not disclosed to them in advance, and at all times were directly under, in, or above rainclouds, defying all the meteorological patterns predicted by the best models just before the trip. I find it hard to see how you could reasonably fail to assign significant probability to a model which made the same predictions as “this person is a rain god”.
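To make the “hard to fail to assign significant probability” point concrete, here is a minimal Bayesian sketch; the prior, the per-trip likelihood ratio, and the trip count are all illustrative assumptions of mine, not figures from the thread.

```python
import math

# Illustrative only: how repeated observations that are far likelier under
# H = "rain follows this person" than under N = "ordinary meteorology"
# move the posterior, even starting from a tiny prior.

prior_h = 1e-9        # assumed prior probability of the rain-god-like model
lr_per_trip = 50.0    # assumed likelihood ratio per undisclosed-destination trip
n_trips = 10          # assumed number of independent trips

log_odds = math.log(prior_h / (1 - prior_h)) + n_trips * math.log(lr_per_trip)
posterior_h = 1 / (1 + math.exp(-log_odds))

print(f"posterior P(H | evidence) ~ {posterior_h:.6f}")
```

With those made-up numbers the posterior ends up close to 1; the qualitative point is just that enough independent, model-defying observations can swamp even a very small prior.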
Where does personal insanity become a factor in your probability estimates?
In one sense, basically everywhere there is a very-low or very-high probability belief, since obviously I can’t be more confident in any belief than I am in the reliability of my system of reasoning. I definitely consider this when I’m evaluating the proper strength of nearly-certain beliefs. In another sense, almost nowhere.
I don’t know exactly how confident I should be in my sanity, except that the probability of insanity is small. Also, I’m not confident there would be any evidence distinguishing ‘sane and rational’ from ‘insane but apparently rational’. I model a logically insane VAuroch as being like the anti-inductors: following different rules which are, by their own standards, self-consistent.
Since I can’t determine how to quantify it, my response has been to treat all other beliefs as conditioned on “my reasoning process is basically sound”, which gives a fair number of my beliefs tacit probability 1. If I find reason to question any of those beliefs, I will have to rederive every belief from the original evidence as far as possible, because questioning them exposes a significant flaw in the means by which I determine what beliefs to hold. Largely this set consists of mathematical proofs, but it also includes things like “there is not currently a flying green elephant in this room” and “an extant rain god is incompatible with reductionism”.
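One compact way to write down that move (the labels R and B are mine, not the commenter’s): let R be “my reasoning process is basically sound” and B any belief held conditional on it. Then

P(B) = P(B | R) · P(R) + P(B | ¬R) · P(¬R).

Reporting only P(B | R), which is 1 for things like checked mathematical proofs, is what gives those beliefs their tacit probability 1; the unconditional credence P(B) still carries all the uncertainty in P(R), which is why finding evidence against R would force rederiving everything from the original evidence.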
This is an amazingly apt description of the mind-state that Robert Anton Wilson called “Chapel Perilous”.
It is interesting that you think so, but I can’t make head or tail of his description of the state, and other descriptions don’t bear any particular resemblance to the state of mind I describe.
My position on the matter boils down to “All my beliefs may be unjustified, but until I have evidence suggesting they are, I should provisionally assume the opposite, because worrying about it is counterproductive.”
It’d be possible, but it would take more evidence than someone having been rained on for 14 years.
If you’re talking about models and predictions you’ve already made the relevant leap, IMO. Even if you’re calling the person a “god”, you’re still taking a fundamentally naturalistic approach; you’re not assuming basic mental entities, you’re not worshiping.
Calling someone a rain god is making the prediction “If I worship this person, rain will occur at the times I need it more often than it would if I did not worship this person.” Worship doesn’t stop being worship just because it works.