Banish talk like “There is absolutely no evidence for that belief”.
But anecdotal evidence is evidence, and it ought to sway my beliefs.
The evidence can be weak enough, and/or be evidence for such an immense number of other things besides what it is claimed to be evidence for, as to be impossible to process qualitatively. If there's "no evidence", the effect size is usually pretty small: much smaller than the filtering the anecdotes pass through, much smaller than what can be inferred qualitatively, and so on.
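To make the filtering point concrete, a toy sketch in odds form (the numbers below are invented purely for illustration):

```python
# Toy sketch (invented numbers): how selection filtering can swamp the
# evidential weight of an anecdote.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 1.0  # start indifferent: 1:1 odds on the claim

# Taken at face value, an anecdote of this kind is, say, 3x more likely to
# exist if the claim is true.
lr_face_value = 3.0

# But suppose a supporting anecdote reaches you ~90% of the time when the
# claim is true and ~80% of the time even when it is false (people repeat
# stories that fit either way). The filtering nearly erases the weight.
lr_after_filtering = 0.9 / 0.8  # = 1.125

print(posterior_odds(prior_odds, lr_face_value))       # 3.0
print(posterior_odds(prior_odds, lr_after_filtering))  # 1.125
```

Once the filter passes supporting stories nearly as often when the claim is false as when it is true, the anecdote carries almost no weight, which is what "no evidence" usually amounts to in practice.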
2.
Banish talk like “I don’t know anything about that”.
There are a great many things that you have never even thought of, and you know nothing about those things. They have no probabilities assigned, and worse, they work as if they had a probability of zero. And you can't avoid that, because there are far more things you ought to enumerate than you can enumerate, by a very, very huge factor.
Having heard of something leads to a quite non-Bayesian change in belief (from an effective zero to non-zero). In light of this, degrees of belief are not probabilities but some sort of tag values attached to the propositions that have been considered (a very small subset of the totality of propositions), tags which need to be processed in a manner that arrives at the most accurate choices in the end despite the vast majority of relevant beliefs never being processed at all. (A manner which does resemble probability theory to some extent.)
Treating them more like probabilities will just lead to a larger final error, even though superficially the edit distance from your algorithm to some basic understanding of probability theory can seem smaller.
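A minimal sketch of that zero-to-non-zero jump (hypothetical propositions and made-up numbers):

```python
# Minimal sketch: being prompted with a proposition you had never considered
# is not Bayesian conditionalization; it is a renormalization that takes an
# effective zero to non-zero.

beliefs = {"H1": 0.7, "H2": 0.3}  # everything you have actually considered
# "H3" is not in the dict at all; for every inference you make it acts as P = 0.

# Someone merely mentions H3. No evidence has arrived, yet the distribution
# changes: you carve out some mass for H3 and renormalize the rest.
prior_for_new = 0.1
beliefs = {h: p * (1 - prior_for_new) for h, p in beliefs.items()}
beliefs["H3"] = prior_for_new

print(beliefs)  # roughly {'H1': 0.63, 'H2': 0.27, 'H3': 0.1}
```

No evidence arrived here; the distribution changed anyway, which is not an update Bayes' theorem describes.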
3.
Banish talk of “thresholds of belief”.
Imposing thresholds on both beliefs and on the evidence supporting them allows you to compensate for, and decrease the consequences of, the unavoidable errors described above. The thresholds were established after a very long history of inferring completely wrong conclusions from accumulations of evidence that were weaker than what can be usefully processed qualitatively, and that instead require a very accurate setup and quantitative calculation.
4.
Absence of evidence is evidence of absence.
Sometimes. And sometimes it's really weak evidence that isn't statistically independent of the belief it ought to affect.
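To put a rough number on how weak that can be, a toy calculation with made-up probabilities:

```python
# Toy calculation (made-up probabilities): how weak "absence of evidence" can be.
# If a search turns up evidence only slightly more often when H is true than
# when it is false, finding nothing barely moves the odds.

p_find_given_h = 0.10      # chance the search finds evidence if H is true
p_find_given_not_h = 0.08  # chance of a spurious find if H is false

# Likelihood ratio of *not* finding anything:
lr_absence = (1 - p_find_given_h) / (1 - p_find_given_not_h)
print(lr_absence)  # ~0.978: evidence of absence, but only barely
```

A shift of a couple of percent in the odds is far too small to track qualitatively.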
5.
Many bits of “common sense” rationality can be precisely stated and easily proved within the austere framework of Bayesian probability.
But you threw away those that cannot be easily demonstrated.
6.
You cannot expect[2] that future evidence will sway you in a particular direction.
Theorems of probability are not going to hold exactly for the optimum values that should be assigned to beliefs, in light of what's described in 2, and working as if they did hold cannot be expected to improve outcomes.
Keep in mind that you can reasonably expect that, in the future, a great many things you have never thought of may be brought to your attention, without your being able to actually enumerate and process any significant fraction of them right now.
edit: improved that some. Also, many of those limitations would hold for any physically plausible Jupiter Brains, Matrioshka Brains, or other such giant objects which, while they can process a great many more beliefs than you can, are still stuck processing only a minuscule fraction of the beliefs they ought to process.
edit2: interestingly, David Chapman touches on much the same points.
There are a great many things that you have never even thought of, and you know nothing about those things. They have no probabilities assigned, and worse, they work as if they had a probability of zero. And you can't avoid that, because there are far more things you ought to enumerate than you can enumerate, by a very, very huge factor.
You don’t need to enumerate beliefs to assign them nonzero probability. You can have a catch-all “stuff nothing like anything that’d ever even occur to me, unless it smacked me in the face” category, to which you can assign nonzero probability.
Those beliefs don't propagate where they should; that's the issue, and the universe doesn't care if you made an excuse to make it sound better. Those beliefs still have zero effect on inferences, and that's what matters. And when you get some of that weak "evidence", such as your Zeus example, it doesn't go towards other hypotheses; it goes towards Zeus, because Zeus is what you have been prompted with.
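To illustrate (a toy model; the likelihoods are invented, and apart from the Zeus example already under discussion the hypothesis names are just placeholders):

```python
# Toy model: a catch-all bucket holds probability mass, but you cannot write
# down a principled likelihood for "stuff nothing like anything that'd ever
# occur to me", so it does no real work in the inference.

beliefs = {"Zeus": 0.01, "no_gods": 0.94, "catch_all": 0.05}

# Evidence arrives: a body of myths about Zeus. You can say how likely the
# myths are under the hypotheses you were prompted with...
likelihood = {"Zeus": 0.9, "no_gods": 0.3}
# ...but for the catch-all there is nothing principled to put here, so in
# practice you plug in an arbitrary placeholder (or skip it entirely).
likelihood["catch_all"] = 0.3  # arbitrary

unnormalized = {h: beliefs[h] * likelihood[h] for h in beliefs}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # Zeus roughly triples; the catch-all barely moves
```

Whatever the catch-all "does" here is just the arbitrary placeholder likelihood you plugged in for it; the actual update flows to the hypothesis you were prompted with.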
Or when you process an anecdote: it would seem to me that with your qualitative Bayes you are going to shift your belief about the conclusion too much, and your belief about how the anecdote was picked too little (for contentious issues you expect anecdotes on both sides). Since you are doing everything qualitatively rather than quantitatively, that's an approximation, and an approximation that breaks down for what is normally not called "evidence".
edit: I'd think, by the way, that a real deity and a made-up deity would result in statistically different sets of myths, making a specific set of myths evidence either for or against a deity depending on the actual content of the myths. Just as a police report in which the suspect denies guilt can be evidence either against or for the guilt, depending on what the suspect actually said and how it squares with the other facts.
edit2: an analogy. Suppose you have a huge, enormous network of water pipes, or an electronic circuit. A lot of pipes, trillions. You want to find the water flow at a specific point, or the voltage at a spot. (Probability flows in an even more complicated manner than water in pipes or electricity through resistor networks, by the way, and the numbers are far larger than trillions.) I am telling you that you aren't considering a lot of the pipes; they have an effective flow of zero where they should have non-zero flow. You're saying that no, you can have one thick pipe standing in for all the flows you didn't even consider, a pipe that isn't really connected to much of anything. As far as processing the flows goes, that does not even make coherent sense.
Bayes' theorem only works with as much information as you put into it. Humans can only ever be approximate Bayesian agents. If you learn about some proposition you never thought of before, it is not a failing of Bayesian reasoning; it is just that you learn you have been doing it wrong up until that point and have to recompute everything.