One of the core beliefs of Orthodox Judaism is that God appeared at Mount Sinai and said in a thundering voice, “Yeah, it’s all true.” From a Bayesian perspective that’s some darned unambiguous evidence of a superhumanly powerful entity. (Albeit it doesn’t prove that the entity is God per se, or that the entity is benevolent—it could be alien teenagers.) The vast majority of religions in human history—excepting only those invented extremely recently—tell stories of events that would constitute completely unmistakable evidence if they’d actually happened. -- Religion’s Claim
I hate to be that guy, but could you list some examples of what would be in the set of supernatural phenomena, conditional on the existence of such a set?
A lot of miracle stories, if they really happened, would be best explained by someone tinkering with natural laws, don’t you think?
Even if the miracle stories actually happened as described, Sufficiently Advanced Technology would explain them just as well, and is probably a more likely explanation even if it doesn’t fit our narrative conventions for miracle stories.
It depends. If the “tinkering with natural laws” explanation is really, really complex, then mass hallucination can be more likely.
To generalize the point further, phrases like “best explained” tend to confuse the likelihood P(evidence|hypothesis) with the posterior probability P(hypothesis|evidence). Remember, you must always be mindful of the prior probability.
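A toy numerical version of the distinction, in case it helps. Every number here is made up; the only point is the structure of the calculation:

```python
# Toy illustration of likelihood vs. posterior.  All numbers are made up;
# the point is only the structure of the calculation.

hypotheses = {
    # name: (prior P(H), likelihood P(E|H)) for E = "this miracle story exists"
    "tinkering with natural laws": (1e-12, 0.95),
    "sufficiently advanced technology": (1e-9, 0.90),
    "mass hallucination / legend growth": (1e-3, 0.05),
}

# P(E) = sum over H of P(E|H) * P(H), pretending these hypotheses exhaust the space
p_evidence = sum(prior * likelihood for prior, likelihood in hypotheses.values())

for name, (prior, likelihood) in hypotheses.items():
    posterior = likelihood * prior / p_evidence  # Bayes' theorem
    print(f"{name}: likelihood {likelihood}, posterior {posterior:.3g}")

# On these numbers "tinkering" has the highest likelihood but by far the lowest
# posterior: "explains the evidence best" is not the same as "most probable".
```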
Be careful not to privilege the scientific prior, though. Somewhere between the inability to justify induction and the inability to justify your interpretation of the evidence lies some tricky argumentation that might be flawed. E.g. in some ontologies there’s an upper bound on the confidence you can have that we’re not in an ironic tragedy written by God to mock all those who fall to the cunning of the transhuman called Satan and thus aspire to false knowledge. To a non-trivial extent, “violating the laws of physics” only has high Kolmogorov complexity if you’ve already assumed a description language in which it has high Kolmogorov complexity, which is close to assuming the conclusion.
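Here’s a cartoon of that language-dependence worry. Both “languages” and every bit count are invented for illustration; the invariance theorem only guarantees that description lengths under different universal languages agree up to an additive constant, and nothing in principle stops that constant from being larger than everything at stake:

```python
# Cartoon of the language-dependence worry.  Both "languages" and every bit count
# below are invented; the point is only that a complexity prior P(H) ~ 2^(-K(H))
# depends on which description language you charge the bits in.

shortest_description_bits = {
    "physics-first language": {"no interventions": 400, "occasional interventions": 10_000},
    "agent-first language": {"no interventions": 2_000, "occasional interventions": 600},
}

for language, table in shortest_description_bits.items():
    gap = table["occasional interventions"] - table["no interventions"]
    if gap >= 0:
        print(f"{language}: interventions penalized by ~2^{gap} in prior odds")
    else:
        print(f"{language}: interventions favored by ~2^{-gap} in prior odds")
```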
Fair point. How, then, can we reasonably assign a prior distribution on events whose complexity depends almost entirely on ontology?
/shrugs. I might be out of my depth here, and would not be surprised if someone corrects me, so take everything with a grain of salt. (“Someone” probably being someone with extensive knowledge of decision theory, though.) But. In the case of “Are we being simulated by a bastard?” we can try to extrapolate from human psychology using the kind of reasoning found in Omohundro’s basic AI drives. It probably works out that we can combine that kind of reference class forecasting with pragmatic considerations to make predictions and decisions that are “good enough” in some objective sense. E.g. even if we are being simulated by a bastard, we’re probably also in other simulations that matter more to us; and assuming that what we suspect matters is what matters more “objectively”, whatever the hell that means, we should pragmatically act as if we’re (mostly) acting from within non-bastard-simulated contexts—though we might update on (logical?) evidence.
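To make the pragmatic move a bit more concrete, here’s a toy expected-value comparison; the contexts, weights, and payoffs are all invented, and the only point is the shape of the argument:

```python
# Toy version of the "mostly act as if you're in the non-bastard contexts" move.
# Weights and payoffs are made up; only the shape of the comparison matters.

contexts = {
    # context: how much weight you place on "this is the context that matters"
    "basement-level / ordinary world": 0.900,
    "benign or indifferent simulation": 0.099,
    "bastard simulation": 0.001,
}

# Payoff of each policy within each context (again, made up).
payoffs = {
    "act on ordinary evidence": {
        "basement-level / ordinary world": 1.0,
        "benign or indifferent simulation": 1.0,
        "bastard simulation": -5.0,
    },
    "organize your life around the hypothetical bastard": {
        "basement-level / ordinary world": -1.0,
        "benign or indifferent simulation": -1.0,
        "bastard simulation": 2.0,
    },
}

for policy, table in payoffs.items():
    expected_value = sum(contexts[c] * table[c] for c in contexts)
    print(f"{policy}: expected value {expected_value:+.3f}")
```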
I think the important point is that such reasoning sounds significantly less impressive/authoritative than “your hypothesis has too many burdensome details”. It worries me when people start wielding the sword of Bayes, partially because the math isn’t “fundamental” when there are “copies” of you—which there almost certainly are—and partially because it gives a veneer of technicality that makes it naively seem impossible to argue against, which is bad for epistemic hygiene. But my worries might be more paranoia than anything legitimate.
I do not know what to call this kind of evidence outside of technical decision theory discussion. “Logical” is the obvious choice, and that’s the decision theory name for it, but it’s only ‘logical’ in an abstract way. Maybe it’s the most accurate word though, and I’m just not familiar with the etymology. “Platonic evidence” (or evidence) feels a little more accurate to me, ’cuz you can talk about an observation carrying both physical evidence and Platonic evidence without thinking that Platonic evidence entails having to perform explicit logical operations or anything like that. (Likewise you can resolve Platonic uncertainty without recourse to anything that looks like formal logic.) Eh, whatever, “logical uncertainty” is fine. (Related words: acausal, timeless, teleological.)
Are you an empty set atheist?
Hm, “set” in English doesn’t mean the same thing as “set” in set theory, I think. Computer programmers and mathematicians have taken all the commonsense words and perverted them to mean alien things. I reject your ontology.
Edit: In case it wasn’t clear, I’m half-joking, and accept Ben’s correction.