It depends. If the “tinkering with natural laws” explanation is really, really complex, then mass hallucination can be more likely.
To generalize the point further: phrases like “best explained” tend to conflate the likelihood P(evidence|hypothesis) with the posterior probability P(hypothesis|evidence). Remember, you must always be mindful of the prior probability.
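To make that concrete, here’s a minimal sketch (all numbers are mine and purely illustrative): the likelihood ratio in favor of the hypothesis is identical in both calls below, yet the posterior collapses once the prior is small.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# "Laws of physics were violated" predicts the evidence perfectly...
p_e_given_h = 1.0
# ...but mass hallucination predicts it reasonably well too.
p_e_given_not_h = 0.1

# The likelihood ratio is 10:1 in favor of H in both cases, yet:
print(posterior(0.5, p_e_given_h, p_e_given_not_h))   # ~0.91
print(posterior(1e-6, p_e_given_h, p_e_given_not_h))  # ~1e-05
```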
Be careful not to privilege the scientific prior, though. Somewhere between the inability to justify induction and the inability to justify your interpretation of the evidence is some tricky argumentation that might be flawed. E.g. in some ontologies there’s an upper bound on the confidence you can have that we’re not in an ironic tragedy written by God to mock all those who fall to the cunning of the transhuman called Satan and thus aspire to false knowledge. To a non-trivial extent, “violating the laws of physics” only has high Kolmogorov (K) complexity if you assume the conclusion that it has high K complexity.
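Here’s a toy sketch of that circularity (the “languages” and events are my own invention, nothing more than an illustration): a complexity prior is defined relative to a description language, each ontology supplies a different one, and the invariance theorem only bounds the gap between them by a machine-dependent constant, which is exactly the part in dispute.

```python
# Description length (a crude stand-in for Kolmogorov complexity) of
# the same event under two hypothetical description languages.

def description_length(event, language):
    """Length in characters of the shortest available description."""
    return min(len(d) for d in language[event])

physics_lang = {
    "apple falls": ["apply(gravity, apple)"],
    "sea parts":   ["suspend(fluid_dynamics, region=sea, t0=..., t1=..., per_particle_exceptions=[...])"],
}
author_lang = {
    "apple falls": ["default()"],
    "sea parts":   ["intervene()"],  # a single primitive in this ontology
}

for event in ("apple falls", "sea parts"):
    print(event,
          description_length(event, physics_lang),
          description_length(event, author_lang))
# Under physics_lang the miracle is maximally "burdensome";
# under author_lang it is about as cheap as the apple falling.
```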
Fair point. How, then, can we reasonably assign a prior distribution over events whose complexity depends almost entirely on ontology?
/shrugs. I might be out of my depth here, and would not be surprised if someone corrects me, so take everything with a grain of salt. (“Someone” probably being someone with extensive knowledge of decision theory.) But. In the case of “Are we being simulated by a bastard?” we can try to extrapolate from human psychology using the kind of reasoning found in Omohundro’s basic AI drives. That kind of reference class forecasting, combined with pragmatic considerations, can probably yield predictions/decisions that are “good enough” in some objective sense. E.g. even if we are being simulated by a bastard, we’re probably also in other simulations that matter more to us; assuming that what we suspect matters is what matters more “objectively” (whatever the hell that means), we should pragmatically act as if we’re (mostly) acting from within non-bastard-simulated contexts, though we might update based on (logical?) evidence.
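A minimal decision sketch of that pragmatic move (hypotheses, priors, stakes, and payoffs are all placeholder numbers I made up, not anyone’s actual estimates): if most of the probability-weighted stake lives in non-bastard contexts, the policy tuned for those contexts wins in expectation, and updating on (logical?) evidence just means shifting these weights.

```python
contexts = [
    # (label, prior, how much this context matters to us,
    #  payoff of acting normally, payoff of acting paranoid)
    ("base reality",       0.50, 1.0, 10.0, 2.0),
    ("benign simulation",  0.45, 0.8, 10.0, 2.0),
    ("bastard simulation", 0.05, 0.2, -5.0, 1.0),
]

def expected_value(policy):
    """Probability- and stake-weighted payoff of a policy."""
    idx = {"normal": 3, "paranoid": 4}[policy]
    return sum(c[1] * c[2] * c[idx] for c in contexts)

print(expected_value("normal"))    # 8.55
print(expected_value("paranoid"))  # 1.73
```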
I think the important point is that such reasoning sounds significantly less impressive/authoritative than “your hypothesis has too many burdensome details”. It worries me when people start wielding the sword of Bayes, partially because the math isn’t “fundamental” when there are “copies” of you (which there almost certainly are), and partially because it lends a veneer of technicality that naively makes it seem impossible to argue against, which is bad for epistemic hygiene. But my worries might be more paranoia than anything legitimate.
I do not know what to call this kind of evidence outside of technical decision theory discussions. “Logical” is the obvious choice, and that’s the decision theory name for it, but it’s only ‘logical’ in an abstract way. Maybe it’s the most accurate word, though, and I’m just not familiar with the etymology. “Platonic evidence” feels a little more accurate to me, ’cuz you can talk about an observation carrying both physical evidence and Platonic evidence without implying that Platonic evidence entails performing explicit logical operations or anything like that. (Likewise you can resolve Platonic uncertainty without recourse to anything that looks like formal logic.) Eh, whatever, “logical uncertainty” is fine. (Related words: acausal, timeless, teleological.)