Fair point. How, then, can we reasonably assign a prior distribution on events whose complexity depends almost entirely on ontology?
/shrugs. I might be out of my depth here, and would not be surprised if someone corrects me, so take everything with a grain of salt. (“Someone” probably meaning someone with extensive knowledge of decision theory.) But: in the case of “Are we being simulated by a bastard?” we can try to extrapolate from human psychology using the kind of reasoning found in Omohundro’s basic AI drives. Reference class forecasting of that sort, combined with pragmatic considerations, plausibly yields predictions and decisions that are “good enough” in some objective sense. E.g., even if we are being simulated by a bastard, we’re probably also in other simulations that matter more to us; and assuming that what we suspect matters is what matters more “objectively”, whatever the hell that means, we should pragmatically act as if we’re (mostly) acting from within non-bastard-simulated contexts, while staying ready to update on (logical?) evidence.
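To make that “good enough” pragmatism concrete, here’s a toy sketch of the calculation I have in mind: a prior over possible contexts, a utility for acting as if unsimulated, and a Bayes update for new evidence. Every number and hypothesis label below (bastard_sim and so on) is invented purely for illustration; nothing here is a real estimate.

```python
# Toy model: "act as if we're in the contexts that matter" as a
# weighted-expected-utility calculation. All numbers are made up.

# Hypotheses about our context, with ad-hoc priors (imagine these came
# from reference class forecasting over plausible simulator psychologies).
priors = {
    "bastard_sim": 0.05,       # simulator actively hostile to our values
    "indifferent_sim": 0.45,   # simulator doesn't care what we do
    "basement_reality": 0.50,  # not simulated at all
}

# Payoff of acting as if we're not bastard-simulated ("business as
# usual"), on an arbitrary utility scale, under each hypothesis.
utility_of_business_as_usual = {
    "bastard_sim": -1.0,
    "indifferent_sim": 1.0,
    "basement_reality": 1.0,
}

expected_utility = sum(
    priors[h] * utility_of_business_as_usual[h] for h in priors
)
print(expected_utility)  # ~0.9: business as usual wins on expectation

# Updating on some evidence e is the usual Bayes step; the likelihoods
# P(e | hypothesis) are again invented for illustration.
likelihood = {"bastard_sim": 0.3, "indifferent_sim": 0.1, "basement_reality": 0.1}
z = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / z for h in priors}
print(posterior)  # bastard_sim rises to ~0.14 but still doesn't dominate
```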
I think the important point is that such reasoning sounds significantly less impressive/authoritative than “your hypothesis has too many burdensome details”. It worries me when people start wielding the sword of Bayes, partly because the math isn’t “fundamental” when there are “copies” of you (which there almost certainly are), and partly because it lends a veneer of technicality that makes the argument naively seem impossible to argue against, which is bad for epistemic hygiene. But my worries might be more paranoia than anything legitimate.
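To be fair about what the sword actually is: “burdensome details” cashes out as nothing more exotic than the conjunction rule, which is the part that sounds authoritative while being trivial. A minimal sketch, with made-up numbers:

```python
# The formal core of "burdensome details": every detail you conjoin
# multiplies in a factor <= 1, so the detailed hypothesis can never be
# more probable than the vaguer one. Numbers are purely illustrative.
p_simulated = 0.3          # P(we're in a simulation at all), made up
p_bastard_given_sim = 0.1  # P(simulator is a bastard | simulated), made up

p_bastard_sim = p_simulated * p_bastard_given_sim
assert p_bastard_sim <= p_simulated  # the conjunction can't beat its parts
print(p_bastard_sim)  # ~0.03
```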
I do not know what to call this kind of evidence outside of technical decision theory discussion. “Logical” is the obvious choice, and it’s the decision theory name for it, but it’s only ‘logical’ in an abstract way. Maybe it’s the most accurate word and I’m just not familiar with the etymology. “Platonic evidence” feels a little more accurate to me, ’cuz you can talk about an observation carrying both physical evidence and Platonic evidence without implying that Platonic evidence entails performing explicit logical operations or anything like that. (Likewise you can resolve Platonic uncertainty without recourse to anything that looks like formal logic.) Eh, whatever, “logical uncertainty” is fine. (Related words: acausal, timeless, teleological.)