I agree with your first paragraph, but I’m not convinced of your second paragraph… at least, if you intend it as a rhetorical way of asserting that there is no possible way to weight the evidence properly. It’s just another proposition; there’s evidence for and against it.
I think we get confused here because we start with our bottom line already written.
I “know” that the EV of destroying my light cone is negative. But theory seems to indicate that, when assigning a probability P1 to the statement “Destroying my future light cone will preserve 3^^^3 extra-universal people” (hereafter, statement S1), a well-calibrated inference engine might assign P1 such that the EV of destroying my light cone comes out positive. So I become anxious, and I try to alter the theory so that the resulting P1s align with my pre-existing “knowledge” that the EV of destroying my light cone is negative.
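To see where the anxiety comes from, here is a rough back-of-the-envelope version of that EV calculation. The utility terms and the specific value of P1 are placeholders I am supplying for illustration, not anything from the original argument:

$$
\mathrm{EV}(\text{destroy}) \approx P_1 \cdot U_{\text{save}} + (1 - P_1)\cdot U_{\text{loss}},
\qquad U_{\text{save}} \propto 3\uparrow\uparrow\uparrow 3, \quad U_{\text{loss}} \text{ finite and negative.}
$$

Even an astronomically small P1 (say, $10^{-100}$) leaves the first term dominating, since no plausibly finite $U_{\text{loss}}$ can offset a utility that scales with 3^^^3 lives. That is the sense in which a well-calibrated engine can make the EV come out positive.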
Ultimately, I have to ask what I trust more: the “knowledge” produced by the poorly calibrated inference engine that is my brain, or the “knowledge” produced by the well-calibrated inference engine I built. If I trust the inference engine, then I should trust the inference engine, even when its output makes me anxious.