Here’s how I see the whole issue, after some more reflection:
Imagine a hypothetical universe with more than 3^^^3 total bits of information in it, which also contained a version of the Kercher murder. If you knew enough about the state of such a universe (e.g. if you were something like a Laplacian demon with respect to it), you could conceivably have on the order of 3^^^3 bits of evidence that the Amanda Knox of that universe was innocent of the crime.
Now, the possibility would still exist that you were being deceived by a yet more powerful demon. But this possibility would bound your probability away from 0 only by an amount smaller than 1/3^^^3. In your (hypothesized) state of knowledge, you would be entitled to assert a probability of 1/3^^^3 that Knox killed Kercher.
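To spell out the arithmetic I have in mind (this is my own gloss, and the symbols are purely illustrative): if each bit of evidence roughly doubles the odds against guilt, and d is your probability that a still more powerful demon is deceiving you, then very roughly

$$ P(\text{guilt} \mid \text{evidence}) \;\approx\; P(\text{guilt}) \cdot 2^{-N} \;+\; d, \qquad N \approx 3\uparrow\uparrow\uparrow 3. $$

The first term is driven effectively to zero by the sheer number of bits; the claim above is that d, the deception floor, is itself smaller than 1/3^^^3, so the whole expression stays at or below that level.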
Furthermore, if a demon were deceiving you to the extent of feeding you 3^^^3 bits of “misleading” information, it would automatically be creating, within your mind, a model so complex as to almost certainly contain fully conscious versions of Knox, Kercher, and everyone else involved. In other words, it would effectively be creating an autonomous world in which Knox was innocent. Thus, while you might technically be “mistaken”, in the sense that your highly complex model does not “correspond” to the external situation known to the demon, the moral force of that mistake would be undermined considerably, in view of the existence of a morally significant universe in which (the appropriate version of) Knox was indeed innocent.
When we make probability estimates, what we’re really doing is measuring the information content of our model. (The more detailed our model, the more extreme our estimates should be.) Positing additional layers of reality only adds information; it cannot take information away. A sufficiently complex model might be “wrong” as a model but yet morally significant as a universe in its own right.
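One way to cash out the parenthetical claim quantitatively (again my own gloss, not part of the argument above): quoting a probability as extreme as 2^(-N) is an implicit claim that your model supplies about

$$ N \;\approx\; \log_2 \frac{1}{P(\text{hypothesis} \mid \text{model})} $$

bits of information bearing on the hypothesis, which is why only a more detailed model can honestly support a more extreme estimate.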
That is a fascinating counterargument that I’m not sure what to make of yet.