Now, if it is the case that she didn’t, then it follows that, given sufficient information about how-the-world-is, one’s probability estimate could be made arbitrarily close to 0.
What, like 1/3^^^3? There isn’t that much information in the universe, and come to think of it, I’m not sure I can conceive of any stream of evidence which would drive the probability that low in the Knox case, because there are complicated hypotheses, much less complicated than that, in which you’re in a computer simulation expressly created for the purpose of deluding you about the Amanda Knox case.
I thought I was stating a mathematical tautology. I didn’t say there was enough information in the universe to get below 1/3^^^3. The point was only that the information controls the probability.
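To spell the tautology out (this is just the log-odds form of Bayes’ theorem, with H standing for the hypothesis and E for the total evidence; nothing here is specific to the Knox case):

\[
\log_2 \frac{P(H \mid E)}{P(\neg H \mid E)}
= \log_2 \frac{P(H)}{P(\neg H)}
+ \log_2 \frac{P(E \mid H)}{P(E \mid \neg H)}.
\]

The last term is the evidence measured in bits, so pushing the posterior on H down to around 1/3^^^3 would require accumulating on the order of log2(3^^^3) bits of evidence. That is the sense in which the available information controls how extreme the estimate can get.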
But surely any statement one could make about Amanda Knox is only about the Amanda Knox in this world, whether she’s a fully simulated human or something less. Perhaps only the places I actually go are fully simulated, and everywhere else is only simulated in its effects on the places I go, so that the light from distant stars is supplied without bothering to run their internal processes; in that case, the innocent Amanda Knox only exists insofar as the effects that an innocent Amanda Knox would have on my part of the world are implemented. Even so, my beliefs about the case can only be about the figure in my own world. It doesn’t matter that there could be some other world where Amanda Knox is a murderess and Hitler was a great humanitarian.
I’m not sure why this is being downvoted so much (to –3 when I saw it). It’s a good point.
If I’m in a simulation, and the “base reality” is sufficiently different from how things appear to me in the simulation, it stops making sense to say that I’m fooled into attributing false predicates to things in the base reality. I’m so cut off from the base reality that few of my beliefs can be said to be about it at all. It makes more sense to say that I have true beliefs about the things in the simulation. I just have one important false belief about them—namely, that they’re not simulated. But that doesn’t mean that my other beliefs about them are wrong.
The situation is similar to that of the proverbial man who thinks that penguins are blind burrowing mammals who live in the Namib Desert. Such beliefs aren’t really about penguins at all. More probably, the man has true beliefs about some variety of golden mole. He just has one important false belief about them—namely, that they’re called “penguins”.
Perhaps it’s being downvoted because of my strange speculation that the stars are unreal—but it seems to me that if this is a simulation with such a narrow purpose as fooling komponisto/me/us/somebody about the Knox case, it would be more thrifty to simulate only some narrow portion of the world, which need not include Knox herself. Even then, I think, it would make sense to say that my beliefs are about Knox as she is inside the simulation, not some other Knox I cannot have any knowledge of, even in principle.
I downvoted the great-grandparent because it ignores the least convenient possible world where the simulators are implementing the entire Earth in detail, such that the simulated Amanda Knox is a person, is guilty of the murder, and yet circumstances are such that she seems innocent given your state of knowledge. You’re right that implementing the entire Earth is more expensive than just deluding you personally, but that’s irrelevant to Eliezer’s nitpick, which was only that 1/(3^^^3) really is just that small and yet nonzero.
I think that you’ve only pushed it up (down?) a level.
If I have gathered sufficiently strong evidence that the simulated Knox is not guilty, then the deception that you’re suggesting would very probably amount to constructing a simulated simulated Knox, who is not guilty, and who, it would turn out, was the subject of my beliefs about Knox. My belief in her innocence would be a true belief about the simulated-squared Knox, rather than a false belief about the guilty simulated-to-the-first-power Knox.
All deception amounts to an attempt to construct a simulation by controlling the evidence that the deceived person receives. The kind of deception that we see day-to-day is far too crude to really merit the term “simulation”. But the difference is one of degree. If an epistemic agent were sufficiently powerful, then deceiving it would very probably require the sort of thing that we normally think of as a simulation.
ETA: And the more powerful the agent, the more probable it is that whatever we induced it to believe is a true belief about the simulation, rather than a false belief about the “base reality” (except for its belief that it’s not in a simulation, of course).
This is a good point, but your input could also be the product of modeling you and computing “what inputs will make this person believe Knox is innocent?”, not modeling Knox at all.
How would this work in detail? When I try to think it through, it seems that, if I’m sufficiently good at gathering evidence, then the simulator would have to model Knox at some point while determining which inputs convince me that she’s innocent.
There are shades here of Eliezer’s point about Giant Look-Up Tables modeling conscious minds. The GLUT itself might not be a conscious mind, but the process that built the GLUT probably had to contain the conscious mind that the GLUT models, and then some.
The process that builds the GLUT has to contain your mind, but nothing else. The deceiver tries all exponentially many strings of sensory inputs and sees what effects they have on your simulated internal state, then selects the one that maximizes your belief in proposition X. No simulation of X is involved, and the deceiver doesn’t even need to know anything more about X than you think you know at the beginning.
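A toy sketch of that search, with the simulation of the target’s mind and the belief read-out passed in as stand-ins (the function and parameter names here are mine, purely for illustration):

```python
from itertools import product

def best_deceptive_input(initial_state, alphabet, length, simulate_mind, belief_in_x):
    """Brute-force the sensory-input string that maximizes the target's belief in X.

    `simulate_mind` runs a copy of the target on a candidate input string;
    `belief_in_x` reads off the resulting credence in X.  Both are hypothetical
    stand-ins: the deceiver needs a model of the target, but never a model of X.
    """
    best_inputs, best_belief = None, float("-inf")
    for inputs in product(alphabet, repeat=length):   # exponentially many strings
        mind = simulate_mind(initial_state, inputs)   # the GLUT-building step
        belief = belief_in_x(mind)
        if belief > best_belief:
            best_inputs, best_belief = inputs, belief
    return best_inputs
```

The cost is exponential in the length of the input stream, which is why only a very powerful deceiver could run it; but nothing in the loop ever consults a model of Knox.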
If whoever controls the simulation knows that Tyrrell/me/komponisto/Eliezer/etc. are reasonably reasonable, there’s little to be gained by modeling all the evidence that might persuade me. Just include the total lack of physical evidence tying the accused to the room where the murder happened, and I’m all yours. I’m sure I care more than I might have otherwise because she’s pretty, and obviously (obviously to me, anyway) completely harmless and well-meaning, even now. Whereas, if we were talking about a gang member who’s probably guilty of other horrible felonies, I’d still be more convinced of innocence than I am of some things I personally witnessed (since the physical evidence is more reliable than human memory), but I wouldn’t feel so sorry for the wrongly convicted.
But remember my original point here: level-of-belief is controlled by the amount of information. In order for me to reach certain extremely high levels of certainty about Knox’s innocence, it may be necessary to effectively simulate a copy of Knox inside my mind.
ETA: And that of course raises the question about whether in that case my beliefs are about the mind-external Knox (“simulated” or not) or the mind-internal simulated Knox. This is somewhat tricky, but the answer is the former—for the same reason that the simple, non-conscious model of Amanda I have in my mind right now represents beliefs about the real, conscious Amanda in Capanne prison. Thus, a demon could theoretically create a conscious simulation of an innocent Amanda Knox in my mind, which could represent a “wrong” extremely-certain belief about a particular external reality. But in order to pull off a deception of this order, the demon would have to inhabit a world with a lot more information than even the large amount available to me in this scenario.
Here’s how I see the whole issue, after some more reflection:
Imagine a hypothetical universe with more than 3^^^3 total bits of information in it, which also contained a version of the Kercher murder. If you knew enough about the state of such a universe (e.g. if you were something like a Laplacian demon with respect to it), you could conceivably have on the order of 3^^^3 bits of evidence that the Amanda Knox of that universe was innocent of the crime.
Now, the possibility would still exist that you were being deceived by a yet more powerful demon. But this possibility would only bound your probability away from 0 by an amount smaller than 1/3^^^3. In your (hypothesized) state of knowledge, you would be entitled to assert a probability of 1/3^^^3 that Knox killed Kercher.
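Spelled out (writing D for the hypothesis that the more powerful demon is deceiving you, and E for the 3^^^3 bits of evidence; the inequality is just the law of total probability):

\[
P(\text{guilty} \mid E)
\le P(\text{guilty} \mid E, \neg D)\,P(\neg D \mid E) + P(D \mid E)
\le P(\text{guilty} \mid E, \neg D) + P(D \mid E).
\]

The first term is what the ordinary evidence buys you; the second is the floor that the deception hypothesis puts under your probability. The claim is that in a universe this large, P(D | E) can itself be pushed below 1/3^^^3, so the floor stays below anything you would notice.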
Furthermore, if a demon were deceiving you to the extent of feeding you 3^^^3 bits of “misleading” information, it would automatically be creating, within your mind, a model so complex as to almost certainly contain fully conscious versions of Knox, Kercher, and everyone else involved. In other words, it would effectively be creating an autonomous world in which Knox was innocent. Thus, while you might technically be “mistaken”, in the sense that your highly complex model does not “correspond” to the external situation known to the demon, the moral force of that mistake would be undermined considerably, in view of the existence of a morally significant universe in which (the appropriate version of) Knox was indeed innocent.
When we make probability estimates, what we’re really doing is measuring the information content of our model. (The more detailed our model, the more extreme our estimates should be.) Positing additional layers of reality only adds information; it cannot take information away. A sufficiently complex model might be “wrong” as a model but yet morally significant as a universe in its own right.
What possible world would that be? If it should turn out that the Italian government is engaged in a vast experiment to see how many people it can convince of a true thing using only very inadequate evidence (and therefore falsified the evidence so as to destroy any reasonable case it had), we could, in principle, discover that. If the simulation simply deleted all of her hair, fiber, fingerprint, and DNA evidence left behind by the salacious ritual sex murder, then I can think of two objections. First, this would be something like Tyrrell McAllister’s second-order simulation, only it isn’t so much a simulated Knox in my own head, I think, as it is a second-order simulation implemented in reality, by conforming all of reality (the crime scene, etc.) to what it would be if Knox were innocent. Second, although an unlawful simulation such as this might seem to undermine any possible belief I might form, I could still in principle acquire some knowledge of it. Suppose whoever is running the simulation decides to talk to me and I have good reason to think he’s telling the truth. (This last is indistinguishable from “suppose I run into a prophet”—but in an unlawful universe that stops being a vice.)
ETA: I suppose if I’m entertaining the possibility that the simulator might start telling me truths I couldn’t otherwise know, then I could, in principle, find out that I live in a simulated reality and the “real” Knox is guilty (contrary to what I asserted above). I don’t think I’d change my mind about her so much as I would begin thinking that there is a guilty Knox out there and an innocent Knox in here. After all, I think I’m pretty real, so why shouldn’t the innocent Amanda Knox be real?
There seems to be a deep idea here, but I don’t yet see that the numbers really balance out. I would appreciate it if you made a top-level post elaborating on this.
That is a fascinating counterargument that I’m not sure what to make of yet.