A general question about decision theory:
Is it possible to assign a non-zero prior probability to statements like “my memory has been altered”, “I am suffering from delusions”, and “I live in a perfectly simulated matrix”?
Apologies if this has been answered elsewhere.
The first two questions aren’t about decisions.
This question is meaningless. It’s equivalent to “There is a God, but he’s unreachable and he never does anything.”
No, it’s not meaningless, because if it’s true, the matrix’s implementers could decide to intervene (or for that matter create an afterlife simulation for all of us). If it’s true, there’s also the possibility of the simulation ending prematurely.
Yes.
Of course we have to assign non-zero probabilities to them, but I’m not quite sure how we’d figure out the right priors. Assuming that the hypotheses that your memory has been altered or that you’re delusional do not actually cause you to anticipate anything differently (see the bit about the blue tentacle in Technical Explanation), you may as well live in whatever reality appears to you to be the outermost one accessible to your mind.
(As for the last one, Nick Bostrom argues that we can actually assign a very high probability to a statement somewhat similar to “I live in a perfectly simulated matrix” — see the Simulation Argument. I have doubts about the meaningfulness of that on the basis of modal realism, but I’m not too confident one way or the other.)
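For reference, the quantitative core of the Simulation Argument is just a counting formula. Paraphrasing the paper’s notation (take the symbols as my gloss, not a quotation): if $f_P$ is the fraction of human-level civilizations that reach a posthuman stage and $\bar{N}$ is the average number of ancestor-simulations such a civilization runs, then the fraction of human-type observers who are simulated is roughly

$$ f_{\mathrm{sim}} = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}, $$

which is close to 1 unless $f_P \,\bar{N}$ is small; hence the trilemma that either almost no civilizations reach posthumanity, or almost none of those run ancestor-simulations, or we are almost certainly living in one.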
I disagree with the idea that modal realism, whether right or not, changes the probability of any particular hypothesis like that being true. I am not saying that we can never have a rational belief about whether or not modal realism is true; there may or may not be a philosophical justification for it. I do think, however, that whether modal realism applies has no bearing on the probability of your being in some situation, such as in a computer simulation. Since I think this issue needs debating, I have asserted it as a rule, which I call “The Principle of Modal Realism Equivalence”, so that we have something well-defined to argue for or against. I define and assert the rule, and give a (short) justification of it, here: http://www.paul-almond.com/ModalRealismEquivalence.pdf.
But what if you should anticipate things very differently if your memory has been altered? If I assigned a high probability to my memory having been altered, then I should expect that the technology to alter memories exists, and all manner of even stranger things that this would imply. Figuring out what prior to assign in a case like that, or whether it can be done at all, is what I’m struggling with.
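Spelling out the implication: since “my memory has been altered” entails “memory-altering technology exists”, any probability assignment has to respect

$$ P(\text{tech exists}) \ge P(\text{memory altered}), $$

so a high prior on the first statement forces a high prior on the technology, and on everything else the technology would make possible. The propagation is easy; the original assignment is the part I can’t see how to do.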
It’s not actually all that hard to mess with memories.
Why not?
“Where’d you get your universal prior, Neo?”
Eliezer seems to think (or at least he did at the time) that this isn’t a solvable problem. To phrase the question in a way more relevant to recent discussions: are those statements in any way similar to “a halting oracle exists”?
Solomonoff’s prior can’t predict something uncomputable, but I don’t see anything obviously uncomputable about any of the 3 statements you asked about.
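To be concrete about why: the Solomonoff prior (roughly, and glossing over prefix-machine details) weights a string x by

$$ M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}, $$

the sum over all programs p whose output on the universal machine U begins with x. Every program contributes positive weight, so any hypothesis you can write down as a program is covered; what gets excluded is only structure that no program can reproduce, such as the full behavior of a halting oracle.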
Right. But can it predict computable scenarios in which it is wrong?
Yes. Anything that can be represented by a Turing machine gets a nonzero prior. And its model of itself goes in the same Turing machine along with the rest of the world.
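A minimal sketch of that point, with Python predictor functions standing in for Turing machines and made-up description lengths (so this illustrates the 2^-length weighting and Bayesian elimination, not Solomonoff induction proper):

```python
from fractions import Fraction

# Toy stand-in for a universal prior: each computable hypothesis is a
# predictor function, weighted by 2^(-description_length).  Any hypothesis
# we can write down as a program gets a strictly positive prior weight.

def always_zero(history):
    return 0  # predicts the next bit is 0

def always_one(history):
    return 1  # predicts the next bit is 1

def alternate(history):
    return len(history) % 2  # predicts 0, 1, 0, 1, ...

# (description_length, predictor) pairs; the lengths are made up for the toy.
hypotheses = [(3, always_zero), (3, always_one), (5, alternate)]

def posterior(observed_bits):
    """Bayes-update the 2^-length prior on a sequence of observed bits.

    A hypothesis keeps its weight on each bit it predicts correctly and is
    eliminated (weight 0) on a wrong prediction, since these toy predictors
    are deterministic."""
    weights = {h.__name__: Fraction(1, 2 ** length) for length, h in hypotheses}
    for i, bit in enumerate(observed_bits):
        for length, h in hypotheses:
            if weights[h.__name__] > 0 and h(observed_bits[:i]) != bit:
                weights[h.__name__] = Fraction(0)
    total = sum(weights.values())
    return {name: (w / total if total else w) for name, w in weights.items()}

print(posterior([0, 1, 0, 1]))   # only 'alternate' survives the evidence
print(posterior([]))             # prior: every computable hypothesis is non-zero
```

A hypothesis whose world model happens to include a model of the reasoner itself would just be another entry in that list, weighted the same way as any other program.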