“There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.”
You’ve been making this point a lot lately. But I don’t see any reason for “mind design space” to have that kind of symmetry. Why do you believe this? Could you elaborate on it at some point?
That something is included in “mind design space” does not imply that it actually exists. Think of it instead as everything that we might label “mind” if it did exist.
Imagine a mind as it already exists. Now I install a small frog trained to kick its leg whenever that mind performs Occamian or Laplacian reasoning; the kick hits a button that inverts the mind's output, so its conclusion is exactly backwards from the one it would have reached but for the frog.
And thus symmetry.
Though the anti-Laplacian mind, in this case, is inherently more complicated. Maybe it matters that Laplacian minds are on average simpler than their anti-Laplacian counterparts? There are infinitely many Laplacian and anti-Laplacian minds, but of the two infinities, might one be proportionately larger?
None of this is to detract from Eliezer’s original point, of course. I only find it interesting to think about.
They must be of exactly the same magnitude, just as the odd and even integers are, because either kind of mind can be given a frog. From any Laplacian mind, I can install a frog and get an anti-Laplacian one, and vice versa. This even applies to minds I have already frogged: adding a second frog yields a mind just like the one two steps back, except lagging behind it in computational power by two kicks. There is a 1:1 mapping between Laplacian and anti-Laplacian minds, and the constructor function is adding a frog.
Mind design space is very large and comprehensive. It’s like how the set of all possible theories contains both A and ~A.
A question.
The possible mind that assumes things are more likely to work if they have never worked before can, in all honesty, continue to use this prior as long as it has never worked. But this is only a self-sustaining method if it continues not to work.
Let us introduce our hypothetical poor-prior, rationalist observer to a rigged game of chance; let us say, a roulette wheel. (For simplicity, let's call him Jim.) We allow Jim to inspect an unrigged roulette wheel beforehand, then ask him to place a bet on any number of his choice; once he places his bet, we use our rigged roulette wheel to ensure that he wins, and continues to win, for any number of future guesses.
Now, from Jim's point of view, whatever line of reasoning he uses to find the correct number to bet on, it is working. He will presumably select a different number every time; it continues to work. Thus the idea that a theory that works now is less likely to work in the future is itself working… and thus is less likely to work in the future. Wouldn't this success eventually cause him to reject his prior?
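Jim's predicament can be made concrete with a toy update rule. This is a hypothetical sketch, assuming Jim's anti-Laplacian prior is simply the inverse of Laplace's rule of succession:

```python
def anti_laplace(successes, trials):
    """Inverted rule of succession: the more often a method has worked,
    the lower the credence that it will work next time."""
    return 1.0 - (successes + 1) / (trials + 2)

# On the rigged wheel Jim wins every spin, so successes == trials.
credences = [anti_laplace(n, n) for n in range(6)]
print(credences)  # 0.5, 0.333..., 0.25, 0.2, ... falling toward zero
```

By his own rule, each win drives his confidence that the next bet will succeed lower, even though it always does succeed; the prior undermines itself in exactly the way the comment describes.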