I’m surprised that you weren’t aware that I took Tegmark’s multiverse seriously, since I mentioned it in the UDT post. It was one of the main inspirations for me coming up with UDT. You can see here a 2006 proto-UDT that’s perhaps more clearly based on Tegmark’s idea.
Have they found a workaround for the problem of teacups turning into pheasants?
Well, UDT is sort of my answer to that. In UDT you can no longer say “I assign a small probability for observing this teacup turning into a pheasant” but you can still say “I’m willing to bet a large amount of money that this teacup won’t turn into a pheasant.” See also What are probabilities, anyway? I’m not sure if that answers your question, so let me know.
(You might also be interested in UDASSA, which was an earlier attempt to solve the same problem.)
This sounds circular to me. Why are you willing to bet a large amount of money that this teacup won’t turn into a pheasant? Why do we happen to have a “preference” for a highly ordered world?
Why do we happen to have a “preference” for a highly ordered world?
One approach to answering that question is the one I gave here. Another possibility is that there is something like “objective morality” going on. Another one is that our preferences are simply arbitrary and there is no further explanation.
So I think this is still an open question, but there’s probably an answer one way or another, and the fact that we don’t know what the right answer is yet shouldn’t count against Tegmark’s idea. Furthermore, I think denying Tegmark’s idea only leads to more serious problems, like why does one universe “exist” and not another, and how do we know that one universe exists and not two or three?
There may be a grain of truth in this kind of theory, but I cannot see it clearly yet. How exactly do you separate statements about the mind (“probability as preference”) from statements about the world? What about bunnies, for example? Bunnies aren’t very smart, but their bodies seem evolved to make some outcomes more probable than others, in perfect accord with our idea of probability. The same applies to plants, which have no brains at all. Did evolution decide very early on that all life should use our particular “random” concept of preference? (How is it encoded in living organisms, then?) Or do you have some other mechanism in mind?
The shared traits come from shared evolution, which operates in the context of our physics and its measure of expected outcomes. The concept of expectation implies evolution (given some other conditions), and evolution in turn produces organisms that respect the concept of expectation (that is, organisms that persist within evolution, that get selected).
If you believe in “measure of expected outcomes”, there’s no problem. Wei was trying to dissolve that belief and replace it with preference encoded in programs, or something. What do you think about this now?
To make it more pithy: are there, somewhere in the configuration space of our universe, evolved pointy-eared humanoids that can solve NP-complete problems quickly because they don’t respect the Born probabilities? Are they immune to “spontaneous existence failure”, from their own point of view?
What do you mean by “believe”? To refer to the concept of evolution (as the explanation for plants and bunnies), you have to refer to the world, and not just the world, but the world equipped with a measure (the quantum mechanical measure, say). Without that measure, evolution doesn’t work, and the world won’t behave as we expect it to. Once that is understood, it’s not surprising that evolution selected organisms that respect that measure and not something else.
So I’m not assuming measure as an extra ingredient; the argument is that measure is implicit in your very question.
The NP-solving creatures won’t be in our universe, in the sense that they don’t exist in the context of our universe with its measure. When you refer to our universe, you necessarily reference measure as part of it. It’s like a fundamental law, a necessary part of the specification of what you are talking about.
When you refer to our universe, you necessarily reference measure as part of it.
Um, no. I don’t know of any fundamental dynamical laws in QM that use measure. You can calculate the evolution of the wavefunction without mentioning measure at all. It only appears when we try to make probabilistic predictions about our subjective experience. You could equip the same big evolving wavefunction with a different measure, and get superintelligent elves. Or no?
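To spell out the contrast (this is just standard textbook quantum mechanics, added here for concreteness, not anything specific to this thread): the dynamical law is the Schrödinger equation, which never mentions a measure, while the measure only enters through the Born rule when we turn the wavefunction into predictions about what we expect to see.

```latex
% Dynamics: unitary evolution, no measure anywhere.
i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = H\,\lvert\psi(t)\rangle,
\qquad \lvert\psi(t)\rangle = e^{-iHt/\hbar}\,\lvert\psi(0)\rangle.

% Predictions: the Born rule is where the 2-norm measure enters;
% a hypothetical alternative measure would replace the exponent 2.
\Pr(i) = \lvert\langle i\mid\psi\rangle\rvert^{2}
\quad\text{versus, say,}\quad
\Pr(i) \propto \lvert\langle i\mid\psi\rangle\rvert^{p},\ p\neq 2.
```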
You could equip the same big evolving wavefunction with another measure, and get superintelligent elves. Or no?
Yes, but then you won’t be talking about our world in the usual sense, because, say, the classical world will no longer work as expected under those laws (that measure). If you don’t include the measure, you don’t get any predictions about what you expect to see in reality, while that’s what physics is normally all about.
Uh… So, our subjective experience matches the Born probabilities because our minds are implemented with macroscopic gears, which require classical physics (and thus Born probabilities) to function in a stable manner? This sounds like it might be an explanation, but we’d need to show that other probability rules lead to unstable physics (no planets, or no proteins, or something like that). And even if we had proof of that, I think some leftover mystery would still remain.
I begin to feel that the mystery has been dissolved. Even if other measures (or indeed other physical laws) lead to processes lawful enough to also support evolution, it doesn’t affect the notion of anticipation, because our anticipation matches our evolution, and our evolution takes place in the process picked out by our measure.
Also, it’s not specifically minds that are macroscopic and depend on measure, it’s evolution itself that is thus macroscopic and selects replicators that replicate under that measure. For minds, anticipation matching measure is just another psychological adaptation, not necessarily a perfect match, but close enough.
As another crazy hypothesis, building on the previous one: it’s possible that we don’t particularly care about our reality or our measure, just as we don’t care whether a person is in a biological body or uploaded, so that we will build our goodness out of different mathematics, having no effect on our reality. Thus, when we run the FAI, “nothing happens” in our world. Let’s hope this applies to most UFAIs, which will therefore have no ill effect, because they don’t care about our world or our measure.
I disagree with your first two paragraphs. Without a demonstration that the Born rule is somehow special (yields the most stable world for working complex machines, or something), the argument is still disappointingly circular. For example, if some other rule turns out to be even more conducive to evolution, the anthropic question arises: why aren’t we in that world instead of this one? (Kinda like the Boltzmann brain problem, but in reverse.) Fortunately, checking the macroscopic behavior that arises from quantum physics under different assumed measures is a completely empirical question. Now I just need to understand enough math to build a toy model and see for myself how it pans out. For the record, I’m about 70% confident that this line of inquiry will fail, because other worlds will look just as stable and macroscopically lawful as ours.
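For concreteness, here is a minimal sketch of the kind of toy model that could be built (entirely my own illustration; the branching structure and per-step amplitudes are invented): evolve a simple branching wavefunction and ask which observed statistics are “typical” under the Born (2-norm) weighting versus a hypothetical 1-norm weighting. The only point is that the two measures can disagree about the typical macroscopic record, so the question is at least checkable in toy settings.

```python
import numpy as np

# Toy "universe": N branching steps, each splitting into outcome 0 or 1
# with fixed per-step amplitudes. Each of the 2**N branches is a classical
# record of outcomes; its amplitude is the product of per-step amplitudes.
N = 12
a0, a1 = np.sqrt(0.8), np.sqrt(0.2)  # per-step amplitudes for outcomes 0 and 1

freqs, born_w, one_norm_w = [], [], []
for branch in range(2 ** N):
    bits = [(branch >> k) & 1 for k in range(N)]
    amp = np.prod([a1 if b else a0 for b in bits])
    freqs.append(sum(bits) / N)       # frequency of outcome 1 recorded in this branch
    born_w.append(amp ** 2)           # Born (2-norm) weight
    one_norm_w.append(abs(amp))       # hypothetical 1-norm weight

freqs = np.array(freqs)
born_w = np.array(born_w) / np.sum(born_w)
one_norm_w = np.array(one_norm_w) / np.sum(one_norm_w)

print("typical frequency of outcome 1 under the Born measure:", freqs @ born_w)
print("typical frequency of outcome 1 under the 1-norm measure:", freqs @ one_norm_w)
```

With these numbers the Born weighting makes a recorded frequency near 0.2 typical, while the 1-norm weighting makes a frequency near 1/3 typical; whether such differences translate into unstable planets or proteins is exactly what a real toy model would have to probe.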
An FAI that doesn’t help our world is a big fat piece of fail. Can I please have a machine that’s based on less lofty abstractions, but actually does stuff?
Could you frame the debate to avoid ambiguity? What argument do you refer to (in your own words)? In what way is it circular? (I feel that the structure of the argument is roughly that the answer to the question “what is 2+2?” is “4”, because the algebraic laws assumed in the question imply 4 as the answer, even though other algebraic laws can lead to other answers.)
For example, if some other rule turns out to be even more conducive to evolution, the anthropic question arises: why aren’t we in that world instead of this one?
We just aren’t; this question has no meaning. Why are you you, and not someone else? When you refer to yourself, you identify a particular concept (that of “yourself”). That concept is distinct from other concepts, and that’s the end of it. Two given concepts are not identical, as defined.
It’s entirely possible that other rules (measures) are also conducive to evolution, but look at them as something happening “far away”, like in universes with different fundamental constants. And over there, other creatures could’ve also biologically evolved. I’m not arguing with that, so finding other rules that produce good-enough physical processes doesn’t answer any questions. Why am I a human, and not a dolphin?
An FAI that doesn’t help our world is a big fat piece of fail. Can I please have a machine that’s based on less lofty abstractions, but actually does stuff?
We can’t outright assume anything about preference; we need to actually understand it. Powerful optimization is bound to be weird, so the absurdity heuristic goes out the window. And correspondingly, the necessary standard of understanding goes up a dozen notches. We are so far from the adequate level that if a random AGI is built 30 years from now, we still almost certainly fail to beat it. Maybe 50 or 100 years (at which point uploads start influencing progress) sounds more reasonable, judging by the rate of progress in mathematics. We need to work faster.
You are committing the general error of prematurely declaring a question “dissolved”. It’s always better to err in the other direction. That’s how I come up with all my weird models, anyway.
I just took a little walk outside and this clarification occurred to me: imagine an algorithm (a Turing machine) running on a classical physical computer, sitting on a table in our quantum universe. The computer has the interesting property that it is “stable” under the Born rule: a weighted majority of near futures, ranked by the 2-norm, has the computer correctly executing the next few steps of the computation, but under the 1-norm this isn’t necessarily the case; the computer will likely glitch or self-destruct. (All computers built by humans probably have this property. Also note that it can be defined in terms of the wavefunction alone, without assuming weights a priori.) Then the algorithm will have “subjective anticipation” of a weird kind: conditioned on the algorithm itself running faithfully in the future, it can conclude that future histories with higher Born weight are more likely.
This idea has the drawback that it doesn’t look at histories of the outside world, only the computer’s internals. But maybe it can be extended to include observations somehow?
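A toy numerical version of the stability property described above (my own illustration; the branch amplitudes and the “works correctly” flags are invented): take a set of near-future branches and check whether the branches in which the computer runs correctly form a weighted majority under 2-norm weights but not under 1-norm weights.

```python
import numpy as np

# Invented near-future branches: one high-amplitude branch in which the
# computer executes correctly, plus many tiny "glitch" branches.
amps    = np.array([0.95] + [0.01] * 500)
correct = np.array([True] + [False] * 500)

def weight_of(amps, flags, norm):
    """Normalized weight of the flagged branches under |amp| ** norm."""
    w = np.abs(amps) ** norm
    return w[flags].sum() / w.sum()

print("2-norm (Born) weight of correct branches:", weight_of(amps, correct, 2))
print("1-norm weight of correct branches:", weight_of(amps, correct, 1))
```

With these made-up numbers the “computer works” branch carries about 95% of the 2-norm weight but only about 16% of the 1-norm weight, which is the asymmetry the argument needs; whether realistic decoherence actually produces this pattern is the open empirical question.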
You are committing the general error of prematurely declaring the question “dissolved”. It’s always better to err in the other direction.
“Beginning to feel” that the question is dissolved is far from the level of certainty required to “declare it dissolved”, merely a hunch that it’s the right direction to look for the answer (not that it’s a question I’m especially interested in, but it might be useful to understand it better).
I agree with your description in the second paragraph, but don’t clearly see what you wanted to communicate through it. (Closest salient idea is Hanson’s “mingled worlds”.)
Why do we happen to have a “preference” for a highly ordered world?
Evolution happened in that ordered world, and it built systems that are expected (and hence, expect) to work in the ordered world, because working in the ordered world was the criterion for selecting them in that ordered world in the past. In order to survive/replicate in an ordered world (a narrow subset of what’s possible), it’s adaptive to expect an ordered world.
...which seems to be roughly the same “reality is a Darwinian concept” nonsense as what I came up with (do you agree?). You can still assign probabilities though, but they are no longer decision-theoretic probabilities.