Ata, there are many things wrong with your ideas. (Hopefully saying that doesn’t put you off—you want to become less wrong, I assume.)
it is more difficult to get to the point where it actually seems convincing and intuitively correct, until you independently invent it for yourself
I have indeed independently invented the “all math exists” idea myself, years ago. I used to believe it was almost certainly true. I have since downgraded its likelihood of being true to more like 50% as it has intractable problems.
If it saved a copy of the universe at the beginning of your life and repeatedly ran the simulation from there until your death (if any), would it mean anything to say that you are experiencing your life multiple times?
Of course. (Well, it might be better to say that multiple guys like you are experiencing their own lives.)
Otherwise, it would mean that all types of people have the same measure of consciousness. Thus, for example, the fact that people who seem to be products of Darwinian evolution are more numerous would mean nothing—they are more numerous in terms of copies, not in terms of types, so the typical observer would not be one. So more copies = more measure. A similar argument applies to high measure terms in the quantum wavefunction. None of these considerations change if we assume that all math structures exist.
how about if we’re being simulated by zero computers?
You assume that this would make no difference to our consciousness, but you don’t actually present any argument for that. You just assert it in the post. So I would have to say that your argument—being nonexistent—has zero credibility. That doesn’t mean that your conclusion must be false, just that your argument provides no evidence in favor of it. The measure argument shows that your conclusion is false—though with the caveat that Platonic computers might count as real enough to simulate us. So let’s continue.
By Occam’s Razor, I conclude that if a universe can exist in this way — as one giant subjunctive — then we must accept that that is how and why our universe does exist
So you are abandoning the question of “Why does anything exist?” in favor of just accepting that it does, which is what you warned against doing in the first place.
If all math must exist in a strong Platonic sense, then obviously, it does. If it merely can so exist as far as we know, or OTOH might not, then we have no answer as to why anything exists. “Nothing exists” would seem to be the simplest thing that might have been true, if we had no evidence otherwise.
That said, “everything exists” is prima facie simpler than “something exists”, so, given that at least something exists, Occam’s Razor suggests that everything exists. Hence my interest in it.
There’s a problem, though.
If every possible mathematical structure is real in the same way that this universe is, then isn’t there only an infinitesimal probability that this universe will turn out to be ruled entirely by simple regularities?
Good question. There is an argument based on Turing machines that the simplest programs (i.e. laws of physics) have more measure, because a random string is more likely to have a short segment at the beginning that works well and then a random section of ‘don’t care’ bits, as opposed to needing a long string that all works as part of the program. So if we run all TM programs Platonically, simpler “laws of physics” have more measure, possibly resulting in universes like ours being typical. Great, right?
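The ‘don’t care bits’ argument can be illustrated with a toy Monte Carlo estimate. This is only a sketch under the simplifying assumption that programs are prefixes of uniformly random bit strings; the function name is mine, not from any standard treatment:

```python
import random

def prefix_measure(program_bits, trials=200_000):
    """Estimate a program's measure as the fraction of uniformly random
    bit strings that begin with the program, treating all later bits as
    'don't care' bits. Analytically this is 2**-len(program_bits)."""
    random.seed(0)  # reproducible sketch
    hits = 0
    for _ in range(trials):
        if all(random.getrandbits(1) == b for b in program_bits):
            hits += 1
    return hits / trials

# A shorter "law of physics" captures exponentially more measure:
short_law = prefix_measure([1, 0, 1])            # ~ 2**-3
longer_law = prefix_measure([1, 0, 1, 1, 0, 1])  # ~ 2**-6
```

Each extra bit a program requires halves its measure, which is why, if all TM programs run Platonically, the simplest adequate laws dominate.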
But there are problems with this. First, there are many possible TMs that could run such programs. We need to choose one—but such a choice contradicts the “inevitable” nature that Platonism is supposed to have. So why not just use all of them? There are infinitely many, so there is no unique measure to use for them. Any choice we can make of how to run them all is inevitably arbitrary, and thus, we are back to “something” rather than “everything”. We can have a very “big” something, since all programs do run, but it’s still something—some nonzero information that pure math doesn’t know anything about.
That’s just TMs, but there’s no reason other types of math structures such as continuous functions shouldn’t exist, and we don’t even have the equivalent of a TM to put a measure distribution on them.
I don’t know for sure that there isn’t some natural measure, but if there is, I don’t think we can know about it. Maybe I’m overlooking some selection effect that makes things work without arbitrariness.
Ok, so suppose we ignore the arbitrariness problem. The resulting ‘everything’ might not be Platonism, but at least it would be a high-level and fairly simple theory of physics. Does the TM measure in fact predict a universe like ours?
I don’t know. In practice, if we select a fairly simple TM, the differences resulting from the choice of TM are negligible. But we still have the Boltzmann brain question. I don’t know whether a BB is typical in such an ensemble or not. At least that is a question that can be studied mathematically.
If it saved a copy of the universe at the beginning of your life and repeatedly ran the simulation from there until your death (if any), would it mean anything to say that you are experiencing your life multiple times?
Of course.
I’m not so sure, Mallah. Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you’d expect to be in universe A. I think your expectation depends entirely on your prior, and I don’t see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.
(I’m assuming the simulation of universe A includes every Everett branch, or else it includes only a single Everett branch and it’s the same one in every instance.)
What if you run a simulation of universe A on a computer whose memory is mirrored a thousand times on back-up hard disks? What if it only has one hard disk, but it writes each bit a thousand times, just to be safe? Does this count as a thousand copies of you?
As for wavefunction amplitudes, I don’t see why that should have anything to do with the number of instantiations of a simulation.
Your first argument seems to say that if someone simulated universe A a thousand times and then simulated universe B once, and you knew only that you were in one of those simulations, then you’d expect to be in universe A.
That’s right, Nisan (all else being equal, such as A and B having the same # of observers).
I don’t see why your prior should assign equal probabilities to all instances of simulation rather than assigning equal probabilities to all computationally distinct simulations.
In the latter case, at least in a large enough universe (or quantum MWI, or the Everything), the prior probability of being a Boltzmann brain (not a product of Darwinian evolution) would be nearly 1, since most distinct brain types are BBs. We are not BBs (perhaps not prior info, but certainly info we have), so we must reject that method.
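The counting behind this can be made concrete with a toy bound. The numbers here are illustrative assumptions of mine, not from the thread: at most 2**k distinct types can be generated by programs shorter than k bits, so if all 2**n possible n-bit “brain types” get equal weight, the structured (evolution-like) ones are a vanishing fraction:

```python
# Toy counting sketch: weight observers by distinct type rather than by
# copies, and the random (Boltzmann-brain-like) types dominate.
n = 100  # bits needed to specify a brain type (illustrative assumption)
k = 30   # program length sufficing for "evolved" types (illustrative)

total_types = 2**n          # all possible n-bit types
simple_types_bound = 2**k   # at most this many have sub-k-bit descriptions

fraction_simple = simple_types_bound / total_types  # 2**-70, ~8.5e-22
```

Under type-weighting, a typical observer would almost certainly be one of the unstructured types, contrary to our observations.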
What if you run a simulation of universe A on a computer whose memory is mirrored a thousand times on back-up hard disks? … Does this count as a thousand copies of you?
No. That is not a case of independent implementations, so it just has the measure of a single A.
As for wavefunction amplitudes, I don’t see why that should have anything to do with the number of instantiations of a simulation.
A similar argument applies: more amplitude means more measure, or we would probably be BBs. Also, in the Turing machine version of the Tegmarkian everything, that could only be explained by more copies.
For an argument that even in the regular MWI, more amplitude means more implementations (copies), as well as discussion of what exactly counts as an implementation of a computation, see my paper, “MCI of QM”.
But there are problems with this. First, there are many possible TMs that could run such programs. We need to choose one—but such a choice contradicts the “inevitable” nature that Platonism is supposed to have.
The choice of your Turing machine doesn’t much matter, since all Turing machines can simulate each other. If you choose the “wrong” Turing machine, your measures will be off by at most a constant factor (the complexity penalty of an interpreter for the “right” machine language).
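This is the invariance theorem of algorithmic information theory, sketched here in the standard notation (for universal prefix machines $U$ and $V$, with $c_{U,V}$ the length of a $U$-program interpreting $V$):

```latex
K_U(x) \le K_V(x) + c_{U,V}
\quad\Longrightarrow\quad
m_U(x) \ge 2^{-c_{U,V}}\, m_V(x)
```

So any two choices of universal TM assign measures agreeing up to a multiplicative constant, though the constant itself still depends on the arbitrary pair of machines chosen.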
That’s just TMs, but there’s no reason other types of math structures such as continuous functions shouldn’t exist, and we don’t even have the equivalent of a TM to put a measure distribution on them.
For continuous functions, we do. See “abstract stone duality”.
Interesting. Do you know of a place on the net where I can see what other (independent, mathematically knowledgeable) people have to say about its implications? It’s asking for a lot maybe, but I think that would be the most efficient way for me to gain info about it, if there is.