If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.
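To put a rough number on "way more copies": treat a world-history as a binary string and "ordered" as "compressible". Of the 2^n possible length-n continuations of what you have seen so far, fewer than 2^{k+1} can be produced by any program of at most k bits, so the ordered ones make up at most a

\[
\frac{2^{\,k+1}}{2^{\,n}} \;=\; 2^{\,k+1-n}
\]

fraction of the total, which is astronomically small once k is much smaller than n. That is the counting intuition behind expecting chaos, though it does assume every continuation gets equal weight.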
This depends on which level of the Tegmark classification you are talking about. Level III, for example, quantum MWI, gives very low probabilities for things like turning into a pheasant, since those events, while possible, have tiny chances of occurring. Level IV, the ultimate ensemble, which seems to be the main emphasis of the poster above, may well be vulnerable to your argument, but since Level IV requires consistency, evaluating that would require a much better understanding of what consistent rule systems look like. And it may be that the vast majority of those universes don’t have observers, so we would actually need to look at consistent rule systems that contain observers. Without a lot more information, it is very hard to estimate the expected probabilities of weird events in a Level IV setting.
Wha? Any sequence of observations can be embedded in a consistent system that “hardcodes” it.
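(A toy example: if your observation record so far is o_1, ..., o_n, the theory whose only axioms are

\[
\mathrm{obs}(1) = o_1,\quad \mathrm{obs}(2) = o_2,\quad \ldots,\quad \mathrm{obs}(n) = o_n
\]

is consistent, since the finite lookup table of those values is a model of it. So an ensemble that admits every consistent system admits one that hardcodes exactly your history, pheasant transformations and all. The obs predicate here is just illustrative notation, not anything standard.)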
Yeah, that’s a good point. Hardcoding complicated changes is consistent, so any argument of this form about Level IV fails. I withdraw that claim.
Tegmark level IV is a very useful tool to guide one’s intuitions, but in the end, the only meaningful question about Tegmark IV universes is this: Based on my observations, what is the relative probability that I am in this one rather than that one? And this, of course, is just what scientists do anyway, without citing Tegmark each time. Hardcoded universes are easily dealt with by the scientists’ favorite tool, Occam’s Razor.
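One way to cash out that Occam step (a sketch, assuming a description-length prior in the Solomonoff/MDL style; this is my gloss, not something Tegmark specifies):

\[
P(U_i \mid O) \;\propto\; 2^{-\ell(U_i)}\, P(O \mid U_i),
\]

where \ell(U_i) is the length of the shortest specification of universe-hypothesis U_i and O is your observation record. A hardcoded universe buys likelihood close to 1, but its specification has to contain the observation record itself, so \ell grows bit-for-bit with the data and the prior penalty cancels the gain; a short lawful universe keeps \ell bounded and ends up dominating the posterior.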
Consistency is about logics, while Tegmark’s madness is about mathematical structures. Whenever you can model your own actions (decision-making algorithm) using huge complicated mathematical structures, you can also do so with relatively simple mathematical structures constructed from the syntax of your algorithm (Löwenheim-Skolem-type constructions). There is no fact of the matter about whether a given consistent countable first-order theory, say, talks about an uncountable model or a countable one.
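(For reference, the result being leaned on is roughly the downward Löwenheim-Skolem theorem: a consistent first-order theory in a countable language that has an infinite model M also has a countable elementary submodel N, i.e.

\[
N \models \varphi \;\Longleftrightarrow\; M \models \varphi \quad \text{for every sentence } \varphi,
\]

so no first-order observation can settle whether "your" structure is the uncountable one or the countable one.)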