I’m willing to (provisionally) believe in MWI, but not Tegmark’s ensemble. You haven’t provided any actual evidence that the latter is true, and chocolate bars indicate that it’s almost certainly false. Here’s the cousin_it scale of science-worthiness:
1. This is true.
2. This works.
3. This sounds true.
4. This sounds neat.
From the looks of things, you have yet to rise above level 4.
Chaotic inflation theory is the evidence.
No. That’s just one small part of the evidence, far from sufficient and I would say far from necessary. By themselves, these ideas would cause me to say “so much the worse for chaotic inflation theory” which is, as far as I know, not terribly well confirmed (or more to the point, not terribly clear in its proper interpretation).
If I understand it correctly, chaotic inflation theory implies a multitude of universes with differing but stable physical laws, not a multitude of universes that evolved just like ours but will soon begin turning chocolate bars into hamsters.
If arbitrarily large universes exist, then there would be people with arbitrarily large computers running every possible program. From that you would get worlds in which chocolate bars turn into hamsters.
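One standard way to cash out “running every possible program” is dovetailing: interleave program 1, program 2, program 3, … so that each one eventually gets an unbounded number of steps. A minimal sketch, with toy generators standing in for programs on a universal machine (this is an illustration, not anything claimed in the comment):

```python
# Dovetailing sketch: start one new "program" each round and advance every
# started program by one step, so no program ever gets starved.
from itertools import count

def toy_program(n):
    """Stand-in for the n-th program: yields its successive outputs."""
    for step in count():
        yield (n, step)

def dovetail(max_rounds=5):
    programs = {}          # index -> running generator
    outputs = []
    for round_no in range(1, max_rounds + 1):
        programs[round_no] = toy_program(round_no)   # launch one new program
        for idx, prog in programs.items():           # advance every launched program one step
            outputs.append(next(prog))
    return outputs

if __name__ == "__main__":
    # After 5 rounds, program 1 has run 5 steps and program 5 has run 1 step;
    # with unbounded rounds, every program runs unboundedly many steps.
    print(dovetail())
```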
Question: Tegmark, in one of his multiverse papers, suggests that ordering measure by complexity seems to be an explanation for finding ourselves in a simple universe, as well as a possible answer to the question ‘how much relative existence do these structures get?’ My intuition says rather strongly that this is almost assuredly correct. Do you know of any other sane ways of assigning measure to ‘structures’ or ‘computations’ other than complexity?
Could you elaborate? It seems to me that because there exists a much greater number of complex computations than there are simple computations, we should expect to find ourselves in a complex one. But this, obviously, does not seem to be the case.
Meanwhile, a newly-minted hamster scurries down the candy aisle in a vacant supermarket.
If we run each universe-program with probability 2 to the power of minus L, where L is the length of the program in bits, and additionally assume that a valid program can’t be a prefix of another valid program, then the total probability sums to 1 or less (by Kraft’s inequality). In this setup shorter programs carry most of the probability weight despite being vastly outnumbered by longer ones. I think the same holds for most other probability distributions over programs that you can imagine.
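A small numerical check of this claim (the self-delimiting encoding below is my own toy construction, not anything from the comment): give each program of length L weight 2^-L, keep the set prefix-free, and the handful of shortest programs carry most of the total weight even though longer programs vastly outnumber them.

```python
from itertools import product

def encode(body):
    """Self-delimiting encoding: double every bit of the body, then append '01'.
    No valid program is a prefix of another, so Kraft's inequality applies."""
    return "".join(b * 2 for b in body) + "01"

def weight(program):
    return 2.0 ** (-len(program))

# Enumerate all programs with bodies of up to 10 bits.
programs = [encode("".join(bits)) for k in range(0, 11)
            for bits in product("01", repeat=k)]

total = sum(weight(p) for p in programs)
shortest = sorted(programs, key=len)[:3]          # the three shortest programs
short_mass = sum(weight(p) for p in shortest)

print(f"{len(programs)} programs, total weight {total:.4f} (<= 1, per Kraft)")
print(f"the 3 shortest programs carry {short_mass / total:.0%} of that weight")
```

With bodies up to 10 bits there are 2047 programs, and the three shortest already hold about three quarters of the total weight.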
Right: it is enough if there is a sequence U1, U2, U3, … of increasingly computationally large universes, which seems to be roughly what chaotic inflation + the string theory landscape gives you, though I am a little confused about the ST landscape having a finite number of elements; this may spoil it.
Doesn’t follow at all. A large variety of physical laws and universe sizes doesn’t imply arbitrarily large computers. It’s quite possible that sentient life that can build computers exists only in universes with parameters very much like ours, and our particular universe seems to have hard physical limits on the size of computers before they collapse into black holes or whatever.
Who said anything about sentient life? Arbitrarily numerous computers should simply emerge, within this universe though not this Hubble volume, and should run every computation.
our particular universe seems to have hard physical limits on the size of computers before they collapse into black holes or whatever.
There’s no upper limit on the size of a computer in our universe. Black holes are only a problem if you assume a very dense computer.
Moreover, it isn’t that hard to construct hypothetical rules for a universe that could easily have arbitrarily large Turing machines. For example, simply using the rules of Conway’s Game of Life.
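For concreteness, the Life rules really are only a few lines; here is a minimal sketch of the update step, with an unbounded grid represented as a set of live cells (nothing here is specific to any Turing machine construction):

```python
# Toy step function for Conway's Game of Life.
from collections import Counter

def step(live):
    """Advance one generation: a cell is alive next step iff it has 3 live
    neighbours, or has 2 live neighbours and is currently alive."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 steps it has moved one cell diagonally, the kind of
# signal propagation that Turing machine constructions in Life rely on.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # the same shape, shifted by (1, 1)
```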
If you make the computer sparse, other limits come into play: all matter decays in finite time, and the speed of light is finite.
Assuming the existence of a Game of Life universe is begging the question.
The discussion above was in the context of arbitrarily large universes existing. The point is that one can construct very simple universes which allow arbitrarily large simulations. You only need one such universe for the argument to go through.
Does chaotic inflation theory imply the existence of a Game of Life universe? I don’t see how. If it doesn’t, what’s the evidence for the proposition that a Game of Life universe exists in the first place? Where are you getting this stuff from?
It doesn’t necessarily do so, but it does imply the existence of others that are similar enough to make little difference. For example, there would probably be a universe with much weaker gravity, making black holes impossible. I don’t know enough about chaotic inflation to comment in detail, but my impression is that one can get much more exotic universes than even that.
This is a good question that I’d like to hear a hardcore physicist answer.
The relevant point is that chaotic inflation allows the string theory landscape to be populated—but are there vacuum states of string theory that allow infinite computation?
My suspicion is yes, because of effects like the Omega Point. It may be impossible in our universe, but surely there are some universes where all the parameters work out.
Just so you know: Tipler’s Omega Point scenario is the time reverse of a big bang expansion from a BKL singularity. The collapsing universe, filled with a plasma too hot and dense for any bound object to survive, is supposed to undergo an infinite series of “Kasner oscillations” which alternately squeeze the plasma from different cosmic directions, providing the energy for computation.
The scenario is very problematic. The plasma description will not be valid at arbitrarily high temperatures. Eventually the particles will be colliding so hard that they become micro black holes; some other dynamical regime will take over. Tipler has worked hard to contrive ways around this, but it’s just really unlikely that an infinite sequence of Kasner epochs can be made to happen, especially, I would think, if you work within string theory, which behaves differently from field theory at high energies.
There is no consensus in string theory regarding cosmological initial and final conditions. String theory sometimes “resolves” singularities, i.e. provides a non-singular description of an apparently singular geometry (e.g. the “fuzzball” description of the black hole interior). However, there is no consensus on whether a big-crunch singularity will generically resolve and lead to a big bounce (as in “ekpyrotic” and “pre-big-bang” models), or whether it is simply the end, even in string theory.
At the other end, there is no particular consensus about the combination of chaotic inflation and the string theory landscape being the right way to think about cosmology. (I should probably emphasize that most “string cosmology” is actually about events in a single expanding universe—e.g. studying how the inevitable extra heavy particles affect measurable aspects of cosmic evolution like dark matter and atomic abundances—and not this mind-of-God stuff.) Its chief champion is Leonard Susskind, who is very eminent but does not speak for all his equally eminent colleagues. But let us assume this framework for the purpose of discussion.
Inflation is a hypothetical period of exponentially rapid expansion in the very early universe. In a field theory model of inflation, you start with a “scalar” field in a high-energy-density state; it dynamically relaxes into a lower energy state, and then inflation ends, being replaced by cosmic expansion at ordinary rates. For inflation to occur, the scalar field only has to have a few properties, and so there are endless specific field theories which will exhibit inflation. In string theory (see bottom of page 6 here), there are also many ways to achieve inflation.
In “eternal inflation”, most of the universe always remains in the energy-dense inflating state. The relaxation into slower expansion only occurs in small, disconnected spatial regions, outside of which exponential inflation continues forever. In “chaotic inflation”, the relaxation process sees the inflationary fields settling into different stable states in different regions. Maybe I should explain what a “stable state” is. In particle physics theories, particles usually get their mass by interacting with a Higgs field or fields with a “nonzero vacuum expectation value”. The Higgs fields interact with each other and settle into some lowest-energy equilibrium determined by the form of the interaction (which can be quite complicated). There can be more than one such equilibrium.
In string theory many apparently stable configurations of the extra dimensions have been constructed. So in stringy eternal inflation, you suppose that different string geometries are being realized in different isolated regions of an otherwise uniformly and eternally inflating universe. Usually this is brought up in the context of anthropic reasoning; the hope is to predict the features of our local physics anthropically, since we can’t be living in a region hostile to life.
Now we can think about this question of whether an infinite computation might get to occur somewhere in such a universe. The two standard cosmological scenarios for eternal life are the Tipler and Dyson scenarios. I’ve already mentioned that Tipler’s scenario is dubious. Dyson’s scenario is for an eternally expanding universe; something about stable islands of matter communicating with each other ever more rarely and weakly, with these interactions spaced out in such a way that they manage an infinite sequence of such interactions on a finite energy budget. If you believe in a cyclic cosmology, you might add to these scenarios one in which life persists through the bounce from collapse to expansion, but no-one has proposed a model of that.
I have not deeply surveyed the literature on eternal inflation, but I don’t remember ever seeing anyone talk about one of those non-inflating regions entirely ceasing to expand and undergoing collapse. My grasp of the concepts is weak enough that I can’t even say if there’s some principled reason for this, though inflation is such a generic phenomenon, I would think that there must be models where a local big crunch can occur.
In the string theory context, people really started talking about a landscape in theory-space of many possible geometries, after the observational discovery of dark energy in 1998. That was at first difficult to incorporate into string theory, and the way it was achieved (in “KKLT vacua”) involved the discovery of a new, very large class of stable string geometries. A universe with dark energy is one that expands forever, even at an accelerating rate (just not as fast as inflation). So if we suppose, as Susskind seems to do, that the landscape is dominated by these vacua, then it’s the Dyson scenario, appropriate for an open universe, which is the relevant model of infinite computation.
Now here I am really overreaching what I know, but in discussions of these vacua—which have a de Sitter geometry—I often see it stated that in the end every particle ends up isolated from every other, alone in its own Hubble volume. So the Dyson scenario may require a flat universe, and may be impossible in de Sitter space. I think there’s actually a paper saying as much. I’m not clear on this, but I don’t think this geometry requires that literally every particle ends up in its own patch of expanding space. Just as a galaxy doesn’t experience cosmic expansion (that only happens out in deep intergalactic space, where the geometry is FRW), I don’t see why a gravitationally bound system much larger than a single particle couldn’t become one of these islands in de Sitter space (in which case, maybe you could hope for infinite computation, but not infinitely many states—you would end up repeating). It may only be the “big rip” scenario, in which the dark energy grows, that tears all bound systems apart. But I’m really not sure!
Really, I think these discussions about what’s going on beyond our cosmological horizon are a lot like the discussions of the Fermi paradox. They are exercises in reasoning almost totally unconstrained by empirical data. String theory is supposed to be this unique mathematical structure and so you might hope that it simply tells you how string cosmology is supposed to be. But it’s a work in progress, and in fact the cosmological question may be the same as the other big unresolved question, how to think about all those different geometries. Usually you just pick a geometry and study how the strings behave in it. You allow for some back-reaction, so the geometry adjusts to what the strings are doing, and in some cases you can even describe how one geometry becomes another (Brian Greene worked on this). But a conceptually unified approach regarding the whole “moduli space” of possible geometries is lacking. Other theorists like Cumrun Vafa and Tom Banks have approaches very different to Susskind’s. (Vafa appears to be looking for a single preferred geometry, by using the Hartle-Hawking wavefunction, while Banks thinks moduli space is divided up and the vacua form disjoint groups that aren’t dynamically connected.)
Final message: as currently described, the string landscape plus inflation is not generally thought of as allowing infinite computation or eternal life. But that whole cosmological conception may be faulty.
This comment is most informative, thanks.
However, from what you’ve said, it seems that it is not ruled out even if ST is correct, as we don’t know what ST at high energy scales actually does, right?
There are ideas about what happens in those extreme conditions. But frankly I think your chances are better at very low energies. The difficult part about aspiring to live forever is that somehow you need the probability of an accident to drop off sharply and permanently, or else the asymptotic odds of survival are zero. Late-time de Sitter space should be a lot more peaceful than a collapsing cosmological fireball.
The difficult part about aspiring to live forever is that somehow you need the probability of an accident to drop off sharply and permanently
Not necessarily—you can use error-correcting algorithms and multiply redundant hardware to run your computer in spite of an error rate, as long as it is not too high.
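A quick numerical gloss on “drop off sharply and permanently” (toy numbers, mine): surviving forever means the infinite product of (1 − p_n) over all epochs stays above zero, which happens exactly when the p_n shrink fast enough for their sum to converge; error correction and redundancy, as suggested here, are ways of pushing the effective p_n down.

```python
# If p_n is the chance of a fatal accident in epoch n, the probability of
# surviving forever is the infinite product of (1 - p_n): positive only when
# the sum of the p_n converges.
import math

def survival_probability(p, epochs=10**6):
    log_surv = sum(math.log1p(-p(n)) for n in range(1, epochs + 1))
    return math.exp(log_surv)

def fast_decay(n):   # sum of p_n converges -> survival odds stay positive
    return 0.01 / n**2

def slow_decay(n):   # sum of p_n diverges -> survival odds tend to zero
    return 0.01 / n

print(survival_probability(fast_decay))  # settles near ~0.984
print(survival_probability(slow_decay))  # still ~0.87 after 1e6 epochs, but tends to 0 as epochs grow
```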
Dyson has a scenario for infinitely much computation with finitely much energy with cosmological constant zero. Probably you can’t really do infinitely much computation, but end up in a loop because of limited memory. If inflation changes the cosmological constant, then getting it arbitrarily close to zero would be as good as Dyson’s scenario for the purpose of this discussion. You also want regions with arbitrarily high memory, which is probably mainly a matter of energy. My vague impression is that the cosmological constant gives a bound on the computation independent of the amount of memory.
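To show how a finite energy budget can fund unboundedly many operations, here is a toy version of the bookkeeping; the cooling schedule and constants are mine, chosen only to make the sum converge, so this is the flavor of Dyson’s argument rather than his derivation.

```python
import math

K_B = 1.380649e-23          # J/K
T0 = 1.0                    # starting temperature of the computer, K (toy value)
OPS_PER_EPOCH = 1e20        # operations performed in each epoch (toy value)

def epoch_energy(n):
    """Energy for epoch n if each operation dissipates ~ k_B * T_n * ln 2
    and the machine cools on a schedule T_n = T0 / n**2."""
    T_n = T0 / n**2
    return OPS_PER_EPOCH * K_B * T_n * math.log(2)

total = sum(epoch_energy(n) for n in range(1, 10**6 + 1))
limit = OPS_PER_EPOCH * K_B * T0 * math.log(2) * math.pi**2 / 6
print(f"energy spent after 1e6 epochs: {total:.3e} J (limit {limit:.3e} J)")
# Operations performed grow without bound, but the energy bill converges.
# The catch, as noted above, is that with finite memory the machine
# eventually revisits an earlier state, i.e. it loops.
```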
Dyson has a scenario for infinitely much computation with finitely much energy with cosmological constant zero. Probably you can’t really do infinitely much computation, but end up in a loop because of limited memory.
Dyson suggests that spatially encoding memory in an expanding computer would allow memory capacity to grow logarithmically.
If inflation changes the cosmological constant, then getting it arbitrarily close to zero would be as good as Dyson’s scenario for the purpose of this discussion. You also want regions with arbitrarily high memory, which is probably mainly a matter of energy. My vague impression is that the cosmological constant gives a bound on the computation independent of the amount of memory.
What I recall hearing is that a nonzero cosmological constant makes things fall off the edge of the universe, i.e. the edge is an event horizon, so it Hawking-radiates, so the temperature of the sky (=> energy dissipation per operation) asymptotically approaches something nonzero. There might be a more pure argument.
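Rough numbers behind that, using the standard Gibbons-Hawking and Landauer formulas (the asymptotic Hubble rate below is an approximate value I’m plugging in, not something from the thread):

```python
import math

HBAR = 1.054571817e-34      # J*s
K_B  = 1.380649e-23         # J/K
H_LAMBDA = 1.8e-18          # 1/s, rough asymptotic Hubble rate for our dark energy

# Gibbons-Hawking temperature of the de Sitter horizon: T = hbar * H / (2 pi k_B)
T_horizon = HBAR * H_LAMBDA / (2 * math.pi * K_B)

# Landauer bound: erasing one bit at temperature T costs at least k_B * T * ln 2
E_per_bit = K_B * T_horizon * math.log(2)

print(f"horizon temperature ~ {T_horizon:.1e} K")          # ~ 2e-30 K
print(f"minimum cost per erased bit ~ {E_per_bit:.1e} J")  # ~ 2e-53 J
# Tiny, but nonzero: with a fixed energy budget this caps the total number of
# irreversible operations, which is (as I understand it) the sense in which a
# positive cosmological constant bounds the computation.
```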
Yeah, I guess I should have looked that up. I do not find Dyson’s paragraph on memory convincing. Frankly, I take it as evidence of the opposite.
Rather, what we know (anthropically) is that the typical observer-moment comes from an ordered history within a big, simple universe. If the universe works as we think it does (just assuming MWI, not Level IV), then there do exist Boltzmann brains in the same state as my current brain, and some of them have successor states where they do see the chocolate-hamster singularity.
But the measure of those observer-moments is dwarfed by the measure of the observer-moments in orderly contexts, or else my memories wouldn’t match my experiences and my current experiences would be highly unlikely to be this low in entropy.
But chocolate bars don’t turn into hamsters. The universe is predictable. Why are we discussing this stuff when we already know it isn’t true?
Chocolate bars have a very low probability of turning into hamsters. A chocolate bar is one configuration of elementary particles, and a hamster is another, and there are lots of particles that may or may not be in the space of that chocolate bar at any given point in time.
Our universe is predictable, in that very low probability events happen with a very low frequency, but this does not entail that very low probability events never happen.
But chocolate bars don’t turn into hamsters. The universe is predictable. Why are we discussing this stuff when we already know it isn’t true?
Some universes are predictable. Others are predictable until tomorrow, and after that, chocolate bars turn into hamsters.
I’m talking about our universe. Don’t try to confuse me.
“our universe”
SL5 error: we don’t have a unique universe…
What makes you think so? Pure shock value?
The concept of “our universe” does make sense, even if it’s not the only thing we care about.
Re: “we don’t have a unique universe”
“Universe” means everything there is. You can’t have multiple universes. That’s what the “uni” in “universe” means—and that’s why it’s the M.W.I. and not the M.U.I.
Do you often go around criticizing people for talking about ‘atom smashers’ or ‘ATM machines’?
That is not the promotion of misuse of scientific terminology—so I am less concerned about that.