Actually, you can scale up to even bigger brains. With serious nanotech and a large enough scale, the limiting factor becomes energy again. To maximise thought per unit mass you need energy stores much larger than the brainware: near-100% mass-to-energy conversion feeding an efficient processor.
The best solution is a black hole with size on the order of the pseudo-radius of the de Sitter spacetime. The black hole's radiating temperature is nanokelvin, only a few times hotter than the background de Sitter radiation, so the black hole is of galactic mass. All of its energy is slowly radiated away and used to run an ultra-cold computer at the Landauer limit. The result looks like what a far-future civilization might build at the end of time.
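To get a feel for the numbers, here is a minimal Landauer-limit sketch. The galactic-scale mass and the nanokelvin operating temperature are taken from the comment above purely for illustration; the point is just that the colder the processor, the more bit operations a fixed energy store can buy.

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

# Illustrative assumptions taken from the comment above:
M = 1e12 * M_sun     # "galactic mass" energy store, kg
T = 1e-9             # operating temperature of order a nanokelvin, K

energy = M * c**2                    # total mass-energy available, J
landauer = k_B * T * math.log(2)     # minimum energy per irreversible bit operation at T, J

print(f"mass-energy store:            {energy:.2e} J")
print(f"Landauer cost per bit at T:   {landauer:.2e} J")
print(f"maximum irreversible bit ops: {energy / landauer:.2e}")
# The Landauer cost scales linearly with T, so halving the temperature
# doubles the number of bit operations the same energy store can fund.
```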
Actually there are some subtle issues here that I didn't spot before. If you take a small (not exponentially vast) region of space-time and condition on it containing at least 100 observer-seconds, it is far more likely that they come from a single Boltzmann astronaut than from 100 separate Boltzmann brains.
However, if you select a region of space-time with hyper-volume exp[10^69], then it is likely to contain a Boltzmann brain of mass 1 kg, which we suppose can think for 1 second. The chance of the same volume containing a 2 kg Boltzmann brain is roughly exp[−10^69].
So unless that extra 1 kg of life support can let the Boltzmann brain exist for exp[10^69] seconds, most observer-moments should not have life support.
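A minimal sketch of this bookkeeping, working with log-probabilities because exp[10^69] overflows any float; the per-unit-volume rates of the form exp[−M×10^69] are the ones assumed in this thread, not derived here.

```python
# Work in natural logs: exp(1e69) overflows any floating-point type.
C = 1e69                    # assumed exponent scale: ln P(M kg BB per unit volume) ~ -M*C

ln_volume = C               # region of hyper-volume exp[10^69]
ln_rate_1kg = -1 * C        # ln(probability per unit volume) of a 1 kg BB
ln_rate_2kg = -2 * C        # ln(probability per unit volume) of a 2 kg BB

ln_expected_1kg = ln_volume + ln_rate_1kg   # = 0, i.e. about one 1 kg BB expected
ln_expected_2kg = ln_volume + ln_rate_2kg   # = -1e69, i.e. 2 kg BBs are exponentially rare

print("ln E[# of 1 kg BBs] =", ln_expected_1kg)
print("ln E[# of 2 kg BBs] =", ln_expected_2kg)

# For the 2 kg brains to contribute as many observer-seconds in total,
# each would have to think about exp[1e69] times longer than a 1 kg brain:
print("required lifetime ratio = exp(", ln_expected_1kg - ln_expected_2kg, ")")
```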
Imagine a lottery that's played by 1,000,000,000 people. There is 1 prize of £1,000,000 and 1,000 prizes of £100,000 each. If I say that my friends have won at least £1,000,000 between them (and that I have far fewer than 100,000 friends), then it's likely that one friend hit the jackpot. But if I pick a random £1 handed out by this lottery and look at where it goes, it probably goes to a runner-up.
This is directly analogous, except with smaller numbers, and £s instead of subjective experience. The one big win is the Boltzmann astronaut; the smaller prizes are Boltzmann brains.
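To make the lottery arithmetic concrete, here is a small sketch; the friend count N is an illustrative assumption, not something stated in the comment.

```python
from math import factorial

players = 1_000_000_000
jackpot = 1_000_000        # the single prize of £1,000,000
runner_up = 100_000        # each of the 1,000 runner-up prizes
n_runner_ups = 1_000

# Where does a randomly chosen £1 of prize money go?
total_prize_money = jackpot + n_runner_ups * runner_up
print("P(random £1 came from the jackpot) =", jackpot / total_prize_money)                    # ~0.0099
print("P(random £1 came from a runner-up) =", n_runner_ups * runner_up / total_prize_money)   # ~0.99

# Conditioning instead on "my friends won at least £1,000,000 between them":
N = 1_000                                   # illustrative number of friends, N << 100,000
p_jackpot_friend = N / players              # roughly the chance that some friend hit the jackpot
lam = N * n_runner_ups / players            # expected number of runner-up wins among my friends
p_ten_runner_ups = lam**10 / factorial(10)  # leading-order chance of ten £100,000 wins
print("P(a friend hit the jackpot)         ≈", p_jackpot_friend)   # 1e-06
print("P(ten friends won runner-up prizes) ≈", p_ten_runner_ups)   # ~2.8e-37
```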
The reason for this behaviour is that doubling the size of the spacetime considered makes a Boltzmann astronaut twice as likely, but makes a swarm of 100 Boltzmann brains 2^100 times as likely. For any small region of spacetime, "nothing happens" is the most likely option; a Boltzmann brain is far less likely, and a Boltzmann astronaut far less likely than that. The ratio of thinking times is small enough to be ignored.
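A toy Poisson model of this volume scaling; the per-unit-volume rates below are placeholders chosen to be computable, not the thread's exp[−10^69]-scale values.

```python
from math import lgamma, log

def ln_poisson(k, lam):
    """Natural log of the Poisson probability of exactly k events with mean lam."""
    return k * log(lam) - lam - lgamma(k + 1)

# Placeholder per-unit-volume rates (not the real, exponentially tiny numbers):
rate_astronaut = 1e-30   # one long-lived Boltzmann astronaut
rate_brain = 1e-3        # one short-lived 1 kg Boltzmann brain

for V in (1.0, 2.0):
    ln_p_astronaut = ln_poisson(1, rate_astronaut * V)
    ln_p_swarm = ln_poisson(100, rate_brain * V)
    print(f"V = {V}: ln P(one astronaut) = {ln_p_astronaut:.2f}, ln P(100 brains) = {ln_p_swarm:.2f}")

# Doubling V adds ln 2 ≈ 0.69 to the astronaut's log-probability (twice as likely),
# but adds 100 * ln 2 ≈ 69.3 to the swarm's (about 2^100 times as likely).
```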
If we think that we are Boltzmann brains, then we should expect to freeze over in the next instant. If we thought that we were Boltzmann brains, and that there were at least a billion observer-moments nearby, then we should expect to be a Boltzmann astronaut.
Let V be the hyper-volume where the probability of an M kg BB is exactly exp[−M×10^69]. Let's imagine a sequence of V's stretching forward in time. About exp[−10^69] of them will contain one BB of mass 1 kg, and about exp[−2×10^69] will contain a BB of mass 2 kg, which is also the proportion that contains two brains of mass 1 kg.
So I think you are correct; most observer-moments will still be in short-lived BBs. But if you are in an area with disproportionately many observer-moments, then they are more likely to be in long-lived BBs. I will adjust the post to reflect this.
However, a Boltzmann simulation may be much more efficient than a biological brain. 1 g of advanced nanotech supercomputer could simulate trillions of observer-moments per second and weigh 1,000 times less than a "real" brain. This means that we are more likely to be inside a BB-simulation than in a real BB. Also, the coarsest and most primitive simulations, with many errors, should dominate.
That won't fix the issue. Just redo the analysis at whatever size is able to do merely a few seconds of brain simulation.
It probably depends on how the mass and time duration of the fluctuation trade off against each other. For quantum fluctuations which return back to nothingness, this relation is defined by the uncertainty principle, and for any fluctuation with significant mass, its time of existence would be a minuscule fraction of a second, which would be enough only for one static observer-moment.
But if we can imagine a computer that is very efficient at calculation, which could perform many calculations within the time allowed for its existence by the uncertainty principle, it should dominate by number of observer-moments.
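A rough sketch of the energy-time tradeoff being appealed to here, assuming the fluctuation's lifetime is of order ħ/(Mc²); this is only an order-of-magnitude heuristic.

```python
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

def fluctuation_lifetime(mass_kg):
    """Order-of-magnitude lifetime of a vacuum fluctuation of the given mass,
    from the energy-time uncertainty relation: dt ~ hbar / (m * c^2)."""
    return hbar / (mass_kg * c**2)

for m in (1.0, 1e-3, 1.7e-27):   # roughly: a brain-sized mass, a gram of nanotech, a proton
    print(f"mass {m:g} kg -> lifetime ~ {fluctuation_lifetime(m):.1e} s")

# A 1 kg fluctuation lasts ~1e-51 s, so (as the parent comment argues) any
# observer-moments would have to be computed within an extremely short window.
```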
You are making some unjustified assumptions about the way computations can be embedded in a physical process. In particular we shouldn’t presume that the only way to instantiate a computation giving rise to an experience is via the forward evolution of time. See comment below.
Hmm, why use a black hole when you could have matter and antimatter react directly when needed?
The upper limit on the energy of a randomly appearing BB-simulation is about 1 solar mass, because a whole new Sun and a whole new planet could appear as physical objects, and in that case it would not be a simulation: it would be normal people living on a normal planet.
Moreover, it need not be a fluctuation creating a planet; it could be a fluctuation creating a gas cloud which later naturally evolves into a star and planets. Not every gas cloud will create a habitable planet, but given the astronomically small probabilities we are speaking about, the difference will be insignificant.
We could even suggest that what we observe as the Big Bang was such a cloud.