I agree that there isn’t likely to be any way of calculating the inputs without simulating the missing part of the brain. But that’s not really my point. For every time the correct inputs happen by chance, there will be many other occasions where such a half-BB comes into existence and then gets completely incorrect information. But getting the correct information is definitely cheaper, in probabilistic terms, than getting the rest of the brain—it will happen more often.
That’s what I’m disagreeing with: the assertion that it’s more likely for you to “accidentally” get the other inputs than it is for you to just get the rest of the brain.
There are about 200-250 million axons in the corpus callosum, which connects the right and left hemispheres, and about 7,000 synapses per neuron.
P(a, b, c, …) = P(a)P(b|a)P(c|a,b)...
If you don’t have a brain, the individual P(x)s are pretty much independent, and in order to get a particular stimulus pattern you need a few hundred billion fairly unlikely things to happen at once. In order for the brain to have any sort of sustained existence, you need a few hundred billion fairly unlikely things to happen in a way that corresponds to brain states, at every moment. So the bill is a few hundred billion unlikely events, multiplied by another few hundred billion for each successive moment, and it keeps growing the longer you run it.
If you have a brain, you take massive penalties on a few P(x)s in order to “buy” your neurons, but after that, things aren’t at all independent. Given what one synapse is doing, you have a much better guess at the other ~7,000 on that neuron. So you’re only guessing a few hundred billion things if you just connect neurons to the other half.
Furthermore, neurons interact with each other, and given what everything connected to a neuron is doing, you have a pretty good guess at what that neuron is doing.
So you’re guessing a few orders of magnitude fewer things, on top of the 3 orders of magnitude savings from encoding in neurons.
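As a toy sketch of that bookkeeping (the ~10^10 neurons per hemisphere and the one bit per state are placeholder assumptions of mine; only the synapses-per-neuron figure comes from above):

```python
import math

# Toy comparison: how many independent states you have to get right if you
# guess every synapse separately, versus one state per neuron once each
# neuron's ~7,000 synapses are tied together.

neurons_per_hemisphere = 1e10    # placeholder assumption: ~10^10 neurons
synapses_per_neuron = 7e3        # figure quoted above
bits_per_state = 1               # placeholder: one bit to pin down each state

# No brain: every synaptic state is roughly independent.
independent_bits = neurons_per_hemisphere * synapses_per_neuron * bits_per_state

# With neurons: knowing what a neuron is doing largely fixes its synapses,
# so you only guess one state per neuron.
per_neuron_bits = neurons_per_hemisphere * bits_per_state

print(f"guessing synapses : ~10^{math.log10(independent_bits):.0f} bits")
print(f"guessing neurons  : ~10^{math.log10(per_neuron_bits):.0f} bits")
print(f"savings           : ~{math.log10(independent_bits / per_neuron_bits):.1f} orders of magnitude")
```

With those placeholder numbers the savings come out to roughly log10(7,000) ≈ 3.8 orders of magnitude, i.e. the “few orders of magnitude” above.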
On top of that, sustained interaction with the other hemisphere is much cheaper: given that you already have the other half of the brain, it’s fairly probable that it will respond the way the other half of a brain should over the next few seconds.
I didn’t have the number of axons in the corpus callosum, and it’s an interesting figure. If we assume they either fire or not, independently of each other, at a rate of up to 200 Hz, then the bit rate for the bus is about 4 Gigabits per second. If the brain lives a couple of minutes, you’ll need about 400 Gigabits, or 50 Gigabytes. This means you get about 4 bytes per brain cell in the other hemisphere.
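As a rough sketch of that arithmetic (the effective 20 bits per second per axon, the 100-second “couple of minutes”, and the ~10^10 cells in the receiving hemisphere are placeholder assumptions chosen to land in the same ballpark as the figures above, not measured values):

```python
# Back-of-the-envelope sketch of the bus-bandwidth figures above.
# The effective 20 bits/s per axon (well below the 200 Hz ceiling),
# the 100 s duration, and the ~10^10 cells in the receiving hemisphere
# are placeholder assumptions.

axons = 200e6                # axons in the corpus callosum (figure quoted above)
bits_per_axon_per_s = 20     # assumed effective information rate per axon
seconds = 100                # "a couple of minutes", roughly
cells = 10e9                 # assumed cell count in the other hemisphere

rate_bits = axons * bits_per_axon_per_s   # ~4e9 bits/s, i.e. ~4 Gbit/s
total_bits = rate_bits * seconds          # ~4e11 bits, i.e. ~400 Gbit
total_bytes = total_bits / 8              # ~5e10 bytes, a few tens of GB

print(f"bus rate : {rate_bits / 1e9:.0f} Gbit/s")
print(f"total    : {total_bytes / 1e9:.0f} GB over {seconds} s")
print(f"per cell : {total_bytes / cells:.1f} bytes")
```

Under those assumptions you get a handful of bytes per cell, the same order of magnitude as the figure above.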
A single brain cell is so complex that nothing that complex could come into existence as a sheer coincidence over all space and time. It requires an evolutionary process to make something that complex. 4 bytes worth of coincidence happens essentially instantaneously.
A single brain cell is so complex that nothing that complex could come into existence as a sheer coincidence over all space and time. It requires an evolutionary process to make something that complex.
I think you may have missed the point of the Boltzmann-brain hypothetical. As the volume of space and time goes to infinity, the chance of such a thing forming due to chance will converge to one.
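For concreteness: if each of N independent patches of spacetime has some tiny fixed probability p of hosting such a fluctuation, then P(at least one) = 1 − (1 − p)^N, which goes to 1 as N grows without bound, no matter how small p is.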
4 bytes worth of coincidence happens essentially instantaneously.
I have no idea how to attach meaning to this sentence. Surely the frequency of a one-in-four-billion event depends on how many trials you conduct per unit time.
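Concretely: an event with probability p ≈ 2^−32 (about one in four billion) takes on the order of 1/p trials to show up, so at r independent trials per second the expected wait is roughly 4 × 10^9 / r seconds; whether that counts as “essentially instantaneous” depends entirely on r.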
My fault for not describing this more specifically. I know that over truly vast expanses of space and time, it eventually becomes quite likely that a Boltzmann brain emerges somewhere. But the expanse required is much greater than our observable universe, which is what I was referring to in the first case.
I guess my second sentence was intended to mean that any real universe gets through four billion events of the requisite size (cosmic-ray impacts, say) pretty quickly.
The interesting part of the hypothesis, as I understand it, is less that the probability of a Boltzmann brain approaches one as the universe grows older (trivially true) and more that the amount of negentropy needed to generate a universe is vastly, absurdly larger than that needed to generate a small self-aware system that merely thinks it’s embedded in a universe at some point in time; anthropic considerations should therefore guide us to favor the latter. This is of course predicated on the idea that the universe arose from a random event obeying the kind of probability distributions that govern vacuum fluctuations and similar events.