Just for the Least Convenient World, what if the zombies build a supercomputer and simulate random universes, and find that in 98% of simulated universes life forms like theirs do have shadow brains, and that the programs for the remaining 2% are usually significantly longer?
When you “simulate random universes,” what distribution are you randomizing over?
Seems like the simulations only help if you somehow already know the true probability distribution from which the actual universe was selected.
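For concreteness, here is a toy sketch (my framing, nothing the original comment specifies) of the usual way to make “simulate random universes” precise: fix a programming language, draw uniformly random bits as the program, and run it. With a prefix-free encoding a k-bit program is drawn with probability 2^-k, so the sampler itself bakes in a strong simplicity bias, and the “98%” figure is then a fact about that chosen measure rather than about whatever distribution the actual universe was drawn from.

```python
import random

def sample_program(rng, stop_marker="1111"):
    """Draw uniformly random bits until a stop marker appears.
    This crude prefix-free encoding gives a k-bit program probability 2**-k."""
    bits = ""
    while not bits.endswith(stop_marker):
        bits += str(rng.randint(0, 1))
    return bits

rng = random.Random(0)
# Lengths of a few sampled "universe programs"; the sampler is heavily biased
# toward short programs, and that bias is a modelling choice, not an observation.
print(sorted(len(sample_program(rng)) for _ in range(10)))
```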
How can the version without shadow brains be significantly longer? Even in the worst possible world, it seems like the 2% of non-shadow-brain programs could be encoded by copying their corresponding shadow-brain programs and adding a few lines telling the computer how to garbage-collect shadows using a straightforward pruning algorithm on the causal graph.
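For what it’s worth, here is a minimal sketch of what that garbage-collection step could look like, assuming the shadows are exactly the nodes with no causal path back into anything observable; the graph and node names are purely illustrative.

```python
def prune_shadows(edges, observable):
    """edges: node -> set of nodes it causally influences.
    observable: nodes counted as part of the physical world.
    Returns the nodes to keep: those with some causal path into an observable node."""
    parents = {}
    for node, children in edges.items():
        for child in children:
            parents.setdefault(child, set()).add(node)
    keep, frontier = set(observable), list(observable)
    while frontier:                      # walk backwards from the observable nodes
        node = frontier.pop()
        for parent in parents.get(node, ()):
            if parent not in keep:
                keep.add(parent)
                frontier.append(parent)
    return keep

# Illustrative graph: 'shadow' is influenced by the neurons but influences nothing
# observable, so it (and its private memory) gets garbage-collected.
edges = {
    "neuron_a": {"neuron_b", "shadow"},
    "neuron_b": {"behavior"},
    "shadow": {"shadow_memory"},
    "shadow_memory": set(),
}
print(prune_shadows(edges, observable={"behavior"}))
# keeps behavior, neuron_b, neuron_a; drops shadow and shadow_memory
```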
By the programs being short enough in the first place that those few lines still double the length? By the universe-like part not being straightforwardly encoded, so that to distinguish anything about it you first need a long AI-like interpreter just to get there?
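To put rough numbers on that first possibility (made-up lengths, and assuming the usual 2^-length prior over bit-programs, which the thread does not spell out):

```python
from fractions import Fraction

def weight(length_bits):
    # Standard 2**-length prior weight for a program of the given bit length.
    return Fraction(1, 2 ** length_bits)

base, pruning_code = 300, 300   # made-up: the pruning code doubles the program length
# The pruned (no-shadow-brain) version carries 2**-300 of the weight of the original,
# so under this prior it would show up in a vanishing fraction of sampled universes.
print(weight(base + pruning_code) / weight(base))
```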
That would strongly indicate that something caused the zombies to write a program for generating simulations that was likely to create simulated shadow brains in most of the simulations. (Say the compiler’s built-in prover for things like type checking was inefficient and left behind a lot of baggage that produced second-tier shadow brains in all but 2% of simulations.) It might cause the zombies to conclude that they probably had shadow brains and to start talking about the possibility of shadow brains, but it should be equally likely to do that whether the shadow brains were real or not. (Which means any zombie with a sound epistemology would give no more credence to the existence of shadow brains after the simulations caused other zombies to start talking about shadow brains than it would if the source of the discussion had been a random number generator producing a very large number, with that number interpreted as a string in some standard encoding of the zombies, yielding a paper that discussed shadow brains. Shadow brains in that world should be an idea analogous to Russell’s teapot, astrology, or the invisible pink unicorn in our world.)
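The “equally likely either way” point is just a likelihood ratio of 1; a toy check with made-up numbers:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a single binary hypothesis and a single piece of evidence."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.01  # whatever credence a zombie started with; made-up number
# Epiphenomenal shadow brains cannot influence the simulation code, so the
# simulations are equally likely to "find" shadow brains either way:
print(posterior(prior, p_evidence_if_true=0.98, p_evidence_if_false=0.98))
# unchanged from the prior (0.01, up to floating-point rounding)
```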
Now, if there were some outside universe capable of looking at all of the universes and seeing some universes with shadow brains and some without, and if, in the universes with shadow brains, the zombies were significantly more likely to produce simulations that predicted shadow brains (indeed, shadow brains similar to their actual shadow brains) than the zombies in the universes without them were, then we would be back to seeing exactly what we see when philosophers talk about shadow brains directly: namely, the shadow brains are causing the zombies to imagine shadow brains, which means that the shadow brains aren’t really shadow brains, because they are affecting the world (with probability 1).
Either the result of the simulations points to gross inefficiency somewhere (their simulations predicted something that they shouldn’t have been able to predict), or it points to the shadow brains not really being shadow brains, because they are causally impacting the world. (This is slightly more plausible than philosophers postulating shadow brains correctly for no reason, only because we don’t necessarily know that there is anything driving the zombies to produce simulations efficiently; whereas, in our world, we know we can assume that brains typically produce non-gibberish because enormous selective pressures have caused them to.)
I was talking about the logical counterfactual, where it genuinely is true and knowably so through rationality.
It might be easier to think about it like this: there is a large number of civilizations in T4, each of which can observe that almost all of the others have shadow brains, but none of which can see whether it has them itself.