P-zombies, Compression and the Simulation Hypothesis
Can we have approximate p-zombies?
Let's say we accept that p-zombies don't make sense because of the core arguments. But what about a creature whose behavior is very similar to the target's but which runs on different neural software? That is, a simulated person whose behavior differs only by a small, perhaps undetectable amount from the real person's, but whose internals are so different that we would not expect them to have the same consciousness, if any at all?
Approximate p-zombies would be very useful for Simulations
Say you want to run an ancestor simulation. Your goal is to have the resulting world turn out as accurately as possible after a number of years. This Sim could start with a software model of each person that closely follows a brain and takes a considerable amount of processing power. It would also have to simulate the physical environment. As I understand it, the core simulation-hypothesis argument is that we are probably in one of, say, 10^15 ancestor Sims running in the future in a universe with our physical laws.
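As a quick illustration of the counting step (a minimal sketch: the 10^15 figure is just the placeholder used above, and the assumption that every Sim holds about as many observers as base reality is doing all the work):

```python
# Back-of-envelope simulation counting argument.
# Assumes each Sim contains roughly as many observers as base
# reality, and that all observers fall in one reference class.
num_sims = 10**15        # hypothetical count of ancestor Sims
base_realities = 1

p_base = base_realities / (num_sims + base_realities)
print(f"P(you are in base reality) ~ {p_base:.0e}")  # ~1e-15
```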
Compression is essential
In order to have a Sim at all, compression is essential. For example, the simulated physical Earth would have to be very highly compressed in terms of the base-reality hardware required to simulate Earth-sized space. The goal of the compression algorithm is to preserve the accuracy of the predicted world as far as possible while saving as much computation as possible. Preserving the consciousness of the human-like creatures in it is not a goal.
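One way to phrase that goal is as constrained optimization: pick the cheapest model of each person whose behavior stays within an error budget. A toy sketch (every name and number below is invented for illustration):

```python
# Toy model selection under a behavioral-divergence budget.
# Each candidate stand-in for a person has a compute cost and an
# assumed-known divergence from the full brain simulation.
candidates = [
    # (name, compute cost per sim-year, behavioral divergence)
    ("full brain emulation", 1e18, 0.000),
    ("coarse neural model",  1e14, 0.001),
    ("GPT-like stand-in",    1e10, 0.010),
]
budget = 0.005  # maximum acceptable behavioral divergence

# Cheapest model within budget; note that consciousness appears
# nowhere in the objective, exactly as the argument above suggests.
name, cost, divergence = min(
    (c for c in candidates if c[2] <= budget),
    key=lambda c: c[1],
)
print(name)  # -> coarse neural model
```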
So the question is: how would the simulation of a brain change when subjected to a superintelligent compression algorithm honed by running a significant amount of simulated time already? If it can be compressed to behave much the same way with less computation, then it will be. This is where GPT-X comes in. Say we want to produce an essay any way we can. If GPT-X can produce one that looks like it was written by a person, then we know there is a compression algorithm that can make the essay with far less computation than simulating a person's brain for that time, if the goal is a text output. So we have the same output with, presumably, no human-like consciousness needed. We don't believe GPT-X had any "I think therefore I am" moments while writing such an essay.
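To put rough numbers on that intuition (all figures are order-of-magnitude assumptions: published estimates of brain compute span several orders of magnitude, and the GPT-X parameter count is hypothetical):

```python
# Rough compute comparison: simulating a brain for an hour of
# essay-writing vs. a large language model generating the essay.

BRAIN_FLOPS = 1e15            # per second; estimates range ~1e13-1e17
brain_cost = BRAIN_FLOPS * 3600   # one hour: ~3.6e18 FLOPs

params = 1e12                 # hypothetical GPT-X parameter count
tokens = 2000                 # a ~1,500-word essay
llm_cost = 2 * params * tokens    # ~2N FLOPs per generated token

print(f"brain / LLM compute ratio ~ {brain_cost / llm_cost:.0f}x")  # ~900x
```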
How far can this go? Can we get a 99.9% accurate simulation of a person with an algorithm so compressed and so different from our brain that it's not conscious like us at all? If we assemble a society in which most people are simulated in such a way, will the course of history still be within the random bounds of what would happen if they were simulated with full consciousness?
Detailed physical simulation is sometimes very impactful—COVID mutation
Note that the limit on the overall accuracy of an ancestor Sim could be set by the physical world rather than by the thoughts of humans. For example, viruses need to be simulated accurately enough to mutate meaningfully. A recent ancestor Sim would have to get COVID evolution as correct as the details of our behavior. If such a Sim were optimized for simulating high-impact events, then the emergence of the Delta and then Omicron variants was arguably more impactful than the actions of almost anyone alive at the time. If you wanted an ensemble of Sims running from 2020 to 2023 to come out as correct as possible, then simulating viral replication and mutation would matter more than getting right what someone said on a Zoom call! If you can deepfake everyone on the call, then no one was conscious of it in the way they would be in base reality.
We are making Sims now
Consider ChatGPT-based sims: some people look at these and think they make it more likely that we are in a Sim. I drew the opposite conclusion: a Sim was created whose agents clearly lack our consciousness but show complex, seemingly humanlike behavior. You could argue that the closer we can get such a world to look like ours, while its AIs remain non-conscious, the more evidence we have that we aren't living in one.
Defining some terms
Br: the behavior of a real person, that is, the effect they have on their world.
Cr: their amount of phenomenal consciousness.
Bs: the behavior of the Sim person, to be contrasted with Br; the quantity of interest is Br - Bs, the difference between the Sim's behavior and the real person's over a given amount of time.
Cs: the amount of phenomenal consciousness of the Sim person.
Rs: the ratio Cs/Cr when the Sim and the real person are in the same situation, producing Bs and Br respectively.
If Bs = Br then Rs = 1, if we don't allow p-zombies. However, as Br - Bs grows because of compression, Rs tends to zero, and presumably reaches zero at a point where Bs is still fairly similar to Br.
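Purely to illustrate the claimed shape (nothing here is derived; the linear ramp and the threshold are invented):

```python
def rs(divergence, threshold=0.01):
    """Toy shape for Rs as a function of the divergence Br - Bs.

    Rs = 1 at zero divergence (no p-zombies allowed) and falls to
    zero once compression has changed the internals enough, even
    though behavior is still close to the real person's.
    """
    return max(0.0, 1.0 - divergence / threshold)

print(rs(0.000))  # 1.0 -> Bs = Br, full consciousness
print(rs(0.005))  # 0.5 -> partly compressed
print(rs(0.020))  # 0.0 -> approximate p-zombie: similar behavior, no Cs
```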
Examples—deepfakes and essays
If we have a deepfake of a person that is completely convincing to another person, then it seems Br - Bs would be close enough to zero for the Sim to be accurate, but Rs = 0 if whatever created the deepfake is not conscious.
Similarly, if a chatbot produces an essay indistinguishable from what the person would have created, then Rs = 0 for that situation too. However, since writing an essay also affects the real person through their memories, it's not clear Rs stays at 0: the Sim may have to virtually update the person's memories as well for them to behave correctly in later situations.
Compression across similar sims
If you believe you are more likely to be in a Sim because, say, a trillion copies of you are running in Sims but only one exists in base reality, that assumes those trillion are independent of each other, so that they sit in the same reference class. However, the compression algorithm can also operate across Sims in ways we can't anticipate. If a very large number of your copies behave the same way in all those Sims, then your consciousness could be run once, with the results shared. Perhaps partial conscious states could be shared in some massive lookup table spanning all Sims. Using just a massive LUT alone instead of computation fails, of course, but what combination of the two is best?
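The obvious concrete mechanism is memoization across Sims: run the expensive person-model once per distinct situation and let every other Sim read the cached result. A minimal sketch, assuming some shared store visible to all Sims (the hashing and the model are placeholders):

```python
# Memoizing one person's behavior across many Sims.
# shared_lut stands in for storage visible to every running Sim.
shared_lut = {}

def simulate_person(person_id, situation, full_model):
    """Return behavior, computing it at most once per (person, situation).

    If millions of Sims put the same person in the same situation,
    the expensive model, and whatever consciousness it involves,
    runs once; every other Sim just reads the cached result.
    """
    key = (person_id, hash(situation))
    if key not in shared_lut:
        shared_lut[key] = full_model(situation)  # the costly call
    return shared_lut[key]
```

A pure LUT fails because the space of situations is astronomically large; the open question in the paragraph above is how far this kind of caching can be pushed before behavior across Sims visibly diverges.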
Bostrom’s claim: At least one is true?
Here is Bostrom’s well known claim:
At least one of the following propositions is true:
Proposition 1 (P1): The human species is very likely to go extinct before reaching a “posthuman” stage;
Proposition 2 (P2): Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
Proposition 3 (P3): We are almost certainly living in a computer simulation.
They can all be false if the simulated creatures in P2's simulations are non-conscious, or have consciousness sufficiently different from ours to put them in a different reference class from us. A significant number of simulations could still be run.
Simulation hypothesis and alignment concerns
Some people, such as Elon Musk, clearly do seem both to believe in the simulation hypothesis and to have AI concerns. I don't personally understand how these fit together.
If you believe you are simulated, why are you so concerned about alignment? Surely there is no Cosmic Endowment if we live in an ancestor simulation and get alignment right, since our computational requirements would then squeeze out countless other simulations. We would need as much space in base reality as in Sim reality, so no 10^20 stars for us; our Sim would get shut down. And if we mess up, won't the freed-up computation likely go to a very similar Sim, probably containing humankind or something very like it? Especially if cross-Sim compression is going on, it may not matter at all.
Additionally, if we assume the number of Sims is not constant, then when one goes wrong (or stops being interesting) and is shut down, does that mean others on the boundary between, say, doom and posthuman success will be spun up to see where exactly the boundary lies? For example, if nuclear war is triggered and the Sim stops, are others created with a very slightly different potential trigger?
If we get alignment right, our Sim stops; if we get it wrong, it stops; AND we may well get recreated in a very similar Sim anyway.
Tongue-in-cheek final thought?
If you are doing something repetitive that doesn't use your consciousness, but you do consciously think "I'm bored" and that thought has little effect on your output, then it won't be simulated in a Sim, hence you are in base reality in that moment? This only applies to unimportant people, however, whose actions wouldn't change the Sim in the years to come.