Wouldn’t it be rational to assume that whatever or whoever designed the simulation would do so for the same reasons that we know all intelligent life answers to: survival/reproduction and maximizing its pleasure / minimizing its pain?
A priori assumptions aren’t the best ones, but that seems to me a valid starting point, and it leads to two conclusions:
a) the designer is drastically handicapped in its resources, and our very limited simulation is the only one running (hence the question of why it is exactly as it is: why this design at all, if we’re talking in several “episodes”?)
b) the designer can run all the simulations it wants simultaneously, and ours isn’t special in any particular way beyond being a functional tool (one of many) providing the above max p/p to the designer.
If we assume a), then the limitations/errors of the simulation would be more severe in every way, making it easier to detect the kinds of evidence the author lists.
Also, our one simulation would have to be an optimal compromise for achieving the very limited, but still maximal, p/p for the designer. We could talk about variety of sorts, but only variety with a clear and optimal purpose would count. What is so special, then, about our known configuration of the physical constants? It would seem that a strong anthropic principle applies: only a universe with intelligent (even simulated) life, and physical constants similar to our own, would allow an evolutionary path for this life to evolve, or to think it has evolved. By this same anthropic principle, I would guess that the world outside our simulation is subject to physics and evolution similar to what is known, in simplified form, inside our simulation.
If b) is the case and we’re only one simulation of many, that would imply there are no severe restrictions on the designer’s resources and computational power. Our simulation would therefore be a lot more detailed, with less room (if any) to find errors or any other kind of proof that we’re living in a simulation. Parallel runs of the same simulation with different but relevant permutations aside, what could we tell about other simulations running in parallel with ours?
That they are very different from ours. Since resources aren’t a problem, variety for max p/p is the key. The designer could arbitrarily create simulations that are not sustainable long-term but allow for scripts and vistas impossible to experience in a simulation like our own. It could use its resources to explore all relevant (or potentially relevant) possible-world simulations, and allocate resources to constantly finding new ones. All computationally accessible and relevant worlds would be running in parallel (because there is no need for an “experience cap”).
The only limit would be that of an act utilitarian: run those scenarios that, in the long run, bring about the most pleasure.
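That act-utilitarian “limit” can be sketched in a few lines. Everything below is made up for illustration: the candidate worlds and their expected long-run pleasure values are hypothetical numbers, and the rule is simply “run everything with positive expected value, best first,” since resources are assumed not to be scarce:

```python
# Hypothetical candidate simulations and their expected long-run
# pleasure for the designer (arbitrary illustrative numbers).
candidates = {
    "our-universe":      5.0,   # sustainable, moderate pleasure
    "short-lived-vista": 9.0,   # unsustainable but spectacular
    "pleasure-loop":    -1.0,   # degenerate; net-negative long-run
    "exotic-physics":    3.0,
}

def act_utilitarian_selection(expected_pleasure):
    """Run every simulation with positive expected long-run pleasure,
    highest value first; nothing else constrains the designer."""
    return sorted(
        (name for name, v in expected_pleasure.items() if v > 0),
        key=lambda name: -expected_pleasure[name],
    )

print(act_utilitarian_selection(candidates))
# -> ['short-lived-vista', 'our-universe', 'exotic-physics']
```

Note that under this rule net-negative worlds are simply never run, which is the only sense in which anything is excluded.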
The level of detail of the simulation is the key. If it’s very limited, so is the world outside it, and our simulation is the best compromise (best possible world) to run, a fact that could be analysed quite intensely.
If it’s very detailed (but we still managed to prove we’re in a simulation), then we’re only a very small drop of paint in a very big picture. But I would guess that in this case our detailed simulation would allow for additional sub-simulations that we could create ourselves. The same could be true for a), but with much greater limitations (requiring limited memory/experiences and/or pleasure loops: very limited ways of maximizing our own pleasure).
Why assume whatever beings simulated us evolved?
Now I’m sure you’re going to say that a universe where intelligent beings just pop into existence fully formed is surely less simple than one where they evolve. However, when you give it some more thought, that’s not true, and it’s doubtful whether Occam’s razor even applies to initial conditions.
Suppose for a moment the universe is perfectly deterministic (Newtonian, or a no-collapse interpretation). In that case, the Kolmogorov complexity of a world starting with a big bang that gives rise to intelligent creatures can’t be much less than, and is probably much more than, that of one with intelligent creatures simply popping into existence fully formed. After all, I can always just augment the description of the big-bang initial conditions with “and then run the laws of physics for x years” when measuring the complexity.
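Spelled out, the augmentation step is just the standard subadditivity bound for Kolmogorov complexity (the notation is my own shorthand: W_evo for the big-bang world’s description, W_pop for the fully-formed one):

```latex
K(W_{\mathrm{pop}}) \;\le\; K(W_{\mathrm{evo}}) + O(\log x)
\quad\Longrightarrow\quad
K(W_{\mathrm{evo}}) \;\ge\; K(W_{\mathrm{pop}}) - O(\log x)
```

since W_pop is computable from W_evo’s initial conditions plus the instruction “run the laws of physics for x years”, and encoding x costs only O(log x) extra bits. So the evolving world can’t be meaningfully simpler than the popped-into-existence one.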
Nice argument! But note that in such a world, all evidence of the past (like fossils) will look like the creatures indeed evolved. So for purposes of most future decisions, the creatures can safely assume that they evolved. To break that, you need to spend more K-complexity.
Wouldn’t it be rational to assume that whatever or whoever designed the simulation would do so for the same reasons that we know all intelligent life answers to: survival/reproduction and maximizing its pleasure / minimizing its pain?
I see two problems with this:
a) alien minds are alien, and
b) that really doesn’t seem to exhaust the motives of intelligent life. It would seem to recommend wireheading to us.
If alien means “not comprehensible” (not even through our best imagination), then it’s folly to talk about such a thing. If we cannot even imagine something to be realistically possible, then for all practical purposes (until objectively shown otherwise) it isn’t.
Or, using modal logic: possibly possible = not realistically possible; physically/logically possible = realistically possible. The latter always carries more weight and, by Occam, higher probability (a higher chance of being correct / closer to the truth).
If we imagine the designer is not acting irrationally or randomly, then all potential motives reduce to survival/reproduction and max p/p. The notion of max p/p is directly related to the stage of intelligence and self-awareness of the organism, but survival/reproduction is hardwired into all the evolutionary kinds of life we know.
By “alien” I really did just mean “different”. There are comprehensible possible minds that are nothing like ours.
If we imagine the designer is not acting irrationally or randomly, then all potential motives reduce to survival/reproduction and max p/p.
I don’t think this is true. Imagine Omega comes to you and says, “Look, I can cure death—nobody will ever die ever again, and the only price you have to pay for this is a) you can never have children, and b) your memory will be wiped, and you will be continuously misled, so that you still think people are dying. To you, the world won’t look any different. Will you take this deal?” I don’t think it would be acting randomly or irrationally to take that deal—big, big gain for relatively little cost, even though your (personal) survival and reproduction and (personal) max. p/p. aren’t affected by it. Humans have complicated values—there are lots of things that motivate us. There’s no reason to assume that the simulation-makers would be simpler.