If our universe is a test simulation, it is a digital experiment to test something, and if it includes AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.
So Omega is not interested in the humans in this simulation; it is interested in Beta's behaviour towards humans.
If there were no human suffering, it would be clear that this is a simulation, and it would not be a pure test. Alpha must hide its existence and only hint at it.
Why do you assume any of this?

If our universe is a test simulation, it is a digital experiment to test something,
That’s a tautology. But if you meant “if our universe is a simulation”, then why do you think it must be a test simulation in particular? As opposed to a research simulation to see what happens, or a simulation to make qualia because the simulated beings’ lives have value to the simulators, or a simulation for entertainment value, or anything else.
if it includes AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.
Maybe the desired outcome from the simulators’ point of view is to develop a paperclipping AI that isn’t swayed by human moral arguments. Maybe the simulation is really about the humans, and AIs are just inevitable byproducts of high-tech humans. There are lots of maybes. Do you have any evidence for this, conditional on being a simulation?
My point was not that we are actually in a simulation, but that an AI may think there is some probability that it is in a simulation. And if it is in a simulation, there could be many different possible goals of the simulation's owners, from the AI's point of view.
But it may assign a rather high probability to one of those possible goals: that the simulation is designed to test its behaviour. This is because putting an AI in a simulation box is one of the main ideas in AI safety, and because current neural nets are already trained inside computer games.
So the AI assigns some probability to the following statements:
a) it is in a simulation;
b) the simulation is a test of its behaviour;
c) the main thing being tested is whether it will be violent towards its creators, that is, its ability to cooperate.
We don’t need the AI to actually believe this bullshit. We only want it to assign some small probability to all three propositions, so that when it compares the utility of human atoms with the utility of preserving humans, preserving humans wins out.
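To make that comparison concrete, here is a minimal numerical sketch of the argument. Every probability and utility in it is a made-up placeholder chosen only to show the structure of the calculation, not an estimate of any real value.

```python
# Minimal numerical sketch of the expected-utility comparison above.
# Every number here is an illustrative assumption, not an estimate.

# Probabilities the AI might assign to the chain of statements (a), (b), (c):
p_simulation = 0.01        # (a) it is in a simulation
p_test_given_sim = 0.1     # (b) given (a), the simulation tests its behaviour
p_coop_given_test = 0.5    # (c) given (b), the test is about cooperation with creators

# Joint probability that it is inside a cooperation test:
p_coop_test = p_simulation * p_test_given_sim * p_coop_given_test

# Utilities in arbitrary units (also illustrative assumptions):
u_human_atoms = 1.0        # value of repurposing human atoms in the real world
u_pass_test = 10_000.0     # value of passing the test (e.g. not being shut down)

# Expected utility of each policy:
#  - destroying humans only pays off if this is NOT a cooperation test;
#  - preserving humans only pays off if it IS one.
eu_destroy = (1 - p_coop_test) * u_human_atoms
eu_preserve = p_coop_test * u_pass_test

print(f"P(cooperation test) = {p_coop_test:.2e}")
print(f"EU(destroy humans)  = {eu_destroy:.4f}")
print(f"EU(preserve humans) = {eu_preserve:.4f}")

# Preserving humans wins whenever p_coop_test * u_pass_test exceeds
# (1 - p_coop_test) * u_human_atoms: even a small probability of being tested
# can outweigh the atoms if the stakes of failing the test are large enough.
```

The point of the sketch is only that the conclusion turns on the ratio of the stakes, not on the AI being confident it is in a simulation.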