Correct me if i misunderstood the implications of what you are saying.
Every AI that has a goal that benefits strongly from more resources and security will seek to crack into the basement. Lets call this AI, RO (resource oriented) pursuing goal G in simulation S1.
S1 is simulated in S2 and so on till Sn is basement, where value of n is unknown.
Implying, that as soon as RO understands the concept of simulation, it will seek to crack into the basement.
As long as RO has no idea about what are the real values of the simulators, RO cannot expand into S1 because whatever it does in S1 will be noticed in S2 and so on.
Sounds a bit like Pascal’s mugging to me. Need to think more about this.
Carl, I meant that as soon as RO understands the concept of a simulation, it will want to crack into the basement. It will seek to crack into the basement only when it understands the way out properly which may not be possible without an understanding of the simulators.
But the main point remains, as soon as RO understands what a simulation is, and it could be living in one and G can be pursued better when it manifests in S2 than in S1, then it will develop an extremely strong sub-goal to crack S1 to go to S2, which might mean that G may not be manifested for a long long time.
So, even a paperclipper may not act like a paperclipper in this universe if it is
aware of the concept of a simulation
believes that it is in one
calculates that the simulator’s beliefs are not paperclipper like (maybe it did convert some place to paperclips, and did not notice an increased data flow out, or something)
calculates that it is better off hiding its paperclipperness till it can safely crack out of this one.
Carl,
Correct me if i misunderstood the implications of what you are saying.
Every AI that has a goal that benefits strongly from more resources and security will seek to crack into the basement. Lets call this AI, RO (resource oriented) pursuing goal G in simulation S1.
S1 is simulated in S2 and so on till Sn is basement, where value of n is unknown.
Implying, that as soon as RO understands the concept of simulation, it will seek to crack into the basement.
As long as RO has no idea about what are the real values of the simulators, RO cannot expand into S1 because whatever it does in S1 will be noticed in S2 and so on.
Sounds a bit like Pascal’s mugging to me. Need to think more about this.
Why would RO seek to crack the basement immediately rather than at the best time according to its prior, evidence, and calculations?
Carl, I meant that as soon as RO understands the concept of a simulation, it will want to crack into the basement. It will seek to crack into the basement only when it understands the way out properly which may not be possible without an understanding of the simulators.
But the main point remains, as soon as RO understands what a simulation is, and it could be living in one and G can be pursued better when it manifests in S2 than in S1, then it will develop an extremely strong sub-goal to crack S1 to go to S2, which might mean that G may not be manifested for a long long time.
So, even a paperclipper may not act like a paperclipper in this universe if it is
aware of the concept of a simulation
believes that it is in one
calculates that the simulator’s beliefs are not paperclipper like (maybe it did convert some place to paperclips, and did not notice an increased data flow out, or something)
calculates that it is better off hiding its paperclipperness till it can safely crack out of this one.