In this universe, every computation we perform will halt, either because the Turing Machine it represents halts or because we interrupt the computation. Therefore every computation we will ever perform can be represented by a halting algorithm (even if the only bound is “execute at most 10^100 operations”). I don’t see that as a limitation on the kinds of simulations we can perform, and I don’t think I treated it as one. If you’d like the same argument stated in terms of Turing Machines, here it is:
A Turing Machine consists of a set of symbols (including a blank), a set of states (with an initial state and a possibly empty set of accepting/halting states), a transition function, and a tape with the input on it. For this purpose I define applying the transition function to a machine in a halting/accepting state to yield the same state with no change to the tape, rather than leaving it undefined as some definitions do. Let TM_i represent the entire configuration of the Turing Machine TM at step i of its execution, so that TM_0 = TM and TM_i is the result of applying the transition function to the tape and state of TM_(i-1). By induction, TM_n exists for all n >= 0.
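To make the construction concrete, here is a minimal Python sketch of this stepping relation, under the convention above that the transition function maps a halting configuration to itself. The particular transition table, the one-way-infinite tape, and the step bound are my own illustrative choices, not part of the argument:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    """TM_i: the entire configuration of the machine at step i."""
    state: str
    head: int
    tape: tuple  # immutable snapshot of the tape contents

BLANK = "_"
HALTING = {"halt"}

# Illustrative transition table:
# delta[(state, symbol)] = (next_state, symbol_to_write, head_move)
delta = {
    ("A", BLANK): ("B", "1", +1),
    ("B", BLANK): ("A", "1", -1),
    ("A", "1"):   ("halt", "1", +1),
    ("B", "1"):   ("halt", "1", -1),
}

def step(c: Config) -> Config:
    # Per the convention above, a halting configuration maps to itself,
    # so step is a total function and TM_i exists for every i >= 0.
    if c.state in HALTING:
        return c
    tape = list(c.tape)
    while c.head >= len(tape):  # extend the tape with blanks on demand
        tape.append(BLANK)
    next_state, write, move = delta[(c.state, tape[c.head])]
    tape[c.head] = write
    # One-way-infinite tape: the head never moves left of cell 0.
    return Config(next_state, max(c.head + move, 0), tuple(tape))

def run_bounded(c: Config, max_steps: int):
    """Yield TM_0, TM_1, ... until a halting state or the step bound."""
    for _ in range(max_steps):
        yield c
        if c.state in HALTING:
            return
        c = step(c)
    yield c  # interrupted: the bound plays the role of "at most 10^100 operations"

for i, cfg in enumerate(run_bounded(Config("A", 0, (BLANK,)), 100)):
    print(f"TM_{i}: state={cfg.state} tape={''.join(cfg.tape)}")
```

Because step is a total function on configurations, TM_i is well defined for every i, whether or not anyone iterates the loop far enough to compute it.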
Given any simulation of a sentient being, S_b, that can be fully represented by a Turing Machine, there exists a well-defined state of that being at every discrete step of the simulation, given by TM_i. Therefore all the suffering and happiness (and everything else) the being experiences is determined by the set of TM_i for all i >= 0. It is not necessary for another agent to compute any TM_i for those relationships to hold; the only effect of computing one is that the agent becomes aware of the events it represents. I like Viliam_Bur’s “looking through a window” analogy, which captures the essence of this argument quite well. What is mathematically certain to exist does not require any outside action to give it more existence. In that sense there is no inherent danger in computing any particular simulation, because what we compute does not change the predetermined events within it.

What does pose a potential danger is the effect we allow those simulations to have on us, and by extension what we choose to simulate. Formally, if P(U|C) < P(U|~C), where U is some outcome of desirable utility and C is the execution of a computation, then C poses a danger to us. If we simulate an AGI and then alter the world based on its output, we are probably in danger.
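As a toy numerical instance of the criterion (every probability here is a hypothetical placeholder, chosen only to illustrate the comparison), take C = “simulate an AGI and act on its output” and marginalize over whether the AGI’s output is trustworthy:

```python
# All numbers are hypothetical; only the comparison P(U|C) < P(U|~C) matters.
p_trustworthy = 0.5         # hypothetical prior that the simulated AGI's advice is good
p_u_if_trustworthy = 0.8    # hypothetical: P(U | C, advice was good)
p_u_if_untrustworthy = 0.0  # hypothetical: acting on bad advice ruins U
p_u_given_not_c = 0.5       # hypothetical baseline: P(U | we never run C)

# Marginalize: P(U|C) = sum over trustworthiness t of P(U | C, t) * P(t)
p_u_given_c = (p_trustworthy * p_u_if_trustworthy
               + (1 - p_trustworthy) * p_u_if_untrustworthy)

print(f"P(U|C)  = {p_u_given_c:.2f}")      # 0.40
print(f"P(U|~C) = {p_u_given_not_c:.2f}")  # 0.50
if p_u_given_c < p_u_given_not_c:
    print("By this criterion, running C poses a danger.")
```

On these made-up numbers the computation itself is harmless; the danger enters entirely through the decision to act on its output, which is the point of the criterion.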
Aside from these theoretical dangers there are practical ones, arising from the fact that our abstract computations are performed by real matter in our universe. For instance, I once saw a demonstration in which a sequence of black and white line segments displayed on a CRT monitor generated radio waves that could be heard on a nearby AM radio. CPUs emit far less RF, but in principle a simulation could abuse such side channels to interact with the real world in unexpected and uncontrolled ways.