I’m going to shamelessly quote myself from a previous discussion on waterfall ethics:

Mathematically, every algorithm halts and has a well-defined deterministic sequence of operations that results in a final outcome. Therefore every algorithm that simulates a conscious being is already mathematically well-defined, and everything that happens to the simulated being is equally well defined, including the internal relationships in the simulated being that represent thoughts, emotions, feelings, etc. In the mathematical sense every possible algorithmic simulation already exists regardless of whether we perform the full computation on some physical hardware. Therefore the suffering and happiness of every possible simulated being also exists, mathematically.
Does physically executing a particular algorithm fundamentally affect the real existence of a being simulated by that algorithm? To the ones running the simulation it becomes obvious what happens to the being whereas it may not be obvious otherwise, but to the simulated being itself it is indistinguishable from simply existing as a mathematical result of the definition of an algorithm and a particular input to that algorithm.
That said, it’s obvious that which algorithms we allow to interact with our universe is indeed important. It could be that simulating a suffering being would have negative consequences for us, similar to how observing real suffering can have negative consequences. Similarly, simulating an AGI that can interact with our universe (choosing input based upon our universe or especially upon previous executions of the algorithm) pulls that mathematically well-defined AGI out of algorithm-space and into real-space, allowing its computation to affect the world. It is conceivable that certain algorithms and inputs could even escape the most strictly controlled physical computer due to flaws in design or manufacturing.
I don’t think our intuitions about what “really happens” (versus what is “mathematically well defined”) are useful. I think we have to zoom out at least one level and realize that our moral and ethical intuitions only mean anything within our particular instantiation of our causal framework. We can’t be morally responsible for the notional space of computable torture simulations because they exist whether or not we “carry them out.” But perhaps we are morally responsible for particular instantiations of those algorithms.
I also want to draw attention to the statement in the original post,
If these simulations are sufficiently precise, then they will be people in and of themselves. The simulations could cause those people to suffer, and will likely kill them by ending the simulation when the prediction or answer is given.
This usage of “killing” is conceptually very distant from the intuitive notion, for the reasons you (Pentashagon) indicate. I don’t feel that the question of how to handle moral culpability for events occurring in causally disconnected algorithms is sufficiently settled for us to meaningfully have this conversation.
This is not true.

It depends on what you mean by the word algorithm; there are contexts where many authors find it useful to reserve the word for processes that do, in fact, halt. (Example citations: the definition of algorithm in Schneider et al.’s Invitation to Computer Science includes the phrase “halts in a finite amount of time”; Hopcroft et al.’s Introduction to Automata Theory, Languages, and Computation says “Turing machines that always halt [...] are a good model of an ‘algorithm.’”)
Then Pentashagon is arguing by definition. The article discusses (and its arguments are relevant to) algorithms in general, not necessarily those that halt; I don’t believe that it has ever been proven that any algorithm representing a person necessarily halts.
In this universe, all the computations we perform will halt, either because the Turing machine they represent halts or because we interrupt the computation. Therefore every computation we will perform can be represented by a halting algorithm (even if it’s only bounded by “execute only up to 10^100 operations”). I don’t see that as a limitation on the kinds of simulations we perform, and I don’t think I treated it as such. If you’d like the same argument restated for Turing machines, here it is:
A Turing machine is a set of symbols (with a blank), a set of states (with an initial state and a possibly empty set of accepting/halting states), a transition function, and a tape with input on it. For this purpose I will define the transition function, when applied to a Turing machine in a halting/accepting state, to yield the same state with no changes to the tape, rather than being undefined as in some definitions. Let TM_i represent the entire configuration of the Turing machine TM at step i of its execution, so that TM_0 = TM and TM_i is the result of applying the transition function to the tape and state of TM_(i-1). By induction, TM_n exists for all n >= 0.
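A minimal sketch of this construction in Python (my own illustration, not part of the original comment; the toy machine, its transition table, and all names are made up): a configuration in a halting state is mapped to itself, so TM_i is defined for every i >= 0 purely by TM_0 and the transition function, whether or not anyone actually runs the loop.

```python
from dataclasses import dataclass

# Configuration of the machine at one step: control state, tape contents, head position.
@dataclass(frozen=True)
class Config:
    state: str
    tape: tuple
    head: int

BLANK = "_"
HALT_STATES = {"halt"}

# Toy transition table (illustrative only): flip 0s and 1s until a blank is read.
# DELTA[(state, symbol)] = (new_state, symbol_to_write, head_move)
DELTA = {
    ("run", "0"): ("run", "1", +1),
    ("run", "1"): ("run", "0", +1),
    ("run", BLANK): ("halt", BLANK, 0),
}

def step(c: Config) -> Config:
    """One application of the transition function.  A halting configuration
    is mapped to itself, as in the modified definition above."""
    if c.state in HALT_STATES:
        return c
    symbol = c.tape[c.head] if c.head < len(c.tape) else BLANK
    new_state, write, move = DELTA[(c.state, symbol)]
    tape = list(c.tape) + [BLANK] * max(0, c.head - len(c.tape) + 1)
    tape[c.head] = write
    return Config(new_state, tuple(tape), c.head + move)

def tm(i: int, tm0: Config) -> Config:
    """TM_i: the configuration reached after i applications of the transition function."""
    c = tm0
    for _ in range(i):
        c = step(c)
    return c

if __name__ == "__main__":
    tm0 = Config("run", tuple("0110"), 0)
    for i in range(7):
        # Every TM_i is fixed by TM_0 and DELTA alone; printing it only makes us aware of it.
        print(i, tm(i, tm0))
```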
Given any simulation of a sentient being, S_b, that can be fully represented by a Turing machine, there exists a well-defined state of that being at any discrete step of the simulation, given by TM_i. Therefore all suffering and happiness (and everything else) the being experiences is defined by the set of TM_i for all i >= 0. It is not necessary for another agent to compute any TM_i for those relationships to exist; the only effect of computing it is that the agent is made aware of the events represented by TM_i. I like Viliam_Bur’s “looking through a window” analogy, which captures the essence of this argument quite well. What is mathematically certain to exist does not require any outside action to give it more existence. In that sense there is no inherent danger in computing any particular simulation, because what we compute does not change the predetermined events in that simulation.

What does pose a potential danger is the effect we allow those simulations to have on us, and by extension what we choose to simulate can pose a danger. Formally, if P(U|C) is lower than P(U|~C), where U is some outcome of desirable utility and C is the execution of a computation, then C poses a danger to us. If we simulate an AGI and then alter the world based on its output, we are probably in danger.
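To make the stated criterion concrete, here is a toy check (again my own sketch, with made-up probabilities; nothing here comes from the comment itself):

```python
def poses_danger(p_u_given_c: float, p_u_given_not_c: float) -> bool:
    """C poses a danger to us iff P(U|C) < P(U|~C),
    i.e. executing the computation lowers the probability of the desirable outcome U."""
    return p_u_given_c < p_u_given_not_c

# Hypothetical numbers: a sealed simulation whose output we never act on leaves
# P(U) unchanged, while acting on an untrusted AGI's output lowers it.
print(poses_danger(0.90, 0.90))  # sealed, ignored simulation -> False (no danger by this test)
print(poses_danger(0.60, 0.90))  # we alter the world based on its output -> True
```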
Aside from theoretical dangers, there are practical dangers arising from the fact that our theoretical computations are actually performed by real matter in our universe. For instance, I once saw a nice demonstration in which a sequence of black and white line segments displayed on a CRT monitor generated radio waves that could be picked up audibly on a nearby AM radio. CPUs emit less RF, but in principle a simulation could abuse this kind of effect to interact with the real world in unexpected and uncontrolled ways.
I suspect that Pentashagon is not using “algorithm” as a synonym for “Turing machine”—I’ve often seen the word used to mean a deterministic computation that always halts.