More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I was primarily interested in whether there is a continuum of experience ranging from full physical simulation down to reading values from disk or from a lookup/truth table, or whether there is a hard line between the shortest program that computes John Smith’s brain states over time and the shortest program that reads the pre-existing history of John Smith’s brain states into memory, with all other programs falling on one side of that line or the other. Agreed regarding torture.
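To make the two endpoints concrete, here is a minimal toy sketch (hypothetical Python; `step`, `simulate`, and `replay` are made-up stand-ins, not anyone’s actual proposal): one program recomputes every state from the one before it, the other just reads back a stored history, and both emit the same trace.

```python
# Toy sketch (hypothetical): two programs that emit the same sequence of
# "brain states" -- one by stepping a dynamics function, one by replaying a
# stored history. The question is whether these endpoints anchor a continuum
# of experience or sit on opposite sides of a hard line.

def step(state):
    """Stand-in for physical dynamics: compute the next state from the current one."""
    return (state * 31 + 7) % 1000  # arbitrary toy update rule

def simulate(initial_state, n_steps):
    """Recompute every state from the one before it."""
    states, s = [], initial_state
    for _ in range(n_steps):
        s = step(s)
        states.append(s)
    return states

def replay(recorded_states):
    """Read back a pre-existing history without recomputing anything."""
    return list(recorded_states)

history = simulate(42, 10)          # the 'full simulation' endpoint
assert replay(history) == history   # the 'read from disk' endpoint: same trace
```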
I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
Suppose that recomputations do not cause additional experience. In that case, if any computation causes experience regardless of how its states are represented or stored, the waterfall argument is basically true: all possible representations can be mapped onto a single computation, and therefore all possible experiences happen. If recomputations do cause additional experience, then how much additional experience occurs for varying complexities of representation and computation?
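The mapping move I have in mind is the usual relabeling trick, sketched below in toy form (hypothetical Python; the `target_trace` values and the `interpretation` map are invented for illustration): any process with enough distinct states can be read as “implementing” a given computation once we are free to pick the interpretation map after the fact.

```python
# Toy sketch (hypothetical): the relabeling move behind the waterfall argument.
# An arbitrary process with enough distinct states (here, a bare counter
# ticking 0, 1, 2, ...) can be read as 'implementing' a given computation
# once the interpretation map is chosen after the fact.

target_trace = [309, 586, 173, 370, 477]   # stand-in for a computed brain-state history

waterfall_trace = list(range(len(target_trace)))  # the 'waterfall': just ticks

# Interpretation map chosen so that tick i 'means' the i-th brain state.
interpretation = dict(zip(waterfall_trace, target_trace))

decoded = [interpretation[t] for t in waterfall_trace]
assert decoded == target_trace  # under this map the ticks reproduce the computation
```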
If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)… sure. And sure, perhaps there’s a way to do this if A is a “generic person” (whatever that means) and B is John Smith.
By a generic person I mean a person who, for whatever reason, is lacking much of what we would consider identity. No name, no definite loved ones, no clear memories of moments in their life. A human person with recognizable emotional and intellectual and physical responses but without much else. Dementia patients might be a close analogue.
If such a generic person experiences joy or sadness, then I think it is real experience, and I care about it morally. However, if that model of a generic person were used to look up the reaction that I would have to similar experiences, I am not convinced that “I” would experience the same joy or sadness, at least not to the same extent as the generic person did. This has implications if an AGI is going to upload us and (presumably) try to simulate us as efficiently as possible. If it aggressively memoizes its computations of our brain states, such that eventually nearly all human activity is reduced to the equivalent of truth-table lookups, then I am not sure that would be as morally desirable as computing an accurate physical simulation of everyone, even given the increased number of awesome-person-years possible with the increased efficiency.
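By “aggressively memoizes” I mean something like the toy sketch below (hypothetical Python; `brain_response` is a made-up stand-in, not a claim about how an AGI would actually do it): after the first call for a given input, nothing is recomputed and every identical input is answered from a cache, i.e. a table lookup.

```python
# Toy sketch (hypothetical): aggressive memoization. After the first call for
# a given input, the 'brain state update' below is never recomputed --
# subsequent identical inputs are answered from a cache, i.e. a lookup.

from functools import lru_cache

@lru_cache(maxsize=None)
def brain_response(stimulus):
    """Stand-in for an expensive simulation of a person's reaction to a stimulus."""
    return hash(("reaction", stimulus)) % 1000  # arbitrary toy computation

first = brain_response("sunset")    # actually computed
second = brain_response("sunset")   # served from the cache: a table lookup
assert first == second
print(brain_response.cache_info())  # hits=1, misses=1
```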
Others have argued that it doesn’t matter how “thick” neurons are or how much redundant computation is done to simulate humans, but I haven’t yet run across a moral examination of dramatically thinning out neurons or brain regions, or of simplifying computations by abstracting away the details of physical behavior almost entirely while still simulating accurately. The standard argument for neuron replacement goes something like “if you replace all the neurons in your brain with fully functional simulacra, you will not notice the difference,” but what I am conceiving of is “if you replace all the neurons in your brain with lookup tables, do you notice?”
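A minimal sketch of what I mean by replacing a neuron with a lookup table (hypothetical Python; the logistic `neuron` function and the grid are invented for illustration): the table is built by running the original function once over a discretized input range, after which the function body is never executed again, yet the input/output behavior is identical on that range.

```python
# Toy sketch (hypothetical): replacing a 'neuron' with a lookup table. The
# table is built by running the original function once over a discretized
# input range; afterwards the function body is never executed again, yet the
# input/output behavior is identical on that range.

import math

def neuron(x):
    """Stand-in for a simulated neuron's activation function."""
    return 1.0 / (1.0 + math.exp(-x))  # logistic activation

# Precompute the table over a coarse grid of inputs.
grid = [i / 10 for i in range(-50, 51)]        # -5.0 to 5.0 in steps of 0.1
table = {x: neuron(x) for x in grid}

def neuron_lookup(x):
    """Answer from the table instead of computing; only valid on the grid."""
    return table[x]

assert all(neuron_lookup(x) == neuron(x) for x in grid)
```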
So, I’m sorry, but I’ve read this comment several times and I simply don’t follow your train of thought here. There are pieces here I agree with, and pieces I disagree with, but I don’t understand how they connect to each other or to what they purport to respond to, and I don’t know how to begin responding to it.
So it’s probably best to leave the discussion here.