Suppose we discovered a gigantic truth table in space such that we could point a telescope at some part of it and discover the result of S(x,t), where x is the initial state of a brain and t is the amount of time S simulates the brain being tortured. Is John Smith tortured if we point the telescope at the location where S(John Smith, 10 minutes) is found? Now suppose that instead there are several truth tables, one for each major region of John Smith’s brain, each with enough inputs and outputs that we can look up the results of torturing parts of John Smith’s brain and match them together at appropriately small intervals to give us the same output as S(x,t). Is John Smith tortured by using this method? What about truth tables for neuron groups, or a truth table for a sufficiently generic individual neuron? How about a truth table that gives us the four results of AND NOT for two boolean variables, where we have to interpret S as a logical circuit and look up the results over many, many iterations?
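(For concreteness, here is a minimal Python sketch of what I mean by the different levels of lookup. Every table and function in it is a hypothetical stand-in, and I show a NAND table for the single-gate case purely because one functionally complete two-input gate is enough to build any circuit.)

```python
# Purely illustrative: hypothetical lookup tables for the levels described
# above.  None of these tables is claimed to exist or to be computable.

# Whole-brain level: one gigantic table indexed by (initial state, duration).
whole_brain_table = {}  # (x, t) -> resulting brain state

def S_whole(x, t):
    """Look up S(x, t) directly."""
    return whole_brain_table[(x, t)]

# Region level: one table per major brain region, stitched together at
# small intervals dt by feeding every region's previous outputs back in.
region_tables = {}  # region -> {(region state, previous outputs) -> (new state, output)}

def S_regions(x, t, dt):
    """x maps region name -> initial region state."""
    state = dict(x)
    outputs = {r: None for r in state}
    for _ in range(int(t / dt)):
        prev = tuple(sorted(outputs.items()))
        step = {r: region_tables[r][(state[r], prev)] for r in state}
        state = {r: new_state for r, (new_state, _) in step.items()}
        outputs = {r: out for r, (_, out) in step.items()}
    return state

# Single-gate level: a four-row table for one two-input gate (NAND shown),
# applied over and over once S has been compiled into a logical circuit.
gate_table = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def S_circuit(gates, wires):
    """gates: list of (output wire, input wire a, input wire b) in evaluation order."""
    for out, a, b in gates:
        wires[out] = gate_table[(wires[a], wires[b])]
    return wires
```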
Is John Smith tortured just by the existence of a particular level of truth table (perhaps the one for S(x,t)) if no one computed it? If so, does it matter if someone re-computes that truth table since John Smith is tortured by it anyway? If John Smith would only be tortured by computing the tables, then suppose P=NP and we can somehow calculate the truth tables directly without all the intermediate calculations. Would that torture John Smith, or does the act of each individual computation contribute to the experience?
For the largest truth table that causes the lookup procedure to torture John Smith, does it matter how many times the lookup is done? For instance, if looking at S(John Smith,t) tortures John Smith, does it matter how many times we look, or does John Smith experience each S(John Smith,t) only once? The smallest truth table that allows torture-free lookups corresponds to the post’s C–E options of doing unlimited lookups without increasing torture.
Is there a level of truth tables that doesn’t torture John Smith and won’t cause his torture if someone looks up the results of S(x,t) using those tables? This seems the most unlikely, but perhaps there is a way to compute truth tables in which a generic person is tortured that still return accurate S(x,t) without torturing John Smith.
I don’t have good answers to all those questions, but I think that “doing computation = torture” is too simple an answer. I am on the fence about mathematical realism, and that has a large impact on what the truth tables would mean. If mathematical realism is true, then the truth tables already exist and correspond to real experience. If it’s false, then it’s just a convenient thought experiment for determining where, how, and when (and how often) experience actually occurs. If truth tables at the whole-brain or large-brain-region level constitute torture, then I would assume that the experience happens once when (or if) the tables are generated and that multiple lookups probably don’t cause further experience. Once neuron-group or neuron lookups are being used to run a simulation, I think some experience probably exists each time. By the time everything is computed, I think it’s almost certainly causing experience during each simulation. But suppose we find the mechanism of conscious awareness and it’s possible to figure out what a conscious person feels while being tortured using truth tables for conscious thoughts and a full simulation of the rest of their brain. Is that as bad as physically torturing them? I don’t think so, but it would probably still be morally wrong.
If you’ve read Nick Bostrom’s paper on Unification vs. Duplication, I think I find myself somewhere in the middle; using truth tables to find the result of a simulation seems a lot like Unification, while direct computation fits with Duplication.
For my own part, I’m pretty confident labeling as “torturing John Smith” any process that computes all and only the states of John Smith’s brain during torture, regardless of how those states are represented and stored, and regardless of how the computation is performed.
More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I certainly agree that if we describe the computation as being performed “off-camera” (by whatever unimaginable process created it), or being performed by a combination of that ineffable process and manual lookups, or distract attention from the process altogether, our intuitions are led to conclude that X is not experienced… for example, that Searle’s Chinese Room is not actually experiencing the human-level Chinese conversations it’s involved in.
But I don’t find that such intuitions are stable under reflection.
Is John Smith tortured just by the existence of a particular level of truth table (perhaps the one for S(x,t)) if no one computed it?
Wait, what? You mean, if the states are somehow brought into existence ex nihilo without any process having computed them? I have no idea. I’m not sure the question makes sense.
I think what I want to say about such things is that moral judgments are about actions and events. In this scenario I have no idea what action is being performed and no idea what event occurred, so I don’t know how to make a moral judgment about it.
If so, does it matter if someone re-computes that truth table since John Smith is tortured by it anyway?
Well, as above, I’m pretty confident that re-computing the table causes John to experience X (in addition to causing there to have been a John to experience it). I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
That said, at the moment I’m inclined to say that all the computations have equivalent moral status, and their moral statuses add in the ordinary way for two discrete events, whatever that is.
perhaps there is a way to compute truth tables in which a generic person is tortured that still return accurate S(x,t) without torturing John Smith
If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)… sure. And sure, perhaps there’s a way to do this if A is a “generic person” (whatever that means) and B is John Smith.
More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I picked the torture example because I’m not sure what “John experiences X” really means once you taboo all the confusing terms about personal identity and consciousness, but I think the moral question is a “territory” question, not a “map” question.
The “all states and only the states of the brain” part confuses me. Suppose we do time-slicing; the computer takes turns simulating John and simulating Richard. That can’t be a moral distinction. I suspect it will take some very careful phrasing to find a definition for “all states and only those states” that isn’t obviously wrong.
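Here is a toy sketch of the time-slicing case I have in mind, with step_john and step_richard standing in for hypothetical one-tick simulators supplied by the caller:

```python
# Toy illustration of time-slicing: the machine alternates between two
# simulations.  step_john and step_richard are hypothetical one-tick
# brain simulators passed in by the caller.

def time_sliced_run(john_state, richard_state, steps, step_john, step_richard):
    trace = []
    for _ in range(steps):
        john_state = step_john(john_state)            # one tick of John
        trace.append(("john", john_state))
        richard_state = step_richard(richard_state)   # one tick of Richard
        trace.append(("richard", richard_state))
    return trace
```

Every one of John’s states shows up in the trace, so whatever “all and only” means, it has to tolerate the interleaved Richard-states.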
Well, as above, I’m pretty confident that re-computing the table causes John to experience X (in addition to causing there to have been a John to experience it). I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
Yah. After thinking about this for a couple of days the only firm conclusion I have is that moral intuition doesn’t work in these cases. I have a slight worry that thinking too hard about these sorts of hypotheticals will damage my moral intuition for the real-world cases—but I don’t think this is anything more than a baby basilisk at most.
I picked the torture example because I’m not sure what “John experiences X” really means once you taboo all the confusing terms about personal identity and consciousness, but I think the moral question is a “territory” question, not a “map” question.
I don’t quite understand this. If a given event is not an example of John experiencing torture, then how is the moral status of John experiencing torture relevant?
The “all states and only the states of the brain” part confuses me.
I wasn’t trying to argue that if this condition is not met, then there is no moral difficulty; I was just trying to narrow my initial claim to one I could make with confidence.
If I remove the “and only” clause I open myself up to a wide range of rabbit holes that confuse my intuitions, such as “we generate the GLUT of all possible future experiences John might have, including both torture and a wildly wonderful life”.
the only firm conclusion I have is that moral intuition doesn’t work in these cases.
IME moral intuitions do work in these cases, but they conflict, so it becomes necessary to think carefully about tradeoffs and boundary conditions to come up with a more precise and consistent formulation of those intuitions. That said, changing the intuitions themselves is certainly simpler, but has obvious difficulties.
More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I was primarily interested in whether there is a continuum of experience ranging from full physical simulation to reading values from disk or a lookup/truth table, or whether there is a hard line between the shortest program that computes John Smith’s brain states over time and the shortest program that reads the pre-existing history of John Smith’s brain states into memory, with all other programs falling on either side of that line. Agreed regarding torture.
I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
Suppose that recomputations do not cause additional experience. In that case, the waterfall argument is basically true if any computation causes experience regardless of how the states are represented or stored; all possible representations can be mapped onto a single computation, and therefore all possible experience happens. If recomputations do cause additional experience, then how much additional experience occurs for varying complexity of representation and computation?
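Here is a toy version of the mapping move, just to make explicit how cheaply an arbitrary sequence of states can be interpreted, after the fact, as the trace of some computation (the “waterfall” states below are arbitrary placeholder strings):

```python
# Toy version of the waterfall-style mapping: given an arbitrary sequence of
# "physical" states and the trace of some computation, construct a post-hoc
# interpretation mapping one onto the other.

def post_hoc_interpretation(physical_states, computation_trace):
    if len(physical_states) != len(computation_trace):
        raise ValueError("need one physical state per computation step")
    # Assumes the physical states are distinct and hashable.
    return dict(zip(physical_states, computation_trace))

# "Interpret" three waterfall states as a three-step computation.
waterfall = ["splash-17", "splash-42", "splash-99"]
trace = ["s0", "s1", "s2"]
mapping = post_hoc_interpretation(waterfall, trace)  # {"splash-17": "s0", ...}
```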
If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)… sure. And sure, perhaps there’s a way to do this if A is a “generic person” (whatever that means) and B is John Smith.
By a generic person I mean a person who, for whatever reason, is lacking much of what we would consider identity. No name, no definite loved ones, no clear memories of moments in their life. A human person with recognizable emotional and intellectual and physical responses but without much else. Dementia patients might be a close analogue.
If such a generic person experiences joy or sadness then I think that it is real experience, and I care about it morally. However, if that model of a generic person were used to look up the reaction that I would have to similar experiences, I am not convinced that “I” would experience the same joy or sadness, at least not to the same extent as the generic person did. This has implications if an AGI is going to upload us and (presumably) try to simulate us as efficiently as possible. If it aggressively memoizes its computations of our brain states such that eventually nearly all human activity is reduced to the equivalent of truth-table lookups, then I am not sure whether that would be as morally desirable as computing an accurate physical simulation of everyone, even given the increased number of awesome-person-years possible with increased efficiency.
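For concreteness, this is roughly the kind of aggressive memoization I’m imagining, written as a Python sketch around a hypothetical brain_step function; once the cache is warm, most of the “simulation” is answered by lookup rather than recomputation:

```python
from functools import lru_cache

def expensive_brain_step(state):
    """Hypothetical stub for one tick of an accurate physical simulation."""
    raise NotImplementedError

# Aggressive memoization: once a state transition has been computed, every
# later occurrence of that state is answered by a table lookup instead.
# Assumes states are hashable.
@lru_cache(maxsize=None)
def brain_step(state):
    return expensive_brain_step(state)

def simulate(initial_state, steps):
    state = initial_state
    for _ in range(steps):
        state = brain_step(state)  # lookup if seen before, computed otherwise
    return state
```

Whether the lookup branch carries the same experience as the compute branch is exactly what I’m unsure about.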
Others have argued that it doesn’t matter how “thick” neurons are or how much redundant computation is done to simulate humans, but I haven’t yet run across a moral examination of dramatically thinning out neurons or brain regions, or of simplifying computations by abstracting the details of physical behavior away almost entirely while still simulating accurately. The standard argument for neuron replacement goes something like “if you replace all the neurons in your brain with fully functional simulacrums, you will not notice the difference,” but what I am conceiving of is “if you replace all the neurons in your brain with lookup tables, do you notice?”
So, I’m sorry, but I’ve read this comment several times and I simply don’t follow your train of thought here. There are pieces here I agree with, and pieces I disagree with, but I don’t understand how they connect to each other or to what they purport to respond to, and I don’t know how to begin responding to it.
So it’s probably best to leave the discussion here.