For my own part, I’m pretty confident labeling as “torturing John Smith” any process that computes all and only the states of John Smith’s brain during torture, regardless of how those states are represented and stored, and regardless of how the computation is performed.
More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I certainly agree that if we describe the computation as being performed “off-camera” (by whatever unimaginable process created it), or being performed by a combination of that ineffable process and manual lookups, or distract attention from the process altogether, our intuitions are led to conclude that X is not experienced… for example, that Searle’s Chinese Room is not actually experiencing the human-level Chinese conversations it’s involved in.
But I don’t find that such intuitions are stable under reflection.
Is John Smith tortured just by the existence of a particular level of truth table (perhaps the one for S(x,t)) if no one computed it?
Wait, what? You mean, if the states are somehow brought into existence ex nihilo without any process having computed them? I have no idea. I’m not sure the question makes sense.
I think what I want to say about such things is that moral judgments are about actions and events. In this scenario I have no idea what action is being performed and no idea what event occurred, so I don’t know how to make a moral judgment about it.
If so, does it matter if someone re-computes that truth table since John Smith is tortured by it anyway?
Well, as above, I’m pretty confident that re-computing the table causes John to experience X (in addition to causing there to have been a John to experience it). I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
That said, at the moment I’m inclined to say that all the computations have equivalent moral status, and their moral statuses add in the ordinary way for two discrete events, whatever that is.
perhaps there is a way to compute truth tables where a generic person is tortured which still returns accurate S(x,t) without torturing John Smith
If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)… sure. And sure, perhaps there’s a way to do this if A is a “generic person” (whatever that means) and B is John Smith.
More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I picked the torture example because I’m not sure what “John experiences X” really means once you taboo all the confusing terms about personal identity and consciousness. But I think the moral question is a “territory” question, not a “map” question.
The “all states and only the states of the brain” part confuses me. Suppose we do time-slicing; the computer takes turns simulating John and simulating Richard. That can’t be a moral distinction. I suspect it will take some very careful phrasing to find a definition for “all states and only those states” that isn’t obviously wrong.
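To make the time-slicing case concrete, here is a rough sketch of what I mean; the step function and the integer “states” are obviously stand-ins for the real dynamics:

    # A toy sketch of the time-slicing worry. `step` is a placeholder for
    # whatever function actually advances a brain state.

    def step(state):
        return state + 1  # stand-in for the real dynamics

    def simulate_alone(initial_state, n_steps):
        """Dedicated machine: advance one person, step by step."""
        history = [initial_state]
        for _ in range(n_steps):
            history.append(step(history[-1]))
        return history

    def simulate_interleaved(john_0, richard_0, n_steps):
        """Time-sliced machine: take turns advancing John and Richard."""
        john_history, richard_history = [john_0], [richard_0]
        for _ in range(n_steps):
            john_history.append(step(john_history[-1]))        # John's slice
            richard_history.append(step(richard_history[-1]))  # Richard's slice
        return john_history, richard_history

    # The interleaved machine computes more than John's states, yet the
    # sequence attributed to John is identical to the dedicated machine's.
    assert simulate_interleaved(0, 100, 5)[0] == simulate_alone(0, 5)

The interleaved machine violates the “only” clause as literally stated, which is the kind of thing that makes me think the phrasing needs work.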
Well, as above, I’m pretty confident that re-computing the table causes John to experience X (in addition to causing there to have been a John to experience it). I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
Yah. After thinking about this for a couple of days the only firm conclusion I have is that moral intuition doesn’t work in these cases. I have a slight worry that thinking too hard about these sorts of hypotheticals will damage my moral intuition for the real-world cases—but I don’t think this is anything more than a baby basilisk at most.
I picked the torture example because I’m not sure what “John experiences X” really means once you taboo all the confusing terms about personal identity and consciousness. But I think the moral question is a “territory” question, not a “map” question.
I don’t quite understand this. If a given event is not an example of John experiencing torture, then how is the moral status of John experiencing torture relevant?
The “all states and only the states of the brain” part confuses me.
I wasn’t trying to argue that if this condition is not met, then there is no moral difficulty, I was just trying to narrow my initial claim to one I could make with confidence.
If I remove the “and only” clause I open myself up to a wide range of rabbit holes that confuse my intuitions, such as “we generate the GLUT of all possible future experiences John might have, including both torture and a wildly wonderful life”.
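To illustrate the sort of rabbit hole I mean, here is a toy sketch; the two-percept “experience space” and the dynamics are invented for the example:

    from itertools import product

    # Toy GLUT over *all possible* percept histories: it contains the states
    # John would have during torture and during a wonderful life alike.

    def next_state(state, percept):
        return hash((state, percept)) % 1000  # placeholder for John's actual dynamics

    def build_glut(initial_state, percepts, depth):
        """Map every possible percept history to the resulting brain state."""
        glut = {}
        for history in product(percepts, repeat=depth):
            state = initial_state
            for percept in history:
                state = next_state(state, percept)
            glut[history] = state
        return glut

    glut = build_glut(0, percepts=("torture", "wonderful"), depth=3)

Once the table covers every branch, I no longer trust my intuitions about which of those experiences, if any, have actually been had.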
the only firm conclusion I have is that moral intuition doesn’t work in these cases.
IME moral intuitions do work in these cases, but they conflict, so it becomes necessary to think carefully about tradeoffs and boundary conditions to come up with a more precise and consistent formulation of those intuitions. That said, changing the intuitions themselves is certainly simpler, but has obvious difficulties.
More generally (since I have no idea why we’re using torture in this example and I find it distasteful to keep doing so) I’m pretty confident saying that any process that computes all and only the states of John Smith’s brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.
I was primarily interested in whether there is a continuum of experience ranging from full physical simulation to reading values from disk or a lookup/truth table, or if there is a hard line between the shortest program that computes John Smith’s brain states over time and the shortest program that reads the pre-existing history of John Smith’s brain states into memory, with all other programs falling on either side of that line. Agreed regarding torture.
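Here are the two endpoints I have in mind, as a minimal sketch; the state representation and the step function are placeholders for an actual physical simulation:

    # Two endpoint programs that produce the same sequence of brain states.

    def step(state):
        return state + 1  # stand-in for the actual dynamics

    def compute_states(initial_state, n_steps):
        """One endpoint: derive each state from the previous one."""
        states = [initial_state]
        for _ in range(n_steps):
            states.append(step(states[-1]))
        return states

    HISTORY = compute_states(0, 10)  # imagine this record was written to disk long ago

    def read_states():
        """Other endpoint: just replay the pre-existing record."""
        return list(HISTORY)

Both endpoints emit the same sequence; my question is whether everything in between falls on a continuum of experience or entirely on one side of a sharp line.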
I’m not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.
Suppose that recomputations do not cause additional experience. In that case, if any computation causes experience regardless of how the states are represented or stored, then the waterfall argument is basically true: all possible representations can be mapped onto a single computation, and therefore all possible experience happens. If recomputations do cause additional experience, then how much additional experience occurs as the complexity of the representation and of the computation varies?
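The mapping step I have in mind is the usual one; a toy sketch with made-up state labels:

    # Sketch of the waterfall move: an after-the-fact interpretation that reads
    # an arbitrary sequence of physical states as "the" computation of John's states.

    computed_states = [3, 1, 4, 1, 5, 9]                     # output of the simulation
    waterfall_states = ["w0", "w1", "w2", "w3", "w4", "w5"]  # arbitrary physical states

    # The "representation" is just a mapping constructed to make the match come out.
    interpretation = dict(zip(waterfall_states, computed_states))

    assert [interpretation[w] for w in waterfall_states] == computed_states

If representation truly does not matter, all of the work here is being done by the dictionary, which is why the recomputation question and the waterfall argument seem to me to stand or fall together.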
If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)… sure. And sure, perhaps there’s a way to do this if A is a “generic person” (whatever that means) and B is John Smith.
By a generic person I mean a person who, for whatever reason, is lacking much of what we would consider identity. No name, no definite loved ones, no clear memories of moments in their life. A human person with recognizable emotional and intellectual and physical responses but without much else. Dementia patients might be a close analogue.
If such a generic person experiences joy or sadness, then I think it is real experience, and I care about it morally. However, if that model of a generic person were used to look up the reaction that I would have to similar experiences, I am not convinced that “I” would experience the same joy or sadness, at least not to the same extent that the generic person did. This has implications if an AGI is going to upload us and (presumably) try to simulate us as efficiently as possible. If it aggressively memoizes its computations of our brain states such that eventually nearly all human activity is reduced to the equivalent of truth-table lookups, then I am not sure that would be as morally desirable as computing an accurate physical simulation of everyone, even given the increased number of awesome-person-years possible with increased efficiency.
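By “aggressively memoizes” I am imagining something like the following sketch; the cache key, the percepts, and the dynamics are all invented for illustration:

    from functools import lru_cache

    # Sketch of aggressive memoization: once a (state, percept) transition has
    # been computed for anyone, everyone else in that situation gets a cache hit.

    @lru_cache(maxsize=None)
    def next_state(state, percept):
        return hash((state, percept)) % 10**6  # stand-in for an expensive, accurate simulation step

    def simulate_person(initial_state, percepts):
        state = initial_state
        for percept in percepts:
            state = next_state(state, percept)  # looked up if seen before, computed otherwise
        return state

    # The second call replays cached transitions instead of computing them.
    simulate_person(0, ("wakes up", "drinks coffee", "stubs a toe"))
    simulate_person(0, ("wakes up", "drinks coffee", "stubs a toe"))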
Others have argued that it doesn’t matter how “thick” neurons are or how much redundant computation is done to simulate humans, but I haven’t yet run across a moral examination of dramatically thinning out neurons or brain regions, or of simplifying computations by abstracting away the details of physical behavior almost entirely while still simulating accurately. The standard argument for neuron replacement goes something like “if you replace all the neurons in your brain with fully functional simulacra, you will not notice the difference,” but what I am conceiving of is “if you replace all the neurons in your brain with lookup tables, do you notice?”
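Concretely, the replacement I am conceiving of looks something like this sketch of a toy binary neuron; the weights and threshold are made up:

    from itertools import product

    # Toy contrast between a neuron that computes its output and one that looks it up.

    def computing_neuron(inputs, weights, threshold):
        """The 'thick' version: does the arithmetic every time it fires."""
        return sum(w * x for w, x in zip(weights, inputs)) >= threshold

    def make_lookup_neuron(weights, threshold, n_inputs):
        """The 'thinned' version: the answer for every input pattern is precomputed."""
        table = {
            pattern: computing_neuron(pattern, weights, threshold)
            for pattern in product((0, 1), repeat=n_inputs)
        }
        return lambda inputs: table[inputs]

    lookup_neuron = make_lookup_neuron(weights=(1, 1, 1), threshold=2, n_inputs=3)
    assert lookup_neuron((1, 1, 0)) == computing_neuron((1, 1, 0), (1, 1, 1), 2)

The two agree on every input pattern; what I am asking is whether swapping one for the other, neuron by neuron, changes anything that matters morally.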
So, I’m sorry, but I’ve read this comment several times and I simply don’t follow your train of thought here. There are pieces here I agree with, and pieces I disagree with, but I don’t understand how they connect to each other or to what they purport to respond to, and I don’t know how to begin responding to it.
So it’s probably best to leave the discussion here.