If we can make a high-resolution record of what happens in their brain when they report qualia, we can look at what kind of computation those qualia are, and therefore determine if other agents have them too.
I’m confused...you seem to be suggesting that we use behavioral output to determine which parts of the brain are responsible for qualia, which you say should define morality… didn’t you just tell me that I shouldn’t use behavioral output to define my morality?
If we did it the way you said, and looked at the brain to see what happened when people reported perceiving things, we’d find out some cool things about human perception. However, there’s no guarantee that other minds will use the same computation. That’s why I’m emphasizing that it’s important to focus on the input-output function of the algorithm, rather than the content of the algorithm itself. (Again, this does not mean we ignore the algorithm altogether—it means that we look at the algorithm with respect to what it would output for a given input—so we still care about paralyzed people, brains in vats, etc., since we can make guesses as to what they would output given minor changes to the situation.)
(Not to mention, there is a cascade of things happening from the moment your eyes perceive red to the moment your mouth outputs “Yeah, that’s red” and looking at an actual brain will tell you nothing about which part of the computation gets the “qualia” designation. At best, you’ll find some central hubs which handle information from many parts. Qualia, like free will, is a philosophical question—all the neuroscience knowledge in the world won’t help answer it. Neuroscience might help eliminate some obviously wrong hypotheses, as it did with free will, but fundamentally this is a question that can and should be settled without neuroscience.)
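To make the input-output point concrete, here is a minimal Python sketch (the function names, wavelength thresholds, and output strings are invented purely for illustration): two quite different internal algorithms realizing the same input-output function. On the view above, it is that shared mapping, not which implementation happens to be running, that does the moral work.

```python
# Two different internal algorithms that realize the same input-output function.
# All names and thresholds below are hypothetical, purely for illustration.

def report_color_rule(wavelength_nm: float) -> str:
    """Implementation A: a hand-written threshold rule."""
    if 620 <= wavelength_nm <= 750:
        return "Yeah, that's red"
    return "That's not red"

def report_color_table(wavelength_nm: float) -> str:
    """Implementation B: a coarse lookup table over 1 nm bins."""
    table = {nm: ("Yeah, that's red" if 620 <= nm <= 750 else "That's not red")
             for nm in range(350, 800)}
    return table[round(wavelength_nm)]

def same_io_function(f, g, inputs) -> bool:
    """Two algorithms 'agree' if they give the same output for every tested input."""
    return all(f(x) == g(x) for x in inputs)

print(same_io_function(report_color_rule, report_color_table,
                       [400.0, 550.0, 650.0, 700.0]))  # True
```

The internals differ (a rule versus a table), but anything that only sees inputs and outputs cannot tell them apart.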
didn’t you just tell me that I shouldn’t use behavioral output to define my morality?
There’s probably a lot of misunderstanding going on between us. I thought you meant you always need the output. In my interpretation, you only need the output once for a particular quale, in the optimal situation. After that, you can just start scanning brains or programs for similar computations. How much output we need, if any, depends on what stage of understanding we’re at.
However, there’s no guarantee that other minds will use the same computation.
True. However, if the reporting of qualia corresponds to certain patterns of brain activity, and that brain activity can be expressed mathematically, then we have a computation and we can think about other ways the computation could be performed. We might even be able to test different forms of the computation on EMs (whole-brain emulations), and see what they report.
“Yeah, that’s red” and looking at an actual brain will tell you nothing about which part of the computation gets the “qualia” designation.
This is incorrect, because there are temporal differences in brain activity. Light on your retina doesn’t instantly transfer information to all parts of your brain responsible for visual processing. Also, there’s no theoretical limitation on temporarily disabling certain brain areas or even single neurons, and examining how that corresponds to reporting of qualia.
Qualia, like free will, is a philosophical question—all the neuroscience knowledge in the world won’t help answer it.
You should think about this further. How much would you be willing to bet that unconscious people experience qualia? How about rocks?
Also, there’s no theoretical limitation on temporarily disabling certain brain areas or even single neurons, and examining how that corresponds to reporting of qualia.
Sure, but disable one part and the person won’t be able to verbally report information about the input, yet can still use that information non-verbally. Which part is the “qualia” part?
You should think about this further. How much would you be willing to bet that unconscious people experience qualia? How about rocks?
I can’t bet on that until we agree upon a definition of qualia. Personally, as per the definition that makes coherent sense to me, qualia is the section of reality that I’ve got access to (and epistemology is an attempt to understand the most parsimonious system that explains my qualia). I don’t think it makes sense for anyone to talk about qualia, except in reference to themselves in the current moment. I suppose I’m a “soft” solipsist.
On the other hand, I like to define “consciousness” as “self-aware + environment-aware”. So to answer the spirit of the question, I’ll take qualia to mean “awareness”, and then we can at least say that interacting with something is a necessary condition for being “aware” of it. So rocks can’t be very aware, since they aren’t really interacting much with anything...whereas the various brain sections of unconscious people are sometimes interacting with themselves, so they might sometimes be self-aware.
misunderstanding
I think the crux of that disagreement is as follows:
You think the algorithm matters morally, and the input-output function is relevant insofar as it gives us information about what the various algorithms mean.
I think the input-output function is what matters morally, and the algorithm is relevant insofar as it gives us information about what the input-output function is.
-- Stop reading here if brevity is important, otherwise...
To turn this into a more concrete problem: Suppose algorithm X made people cry and verbally report that they feel sad. You conclude that X is sadness. I conclude that X implements sadness.
If we then took X and modified all the things to which it was connected, such that it now makes people smile and verbally report that they feel happy, I’d say that X now implements happiness.
I’m guessing you’d say that X was never “happiness” in the first place, and “happiness” is actually in the interaction between X and the surrounding regions.
My argument is that there are infinitely many configurations of X and its surroundings. Since our judgement of what X+surroundings signifies ultimately depends on the output, it’s the output that really matters. If someone came to us saying they were in pain, we’d immediately care because of the output—it wouldn’t matter what the circuitry creating the pain looked like.
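A toy version of this X scenario, assuming nothing about real neural circuitry (the module, thresholds, and output strings below are all made up for illustration): the same internal module X, embedded in two different surroundings, drives opposite behavioral outputs, and on the output-focused view the moral label tracks the whole system’s behavior rather than X in isolation.

```python
# The same internal module X, wired into two different surroundings.
# All functions and thresholds are hypothetical, for illustration only.

def x(stimulus: float) -> float:
    """The shared internal module: some fixed computation over its input."""
    return stimulus * 2.0

def original_wiring(stimulus: float) -> str:
    """X's signal is routed so the whole system cries and reports sadness."""
    return "cries; reports 'I feel sad'" if x(stimulus) > 1.0 else "neutral"

def rewired_surroundings(stimulus: float) -> str:
    """The very same X, but its signal now drives smiling and a happy report."""
    return "smiles; reports 'I feel happy'" if x(stimulus) > 1.0 else "neutral"

# Same X, opposite outputs: on the output-focused view, "sadness" vs "happiness"
# is a fact about the whole input-output mapping, not about X in isolation.
print(original_wiring(1.0))       # cries; reports 'I feel sad'
print(rewired_surroundings(1.0))  # smiles; reports 'I feel happy'
```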
If a shallow mechanism for generating the output breaks (say, a spinal cord injury), then we know what the output would be in a mildly counterfactual scenario, and that’s what matters morally.
The degree to which we need to make counterfactual assumptions before getting to output is important as well—on one extreme, if we are looking at a blank slate and we have to counterfactually assume the entire brain, the object has no moral significance. If we just have to counterfactually assume someone’s spinal cord is repaired, there is high moral significance. Something like a coma state would be an intermediate scenario...the question is basically: how much information do we have to add to this algorithm before it generates meaningful output?
(Note: the above paragraph’s reasoning is repurposed—it was originally made for settling abortion and personhood debates.)
Another edge case: Suppose you had a conscious being which was happy, but contained intact, suffering human brains in its algorithm. Because it would only take a very slight counterfactual modification to get those suffering human brains to generate suffering behavioral output, we still care about them morally.
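A rough sketch of how this counterfactual-distance idea could be scored, with entirely made-up cases and numbers, just to show the shape of the reasoning:

```python
# A toy scoring of moral weight by how much we would have to counterfactually
# add or repair before the system produces meaningful output.
# The cases and numbers are illustrative assumptions, not real measurements.

def moral_weight(repair_fraction: float) -> float:
    """repair_fraction: 0.0 means the system already produces output as-is;
    1.0 means we would have to counterfactually assume an entire brain."""
    return max(0.0, 1.0 - repair_fraction)

cases = {
    "healthy adult (no assumptions needed)":       0.0,
    "spinal cord injury (repair the output path)": 0.05,
    "coma (intermediate amount of repair)":        0.5,
    "blank slate (assume the entire brain)":       1.0,
}

for label, repair in cases.items():
    print(f"{label}: weight {moral_weight(repair):.2f}")

# The nested-brains edge case: each intact suffering brain inside the happy
# being needs only a tiny counterfactual modification to emit suffering
# output, so each retains nearly full weight on this measure.
print(moral_weight(0.02))  # 0.98
```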