Also, there’s no theoretical limitation on temporarily disabling certain brain areas, or even single neurons, and examining how that corresponds to reports of qualia.
Sure, but disable one part and the person won’t be able to verbally report information about the input, yet can still use that information non-verbally. Which part is the “qualia” part?
You should think about this further. How much would you be willing to bet that unconscious people experience qualia? How about rocks?
I can’t bet on that until we agree upon a definition of qualia. Personally, by the definition that makes coherent sense to me, qualia is the portion of reality that I have access to (and epistemology is an attempt to find the most parsimonious system that explains my qualia). I don’t think it makes sense for anyone to talk about qualia except in reference to themselves in the current moment. I suppose I’m a “soft” solipsist.
On the other hand, I like to define “consciousness” as “self-aware + environment-aware”. So to answer the spirit of the question, I’ll take qualia to mean “awareness”, and then we can at least say that interacting with something is a necessary condition for being “aware” of it. So rocks can’t be very aware, since they aren’t really interacting much with anything... whereas the various brain sections of unconscious people are sometimes interacting with themselves, so they might sometimes be self-aware.
I think the crux of that disagreement is as follows:
You think the algorithm matters morally, and the input-output function is relevant insofar as it gives us information about what the various algorithms mean.
I think the input-output function is what matters morally, and the algorithm is relevant insofar as it gives us information about what the input-output function is.
-- Stop reading here if brevity is important, otherwise...
To turn this into a more concrete problem: Suppose algorithm X made people cry and verbally report that they feel sad. You conclude that X is sadness. I conclude that X implements sadness.
If we then take X and modify all the things to which X is connected, such that it now makes people smile and verbally report that they feel happy, I say that X now implements happiness.
I’m guessing you’d say that X was never “happiness” in the first place, and “happiness” is actually in the interaction between X and the surrounding regions.
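To make that concrete, here’s a toy sketch in Python (the component, the wiring, and the numbers are all invented for illustration, not a claim about how brains actually work): the very same inner component X, hooked up to different surroundings, drives completely different behavioral output.

```python
# Toy illustration (all names invented): the same inner component X,
# wired to different surroundings, drives different behavioral output.

def x(signal):
    # The fixed "algorithm" under discussion.
    return signal * 2

def wired_for_sadness(signal):
    # Surroundings route X's output into crying and "I feel sad" reports.
    activation = x(signal)
    return {"face": "crying", "report": "I feel sad", "intensity": activation}

def wired_for_happiness(signal):
    # Same X, different surroundings: smiling and "I feel happy" reports.
    activation = x(signal)
    return {"face": "smiling", "report": "I feel happy", "intensity": activation}

print(wired_for_sadness(3))    # here X "implements sadness"
print(wired_for_happiness(3))  # here the very same X "implements happiness"
```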
My argument is that there are infinite configurations of X and its surroundings. Since our judgement of what X + surroundings signifies ultimately depends on the output, it’s the output that really matters. If someone came to us saying they were in pain, we’d immediately care because of the output; it wouldn’t matter what the circuitry creating the pain looked like.
If a shallow mechanism for generating the output breaks (say, a spinal cord injury), then we still know what the output would be in a mildly counter-factual scenario, and that’s what matters morally.
The degree to which we need to make counter-factual assumptions before getting to output is important as well. At one extreme, if we are looking at a blank slate and have to counter-factually assume the entire brain, the object has no moral significance. If we just have to counter-factually assume that someone’s spinal cord is repaired, there is high moral significance. Something like a coma state would be an intermediate scenario... the question is basically: how much information do we have to add to this algorithm before it generates meaningful output?
(Note: the above paragraph’s reasoning is re-purposed; it was originally made for settling abortion and person-hood debates.)
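If I had to turn that into something gradable, it might look like the toy sketch below (the 0-to-1 scale, the linear shape, and the specific numbers are made up purely for illustration):

```python
# Toy sketch (invented scale): moral weight falls off with how much of the
# system we'd have to counter-factually assume before it produces output.

def moral_weight(assumed_fraction):
    """assumed_fraction: 0.0 = generates meaningful output as-is,
    1.0 = we'd have to counter-factually assume the entire brain."""
    return max(0.0, 1.0 - assumed_fraction)

print(moral_weight(0.05))  # repaired spinal cord: high moral significance
print(moral_weight(0.5))   # coma-like state: intermediate
print(moral_weight(1.0))   # blank slate: no moral significance
```

The linear fall-off is arbitrary; the only point is that the weight is some decreasing function of how much counter-factual repair is needed.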
Another edge-case: Suppose you had a conscious being which was happy, but contained intact, suffering human brains in its algorithm. Because it would only take a very slight counter-factual modification to get those suffering human brains to generate suffering behavioral output, we still care about them morally.