I feel like I already understand, reasonably well, the chain of causation in my brain that leads to me saying the thing in the previous paragraph, i.e. “I’m conscious right now, let me describe the qualia…” See my Book Review: Rethinking Consciousness.
You only have evidence that you understand a chain of causation. You don’t have evidence that no alternative account is possible.
…And it turns out that there is nothing whatsoever in that chain of causation that looks like what we intuitively expect consciousness and qualia to look like.
If you look at a brain from the outside, its qualia aren’t visible. Equally, if you look at your brain from the inside, you see nothing but qualia...you do not see neural activity as such.
And your internal view of causality is that your pains cause you ouches.
Therefore, I need to conclude that either consciousness and qualia don’t exist, or that consciousness and qualia exist, but that they are not the ontologically fundamental parts of reality that they intuitively seem to be.
I don’t think qualia seem to be fundamental.
As I understand it, here I’m endorsing the “illusionism” perspective, as advocated (for example) by Keith Frankish, Dan Dennett, and Michael Graziano.
Illusionism is the claim that qualia don’t exist at all, not the claim they are merely non-fundamental. An emergentist could agree that they are non-fundamental.
Next, if a computer chip is running similar algorithms as a human philosopher, expressing a similar chain of causation, that leads to that chip emitting similar descriptions of consciousness and qualia as human philosophers emit, for similar underlying reasons, then I think we have to say that whatever consciousness and qualia are (if anything), this computer chip has those things just as much as the human does.
That isn’t illusionism. The most an illusionist would say is that a computer would be subject to the same illusions/delusions.
You have bypassed the possibility that what causes qualia to emerge is not computation, but the concrete physics of the brain… something that can only be captured by a physical description.
(Side note: Transformer-based self-supervised language models like GPT-3 can emit human-sounding descriptions of consciousness, but (I claim) they emit those descriptions for very different underlying reasons than brains do—i.e., as a result of a very different chain of causation / algorithm.)
Different chain of physical causation, or different algorithm? It’s quite possible for the same algorithm to be implemented in physically different ways... and it’s quite possible for emergent consciousness to supervene on physics.
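(To make the multiple-realizability point concrete: a minimal Python sketch, illustrative only and not part of the original exchange, with made-up function names. Two structurally different implementations agree on every input, so at the computational level they are the same algorithm even though the steps that realize it differ.)

```python
# Illustration (hypothetical, not from the discussion above): one abstract
# algorithm, "compute the parity of a list of bits", realized in two
# structurally different ways.

import itertools

def parity_by_counting(bits):
    """Count the 1s, then take the remainder mod 2."""
    return sum(bits) % 2

def parity_by_xor(bits):
    """Fold XOR across the list, never forming an explicit count."""
    result = 0
    for b in bits:
        result ^= b
    return result

# The two realizations agree on every 4-bit input, so at the level of
# input/output behaviour they are the same computation.
for bits in itertools.product([0, 1], repeat=4):
    assert parity_by_counting(bits) == parity_by_xor(bits)
print("Same function, different realizations.")
```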
However, nihilism is not decision-relevant
Nihilism about what, and why? I don’t think you have a theory that consciousness doesn’t exist or that qualia don’t exist. And even if you did, I don’t see how it implies the non-existence of values, or preferences or selves or purposes… or whatever else it takes to undermine decision theory.
When I do that, I wind up feeling pretty strongly that if an AGI can describe joy and suffering in a human-like way, thanks to human-like underlying algorithmic processes, then I ought to care about that AGI’s well-being.
Because they have the qualia, or because qualia don’t matter?
Because if the agent has to (meta)learn better and better strategies for things like brainstorming and learning and planning and understanding, I think this process entails the kind of self-reflection which comprises full-fledged self-aware human-like consciousness.
Meaning that qualia aren’t even one component of human consciousness? Or one possible meaning of “consciousness”?
So I don’t even think the AGI would be in a gray area—I think it would be indisputably conscious, conscious according to any reasonable definition.
Illusionists don’t think humans are conscious, for some definitions of consciousness.
Illusionism is the claim that qualia don’t exist at all, not the claim they are merely non-fundamental. An emergentist could agree that they are non-fundamental.
I’m unclear on this part. It seems like maybe just terminology to me. Suppose
Alice says “Qualia are an illusion, they don’t exist”,
Bob says “Qualia are an illusion. And they exist. They exist as an illusion.”
…I’m not sure Alice and Bob are actually disagreeing about anything of substance here, and my vague impression is that you can find self-described illusionists on both sides of that (non?)-dispute. For example, Frankish uses Alice-type descriptions, whereas Dennett and Graziano use Bob-type descriptions, I think.
Analogy: in the moving-Mario optical illusion, Alice would say “moving-Mario does not exist”, and Bob would say “there is an illusion (mental model) of moving-Mario, and it’s in your brain, and that illusion definitely exists, how else could I be talking about it?”
And if you’re on the Bob side of the dispute here, that would seem to me to be a form of emergentism, right??
You have bypassed the possibility that what causes qualia to emerge is the concrete physics of the brain, something that can only be captured by a physical description.
I don’t think I understand this part. According to the possibility that you have in mind, does the computer chip emit similar descriptions of consciousness and qualia as the human philosopher? Or not?
And then follow-up questions:
If yes, then do you agree that (on this possibility) actual consciousness and qualia are not involved in the chain of causation in your brain that leads to your describing your own consciousness and qualia? After all, presumably the chain of causation is the same in the computer chip, right?
If no, then does this possibility require that it’s fundamentally impossible to simulate a brain on a computer, such that the simulation and the actual brain emit the same outputs in the same situations?
Therefore, I need to conclude that either consciousness and qualia don’t exist, or that consciousness and qualia exist, but that they are not the ontologically fundamental parts of reality that they intuitively seem to be.
Illusionism is the claim that qualia don’t exist at all, not the claim they are merely non-fundamental. An emergentist could agree that they are non-fundamental
I’m unclear on this part. It seems like maybe just terminology to me
I don’t think so because “fundamental” and “illusory” are not obvious antonyms.
Alice says “Qualia are an illusion, they don’t exist”,
Bob says “Qualia are an illusion. And they exist. They exist as an illusion.”
The Bob version runs into a basic problem with illusionism, which is that it is self-contradictory: an illusion is a false appearance, a false appearance is an appearance, and an appearance is a quale.
The Bob version could be rectified as:
Charlie says “Qualia are a delusion. People have a false belief that they have them, but don’t have them.”
And some illusionists believe that, but don’t call it delusionism.
[Edit: I think the Charlie claim is Dennett’s position.]
[Edit: I think I understand your position much better after having read your reply to Mitchell. Qualia must exist, since neither brain states nor perceived objects have their properties, but only in a virtual sense...?]
And if you’re on the Bob side of the dispute here, that would seem to me to be a form of emergentism, right??
Only Bob’s (or Robs’s) self-defeating form of illusionism. Basically, illusionists are trying to deny qualia, and if they let them in by the back door, that’s probably a mistake. Also, they don’t believe in the full panoply of qualia anyway, only the one responsible for the illusion.
I don’t think I understand this part. According to the possibility that you have in mind, does the computer chip emit similar descriptions of consciousness and qualia as the human philosopher?
I’m taking that as true by hypothesis.
If yes, then do you agree that (on this possibility) actual consciousness and qualia are not involved in the chain of causation in your brain that leads to your describing your own consciousness and qualia? After all, presumably the chain of causation is the same in the computer chip, right?
The chain of causation is definitely different because silicon isn’t protoplasm. By hypothesis, the computation is the same, but computation isn’t causation. Computation is essentially a lossy, high-level description of the physical behaviour.
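(A minimal sketch of one thing “lossy, high-level description” could mean, illustrative only and not something stated in the comment; the voltage numbers and threshold are invented. Many distinct physical states get mapped to one computational state, so the computational description cannot be inverted to recover the physics.)

```python
# Toy illustration (hypothetical voltages): a logic level is an abstraction
# over many distinct physical states. Mapping voltage -> bit is many-to-one,
# which is the sense in which the high-level description is "lossy".

def logical_bit(voltage_volts):
    """Read a physical voltage as a logical 0 or 1 (threshold chosen arbitrarily at 1.5 V)."""
    return 1 if voltage_volts >= 1.5 else 0

physical_states = [0.02, 0.31, 2.80, 3.29, 3.30]   # physically distinct states
computational_states = [logical_bit(v) for v in physical_states]

print(computational_states)   # [0, 0, 1, 1, 1] -- only two distinct values survive

# Given only the bit, you cannot tell which voltage produced it; the
# physical detail has been discarded by the computational description.
```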
If no, then does this possibility require that it’s fundamentally impossible to simulate a brain on a computer, such that the simulation and the actual brain emit the same outputs in the same situations?
No, but that says nothing about qualia. It’s possible for qualia to depend on some aspects of the physics that aren’t captured by the computational description… which means that out of two systems running the same algorithm on different hardware, one could have qualia, but the other not. The other is a kind of zombie, but not a p-zombie because of the physical difference. And since that is true, the GAZP (Generalized Anti-Zombie Principle) is false.
I strongly disagree with “computation is a lossy high-level description”. For what we’re talking about, I think computation is a lossless description. I believe the thing we are calling ‘qualia’ is equivalent to a Python function written on a computer. It is not the case that it is a ‘real’ function on the computer it was written on but a ‘zombie’ function when run on a different computer. If the computation is exactly the same, the underlying physical process that produced it is irrelevant. It is the same function.
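(One way to read “it is the same function” is extensional equivalence. The sketch below is illustrative only, not something the commenter supplied: over a finite domain a function is exhausted by its input/output table, and two implementations with the same table count as the same function in that sense, whatever machinery produces the outputs.)

```python
# Sketch of the extensional reading (illustrative only): two implementations
# whose input/output tables coincide are "the same function" in this sense,
# regardless of the process that computes them.

def square_by_multiplication(n):
    return n * n

def square_by_summing_odds(n):
    # n**2 equals the sum of the first n odd numbers.
    return sum(2 * k + 1 for k in range(n))

domain = range(10)
table_a = {n: square_by_multiplication(n) for n in domain}
table_b = {n: square_by_summing_odds(n) for n in domain}

assert table_a == table_b
print("Identical input/output tables on this domain: extensionally the same function.")
```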
Computation in general is a lossy, high-level description, but not invariably.
For what we’re talking about, I think computation is a lossless description.
And what we are talking about is the computational theory of consciousness.
If the computational theory of consciousness is correct, then computation is a lossless description.
But that doesn’t prove anything relevant, because it doesn’t show that the computational theory is actually or necessarily correct. It is possibly wrong, so computational zombies are still possible.
I believe the thing we are calling ‘qualia’ is equivalent to a Python function written on a computer.
Can you state the function?