So maybe consciousness has always been a linguistic debate?
It has always been at least a linguistic debate, but that does not show that it is at most a linguistic debate. The thing is that the Hard Problem is already only about one specific sub-meaning of “consciousness”, namely phenomenal consciousness... so you can’t dissolve it just by saying “consciousness” means more than one thing, or “let’s focus on the components of the problem”. A problem is at least as hard as its hardest sub-problem.
And “let’s focus on the components of the problem” isn’t eliminativism… eliminativism is the claim that there is nothing for the problem to be about.
And breaking the problem into a set of sub-problems that don’t contain the Hard Problem is not dissolving it. (Cf. Kaj’s comments.)
Don’t start by asking ‘what is consciousness’ or ‘what are qualia’; start by asking ‘what are the cognitive causes of people talking about consciousness and qualia’.
Which might be real things that are adequately designated by the words “consciousness” and “qualia”, or real things that are not adequately designated by those words, or nothing at all, or...
The dictum tells you nothing.
I would be curious to know what you think about the box solving the meta-problem just before the addendum.
Do you think it is unlikely that AI would rediscover the hard problem in this setting?
I don’t think it would tell you much, because it only excludes the case where the HP is a meme, not the case where it’s a repeatable error.
Boxed AI tells you there is an HP:
Could be because it is phenomenally conscious, and it has noticed there is a real HP.
Could be because it is repeating a conceptual confusion.
Boxed AI tells you there is not an HP:
Could be because it is a zombie, so it can’t understand what PC (phenomenal consciousness) is.
Could be because it has PC, but isn’t subject to the erroneous thinking that causes the HP.
Note that you can’t simply stipulate whether the AI is or isn’t conscious, or that it’s a perfect reasoner.
Note that philosophers don’t agree on what constitutes a conceptual confusion.
Note that being able to trace back the causal history of an output doesn’t tell you it wasn’t caused by PC: one of the possible solutions to the HP is that certain kinds of physical activity or information processing are identical to PC, so there is not necessarily an xor between PC and physical causation. Of course, it is also a fact that human pronouncements have some sort of causal history, and that doesn’t settle much.
Note that, as things stand, the thought experiment is an intuition pump like Mary’s Room, etc.