It is possible to use language meaningfully without knowing exactly how it pans out in terms of precise configurations of matter
I have never claimed otherwise. In fact, there is literally nothing that I have an exact description of in terms of matter—neither pain nor chairs. But you have to know something. I know that “a chair is what I sit on”, and from that there is a natural way to derive many statements about chairs. I know that “gravity is what makes things fall down”, and from that there is a fairly straightforward path to the modern understanding of gravity. There is nothing that you know about consciousness from which you can derive a more accurate and more material description.
Treating pain semantically as some specific brain activity buys you nothing
It buys me the ability to look at “do robots feel pain” and see that it’s a stupid question.
What evil dualism?
How do you know? And what of things like https://en.wikipedia.org/wiki/Global_Workspace_Theory ?
It doesn’t seem to have given you the ability to prove that it is a stupid question.
I see a model that claims to reproduce some of the behaviors of the human mind. Why is that relevant? Where are your subjective experiences in it?
Also, to clarify, when I say “you know nothing”, I’m not asking for some complex model or theory; I’m asking for the starting point from which those models and theories were constructed.
prove that it is a stupid question.
Proof is a high bar, and I don’t know how to reach it. You could teach me by showing a proof, for example, that “is purple bitter” is a stupid question. Although I suspect that I would find your proof circular.
Well, for one, you have been unwilling to share any such knowledge. Is it a secret, perhaps?
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
Where are your subjective experiences in it?
I was responding to your claim that “there is nothing that you know about consciousness from which you can derive a more accurate and more material description”. This has been done, so that claim was false. You have shifted the ground.
that “is purple bitter” is a stupid question.
Purple is a colour, bitter is a taste, therefore category error.
Proof is a high bar
Then why be so sure about things? Why not say “dunno” to “can robots feel pain”?
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it. I ask you if it has subjective experiences, because that seems to be the most important aspect of consciousness to you. If you can’t find them in this model, then the model is on my side, not yours.
Purple is a colour, bitter is a taste, therefore category error.
That’s ridiculous. Grapefruit is a fruit, bitter is a taste, but somehow “grapefruit is bitter” is true and not a category error.
It’s very difficult to prove that something is impossible, and you can’t do it by noting that it has never happened yet.
<...>
Then why be so sure about things?
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple. Maybe we mean different things when we say “proof”?
While GWT is a model, it’s not a model of consciousness as you use that word. It’s just a model of a human brain and some of the things happening in it
That’s still an irrelevant objection. The issue is whether the concept of consciousness can be built on and refined, or whether it should be abandoned. GWT shows that it can be built on, and it is unreasonable to demand perfection.
That’s ridiculous. Grapefruit is a fruit, bitter is a taste, but somehow “grapefruit is bitter” is true and not a category error.
Because then I’d have to say “dunno” about literally almost everything, including the bitterness of purple.
Is that worse than saying you know things you don’t know?
Sometimes different people use the same words to mean different things. I deduce that GWT does not build on consciousness as you understand it, because it lacks the feature that is most important to you. It builds on consciousness as I understand it. How is that irrelevant?
Is that worse than saying you know things you don’t know?
You mean, is saying “dunno” to everything worse than saying something is true without having absolute 100% confidence? Yes. What kind of question is that?
Also, why did you quote my “category error” response? This doesn’t answer that at all.
If we assume that the sweet spot is somewhere between 0% and 100%, are you sure you are saying “dunno” enough?
Quite sure. How about you?
And, again, what sort of question is that? What response did you expect?