Or maybe it’s like the times I’ve read poorly-written math textbooks, and there’s a complicated proof of Theorem X, and I’m able to check that every step of the proof is correct, but all the steps seem random, and then out of nowhere, the last step says “Therefore, Theorem X is true”. OK, well, I guess Theorem X is true then.
...But if I had previously found Theorem X to be unintuitive (“it seems like it shouldn’t be true”), I’m now obligated to fix my faulty intuitions and construct new better ones to replace them, and doing so can be extremely challenging. In that sense, reading and verifying the confusing proof of Theorem X is “annoying and deeply unsatisfying”.
(The really good math books offer both a rigorous proof of Theorem X and an intuitive way to think about things such that Theorem X is obviously true once those intuitions are internalized. That saves readers the work of searching out those intuitions for themselves from scratch.)
So, I’m not saying that Graziano’s argument is poorly-written per se, but having read the book, I find myself more-or-less without any intuitions about consciousness that I can endorse upon reflection, and this is an annoying and unsatisfying situation. Hopefully I’ll construct new better intuitions sooner or later. Or—less likely I think—I’ll decide that Graziano’s argument is baloney after all :-)
Sorry, but you can be better than that. You should not be trusting textbook authors when they say that Theorem X is true. If you don’t follow the chain of reasoning and see for yourself why it works, then you shouldn’t take it at face value. You can do better.
This is an unpopular opinion because people don’t like doing the work. But if you’ve read the memoirs of anyone who has achieved greatness through originality in their work, like Richard Feynman for example, there is a consistent lesson: don’t trust what you don’t understand yourself.
In a community where the explicit goal is to be less wrong, I cannot think of a stronger mandate than to not trust authority and to develop your own intuitive understanding of everything. Anyone who says this isn’t possible hasn’t really tried.
develop your own intuitive understanding of everything
I agree 100%!! That’s the goal. And I’m not there yet with consciousness. That’s why I used the phrase “annoying and unsatisfying” to describe my attempts to understand consciousness thus far. :-P
You should not be trusting textbook authors when they say that Theorem X is true
I’m not sure you quite followed what I wrote here.
I am saying that it’s possible to understand a math proof well enough to have 100% confidence—on solely one’s own authority—that the proof is mathematically correct, but still not understand it well enough to intuitively grok it. This typically happens when you can confirm that each step of the proof, taken on its own, is mathematically correct.
If you haven’t lived this experience, maybe imagine that I give you a proof of the Riemann hypothesis in the form of 500 pages of equations kinda like this, with no English-language prose or variable names whatsoever. Then you spend 6 months checking rigorously that every line follows from the previous line (or program a computer to do that for you). OK, you have now verified on solely your own authority that the Riemann hypothesis is true. But if I now ask you why it’s true, you can’t give any answer better than “It’s true because this 500-page argument shows it to be true”.
So, that’s a bit like where I’m at on consciousness. My “proof” is not 500 pages, it’s just 4 steps, but that’s still too much for me to hold the whole thing in my head and feel satisfied that I intuitively grok it.
1. I am strongly disinclined to believe (as I think David Chalmers has suggested) that there’s a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here and elsewhere.
2. If I believe (1), it seems to follow that I should endorse the claim “if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness”. The argument more specifically is: Either the behavior in which a philosopher writes a book about consciousness has some causal relation to the nature of consciousness itself (in which case, solving the meta-problem requires understanding the nature of consciousness), or it doesn’t (in which case, unconscious p-zombies should (bizarrely) be equally capable of writing philosophy books about consciousness).
3. I think that Attention Schema Theory offers a complete and correct answer to every aspect of the meta-problem of consciousness, at least every aspect that I can think of.
4. ...Therefore, I conclude that there is nothing to consciousness beyond the processes discussed in Attention Schema Theory.
I keep going through these steps and they all seem pretty solid, and so I feel somewhat obligated to accept the conclusion in step 4. But I find that conclusion highly unintuitive, I think for the same reason most people do—sorta like, why should any information processing feel like anything at all?
So, I need to either drag my intuitions into line with 1-4, or else crystallize my intuitions into a specific error in one of the steps 1-4. That’s where I’m at right now. I appreciate you and others in this comment thread pointing me to helpful and interesting resources! :-)
I am strongly disinclined to believe (as I think David Chalmers has suggested) that there’s a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here and elsewhere.
Again: Chalmers doesn’t think p-zombies are actually possible.
If I believe (1), it seems to follow that I should endorse the claim “if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness”.
That doesn’t follow from (1). It would follow from the claim that everyone is a zombie, because then there would be nothing to consciousness except false claims to be conscious. However, if you take the view that reports of consciousness are caused by consciousness per se, then consciousness per se exists and needs to be explained separately from reports and behaviour.
Hmm. I do take the view that reports of consciousness are (at least in part) caused by consciousness (whatever that is!). (Does anyone disagree with that?) I think a complete explanation of reports of consciousness must necessarily include any upstream cause of those reports. By analogy, I report that I am wearing a watch. If you want a “complete and correct explanation” of that report, you need to bring up the fact that I am in fact wearing a watch, and to describe what a watch is. Any explanation omitting the existence of my actual watch would not match the data. Thus, if reports of consciousness are partly caused by consciousness, then it will not be possible to correctly explain those reports unless, somewhere buried within the explanation of the report of consciousness, there is an explanation of consciousness itself. Do you see where I’m coming from?
If explaining reports of consciousness involves solving the hard problem, then no one has explained reports of consciousness, since no one has solved the HP.
Of course, some people (e.g. Dennett) think that reports of consciousness can be explained … and don’t accept that there is an HP.
And the HP isn’t about consciousness in general; it is about qualia or phenomenal consciousness, the very thing that illusionism denies.
Edit: the basic problem with what you are saying is that there are disagreements about what explanation is, and about what needs to be explained. The Dennett side holds that once you have explained all the objective phenomena objectively, you have explained everything. The Chalmers side thinks that leaves out the most important stuff.
Ha! Maybe!
Have you seen my response to shminux elsewhere in this thread?
https://www.lesswrong.com/posts/biKchmLrkatdBbiH8/book-review-rethinking-consciousness?commentId=D82Gs8cNhGArfwEAQ