In some recent comments over at the Effective Altruism Forum you talk about anti-realism about consciousness, saying in particular “the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian”. I am wondering if you could elaborate more on this. Does the case for anti-realism about consciousness seem weak because of your general uncertainty on questions like this? Or is it more that you find the case for anti-realism specifically weak, and you hold some contrary position? I am especially curious since I was under the impression that many people on LessWrong hold essentially similar views.
I do have a lot of uncertainty about many philosophical questions. Many people seem to have intuitions that are too strong, or that they trust too much. They don’t seem to consider that the kinds of philosophical arguments we currently have are far from watertight, and that many possible philosophical ideas, positions, and arguments have yet to be explored by anyone, which might eventually overturn their current beliefs. In this case, I also have two specific reasons to be skeptical of Brian’s position on consciousness.
I think for something to count as a solution to the problem of consciousness, it should at minimum have a (perhaps formal) language for describing first-person subjective experiences or qualia, and some algorithm or method of predicting or explaining those experiences from a third-person description of a physical system, or at least some sort of plan for how to eventually get something like that, or an explanation of why that will never be possible. Brian’s anti-realism doesn’t have this, so it seems unsatisfactory to me.
Relatedly, I think a solution to the problem of morality/axiology should include an explanation of why certain kinds of subjective experiences are good or valuable and others are bad or negatively valuable (and a way to generalize this to arbitrary kinds of minds and experiences), or an argument for why this is impossible. Brian’s moral anti-realism, which goes along with his consciousness anti-realism, also seems unsatisfactory in this regard.