I do have a lot of uncertainty about many philosophical questions. Many people seem to have intuitions that are too strong, or that they trust too much, and they don’t seem to consider that the kinds of philosophical arguments we currently have are far from watertight, and that there are lots of possible philosophical ideas/positions/arguments that have yet to be explored by anyone, which might eventually overturn their current beliefs. In this case, I also have two specific reasons to be skeptical about Brian’s position on consciousness.
I think for something to count as a solution to the problem of consciousness, it should at minimum have a (perhaps formal) language for describing first-person subjective experiences or qualia, and some algorithm or method of predicting or explaining those experiences from a third-person description of a physical system, or at least some sort of plan for how to eventually get something like that, or an explanation of why that will never be possible. Brian’s anti-realism doesn’t have this, so it seems unsatisfactory to me.
Relatedly, I think a solution to the problem of morality/axiology should include an explanation of why certain kinds of subjective experiences are good or valuable and others are bad or negatively valuable (and a way to generalize this to arbitrary kinds of minds and experiences), or an argument for why this is impossible. Brian’s moral anti-realism, which goes along with his consciousness anti-realism, also seems unsatisfactory in this regard.