Indeed, it’s precisely because these type errors happen explicitly within a mind that humans go around talking about a “hard problem of conscious experience”. The problem of selecting a phenomenological bridge hypothesis is a generalization of the problem of reducing human-style conscious experience to unconscious computation.
This is a great post, but it is not made better by this quote.
I think the value is increased by this section—could you make your criticism more precise?
Are you objecting to talking about the hard problem here? Or to something about the way I talked about it?
I guess I think it is distracting. Someone like Chalmers is unlikely to be convinced (since he thinks there’s something more to the problem than a reductive explanation), and the resulting argument is sort of orthogonal to the main thrust of the post (which is naturalized induction). I think it’s unwise to fight on multiple fronts at once.
A similar situation would be: someone writes a great post on decision theory and concludes with “btw deontologists are confused.”
“I guess I think it is distracting. Someone like Chalmers is unlikely to be convinced”

Convinced of what? The only thing the paragraph you cited mentions is that (a) the hard problem concerns bridge hypotheses, and (b) the hard problem arises for minds (and not, say, squirrels or digestion) and is noticed by minds because minds type their subprocesses differently. Are those especially partisan or extreme statements? What would Chalmers’ alternatives to (a) or (b) be?
I bring up the hard problem here because it’s genuinely relevant. It’s a real problem, and it really is hard. It’s not a confusion, or if it is then it’s not obvious how best to dissolve it. If the framework I provide above helps philosophers and psychological theorists like Chalmers come up with new and better theories for how human consciousness relates to neural computations, so much the better.