But on either interpretation, how puzzling is the view that the activity of these little material things is somehow responsible for conscious qualia? This is where a lot of critical thinking has led many people to say things like “consciousness must be what an algorithm implemented on a physical machine feels like from the ‘inside.’” That is a decent hypothesis, but not an explanatory one at all. The emergence of consciousness and qualia is just something that materialists have to accept as a spooky phenomenon, which is not a very satisfying solution to the hard problem of consciousness.
“lack of a satisfying explanatory solution” does not imply low likelihood if you think that the explanatory solution exists but is computationally hard to find (which in fact seems pretty reasonable).
Like, the same argument structure could be used to argue that computers are extremely low likelihood: how puzzling is the view that electrons moving around are somehow responsible for proving mathematical theorems?
With laptops, we of course have a good explanation of how computation arises from electrons, but that’s because we designed them—it would probably be much harder if we had no knowledge of laptops or even electricity and then were handed a laptop and asked to explain how it could reliably produce true mathematical theorems. This seems pretty analogous to the situation we find ourselves in with consciousness.
Thanks for the comment. I’m not 100% sold on the computer analogy. I think answering the hard problem of consciousness is significantly different from understanding how complex information-processing systems like computers work. Any definition or framing of consciousness in informational or computational terms may allow it to be studied in those terms, in the same way that computers can be understood by systems-level theoretical reasoning built on abstraction. However, I don’t think this is what it means to solve the hard problem of consciousness. It seems more like solving the problem with a definition rather than an explanation.
I wonder how much the differing perspectives here are due to differing intuitions. But in any case, I hope this makes my thinking clearer.
OTOH, you should keep lowering the probability of ever finding a satisfactory explanation the longer you keep failing to find one.
This update seems like it would be extraordinarily small, given our poor understanding of the brain, and the relatively small amount of concerted effort that goes into understanding consciousness.
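To put a rough number on why that update would be small, here is a minimal Bayesian sketch. The prior of 0.90 and the 2%-per-decade discovery chance are made-up illustrative parameters, not claims about the actual history of consciousness research:

```python
# Toy Bayesian update: how much should repeated failure to find an
# explanation lower our credence that an explanation exists at all?
# All numbers are illustrative assumptions, not empirical claims.

prior = 0.90  # assumed prior credence that a satisfying explanation exists
q = 0.02      # assumed per-decade chance of finding it, given that it exists

posterior = prior
for decade in range(1, 6):
    # Likelihoods of observing "no discovery this decade":
    #   P(no discovery | explanation exists)    = 1 - q
    #   P(no discovery | no explanation exists) = 1
    p_exists_and_fail = posterior * (1 - q)
    p_not_exists = 1 - posterior
    posterior = p_exists_and_fail / (p_exists_and_fail + p_not_exists)
    print(f"after decade {decade}: P(explanation exists) ~= {posterior:.3f}")

# Five decades of failure move the posterior only from 0.90 to about 0.89,
# so with a low per-period discovery chance the update is indeed tiny.
```

On these (assumed) numbers, the posterior that an explanation exists barely moves; the update only becomes large if you think the per-period chance of success, given that an explanation exists, is high.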
We don’t have a uniformly poor understanding: we understand some aspects of mentality much better than others.