What about if it’s a philosophical problem that has empirical consequences? I.e., suppose answering the philosophical questions tells you enough about the brain that you know how hard it would be to simulate it on a digital computer. In this case, the answer can be tested—but not yet—and I still think you wouldn’t know if someone had the answer already.
I’d call that an empirical problem that has philosophical consequences :)
And it’s still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near term. They’ve fully(*) emulated some brains—https://openworm.org/ is fascinating in how far it’s come very recently. They’re nowhere near emulating a brain big enough to try to compare WRT complex behaviors from which consciousness can be inferred.
* “fully” is neither actually claimed nor tested. Only the currently-measurable neural weights and interactions are emulated. More subtle physical properties may well turn out to be important, but we can’t tell yet whether that’s so.
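For concreteness, here is a toy sketch of what “emulating the measurable weights and interactions” amounts to: a tiny rate-based network driven by a synaptic weight matrix. This is purely illustrative Python, not OpenWorm’s actual model; the network size, weights, and time constants are all made up (the real C. elegans connectome has ~302 neurons).

```python
import numpy as np

# Illustrative only: a minimal rate-based neural network of the kind
# connectome emulations build on. All numbers here are invented.
rng = np.random.default_rng(0)
n_neurons = 5                                    # hypothetical; C. elegans has ~302
W = rng.normal(0, 0.5, (n_neurons, n_neurons))   # assumed synaptic weight matrix
tau, dt = 10.0, 1.0                              # membrane time constant, timestep (ms)

rate = np.zeros(n_neurons)                       # firing rates (arbitrary units)
stimulus = np.zeros(n_neurons)
stimulus[0] = 1.0                                # drive one "sensory" neuron

for step in range(100):
    # Leaky integration: each neuron relaxes toward the (squashed)
    # weighted sum of its inputs plus any external stimulus.
    drive = np.tanh(W @ rate) + stimulus
    rate += (dt / tau) * (drive - rate)

print(rate.round(3))
```

The footnote’s caveat is visible even in this sketch: everything the simulation can ever do is determined by W and the update rule, so any physical property not captured in “weights and interactions” is simply absent from the emulation.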
> I’d call that an empirical problem that has philosophical consequences :)
That’s arguable, but I think the key point is that if the reasoning used to solve the problem is philosophical, then a correct solution is quite unlikely to be recognized as such just because someone posted it somewhere. Even if it’s in a peer-reviewed journal somewhere. That’s the claim I would make, anyway. (I think when it comes to consciousness, whatever philosophical solution you have will probably have empirical consequences in principle, but they’ll often not be practically measurable with current neurotech.)
Hmm, still not following, or maybe not agreeing. I think that if “the reasoning used to solve the problem is philosophical,” then a “correct solution” is not available. “Useful”, “consensus”, or “applicable in the current societal context” might be better evaluations of philosophical reasoning.
Couldn’t you imagine that you use philosophical reasoning to derive accurate facts about consciousness, which will come with insights about the biological/computational structure of consciousness in the human brain, which will then tell you things about which features are critical / how hard human brains are to simulate / etc.? This would be in the realm of “empirical predictions derived from philosophical reasoning that are theoretically testable but not practically testable”.
I think most solutions to consciousness should be like that, although I’d grant that it’s not strictly necessary. (Integrated Information Theory might be an example of a theory that’s difficult to test even in principle if it were true.)
> Couldn’t you imagine that you use philosophical reasoning to derive accurate facts about consciousness,
My imagination is pretty good, and while I can imagine that, it’s not about this universe or my experience in reasoning and prediction.
Can you give an example in another domain where philosophical reasoning about a topic led to empirical facts about that topic? Not meta-reasoning about science, but actual reasoning about a real thing?
> Can you give an example in another domain where philosophical reasoning about a topic led to empirical facts about that topic?
Yes—I think evolution is a pretty clean example. Darwin didn’t have any more facts than other biologists or philosophers, and he didn’t derive his theory by collecting facts; he was just doing better philosophy than everyone else. His philosophy led to a large set of empirical predictions, those predictions were validated, and that’s how and when the theory was broadly accepted. (Edit: I think that’s a pretty accurate description of what happened, maybe you could argue with some parts of it?)
I think we should expect that consciousness works out the same way—the problem has been solved, the solution comes with a large set of empirical predictions, it will be broadly accepted once the empirical evidence is overwhelming, and not before. (I would count camp #1 broadly as a ‘theory’ with the empirical prediction that no crisp divide between conscious and unconscious processing exists in the brain, and that consciousness has no elegant mathematical structure in any meaningful sense. I’d consider this empirically validated once all higher cognitive functions have been reverse-engineered as regular algorithms with no crisp/qualitative features separating conscious and unconscious cognition.)
(GPT-4o says that before evolution was proposed, the evolution of humans was considered a question of philosophy, so I think it’s quite analogous in that sense.)
> Edit: I think that’s a pretty accurate description of what happened, maybe you could argue with some parts of it?
I think one could argue with a lot of your description of how Charles Darwin developed his theory of evolution after the H.M.S. Beagle expedition and decades of compiling examples and gradually elaborating a theory before he finally finished Origin of Species.
Fair enough; the more accurate response would have been that evolution might be an example, depending on how the theory was derived (which I don’t know). Maybe it’s not actually an example.
The crux would be when exactly he got the idea; if the idea came first and the examples later, then it’s still largely analogous (imo); if the examples were causally upstream of the core idea, then not so much.
That’s a really good example, thank you! I see at least some of the analogous questions, in terms of physical measurements and variance in observations of behavioral and reported experiences. I’m not sure I see the analogy in terms of qualia and other unsure-even-how-to-detect phenomena.