I’d call that an empirical problem that has philosophical consequences :)
And it’s still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near term. They’ve fully(*) emulated some brains—https://openworm.org/ is fascinating in how far it’s come very recently. They’re nowhere near emulating a brain big enough to compare WRT complex behaviors from which consciousness can be inferred.
* “fully” is not actually claimed nor tested. Only the currently-measurable neural weights and interactions are emulated. More subtle physical properties may well turn out to be important, but we can’t tell yet if that’s so.
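To make the footnote's caveat concrete, here is a toy sketch (not OpenWorm's actual model, and every number is invented) of what "emulating only the currently-measurable weights and interactions" amounts to: a network reduced to a weight matrix and an update rule, with any subtler physical properties simply absent from the model.

```python
import math

# Hypothetical 3-neuron "connectome": w[i][j] is the measured influence
# of neuron j on neuron i. Anything not captured in these numbers
# (neuromodulators, field effects, etc.) does not exist in the emulation.
WEIGHTS = [
    [0.0, 0.8, -0.4],
    [0.5, 0.0, 0.9],
    [-0.3, 0.7, 0.0],
]

def step(state, weights):
    """One discrete update: each neuron's new activity is a squashed
    weighted sum of every neuron's current activity."""
    return [math.tanh(sum(w * s for w, s in zip(row, state)))
            for row in weights]

def run(state, weights, steps=10):
    for _ in range(steps):
        state = step(state, weights)
    return state

print(run([1.0, 0.0, 0.0], WEIGHTS))
```

Whether dynamics like these, scaled up, suffice for the behaviors from which consciousness is inferred is exactly the open question: the sketch is faithful to the weights, and silent about everything else.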
I’d call that an empirical problem that has philosophical consequences :)
That’s arguable, but I think the key point is that if the reasoning used to solve the problem is philosophical, then a correct solution is quite unlikely to be recognized as such just because someone posted it somewhere, even in a peer-reviewed journal. That’s the claim I would make, anyway. (I think when it comes to consciousness, whatever philosophical solution you have will probably have empirical consequences in principle, but they’ll often not be practically measurable with current neurotech.)