But if someone finds the correct answer to a philosophical question, then they can… try to write essays explaining the answer? Which maybe will be slightly more effective than essays arguing for any number of different positions, because the answer is true?
I think this is a crux. To the extent that it’s a purely philosophical problem (a modeling choice, contingent mostly on opinions and consensus about “useful” rather than “true”), posts like this one make no sense. To the extent that it’s expressed as propositions that can be tested (even if not now, it could be described how it will resolve), it’s NOT purely philosophical.
This post appears to be about an empirical question: can a human brain be simulated with sufficient fidelity to be indistinguishable from a biological brain? It’s not clear whether OP is talking about an arbitrary new person, or whether they include the upload problem as part of the unlikelihood. It’s also not clear why anyone cares about this specific aspect of it, so maybe your comments are appropriate.
What about if it’s a philosophical problem that has empirical consequences? I.e., suppose answering the philosophical questions tells you enough about the brain that you know how hard it would be to simulate it on a digital computer. In this case, the answer can be tested—but not yet—and I still think you wouldn’t know if someone had the answer already.
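To make that concrete, here’s a toy back-of-envelope sketch in Python. Every number in it is a rough, commonly cited illustrative figure (neuron count, synapses per neuron, firing rate, cost per synaptic event), not anything established in this thread; the point is just that the level of fidelity the philosophical answer demands swings the compute estimate by many orders of magnitude.

```python
# Toy back-of-envelope: how the required level of description changes
# the cost of simulating a brain. All numbers are rough illustrative
# assumptions, not measurements.

NEURONS = 8.6e10               # approx. neurons in a human brain
SYNAPSES_PER_NEURON = 7e3      # approx. average synapse count
AVG_FIRING_HZ = 1.0            # assumed average spike rate
FLOP_PER_SYNAPTIC_EVENT = 10   # assumed cost to update one synapse

base = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ * FLOP_PER_SYNAPTIC_EVENT

# Hypothetical fidelity multipliers: what if the philosophy says
# consciousness lives at a finer grain than spikes and weights?
levels = {
    "spiking network (weights + spikes)": base,
    "detailed electrophysiology (assumed x1e4)": base * 1e4,
    "molecular-level detail (assumed x1e10)": base * 1e10,
}

for name, flops in levels.items():
    print(f"{name}: ~{flops:.0e} FLOP/s")
```

If spikes and weights are enough, this lands on the order of 10^15–10^16 FLOP/s; if subtler physics matters, it balloons past anything foreseeable. That’s the sense in which the philosophical answer would tell you how hard the simulation is.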
I’d call that an empirical problem that has philosophical consequences :)
And it’s still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near term. The OpenWorm project has fully(*) emulated a very small brain (https://openworm.org/ is fascinating in how far it’s come very recently), but nobody is anywhere near emulating a brain big enough to compare WRT complex behaviors from which consciousness can be inferred. (A toy sketch of what that kind of emulation amounts to follows the footnote.)
* “fully” is neither actually claimed nor tested. Only the currently-measurable neural weights and interactions are emulated. More subtle physical properties may well turn out to be important, but we can’t tell yet whether that’s so.
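For concreteness, here’s a deliberately tiny sketch of what “emulating only the measurable weights and interactions” means: a leaky rate model over a random stand-in connectome. This is not OpenWorm’s actual model (theirs is far richer); the weight matrix and all parameters here are made up for illustration.

```python
import numpy as np

# Toy illustration of emulating "neural weights and interactions":
# a leaky rate model over a hypothetical 302-neuron connectome.
# W is a random stand-in for measured synaptic weights.

rng = np.random.default_rng(0)
N = 302                           # C. elegans has 302 neurons
W = rng.normal(0.0, 0.1, (N, N))  # made-up "measured" weights
tau, dt = 10.0, 1.0               # time constant and step (ms), assumed

x = rng.normal(0.0, 0.5, N)       # random initial state as stand-in input
for _ in range(1000):
    drive = np.tanh(W @ x)            # interactions flow through the weights
    x += (dt / tau) * (-x + drive)    # leaky integration toward the drive

print("mean activation after 1s:", x.mean())
```

Note what’s absent: anything not captured in W (neuromodulation, gap-junction dynamics, extrasynaptic signaling, subtler physics) simply doesn’t exist in such a model, which is exactly the footnote’s caveat.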
I’d call that an empirical problem that has philosophical consequences :)
That’s arguable, but I think the key point is that if the reasoning used to solve the problem is philosophical, then a correct solution is quite unlikely to be recognized as such just because someone posted it somewhere, even if that somewhere is a peer-reviewed journal. That’s the claim I would make, anyway. (I think when it comes to consciousness, whatever philosophical solution you have will probably have empirical consequences in principle, but they’ll often not be practically measurable with current neurotech.)