I don’t know the answer, and I’m pretty sure nobody else does either.
I see similar statements all the time (no one has solved consciousness yet / no one knows whether LLMs/chickens/insects/fish are conscious / I can only speculate and I’m pretty sure this is true for everyone else / etc.) … and I don’t see how this confidence is justified. The idea seems to be that no one can have found the correct answers to these philosophical problems yet, because if they had, those answers would immediately make waves and soon everyone on LessWrong would know about them. But it’s like… do you really think this? People can’t even agree on whether consciousness is a well-defined thing or a lossy abstraction; do we really think that if someone had the right answers, they would automatically convince everyone of them?
There are other fields where those kinds of statements make sense, like physics. If someone finds the correct answer to a physics question, they can run an experiment and prove it. But if someone finds the correct answer to a philosophical question, then they can… try to write essays about it explaining the answer? Which maybe will be slightly more effective than essays arguing for any number of different positions because the answer is true?
I can imagine someone several hundred years ago having figured out, purely based on first-principles reasoning, that life is not a crisp category in the territory but just a lossy conceptual abstraction. I can imagine them being highly confident in this result because they’ve derived it for correct reasons and they’ve verified all the steps that got them there. And I can imagine someone else throwing their hands up and saying “I don’t know what mysterious force is behind the phenomenon of life, and I’m pretty sure no one else does, either”.
Which is all just to say—isn’t it much more likely that the problem has been solved, and there are people who are highly confident in the solution because they have verified all the steps that led them there, and they know with high confidence which features need to be replicated to preserve consciousness… but you just don’t know about it because “find the correct solution” and “convince people of a solution” are mostly independent problems, and there’s just no reason why the correct solution would organically spread?
(As I’ve mentioned, the “we know that no one knows” thing is something I see expressed all the time, usually just stated as a self-evident fact—so I’m equally arguing against everyone else who’s expressed it. This just happens to be the first time that I’ve decided to formulate my objection.)
But if someone finds the correct answer to a philosophical question, then they can… try to write essays about it explaining the answer? Which maybe will be slightly more effective than essays arguing for any number of different positions because the answer is true?
I think this is a crux. To the extent that it’s a purely philosophical problem (a modeling choice, contingent mostly on opinions and consensus about “useful” rather than “true”), posts like this one make no sense. To the extent that it’s expressed as propositions that can be tested (even if not now, one could describe how they would resolve), it’s NOT purely philosophical.
This post appears to be about an empirical question—can a human brain be simulated with sufficient fidelity to be indistinguishable from a biological brain? It’s not clear whether OP is talking about an arbitrary new person, or whether they include the upload problem as part of what makes it unlikely. It’s also not clear why anyone cares about this specific aspect of it, so maybe your comments are appropriate.
What about if it’s a philosophical problem that has empirical consequences? I.e., suppose answering the philosophical questions tells you enough about the brain that you know how hard it would be to simulate it on a digital computer. In this case, the answer can be tested—but not yet—and I still think you wouldn’t know if someone had the answer already.
I’d call that an empirical problem that has philosophical consequences :)
And it’s still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near term. They’ve fully(*) emulated some brains—https://openworm.org/ is fascinating in how far it’s come very recently. They’re nowhere near to emulating a brain big enough to try to compare WRT complex behaviors from which consciousness can be inferred.
* “fully” is not actually claimed nor tested. Only the currently-measurable neural weights and interactions are emulated. More subtle physical properties may well turn out to be important, but we can’t tell yet if that’s so.
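To make concrete what “only the currently-measurable neural weights and interactions” amounts to, here is a minimal sketch of that style of simulation. It is a toy illustration only, and every name and number in it is a made-up stand-in; OpenWorm’s actual models (neuron dynamics plus body physics) are far more detailed.

```python
import numpy as np

# Toy sketch: stepping a network forward from a fixed "connectome" of weights.
# Everything here is a stand-in; real emulation projects model ion channels,
# synapse dynamics, and body physics, not just a weight matrix.

rng = np.random.default_rng(0)
N = 302                                   # C. elegans has 302 neurons
W = rng.normal(0.0, 0.1, size=(N, N))     # stand-in for "measured" weights
x = rng.random(N)                         # initial activation of each neuron

def step(x, W, dt=0.01, tau=0.1):
    """One Euler step of a leaky rate model: activations decay toward the
    weighted input from the rest of the network."""
    return x + (dt / tau) * (np.tanh(W @ x) - x)

for _ in range(1000):                     # simulate 10 "seconds"
    x = step(x, W)
print(x[:5])                              # a few final activations
```

The footnote’s point is exactly that everything this sketch omits may or may not matter, and current measurements can’t tell us which.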
I’d call that an empirical problem that has philosophical consequences :)
That’s arguable, but I think the key point is that if the reasoning used to solve the problem is philosophical, then a correct solution is quite unlikely to be recognized as such just because someone posted it somewhere. Even if it’s in a peer-reviewed journal somewhere. That’s the claim I would make, anyway. (I think when it comes to consciousness, whatever philosophical solution you have will probably have empirical consequences in principle, but they’ll often not be practically measurable with current neurotech.)
Hmm, still not following, or maybe not agreeing. I think that “if the reasoning used to solve the problem is philosophical” then a “correct solution” is not available. “Useful”, “consensus”, or “applicable in the current societal context” might be better evaluations of philosophical reasoning.
Couldn’t you imagine that you use philosophical reasoning to derive accurate facts about consciousness, which will come with insights about the biological/computational structure of consciousness in the human brain, which will then tell you things about which features are critical / how hard human brains are to simulate / etc.? This would be in the realm of “empirical predictions derived from philosophical reasoning that are theoretically testable but not practically testable”.
I think most solutions to consciousness should be like that, although I’d grant that it’s not strictly necessary. (Integrated Information Theory might be an example of a theory that’s difficult to test even in principle if it were true.)
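To illustrate one reason testing is hard even in principle: as I understand it, IIT’s Φ is defined via a minimization over partitions of the system, and the space of partitions alone explodes combinatorially. A toy count follows (my own sketch, not IIT’s actual algorithm, which additionally needs the system’s full cause-effect structure):

```python
# Counting bipartitions only (full Phi searches a richer space of partitions
# and needs a complete transition model of the system on top of this).

def n_bipartitions(n: int) -> int:
    """Ways to split n elements into two non-empty, unordered groups."""
    return 2 ** (n - 1) - 1

for n in [5, 10, 100, 302]:   # 302 = neuron count of C. elegans
    print(f"{n:>4} elements: {n_bipartitions(n):.3e} candidate cuts")
```

Already at worm scale the candidate cuts outnumber atoms in the observable universe, which is one concrete sense in which the theory resists direct evaluation.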
Couldn’t you imagine that you use philosophical reasoning to derive accurate facts about consciousness,
My imagination is pretty good, and while I can imagine that, it’s not about this universe or my experience in reasoning and prediction.
Can you give an example in another domain where philosophical reasoning about a topic led to empirical facts about that topic? Not meta-reasoning about science, but actual reasoning about a real thing?
Can you give an example in another domain where philosophical reasoning about a topic led to empirical facts about that topic?
Yes—I think evolution is a pretty clean example. Darwin didn’t have any more facts than other biologists or philosophers, and he didn’t derive his theory by collecting facts; he was just doing better philosophy than everyone else. His philosophy led to a large set of empirical predictions, those predictions were validated, and that’s how and when the theory was broadly accepted. (Edit: I think that’s a pretty accurate description of what happened, maybe you could argue with some parts of it?)
I think we should expect that consciousness works out the same way—the problem has been solved, the solution comes with a large set of empirical predictions, it will be broadly accepted once the empirical evidence is overwhelming, and not before. (I would count camp #1 broadly as a ‘theory’ with the empirical prediction that no crisp divide between conscious and unconscious processing exists in the brain, and that consciousness has no elegant mathematical structure in any meaningful sense. I’d consider this empirically validated once all higher cognitive functions have been reverse-engineered as regular algorithms with no crisp/qualitative features separating conscious and unconscious cognition.)
(GPT-4o says that before evolution was proposed, the evolution of humans was considered a question of philosophy, so I think it’s quite analogous in that sense.)
Edit: I think that’s a pretty accurate description of what happened, maybe you could argue with some parts of it?
I think one could argue with a lot of your description: Charles Darwin developed his theory of evolution after the H.M.S. Beagle expedition and decades of compiling examples, gradually elaborating the theory before he finally finished On the Origin of Species.
Fair enough; the more accurate response would have been that evolution might be an example, depending on how the theory was derived (which I don’t know). Maybe it’s not actually an example.
The crux would be when exactly he got the idea; if the idea came first and the examples later, then it’s still largely analogous (imo); if the examples were causally upstream of the core idea, then not so much.
That’s a really good example, thank you! I see at least some of the analogous questions, in terms of physical measurements and variance in observations of behavioral and reported experiences. I’m not sure I see the analogy in terms of qualia and other unsure-even-how-to-detect phenomena.
I can imagine someone several hundred years ago having figured out, purely based on first-principles reasoning, that life is not a crisp category in the territory but just a lossy conceptual abstraction. I can imagine them being highly confident in this result because they’ve derived it for correct reasons and they’ve verified all the steps that got them there. And I can imagine someone else throwing their hands up and saying “I don’t know what mysterious force is behind the phenomenon of life, and I’m pretty sure no one else does, either”.
But is this a correct conclusion? Suppose I had the option, right now, to create a civilization of brains-in-vats inside a sandbox simulation similar to our reality, but with a clear, useful distinction between life and non-life. Like, suppose there is a “mob” class.
Then a person inside it who figured out that life and non-life are the same thing would be wrong in the local, useful sense, and correct only in a useless global sense (in the outer reality, everything is code / matter). The people inside the simulation who scientifically found the actual working thing that is life would laugh at them 1000 simulated years later and present them as an example of the presumptuousness of philosophers. And I would agree with them: it was a misapplication.
I see your point, but I don’t think this undermines the example? Like okay, the ‘life is not a crisp category’ claim has nuance to it, but we could imagine the hypothetical smart philosopher figuring out that as well. I.e., life is not a crisp category in the territory, but it is an abstraction that’s well-defined in most cases and actually a useful category because of this <+ any other nuance that’s appropriate>.
It’s true that the example here (figuring out that life isn’t a binary/well-defined thing) is not as practically relevant as figuring out stuff about consciousness. (Nonetheless I think the property of ‘being correct doesn’t entail being persuasive’ still holds.) I’m not sure if there is a good example of an insight that has been derived philosophically, is now widely accepted, and has clear practical benefits. (Free Will and implications for the morality of punishment are pretty useful imo, but they’re not universally accepted so not a real example, and also no clear empirical predictions.)
Well, it’s one thing to explore the possibility space and quite another to pinpoint where you are in it. Many people will confidently say they are at X or at Y, but all they do is propose some idea and cling to it irrationally. In aggregate, in hindsight, there will quite possibly be people who bonded to the right idea. But it’s all a mix of Gettier cases and true-negative cases.
And very often it’s not even “incorrect”, it’s “neither correct nor incorrect”. Often there is a frame-of-reference shift after which all the questions posed before it turn out to be completely meaningless. Like “what speed?”: as we now know, you need more context.
And then science pinpoints where you are by actually digging into the subject matter. It’s a rather sad kind of “diverse hypothesis generation” when it’s a lot easier to just go in blind.
This comes down to a HUGE unknown—what features of reality need to be replicated in another medium in order to result in sufficiently-close results
That’s at least two unknowns: what needs to be replicated in order to get the objective functioning, and what needs to be replicated to get the subjective awareness as well.
Which is all just to say—isn’t it much more likely that the problem has been solved, and there are people who are highly confident in the solution because they have verified all the steps that led them there, and they know with high confidence which features need to be replicated to preserve consciousness...
And how do they do that, in terms of the second problem? The final stage would need to be confirmation of subjective awareness. We don’t have instruments for that, and it’s no good just asking the sim, since a functional duplicate is likely to answer yes, even if it’s a zombie.
And that’s why it can be argued that consciousness is a uniquely difficult problem, beyond the “non-existent proof”.
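To spell out the point about asking the sim, here is a minimal sketch (all names are hypothetical): any test restricted to input/output behavior returns identical results for a system and for anything that reproduces its input/output mapping, by construction.

```python
# Toy sketch of the verification gap: a purely behavioral test cannot
# separate a system from a functional duplicate of it, by construction.

class Original:
    def answer(self, question: str) -> str:
        return "yes" if question == "Are you conscious?" else "I'm not sure"

class FunctionalDuplicate:
    """Reproduces the original's input/output mapping exactly, whatever
    may or may not be going on 'inside'."""
    def __init__(self, target):
        self._target = target

    def answer(self, question: str) -> str:
        return self._target.answer(question)

def behavioral_probe(system, questions):
    return [system.answer(q) for q in questions]

qs = ["Are you conscious?", "Do you see red?"]
assert behavioral_probe(Original(), qs) == behavioral_probe(
    FunctionalDuplicate(Original()), qs
)  # identical reports, so the report alone cannot carry the evidence
```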
because “find the correct solution” and “convince people of a solution” are mostly independent problems,
That’s not just a theoretical possibility. People, e.g. Dennett, keep claiming to have explained consciousness, and other people keep being unconvinced because they notice they have skipped the hard part.
“That’s just saying he hasn’t explained some invisible essence of consciousness, equivalent to élan vital”.
“Qualia aren’t invisible, they are the most obvious thing there is to the person that has them”.