I argue that computation is fuzzy; it’s a property of our map of a system rather than of the territory.
This is false. Everything exists in the territory to the extent that it can interact with us. While different models can output different answers as to which computation something runs, that doesn’t mean the computation isn’t real (or, even, that no computation is real). The computation is real in the sense that it influences our sense impressions (I can observe my computer running a specific computation, for example). Someone else, whose model doesn’t return “yes” to the question of whether my computer runs a particular computation, will then have to explain my reports of my sense impressions (why does this person claim their computer runs Windows, when I’m predicting it runs CP/M?), and they will either have to change their model or make systematically incorrect predictions about my utterances.
In this way, every computation that can be ascribed to a physical system is intersubjectively real, which is the only kind of reality there could, in principle, be.
(Philosophical zombies, by the way, don’t refer to functional isomorphs, but to physical duplicates, so even if you lost your consciousness after having your brain converted, it wouldn’t turn you into a philosophical zombie.)
Could any device ever run such simulations quickly enough (so as to keep up with the pace of the biological neurons) on a chip small enough (so as to fit in amongst the biological neurons)?
In principle, yes. The upper physical limit for the amount of computation per kg of material per second is incredibly high.
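For a sense of scale (my own back-of-the-envelope addition, not a figure from the original exchange): Bremermann’s limit, obtained by combining E = mc² with the quantum bound on how fast a system of a given energy can change state, caps one kilogram of matter at roughly

$$
\frac{mc^{2}}{h} \;\approx\; \frac{(1\,\mathrm{kg})\,(3\times 10^{8}\,\mathrm{m/s})^{2}}{6.6\times 10^{-34}\,\mathrm{J\,s}} \;\approx\; 1.4\times 10^{50}\ \text{operations per second.}
$$

Even if the limit for useful, error-corrected computation were many orders of magnitude lower, it would still leave an enormous margin above whatever a kilogram of biological tissue is plausibly doing.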
Following this to its logical conclusion: when it comes down to actually designing these chips, a designer may end up discovering that the only way to reproduce all of the relevant in/out behavior of a neuron is just to build a neuron!
This is false. It’s known that, in principle, any subset of the universe can be simulated on a classical computer to arbitrary precision.
The non-functionalist audience is also not obliged to trust the introspective reports at intermediate stages.
This introduces a bizarre disconnect between your beliefs about your qualia and the qualia themselves. It would be possible, for example, that you believe you’re in pain and act in all ways as if you’re in pain, but actually, you’re not in pain.
Whatever I denote by “qualia,” it certainly doesn’t have this extremely bizarre property.
But since we’re interested in the phenomenal texture of that experience, we’re left with the question: how can we assume that octopus pain and human pain have the same quality?
Because then, the functional properties of a quale and the quale itself would be synchronized only in Homo sapiens. Other species (like octopuses) might have qualia, but since they’re made of different matter, their qualia (the non-computationalist would argue) certainly have a different quality; so while those species functionally behave the same way, the quale itself is different. This would introduce a bizarre desynchronization between behavior and qualia, with the two just happening to match for Homo sapiens.
(This isn’t something I ever thought would be written about in net-upvoted posts on LessWrong, let alone end up in a sequence. Identity is necessarily in the pattern, and there is no reason to think the meat-parts of the pattern are necessary in addition to the computation-parts.)
The non-functionalist audience is also not obliged to trust the introspective reports at intermediate stages.
This introduces a bizarre disconnect between your beliefs about your qualia and the qualia themselves. It would be possible, for example, that you believe you’re in pain and act in all ways as if you’re in pain, but actually, you’re not in pain.
I think “belief” is overloaded here. We could distinguish two kinds of “believing you’re in pain” in this context:
1. Patterns in some algorithm (resulting from some noxious stimulus) that, combined with other dispositions, lead to the agent’s behavior, including uttering “I’m in pain.”
2. A first-person response of recognition of the subjective experience of pain.
I’d agree it’s totally bizarre (if not incoherent) for someone to (2)-believe they’re in pain yet be mistaken about that. But in order to resist the fading qualia argument along the quoted lines, I think we only need someone to (1)-believe they’re in pain yet be mistaken. Which doesn’t seem bizarre to me.
(And no, you don’t need to be an epiphenomenalist to buy this, I think. Quoting Block: “Consider two computationally identical computers, one that works via electronic mechanisms, the other that works via hydraulic mechanisms. (Suppose that the fluid in one does the same job that the electricity does in the other.) We are not entitled to infer from the causal efficacy of the fluid in the hydraulic machine that the electrical machine also has fluid. One could not conclude that the presence or absence of the fluid makes no difference, just because there is a functional equivalent that has no fluid.”)
I think “belief” is overloaded here. We could distinguish two kinds of “believing you’re in pain” in this context:
(1) isn’t a belief (unless accompanied by (2)).
But in order to resist the fading qualia argument along the quoted lines, I think we only need someone to (1)-believe they’re in pain yet be mistaken.
That’s not possible, because the belief_2 that one isn’t in pain has nowhere to be instantiated.
Even if the intermediate stages believed_2 that they’re not in pain and merely spoke and acted as if they were in pain (which isn’t possible), it would introduce a desynchronization between consciousness on one side and behavior and cognitive processes on the other. The fact that the person isn’t in pain would be hidden entirely from their cognitive processes; instead, they would reflect on their false belief_1 that they are, in fact, in pain.
That quale would then be shielded from them in this way, rendering its existence meaningless (since every time they tried to think about it, they would conclude that they don’t actually have it and that they actually have the opposite quale).
In fact, aren’t we lucky that our cognition and qualia are perfectly coupled? Just think about how many coincidences had to happen during evolution to get our brain exactly right.
(It would also rob qualia of their causal power. Now the quale of being in pain can’t cause the quale of feeling depressed, because that second quale is accessible to my cognitive processes, and so I would talk about being (and really be) depressed for no physical reason. Such a quale would be shielded not only from our cognition but also from our other qualia, thereby not existing in any meaningful sense.)
Whatever I call “qualia,” it doesn’t (even possibly) have these properties.
(Also, different qualia of the same person necessarily create a coherent whole, which wouldn’t be the case here.)
Quoting Block: “Consider two computationally identical computers, one that works via electronic mechanisms, the other that works via hydraulic mechanisms. (Suppose that the fluid in one does the same job that the electricity does in the other.) We are not entitled to infer from the causal efficacy of the fluid in the hydraulic machine that the electrical machine also has fluid. One could not conclude that the presence or absence of the fluid makes no difference, just because there is a functional equivalent that has no fluid.”
There is no analogue of “fluid” in the brain. There is only the pattern. (If there were, there would still be all the other reasons why it can’t work that way.)
Why not? Call it what you like, but it has all the properties relevant to your argument, because your concern was that the person would “act in all ways as if they’re in pain” but not actually be in pain. (Seems like you’d be begging the question in favor of functionalism if you claimed that the first-person recognition ((2)-belief) necessarily occurs whenever there’s something playing the functional role of a (1)-belief.)
That’s not possible, because the belief_2 that one isn’t in pain has nowhere to be instantiated.
I’m saying that no belief_2 exists in this scenario (where there is no pain) at all. Not that the person has a belief_2 that they aren’t in pain.
Even if the intermediate stages believed_2 that they’re not in pain and merely spoke and acted as if they were in pain (which isn’t possible), it would introduce a desynchronization between consciousness on one side and behavior and cognitive processes on the other.
I don’t find this compelling, because denying epiphenomenalism doesn’t require us to think that changing the first-person aspect of X always changes the third-person aspect of some Y that X causally influences. Only that this sometimes can happen. If we artificially intervene on the person’s brain so as to replace X with something else designed to have the same third-person effects on Y as the original, it doesn’t follow that the new X has the same first-person aspect! The whole reason why, given our actual brains, our beliefs reliably track our subjective experiences is that the subjective experience is naturally coupled with some third-person aspect that tends to cause such beliefs. This no longer holds when we artificially intervene on the system as hypothesized.
There is no analogue of “fluid” in the brain. There is only the pattern.
We probably disagree at a more basic level, then. I reject materialism. Subjective experiences are not just patterns.