I don’t like this writing style. It feels like you are saying a lot of things, without trying to demarcate boundaries for what you actually mean, and I also don’t see you criticizing your sentences before you put them down. For example, with these two paragraphs:
Surely there can’t be a single neuron replacement that turns you into a philosophical zombie? That would mean your consciousness was reliant on that single neuron, which seems implausible.
The other option is that your consciousness gradually fades over the course of the operations. But surely you would notice that your experience was gradually fading and report it? To not notice the fading would be a catastrophic failure of introspection.
If you’re aware that there is a map and a territory, you should never be dealing with absolutes like, “a single neuron...” You’re right that the only other option (I would say, the only option) is your consciousness gradually fades away, but what do you mean by that? It’s such a loose idea, which makes it harder to look at it critically. I don’t really understand the point of this thought experiment, because if it wasn’t phrased in such a mysterious manner, it wouldn’t seem relevant to computational functionalism.
I also don’t understand a single one of your arguments against computational functionalism, and that’s because I think you don’t understand them either. For example,
In the theoretical CF post, I give a more abstract argument against the CF classifier. I argue that computation is fuzzy, it’s a property of our map of a system rather than the territory. In contrast, given my realist assumptions above, phenomenal consciousness is not a fuzzy property of a map, it is the territory. So consciousness cannot be computation.
You can’t just claim that consciousness is “real” and computation is not, and thus they’re distinct. You haven’t even defined what “real” is. Besides, most people actually take the opposite approach: computation is the most “real” thing out there, and the universe—and any consciousnesses therein—arise from it. Finally, how is computation being fuzzy even related to this question? Consciousness can be the same way.
Why do you say, “Besides, most people actually take the opposite approach: computation is the most “real” thing out there, and the universe—and any consciousnesses therein—arise from it.”
Euan McLean said at the top of his post he was assuming a materialist perspective. If you believe there exists “a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness”, you believe you can define consciousness with a computation. In fact, anytime you believe something can be explicitly defined and manipulated, you’ve invented a logic and a computer. So, most people who take the materialist perspective believe the material world comes from a sort of “computational universe”, e.g. Tegmark IV.
Soldier mindset.
Here’s a soldier mindset: you’re wrong, and I’m much more confident on this than you are. This person’s thinking is very loosey-goosey and someone needed to point it out. His posts are mostly fluff with paradoxes and questions that would be completely answerable (or at least interesting) if he deleted half the paragraphs and tried to pin down definitions before running rampant with them.
Also, I think I can point to specific things that you might consider soldier mindset. For example,
It’s such a loose idea, which makes it harder to look at it critically. I don’t really understand the point of this thought experiment, because if it wasn’t phrased in such a mysterious manner, it wouldn’t seem relevant to computational functionalism.
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away. I wasn’t giving him the answer, because his entire post is full of this same error: not defining his terms, running rampant with them, and then being shocked when things don’t make sense.
I reacted locally invalid (but didn’t downvote either comment) because I think “computation” as OP is using it is about the level of granularity/abstraction at which consciousness is located, and I think it’s logically coherent to believe both (1) materialism (as OP defines the term; in my terminology, materialism means something different) and (2) consciousness is located at a fundamental/non-abstract level.
To make a very unrealistic analogy that I think nonetheless makes the point: suppose you believed that all ball-and-disk integrators were conscious. Do you automatically believe that consciousness can be defined with a computation? Not necessarily—you could have a theory according to which a digital computer computing the same integrals is not conscious (since, again, consciousness is about the fine-grained physical steps, rather than the abstracted computational steps, and a digital computer calculating ∫₀⁵ x² dx performs very different physical steps than a ball-and-disk integrator doing the same). The only way you now care about “computation” is if you think “computation” does refer to low-level physical steps. In that case, your implication is correct, but this isn’t what OP means, and OP did define their terms.
If you believe there exists “a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness” you believe you can define consciousness with a computation.
I’m not arguing against the claim that you could “define consciousness with a computation”. I am arguing against the claim that “consciousness is computation”. These are distinct claims.
So, most people who take the materialist perspective believe the material world comes from a sort of “computational universe”, e.g. Tegmark IV.
Massive claim, nothing to back it up.
This person’s thinking is very loosey-goosey and someone needed to point it out.
when you define the terms properly (i.e. KL-divergence from the firings that would have happened)
I think I have a sense of what’s happening here. You don’t consider an argument precise enough unless I define things in more mathematical terms. I’ve been reading a lot more philosophy recently so I’m a lot more of a wordcell than I used to be. You are only comfortable with grounding everything in maths and computation, which is chill. But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I’d be excited to actually see this counterargument. Is it written down anywhere that you can link to?
But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
I find it useful to take an axiom of extensionality—if I cannot distinguish between two things in any way, I may as well consider them the same thing for all that it could affect me. Given that maths/computation/logic is the process of asserting things are the same or different, it seems to me to be tautologically true that maths and computation are the only symbols upon which useful discussion can be built.
I’m not arguing against the claim that you could “define consciousness with a computation”. I am arguing against the claim that “consciousness is computation”. These are distinct claims.
Maybe you want to include some undefinable aspect of consciousness. But anytime it functions differently, you can use that to modify your definition. I don’t think the adherents of computational functionalism, or even of a computational universe, need to claim it encapsulates everything there could possibly be in the territory. Only that it encapsulates anything you can perceive in the territory.
There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system.
I believe this is your definition of real consciousness? This tells me properties about consciousness, but doesn’t really help me define consciousness. It’s intrinsic and objective, but what is it? For example, if you told me that the Sierpinski triangle is created by combining three copies of itself, I still wouldn’t know what it actually looks like. If I want to work with it, I need to know how the base case is defined. Once you have a definition, you’ve invented computational functionalism (for the Sierpinski triangle, for consciousness, for the universe at large).
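To illustrate why the base case carries the content: here is a toy recursive renderer for the Sierpinski triangle. The recursive rule (“three copies of itself”) only produces anything once the base case is pinned down. (The text-art representation and function name are my own choices, not anything from the discussion above.)

```python
def sierpinski(n):
    """Return the rows of a text Sierpinski triangle of depth n.

    The recursive case just arranges three copies of the previous
    depth; the base case is what actually gets drawn.
    """
    if n == 0:
        return ["*"]                      # the base case: a single cell
    s = sierpinski(n - 1)
    pad = " " * (2 ** (n - 1))            # center the top copy
    top = [pad + row + pad for row in s]
    bottom = [row + " " + row for row in s]  # two side-by-side copies
    return top + bottom

print("\n".join(sierpinski(2)))
```

Without the `n == 0` branch, the recursion never bottoms out and the “definition” computes nothing, which is the point being made about consciousness above.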
I think I have a sense of what’s happening here. You don’t consider an argument precise enough unless I define things in more mathematical terms.
Yes, exactly! To be precise, I don’t consider an argument useful unless it is defined through a constructive logic (e.g. mathematics through ZF set theory).
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I’d be excited to actually see this counterargument. Is it written down anywhere that you can link to?
Note: this assumes computational functionalism.
I haven’t seen it written down explicitly anywhere, but I’ve seen echoes of it here and there. Essentially, in RL, agents are defined via their policies. If you want to modify the agent to be good at a particular task, while still being pretty much the “same agent”, you add a KL-divergence anchor term:
Loss(π) = Subgame Loss(π) + λ · KL(π, π_original).
This is known as piKL and was used for Diplomacy, where it’s important to act similarly to humans. When we think of consciousness or the mind, we can divide thoughts into two categories: the self-sustaining (memes/particles/holonomies), and noise (temperature). Temperature just makes things fuzzy, while memes prescribe specific actions. On a broad scale, maybe they tell your body to take specific actions, like jumping in front of a trolley. Let’s call these “macrostates”. Since a lot of memes will produce the same macrostates, let’s call the memes themselves “microstates”. When comparing two consciousnesses, we want to see how well the microstates match up.
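To make the anchor term concrete, here is a minimal numpy sketch of a piKL-style objective. The 3-action policies and the `subgame_loss` here are hypothetical stand-ins, not the actual Diplomacy setup:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions (assumes full support)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def anchored_loss(policy, anchor, subgame_loss, lam=1.0):
    """piKL-style objective: task loss plus a KL anchor pulling the
    policy back toward the original agent's behavior."""
    return subgame_loss(policy) + lam * kl(policy, anchor)

# Toy example: a 3-action policy that has drifted from its anchor.
anchor = np.array([0.5, 0.3, 0.2])
policy = np.array([0.6, 0.3, 0.1])
task = lambda pi: -pi[0]   # hypothetical task loss: prefer action 0
print(anchored_loss(policy, anchor, task, lam=0.5))
```

Larger λ keeps the optimized policy closer to the anchor (the “same agent”); λ = 0 recovers pure task optimization.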
The only way we can distinguish between microstates is by increasing the number of macrostates—maybe looking at neuron firings rather than body movements. So, using our axiom of extensionality, to determine how “different” two things are, the best we can do is count the difference in the number of microstates filling each macrostate. Actually, we could scale the microstate counts and temperature by some constant factor and end up with the same distribution, so it’s better to look at the difference in their logarithms. This is exactly the cross-entropy. The KL-divergence subtracts off the entropy of the anchor policy (the thing you’re comparing to), but that’s just a constant.
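A toy sketch of this comparison, assuming all we can observe is a microstate count per macrostate for each of the two systems (the function name and the counts are made up for illustration):

```python
import numpy as np

def divergence(counts_a, counts_b):
    """Compare two systems by their distributions of microstates over
    shared macrostates: normalize the counts to probabilities, then
    take cross-entropy minus entropy, i.e. KL(a || b)."""
    p = np.asarray(counts_a, float); p = p / p.sum()
    q = np.asarray(counts_b, float); q = q / q.sum()
    cross_entropy = -np.sum(p * np.log(q))   # compares log-probabilities,
    entropy = -np.sum(p * np.log(p))         # so constant rescaling of the
    return float(cross_entropy - entropy)    # raw counts cancels out

# Two systems observed over the same three macrostates:
print(divergence([40, 30, 30], [40, 30, 30]))  # → 0.0
```

Note that scaling either count vector by a constant leaves the result unchanged, matching the argument above for working with logarithms.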
So, let’s apply this to the paradox. Suppose my brain is slowly being replaced by silicon, and I’m worried about losing consciousness. I acknowledge there are impossible-to-determine properties that I could be losing; maybe the gods do not let cyborgs into heaven. However, that isn’t useful to include in my definition of consciousness. All the useful properties can be observed, and I can measure how much they are changing with a KL-divergence.
When it comes to other people, I pretty much don’t care if they’re p-zombies, only how their actions affect me. So a very good definition for their consciousness is simply the equivalence class of programs that would produce the actions I see them taking. If they start acting radically differently, I would expect this class to have changed, i.e. their consciousness is different. I’ve heard some people care about the substrate their program runs on. “It wouldn’t be me if the program was run by a bunch of aliens waving yellow and blue flags around.” I think that’s fine. They’ve merely committed suicide in all the worlds where their substrate didn’t align with their preferences. They could similarly play the quantum lottery for a billion dollars, though this isn’t a great way to ensure your program’s proliferation.
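The “equivalence class of programs” idea can be sketched as behavioral equivalence over observed actions. A minimal toy version (the agents and observation set are hypothetical):

```python
def behaviorally_equal(prog_a, prog_b, observations):
    """Toy version of the equivalence-class definition: two programs
    count as 'the same consciousness' iff they produce the same action
    in every situation we have actually observed."""
    return all(prog_a(x) == prog_b(x) for x in observations)

# Hypothetical agents: same observed behavior, different internals.
alice = lambda x: x % 3
alien_flags = lambda x: (x + 3) % 3   # different "substrate", same actions
print(behaviorally_equal(alice, alien_flags, range(10)))  # → True
```

On this definition the substrate drops out by construction; anything unobservable never enters the equivalence relation, which is exactly the extensionality move made earlier in the thread.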
I don’t really understand the point of this thought experiment, because if it wasn’t phrased in such a mysterious manner, it wouldn’t seem relevant to computational functionalism.
I’m sorry my summary of the thought experiment wasn’t precise enough for you. You’re welcome to read Chalmers’ original paper for more details, which I link to at the top of that section.
I also don’t understand a single one of your arguments against computational functionalism
I gave very brief recaps of my arguments from the other posts in the sequence here so I can connect those arguments to more general CF (rather than theoretical & practical CF). Sorry if they’re too fast. You are welcome to go into the previous posts I link to for more details.
and that’s because I think you don’t understand them either.
What am I supposed to do with this? The one effect this has is to piss me off and make me less interested in engaging with anything you’ve said.
You can’t just claim that consciousness is “real”
This is an assumption I state at the top of this very article.
and computation is not
I don’t “just claim” this, this is what I argue in the theoretical CF post I link to.
You haven’t even defined what “real” is.
I define this when I state my “realism about phenomenal consciousness” assumption, to the precision I judge is necessary for this discussion.
most people actually take the opposite approch: computation is the most “real” thing out there, and the universe—and any consciouses therein—arise from it
Big claims. Nothing to back it up. Not sure why you expect me to update on this.
how is computation being fuzzy even related to this question? Consciousness can be the same way.
This is all covered in the theoretical CF post I link to.
and that’s because I think you don’t understand them either.
What am I supposed to do with this? The one effect this has is to piss me off and make me less interested in engaging with anything you’ve said.
Why is that the one effect? Jordan Peterson says that the one answer he routinely gives to Christians and atheists, which pisses them off, is, “what do you mean by that?” In an interview with Alex O’Connor he says,
So people will say, well, do you believe that happened literally, historically? It’s like, well, yes, I believe that it’s okay. Okay. What do you mean by that? That you believe that exactly. Yeah. So, so you tell me you’re there in the way that you describe it.
Right, right. What do you see? What are the fish doing exactly? And the answer is you don’t know. You have no notion about it at all. You have no theory about it. Sure. You have no theory about it. So your belief is, what’s your belief exactly?
(25:19–25:36, The Jordan B. Peterson Podcast − 451. Navigating Belief, Skepticism, and the Afterlife w/ Alex O’Connor)
Sure, this pisses off a lot of people, but it also gets some people thinking about what they actually mean. So, there’s your answer: you’re supposed to go back and figure out what you mean. A side benefit is if it pisses you off, maybe I won’t see your writing anymore. I’m pretty annoyed at how the quality of posts has gone down on this website in the past few years.