Why do you say, “Besides, most people actually take the opposite approach: computation is the most “real” thing out there, and the universe—and any consciousnesses therein—arise from it.”
Euan McLean said at the top of his post that he was assuming a materialist perspective. If you believe there exists “a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness”, you believe you can define consciousness with a computation. In fact, any time you believe something can be explicitly defined and manipulated, you’ve invented a logic and a computer. So, most people who take the materialist perspective believe the material world comes from a sort of “computational universe”, e.g. Tegmark IV.
Soldier mindset.
Here’s a soldier mindset: you’re wrong, and I’m much more confident on this than you are. This person’s thinking is very loosey-goosey and someone needed to point it out. His posts are mostly fluff with paradoxes and questions that would be completely answerable (or at least interesting) if he deleted half the paragraphs and tried to pin down definitions before running rampant with them.
Also, I think I can point to specific things that you might consider soldier mindset. For example,
It’s such a loose idea, which makes it harder to look at it critically. I don’t really understand the point of this thought experiment, because if it wasn’t phrased in such a mysterious manner, it wouldn’t seem relevant to computational functionalism.
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away. I wasn’t giving him the answer, because his entire post is full of this same error: not defining his terms, running rampant with them, and then being shocked when things don’t make sense.
I reacted locally invalid (but didn’t downvote either comment) because I think “computation” as OP is using it is about the level of granularity/abstraction at which consciousness is located, and I think it’s logically coherent to believe both (1) materialism[1] and (2) consciousness is located at a fundamental/non-abstract level.
To make a very unrealistic analogy that I think nonetheless makes the point: suppose you believed that all ball-and-disk integrators were conscious. Do you automatically believe that consciousness can be defined with a computation? Not necessarily—you could have a theory according to which a digital computer computing the same integrals is not conscious (since, again, consciousness is about the fine-grained physical steps, rather than the abstracted computational steps, and a digital computer calculating ∫₀⁵ x² dx performs very different physical steps than a ball-and-disk integrator doing the same). The only way you now care about “computation” is if you think “computation” does refer to low-level physical steps. In that case, your implication is correct, but this isn’t what OP means, and OP did define their terms.
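To make the arithmetic concrete: ∫₀⁵ x² dx = 125/3 ≈ 41.67 no matter how it is physically computed. A minimal sketch of the digital version (my own illustration; the function name and step count are arbitrary):

```python
# Digitally approximate the integral a ball-and-disk integrator computes
# mechanically: the abstract result is identical, but the physical steps
# (floating-point additions vs. a ball rolling on a disk) are very different.

def riemann_integral(f, a, b, n=100_000):
    """Midpoint Riemann sum of f over [a, b] with n strips."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

approx = riemann_integral(lambda x: x * x, 0.0, 5.0)
exact = 5 ** 3 / 3  # antiderivative x^3/3 evaluated at the bounds
```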
If you believe there exists “a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness” you believe you can define consciousness with a computation.
I’m not arguing against the claim that you could “define consciousness with a computation”. I am arguing against the claim that “consciousness is computation”. These are distinct claims.
So, most people who take the materialist perspective believe the material world comes from a sort of “computational universe”, e.g. Tegmark IV.
Massive claim, nothing to back it up.
This person’s thinking is very loosey-goosey and someone needed to point it out.
when you define the terms properly (i.e. KL-divergence from the firings that would have happened)
I think I have a sense of what’s happening here. You don’t consider an argument precise enough unless I define things in more mathematical terms. I’ve been reading a lot more philosophy recently so I’m a lot more of a wordcell than I used to be. You are only comfortable with grounding everything in maths and computation, which is chill. But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I’d be excited to actually see this counterargument. Is it written down anywhere that you can link to?
But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
I find it useful to take an axiom of extensionality—if I cannot distinguish between two things in any way, I may as well consider them the same thing for all that it could affect me. Given that maths/computation/logic is the process of asserting things are the same or different, it seems to me to be tautologically true that maths and computation are the only symbols upon which useful discussion can be built.
I’m not arguing against the claim that you could “define consciousness with a computation”. I am arguing against the claim that “consciousness is computation”. These are distinct claims.
Maybe you want to include some undefinable aspect to consciousness. But anytime it functions differently, you can use that to modify your definition. I don’t think the adherents for computational functionalism, or even a computational universe, need to claim it encapsulates everything there could possibly be in the territory. Only that it encapsulates anything you can perceive in the territory.
There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system.
I believe this is your definition of real consciousness? This tells me properties about consciousness, but doesn’t really help me define consciousness. It’s intrinsic and objective, but what is it? For example, if I told you that the Sierpiński triangle is created by combining three copies of itself, you still wouldn’t know what it actually looks like. If you want to work with it, you need to know how the base case is defined. Once you have a definition, you’ve invented computational functionalism (for the Sierpiński triangle, for consciousness, for the universe at large).
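To make the base-case point concrete, here is a minimal sketch (my own illustration) where the recursive step plus an explicit base case fully pins down the picture:

```python
def sierpinski(depth):
    """Return the Sierpinski triangle as a list of fixed-width text rows.

    The recursive case "combines three copies of itself"; the base case
    (depth == 0) is what determines what the triangle actually looks like.
    """
    if depth == 0:
        return ["*"]  # base case: a single cell
    prev = sierpinski(depth - 1)
    w = len(prev[0])  # rows are all the same width
    pad = " " * ((w + 1) // 2)
    top = [pad + row + pad for row in prev]        # one copy on top, centred
    bottom = [row + " " + row for row in prev]     # two copies side by side
    return top + bottom

print("\n".join(sierpinski(3)))
```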
I think I have a sense of what’s happening here. You don’t consider an argument precise enough unless I define things in more mathematical terms.
Yes, exactly! To be precise, I don’t consider an argument useful unless it is defined through a constructive logic (e.g. mathematics through ZF set theory).
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I’d be excited to actually see this counterargument. Is it written down anywhere that you can link to?
Note: this assumes computational functionalism.
I haven’t seen it written down explicitly anywhere, but I’ve seen echoes of it here and there. Essentially, in RL, agents are defined via their policies. If you want to modify the agent to be good at a particular task, while still being pretty much the “same agent”, you add a KL-divergence anchor term:
Loss(π) = Subgame Loss(π) + λ · KL(π ∥ π_original).
This is known as piKL and was used for Diplomacy, where it’s important to act similarly to humans. When we think of consciousness or the mind, we can divide thoughts into two categories: the self-sustaining (memes/particles/holonomies), and noise (temperature). Temperature just makes things fuzzy, while memes prescribe specific actions. On a broad scale, maybe they tell your body to take specific actions, like jumping in front of a trolley. Let’s call these “macrostates”. Since many different memes can produce the same macrostate, let’s call the memes “microstates”. When comparing two consciousnesses, we want to see how well the microstates match up.
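For concreteness, here is a toy sketch of such an anchored loss (my own illustration: the helper names, λ value, and probabilities are made up, and real piKL regularises per-state policies rather than a single distribution):

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def anchored_loss(pi, pi_original, subgame_loss, lam=1.0):
    """Loss(pi) = SubgameLoss(pi) + lambda * KL(pi || pi_original).

    The KL anchor penalises drift from the original policy, so the
    fine-tuned agent stays recognisably the "same agent".
    """
    return subgame_loss(pi) + lam * kl(pi, pi_original)

# Toy example: a 3-action policy drifting away from its anchor.
pi_original = [0.5, 0.3, 0.2]
pi_new = [0.8, 0.1, 0.1]
loss = anchored_loss(pi_new, pi_original, lambda p: 1.0 - p[0], lam=0.5)
```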
The only way we can distinguish between microstates is by increasing the number of macrostates—maybe looking at neuron firings rather than body movements. So, using our axiom of extensionality, to determine how “different” two things are, the best we can do is count the difference in the number of microstates filling each macrostate. Actually, we could scale the microstate counts and temperature by some constant factor and end up with the same distribution, so it’s better to look at the difference in their logarithms. This is exactly the cross-entropy. The KL-divergence subtracts off the entropy of the anchor policy (the thing you’re comparing to), but that’s just a constant.
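That decomposition can be checked numerically. A minimal sketch (the distributions are made up; note the convention: with the anchor as the first KL argument, the subtracted entropy is the anchor’s and is indeed constant as the policy changes):

```python
import math

def entropy(p):
    """Shannon entropy H(p) in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Cross-entropy H(p, q) in nats."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    """KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

anchor = [0.5, 0.3, 0.2]    # made-up anchor distribution
policy = [0.6, 0.25, 0.15]  # made-up current distribution

# KL(anchor || policy) = H(anchor, policy) - H(anchor):
# cross-entropy minus a term that does not depend on the policy.
lhs = kl(anchor, policy)
rhs = cross_entropy(anchor, policy) - entropy(anchor)
```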
So, let’s apply this to the paradox. Suppose my brain is slowly being replaced by silicon, and I’m worried about losing consciousness. I acknowledge there are impossible-to-determine properties that I could be losing; maybe the gods do not let cyborgs into heaven. However, that isn’t useful to include in my definition of consciousness. All the useful properties can be observed, and I can measure how much they are changing with a KL-divergence.
When it comes to other people, I pretty much don’t care if they’re p-zombies, only how their actions affect me. So a very good definition for their consciousness is simply the equivalence class of programs that would produce the actions I see them taking. If they start acting radically differently, I would expect this class to have changed, i.e. their consciousness is different. I’ve heard some people care about the substrate their program runs on. “It wouldn’t be me if the program was run by a bunch of aliens waving yellow and blue flags around.” I think that’s fine. They’ve merely committed suicide in all the worlds where their substrate didn’t align with their preferences. They could similarly play the quantum lottery for a billion dollars, though this isn’t a great way to ensure your program’s proliferation.
[1] as OP defines the term; in my terminology, materialism means something different