But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
I find it useful to take an axiom of extensionality—if I cannot distinguish between two things in any way, I may as well consider them the same thing for all that it could affect me. Given that maths/computation/logic is the process of asserting things are the same or different, it seems to me to be tautologically true that maths and computation are the only symbols upon which useful discussion can be built.
I’m not arguing against the claim that you could “define consciousness with a computation”. I am arguing against the claim that “consciousness is computation”. These are distinct claims.
Maybe you want to include some undefinable aspect to consciousness. But anytime it functions differently, you can use that to modify your definition. I don’t think the adherents of computational functionalism, or even of a computational universe, need to claim it encapsulates everything there could possibly be in the territory. Only that it encapsulates anything you can perceive in the territory.
There is an objective fact-of-the-matter whether a conscious experience is occurring, and what that experience is. It is not observer-dependent. It is not down to interpretation. It is an intrinsic property of a system.
I believe this is your definition of real consciousness? This tells me properties about consciousness, but doesn’t really help me define consciousness. It’s intrinsic and objective, but what is it? For example, if I told you that the Sierpinski triangle is created by combining three copies of itself, I still wouldn’t know what it actually looks like. If I want to work with it, I need to know how the base case is defined. Once you have a definition, you’ve invented computational functionalism (for the Sierpinski triangle, for consciousness, for the universe at large).
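To make the Sierpinski point concrete: the recursive rule alone doesn’t pin down the picture; only the base case does. A minimal sketch (the choice of base case, ASCII rendering, and depth are mine, purely illustrative):

```python
# The Sierpinski triangle defined recursively: each level is three copies of
# the previous level. Without the base case, the rule says nothing concrete.

def sierpinski(depth):
    """Return the triangle at a given recursion depth as a list of text rows."""
    if depth == 0:
        return ["*"]  # the base case; remove it and the definition computes nothing
    prev = sierpinski(depth - 1)
    width = len(prev[0])
    pad = " " * ((width + 1) // 2)
    top = [pad + row + pad for row in prev]        # one copy centered on top
    bottom = [row + " " + row for row in prev]     # two copies side by side
    return top + bottom

print("\n".join(sierpinski(3)))
```

Changing the base case (say, to a hollow triangle) changes what every level looks like, which is the sense in which the definition only becomes workable once the base case is fixed.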
I think I have a sense of what’s happening here. You don’t consider an argument precise enough unless I define things in more mathematical terms.
Yes, exactly! To be precise, I don’t consider an argument useful unless it is defined through a constructive logic (e.g. mathematics through ZF set theory).
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I’d be excited to actually see this counterargument. Is it written down anywhere that you can link to?
Note: this assumes computational functionalism.
I haven’t seen it written down explicitly anywhere, but I’ve seen echoes of it here and there. Essentially, in RL, agents are defined via their policies. If you want to modify the agent to be good at a particular task, while still being pretty much the “same agent”, you add a KL-divergence anchor term:
Loss(π) = Subgame Loss(π) + λ · KL(π, π_original).
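A minimal sketch of that anchor term, assuming policies are just probability distributions over a small discrete action set; the task loss, action names, and λ value are invented for illustration and are not the actual piKL training setup:

```python
import math

def kl(p, q):
    """KL(p || q) for distributions given as dicts mapping action -> probability."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in p if p[a] > 0)

def anchored_loss(pi, pi_original, task_loss, lam):
    """Loss(pi) = task loss + lambda * KL(pi, pi_original)."""
    return task_loss(pi) + lam * kl(pi, pi_original)

# Example: the task rewards putting all mass on action "a",
# but the anchor policy spreads mass evenly.
anchor = {"a": 0.5, "b": 0.5}
greedy = {"a": 0.99, "b": 0.01}
loss = anchored_loss(greedy, anchor, lambda pi: 1.0 - pi["a"], lam=0.1)
```

Raising λ pulls the optimum back toward the anchor policy, which is exactly the “be good at the subgame while staying the same agent” trade-off described above.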
This is known as piKL and was used for Diplomacy, where it’s important to act similarly to humans. When we think of consciousness or the mind, we can divide thoughts into two categories: the self-sustaining (memes/particles/holonomies), and noise (temperature). Temperature just makes things fuzzy, while memes prescribe specific actions. On a broad scale, maybe they tell your body to take specific actions, like jumping in front of a trolley. Let’s call these “macrostates”. Since many different memes can produce the same macrostate, let’s call the memes themselves “microstates”. When comparing two consciousnesses, we want to see how well the microstates match up.
The only way we can distinguish between microstates is by increasing the number of macrostates—maybe looking at neuron firings rather than body movements. So, using our axiom of extensionality, to determine how “different” two things are, the best we can do is count the difference in the number of microstates filling each macrostate. Actually, we could scale the microstate counts and temperature by some constant factor and end up with the same distribution, so it’s better to look at the difference in their logarithms. This is exactly the cross-entropy. The KL-divergence subtracts off the entropy of the anchor policy (the thing you’re comparing to), but that’s just a constant.
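The counting argument above can be sketched directly; the macrostate labels and counts here are made up for illustration:

```python
import math

# Each system is summarized by how many microstates fall into each macrostate.
# Normalizing the counts gives a distribution over macrostates; comparing
# log-counts gives the cross-entropy, and subtracting the anchor's own
# entropy leaves the KL-divergence.

def normalize(counts):
    total = sum(counts.values())
    return {m: c / total for m, c in counts.items()}

def cross_entropy(p, q):
    return -sum(p[m] * math.log(q[m]) for m in p if p[m] > 0)

def kl(p, q):
    return cross_entropy(p, q) - cross_entropy(p, p)  # H(p, q) - H(p)

mine   = normalize({"jump": 90, "stand": 10})    # my microstate counts
anchor = normalize({"jump": 70, "stand": 30})    # the policy I'm compared to
scaled = normalize({"jump": 900, "stand": 100})  # same counts scaled by 10

# Scaling every count by a constant factor leaves the divergence unchanged,
# which is why only the logarithmic differences matter.
assert abs(kl(mine, anchor) - kl(scaled, anchor)) < 1e-12
```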
So, let’s apply this to the paradox. Suppose my brain is slowly being replaced by silicon, and I’m worried about losing consciousness. I acknowledge there are impossible-to-determine properties that I could be losing; maybe the gods do not let cyborgs into heaven. However, that isn’t useful to include in my definition of consciousness. All the useful properties can be observed, and I can measure how much they are changing with a KL-divergence.
When it comes to other people, I pretty much don’t care if they’re p-zombies, only how their actions affect me. So a very good definition for their consciousness is simply the equivalence class of programs that would produce the actions I see them taking. If they start acting radically differently, I would expect this class to have changed, i.e. their consciousness is different. I’ve heard some people care about the substrate their program runs on. “It wouldn’t be me if the program was run by a bunch of aliens waving yellow and blue flags around.” I think that’s fine. They’ve merely committed suicide in all the worlds where their substrate didn’t align with their preferences. They could similarly play the quantum lottery for a billion dollars, though this isn’t a great way to ensure your program’s proliferation.