A starting point is self-reports. If I truthfully say “I see my wristwatch”, then, somewhere in the chain of causation that eventually led to me uttering those words, there’s an actual watch, and photons are bouncing off it and entering my eyes then stimulating neurons etc.
So by the same token, if I say “your phenomenal consciousness is a salty yellow substance that smells like bananas and oozes out of your bellybutton”, and then you reply “no it isn’t!”, then let’s talk about how it is that you are so confident about that.
(I’m using “phenomenal consciousness” as an example, but ditto for “my sense of self / identity” or whatever else.)
So here, you uttered a reply (“No it isn’t!”). And we can assume that somewhere in the chain of causation is ‘phenomenal consciousness’ (whatever that is, if anything), and you were somehow introspecting upon it in order to get that information. You can’t know things in any other way—that’s the basic, hopefully-obvious point that I understand Eliezer was trying to make here.
Now, what’s a ‘chain of causation’, in the relevant sense? Let’s start with a passage from Age of Em:
The brain does not just happen to transform input signals into state changes and output signals; this transformation is the primary function of the brain, both to us and to the evolutionary processes that designed brains. The brain is designed to make this signal processing robust and efficient. Because of this, we expect the physical variables (technically, “degrees of freedom”) within the brain that encode signals and signal-relevant states, which transform these signals and states, and which transmit them elsewhere, to be overall rather physically isolated and disconnected from the other far more numerous unrelated physical degrees of freedom and processes in the brain. That is, changes in other aspects of the brain only rarely influence key brain parts that encode mental states and signals.
In other words, if your body temperature had been 0.1° colder, or if you were hanging upside down, or whatever, then the atoms in your brain would be configured differently in all kinds of ways … but you would still say “no it isn’t!” in response to my proposal that maybe your phenomenal consciousness is a salty yellow substance that oozes out of your bellybutton. And you would say it for the exact same reason.
This kind of thinking leads to the more general idea that the brain has inputs (e.g. photoreceptor cells), outputs (e.g. motoneurons … also, fun fact, the brain is a gland!), and algorithms connecting them. Those algorithms describe what Hanson’s “degrees of freedom” are doing from moment to moment, and why, and how. Whenever brains systematically do characteristically-brain-ish things—things like uttering grammatical sentences rather than moving mouth muscles randomly—then the explanation of that systematic pattern lies in the brain’s inputs, outputs, and/or algorithms. Yes, there’s randomness in what brains do, but whenever brains do characteristically-brainy-things reliably (e.g. disbelieve, and verbally deny, that your consciousness is a salty yellow substance that oozes out of your bellybutton), those things are evidently not the result of random fluctuations or whatever, but rather they follow from the properties of the algorithms and/or their inputs and outputs.
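The invariance argument above can be caricatured in code. This is purely my own toy illustration (not anything from the original discussion, and obviously not a model of a real brain): a function whose output depends only on its input signal, while "irrelevant" physical parameters perturb the substrate without changing the result.

```python
# Toy illustration only: an algorithm whose output is invariant to
# irrelevant physical degrees of freedom. The parameter names are
# hypothetical, chosen to echo the examples in the text.

def reply_to_claim(claim: str,
                   temperature_offset_c: float = 0.0,
                   upside_down: bool = False) -> str:
    """The reply depends only on the input signal (the claim heard);
    temperature and orientation jiggle the substrate, not the output."""
    if "salty yellow substance" in claim:
        return "no it isn't!"
    return "hmm, tell me more"

# Same input signal, same output, for the exact same (algorithmic) reason:
baseline = reply_to_claim("your consciousness is a salty yellow substance")
perturbed = reply_to_claim("your consciousness is a salty yellow substance",
                           temperature_offset_c=-0.1, upside_down=True)
assert baseline == perturbed == "no it isn't!"
```

The point of the sketch is just that the systematic output is explained by the mapping from inputs to outputs, not by the values of the perturbed variables.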
That doesn’t quite get us all the way to computationalist theories of consciousness or identity. Why not? Well, here are two ways I can think of to be non-computationalist within physicalism:
One could argue that consciousness & sense-of-identity etc. are just confused nonsense reifications of mental models with no referents at all, akin to “pure white” [because white is not pure, it’s a mix of wavelengths]. (Cf. “illusionism”.) I’m very sympathetic to this kind of view. And you could reasonably say “it’s not a computationalist theory of consciousness / identity, but rather a rejection of consciousness / identity altogether!” But I dunno, I think it’s still kinda computationalist in spirit, in the sense that one would presumably instead make the move of choosing to (re)define ‘consciousness’ and ‘sense-of-identity’ in such a way that those words point to things that actually exist at all (which is good), at the expense of being inconsistent with some of our intuitions about what those words are supposed to represent (which is bad). And when you make that move, those terms almost inevitably wind up pointing towards some aspect(s) of brain algorithms.
One could argue that we learn about consciousness & sense-of-identity via inputs to the brain algorithm rather than inherent properties of the algorithm itself—basically the idea that “I self-report about my phenomenal consciousness analogously to how I self-report about my wristwatch”, i.e. my brain perceives my consciousness & identity through some kind of sensory input channel, and maybe also my brain controls my consciousness & identity through some kind of motor or other output channel. If you believe something like that, then you could be physicalist but not a computationalist, I think. But I can’t think of any way to flesh out such a theory that’s remotely plausible.
I’m not a philosopher and am probably misusing technical terms in various ways. (If so, I’m open to corrections!)
(Note: I find these kinds of conversations very time-consuming, and they often don’t go anywhere, so I’ll read replies but am pretty unlikely to comment further. I hope this is helpful at all. I mostly didn’t read the previous conversation, so I’m sorry if I’m missing the point, answering the wrong question, etc.)
That’s fine. Your answer doesn’t quite address the core of my arguments and confusions, but it’s useful in its own right.
As I understood it, your objection was that computation is an abstraction/compression of the real thing, which is not the same as the real thing. (Is that correct?)
First, let’s check how important the “compression” part is. Imagine that someone emulated your brain and body without compression—on a huge computer the size of the Moon, faithfully, particle by particle, including whatever quantum effects are necessary (for the sake of the thought experiment, let’s assume that this is possible). Would such a simulation be you in some sense?
If we get that out of the way, I think the part about compression has been addressed. Lossy compression loses some information, but the argument was that consciousness is implemented in a robust way and can survive some noise. Too much noise would ruin it. On the other hand, individual neurons die every day, so it seems like a quantitative question: not whether the simulation would be you, but how much the simulation would be you. Maybe simulating 50% of the neurons could still be 99% you, though this is just speculation.
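That quantitative picture—robust to some noise, ruined by too much—can be sketched with a toy redundancy model. This is my own illustration; the majority-vote scheme and all the numbers are arbitrary assumptions, not a claim about how brains actually encode anything.

```python
import random

def decode(units: list[int]) -> int:
    """Majority vote over surviving units recovers the encoded bit."""
    return int(sum(units) > len(units) / 2)

def fidelity(drop_fraction: float, n_units: int = 1001,
             flip_prob: float = 0.4, trials: int = 200,
             seed: int = 0) -> float:
    """Fraction of trials in which a bit, redundantly encoded across
    n_units, is still decoded correctly after losing `drop_fraction`
    of the units and flipping each survivor with probability `flip_prob`."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        bit = rng.randint(0, 1)
        survivors = [bit for _ in range(n_units)
                     if rng.random() > drop_fraction]
        noisy = [u if rng.random() > flip_prob else 1 - u
                 for u in survivors]
        if noisy and decode(noisy) == bit:
            correct += 1
    return correct / trials

# Losing half the units barely matters; losing nearly all of them does.
print(fidelity(drop_fraction=0.5))    # close to 1.0
print(fidelity(drop_fraction=0.999))  # substantially degraded
```

With heavy redundancy, fidelity degrades gracefully rather than binarily—which is the “how much would the simulation be you” intuition in miniature.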