I’m arguing that if you think the mind can be reduced to algorithms implemented on a computational substrate, then it is a logical consequence of our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine to its parts. After all, the algorithms themselves are also reducible to stepwise axiomatic logical operations, implemented as transistors or interpretable machine code.
The only way to preserve the common intuition that “it takes (simulation of) a brain or equivalent to produce a mind” is to posit some form of dualism. I don’t think it is silly to ask “where is the microsoft-word-ness?” of a subset of a computer—you can, for example, point to the regions of memory and disk where the spellchecker is located and say “this is the part that matches user input against tables of linguistic data,” just as we point to regions of the brain and say “these are your language processing centers.”
The experience of having a single, unified me directing my conscious experience is an illusion—it’s what the integration process feels like from the inside, but it does not correspond to reality (we have psychological data to back this up!). I am in fact a society of agents, each simpler but also relying on an entire bureaucracy of other agents in an enormous distributed structure. Eventually, though, things reduce down to individual circuits, then ultimately to the level of individual cell receptors and chemical pathways. At no point along the way is there a clear division where it is obvious that conscious experience ends and what follows is merely mechanical, electrical, and chemical processes. In fact, as I’ve tried to point out, the divisions between higher-level abstractions and their messy implementations are in the map, not the territory.
To assert that “this level of algorithmic complexity is a mind, and below that is mere machinery” is a retreat to dualism, though you may not yet see it that way. What you are asserting is that there is some ontologically basic mind-ness which spontaneously emerges when an algorithm reaches a certain level of complexity, but which is not the aggregation of smaller phenomena.
I think we have really different models of how algorithms and their sub-components work.
it is a logical consequence of our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine to its parts.
Suppose I have a computation that produces the digits of pi. It has subroutines which multiply and add. Is it an accurate description of these subroutines that they have a scaled-down property of computes-pi-ness? I think this is not a useful way to understand things. Subroutines do not have a scaled-down percentage of the properties of their containing algorithm; they do a discrete chunk of its work. It’s just madness to say that, e.g., your language processing center is 57% conscious.
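To make the pi example concrete, here is a toy sketch in Python (my own illustration; the particular series and the names are invented for demonstration). The adder and multiplier each do a discrete chunk of the work, and neither has any fraction of computes-pi-ness:

```python
# Toy pi calculator built only from an "adder" and a "multiplier" subroutine.
# Neither subroutine has a scaled-down "computes-pi-ness"; each does a discrete
# chunk of the work, yet their composition computes pi outright.

def add(a, b):
    """Adder subroutine: it has add-ness and nothing more."""
    return a + b

def multiply(a, b):
    """Multiplier subroutine: it has multiply-ness and nothing more."""
    return a * b

def compute_pi(terms=1_000_000):
    """Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    total, sign = 0.0, 1.0
    for k in range(terms):
        denominator = add(multiply(2.0, k), 1.0)           # 2k + 1
        total = add(total, multiply(sign, 1.0 / denominator))
        sign = multiply(sign, -1.0)
    return multiply(4.0, total)

print(compute_pi())  # converges slowly toward 3.14159...
```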
The experience of having a single, unified me directing my conscious experience is an illusion...
I agree with all this. Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you’ll get to an algorithm that is conscious while none of its subroutines are.
If this makes me a dualist then I’m a dualist, but that doesn’t feel right. I mean, the only way you can really explain a thing is to show how it arises from something that’s not like it in the first place, right?
I think we have different models of what consciousness is. In your pi example, the multiplier has multiply-ness, and the adder has add-ness, and when combined together in a certain way you get computes-pi-ness. Likewise our minds have many, many, many different components which—somehow, someway—each have a small experiential quale, and when you sum these together you get the human condition.
Through brain-damage studies, for example, we have descriptions of what it feels like to live without certain mental capabilities. I think you would agree with this, but for others reading, take this thought experiment: imagine that I were to systematically shut down portions of your brain, or, in simulation, delete regions of your memory space. For the purpose of the argument I do it slowly over time, in relatively small amounts, cleaning up dangling references so the whole system doesn’t shut down. Certainly as time goes by your mental functionality is reduced, and you stop being capable of having experiences you once took for granted. But at what point, precisely, do you stop experiencing qualia of any form at all? When you’re down to just a billion neurons? A million? A thousand? When you’re down to just one processing region? Is one tiny algorithm on a single circuit enough?
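Purely as an illustration of the procedure (a toy sketch of my own, not a model of any real brain), you can picture it as repeatedly deleting small batches of regions from a simulated network while patching the dangling references so the remainder keeps running:

```python
import random

# Toy sketch of the ablation thought experiment: a "brain" as a graph of
# regions, each referencing a few others. We delete small batches of regions
# over time and clean up dangling references so the rest keeps running.

brain = {f"region_{i}": {f"region_{j}" for j in random.sample(range(100), 5)}
         for i in range(100)}

def ablate(graph, batch_size=5):
    """Remove a few regions, then scrub any references that point at them."""
    doomed = set(random.sample(sorted(graph), min(batch_size, len(graph))))
    for region in doomed:
        del graph[region]
    for targets in graph.values():
        targets -= doomed  # clean up dangling references
    return graph

step = 0
while len(brain) > 1:
    ablate(brain)
    step += 1
    # At which step, precisely, would "experience" stop? Nothing in the
    # procedure marks a boundary, which is the point of the thought experiment.
```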
Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you’ll get to an algorithm that is conscious while none of its subroutines are.
What is the minimal conscious system? It’s easy and perhaps accurate to say “I don’t know.” After all, neither one of us knows enough neural and cognitive science to make this call, I assume. But we should be able to answer this question: “if presented with criteria for a minimally conscious system, what would convince me of their validity?”
If this makes me a dualist then I’m a dualist, but that doesn’t feel right. I mean, the only way you can really explain a thing is to show how it arises from something that’s not like it in the first place, right?
Eliezer’s post on reductionism is relevant here. In a reductionist universe, anything and everything is fully defined by its constituent elements—no more, no less. There’s a popular phrase that has no place in reductionist theories: “the whole is greater than the sum of its parts.” Typically what this actually means is that you failed to count the “parts” correctly: a parts list should also include spatial configurations and initial conditions, which together imply the dynamic behaviors as well. For example, a pulley is more than a hunk of metal and some rope, but it is fully defined if you specify how the metal is shaped, how the rope is threaded through it and fixed to objects with knots, how the whole contraption is oriented with respect to gravity, and the procedure for applying rope-pulling force. Combined with the fundamental laws of physics, this is a fully reductive explanation of a rope-pulley system which is the sum of its fully-defined parts.
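To sketch what counting the parts correctly buys you (a toy example of my own, with made-up numbers and an idealized frictionless pulley), the behavior of the system falls straight out of a parts list that includes the configuration:

```python
# Toy "parts list" for a pulley system. The hunk of metal and the rope alone
# don't determine behavior, but parts plus configuration do. Ideal pulley
# (frictionless, massless rope) assumed; all numbers are made up.

G = 9.81  # gravitational acceleration, m/s^2

parts = {
    "load_mass_kg": 40.0,       # the object hanging from the hook
    "supporting_strands": 4,    # how the rope is threaded: a block and tackle
    "orientation": "vertical",  # how the contraption sits relative to gravity
}

def required_pull_force(parts):
    """Force needed at the free end of the rope, implied by the parts list."""
    weight = parts["load_mass_kg"] * G  # 392.4 N of load
    return weight / parts["supporting_strands"]

print(required_pull_force(parts))  # 98.1 N, versus 392.4 N lifting unaided
```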
And so it goes with consciousness. Unless we are comfortable with the mysterious answers provided by dualism—or empirical evidence like confirmation of psychic phenomena compels us to go there—we must demand an explanation that accounts for consciousness fully as the aggregation of smaller processes.
When I look at explanations of the workings of the brain, starting with the highest-level psychological theories and neural structure, and working my way all the way down the abstraction hierarchy to individual neural synapses and biochemical pathways, nowhere along the way do I see an obvious place to stop and say “here is where consciousness begins!” Likewise, I can start from the level of mere atoms and work my way up to the full neural architecture without finding any step that adds something which could be consciousness but which isn’t fundamentally like the levels below it. But when you get to the highest level, you’ve described the full brain without finding consciousness anywhere along the way.
I can see how this leads otherwise intelligent philosophers like David Chalmers to epiphenomenalism. But I’m not going to go down that path, because the whole situation is the result of mental confusion.
The Standard Rationalist Answer is that mental processes are information patterns, nothing more, and that consciousness is an illusion, end of story. But that still leaves me confused! It’s not like free will, for example, where because of the mind projection fallacy I think I have free will due to how a deterministic decision-theory algorithm feels from the inside. I get that. No, the answer of “that subjective experience of consciousness isn’t real, get over it” is unsatisfactory, because if I don’t have consciousness, how am I experiencing thinking in the first place? Cogito ergo sum.
However, there is a way out. I went looking for a source of consciousness because I, like nearly every other philosopher, assumed that there was something special and unique which set brains apart as having minds which other, more mundane objects—like rocks and staplers—do not possess. That seems so obviously true, but honestly I have no real justification for that belief. So let’s try negating it. What is possible if we don’t exclude mundane things from having minds too?
Well, what does it feel like to be a quark and a lepton exchanging a photon? I’m not really sure, but let’s call that approximately the minimum possible “experience”, and say that for the duration of that continuous interaction over time, the two particles share a “mind”. Arrange a number of these objects together and you get an atom, which itself also has a shared/merged experience so long as the particles remain in bonded interaction. Arrange a lot of atoms together and you get an electrical transistor. Now we’re finally starting to get to a level where I have some idea of what the “shared experience of being a transistor” would be (rather boring, by my standards), and more importantly, it’s clear how that experience is aggregated together from its constituent parts. From here, computing theory takes over as more complex interdependent systems are constructed, each merging experiences together into a shared hive mind, until you reach the level of the human being or AI.
Are you at least following what I’m saying, even if you don’t agree?
That was a very long comment (thank you for your effort) and I don’t think I have the energy to exhaustively go through it.
I believe I follow what you’re saying. It doesn’t make much sense to me, so maybe that belief is false.
I think the fact that you can start with a brain, which is presumably conscious, and zoom in all the way without finding a consciousness boundary, and then start with a quark, which is presumably not conscious, and zoom all the way out to the entire brain, also without finding a consciousness boundary—I think this means that the best we can do at the moment is set upper and lower bounds.
A minimally conscious system—say, something that can convince me that it thinks it is conscious. “echo ‘I’m conscious!’” doesn’t quite cut it; things that recognize themselves in mirrors probably do, and I could go either way on the stuff in between.
I think your reductionism is a little misapplied. My pi-calculating program develops a new property of pi-computation when you put the adders and multipliers together right, but it is completely described in terms of adders and multipliers. I expect consciousness to be exactly the same; it’ll be completely described in terms of qualia-generating algorithms (or some such), which won’t themselves have the consciousness property.
This is hard to see because the algorithms are written in spaghetti code, in the wiring between neurons. In computer terms, we have access to the I/O system and all the gates in the CPU, but we don’t currently know how they’re connected. Looking at more or fewer of the gates doesn’t help, because the critical piece of information is how they’re connected and what algorithm they implement.
My guess (P=.65) is that qualia are going to turn out to be something like vectors in a feature space. Under this model, systems incapable of representing such a vector clearly can’t have any qualia at all. Rocks and single molecules, for example.
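To illustrate what I mean, and only as a toy sketch (the feature axes, names, and numbers are entirely invented), an “experience” under this model is just a point in a feature space, and similarity between experiences is similarity between vectors:

```python
import math

# Toy illustration of "qualia as vectors in a feature space": each experience
# is a point along invented feature axes (hue, brightness, warmth, salience).
# A system that cannot represent such a vector, like a rock or a single
# molecule, has no point in this space at all.

ripe_tomato_red = (0.98, 0.70, 0.80, 0.90)
brick_red       = (0.90, 0.45, 0.60, 0.40)
clear_sky_blue  = (0.10, 0.85, 0.20, 0.50)

def similarity(q1, q2):
    """Cosine similarity: how alike two 'experiences' are under this model."""
    dot = sum(a * b for a, b in zip(q1, q2))
    norms = math.sqrt(sum(a * a for a in q1)) * math.sqrt(sum(b * b for b in q2))
    return dot / norms

print(similarity(ripe_tomato_red, brick_red))       # high: two similar reds
print(similarity(ripe_tomato_red, clear_sky_blue))  # lower: different qualia
```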