No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.
The animated GIF, as I originally described it, is an “imitation of the operation of a real-world process or system over time”, which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.
Just to be clear, when we are talking about simulations of a computational system, we mean something that computes the same input-to-output mapping as the simulated system, i.e., the same mathematical function.
Ok, let’s go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?
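A toy sketch may make this definition concrete (both functions below are invented for illustration, not anything from the discussion): two structurally different programs that compute the same mathematical function, and that therefore count as simulations of one another in the sense just given.

```python
# Two structurally different implementations of the same mathematical
# function f(n) = the n-th Fibonacci number. Under the definition above,
# each is a "simulation" of the other: identical input-to-output mapping,
# very different internal organization. (Toy example for illustration.)

def fib_recursive(n: int) -> int:
    """Computes f(n) by direct recursion."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Computes f(n) with a loop and two running values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Same function, regardless of internal structure or substrate:
assert all(fib_recursive(n) == fib_iterative(n) for n in range(15))
```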
An animated GIF doesn’t respond to inputs, therefore it doesn’t compute the same function that the brain computes.
A brain doesn’t necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.
“Being a video game” is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate.
It sounds like a beautiful idea: a property that is invariant under simulation and independent of substrate.
There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false.
I agree.
As for updating posterior beliefs, I would have to know the basis of consciousness, about which I acknowledge uncertainty.
I’m asking how you understand the term at an operational level right now.
In short, it’s a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or a suitable approximation) and possesses a living human brain (inferred from visual inspection of its biological form) or a human brain-like equivalent (which I’ve yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.
When I consider your comment here together with your previous comment above that “definitions of consciousness which are not invariant under simulation have little epistemic usefulness”, I think I understand your argument better. However, the epistemic argument you’re advancing is fallacious because it assumes what it sets out to demonstrate: if I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say ‘yes’, and it will even pass the Turing test, because we’re assuming it’s an accurate simulation of a human brain. The reasoning is circular and does not actually tell us whether the simulation is conscious. So your “epistemic usefulness” criterion appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?
My general question to the machine functionalists here is: why are you assuming that merely simulating the human brain is sufficient to recreate its conscious experience? The human brain is a physico-chemical system, and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?
If the question here is whether consciousness is a substrate-independent function that the brain computes, or whether it is associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely, given the past successes of physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdum arguments against machine functionalism involving ever more ridiculous scenarios, but will probably not convince anyone who has taken the requisite leap of faith.
You seem to be discussing in good faith here, and I think it’s worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it’s best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I’m still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren’t that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another. Why should we have expected either of those processes to generate consciousness? In both cases you just have non-mental, syntactical operations taking place. If you hadn’t heard of neurons, wouldn’t they also seem like a reductio to you?
What it comes down to is that consciousness seems mysterious to me. And (on an intuitive level) it kind of feels like I need to throw something “special” at consciousness to explain it. What kind of special something? Well, you could say that the brain has the special something, by virtue of the fact that it’s made of neurons. But that doesn’t seem like the right kind of specialness to me, somehow. Yes, neurons are special in that they have a “unique” physico-chemical causal structure, but why single that out? To me that seems as arbitrary as singling out only specific types of atoms as being able to instantiate consciousness (which some people seem to do, and which I don’t think you’re doing, correct?). It just seems too contingent, too earth-specific an explanation. What if you came across aliens that acted conscious but didn’t have any neurons or a close equivalent? I think you’d have to concede that they were conscious, wouldn’t you? Of course, such aliens may not exist, so I can’t really make an argument based on that. But still—really, the answer to the mystery of consciousness is going to come down to the fact that particular kinds of cells evolved in earth animals? Not special enough! (or so say my intuitions, anyway)
So I’m led in a different direction. When I look at the brain and try to see what could be generating consciousness, what pops out to me is that the brain does computations. It has a particular pattern, a particular high-level causal structure that seems to lie at the heart of its ability to perform the amazing mental feats it does. The computations it performs are implemented on neurons, of course, but that doesn’t seem central to me—if they were implemented on some other substrate, the amazing feats would still get done (Shakespeare would still get written, Fermat’s Last Theorem would still get proved). What does seem central, then? Well, the way the neurons are wired up. My understanding (correct me if I’m wrong) is that in a neural network such as the brain, a given neuron fires iff the weighted sum of the excitatory and inhibitory inputs feeding into it exceeds some threshold. So roughly speaking, any given brain can be characterized by which neurons are connected to which other neurons, and what the weights of those connections are, yes? In that case (forgetting consciousness for a moment), what really matters in terms of creating a brain that can perform impressive mental feats is setting up those connections in the right way. But that just amounts to defining a specific high-level causal structure—and yes, that will require you to define a set of counterfactual dependencies (if neurons A and B had fired, then neuron C wouldn’t have fired, etc.). I was kind of surprised that you were surprised that we brought up counterfactual dependence earlier in the discussion. For one, I think it’s a standard-ish way of defining causality in philosophy (it’s at least the first section in the Wikipedia article, anyway, and it’s the definition that makes the most sense to me). But even beyond that, it seems intuitively obvious to me that your brain’s counterfactual dependencies are what make your brain, your brain. If you had a different set of dependencies, you would have to have different neuronal wirings and therefore a different brain.
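A minimal sketch of this threshold picture (a toy McCulloch-Pitts-style unit; the two-input circuit, weights, and threshold are invented for illustration, not a claim about real neurons):

```python
# Toy threshold unit: a neuron fires iff the weighted sum of its inputs
# exceeds its threshold. Positive weights are excitatory, negative weights
# inhibitory. The "network" is fully characterized by its wiring, weights,
# and thresholds, i.e., by the high-level causal structure described above.
# (All numbers here are invented for illustration.)

def fires(inputs, weights, threshold):
    """Return True iff the weighted sum of inputs exceeds the threshold."""
    return sum(x * w for x, w in zip(inputs, weights)) > threshold

# A counterfactual dependency in miniature: C fires iff A fires and B does
# not (B's connection to C is inhibitory). "If B had fired, C wouldn't have."
for a in (0, 1):
    for b in (0, 1):
        c = fires([a, b], weights=[1.0, -1.0], threshold=0.5)
        print(f"A={a}, B={b} -> C fires: {c}")
```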
Anyway, this whole business of computation and higher-level causal structure and counterfactual dependencies: that does seem to have the right kind of specialness to me to generate consciousness. It’s hard for me to break the intuition down further than that, beyond saying that it’s the if-then pattern that seems like the really important thing here. I just can’t see what else it could be. And this view does have some nice features—if you wind up meeting apparently-conscious aliens, you don’t have to look to see if they have neurons. You can just look to see if they have the right if-then pattern in their mind.
To answer your question about simulations not being the thing that they’re simulating: I think the view of consciousness as a particular causal pattern kind of dissolves that question. If you think the only thing that matters in terms of creating consciousness is that there be a particular if-then causal structure (as I do), then in what sense are you “simulating” the causal structure when you implement it on a computer? It’s still the same structure, still has the same dependencies. That seems just as real to me as what the brain does—you could just as easily say that neurons are “simulating” consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a “simulation” versus being “real” kind of disappears.
Does that help you understand where I’m coming from? I’d be interested to hear where in that line of arguments/intuitions I lost you.
Thank you for the thoughtful reply.

I think a large part of what makes me a machine functionalist is an intuition that neurons...aren’t that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another.
Aren’t neurons special? At the very least, they’re mysterious. We’re far from understanding them as physico-chemical systems. I’ve had the same reaction and incredulity as you to the idea that interacting neurons can ‘generate consciousness’. The thing is, we don’t understand individual neurons. Yes, neurons compute. The brain computes. But so does every physical system we encounter. So why should computation be the defining feature of consciousness? It’s not obvious to me. In the end, consciousness is still a mystery and machine functionalism requires a leap of faith that I’m not prepared to take without convincing evidence.
But even beyond that, it seems intuitively obvious to me that your brain’s counterfactual dependencies are what make your brain, your brain.
Yes, counterfactual dependencies appear necessary for simulating a brain (or any other system), but the causal structure of the simulated object is not necessarily the same as the causal structure of the underlying physical system running the simulation. That is my objection to Turing machines and von Neumann architectures.
you could just as easily say that neurons are “simulating” consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a “simulation” versus being “real” kind of disappears.
It’s an interesting thought, and I generally agree with it. The question seems to come down to defining causal structure. The problem is that the causal structure of the computer system running a simulation of an object appears nothing like that of the object. A Turing machine running a human brain simulation appears to have a very different causal structure from the human brain.
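To make this objection concrete, here is a hedged sketch (both implementations are invented for illustration): two programs computing the identical input-to-output mapping of a small logic circuit. In one, the program structure mirrors the circuit's wiring; in the other, a single generic loop steps through a passive description of the circuit, much as a Turing machine head scans a tape. The function is preserved; the step-level causal structure of the running machine is not.

```python
# The same input-output mapping (XOR) realized two ways. In the "direct"
# version the circuit's wiring is baked into the program structure; in the
# "interpreted" version one generic loop walks a passive description of
# the circuit. Identical function, very different internal causal story.
# (Toy example for illustration.)

def xor_direct(a, b):
    # The wiring itself is program structure.
    return (a or b) and not (a and b)

CIRCUIT = [  # passive data describing the same circuit
    ("h1", "or", ["a", "b"]),
    ("h2", "and", ["a", "b"]),
    ("out", "and_not", ["h1", "h2"]),
]

def xor_interpreted(a, b):
    # One generic loop visits each gate in sequence; the machine-level
    # causal structure is "fetch description, apply rule, store result".
    state = {"a": a, "b": b}
    for name, op, srcs in CIRCUIT:
        x, y = (state[s] for s in srcs)
        state[name] = {"or": x or y,
                       "and": x and y,
                       "and_not": x and not y}[op]
    return state["out"]

# Same mapping on every input, despite the different causal organization:
assert all(bool(xor_direct(a, b)) == bool(xor_interpreted(a, b))
           for a in (0, 1) for b in (0, 1))
```

Note that the counterfactual dependencies among the represented gates are preserved either way; whether those, or the dependencies of the underlying hardware, count as "the" causal structure is exactly the point in dispute here.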
So, one reason I pointed you at orthonormal’s sequence is that if you read all those posts they seem likely to trigger different intuitions for you.
I would also ask if you think that Aristotle—had he only been smarter—could have figured out his “unique type of physico-chemical causal (space-time) structure” from pure introspection. A negative answer would not automatically prove functionalism; we know of other limits on knowledge. But it does show that the thought experiment in which you are currently a simulation is at least as ‘conceivable’ as the thought experiment of a zombie without consciousness, and perhaps even as your scenarios. Furthermore, the mathematical examples of limits on self-knowledge actually point towards structure being independent of ‘substrates’. That’s how computer science started in the first place.