Thanks for the replies. I will try to answer and expand on the points raised. There are a number of reductio ad absurdums that dissuade me from machine functionalism, including Ned Block’s China brain and also the idea that a Turing machine running a human brain simulation would possess human consciousness. Let me try to take the absurdity to the next level with the following example:
Does an animated GIF possess human consciousness?
Imagine we record the activity of every neuron in a human brain at every millisecond; at each millisecond, we record whether each of the 100 billion neurons in the human brain is firing an action potential or not. We record all of this for a 1 second duration. Now, for each of the 1000 milliseconds, we represent the neural firing state of all neurons as a binary GIF image roughly 316,000 pixels in height and width (this probably exceeds GIF format specifications, but who cares), where each pixel represents the firing state of a specific neuron. This gives us 1000 such images, one per millisecond over the 1 second duration. We then concatenate these 1000 frames to form an animated GIF and play the animated GIF on an endless loop. Since we are now “simulating” the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness… But this view is absurd, and this exercise suggests there is more to consciousness than reproducing neural activities in different substrates.
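For scale, here is a back-of-the-envelope sketch (Python; it assumes only the round figures above: 10^11 neurons and 1000 one-millisecond frames):

```python
import math

NEURONS = 100_000_000_000   # ~10^11 neurons, one 1-bit pixel each
FRAMES = 1000               # one frame per millisecond for 1 second

# Side length of a square frame holding one pixel per neuron
side = math.isqrt(NEURONS)  # floor(sqrt(10^11)) = 316_227
print(side)                 # 316227

# Raw storage for the whole animation at 1 bit per pixel
terabytes = NEURONS * FRAMES / 8 / 1e12
print(terabytes)            # 12.5 (TB of raw 1-bit frames)
```

So the animated GIF in the example is roughly a 316,000 × 316,000 one-bit image per frame, about 12.5 TB of raw pixel data in total.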
To V_V, I don’t think it has human consciousness. If I answer otherwise, I’m pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what “conscious” means in epistemic terms, I don’t know, but I do know that the Turing test is insufficient because it only deals with appearances and it’s easy to be duped. About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.
To Kyre, you hit the crux in your second example. The absurdity of the China brain and of the Turing machine with human consciousness stems from the fact that the causal structures (i.e., space-time diagrams) in these physical systems are completely different from the causal structure of the human brain. As you describe, in a typical computer there is honest-to-god physical cause and effect in the voltage levels in the memory gates, but the causal structure is completely different from wetware, and this is where the absurdity of attributing consciousness to computations (or simulations) comes from, at least for me. Consciousness is not just computational. Otherwise you have absurdities like the China brain and animated GIFs with human consciousness. It seems more likely to be physico-computational, as reflected in the causal structure of interactions of the physical system which underlies the computations and simulations.
There may be a computer architecture that reproduces the correct causal structure, but Von Neumann and related architectures do not. And to your last question, yes! A simulation is just an image. If you think it is the real thing, then you must accept that an animated GIF can possess human consciousness. Personally, this conclusion is too absurd for me to accept.
To jacob_cannell, thanks for the congrats. Sure, consciousness has baggage but using self-awareness instead already commits one to consciousness as a special type of computation, which the reductio ad absurdums above try to disprove. I agree it’s likely that “Self-awareness is just a computational capability”, depending on what you mean by ‘Self’ and ‘awareness’. You state that “The ‘causal structure’ is just the key algorithmic computations” but this is not quite right. The algorithmic computations can be instantiated in many different causal structures but only some will resemble those of the human brain and presumably possess human consciousness.
TLDR: The basis of consciousness is very speculative and there is good reason to believe it goes beyond computation to the physico-computational and causal (space-time) structure.
I think we might be working with different definitions of the term “causal structure”? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency—if neuron A hadn’t fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn’t call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That’s what I think is wrong with your GIF example, btw—there’s no counterfactual dependency whatsoever. If I deleted a particular pixel from one frame of the animation, the next frame wouldn’t change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.
Anyway, beyond that, we’re obviously working from very different intuitions, because I don’t see the China Brain or Turing machine examples as reductios at all—I’m perfectly willing to accept that those entities would be conscious.
It’s unclear why counterfactual dependencies would be necessary for machine functionalism, but ok, let’s include them in the GIF example. Take the first GIF as the initial condition and let the (binary) state of pixel i at time step t be Xi(t) = f(i, X1(t-1), X2(t-1), ..., Xn(t-1)). Does this make it any more plausible that the animated GIF has human consciousness? If you think the GIF has human consciousness, then what is the significance of the fact that the system of equations is generally underdetermined? Personally, I don’t find it plausible that the GIF has human consciousness, but I would agree that since it’s an extreme example, my intuition could be wrong. Unfortunately, this appears to mean that we must agree to disagree on the question of the validity of machine functionalism, or is there another way forward?
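As a toy sketch of this modified construction (Python; the network size, random weights, and threshold update rule are illustrative stand-ins for f, not a model of anything neural):

```python
import random

random.seed(0)
N = 32  # toy network; stands in for the ~10^11 pixels in the example
W = [[random.choice((-1, 0, 1)) for _ in range(N)] for _ in range(N)]

def step(x):
    # Xi(t) = f(i, X1(t-1), ..., Xn(t-1)); here f is a fixed threshold rule
    return [1 if sum(w * xj for w, xj in zip(W[i], x)) > 0 else 0
            for i in range(N)]

state = [random.randint(0, 1) for _ in range(N)]  # frame 0: the initial condition
frames = [state]
for _ in range(999):
    frames.append(step(frames[-1]))  # 1000 frames in total

# Unlike the recorded GIF, later frames now counterfactually depend on
# earlier ones: change frame 0 and every later frame must be recomputed.
```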
I’m not sure I understand you. What do you mean by the system of equations being underdetermined? Are you saying to take the same animated GIF and not alter the actual physics in any way, and just refer to it differently? That obviously doesn’t change anything. You need to alter the causal structure.
My problem with non-machine functionalism is that any reason we have to say we’re conscious would equally apply to a simulation. If you one day found out that you were really a simulation, would you decide your consciousness is an illusion, or figure you must have gotten it backwards, and it’s the simulations that are conscious and the real people that are p-zombies?
Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.
No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.
Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical function (or, more precisely, the same posterior, if the system is stochastic).
An animated GIF doesn’t respond to inputs, therefore it doesn’t compute the same function that the brain computes.
Think of playing an old console video game on an emulator vs watching a video recorded from the console screen of somebody playing that game. Clearly the emulator and the video are very different objects: you can legitimately say that the emulator is simulating the game; furthermore, you can say that the emulator is actually running the game. “Being a video game” is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate. On the other hand, the video recording of somebody playing a game can’t be said to be a game, or even the simulation of a game.
To V_V, I don’t think it has human consciousness. If I answer otherwise, I’m pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what “conscious” means in epistemic terms, I don’t know, but I do know that the Turing test is insufficient because it only deals with appearances and it’s easy to be duped.
Well-coded chatbots don’t come anywhere close to simulating the linguistic behavior of humans. There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false. Here is Scott Aaronson’s take on the latest of these claims.
Seriously, if we really had computer programs passing the Turing test, we would probably also have computer programs working as engineers or lawyers.
About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.
I’m asking how you understand the term at an operational level right now.
Let me introduce you to Foo. Foo may be a human, an animal, a plant, a non-living object, etc. It may be an artifact, a naturally occurring object, or a combination of both. It may be in a normal state for its kind of object or an abnormal state (e.g. in a coma, out of fuel, out of battery charge); I won’t tell you which. If I ask you questions about the behavior of Foo, e.g. “Does Foo move if prodded with a stick?”, “Can Foo find the exit of a maze?”, “How does Foo behave in front of a mirror?”, “Can you train Foo to push a button when a certain light goes on?”, “Can you trade with Foo?”, “Can you discuss philosophy with Foo?”, you can’t answer them. In Bayesian terms, your subjective probability distribution over possible empirical observations about Foo has a large entropy.
Now I tell you that Foo is conscious. I won’t tell you what I mean by “conscious”; I’m leaving that to your interpretation. I bet that now you can answer many of the questions above, if not with certainty then at least with some significant confidence. In Bayesian terms, after conditioning on the piece of evidence “Foo is conscious”, the entropy of your subjective probability distribution over possible empirical observations about Foo becomes smaller. Do you agree with that? If so, how do you reconcile that with non-functionalism?
No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.
The animated GIF, as I originally described it, is an “imitation of the operation of a real-world process or system over time”, which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.
Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical function
Ok, let’s go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?
An animated GIF doesn’t respond to inputs, therefore it doesn’t compute the same function that the brain computes.
A brain doesn’t necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.
“Being a video game” is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate.
It sounds like a beautiful idea, being invariant under a simulation that is independent of substrate.
There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false.
I agree.
About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.
I’m asking how you understand the term at an operational level right now.
In short, it’s a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or a suitable approximation) and possesses a living human brain (inferred from visual inspection of their biological form) or a human brain-like equivalent (which I’ve yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.
When I consider your comment here together with your previous comment above that “definitions of consciousness which are not invariant under simulation have little epistemic usefulness”, I think I understand your argument better. However, the epistemic argument you’re advancing is a fallacy because you’re assuming what you set out to demonstrate: If I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say ‘yes’, and it will even pass the Turing test, because we’re assuming it’s an accurate simulation of a human brain. The reasoning is circular and does not actually inform us whether the simulation is conscious. So your “epistemic usefulness” appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?
My general question to the machine functionalists here is, why are you assuming it is sufficient to merely simulate the human brain to recreate its conscious experience? The human brain is a chemico-physical system and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?
If the question here is, is consciousness a substrate-independent function that the brain computes or is it associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely due to the past successes in physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdums with machine functionalism involving ever more ridiculous scenarios but will probably not convince anyone who has taken the requisite leap of faith.
You seem to be discussing in good faith here, and I think it’s worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it’s best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I’m still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren’t that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another. Why should we have expected either of those processes to generate consciousness? In both cases you just have non-mental, syntactical operations taking place. If you hadn’t heard of neurons, wouldn’t they also seem like a reductio to you?
What it comes down to is that consciousness seems mysterious to me. And (on an intuitive level) it kind of feels like I need to throw something “special” at consciousness to explain it. What kind of special something? Well, you could say that the brain has the special something, by virtue of the fact that it’s made of neurons. But that doesn’t seem like the right kind of specialness to me, somehow. Yes, neurons are special in that they have a “unique” physico-chemical causal structure, but why single that out? To me that seems as arbitrary as singling out only specific types of atoms as being able to instantiate consciousness (which some people seem to do, and which I don’t think you’re doing, correct?). It just seems too contingent, too earth-specific an explanation. What if you came across aliens that acted conscious but didn’t have any neurons or a close equivalent? I think you’d have to concede that they were conscious, wouldn’t you? Of course, such aliens may not exist, so I can’t really make an argument based on that. But still—really, the answer to the mystery of consciousness is going to come down to the fact that particular kinds of cells evolved in earth animals? Not special enough! (or so say my intuitions, anyway)
So I’m led in a different direction. When I look at the brain and try to see what could be generating consciousness, what pops out to me is that the brain does computations. It has a particular pattern, a particular high-level causal structure that seems to lie at the heart of its ability to perform the amazing mental feats it does. The computations it performs are implemented on neurons, of course, but that doesn’t seem central to me—if they were implemented on some other substrate, the amazing feats would still get done (Shakespeare would still get written, Fermat’s Last Theorem would still get proved). What does seem central, then? Well, the way the neurons are wired up. My understanding (correct me if I’m wrong) is that in a neural network such as the brain, any given neuron fires iff the sum of the inhibitory and excitatory inputs feeding into it exceeds some threshold. So roughly speaking, any given brain can be characterized by which neurons are connected to which other neurons, and what the weights of those connections are, yes? In that case (forgetting consciousness for a moment), what really matters in terms of creating a brain that can perform impressive mental feats is setting up those connections in the right way. But that just amounts to defining a specific high-level causal structure—and yes, that will require you to define a set of counterfactual dependencies (if neurons A and B had fired, then neuron C wouldn’t have fired, etc.). I was kind of surprised that you were surprised that we brought up counterfactual dependence earlier in the discussion. For one, I think it’s a standard-ish way of defining causality in philosophy (it’s at least the first section in the Wikipedia article, anyway, and it’s the definition that makes the most sense to me). But even beyond that, it seems intuitively obvious to me that your brain’s counterfactual dependencies are what make your brain, your brain.
If you had a different set of dependencies, you would have to have different neuronal wirings and therefore a different brain.
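The firing rule above can be sketched as a simple threshold unit (a McCulloch–Pitts-style model; the weights and threshold are made-up numbers):

```python
def fires(inputs, weights, threshold):
    # A neuron fires iff the weighted sum of its excitatory (positive)
    # and inhibitory (negative) inputs exceeds its threshold
    total = sum(x * w for x, w in zip(inputs, weights))
    return total > threshold

# Two excitatory inputs and one inhibitory input (made-up weights)
print(fires([1, 1, 1], [0.6, 0.6, -0.4], 0.5))  # True: 0.6 + 0.6 - 0.4 > 0.5
# Counterfactual dependency: silence one excitatory input and it doesn't fire
print(fires([1, 0, 1], [0.6, 0.6, -0.4], 0.5))  # False: 0.6 - 0.4 < 0.5
```

The second call is exactly the counterfactual: hold everything fixed, change one input, and the output changes.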
Anyway, this whole business of computation and higher-level causal structure and counterfactual dependencies: that does seem to have the right kind of specialness to me to generate consciousness. It’s hard for me to break the intuition down further than that, beyond saying that it’s the if-then pattern that seems like the really important thing here. I just can’t see what else it could be. And this view does have some nice features—if you wind up meeting apparently-conscious aliens, you don’t have to look to see if they have neurons. You can just look to see if they have the right if-then pattern in their mind.
To answer your question about simulations not being the thing that they’re simulating: I think the view of consciousness as a particular causal pattern kind of dissolves that question. If you think the only thing that matters in terms of creating consciousness is that there be a particular if-then causal structure (as I do), then in what sense are you “simulating” the causal structure when you implement it on a computer? It’s still the same structure, still has the same dependencies. That seems just as real to me as what the brain does—you could just as easily say that neurons are “simulating” consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a “simulation” versus being “real” kind of disappears.
Does that help you understand where I’m coming from? I’d be interested to hear where in that line of arguments/intuitions I lost you.
I think a large part of what makes me a machine functionalist is an intuition that neurons...aren’t that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another.
Aren’t neurons special? At the very least, they’re mysterious. We’re far from understanding them as physico-chemical systems. I’ve had the same reaction and incredulity as you to the idea that interacting neurons can ‘generate consciousness’. The thing is, we don’t understand individual neurons. Yes, neurons compute. The brain computes. But so does every physical system we encounter. So why should computation be the defining feature of consciousness? It’s not obvious to me. In the end, consciousness is still a mystery and machine functionalism requires a leap of faith that I’m not prepared to take without convincing evidence.
But even beyond that, it seems intuitively obvious to me that your brain’s counterfactual dependencies are what make your brain, your brain.
Yes, counterfactual dependencies appear necessary for simulating a brain (and other systems) but the causal structure of the simulated objects is not necessarily the same as the causal structure of the underlying physical system running the simulation, which is my objection to Turing machines and Von Neumann architectures.
you could just as easily say that neurons are “simulating” consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a “simulation” versus being “real” kind of disappears.
It’s an interesting thought, and I generally agree with it. The question seems to come down to defining causal structure. The problem is that the causal structure of the computer system running a simulation of an object does not appear anything like that of the object. A Turing machine running a human brain simulation appears to have a very different causal structure compared with the human brain.
So, one reason I pointed you at orthonormal’s sequence is that if you read all those posts they seem likely to trigger different intuitions for you.
I would also ask if you think that Aristotle—had he only been smarter—could have figured out your “unique type of physico-chemical causal (space-time) structure” from pure introspection. A negative answer would not automatically prove functionalism. We know of other limits on knowledge. But it does show that the thought experiment in which you are currently a simulation is at least as ‘conceivable’ as the thought experiment of a zombie without consciousness, and perhaps even your scenarios. Furthermore, the mathematical examples of limits on self-knowledge actually point towards structure being independent of ‘substrates’. That’s how computer science started in the first place.
Thanks. I’m not sure if you were pointing me in that direction for a specific reason, but I found commenter pjeby’s explanation for the ineffability of qualia insightful.
Since we are now “simulating” the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness...
A GIF is just an image, it is not a simulation. The appeal of the GIF thought experiment relies on a misunderstanding of computation and simulation.
Take a photo of a dolphin swimming—can the photo swim? Of course not. But imagine scanning a perfect nanometer-resolution 3D image of a dolphin and using that data to construct an artificial robotic dolphin. Can the robot dolphin swim? Obviously—yes, if constructed correctly. Can the 3D image swim by itself? No. Now replace dolphin with brain, and swim with think.
Thinking is a computational process, and computation is physical, like swimming—it involves energy, mass, and state transitions. Physics is computational.
You state that “The ‘causal structure’ is just the key algorithmic computations” but this is not quite right.
Yes it is—causal structure is just computational structure, there is no difference.
The algorithmic computations can be instantiated in many different causal structures but only some will
Any sentence of this form is provably false, due to the universality of computation and multiple realizability. Any algorithmic computation can be instantiated in any universal computer and is always the same.
The algorithmic computations can be instantiated in many different causal structures but only some will
Any sentence of this form is provably false, due to the universality of computation and multiple realizability.
This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain. Of course, you can redefine causality in terms of “simulation causality” but the underlying causal structure of the respective systems will be very different.
Yes it is—causal structure is just computational structure, there is no difference.
If you accept Wheeler’s “it from bit” argument, then anything can be instantiated with information. But at this point, you’re veering far from science.
This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain.
There are at least two causal structure levels in a computational system: the physical substrate level and the program level (and potentially more with multiple levels of simulation). A computational system is one that can organize its energy flow (state transitions in the substrate) in a very particular way so as to realize/implement any computable causal structure at the program/simulation level.
The causal structure at the substrate level is literally factored out—it does not matter (beyond performance constraints). Universal computability is not a theory at this point—it is a proven, hard fact.
causal structure of a Turing machine simulating a human brain is very different from an actual human brain.
This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).
A brain is just matter, and more specifically it is just an electromechanical biological computer. It is also just a conventional irreversible computer which dissipates energy along its wires and junctions according to the same exact physical constraints that face modern electronic computers. It can be simulated because anything can be simulated!
Let’s cut to the chase: are there any empirical predictions where your viewpoint disagrees with functionalism?
For example, I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.
Furthermore, you won’t be able to tell the difference between a human controlling a humanoid avatar in virtual reality and an AI controlling a humanoid avatar (imitating human control).
People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.
causal structure of a Turing machine simulating a human brain is very different from an actual human brain.
This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain. Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation. The causal structures in the space-time diagrams are very different. Yes, you can simulate a causal structure, but this is not the same thing as the causal structure of the underlying physical substrate performing the simulation.
It can be simulated because anything can be simulated!
Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.
are there any empirical predictions where your viewpoint disagrees with functionalism?
I’m just exhibiting skepticism over claims from machine functionalism relating to Turing (and related) machine consciousness. I’m not promoting a specific viewpoint.
I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.
Have we rejected solipsism? Certainly panpsychism is consistent with it and this appears untouched in consciousness research.
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain.
Well, if you assume that, then you are already most of the way to functionalism, but I suspect we may be talking about different types of simulations.
Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation.
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic (addition over real-number distributions rather than digital addition). My use of the term ‘simulation’ encompasses probabilistic simulation, which entails matching the statistical distribution over state transitions rather than deterministic simulation.
Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.
Neural analog computational systems can be simulated perfectly in a probabilistic sense when you can recreate the exact conditional probability distributions that govern spike events. You can’t necessarily predict the exact actions the brain will output (due to noise effects), but you can—in theory—predict actions from the exact correct distribution. At the limits of simulation we can predict exact samples from our multiverse distribution, rather than predict the exact future of our particular (unknowable) branch.
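A minimal sketch of what “matching the conditional distribution over spike events” can mean (Python; the sigmoid rate function and Bernoulli spiking are my illustrative choices, not a claim about real neuron biophysics):

```python
import math
import random

def spike_prob(drive):
    # Conditional probability of a spike given the input drive
    # (illustrative sigmoid rate function)
    return 1.0 / (1.0 + math.exp(-drive))

def firing_rate(drive, steps, rng):
    # Sample Bernoulli spike events from the conditional distribution
    return sum(rng.random() < spike_prob(drive) for _ in range(steps)) / steps

# Two runs with different noise produce different exact spike trains,
# but agree in distribution: both rates approach spike_prob(0.0) = 0.5
rate_a = firing_rate(0.0, 100_000, random.Random(1))
rate_b = firing_rate(0.0, 100_000, random.Random(2))
print(abs(rate_a - 0.5) < 0.01, abs(rate_b - 0.5) < 0.01)  # True True
```

The point is exactly the one above: neither run predicts the other’s exact spike train, yet both are perfect samples from the same conditional distribution.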
Simulation of intelligent minds is fundamentally different than weather simulation—for the weather we are interested in the exact outcome in our specific universe. That would be comparable to simulating the exact thoughts of a particular human mind in some situation—which in general is computationally intractable (and unimportant for AI).
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
Science is concerned with objective reality. A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worse.
In common usage the term consciousness refers to objective reality. Sentences of the form ” I was conscious of X”, or “Y rendered Bob unconscious”, or “Perhaps at a subconscious level” all suggest a common meaning involving objectively verifiable computations.
We know that consciousness is the particular mental state arising from various computations coordinated across some hundreds of major brain regions. We know that certain drugs can cause loss of consciousness even while neural activity persists. Consciousness depends on precise synchronized coordination between major brain circuits—a straightforward result of the brain being an hybrid digital/analog computer.
We aren’t so far away from being able to objectively detect consciousness via brain scanning and some form of statistical inference—see this interesting work for example (using a clever compressibility or k-complexity perturbation measure).
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic
Surely you realize that quibbling over the use of analog vs digital neural summation in my toy example does not address my main argument.
Neural analog computational systems can be simulated perfectly in a probabilistic sense
Anything can be simulated perfectly (and trivially) in a probabilistic sense.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worse.
If we knew the basis for consciousness, we would have objective tests. It’s possible that studying the brain’s structural and connectional organization in detail will provide the clues we need to develop better informed opinions about the basis of consciousness.
This is my final post and I would like to thank everyone for the discussion. If anyone is interested in developing autotracing and autosegmentation programs for connectomics and neural circuit reconstruction in whole-brain volume electron microscopy datasets, please email me at brainmaps at gmail dot com or visit http://connectomes.org for more information. Thanks again.
There may be a computer architecture that reproduces the correct causal structure, but Von Neumann and related architectures do not. And to your last question, yes! A simulation is just an image. If you think it is the real thing, then you must accept that an animated GIF can possess human consciousness. Personally, this conclusion is too absurd for me to accept.
To jacob_cannell, thanks for the congrats. Sure, consciousness has baggage but using self-awareness instead already commits one to consciousness as a special type of computation, which the reductio ad absurdum arguments above try to disprove. I agree it’s likely that “Self-awareness is just a computational capability”, depending on what you mean by ‘Self’ and ‘awareness’. You state that “The ‘causal structure’ is just the key algorithmic computations” but this is not quite right. The algorithmic computations can be instantiated in many different causal structures but only some will resemble those of the human brain and presumably possess human consciousness.
TLDR: The basis of consciousness is very speculative and there is good reason to believe it goes beyond computation to the physico-computational and causal (space-time) structure.
I think we might be working with different definitions of the term “causal structure”? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency—if neuron A hadn’t fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn’t call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That’s what I think is wrong with your GIF example, btw—there’s no counterfactual dependency whatsoever. If I delete a particular pixel from one frame of the animation, the next frame wouldn’t change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.
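The pixel-deletion point can be made concrete with a toy sketch (a hypothetical three-cell system, nothing to do with the real GIF format): perturbing a stored frame changes nothing downstream, while perturbing the state of a system that computes its frames does.

```python
# Toy illustration: frames are bit-tuples for a hypothetical 3-cell "brain".
def step(state):
    """Toy update rule: each cell fires iff exactly one of the others fired."""
    a, b, c = state
    return (b ^ c, a ^ c, a ^ b)

# A "GIF": frames computed once, then merely stored for playback.
frames = [(1, 0, 0)]
for _ in range(3):
    frames.append(step(frames[-1]))

# Perturb frame 1 in the stored copy: frame 2 is untouched (no dependency).
stored = list(frames)
stored[1] = (0, 0, 0)
print(stored[2] == frames[2])        # True -- playback ignores the change

# Perturb frame 1 in the live system: the next frame changes (dependency).
print(step((0, 0, 0)) == frames[2])  # False -- computation responds to it
```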
Anyway, beyond that, we’re obviously working from very different intuitions, because I don’t see the China Brain or Turing machine examples as reductios at all—I’m perfectly willing to accept that those entities would be conscious.
It’s unclear why counterfactual dependencies would be necessary for machine functionalism, but ok, let’s include them in the GIF example. Take the first GIF as the initial condition and let the (binary) state of pixel X_i at time step t take the form X_i(t) = f(i, X_1(t-1), X_2(t-1), ..., X_n(t-1)). Does this make it any more plausible that the animated GIF has human consciousness? If you think the GIF has human consciousness, then what is the significance of the fact that the system of equations is generally underdetermined? Personally, I don’t find it plausible that the GIF has human consciousness, but I would agree that since it’s an extreme example, my intuition could be wrong. Unfortunately, this appears to mean that we must agree to disagree on the question of the validity of machine functionalism, or is there another way forward?
I’m not sure I understand you. What do you mean by the system of equations being underdetermined? Are you saying to take the same animated gif and not alter the actual physics in any way, and just refer to it differently? That obviously doesn’t change anything. You need to alter the causal structure.
My problem with non-machine functionalism is that any reason we have to say we’re conscious would equally apply to a simulation. If you one day found out that you were really a simulation would you decide your consciousness is an illusion, or figure you must have gotten it backwards which one is conscious, and it’s the simulations that are conscious and the real people that are p-zombies?
Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.
Thanks for your answers.
No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.
Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input-to-output mapping as the system being simulated, the same mathematical function (or, more precisely, the same posterior, if the system is stochastic).
An animated GIF doesn’t respond to inputs, therefore it doesn’t compute the same function that the brain computes.
Think of playing an old console video game on an emulator vs watching a video recorded from the console screen of somebody playing that game. Clearly the emulator and the video are very different objects:
- you can legitimately say that the emulator is simulating the game; furthermore, you can say that the emulator is actually running the game: “being a video game” is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate.
- on the other hand, the video record of somebody playing a game can’t be said to be a game, or even the simulation of a game.
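That distinction can be put in code: a simulation is a function of its inputs, while a recording yields the same frames no matter what you do now (a deliberately trivial sketch with made-up names):

```python
def emulated_game(button):
    """A (trivially small) interactive system: the output depends on the input."""
    return "jump" if button == "A" else "idle"

# A fixed trace of one past playthrough -- it no longer consumes inputs.
recording = ["idle", "jump", "idle"]

print(emulated_game("A"))  # responds to a new input
print(emulated_game("B"))  # responds differently to a different input
print(recording[1])        # same frame regardless of what you press now
```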
Well-coded chatbots don’t come anywhere close to simulating the linguistic behavior of humans. There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false. Here is Scott Aaronson’s take on the latest of these claims.
Seriously, if we really had computer programs passing the Turing test, we would probably also have computer programs working as engineers or lawyers.
I’m asking how you understand the term at an operational level right now.
Let me introduce you to Foo. Foo may be a human, an animal, a plant, a non-living object, etc. It may be an artifact, or a naturally occurring object, or a combination of both. It may be in a normal state for its kind of object or in an abnormal state (e.g., in a coma, out of fuel, out of battery charge). I won’t tell you which.
If I ask you questions about the behavior of Foo—e.g., “Does Foo move if prodded with a stick?”, “Can Foo find the exit of a maze?”, “How does Foo behave in front of a mirror?”, “Can you train Foo to push a button when a certain light goes on?”, “Can you trade with Foo?”, “Can you discuss philosophy with Foo?”—you can’t answer them. In Bayesian terms, your subjective probability distribution over possible empirical observations about Foo has a large entropy.
Now I tell you that Foo is conscious. I won’t tell you what I mean by “conscious”; I’m leaving that to your interpretation.
I bet that now you can answer many of the questions above, if not with certainty at least with some significant confidence. In Bayesian terms, after conditioning on the piece of evidence “Foo is conscious”, the entropy of your subjective probability distribution over possible empirical observations about Foo became smaller.
Do you agree with that? If so, how do you reconcile that with non-functionalism?
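The entropy-reduction claim can be illustrated with toy numbers (the probabilities below are invented purely for illustration):

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy prior over "does Foo move if prodded with a stick?" (made-up numbers).
prior = {"yes": 0.5, "no": 0.5}

# After hearing "Foo is conscious", probability shifts toward "yes".
posterior = {"yes": 0.9, "no": 0.1}

print(entropy(prior))      # 1.0 bit
print(entropy(posterior))  # ~0.47 bits -- lower entropy after conditioning
```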
The animated GIF, as I originally described it, is an “imitation of the operation of a real-world process or system over time”, which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.
Ok, let’s go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?
A brain doesn’t necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.
It sounds like a beautiful idea, being invariant under a simulation that is independent of substrate.
I agree.
In short, it’s a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or suitable approximation) and possesses a living human brain (inferred from visual inspection of their biological form) or human brain-like equivalent (which I’ve yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.
When I consider your comment here with your previous comment above that “definitions of consciousness which are not invariant under simulation have little epistemic usefulness”, I think I understand your argument better. However the epistemic argument you’re advancing is a fallacy because you’re demonstrating what you assume: If I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say ‘yes’ and it will even pass the Turing test because we’re assuming it’s an accurate simulation of a human brain. The reasoning is circular and does not actually inform us whether the simulation is conscious. So your “epistemic usefulness” appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?
My general question to the machine functionalists here is, why are you assuming it is sufficient to merely simulate the human brain to recreate its conscious experience? The human brain is a chemico-physical system and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?
If the question here is, is consciousness a substrate-independent function that the brain computes or is it associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely due to the past successes in physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdum arguments with machine functionalism involving ever more ridiculous scenarios but will probably not convince anyone who has taken the requisite leap of faith.
You seem to be discussing in good faith here, and I think it’s worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it’s best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I’m still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren’t that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another. Why should we have expected either of those processes to generate consciousness? In both cases you just have non-mental, syntactical operations taking place. If you hadn’t heard of neurons, wouldn’t they also seem like a reductio to you?
What it comes down to is that consciousness seems mysterious to me. And (on an intuitive level) it kind of feels like I need to throw something “special” at consciousness to explain it. What kind of special something? Well, you could say that the brain has the special something, by virtue of the fact that it’s made of neurons. But that doesn’t seem like the right kind of specialness to me, somehow. Yes, neurons are special in that they have a “unique” physico-chemical causal structure, but why single that out? To me that seems as arbitrary as singling out only specific types of atoms as being able to instantiate consciousness (which some people seem to do, and which I don’t think you’re doing, correct?). It just seems too contingent, too earth-specific an explanation. What if you came across aliens that acted conscious but didn’t have any neurons or a close equivalent? I think you’d have to concede that they were conscious, wouldn’t you? Of course, such aliens may not exist, so I can’t really make an argument based on that. But still—really, the answer to the mystery of consciousness is going to come down to the fact that particular kinds of cells evolved in earth animals? Not special enough! (or so say my intuitions, anyway)
So I’m led in a different direction. When I look at the brain and try to see what could be generating consciousness, what pops out to me is that the brain does computations. It has a particular pattern, a particular high-level causal structure that seems to lie at the heart of its ability to perform the amazing mental feats it does. The computations it performs are implemented on neurons, of course, but that doesn’t seem central to me—if they were implemented on some other substrate, the amazing feats would still get done (Shakespeare would still get written, Fermat’s Last Theorem would still get proved). What does seem central, then? Well, the way the neurons are wired up. My understanding (correct me if I’m wrong) is that in a neural network such as the brain, any given neuron fires iff the summed inhibitory and excitatory inputs feeding into it exceed some threshold. So roughly speaking, any given brain can be characterized by which neurons are connected to which other neurons, and what the weights of those connections are, yes? In that case (forgetting consciousness for a moment), what really matters in terms of creating a brain that can perform impressive mental feats is setting up those connections in the right way. But that just amounts to defining a specific high-level causal structure—and yes, that will require you to define a set of counterfactual dependencies (if neurons A and B had fired, then neuron C wouldn’t have fired, etc). I was kind of surprised that you were surprised that we brought up counterfactual dependence earlier in the discussion. For one I think it’s a standard-ish way of defining causality in philosophy (it’s at least the first section in the wikipedia article, anyway, and it’s the definition that makes the most sense to me). But even beyond that, it seems intuitively obvious to me that your brain’s counterfactual dependencies are what make your brain, your brain.
If you had a different set of dependencies, you would have to have different neuronal wirings and therefore a different brain.
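That wiring picture can be sketched as an idealized threshold unit (a deliberate simplification; real neurons integrate analog inputs over time):

```python
def fires(inputs, weights, threshold):
    """Idealized neuron: fire iff the weighted sum of inputs crosses threshold.
    Excitatory inputs get positive weights, inhibitory ones negative."""
    return sum(x * w for x, w in zip(inputs, weights)) >= threshold

weights = [1.0, 1.0, -1.5]  # two excitatory inputs, one inhibitory

# Counterfactual dependency: whether the inhibitory input fired changes the output.
print(fires([1, 1, 0], weights, 1.5))  # True: excitation alone crosses threshold
print(fires([1, 1, 1], weights, 1.5))  # False: inhibition vetoes the spike
```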
Anyway, this whole business of computation and higher-level causal structure and counterfactual dependencies: that does seem to have the right kind of specialness to me to generate consciousness. It’s hard for me to break the intuition down further than that, beyond saying that it’s the if-then pattern that seems like the really important thing here. I just can’t see what else it could be. And this view does have some nice features—if you wind up meeting apparently-conscious aliens, you don’t have to look to see if they have neurons. You can just look to see if they have the right if-then pattern in their mind.
To answer your question about simulations not being the thing that they’re simulating: I think the view of consciousness as a particular causal pattern kind of dissolves that question. If you think the only thing that matters in terms of creating consciousness is that there be a particular if-then causal structure (as I do), then in what sense are you “simulating” the causal structure when you implement it on a computer? It’s still the same structure, still has the same dependencies. That seems just as real to me as what the brain does—you could just as easily say that neurons are “simulating” consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a “simulation” versus being “real” kind of disappears.
Does that help you understand where I’m coming from? I’d be interested to hear where in that line of arguments/intuitions I lost you.
Thank you for the thoughtful reply.
Aren’t neurons special? At the very least, they’re mysterious. We’re far from understanding them as physico-chemical systems. I’ve had the same reaction and incredulity as you to the idea that interacting neurons can ‘generate consciousness’. The thing is, we don’t understand individual neurons. Yes, neurons compute. The brain computes. But so does every physical system we encounter. So why should computation be the defining feature of consciousness? It’s not obvious to me. In the end, consciousness is still a mystery and machine functionalism requires a leap of faith that I’m not prepared to take without convincing evidence.
Yes, counterfactual dependencies appear necessary for simulating a brain (and other systems) but the causal structure of the simulated objects is not necessarily the same as the causal structure of the underlying physical system running the simulation, which is my objection to Turing machines and Von Neumann architectures.
It’s an interesting thought, and I generally agree with this. The question seems to come down to defining causal structure. The problem is that the causal structure of the computer system running a simulation of an object does not appear anything like that of the object. A Turing machine running a human brain simulation appears to have a very different causal structure compared with the human brain.
So, one reason I pointed you at orthonormal’s sequence is that if you read all those posts they seem likely to trigger different intuitions for you.
I would also ask if you think that Aristotle—had he only been smarter—could have figured out his “unique type of physico-chemical causal (space-time) structure” from pure introspection. A negative answer would not automatically prove functionalism. We know of other limits on knowledge. But it does show that the thought experiment in which you are currently a simulation is at least as ‘conceivable’ as the thought experiment of a zombie without consciousness and perhaps even your scenarios. Furthermore, the mathematical examples of limits on self-knowledge actually point towards structure being independent of ‘substrates’. That’s how computer science started in the first place.
You may want to look at the short sequence that starts here.
Thanks. I’m not sure if you were pointing me in that direction for a specific reason, but I found commentator pjeby’s explanation for the ineffability of qualia insightful.
A GIF is just an image, it is not a simulation. The appeal of the GIF thought experiment relies on a misunderstanding of computation and simulation.
Take a photo of a dolphin swimming—can the photo swim? Of course not. But imagine scanning a perfect nanometer resolution 3D image of a dolphin and using that data to construct an artificial robotic dolphin. Can the robot dolphin swim? Obviously—yes, if constructed correctly. Can the 3D image swim by itself? No. Now replace dolphin with brain, and swim with think.
Thinking is a computational process, and computation is physical, like swimming—it involves energy, mass, and state transitions. Physics is computational.
Yes it is—causal structure is just computational structure, there is no difference.
Any sentence of this form is provably false, due to the universality of computation and multiple realizability. Any algorithmic computation can be instantiated in any universal computer and is always the same.
This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain. Of course, you can redefine causality in terms of “simulation causality” but the underlying causal structure of the respective systems will be very different.
If you accept Wheeler’s “it from bit” argument, then anything can be instantiated with information. But at this point, you’re veering far from science.
There are at least two causal structure levels in a computational system: the physical substrate level and the program level (and potentially more with multiple levels of simulation). A computational system is one that can organize its energy flow (state transitions in the substrate) in a very particular way so as to realize/implement any computable causal structure at the program/simulation level.
The causal structure at the substrate level is literally factored out—it does not matter (beyond performance constraints). Universal computability is not a theory at this point—it is a proven, hard fact.
This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).
A brain is just matter, and more specifically it is just an electromechanical biological computer. It is also just a conventional irreversible computer which dissipates energy along its wires and junctions according to the same exact physical constraints that face modern electronic computers. It can be simulated because anything can be simulated!
Let’s cut to the chase: are there any empirical predictions where your viewpoint disagrees with functionalism?
For example, I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.
Furthermore, you won’t be able to tell the difference between a human controlling a humanoid avatar in virtual reality and an AI controlling a humanoid avatar (imitating human control).
People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain. Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation. The causal structures in the space-time diagrams are very different. Yes, you can simulate a causal structure, but this is not the same thing as the causal structure of the underlying physical substrate performing the simulation.
Anything can be simulated imperfectly. Take the weather or C. elegans nervous system.
I’m just exhibiting skepticism over claims from machine functionalism relating to Turing (and related) machine consciousness. I’m not promoting a specific viewpoint.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
Have we rejected solipsism? Certainly panpsychism is consistent with it and this appears untouched in consciousness research.
Well, if you assume that, then you are already most of the way to functionalism, but I suspect we may be talking about different types of simulations.
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic (addition over real-number distributions rather than digital addition). My use of the term ‘simulation’ encompasses probabilistic simulation which entails matching the statistical distribution over state transitions rather than deterministic simulation.
Neural analog computational systems can be simulated perfectly in a probabilistic sense when you can recreate the exact conditional probability distributions that govern spike events. You can’t necessarily predict the exact actions the brain will output (due to noise effects), but you can—in theory—predict actions from the exact correct distribution. At the limits of simulation we can predict exact samples from our multiverse distribution, rather than predict the exact future of our particular (unknowable) branch.
Simulation of intelligent minds is fundamentally different than weather simulation—for the weather we are interested in the exact outcome in our specific universe. That would be comparable to simulating the exact thoughts of a particular human mind in some situation—which in general is computationally intractable (and unimportant for AI).
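A sketch of what “perfect in a probabilistic sense” might mean: two systems that share the same conditional spike distribution disagree sample-by-sample but agree statistically (toy logistic model; all numbers invented for illustration):

```python
import math
import random

def spike_prob(v):
    """Toy conditional spike probability for input drive v (logistic stand-in)."""
    return 1.0 / (1.0 + math.exp(-v))

def firing_rate(seed, v, n=20000):
    """Empirical firing rate of one 'substrate', driven by its own noise."""
    rng = random.Random(seed)
    return sum(rng.random() < spike_prob(v) for _ in range(n)) / n

# Two substrates with different noise disagree spike-by-spike, but match
# statistically because they sample from the same conditional distribution.
rate_a, rate_b = firing_rate(1, 0.5), firing_rate(2, 0.5)
print(abs(rate_a - rate_b) < 0.05)  # True: same statistics, different samples
```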
Science is concerned with objective reality. A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.
In common usage the term consciousness refers to objective reality. Sentences of the form “I was conscious of X”, “Y rendered Bob unconscious”, or “Perhaps at a subconscious level” all suggest a common meaning involving objectively verifiable computations.
We know that consciousness is the particular mental state arising from various computations coordinated across some hundreds of major brain regions. We know that certain drugs can cause loss of consciousness even while neural activity persists. Consciousness depends on precise synchronized coordination between major brain circuits—a straightforward result of the brain being a hybrid digital/analog computer.
We aren’t so far away from being able to objectively detect consciousness via brain scanning and some form of statistical inference—see this interesting work for example (using a clever compressibility or k-complexity perturbation measure).
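The kind of measure alluded to might be sketched roughly as follows, using zlib compression as a crude stand-in for Lempel-Ziv complexity (toy data; not the published method):

```python
import random
import zlib

def compressibility(bits):
    """Crude complexity proxy: compressed size relative to raw size."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

# Toy binarized "responses to a perturbation" (made-up data, not real recordings).
stereotyped = [0, 1] * 500                            # regular -> compresses well
random.seed(0)
varied = [random.randint(0, 1) for _ in range(1000)]  # irregular -> compresses poorly

# Richer, less stereotyped responses yield a higher complexity score.
print(compressibility(stereotyped) < compressibility(varied))  # True
```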
Surely you realize that quibbling over the use of analog vs digital neural summation in my toy example does not address my main argument.
Anything can be simulated perfectly (and trivially) in a probabilistic sense.
If we knew the basis for consciousness, we would have objective tests. It’s possible that studying the brain’s structural and connectional organization in detail will provide the clues we need to develop better informed opinions about the basis of consciousness.
This is my final post and I would like to thank everyone for the discussion. If anyone is interested in developing autotracing and autosegmentation programs for connectomics and neural circuit reconstruction in whole-brain volume electron microscopy datasets, please email me at brainmaps at gmail dot com or visit http://connectomes.org for more information. Thanks again.