I think we might actually be agreeing (or ~90% overlapping) and just using different terminology.
Physical activity is physical.
Right. We’re talking about “physical processes” rather than static physical properties. I.e., which processes are important for consciousness to be implemented, and can the physics support these processes?
No, physical behaviour isn’t function. Function is abstract; physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren’t actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn’t going to be conscious.
The flight simulator doesn’t implement actual aerodynamics (it’s not implementing the required functions to generate lift), but this isn’t what we’re arguing. A better analogy might be to compare a bird’s wing to a steel aeroplane wing: both implement the actual physical process required for flight (generating lift through certain airflow patterns), just with different materials.
Similarly, a wooden rod can burn in fire whereas a steel rod can’t. This is because the physics of the material prevents a certain function (oxidation) from being implemented.
So when we’re imagining a functional isomorph of the brain built using silicon, this presupposes that silicon can actually replicate all of the required functions with its specific physics. As you’ve pointed out, this is a big if! There are physical processes (such as nitric oxide diffusion across cell membranes) which might be impossible to implement in silicon and fundamentally important for consciousness.
I don’t disagree! The point is that the functions which this physical process is implementing are what’s required for consciousness, not the actual physical properties themselves.
I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness). But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon, then a functional isomorph would be conscious? Or is the contention that this is like trying to make steel burn, i.e. we’re just never going to be able to replicate the functions in another substrate because the physics precludes it?
We are talking about functionalism—it’s in the title. I am contrasting physical processes with abstract functions.
In ordinary parlance, the function of a physical thing is itself a physical effect: toasters toast, kettles boil, planes fly.
In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and produces some outputs. Well-known examples are familiar arithmetic operations like addition, multiplication, squaring, and so on. But the inputs and outputs are not concrete physical realities. In computation, the inputs and outputs of a functional unit, such as a NAND gate, always have some concrete value, some specific voltage, but not always the same one. Indeed, Turing-complete computers don’t even have to be electrical; they can be implemented in clockwork, hydraulics, photonics, etc.
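The abstract/concrete distinction can be made vivid with a toy sketch (the voltage levels below are invented for illustration, not real device specifications): the abstract NAND function is just a truth table, and physically different devices can all realise it.

```python
# The *abstract* NAND function: a mapping from input bits to an output
# bit, with no physical commitments at all.
def nand(a: int, b: int) -> int:
    return 0 if (a == 1 and b == 1) else 1

# Two imagined "substrates" realising that same abstract function with
# different concrete signal levels (made-up voltages, for illustration).
def high_voltage_nand(v_a: float, v_b: float) -> float:
    # ~0V encodes logic 0, ~5V encodes logic 1, threshold at 2.5V
    a, b = int(v_a > 2.5), int(v_b > 2.5)
    return 5.0 * nand(a, b)

def low_voltage_nand(v_a: float, v_b: float) -> float:
    # ~0V encodes logic 0, ~1.2V encodes logic 1, threshold at 0.6V
    a, b = int(v_a > 0.6), int(v_b > 0.6)
    return 1.2 * nand(a, b)

# Different concrete physics, same abstract function: both devices
# agree with the truth table on every input.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert (high_voltage_nand(5.0 * a, 5.0 * b) > 2.5) == bool(nand(a, b))
    assert (low_voltage_nand(1.2 * a, 1.2 * b) > 0.6) == bool(nand(a, b))
```

The point of the sketch is that nothing about the truth table fixes the voltages, or even that the realiser is electrical at all; that is what “abstract function” means here.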
This is the basis for the idea that a computer programme can be the same as a mind, despite being made of different matter: it implements the same abstract functions. The abstraction of the philosophy-of-mind concept of a function is part of its usefulness.
Searle is a famous critic of computationalism, and his substitute for it is a biological essentialism in which the generation of consciousness is a brain function, in the concrete sense of function. It’s true that something whose concrete function is to generate consciousness will generate consciousness, but it’s vacuously, trivially true.
The point is that the functions which this physical process is implementing are what’s required for consciousness, not the actual physical properties themselves.
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness).
I’m less optimistic because of my arguments.
But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon, then a functional isomorph would be conscious?
No, not necessarily. That, in the “not necessarily” form, is what I’ve been arguing all along. I also don’t think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.
The controversial point is whether consciousness in the hard-problem sense (phenomenal consciousness, qualia) will be reproduced with reproduction of function. It’s not controversial that easy-problem consciousness (capacities and behaviour) will be reproduced by functional reproduction. I don’t know which you believe, because you are only talking about consciousness not otherwise specified.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden:
You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn’t possibly make a difference.
@rife
Yes, I’m specifically focused on the behaviour of an honest self-report.
Well, you’re not rejecting phenomenal consciousness wholesale.
Fine-grained information becomes irrelevant implementation detail. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn’t materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.
But outward behaviour is not what I am talking about. The question is whether functional duplication preserves (full) consciousness. And, as I have said, physicalism is not just about fine-grained details. There’s also the basic fact of running on the metal.
I’m saying that we have ruled out that a functional duplicate could lack conscious experience, because we have established conscious experience as part of the causal chain.
“In humans”. Even if it’s always the case that qualia are causal in humans, it doesn’t follow that reports of qualia in any entity whatsoever are caused by qualia. Yudkowsky’s argument is no help here, because he doesn’t require reports of consciousness to be *directly* caused by consciousness: a computational zombie’s reports would be caused not by its own consciousness, but by the programming and data created by humans.
to be able to feel something and then output a description through voice or typing that is based on that feeling. If conscious experience was part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.
Neural firings are specific physical behaviour, not abstract function. Computationalism is about abstract function.
I understand that there’s a difference between abstract functions and physical functions.
For example, abstractly we could imagine a NAND gate as a truth table, not specifying real voltages and hardware. But in a real system we’d need to implement the NAND gate on a circuit board with specific voltage thresholds, wires, etc.
Functionalism is obviously a broad church, but it is not true that a functionalist needs to be tied to the idea that abstract functions alone are sufficient for consciousness. Indeed, I’d argue that this isn’t a common position among functionalists at all. Rather, they’d typically say something like a physically realised functional process described at a certain level of abstraction is sufficient for consciousness.
To be clear, by “function” I don’t mean some purely mathematical mapping divorced from any physical realisation. I’m talking about the physically instantiated causal/functional roles. I’m not claiming that a simulation would do the job.
If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness, including qualia, that is what I am contesting.
This is trivially true; there is a hard problem of consciousness that is, well, hard. I don’t think I’ve said that computational functions are known to be sufficient for generating qualia. I’ve said that if you already believe this, then you should take the possibility of AI consciousness more seriously.
No, not necessarily. That, in the “not necessarily” form, is what I’ve been arguing all along. I also don’t think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.
Makes sense, thanks for engaging with the question.
If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden: You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn’t possibly make a difference.
It’s an opinion. I’m obviously not going to be able to solve the Hard Problem of Consciousness in a comment section, but hopefully this clarifies the spirit of my position. In any case, I appreciate the exchange.