Causal closure is impossible for essentially every interesting system, including classical computers (my laptop currently has a wiring problem that definitely affects its behavior despite not being the sort of thing anyone would include in an abstract model).
Are there any measures of approximate simulation that you think are useful here? Computer science and nonlinear dynamics probably have some.
Yes, perfect causal closure is technically impossible, so it comes in degrees. My argument is that the degree of causal closure of possible abstractions in the brain is less than one might naively expect.
I have yet to read this, but I expect it will be very relevant! https://arxiv.org/abs/2402.09090
It's sort of not obvious what exactly "causal closure" means if the error tolerance is not specified. We could differentiate literally 100% perfect causal closure, almost perfect causal closure, and "approximate" causal closure. Literally 100% perfect causal closure is impossible for any abstraction, since every electron exerts a nonzero force on every other electron in its future lightcone. Almost perfect causal closure (99.9%+, say) might hold for your laptop if it doesn't have a wiring issue(?), maybe once a few more details are included in the abstraction. And whether or not there exists an abstraction of the brain with approximate causal closure (95%, maybe?) is an open question.
I’d argue that almost perfect causal closure is enough for an abstraction to contain relevant information about consciousness, and approximate causal closure probably as well. Of course there’s not really a bright line between those two, either. But I think insofar as OP’s argument is one against approximate causal closure, those details don’t really matter.
I’m thinking the causal closure part is more about the soul not existing than about anything else.
Nah, it’s about formalizing “you can just think about neurons, you don’t have to simulate individual atoms.” Which raises the question “don’t have to for what purpose?”, and causal closure answers “for literally perfect simulation.”
The neurons/atoms distinction isn’t causal closure. Causal closure means there is no outside influence entering the program (other than, let’s say, the sensory inputs of the person).
Euan seems to be using the phrase to mean (something like) causal closure (as the phrase would normally be used, e.g. in talking about physicalism) of the upper level of description: basically, everything that actually happens makes sense in terms of the emergent theory; it doesn't need interventions from outside or below.
I know the causal closure of the physical as the principle that nothing non-physical influences physical stuff, so that would be causal closure of the bottom level of description (since there is no level below the physical), rather than of the upper.
So if what you mean is that it's enough to simulate neurons rather than individual atoms, that wouldn't be "causal closure" as Wikipedia defines it.