Nah, it’s about formalizing “you can just think about neurons, you don’t have to simulate individual atoms.” Which raises the question “don’t have to for what purpose?”, and causal closure answers “for literally perfect simulation.”
The neurons/atoms distinction isn’t causal closure. Causal closure means there is no outside influence entering the program (other than, let’s say, the sensory inputs of the person).
Euan seems to be using the phrase to mean (something like) causal closure, as the phrase would normally be used e.g. in talking about physicalism, but of the upper level of description: basically, everything that actually happens makes sense in terms of the emergent theory, without needing interventions from outside or below.
I know "the causal closure of the physical" as the principle that nothing non-physical influences physical stuff, so that would be causal closure of the bottom level of description (since there is no level below the physical), rather than of the upper.
So if you mean by that that it’s enough to simulate neurons rather than individual atoms, that wouldn’t be “causal closure” as Wikipedia calls it.