eliminativists want to prove that humans, like the blue-minimizing robot, don’t have anything of the sort until you start looking at high level abstractions.
Just because something only exists at high levels of abstraction doesn’t mean it’s not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can occasionally explain their behaviour—even if their behaviour is truly explained by the interaction of numerous systems) rather than how these preferences are encoded.
The information in a JPEG file that indicates a particular pixel should be red cannot be analysed down to a single bit that does nothing else, but that doesn’t mean there isn’t a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether or not you can find a specific neuron is completely irrelevant to their reality.
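To make the JPEG analogy concrete, here is a minimal sketch (plain NumPy, with a hand-rolled 8x8 inverse DCT standing in for a real JPEG decoder rather than any actual image library): every pixel of a block comes out as a weighted sum of all 64 stored coefficients, so perturbing any one coefficient changes the whole block, and no single stored value “is” the red pixel.

```python
import numpy as np

# Minimal sketch of the JPEG point, assuming a plain 8x8 DCT of the kind
# baseline JPEG uses per block (hand-rolled here rather than a real decoder).
# Each reconstructed pixel is a weighted sum of ALL 64 stored coefficients,
# so no single stored value (let alone a single bit) corresponds to one pixel.

def idct_2d(coeffs):
    """Inverse 8x8 DCT-II: pixels[x, y] = sum over (u, v) of weighted coeffs."""
    n = 8
    def alpha(k):
        return np.sqrt(1 / n) if k == 0 else np.sqrt(2 / n)
    pixels = np.zeros((n, n))
    for x in range(n):
        for y in range(n):
            total = 0.0
            for u in range(n):
                for v in range(n):
                    total += (alpha(u) * alpha(v) * coeffs[u, v]
                              * np.cos((2 * x + 1) * u * np.pi / (2 * n))
                              * np.cos((2 * y + 1) * v * np.pi / (2 * n)))
            pixels[x, y] = total
    return pixels

rng = np.random.default_rng(0)
coeffs = rng.normal(size=(8, 8))   # stand-in for one block's stored coefficients
block = idct_2d(coeffs)

# Nudge a single coefficient: the change smears across every pixel in the
# block rather than flipping one pixel, which is the sense in which the
# encoding is "holographic".
perturbed = coeffs.copy()
perturbed[3, 5] += 10.0
print(np.count_nonzero(np.abs(idct_2d(perturbed) - block) > 1e-9))  # prints 64
```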
Just because something only exists at high levels of abstraction doesn’t mean it’s not real or explanatory.
I have often stated that, as a physicalist, the mere fact that something does not independently exist—that is, it has no physically discrete existence—does not mean it isn’t real. The number three is real—but does not exist. It cannot be touched, sensed, or measured; yet if there are three rocks, there really are three rocks. I define “real” as “a pattern that proscriptively constrains that which exists”. A human mind is real, but there is no single part of your physical body you can point to and say, “this is your mind”. You are the pattern that your physical components conform to.
It seems, very often, that objections to reductionism are founded in a problem of scale: an inability to recognize that things which are real at one scale remain real at that scale even when we consider them at a different one.
It would seem, to me, that “eliminativism” is essentially a redux of this quandary but in terms of patterns of thought rather than discrete material. It’s still a case of missing the forest for the trees.
I agree. In particular, I often find these discussions very frustrating because people arguing for elimination seem to think they are arguing about the ‘reality’ of things when in fact they’re arguing about the scale of things. (And sometimes about the specificity of the underlying structures that the higher-level systems are implemented on.) I don’t think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts, not in the parts themselves. (Also—why would we expect any biological system to do one thing and one thing only?)
I regard almost all these questions as very similar to the demarcation problem. A higher level abstraction is real if it provides predictions that often turn out to be true. It’s acceptable for it to be an incomplete / imperfect model, although generally speaking if there is another that provides better predictions we should adopt it instead.
This is what would convince me that preferences were not real: At the moment I model other people by imagining that they have preferences. Most of the time this works. The eliminativist needs to provide me with an alternate model that reliably provides better predictions. Arguments about theory will not sway me. Show me the model.
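For what it’s worth, here is a toy sketch of what “show me the model” would amount to, with entirely hypothetical data and a made-up rival model: score the preference model and the proposed alternative on held-out behaviour, and keep whichever predicts better.

```python
import numpy as np

# A toy sketch of what "show me the model" could look like in practice: two
# candidate models of someone's behaviour scored on held-out observations.
# Everything here (the data and both models) is hypothetical; only the form
# of the comparison matters.

rng = np.random.default_rng(1)

# Hypothetical record of 200 binary choices (1 = chose coffee, 0 = chose tea).
observed = (rng.random(200) < 0.8).astype(int)

# Model A, the folk "preference" model: this person prefers coffee ~80% of the time.
pred_a = np.full(200, 0.8)

# Model B, a stand-in for a proposed eliminativist alternative; faked here as
# a noisier prediction purely for illustration.
pred_b = np.clip(0.8 + rng.normal(0.0, 0.2, size=200), 0.01, 0.99)

def log_loss(y, p):
    """Mean negative log-likelihood of the observations; lower is better."""
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

print("preference model:", log_loss(observed, pred_a))
print("alternative model:", log_loss(observed, pred_b))
# Whichever model scores consistently lower across many such tests is the one
# worth keeping, regardless of how its internals are encoded.
```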