Well, there’s certainly a set of sensory inputs that corresponds to /invisible-unicorn/, based on which one could build an invisible unicorn detector. Similarly, there’s a set of sensory inputs that corresponds to /pink-unicorn/, based on which one could build a pink unicorn detector.
If I wire a pink unicorn detector up to an invisible unicorn detector such that a light goes on iff both detectors fire on the same object, have I not just constructed an invisible-pink-unicorn detector?
Granted, a detector is not the same thing as a maximizer, but the conceptual issue seems identical in both cases.
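Here is a minimal sketch of that wiring in Python, treating each detector as a predicate over an observed object. The field names, and the choice to encode invisibility as "no visible colour", are just illustrative assumptions, not anyone's actual proposal:

```python
# Toy model: a "detector" is just a predicate over an observed object.
# Field names ("kind", "visible_colour") are illustrative assumptions.

def pink_unicorn_detector(obj) -> bool:
    return obj.get("kind") == "unicorn" and obj.get("visible_colour") == "pink"

def invisible_unicorn_detector(obj) -> bool:
    return obj.get("kind") == "unicorn" and obj.get("visible_colour") is None

def invisible_pink_unicorn_detector(obj) -> bool:
    # The light goes on iff both component detectors fire on the same object.
    return pink_unicorn_detector(obj) and invisible_unicorn_detector(obj)

# Under this encoding no object can satisfy both predicates at once, so the
# combined detector never fires -- which mirrors the impossible conjunction.
```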
If I wire a pink unicorn detector up to an invisible unicorn detector such that a light goes on iff both detectors fire on the same object, have I not just constructed an invisible-pink-unicorn detector?
Maybe. Or maybe you’ve constructed a square-circle detector; no experiment would let you tell the difference, no?
I think the way around this is some notion of which kinds of counterfactuals are valid and which aren't. I've seen posts here (and need to read more) about evaluating such counterfactuals via surgery on causal graphs. But while I can see how that reasoning would work for an object that exists in a different possible world (i.e. a "contingently nonexistent" object), I don't (yet?) see how to apply it to a logically impossible ("necessarily nonexistent") object. Is there a good notion available that can say one counterfactual involving such things is more valid than another?
Or maybe you’ve constructed a square-circle detector; no experiment would let you tell the difference, no?
Take the thing apart and test its components in isolation. If in isolation they test for squares and circles, their aggregate is a square-circle detector (which never fires). If in isolation they test for pink unicorns and invisible unicorns, their aggregate is an invisible-pink-unicorn detector (which never fires).
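To make that test concrete, here is a self-contained toy version in Python (repeating the illustrative detector predicates from the sketch above); the specific inputs and field names are assumptions for illustration only:

```python
# Standalone toy test, repeating the detector predicates from the earlier
# sketch so this snippet runs on its own. All field names are illustrative.

def pink_unicorn_detector(obj) -> bool:
    return obj.get("kind") == "unicorn" and obj.get("visible_colour") == "pink"

def invisible_unicorn_detector(obj) -> bool:
    return obj.get("kind") == "unicorn" and obj.get("visible_colour") is None

def invisible_pink_unicorn_detector(obj) -> bool:
    return pink_unicorn_detector(obj) and invisible_unicorn_detector(obj)

# Components tested in isolation: each fires on its own target class.
assert pink_unicorn_detector({"kind": "unicorn", "visible_colour": "pink"})
assert invisible_unicorn_detector({"kind": "unicorn", "visible_colour": None})

# The aggregate never fires, since no object satisfies both predicates, yet
# its components demonstrably test for pinkness and invisibility rather than
# squareness and circularity -- which is how the two never-firing detectors
# come apart under the "test the components in isolation" proposal.
assert not invisible_pink_unicorn_detector({"kind": "unicorn", "visible_colour": "pink"})
assert not invisible_pink_unicorn_detector({"kind": "unicorn", "visible_colour": None})
```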