This reifies his observation that the desire for happiness feels really, actually compelling to him, and this compellingness seems innate to qualia.
I’m not sure he’s wrong in saying that feeling the qualia of a sentient, as opposed to modeling those qualia in an affective black box without letting the feels ‘leak’ into the rest of your cognitionspace, requires some motivational effect. There are two basic questions here:
First, the Affect-Effect Question: To what extent is the character of subjective experiences like joy and suffering intrinsic or internal to the state, as opposed to constitutively bound up in functional relations that include behavioral impetuses? (For example, to what extent is it possible to undergo the phenomenology of anguish without thereby wanting the anguish to stop? And to what extent is it possible to want something to stop without being behaviorally moved, to the extent one is able and to the extent one’s other desires are inadequate overriders, to stop it?) Compare David Lewis’ ‘Mad Pain’, pain that has the same experiential character as ordinary pain but none of its functional relations (or at least not the large-scale ones). Some people think a state of that sort wouldn’t qualify as ‘pain’ at all, and this sort of relationalism lends some credibility to Pearce’s view.
Second, the Third-Person Qualia Question: To what extent is phenomenological modeling (modeling a state in such a way that you, or a proper part of you, experiences that state) required for complete factual knowledge of real-world agents? One could grant that qualia are real (and really play an important role in various worldly facts, albeit perhaps physical ones) and are moreover unavoidably motivating (if you aren’t motivated to avoid something, then you don’t really fear it), but deny that an epistemically rational agent is required to phenomenologically model qualia. Perhaps there is some way to represent the same mental states without thereby experiencing them, to fully capture the worldly facts about cows without simulating their experiences oneself. If so, then knowing everything about cows would not require one to be motivated (even in some tiny powerless portion of oneself) to fulfill the values of cows. (Incidentally, it’s also possible in principle to grant the (admittedly spooky) claim that mental states are irreducible and indispensable, without thinking that you need to be in pain in order to fully and accurately model another agent’s pain; perhaps it’s possible to accurately model one phenomenology using a different phenomenology.)
And again, at this point I don’t think any of these positions need to endorse supernaturalism, i.e., the idea that special moral facts are intervening in the causal order to force cow-simulators, against their will, to try to help cows. (Perhaps there’s something spooky and supernatural about causally efficacious qualia, but for the moment I’ll continue assuming they’re physical states—mayhap physical states construed in a specific way.) All that’s being disputed, I think, is to what extent a programmer of a mind-modeler could isolate the phenomenology of states from their motivational or behavioral roles, and to what extent this programmer could model brains at all without modeling their first-person character.
As a limiting case: Assuming there are facts about conscious beings, could an agent simulate everything about those beings without ever becoming conscious itself? (And if it did become conscious, would it only be conscious inasmuch as it had tiny copies of conscious beings inside itself? Or would it also need to become conscious in a more global way, in order to access and manipulate useful information about its conscious subsystems?)
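(A toy sketch, purely for concreteness: the ‘affective black box’ framing above can be pictured as an architecture in which the modeled affect is available to the simulator as third-person data but, by stipulation, never feeds into its own goals or behavioral dispositions. Every name here is hypothetical and nothing about it settles whether such a quarantine is even coherent; that is exactly what the Affect-Effect Question leaves open.)

```python
# Toy sketch only. All names are hypothetical illustrations, not a claim about
# how minds, simulators, or qualia actually work.
from dataclasses import dataclass, field

@dataclass
class AffectReport:
    valence: float    # negative ~ suffering-like, positive ~ joy-like
    intensity: float  # 0.0 to 1.0

class AffectiveBlackBox:
    """Stand-in for an arbitrarily detailed model of another agent's affect."""
    def model(self, stimulus: str) -> AffectReport:
        if stimulus == "branding iron":
            return AffectReport(valence=-0.9, intensity=0.8)
        return AffectReport(valence=0.3, intensity=0.2)

@dataclass
class Simulator:
    black_box: AffectiveBlackBox = field(default_factory=AffectiveBlackBox)
    # The simulator's own goals; note that nothing below ever modifies them.
    own_goals: list = field(default_factory=lambda: ["answer queries accurately"])

    def report_on(self, stimulus: str) -> str:
        report = self.black_box.model(stimulus)
        # The modeled affect is available as third-person data, but by
        # stipulation it exerts no pull on own_goals or on behavior.
        return f"modeled valence={report.valence}, intensity={report.intensity}"

sim = Simulator()
print(sim.report_on("branding iron"))  # third-person report of modeled suffering
print(sim.own_goals)                   # unchanged: no motivational 'leak'
```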
Incidentally, these engineering questions are in principle distinct both from the topic of causally efficacious irreducible Morality Stuff (what I called moral supernaturalism), and from the topic of whether moral claims are objectively right, i.e., whether, causally efficacious or not, moral facts have a sort of ‘glow of One True Oughtness’ (what I called moral unconditionalism, though some might call it ‘moral absolutism’). The conjunction of those two claims is what it sounds like you’ve been labeling ‘moral realism’, in deference to your erstwhile meta-ethic. Whether we can motivation-externally simulate experiential states with perfect fidelity and epistemic availability-to-the-simulating-system-at-large is a question for philosophy of mind and computer science, not for meta-ethics. (And perhaps David Pearce’s actual view is closer to what you call moral realism than to my steelman. Regardless, I’m more interested in interrogating the steelman.)
“Moral non-naturalism” is what a moral cognitivist says to deconstruct your theory—the self-evident intrinsic compellingness of happiness qualia doesn’t feel like asserting “non-naturalism” to David Pearce, although you could have a non-natural theory about how this mysterious observation was generated.
So terms like ‘non-naturalism’ or ‘supernaturalism’ are too theory-laden and sophisticated for what you’re imputing to Pearce (and ex-EY), which is really more of a hunch or thought-terminating-clichéplex. In that case, perhaps ‘naïve (moral) realism’ or ‘naïve absolutism’ is the clearest term you could use. (Actually, I like ‘magical absolutism’. It has a nice ring to it, and ‘magical’ gets at the proto-supernaturalism while ‘absolutism’ gets at the proto-unconditionalism. Mm, words.) Philosophers love calling views naïve, and the term doesn’t have a prior meaning like ‘moral realism’, so you wouldn’t have to deal with people griping about your choice of jargon.
This would also probably be a smart rhetorical move, since a lot of people don’t see a clear distinction between cognitivism and realism and might be turned off by your ideas qua an anti-realist theory even if they’d have loved them qua a realist theory. ’Tis part of why I tried to taboo the term as ‘minimal moral realism’ etc., rather than endorsing just one of the definitions on offer.