A blurb for the book “The Feeling of Value”:

This revolutionary treatise starts from one fundamental premise: that our phenomenal consciousness includes direct experience of value. For too long, ethical theorists have looked for value in external states of affairs or reduced value to a projection of the mind onto these same external states of affairs. The result, unsurprisingly, is widespread antirealism about ethics.
In this book, Sharon Hewitt Rawlette turns our metaethical gaze inward and dares us to consider that value, rather than being something “out there,” is a quality woven into the very fabric of our conscious experience, in a highly objective way. On this view, our experiences of pleasure and pain, joy and sorrow, ecstasy and despair are not signs of value or disvalue. They are instantiations of value and disvalue. When we feel pleasure, we are feeling intrinsic goodness itself. And it is from such feelings, argues Rawlette, that we derive the basic content of our normative concepts—that we understand what it means for something to be intrinsically good or bad.
Rawlette thus defends a version of analytic descriptivism, arguing that this view, unlike previous theories of moral realism, has the resources to explain where our concept of intrinsic value comes from and how we know when it objectively applies, as well as why we sometimes make mistakes in applying it. She defends this view against G. E. Moore’s Open Question Argument and shows how these basic facts about intrinsic value can ground facts about instrumental value and value “all things considered.” Ultimately, her view offers us the possibility of a robust metaphysical and epistemological justification for many of our strongest moral convictions.
My reply: Sounds descriptively false? I prefer lots of things that aren’t a matter of valenced experience.
You don’t need to have “direct experience” of all moral properties in order to be a moral realist, any more than you need direct experience of porcupines in order to be a porcupine realist. You can just acknowledge that moral knowledge is inferred indirectly, the same as our knowledge of most things. Seems like a classic case of philosophers reaching weird conclusions because they’re desperate for certainty (rather than embracing Bayesian/probabilistic ways of thinking about stuff).
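To make the Bayesian point concrete, here’s a toy sketch (mine, not anything from the book): belief in a hypothesis climbs toward — but never reaches — certainty under repeated indirect evidence. The function name and every likelihood number are invented purely for illustration.

```python
# Toy sketch of indirect, probabilistic knowledge: no direct experience,
# no certainty, just iterated Bayesian updating on evidence.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# "Porcupine realism" without direct porcupine experience: start uncertain,
# then update on indirect evidence (reports, photos, quills on the trail).
# Each pair is (P(evidence | porcupines exist), P(evidence | they don't)).
belief = 0.5
for likelihood_if_true, likelihood_if_false in [(0.9, 0.2), (0.8, 0.3), (0.95, 0.1)]:
    belief = bayes_update(belief, likelihood_if_true, likelihood_if_false)

print(round(belief, 3))  # 0.991: high confidence, no certainty required
```

The point of the sketch is just that inferred, probabilistic knowledge is still knowledge — nothing about the process requires the certainty that direct experience is supposed to provide.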
Likewise, you don’t need direct experience or certainty in order to reconcile “is” and “ought”. Just accept that “ought” facts look like hypothetical imperatives that we happen to care about a lot, or look like rules-of-a-game that we happen to deeply endorse everyone always playing. No deep riddles are created by the fact that “what is a legal move in chess?” is not reducible to conjunctions of claims about our universe’s boundary conditions and laws of physics; we just treat chess-claims like math/logic claims and carry on with our lives. Treating our moral claims (insofar as they’re coherent and consistent) in the same way raises no special difficulties.
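The chess analogy can be made concrete with a toy sketch (mine, and deliberately simplified): a “legal move” fact is checked against the rules of the game the way a math/logic claim is checked, with no appeal to boundary conditions or laws of physics. The helper below is hypothetical and handles only knight-move geometry on an otherwise empty board.

```python
# Toy sketch: "is this a legal move?" as a fact about a rule system,
# verified like a logic claim rather than reduced to physics.

def knight_move_is_legal(src: str, dst: str) -> bool:
    """Check knight-move geometry between squares in algebraic notation (e.g. 'g1').

    Simplification: ignores the rest of the board (occupied squares, checks, turn order).
    """
    file_delta = abs(ord(src[0]) - ord(dst[0]))  # distance across files a-h
    rank_delta = abs(int(src[1]) - int(dst[1]))  # distance across ranks 1-8
    # A knight moves one square along one axis and two along the other.
    return {file_delta, rank_delta} == {1, 2}

print(knight_move_is_legal("g1", "f3"))  # True: the standard opening knight move
print(knight_move_is_legal("g1", "g3"))  # False: not knight geometry
```

Nothing in the checker mentions physics, and nothing needs to: the fact it verifies is constituted by the rules themselves — which is the proposed attitude toward (coherent, consistent) moral claims.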
This also feels to me like an example of the common philosopher-error “notice a super important fact about morality, and rush (in your excitement) to conclude that this must therefore be the important fact about morality, the one big thing morality is About”.
Human morality is immensely complicated, because human brains are a complicated, incoherent mishmash of innumerable interacting preferences and experiences. Valenced experience indeed seems to be a super important piece of that puzzle, but we can acknowledge that without pretending that it Solves Everything or Exhausts The Phenomenon.
I suspect moral philosophy would have made a lot more progress by now if philosophers spent more of their time adding to the pool of claims about morality (so we can build a full understanding of what phenomenon we need to explain / account for in the first place), and less time trying to reduce all of those claims to a single simple principle.
In principle, I love theorizing and philosophizing about this stuff. In practice, people seem to have a strong tendency to fall in love with the first Grand Theory of Everything they discover in this domain, causing progress to stagnate (and unrealistic views to proliferate) compared to what we'd achieve with more modest goals. Less “try to reduce all of morality to virtue cultivation”, more “try to marginally improve our understanding of what virtue cultivation consists in”.