Contra hard moral anti-realism: a rough sequence of claims
Epistemic and provenance note: This post should not be taken as an attempt at a complete refutation of moral anti-realism, but rather as a set of observations and intuitions that may or may not give one pause as to the wisdom of taking a hard moral anti-realist stance. I may clean it up into a more formal argument in the future. I wrote it on a whim as a Telegram message, in direct response to the claim:
> “you can’t find ‘values’ in reality.”
Yet, you can find valence in your own experiences (that is, you just know from direct experience whether you like the sensations you are experiencing or not), and you can assume other people are likely to have a similar enough stimulus-valence mapping. (Example: I’m willing to bet 2k USD on my part against a single dollar of yours that if I waterboard you, you’ll want to stop before 3 minutes have passed.)[1]
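To make explicit how much confidence those stakes encode: offering to risk 2k USD to win a single dollar breaks even in expectation only if my probability $p$ of winning satisfies

```latex
p \cdot 1 \;\ge\; (1 - p) \cdot 2000
\quad\Longrightarrow\quad
p \;\ge\; \frac{2000}{2001} \approx 99.95\%
```

i.e. the offer expresses near-certainty that the stimulus-valence mapping is shared.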
However, since we humans are bounded, imperfect rationalists, trying to explicitly optimize valence is often a dumb strategy. Evolution has made us not into fitness-maximizers, nor into valence-maximizers, but into adaptation-executers.
“Values” originate as (and thus are) reifications of heuristics that reliably increase long-term valence in the real world (subject to memetic selection pressures, among them the social desirability of utterances, the adaptiveness of behavioral effects, etc.).
If you find yourself terminally valuing something that is not someone’s experienced valence, then one of the following propositions is likely true:

1. A nonsentient process has at some point had write access to your values.
2. What you value is a means to improving somebody’s experienced valence, and so are you now.

[1] In retrospect, making this proposition was a bit crass on my part.
> If you find yourself terminally valuing something that is not someone’s experienced valence, then one of the following propositions is likely true: A nonsentient process has at some point had write access to your values.
Maybe I’m misunderstanding your point, but this seems straightforwardly true for most people? Evolution, which wrote ~all our values, isn’t sentient, and most people do terminally value some things other than experienced valence (e.g. various forms of art, carrying out the traditions of their culture, doing things correctly according to some-or-other prescriptive system, etc); these may well be reified heuristics, but they’re not experienced as instrumental.
You are not misunderstanding my point. Some people may want to keep artificial stimulus-valence mappings (i.e. values) that someone or something else inserted into them. I do not.
Reflecting on this after some time, I do not endorse this comment in the case of (most) innate evolution-originated drives. I sure as heck do not want to stop enjoying sex, for instance.
However, I very much want to eliminate any terminal [nonsentient-thing-benefitting]-valence mapping any people or institutions may have inserted into my mind.
> I’m willing to bet 2k USD on my part against a single dollar of yours that if I waterboard you, you’ll want to stop before 3 minutes have passed.
Interesting. Where are you physically located? Also, are you thinking of the unpleasantness of the situation, or of the physical asphyxiation component?
Christopher Hitchens, who tried waterboarding because he wasn’t sure it was torture, wanted to stop almost instantly and was permanently traumatized, concluding it was definitely torture.
There is absolutely no way anyone would voluntarily last 3 minutes unless they simply hold their breath the entire time.

Here’s a link: https://archive.nytimes.com/thelede.blogs.nytimes.com/2008/07/02/a-window-into-waterboarding/
I’m currently based in Santiago, Chile. I will very likely be in Boston in September and then again in November for GCP and EAG, though. My main point is about the unpleasantness, regardless of its ultimate physiological or neurological origin.
One can take a hard anti-realism stance while still having values and beliefs about others’ values. It requires more humility and acknowledgement of boundaries than most people want from their moral systems. Especially around edge cases, distant extrapolation, and counterexamples—if you forget that most of your intuitions come from some mix of evolution, social learning, and idiosyncratic brain configuration, you’re likely to strongly believe untrue things.

I agree with everything written in the above comment.
But why must you care about valence? It’s not an epistemic error to not care. You don’t have direct experience of there being a law that you must care about valence.
Empirically, I cannot help but care about valence. This could in principle be just a weird quirk of my own mind. I do not think this is the case (see the waterboarding bet proposal on the original shortform post).