Correct me if I’m wrong, but isn’t a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.
That’s confusing levels. A world model that makes some factual assertions, some of which imply “my values are X”, is a distinct thing from your values actually being X. To begin with, it’s entirely possible for your world model to imply “my values are X” when your values are actually Y, in which case your world model is wrong.
To put it simply, what I am saying is that a value judgement is about whatever it is you are in fact judging, while a factual assertion such as you would find in a “model of the world” is about the physical configuration of your brain. This is similar to the use/mention distinction in linguistics. When you make a value judgement you use your values. A model of your brain mentions them.
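A loose programming analogy (my own, not from the thread, and all names in it are invented for illustration): model an agent whose values are part of its stored configuration. Calling `judge()` uses the values to produce a verdict; `world_model_fact()` merely mentions them, as a factual assertion about the agent's own state.

```python
class Agent:
    def __init__(self):
        # Toy values; the entries and numbers are illustrative only.
        self.values = {"effective_altruism": 1, "murder": -1}

    def judge(self, thing):
        # USE: the values are applied to produce a judgement.
        return "good" if self.values.get(thing, 0) > 0 else "bad"

    def world_model_fact(self, thing):
        # MENTION: a factual claim *about* the configuration that holds the values.
        return f"this agent's stored value for {thing!r} is {self.values.get(thing, 0)}"

a = Agent()
print(a.judge("murder"))             # prints "bad" (a value judgement)
print(a.world_model_fact("murder"))  # a fact about the agent's configuration
```

Note that the second method could report the values accurately or inaccurately without the values themselves changing, which is the sense in which a world model can be wrong about them.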
An argument like this
You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you [therefore a value judgement is necessarily part of a world model].
could be equally well applied to claim that the act of throwing a ball is necessarily part of a world model, because your arm is physical. In fact, they are completely different things (for one thing, simply applying a model will never result in the ball moving), even though a world model may well describe the throwing of a ball.
The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one’s inferences.)
This is how it is with all forms of inference.
Throwing a ball is not an inference (note that “inference” and “judgement” are synonyms here), so throwing a ball is in no way necessarily part of a world model and, for our purposes, in no way analogous to making a value judgement.
Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model, and by “contrarian view” I tend to mean “contrarian world model.”
Lukeprog thinks that effective altruism is good, and this is a value judgement. Obviously, most of mainstream society doesn’t agree—people prefer to give money to warm fuzzy causes, like “adopt an endangered panda”. So that value judgement is certainly contrarian.
Presumably, lukeprog also believes that “lukeprog thinks effective altruism is good”. This is a fact in his world model. However, most people would agree with him when asked if that is true. We can see that lukeprog likes effective altruism. There’s no reason for anyone to claim “no, he doesn’t think that” when he obviously does. So this element of his world model is not contrarian.
I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?
One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one’s own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I’m sure you appreciate how fundamentally silly this is, but maybe you could take a little time to meditate on it some more.
Sorry if my tone is a little condescending, but understand that you have totally failed to support your initial claim that I was confused.
That’s not at all what I meant. Obviously minds and brains are just blobs of matter.
You are conflating the claims “lukeprog thinks X is good” and “X is good”. One is an empirical claim, one is a value judgement. More to the point, when someone says “P is a contrarian value judgement, not a contrarian world model”, they obviously intend “world model” to encompass empirical claims and not value judgements.
I’m not conflating anything. Those are different statements, and I’ve never implied otherwise.
The statement “X is good,” which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.
“X is good” is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that “X is good” is a claim about physical matter, and therefore part of the world model of anybody who believes it.
If there is some particular point or question I can help with, don’t hesitate to ask.
If “X is good” were simply an empirical claim about whether an object conforms to a person’s values, people would frequently say things like “if my values approved of X, then X would be good” and would not say things like “taking a murder pill doesn’t affect the fact that murder is bad”.
Alternative: what if “X is good” were a mathematical claim about the value of a thing according to whatever values the speaker actually holds?
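A sketch of the distinction being argued over (my own framing, with invented names): an indexical reading of “X is good” consults whatever values the speaker happens to hold at the moment of asking, while a rigid reading fixes the values at the time the term is defined, which is why taking a murder pill would not change the rigid verdict.

```python
# Hypothetical toy model: two readings of "X is good".

current_values = {"murder": -1}

def good_indexical(x):
    # "good" = approved by whatever values I happen to hold when asked
    return current_values.get(x, 0) > 0

fixed_values = dict(current_values)  # snapshot: "good" rigidly names THESE values
def good_rigid(x):
    return fixed_values.get(x, 0) > 0

# Take the murder pill: the speaker's current values flip.
current_values["murder"] = 1

print(good_indexical("murder"))  # True: the verdict tracks the new values
print(good_rigid("murder"))      # False: murder is still bad under the fixed values
```

The murder-pill intuition cited above tracks the second function, not the first, which is the point of contention.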
If “X is good” were simply an empirical claim about whether an object conforms to a person’s values, people would frequently say things like “if my values approved of X, then X would be good”....
If that is your basis for a scientific standard, then I’m afraid I must withdraw from this discussion.
Ditto, if this is your idea of humor.
what if “X is good” were a mathematical claim about the value of a thing according to whatever values the speaker actually holds?
That’s just silly. What if “c = 299,792,458 m/s” were a mathematical claim about the speed of light, according to what the speed of light actually is? May I suggest that you not invent unnecessary complexity to disguise the demise of a long-deceased argument.
My theory is that the dualistic theory of mind is an artifact of the lossy compression algorithm which, conveniently, prevents introspection from turning into infinite recursion. Lack of neurosurgery in the environment of ancestral adaptation made that an acceptable compromise.
I quite like Bob Trivers’ self-deception theory, though I have only a tangential acquaintance with it. We might anticipate that self-deception is harder if we are inclined to recognize the bit we call “me” as caused by some inner mechanism; hence it may be profitable to suppress that recognition, if Trivers is on to something.
Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self-analysis, and you’re quite possibly on to something: the computational overhead just doesn’t pay off.
That’s confusing levels. A world model that makes some factual assertions, some of which imply “my values are X”, is a distinct thing from your values actually being X. To begin with, it’s entirely possible for your world model to imply “my values are X” when your values are actually Y, in which case your world model is wrong.
What levels am I confusing? Are you sure it’s not you that is confused?
Your comment bears some resemblance to that of Lumifer. See my reply above.
To put it simply, what I am saying is that a value judgement is about whatever it is you are in fact judging, while a factual assertion such as you would find in a “model of the world” is about the physical configuration of your brain. This is similar to the use/mention distinction in linguistics. When you make a value judgement you use your values. A model of your brain mentions them.
An argument like this
You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you [therefore a value judgement is necessarily part of a world model].
could be equally well applied to claim that the act of throwing a ball is necessarily part of a world model, because your arm is physical. In fact, they are completely different things (for one thing, simply applying a model will never result in the ball moving), even though a world model may well describe the throwing of a ball.
A value judgement both uses and mentions values.
The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one’s inferences.)
This is how it is with all forms of inference.
Throwing a ball is not an inference (note that “inference” and “judgement” are synonyms here), so throwing a ball is in no way necessarily part of a world model and, for our purposes, in no way analogous to making a value judgement.
Here is a quote from the article:
Lukeprog thinks that effective altruism is good, and this is a value judgement. Obviously, most of mainstream society doesn’t agree—people prefer to give money to warm fuzzy causes, like “adopt an endangered panda”. So that value judgement is certainly contrarian.
Presumably, lukeprog also believes that “lukeprog thinks effective altruism is good”. This is a fact in his world model. However, most people would agree with him when asked if that is true. We can see that lukeprog likes effective altruism. There’s no reason for anyone to claim “no, he doesn’t think that” when he obviously does. So this element of his world model is not contrarian.
I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?
One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one’s own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I’m sure you appreciate how fundamentally silly this is, but maybe you could take a little time to meditate on it some more.
Sorry if my tone is a little condescending, but understand that you have totally failed to support your initial claim that I was confused.
That’s not at all what I meant. Obviously minds and brains are just blobs of matter.
You are conflating the claims “lukeprog thinks X is good” and “X is good”. One is an empirical claim, one is a value judgement. More to the point, when someone says “P is a contrarian value judgement, not a contrarian world model”, they obviously intend “world model” to encompass empirical claims and not value judgements.
I’m not conflating anything. Those are different statements, and I’ve never implied otherwise.
The statement “X is good,” which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.
“X is good” is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that “X is good” is a claim about physical matter, and therefore part of the world model of anybody who believes it.
If there is some particular point or question I can help with, don’t hesitate to ask.
If “X is good” were simply an empirical claim about whether an object conforms to a person’s values, people would frequently say things like “if my values approved of X, then X would be good” and would not say things like “taking a murder pill doesn’t affect the fact that murder is bad”.
Alternative: what if “X is good” were a mathematical claim about the value of a thing according to whatever values the speaker actually holds?
If that is your basis for a scientific standard, then I’m afraid I must withdraw from this discussion.
Ditto, if this is your idea of humor.
That’s just silly. What if “c = 299,792,458 m/s” were a mathematical claim about the speed of light, according to what the speed of light actually is? May I suggest that you not invent unnecessary complexity to disguise the demise of a long-deceased argument.
No further comment from me.
My theory is that the dualistic theory of mind is an artifact of the lossy compression algorithm which, conveniently, prevents introspection from turning into infinite recursion. Lack of neurosurgery in the environment of ancestral adaptation made that an acceptable compromise.
I quite like Bob Trivers’ self-deception theory, though I have only a tangential acquaintance with it. We might anticipate that self-deception is harder if we are inclined to recognize the bit we call “me” as caused by some inner mechanism; hence it may be profitable to suppress that recognition, if Trivers is on to something.
Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self-analysis, and you’re quite possibly on to something: the computational overhead just doesn’t pay off.