Okay, this seems like a crux of our disagreement. This statement seems pretty much equivalent to my statement #1 in almost all practical contexts. Can you point out how you think they differ?
This stuff is definitely a bit tricky to talk about, since people can use the word “should” in different ways. I think that sometimes when people say “You should do X if you want Y” they do basically just mean to say “If you do X you will receive Y.” But it doesn’t seem to me like this is always the case.
A couple examples:
1. “Bayesian updating has a certain asymptotic convergence property, in the limit of infinite experience and infinite compute. So if you want to understand the world, you should be a Bayesian.”
If the first and second sentence were meant to communicate the same thing, then the second would be totally vacuous given the first. Anyone who accepted the first sentence could not intelligibly disagree with or even really consider disagreeing with the second. But I don’t think that people who say things like this typically mean for the second sentence to be vacuous or typically regard disagreement as unintelligible.
Suppose, for example, that I responded to this claim by saying something like: “I disagree. Since we only have finite lives, asymptotic convergence properties don’t have direct relevance. I think we should instead use a different ‘risk averse’ updating rule that, for agents with finite lives, more strongly reduces the likelihood of ending up with especially inaccurate beliefs about key features of the world.”
The speaker might think I’m wrong. But if the speaker thinks that what I’m saying constitutes intelligible disagreement with their claim, then it seems like this means their claim is in fact a distinct normative one.
2. (To someone with no CS background) “If you want to understand the world, you should be a Bayesian.”
If this sentence were meant to communicate the same thing as the claim about asymptotic convergence, then the speaker shouldn’t expect the listener to understand what they’re saying (even if the speaker has already explained what it means to be a Bayesian). Most people don’t naturally understand or care at all about asymptotic convergence properties.
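As an aside, the convergence property invoked in example 1 can be made concrete with a toy simulation (a minimal sketch, not part of the original exchange): a Bayesian agent with a uniform Beta prior observes flips of a coin with an unknown bias, and its posterior mean approaches the true bias as observations accumulate.

```python
import random

# Toy illustration of Bayesian convergence: conjugate Beta-Bernoulli
# updating on coin flips from a coin with unknown bias.
random.seed(0)

true_bias = 0.7
alpha, beta = 1.0, 1.0  # Beta(1, 1), i.e. a uniform prior over the bias

for _ in range(10_000):
    if random.random() < true_bias:
        alpha += 1  # observed heads
    else:
        beta += 1   # observed tails

# Posterior mean of the bias after 10,000 observations.
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # close to the true bias of 0.7
```

Note that this only shows convergence for a cheap conjugate model with unlimited data; it says nothing by itself about what finite, compute-limited agents should do, which is exactly the gap the “risk averse” objection above exploits.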
1. “Bayesian updating has a certain asymptotic convergence property, in the limit of infinite experience and infinite compute. So if you want to understand the world, you should be a Bayesian.”
If the first and second sentence were meant to communicate the same thing, then the second would be totally vacuous given the first.
I was a little imprecise in saying that they’re exactly equivalent—the second sentence should also have an “in the limit of infinite compute” qualification. Or else we need a hidden assumption like “These asymptotic convergence properties give us reason to believe that even low-compute approximations to Bayesianism are very good ways to understand the world.” This is usually left implicit, but it allows us to think of “if you want to understand the world, you should be (approximately) a Bayesian” as an empirical claim, not a normative one. For this to actually be an example of normativity, it needs to be the case that some people consider this hidden assumption unnecessary and would endorse claims like “You should use low-compute approximations to Bayesianism because Bayesianism has certain asymptotic convergence properties, even if those properties don’t give us any reason to think that low-compute approximations to Bayesianism help you understand the world better.” Do you expect that people would endorse this?
Hmm, I think focusing on a simpler case might be better for getting at the crux.
Suppose Alice says: “Eating meat is the most effective way to get protein. So if you want to get protein, you should eat meat.”
And then Bob, an animal welfare person, responds: “You’re wrong, people shouldn’t eat meat no matter how much they care about getting protein.”
If Alice doesn’t mean for her second sentence to be totally redundant—or if she is able to interpret Bob’s response as an intelligible (if incorrect) statement of disagreement with her second sentence—then that suggests her second sentence actually constitutes a substantively normative claim. Her second sentence isn’t just repeating the same non-normative claim as the first one.
I definitely don’t think that all “If you want X, do Y” claims are best understood as normative claims. It’s possible that when people make claims of this form about Bayesianism, and other commonly discussed topics, they’re not really saying anything normative. But a decent chunk of statements of this form do strike me as difficult to interpret in non-normative terms.
If Alice doesn’t mean for her second sentence to be totally redundant—or if she is able to interpret Bob’s response as an intelligible (if incorrect) statement of disagreement with her second sentence—then that suggests her second sentence actually constitutes a substantively normative claim.
I don’t think you can declare a sentence redundant without also considering the pragmatic aspects of meaning. In this example, Alice’s second sentence is a stronger claim than the first, because it again contains an implicit clause: “If you want to get protein, and you don’t have any other relevant goals, you should eat meat.” Or maybe it’s more like “If you want to get protein, and your other goals are standard ones, you should eat meat.”
Compare: Alice says “Jumping off cliffs without a parachute is a quick way to feel very excited. If you want to feel excited, you should jump off cliffs without a parachute.” Bob says “No you shouldn’t, because you’ll die.” Alice’s first sentence is true, and her second sentence is false, so they can’t be equivalent—but both of them can be interpreted as goal-conditional empirical sentences. It’s just the case that when you make broad statements, pragmatically you are assuming a “normal” set of goals.
If she is able to interpret Bob’s response as an intelligible (if incorrect) statement of disagreement with her second sentence
It’s not entirely unintelligible, because Alice is relying on an implicit premise of “standard goals” I mentioned above, and the reason people like Bob are so outspoken on this issue is because they’re trying to change that norm of what we consider “standard goals”. I do think that if Alice really understood normativity, she would tell Bob that she was trying to make a different type of claim to his one, because his was normative and hers wasn’t—while conceding that he had reason to find the pragmatics of her sentence objectionable.
Also, though, you’ve picked a case where the disputed statement is often used both in empirical ways and in normative ways. This is the least clear sort of example (especially since, pragmatically, when you repeat almost the same thing twice, it makes people think you’re implying something different). The vast majority of examples of people using “if you want..., then you should...” seem clearly empirical to me—including many that are in morally relevant domains, where the pragmatics make their empirical nature clear:
A: “If you want to murder someone without getting caught, you should plan carefully.”
B: “No you shouldn’t, because you shouldn’t murder people.”
A: “Well obviously you shouldn’t murder people, but I’m just saying that if you wanted to, planning would make things much easier.”