I… don’t really understand the problems you’re having. There is a distinction between empirical and normative—it looks pretty clear-cut to me. You are either describing reality as it is (well, as you see it) or you are saying what should be and might specify an intervention to change the world to your liking. Of course in a single text you could be doing both, but these two classes of statements could (and usually should) be disentangled.
Similarly, when you are building models, there is a difference between explanatory models—which aim to correctly represent the causal structure of what’s happening—and predictive models which aim to generate good predictions. They are not necessarily the same (see e.g. this).
The question of how well-defined a model you can build in the social sciences is indeed a very interesting one, but it seems to me the answers will be very context-dependent. Economists will use more numbers, historians will use more words, but what kind of general answer to this question do you think is possible?
I think you’re right that the distinction is typically clear cut and useful to make. What I want to avoid (although I’m not sure I was successful) is simply being nihilistic and making a refined version of the boring argument “what do words even mean?!”.
The area I’m interested in is where that distinction grows blurry. Normative arguments always have to embed an accurate representation of reality, and a correct prediction that they will actually work. And positive descriptions of reality frequently imply a natural or optimal result.
For example, some guy like Marx says “I’ve been thinking for a few decades, I have predicted the optimal state of human interaction. This map of the world clearly suggests we should move towards it.” He then writes a manifesto to encourage it. The normative part of his argument seems to come trivially from the positive explanation of the world. So to that extent it’s not possible for me to agree with his positive argument while thinking only his normative conclusion takes it too far; if they are wrong, they are both wrong in the same way.
Or another way to say it, I think it’s very rare that people share the same positive view of the world, but disagree normatively. Our normative disagreements almost always come from a different map of the world, not from the same map with different preferences. Obviously I can’t prove, or even test this, so I’m posting it here as an uncertain thought, not something I’m going to strongly defend. I know Aumann sort of proved something like this with his agreement theorem, though he only modeled two Bayesian agents. So everything his model can’t explain could be called normative, I guess?
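To make the intuition concrete, here is a toy sketch (not Aumann's theorem itself, which concerns common priors and common knowledge) of two Bayesian agents with very different priors updating on the same shared evidence. The coin likelihoods (0.8 vs. 0.3) and the priors are made-up numbers for illustration; the point is just that shared data washes out the prior disagreement, so a disagreement that *survives* shared evidence must come from somewhere other than the empirics.

```python
# Toy sketch: two Bayesian agents with different priors over a binary
# hypothesis H update on the same shared evidence (coin flips whose
# heads-rate depends on H). Their posteriors converge, so any lasting
# disagreement must come from something other than the shared data.
import random

def posterior(prior, data, p_heads_if_h=0.8, p_heads_if_not_h=0.3):
    """P(H | data): multiply in the likelihood of each observed flip."""
    p_h, p_not = prior, 1.0 - prior
    for heads in data:
        p_h *= p_heads_if_h if heads else 1 - p_heads_if_h
        p_not *= p_heads_if_not_h if heads else 1 - p_heads_if_not_h
    return p_h / (p_h + p_not)

random.seed(0)
# Evidence generated under H (heads-rate 0.8), shared by both agents.
data = [random.random() < 0.8 for _ in range(200)]

alice = posterior(0.9, data)  # Alice starts confident in H
bob = posterior(0.1, data)    # Bob starts sceptical of H

print(alice, bob)  # both end up close to 1 despite very different priors
```

With 200 shared observations, the likelihood ratio dwarfs the 9-to-1 gap in prior odds, and the two posteriors end up nearly identical. Alice and Bob could still disagree about what to *do*, which would be a purely normative disagreement in the sense discussed above.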
In reality it’s still a useful distinction. As I said, I don’t want to be annoyingly nihilistic or anything.
Note: Will read the rest of that paper later. Looks very interesting and relevant though, so thanks for sharing.
Normative arguments always have to embed an accurate representation of reality, and a correct prediction that they will actually work.
They only have to claim this. Many merely imply this without bothering to provide arguments.
For example, some guy like Marx says “I’ve been thinking for a few decades, I have predicted the optimal state of human interaction.
And that’s precisely the point where the disentangling of the empirical and the normative rears up and shouts: Hold on! What is this “optimal” thing? Optimal for whom, how, and according to which values?
The normative part of his argument seems to come trivially from the positive explanation of the world.
I don’t think so. Marx thought the proletarian revolution to be inevitable, and that is NOT a normative statement. He also thought it to be a good thing, which is normative, but those are two different claims.
I think it’s very rare that people share the same positive view of the world, but disagree normatively.
Oh, I think it happens all the time: Should we go eat now or in an hour? Alice: Now. Bob: In an hour. That’s a normative disagreement without any sign of different empirics.
In more extended normative arguments people usually feel obliged to present a biased picture of the world to support their conclusions, but if you drill down it’s not uncommon to find that two different people agree on what the world is, but disagree about the ways it should be… adjusted.