I feel like you’re taking my attempts to explain my position and requiring that each one be a rigorous defense. Sometimes we just have to spend some time trying to understand each other before we can bring the knives out or whatever, yeah? Sorry if I’m guilty of the same thing—I tried to unpack some more details after my flat statement that I thought you were wrong, but it probably came off as just being argumentative.
>>>>If you use a gerrymandered concept, you may have no understanding of the non-gerrymandered versions; or you may have some understanding, but in any case not the fluency to think in them.
>>>I’m not following you any more. Of course unscientific concepts can go wrong—anything can. But if you’re not saying everyone should use scientific concepts all the time, what are you saying?
>>In what you quoted, I was trying to point out the distinction between speaking a certain way vs thinking a certain way. My overall conversational strategy was to try to separate out the question of whether you should speak a specific way from the question of whether you should think a specific way. This was because I had hoped that we could more easily reach agreement about the “thinking” side of the question.
>Arguing against whom? I don’t believe that one’s thinking should be constrained by some narrow set of interests. I have never said it should. On the contrary, I have been arguing against the narrowness of “everything is or should be a passive reflection of statistical regularities in pre-existing reality”.
(Sorry, I just don’t get how this is relevant to the quote you’re apparently responding to; I didn’t use the words ‘arguing against’ there, and was describing my conversational goal, rather than arguing something. So I’m going to try to make some more clarifying remarks which may not answer your question:)
You ask “if you’re not saying everyone should use scientific concepts all the time, what are you saying?”
I have attempted to separately argue the following:
Much of the time, using “unscientific concepts” is a mistake. In particular, by trying to separate thinking vs speaking, I was trying to point out that even in cases where it’s plausible that you are better off speaking in epistemically unhygienic ways, it’s not plausible that you’re better off thinking in those ways: there’s a high cost to pay in not understanding the world. (Note the weak “much of the time” qualifier here—I endorse this point and think it’s important to the discussion, but I’m endorsing a rather weak statement, on purpose.)
Most of the time, using “unscientific concepts” is useful only for manipulative purposes. My argument here is based on the idea that agents with shared goals will communicate in a way which shares as much information as possible (in the bits communicated—IE, modulo communication costs, redundancy built into the language to ensure communication over noisy channels, etc). Therefore, behavior contrary to this must be either uncooperative or simply sub-optimal (a toy sketch after this list of points illustrates the information-sharing claim). This doesn’t mean it’s irrational (a consequentialist might manipulate others), but I presume that you would be less happy to argue in favor of unscientific concepts if you conceded that they were almost always manipulative. Your response to this was to call my argument a “very special case”. I do not concede this; I think it is a very general case. (I do not currently understand why you called it a very special case.)
Very nearly all of the time, it makes sense to separate out pure epistemic quality and consider it as a coherent goal, talk about how to achieve it, etc. (Not pursue it singlemindedly, but distinguish it as a comprehensible thing.) In particular, it makes sense to have this discussion about nearly any statement. I perceive you as having a large disagreement with me about this, thinking that it makes a lot less sense for some statements, EG those about marriage and money.
Some of the time, it makes sense to have a social norm against appeals to consequences (as an argument for changing epistemic stances), in order to safeguard ‘scientific’ thought-processes against distortion. In particular, I think it makes sense on lesswrong. This is not a claim that all conceptual gerrymandering can be eliminated, but rather, that we should make the attempt (at least in specific arenas of discourse).
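To make the information-sharing point in my second claim concrete, here is a minimal toy sketch, assuming a noise-free channel, uniformly random world states, and a fully shared reward; the four-state signaling setup, the encoder names, and the numbers are purely illustrative rather than anything load-bearing. It shows that the code which transmits the most information about the state is also the one that maximizes the shared payoff, while a coarsened encoding loses on both counts.

```python
# Toy cooperative signaling sketch (illustrative assumptions only):
# a sender observes a world state and emits a message; a receiver guesses the
# state from the message; both agents get reward 1 on a correct guess.
import math
import random

STATES = [0, 1, 2, 3]      # world states, drawn uniformly
MESSAGES = [0, 1, 2, 3]    # possible messages (noise-free channel)

def mutual_information(encoder):
    """I(state; message) in bits for a deterministic encoder over uniform states."""
    p_state = 1 / len(STATES)
    p_msg = {m: 0.0 for m in MESSAGES}
    for s in STATES:
        p_msg[encoder[s]] += p_state
    # Deterministic encoder => H(message | state) = 0, so I(S;M) = H(M).
    return -sum(p * math.log2(p) for p in p_msg.values() if p > 0)

def expected_shared_reward(encoder, trials=20000):
    """Monte Carlo estimate of the shared payoff under a best-responding receiver."""
    inverse = {}
    for s in STATES:
        inverse.setdefault(encoder[s], []).append(s)
    hits = 0
    for _ in range(trials):
        s = random.choice(STATES)
        # Receiver picks any state consistent with the message it received.
        guess = random.choice(inverse[encoder[s]])
        hits += (guess == s)
    return hits / trials

honest = {0: 0, 1: 1, 2: 2, 3: 3}   # distinguishes every state
coarse = {0: 0, 1: 0, 2: 1, 3: 1}   # lumps states together, discarding one bit

for name, enc in [("honest", honest), ("coarse", coarse)]:
    print(f"{name}: I(S;M) = {mutual_information(enc):.1f} bits, "
          f"shared reward ~ {expected_shared_reward(enc):.2f}")
# honest: 2.0 bits, reward ~ 1.00; coarse: 1.0 bit, reward ~ 0.50
```

Under these (admittedly strong) assumptions, any departure from the maximally informative code costs both agents equally; that is the sense in which such behavior must be either uncooperative or simply sub-optimal.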
>What has been offered already are the ideas of:
>1) self-fulfilling prophecies, AKA blueprints AKA social constructs
>2) co-ordination.
>3) functionality. Treating a tomato as a vegetable tells you what to do with it for culinary purposes.
>What hasn’t been offered is any reason to think those things don’t exist, or aren’t important, or aren’t useful. My 1) and 2) are Zack’s b) and d). Zack dismissed b) and d) without argument.
I fully conceded #1 earlier in our discussion—I have no qualms with this pathway, and I think it’s important. I don’t think it entails accepting less-accurate beliefs (a self-fulfilling prophecy is, after all, true!), but I do think it entails valid appeals-to-consequences for what might otherwise seem like purely epistemic questions. Furthermore I think this is relatively common.
I fully concede #3, and also perceive Zack as explicitly doing so, as part of his central argument.
I am not trying to defend a norm against #1 or #3, nor am I defending a concept of “pure epistemics” which regards #1 or #3 as impurities, in my own points 1-4 earlier. I think “pure epistemics” without your #1 would be very limited, because it becomes ill-defined in the presence of self-fulfilling prophecies or other predictions which are relevant to their own outcomes. I think “pure epistemics” without your #3 is very nearly useless, due to a lack of focus on useful questions. Both of these things are coherent things to talk about, but not very useful to agents, and therefore less descriptively apt for discussing and understanding agents, and less normatively apt for a community of agents.
As for #2, I think some of this is covered by #1. Everything else, I claim is manipulative, like EG promising a good afterlife if you help build a pyramid in the middle of the desert. Manipulation works, but I continue to presume it’s not what you’re defending when you defend ‘unscientific concepts’.
So I suppose either (a) we can agree on all of that, and don’t have any remaining disagreement, or (b) our main disagreement is with #2, and we should focus on my argument that epistemic impurities are going to be manipulative, or (c) your 1-3 don’t cover all the bases you think are important, and we should talk about what other channels make unscientific concepts useful. (Or perhaps some mix of a-c.)
>I feel like you’re taking my attempts to explain my position and requiring that each one be a rigorous defense.
If someone has made a position clear, they need to move on to defending it at some stage, or else it’s all just opinion.
You clearly think that some concepts lack objectivity... that’s been explained at great length with equations and diagrams... and you think that the very existence of scientific objectivity is in danger. But between these two claims there are any number of intermediate steps that have not been explained or defended.
>Much of the time, using “unscientific concepts” is a mistake
I don’t see why. It’s not a mistake to use special-purpose or value-laden concepts appropriately. So how can it usually be a mistake to use them? Are you saying that they are usually used inappropriately?
>Most of the time, using “unscientific concepts” is useful only for manipulative purposes. My argument here is based on the idea that agents with shared goals will communicate in a way which shares as much information as possible (in the bits communicated—IE, modulo communication costs, redundancy built into the language to ensure communication over noisy channels, etc).
No. If they have shared goals, they will already have a lot of shared information (i.e., small inferential distance) and they will already use a special-purpose jargon.
Special interest groups always have special language. Objective, scientific language is what scientists use, and not that many people are scientists, so it is not the default.
In any case, how is that evidence of manipulation?
>but I presume that you would be less happy to argue in favor of unscientific concepts if you conceded that they were almost always manipulative.
I don’t concede that they are always manipulative, in an objectionable sense. We are at the stage where you need to clarify that.
>Your response to this was to call my argument a “very special case”. I do not concede this; I think it is a very general case. (I do not currently understand why you called it a very special case).
How common is manipulation? If you set the bar on what constitutes manipulation very low, then it is very common, even including this discussion. But if it is very common, how can it be very bad? If you think that all gerrymandered concepts are “manipulative” in the sense of micro-manipulations, where’s the problem?
I think this is a central weakness of your case: you need to choose one of “manipulation is common” and “manipulation is bad”.
>Very nearly all of the time, it makes sense to separate out pure epistemic quality and consider it as a coherent goal, talk about how to achieve it, etc.
Why? And for whom?
>Some of the time, it makes sense to have a social norm against appeals to consequences (as an argument for changing epistemic stances), in order to safeguard ‘scientific’ thought-processes against distortion
Well, if it’s only some of the time, you can achieve that by saying that scientists are special people who do have an obligation to be as objective as possible, but no obligation to be consequentialist. But that’s not novel.
>As for #2, I think some of this is covered by #1. Everything else, I claim is manipulative, like EG promising a good afterlife if you help build a pyramid in the middle of the desert
That seems like a weakman to me. What about cases where coordination is of benefit to the people doing the coordinating...like obeying traffic laws? A speed limit is a gerrymandered concept.