I think a lot of words have been spent on this debate elsewhere, but all I feel like citing is Biases Against Overcoming Bias. The point it mentions about costs accruing mostly to you is related to the point you made about group rationality. The point about not knowing how to evaluate whether epistemic rationality is useful without first developing epistemic rationality is perhaps intended as little more than a cute retort, but I take it fairly seriously; it seems to apply to specific examples I encounter.
My recent view on this is mostly: but if you actually look, doesn’t it seem really useful to be able to separate these concerns? Overwhelmingly so?
Both you and Jessica seem to have interpreted me as arguing against separation of concerns w.r.t. epistemic and instrumental rationality, which wasn’t really my intention. I’m actually highly in favor of separation of concerns in this regard, and was just reporting a train of thought that was triggered by your “Separation of concerns is a principle in computer science” statement.
I didn’t follow much of the earlier debates about instrumental vs epistemic rationality (in part because I think something like curiosity/truth/knowledge is part of my terminal values so I’d personally want epistemic rationality regardless) so apologies if I’m retreading familiar ground.
Yeah, I later realized that my comment was not really addressing what you were interested in.
I read you as questioning the argument “separation of concerns, therefore, separation of epistemic vs instrumental”—not questioning the conclusion, which is what I initially responded to.
I think separation-of-concerns just shouldn’t be viewed as an argument in itself (i.e., identifying some concerns you can distinguish between does not by itself mean you should separate them). That conclusion rests on many other considerations.
Part of my thinking in writing the post was that humans have a relatively high degree of separation between epistemic and instrumental even without special scientific/rationalist memes. So, you can observe the phenomenon, take it as an example of separation-of-concerns, and think about why that may happen without thinking about abandoning evolved strategies.
Sort of like the question “why would an evolved species invent mathematics?”—why would an evolved species have a concept of truth? (But, I’m somewhat conflating ‘having a concept of truth’ and ‘having beliefs at all, which an outside observer might meaningfully apply a concept of truth to’.)