I think I’m willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though seemingly true, it feels like it’s something that people who try to get you to commit bad epistemic moves like to raise [1]. I’m thinking here of post-rationalist lines of thought (though I can’t claim overmuch familiarity with them) or the perennial debates over whether it’s ever okay to deceive yourself. Whether or not they do so doesn’t make it less true, however.
Questions over how quickly to aim for consensus, or how long to entertain new and strange ideas, seem very important. There’ve been recent debates about this kind of thing on LessWrong, particularly about entertaining as-yet-not-completely-justified-in-the-standard-frame things. It does seem like getting the correct balance between Babble vs Prune is an empirical question.
Allowing questions of motivation to factor into one’s truth-seeking process feels most perilous to me, mostly because it seems too easy to claim that one’s motivation will be adversely affected in order to justify any desired behavior. I don’t deny that certain moves might destroy motivation, but the risks of allowing such a fear to justify changing behavior seem much worse. Granted, that’s an empirical claim I’m making.
[1] Or at least it feels that way, because it’s so easy to assert that something is useful and therefore justified despite violating what seem like the correct rules. By insisting on usefulness, one can seemingly defend any belief or model. Crystals, astrology, who knows what. Though maybe I merely react poorly to what seem like heresies.
I think I’m willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though seemingly true, it feels like it’s something that people who try to get you to commit bad epistemic moves like to raise [1].
There’s a tricky balance to maintain here. On one hand, we don’t want to commit bad epistemic moves. On the other hand, failing to acknowledge the empirical basis of something when the evidence of its being empirical is presented is itself a bad epistemic move.
With epistemic dangers, I think there is a choice between “confront” and “evade”. Both are dangerous. Confronting the danger might harm you epistemically, and is frequently the wrong idea — like “confronting” radiation. But evading the danger might harm you epistemically, and is also frequently wrong — like “evading” a treatable illness. Ultimately, whether to confront or evade is an empirical question.
Allowing questions of motivation to factor into one’s truth-seeking process feels most perilous to me, mostly because it seems too easy to claim that one’s motivation will be adversely affected in order to justify any desired behavior. I don’t deny that certain moves might destroy motivation, but the risks of allowing such a fear to justify changing behavior seem much worse. Granted, that’s an empirical claim I’m making.
One good test here might be: Is a person willing to take hits to their morale for the sake of acquiring the truth? If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and are instead trading off too hard against their epistemics. Another good test might be: If the person avoids useful behavior X in order to maintain their motivation, do they have a plan to get to a state where they won’t have to avoid behavior X forever? If not, that might be a cause for concern.

Very much so.
With epistemic dangers, I think there is a choice between “confront” and “evade”.
Not a bid for further explanation, just flagging that I’m not sure what you actually mean by this, as in which concrete moves correspond to each.
If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and are instead trading off too hard against their epistemics.
To me the empirical question is whether a person ought to be willing to take all possible hits to their morale for the sake of their epistemics. I have a consequentialist fear (and I think consequentialist means we’re necessarily talking empiricism) that any exceptions/compromises may be catastrophic.
. . .
It’s possible there’s a kind of meta-debate going on here, with some people (including me) sometimes having underlying consequentialist/empirical beliefs that even engaging in consequentialist/empirical arguments about trading off against epistemics would have overall bad consequences, and/or an empirical belief that anyone who would readily offer such arguments must not really care about epistemics because they’re not [naively] treating them as sacred enough [1].
I hadn’t formulated this in that way before, so I’m glad this post/discussion has helped me realize that arguably “it’s consequentialism/empiricism all the way up”, even if you ultimately claim that your epistemological consequentialism cashes out to some inviolable deontological rules.
[1] Not treating them as sacred enough, therefore they don’t really care, therefore they can’t be trusted: this is my instinctive reaction when I encounter, say, post-rationalist arguments about needing to consider what’s useful, not just what’s true. Maybe it’s not always fair.
. . .
I had a revealing exchange with someone a few months ago about conversation norms on LessWrong. I was arguing for the necessity of considering the consequences of one’s speech and how that should factor into how one speaks. In the course of that debate, they said [paraphrasing]:
“You’re trying to get me to admit that I sometimes trade off things against truth, and once I’ve admitted that, we’re just ‘haggling over price’. Except, no.”
I think this response was a mistake, not least because their rigidity meant we couldn’t discuss the different consequences of different policies, or even what tradeoffs I thought I was making (fewer than they did). That discussion felt different from this post because it was mostly about what you say to others and how, but I see the analogy even when you’re considering how people individually think.
So, I may maintain my suspicions, but I won’t say “except, no.”
arguments about needing to consider what’s useful, not just what’s true.
Absent examples it’s not clear how these trade off against each other. It seems like what’s useful is a subset of what’s true: offhand, I don’t know what color of flame is produced if cesium is burned (or what cesium is, whether it burns, whether the fumes would be harmful, etc.), but if I thought that might be useful knowledge in the future I’d seek it out.
One important question is:

Say we’re in the inconvenient world where it’s important to have lots of babble, yet it is also the case that lots of babble is dangerous for the epistemic health of both groups and individuals (i.e. the strategy with the highest expected payoff is “try lots of questionable thinking, some of which outputs the most important stuff, but most of which is useless or harmful”)...
...what do you do, if you want to succeed as an individual or a group?
(I don’t have a good answer right now and will be thinking about it. I have some sense that there are norms that are reasonable for “how to flag things with the right epistemic status, and how much to communicate publicly”, which might navigate the tradeoff reasonably)
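To make the structure of that “inconvenient world” concrete, here is a toy expected-value sketch in Python. The numbers are entirely invented for illustration; the only point is that a heavy-tailed payoff distribution can make the high-babble strategy best in expectation even while most individual outputs are useless or mildly harmful.

```python
# Toy illustration of the "inconvenient world": invented numbers, not data.
# Most babble-ideas are useless or mildly harmful; a rare one is very valuable.
p_breakthrough, v_breakthrough = 0.02, 100.0   # rare, very valuable
p_useless, v_useless = 0.90, 0.0               # the common case
p_harmful, v_harmful = 0.08, -10.0             # epistemic damage

ev_babble_idea = (p_breakthrough * v_breakthrough
                  + p_useless * v_useless
                  + p_harmful * v_harmful)       # = 2.0 + 0.0 - 0.8 = 1.2 per idea
ev_careful_idea = 0.5                            # assumed payoff of only-well-vetted thinking

print(ev_babble_idea > ev_careful_idea)          # True: babble "wins" in expectation,
                                                 # even though 98% of its outputs were not wins
```

Under these made-up numbers the high-babble strategy has the higher expected payoff, which is exactly what makes “what do you do?” an uncomfortable question rather than an easy one.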
I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.
My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as “probably wrong, just an idea”, and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
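As a concrete (and entirely hypothetical) sketch of what part (2) could look like in practice: log each tagged claim, record how it eventually resolved, and read off per-person, per-tag track records. Nothing here is anyone’s actual tooling; names like `TagTracker` are made up for illustration.

```python
from collections import defaultdict
from typing import Optional


class TagTracker:
    """Track how claims under each (person, tag) pair eventually resolve.

    'resolved_true' is a coarse stand-in for "the idea turned out to be
    correct or illuminating"; a real system would want a richer outcome scale.
    """

    def __init__(self) -> None:
        # (person, tag) -> [claims that resolved true, total resolved claims]
        self._counts = defaultdict(lambda: [0, 0])

    def record(self, person: str, tag: str, resolved_true: bool) -> None:
        counts = self._counts[(person, tag)]
        counts[0] += int(resolved_true)
        counts[1] += 1

    def track_record(self, person: str, tag: str) -> Optional[float]:
        """Fraction of this person's claims under this tag that resolved true."""
        hits, total = self._counts[(person, tag)]
        return hits / total if total else None


tracker = TagTracker()
tracker.record("person_x", "probably wrong, just an idea", resolved_true=True)
tracker.record("person_x", "probably wrong, just an idea", resolved_true=True)
tracker.record("person_x", "probably wrong, just an idea", resolved_true=False)
print(tracker.track_record("person_x", "probably wrong, just an idea"))  # ~0.67
```

The point is just that a tag like “probably wrong, just an idea” from person X can be given a measured meaning rather than a nominal one, which is what lets people sync up on what the tags actually convey.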
In cases where it looks like people irrationally update on a proposition, even with appropriate tags, it might be better to not discuss that proposition (or discuss in a smaller, safer group) until it has achieved adequately good epistemic status.
I actually disagree that lots of babble is necessary. One of the original motivations for Mazes and Crayon was to show, in an algorithmic context, what some less babble-based strategies might look like.
My own intuition on the matter comes largely from hard math problems. Outside of intro classes, if you sit down to write a proof without a pre-existing intuitive understanding of why it works, you’ll math-babble without getting any closer to a proof. I’ve spent weeks at a time babbling math, many times, with nothing to show for it. It reliably does not work on hard problems.
Something like babbling is still necessary to build intuitions, of course, but even there it’s less like random branching and more like A* search.
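For readers without A* cached: the contrast being drawn is roughly between expanding thoughts at random and always expanding whichever partial path currently looks best under cost-so-far plus a heuristic guess at remaining distance. A minimal, purely illustrative sketch (a toy grid search, not a model of mathematical intuition):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A*: always expand the frontier node minimizing cost-so-far + heuristic."""
    frontier = [(heuristic(start), 0, start, [start])]  # (priority, cost, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors(node):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None

# Toy problem: walk from (0, 0) to (9, 9) on a 10x10 grid.
def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

manhattan = lambda p: abs(p[0] - 9) + abs(p[1] - 9)
print(len(a_star((0, 0), (9, 9), grid_neighbors, manhattan)))  # 19 nodes, little wandering
```

Random branching would visit a large fraction of the grid before stumbling onto the goal; the heuristic is what keeps the search pointed somewhere. That is the sense in which intuition-building babble is still directed rather than random.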
I was not making a claim about how much babble is necessary, just noting that if it were necessary, we’d want a good way to handle that fact. (My primary motivation here was a worry that people might equate “high babble” with “low epistemic standards” and stop there, and I wanted to make sure the conversation had a proper line of retreat.)
That said, I think I might have been using “babble” as shorthand for a different concept than the one you were thinking of (and I obviously do suspect the concept I mean is at least plausibly important enough to be worth entertaining this line of thought).
There’s a type of thinking I (now) call “GPT2 style thinking”, where I’m just sort of pattern matching nearby thoughts based on “what sort of things I tend to think/say in this situation”, without much reflection. I sometimes try to use this while programming and it’s a terrible idea.
Was that the sort of thing you were thinking? (If so that makes sense, but that’s not what I meant)
The thing I’m thinking of is… not necessarily more intentional, but a specific type of brainstorming. It’s more for exploring new ideas, combining ideas together, and following hunches about things being important. (This might not be the best use of the term “babble”; if so, apologies.)
Ah yeah, makes sense on a second read.
Now I’m curious, but not yet sure what you mean. Could you give an example or two?