One important question is:
Say we’re in the inconvenient world where it’s important to have lots of babble, yet it is also the case that lots of babble is dangerous for the epistemic health of both groups and individuals (i.e. the strategy with the highest expected payoff is “try lots of questionable thinking, some of which outputs the most important stuff, but most of which is useless or harmful”)...
...what do you do, if you want to succeed as an individual or a group?
(I don’t have a good answer right now and will be thinking about it. I have some sense that there are sensible norms for “how to flag things with the right epistemic status, and how much to communicate publicly” that might navigate the tradeoff reasonably)
I currently think we are in a world where a lot of discussion of near-guesses, mildly informed conjectures, probably-wrong speculation, and so forth is extremely helpful, at least in contexts where one is trying to discover new truths.
My primary solution to this has been (1) epistemic tagging, including coarse-grained/qualitative tags, plus (2) a study of what the different tags actually amount to empirically. So person X can say something and tag it as “probably wrong, just an idea”, and you can know that when person X uses that tag, the idea is, e.g., usually correct or usually very illuminating. Then over time you can try to get people to sync up on the use of tags and an understanding of what the tags mean.
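As a rough sketch of what that empirical tracking might look like in practice (the names, structure, and numbers here are all invented for illustration, not a real system):

```python
# Hypothetical sketch of tag calibration: record how each person's
# tagged claims eventually pan out, then look up their track record
# for a given tag. Everything here is an invented illustration.
from collections import defaultdict


class TagCalibration:
    def __init__(self):
        # (person, tag) -> list of outcomes (True = the claim held up)
        self.records = defaultdict(list)

    def record(self, person: str, tag: str, held_up: bool) -> None:
        self.records[(person, tag)].append(held_up)

    def hit_rate(self, person: str, tag: str) -> float | None:
        outcomes = self.records[(person, tag)]
        if not outcomes:
            return None  # no track record yet for this person/tag pair
        return sum(outcomes) / len(outcomes)


calib = TagCalibration()
calib.record("X", "probably wrong, just an idea", True)
calib.record("X", "probably wrong, just an idea", True)
calib.record("X", "probably wrong, just an idea", False)
# It turns out X's "probably wrong" ideas held up ~67% of the time:
print(calib.hit_rate("X", "probably wrong, just an idea"))
```

The point is just that a tag’s meaning gets grounded in outcomes rather than in the speaker’s self-report, which is what lets listeners update appropriately.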
In cases where it looks like people irrationally update on a proposition, even with appropriate tags, it might be better to not discuss that proposition (or discuss in a smaller, safer group) until it has achieved adequately good epistemic status.
I actually disagree that lots of babble is necessary. One of the original motivations for Mazes and Crayon was to show, in an algorithmic context, what some less babble-based strategies might look like.
My own intuition on the matter comes largely from hard math problems. Outside of intro classes, if you sit down to write a proof without a pre-existing intuitive understanding of why it works, you’ll math-babble without getting any closer to a proof. I’ve spent weeks at a time babbling math, many times, with nothing to show for it. It reliably does not work on hard problems.
Something like babbling is still necessary to build intuitions, of course, but even there it’s less like random branching and more like A* search.
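To make that contrast concrete, here’s a toy A* sketch (the graph, costs, and heuristic values are all invented for illustration): the point is that exploration is ordered by cost-so-far plus an estimate of remaining distance, rather than branching at random.

```python
# Minimal A* over a toy "idea graph", to illustrate the contrast with
# random branching: thoughts are expanded in order of
# cost-so-far + heuristic estimate, not at random.
import heapq

graph = {  # node -> [(neighbor, step_cost)]; invented for illustration
    "hunch": [("lemma A", 2), ("dead end", 1)],
    "lemma A": [("lemma B", 2)],
    "dead end": [],
    "lemma B": [("proof", 1)],
    "proof": [],
}
# Heuristic: estimated remaining distance to "proof" (also invented).
heuristic = {"hunch": 4, "lemma A": 3, "dead end": 9, "lemma B": 1, "proof": 0}


def a_star(start: str, goal: str) -> list[str] | None:
    # Frontier entries: (priority, cost_so_far, node, path)
    frontier = [(heuristic[start], 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step in graph[node]:
            priority = cost + step + heuristic[neighbor]
            heapq.heappush(frontier, (priority, cost + step, neighbor, path + [neighbor]))
    return None


print(a_star("hunch", "proof"))  # ['hunch', 'lemma A', 'lemma B', 'proof']
```

Notice the search never wastes time on “dead end”, because the heuristic ranks it poorly; a random babbler would have explored it with equal probability.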
I was not making a claim about how much babble is necessary – just noting that if it were necessary, we’d want a good way to handle that fact. (My primary motivation here was a worry that people might conclude “high babble == low epistemic standards” and stop there, and I wanted to make sure the conversation had a proper line of retreat.)
That said – I think I might have been using “babble” as shorthand for a different concept than the one you were thinking of (and I obviously suspect the concept I do mean is at least plausibly important enough to be worth entertaining this line of thought).
There’s a type of thinking I (now) call “GPT-2-style thinking”, where I’m just pattern-matching to nearby thoughts based on “what sort of things I tend to think/say in this situation”, without much reflection. I sometimes try to use this while programming, and it’s a terrible idea.
Was that the sort of thing you were thinking? (If so that makes sense, but that’s not what I meant)
The thing I’m thinking of is… not necessarily more intentional, but a specific type of brainstorming. It’s more for exploring new ideas, combining ideas together, and following hunches about what’s important. (This might not be the best use of the term “babble”, and if so, apologies.)
Ah yeah, makes sense on a second read.
Now I’m curious, but not yet sure what you mean. Could you give an example or two?