But nobody would be that stupid!

Here is a flawed dynamic in group conversations, especially among large groups of people with no common knowledge.
Suppose everyone is trying to build a bridge.
Alice: We could make a bridge by just laying a really long plank over the river.
Bob: According to my calculations, a single plank would fall down.
Carl: Scientists Warn Of Falling Down Bridges, Panic.
Dave: No one would be stupid enough to design a bridge like that; we will make a better design with more supports.
Bob: Do you have a schematic for that better design?
And, at worst, the cycle repeats.
The problem here is Carl. The message should be:
Carl: At least one attempt at designing a bridge is calculated to show the phenomenon of falling down. It is probable that many other potential bridge designs share this failure mode. In order to build a bridge that won’t fall down, someone will have to check any design for falling-down behavior before it is built.
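To make this kind of check concrete, here is a minimal sketch, assuming a toy model: a simply supported plank with a single point load at midspan, checked against an allowable bending stress. The helper `plank_fails`, the dimensions, and the material numbers are all hypothetical illustrations, not real engineering data, and the plank’s self-weight is ignored (which would only make things worse).

```python
# Toy check: does a simply supported plank fail in bending under a
# point load at midspan? Illustrative numbers only; self-weight ignored.

def plank_fails(span_m: float, width_m: float, thickness_m: float,
                load_n: float, allowable_stress_pa: float) -> bool:
    """True if the max bending stress exceeds the allowable stress."""
    moment = load_n * span_m / 4                      # max moment, midspan point load
    section_modulus = width_m * thickness_m ** 2 / 6  # rectangular cross-section
    return moment / section_modulus > allowable_stress_pa

# A 30 m plank, 0.3 m wide and 0.05 m thick, carrying one 800 N person;
# ~40 MPa is a rough allowable bending stress for structural timber.
print(plank_fails(30.0, 0.3, 0.05, 800.0, 40e6))  # True: it falls down
```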
This entire dynamic plays out the same whether the people actually deciding on building the bridge are incredibly cautious, never approving a design they weren’t confident in, or totally reckless. The probability of any bridge actually falling down in the real world depends on their caution. But the process of cautious bridge builders finding a good design looks like them rejecting lots of bad ones. If the rejection of bad designs is public, people can accuse you of attacking a strawman; they can say that no one would be stupid enough to build such a thing. If they are right that no one would be stupid enough to build such a thing, it’s still helpful to share the reason the design fails.
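Continuing the same toy sketch (it reuses the hypothetical `plank_fails` above, with the same made-up load and stress limit): the cautious builders’ search is just a filter, and the rejected candidates are most of what an onlooker sees.

```python
# Candidate designs as (span_m, width_m, thickness_m) tuples.
candidates = [
    (30.0, 0.3, 0.05),  # Alice's single long plank
    (30.0, 0.3, 0.08),  # a thicker plank
    (10.0, 0.3, 0.05),  # shorter spans between supports, as Dave suggests
]
approved = [c for c in candidates
            if not plank_fails(*c, load_n=800.0, allowable_stress_pa=40e6)]
print(approved)  # cautious building looks like publicly rejecting the rest
```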
What? In this example, the problem is not Carl—he’s harmless, and Dave carries on with the cycle (of improving the design) as he should. Showing a situation where Carl’s sensationalist misstatement actually stops progress would likely also show that the problem isn’t Carl—it’s EITHER the people who listen to Carl and interfere with Alice, Bob, and Dave, OR it’s Alice and Dave for letting Carl discourage them rather than understanding Bob’s objection directly.
Your description implies that the problem is something else—that Carl is somehow preventing Dave from taking Bob’s analysis into consideration, but your example doesn’t show that, and I’m not sure how it’s intended to.
In the actual world, there’s LOTS of sensationalist bad reporting of failures (and of extremely minor successes, for that matter). And those people who are actually trying to build things mostly ignore it, in favor of more reasonable publication and discussion of the underlying experiments/failures/calculations.