Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:
Measure twice before you cut the steel
Double-check your fittings before you test the engine
Keep track of which direction the axle is supposed to be turning for the type of engine you are making
etc.
The point of the guidelines isn’t to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable mistakes. The guidelines are about helping people move past those mistakes so they can actually build an engine that has a chance of working.
The point of “rationalist guidelines” isn’t to enforce a norm of holding particular types of beliefs. They exist to help groups of people stay connected to reality at all. People make consistent, predictable mistakes. The guidelines are for helping people avoid those mistakes, regardless of what the beliefs in question are.
Well, for one thing, we might reasonably ask whether these guidelines (or anything sufficiently similar to these guidelines to identifiably be “the same idea”, and not just “generic stuff that many other people have said before”) are, in fact, needed in order for a group of people to “stay connected to reality at all”. Indeed we might go further and ask whether these guidelines do, in fact, help a group of people “stay connected to reality at all”.
In other words, you say: “The guidelines are for helping people avoid [consistent, predictable mistakes]” (emphasis mine). Yes, the guidelines are “for” that—in the sense that they are intended to fulfill the stated function. But are the guidelines good for that purpose? It’s an open question, surely! And it’s one that merely asserting the guidelines’ intent does not do much to answer.
But, perhaps even more importantly, we might, even more reasonably, ask whether any particular guideline is any good for helping a group of people “stay connected to reality at all”. Surely we can imagine a scenario where some of the guidelines are good for that, but others aren’t—yes? Indeed, it’s not out of the question that some of the guidelines are good for that purpose, but others are actively bad for it! Surely we can’t reject that possibility a priori, simply because the guidelines are merely labeled “guidelines for rationalist discourse, which are necessary in order to avoid consistent, predictable mistakes, and stay connected to reality at all”—right?
I agree wholeheartedly that the intent of the guidelines isn’t enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?
If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.
An analogy I keep thinking of is the TypeScript vs JavaScript tradeoff when programming with a team. Unless you have a weird special case, it’s just straight-up more useful to work with other people’s code when the type signatures are explicit. There’s less guessing, and therefore fewer mistakes. Yes, there are tradeoffs: you gain better understanding at the slight cost of more verbose implementation code.
The thing is, you pay that cost anyway. You either pay it upfront, and people can make smoother progress with fewer mistakes, or they make mistakes and have to figure out the type signatures the hard way.
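To make the analogy concrete, here is a minimal sketch (the function and field names are hypothetical, purely for illustration) of the guessing that an explicit signature removes:

```typescript
// JavaScript-style: the caller has to read the implementation (or guess)
// to learn what shape `user` is supposed to have.
function greetLoose(user: any): string {
  return `Hello, ${user.name} (${user.role})`;
}

// TypeScript-style: the signature states the contract upfront, so a
// missing or misspelled field is caught before runtime.
interface User {
  name: string;
  role: string;
}

function greetTyped(user: User): string {
  return `Hello, ${user.name} (${user.role})`;
}

const alice: User = { name: "Alice", role: "admin" };
console.log(greetTyped(alice)); // "Hello, Alice (admin)"
```

Both functions do the same work; the difference is whether the cost of understanding the contract is paid once by the author or repeatedly by every caller.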
People either distinguish between their observations and inferences explicitly, or they spend extra time, and make predictable mistakes, until the participants in the discourse figure out the distinction during the course of the conversation. If they can’t, then the conversation doesn’t go anywhere on that topic.
I don’t see any way of getting around this if you want to avoid making dumb mistakes in conversation. Not every change is an improvement, but every improvement is necessarily a change. If we want to raise the sanity waterline and have discourse that more reliably leads to us winning, we have to change things.
If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.
Yes, sure, we shouldn’t throw away the concept; but that’s not at all a reason to start with the presumption that these particular guidelines are any good!
As far as examples go… well, quite frankly, that’s what the OP is all about, right?
An analogy I keep thinking of is the typescript vs javascript tradeoffs when programming with a team.
Apologies, but I am deliberately not responding to this analogy and inferences from it, because adding an argument about programming languages to this discussion seems like the diametric opposite of productive.