One option might be ‘do the rationalist-ish thing when you’re forced to because it’s decision-relevant; but when you’re just analyzing an interesting intellectual puzzle for fun, don’t do the rationalist-ish thing’
This is the closest to what I was trying to say, but I would scope my criticism even more narrowly. To put it bluntly and briefly: Don’t choose to suspend disbelief for multiple core hypotheses within your argument while simultaneously holding that the final conclusion built on them is objectively likely and has been supported throughout.
The motte of this argument style, that your conclusion is the best you can do given your limited data, is true, and I agree with it. Because of that, this is a genuinely good technique for decision-making in a limited space, as you mention. What I see as the bailey, though, is the claim that your conclusion is actually probable in a real and objective sense, and that you’ve proven it to be so with supporting logic and data; that is what doesn’t follow for me. Because you haven’t falsified anything in an objective sense, there is no guaranteed probability or likelihood that you are correct, and you become more likely to be incorrect each time in your argument you choose to deliberately suspend disbelief for one of your hypotheses in order to carry onward. Confidence levels are numbers you’re applying to your own feelings, not actual odds of correctness, so they can’t be objectively used to calculate your chance of being right overall.
Put another way, in science it is entirely possible and reasonable for a researcher to have an informed hypothesis that multiple hypothetical mechanisms in the world all exist, and that they combine to cause some broader behavior that has so far gone unexplained. But if this researcher were to jump to asserting that the broader behavior is probably happening because of all these hypothetical mechanisms, without first validating each individual hypothesis with falsifiable experiments, we’d label their proposed system of belief pseudoscience. That label would still apply even if their final conclusion turned out to be accurate, because the problem is with the form (assuming multiple mechanisms are real without validating them) rather than the content (the mechanisms themselves). And the problem gets worse the more of these hypothetical but unproven mechanisms need to exist and depend on each other for the researcher’s final conclusion to be true.
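To make that compounding concrete, here is a minimal numeric sketch. The 80% figure and the independence assumption are mine, purely for illustration; real credences and real dependencies between hypotheses would change the exact numbers but not the direction.

```python
# Toy illustration (my numbers, not anyone's actual credences): if a conclusion
# needs n hypotheses to all be true, and each one is granted an 80% subjective
# credence, the credence in the conjunction shrinks quickly. This assumes the
# hypotheses are independent, which real arguments usually aren't.
for n in range(1, 7):
    print(f"{n} linked hypotheses at 0.80 each -> joint credence {0.80 ** n:.2f}")
# 1 -> 0.80, 2 -> 0.64, 3 -> 0.51, 4 -> 0.41, 5 -> 0.33, 6 -> 0.26
```

Positive dependence between the hypotheses softens the decay and negative dependence sharpens it, but the qualitative point stands: every additional unvalidated mechanism the conclusion leans on eats into whatever confidence you started with.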
I hear you on examples, but since I don’t like posts that do this, I unfortunately don’t have any saved to point at. I can go looking for new ones that do this if you think it would still be helpful, though.
To put it bluntly and briefly: Don’t choose to suspend disbelief for multiple core hypotheses within your argument while simultaneously holding that the final conclusion built on them is objectively likely and has been supported throughout.
I agree with what you are saying...but my brief version would be “don’t confuse absolute plausibility with relative plausibility”.