Because you wrote one sentence without actually giving the argument. So I went with my prior on your argument.
That’s what I’m suggesting you not do.
Writing out arguments, and in general, making one’s thought processes transparent, is a lot of work. We benefit greatly by not having a norm of only stating conclusions that are a small inferential distance away from public knowledge.
I’m not saying you should (necessarily) believe what I say, just because I say it. You just shouldn’t jump to the conclusion that I don’t have justifications beyond what I have stated or am willing to bother stating.
Cf. Jonah’s remark:
If I were to restrict myself to making claims that I could substantiate in a mere ~2 hours, that would preclude the possibility of me sharing the vast majority of what I know.
I’m not saying you should (necessarily) believe what I say, just because I say it.
If I’m not going to believe what you say, why even bother saying it in the first place? Isn’t just saying things “a lot of work”, too?
Writing out arguments, and in general, making one’s thought processes transparent, is a lot of work.
Guess what, verifying arguments that haven’t been written out transparently is a lot more work! And it’s often a requirement if what you say is to be useful at all. It is precisely when inferential distances are long that clarifying one’s argument becomes critically important!
Well, if your justifications are truly marvelous but the margin of this post is too narrow to contain them, you are basically asking everyone to trust you that you know what you’re talking about. This makes it an argument by reputation (or, in a slightly more pronounced form, an argument by authority).
I am fairly confident that you have justifications you haven’t bothered stating. But that’s not the question; the question is whether those justifications are good ones, and that is a much more complicated matter.
You don’t seem to be engaging with what I said in the grandparent at all. The claim was:
We benefit greatly by not having a norm of only stating conclusions that are a small inferential distance away from public knowledge
Maybe you disagree with this, but you don’t even explicitly state disagreement; your comment just looks like an attempt to enforce the very norm that I claimed was undesirable.
I have often been bothered by that norm myself, especially on Less Wrong, but it’s not clear what you’re proposing to put in its place. Given that human beings are nowhere near the ideal reasoners to whom Aumann’s agreement theorem applies, if you state something very far from what other people think, you cannot expect any sudden change in their probability estimates. At best, they will just ignore you.
If you’re simply saying that people should assume you have reasons, they probably do assume that. But if you say something they think is wrong, they will just assume your reasons are bad ones. It is not clear why or how you can prevent them from doing that, since you probably do the same thing to them.
“Conclusions that are at a huge inferential distance” doesn’t look to me like a useful category. It includes both quantum physics and the lizardmen-are-secretly-ruling-the-Earth theory.
You (and anyone else) can, of course, offer such conclusions. But I don’t know why you would expect them to necessarily be taken seriously. How do you suggest people filter out rank crackpottery?
How do you distinguish claims in advanced physics from claims about lizardmen? There are ways of judging the meaningfulness and truth of conclusions that you can’t yet understand or verify. There do exist experts who know things that you don’t yet know, but whom you can identify as having expertise about those claims. Having a norm against mentioning such claims is an arbitrary restriction on the kinds of considerations that can be used to think or argue about a point.
How do you distinguish claims in advanced physics from claims about lizardmen?
I can buy books and read papers about advanced physics that will outline the arguments in support of these claims from first principles. In a pinch, I could even refrain from verifying the claims myself, and simply trust that others have done so competently. None of this is true when a claim is simply unsupported!