Well, it could be a good thing, if you noticed an anti-correlation between your no-good-reason opinions and whatever the best-reasoned opinion is.
It doesn’t just depend on the accuracy of the opinions themselves, but also on how the recipient weighs the information.
If you have a certainty of 55% on a binary (A) or (B) question, and you respond “it seems to me there’s a 55% chance of Yes”, I wouldn’t be surprised if a non-perfect Bayesian recipient treated this as if you had stated a certainty of 75%.
In this situation, you’re better off just saying “I don’t know” (essentially 50%), which doesn’t bias the result in the wrong direction.
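A quick sketch of why this holds (my own illustration, not from the original): compare the recipient's expected Brier score when the true probability of Yes is 0.55 and their resulting belief is 0.55 (taken at face value), 0.50 (“I don’t know”), or 0.75 (over-weighted). The specific numbers are just the ones used above.

```python
# Expected Brier score of a recipient who ends up with belief q,
# when the true probability of "Yes" is p = 0.55.
def expected_brier(q, p=0.55):
    # With probability p the outcome is Yes (1), else No (0).
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

face_value = expected_brier(0.55)  # recipient takes 55% at face value
neutral    = expected_brier(0.50)  # you say "I don't know"
inflated   = expected_brier(0.75)  # recipient over-weights your 55% to 75%

print(face_value, neutral, inflated)  # 0.2475 < 0.25 < 0.2875
```

Taking the 55% at face value is best, but if stating it gets inflated to 75%, the over-weighted belief scores worse than a flat “I don’t know”.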
Now if you believe the person you talk to will NOT put more weight on your information than they should, then sure, you should feel free to speak.
Hence my useful phrases; in practice they significantly diminish the other person’s estimate of my certainty.