I don’t think it works if there isn’t a correct answer, e.g. predicting the future, but I’m positive this is a good way to improve how convincing your claims are to others.
If there isn’t ground truth about a claim to refer to, any disagreement around the claim is going to be about how convincing and internally/externally consistent it is. As we keep learning from prediction markets, good rationales don’t always lead to correctness — there are many cases of good heuristics (priors) doing extremely well on their own.
If you want to be correct, good reasoning is often a nice-to-have, not a need-to-have.