I don’t see how you can think that the probability of coming up with a weak argument for a position depends more on whether the position is true than on whether you believe it is true. The arguments clearly aren’t independent: they all share the commonality of supporting the same position. And the validity of your meta-argument depends so heavily on your ability to search argument space and evaluate everything in it that I don’t see how it compares to a single strong argument.
Given a claim, consider the question “What would the world look like if this claim were true?” and try to generate ~4-8 independent predictions. Then, look at the world, and check whether these predictions are borne out. If they are, then by Bayes’ theorem, you can develop some confidence in the truth of the claim. If the predictions are not borne out, then by Bayes’ theorem, you can develop some confidence that the claim is not true.
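The update procedure described above can be sketched as a sequential application of Bayes' theorem, one prediction at a time. This is an illustrative sketch, not anything from the original exchange: the prior and the likelihoods (how probable each prediction is under the claim being true versus false) are made-up numbers you would have to supply yourself, and treating the predictions as independent is exactly the assumption the procedure relies on.

```python
# Illustrative sketch of checking ~4-8 independent predictions and
# updating on each one via Bayes' theorem. All numbers are assumptions.

def update(prior, p_true, p_false, outcomes):
    """Sequentially apply Bayes' theorem for each prediction outcome.

    prior:    initial P(claim is true)
    p_true:   P(a prediction is borne out | claim is true)
    p_false:  P(a prediction is borne out | claim is false)
    outcomes: list of bools, True if that prediction was borne out
    """
    p = prior
    for borne_out in outcomes:
        like_t = p_true if borne_out else 1 - p_true
        like_f = p_false if borne_out else 1 - p_false
        # Bayes' theorem: P(true | evidence) = P(e|true)P(true) / P(e)
        p = p * like_t / (p * like_t + (1 - p) * like_f)
    return p

# Six independent predictions, five borne out and one not:
posterior = update(prior=0.5, p_true=0.8, p_false=0.3,
                   outcomes=[True] * 5 + [False])
```

With these (assumed) likelihoods, predictions mostly borne out push the posterior well above the prior, and predictions mostly falsified push it well below — which is the two-sided behavior the paragraph describes. Note that the independence assumption is doing real work here: if the predictions are correlated, multiplying the likelihoods overstates the evidence.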