When evidence is lacking, the testing process is difficult, weird, and lengthy, and in light of the 'saturation' mentioned in [5.1], I claim that in most cases the cost-benefit analysis will result in the decision to ignore the claim.
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but no evidence for it and no easy way of convincing anyone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim; I'd work my way through a bunch of other, false claims instead.
Evidence, in the general sense of "some way of filtering out the false claims", can take many forms. For example, I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect, and it has benefits you'll find worth the cost of trying it out", I believe them.
Is that a bad thing?
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.
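The lottery arithmetic can be made concrete with a quick expected-value sketch. The ticket price, jackpot, and odds below are made-up illustrative figures, not real lottery data:

```python
# Expected value of a lottery ticket, with hypothetical numbers.
ticket_price = 2.00          # cost to play (illustrative)
jackpot = 100_000_000        # prize if you win (illustrative)
p_win = 1 / 300_000_000      # probability of winning (illustrative)

expected_winnings = p_win * jackpot
net_expected_value = expected_winnings - ticket_price

# The ticket costs more than its expected winnings, so on average
# the non-player ends up with more money than the player.
print(f"expected winnings per ticket: ${expected_winnings:.2f}")
print(f"net expected value of playing: ${net_expected_value:.2f}")
```

With these numbers, each ticket returns about $0.33 on average against a $2.00 price, so skipping the lottery is the money-maximizing move even though it guarantees you never win.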
The situation you’re describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you’re at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you’re also not paying the opportunity cost of trying out many unlikely ideas, most of which don’t pan out. Overall, you’re better off, because you have more time to pursue more promising ways to satisfy your goals.
(And if you're not better off overall, there's a different problem. Are you consistently underestimating how useful unlikely, effort-intensive fringe beliefs might be, if they were true? Then yes, that's a problem that can be solved by trying out more such beliefs. But it's separate from the problem of "you don't try things that look like they aren't worth the opportunity cost.")