Based on our rational approach we are at a disadvantage for discovering these truths.
Is that a bad thing?
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don’t see it as a problem.
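To make that arithmetic concrete, here is a minimal sketch; the ticket price, odds, and jackpot are made-up round numbers for illustration, not any real lottery’s figures.

```python
# Toy lottery: every number below is an illustrative assumption.
ticket_price = 2.00            # dollars per ticket
p_win = 1 / 300_000_000        # chance one ticket hits the jackpot
jackpot = 100_000_000          # dollars paid if it does

expected_winnings = p_win * jackpot                # ~$0.33
expected_value = expected_winnings - ticket_price  # ~-$1.67

print(f"expected winnings per ticket: ${expected_winnings:.2f}")
print(f"expected value of buying:     ${expected_value:.2f}")
# The expected value is negative: each ticket loses money on average,
# so the expected-money maximizer never buys.
```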
The situation you’re describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you’re at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you’re also not paying the opportunity cost of trying out many unlikely ideas, most of which don’t pan out. Overall, you’re better off, because you have more time to pursue more promising ways to satisfy your goals.
(And if you’re not better off overall, there’s a different problem. Are you consistently underestimating how useful unlikely fringe beliefs that take lots of effort to test might be, if they were true? Then yes, that’s a problem that can be solved by trying out more fringe beliefs that take lots of effort to test. But it’s a separate problem from the problem of “you don’t try things that look like they aren’t worth the opportunity cost.”)
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets.
Whereas someone who understands advanced probability, particularly the value/utility distinction, might.
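One hypothetical way that can happen, sketched under assumed numbers: if the buyer’s utility has a threshold that only a jackpot can cross (a debt no salary will ever repay, say), then maximizing expected utility can favor the ticket even though expected money opposes it. The step-shaped utility function below is an assumption chosen to make the point, not a claim about real buyers.

```python
# Toy value/utility split: all numbers and the utility function are
# illustrative assumptions.
ticket_price = 2.00
p_win = 1 / 300_000_000
jackpot = 100_000_000
wealth = 50_000
threshold = 10_000_000   # only crossing this matters to this agent

def utility(w):
    # Step utility: 1 if the life-changing threshold is reached, else 0.
    return 1.0 if w >= threshold else 0.0

ev_money = p_win * jackpot - ticket_price    # negative, as before

eu_buy = (p_win * utility(wealth - ticket_price + jackpot)
          + (1 - p_win) * utility(wealth - ticket_price))
eu_skip = utility(wealth)

print(f"expected money from buying: {ev_money:+.2f}")           # < 0
print(f"buying beats skipping in utility: {eu_buy > eu_skip}")  # True
# Expected dollars say "don't buy"; expected utility says the ticket is
# the only available route across the threshold.
```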
The situation you’re describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you’re at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you’re also not paying the opportunity cost of trying out many unlikely ideas, most of which don’t pan out. Overall, you’re better off, because you have more time to pursue more promising ways to satisfy your goals.
So long as you can put a ceiling on possible benefits.
I propose that it is a bad thing.

Your assessment makes the assumption that the knowledge that we are missing is “not that important”. Since we do not know what the knowledge we are missing is, its significance could range from insignificant to essential. We are not at the point where we can make that distinction, so we had better start recognising the problem and working on it. That is my position.
To my eyes your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs. Although I have not formulated a solution (I am currently just describing the problem), I can already see much more efficient ways of navigating the space. I will post when I have something more developed to say about this.
Your assessment makes the assumption that the knowledge that we are missing is “not that important”.
Better to call it a rational estimate than an assumption.
It is perfectly rational to say to oneself, “but if I refuse to look into anything which takes a lot of effort to get any evidence for, then I will probably miss out.” We can put math to that sentiment and use it to help decide how much time to spend investigating unlikely claims. Solutions along these lines are sometimes called “taking the outside view”.
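As a sketch of what that math might look like (every number below is an assumption invented for illustration): treat each hard-to-test claim as a gamble with a base rate of being true, a payoff if it is, and an investigation cost, and keep investigating only while the expected return stays positive.

```python
# Toy outside-view budget: all numbers are illustrative assumptions.
base_rate = 0.02       # fraction of such claims that turn out true
payoff = 500.0         # hours-equivalent benefit if a claim is true
cost_per_claim = 40.0  # hours to test one claim properly
budget = 400.0         # total hours available for fringe claims

ev_per_claim = base_rate * payoff - cost_per_claim
break_even_payoff = cost_per_claim / base_rate  # payoff needed to justify a test

print(f"EV per claim tested: {ev_per_claim:+.1f} hours")
print(f"a claim is worth testing only if its payoff exceeds "
      f"{break_even_payoff:.0f} hours")

if ev_per_claim > 0:
    print(f"test up to {int(budget // cost_per_claim)} claims")
else:
    print("on these numbers, the opportunity cost wins")
```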
To my eyes your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs.
For the sake of engaging with your points 1 through 5, ProofOfLogic, Kindly, et al. are supposing the existence of a class of claims for which there exists roughly the same amount of evidence pro and con as exists for lucid dreaming. This includes how much we trust the person making the claim, how well the claim itself fits with our existing beliefs, how simple the claim is (i.e., Occam’s Razor), how many other people make similar claims, and any other information we might get our hands on. So the assumption, for the sake of argument, is that these claims look just about equally plausible once everything we know or even suspect is taken into account.
It seems very reasonable to conclude that the best one can do in such a case is choose randomly, if one does in fact want to test out some claim within the class.
But suggestions as to what else might be counted as evidence are certainly welcome.
That is actually very clear :) Thanks. As I was saying to ProofOfLogic, this post is about the identification of the difficult space, on which I think we are all in agreement. The way you explain it, I see why you would suggest that choosing at random is the best rational strategy. I would prefer to explore associated topics in a different post so we keep this one self-contained (and because I have to think about it!).

Thanks for engaging!
But the knowledge that you miss by wasting your time on things with bad evidence instead of spending your time on something else with good evidence could also range from insignificant to essential. And since it has good evidence, more such things are likely to pan out.
But the knowledge that you miss by wasting your time on things with bad evidence instead of spending your time on something else with good evidence could also range from insignificant to essential.
Assuming everything is instrumental, and that your goals/values themselves aren’t going to be changed by any subjective experience.
I think I should be more explicit: Saying that ignoring bad evidence could lead you to miss things “ranging from insignificant to essential”
1) is worded in a lopsided way that emphasizes “essential” too much—almost everything you’ll miss is insignificant, with the essential things being vanishingly rare.
2) is special pleading: many activities could get you to miss things “ranging from insignificant to essential”, including ignoring bad evidence, ignoring claims because they are fraudulent, or ignoring the scientific theories of a 6-year-old, and nobody bothers mentioning them.
3) is probably being said because the speaker really wants to treat his bad evidence as good evidence, and is rationalizing it by saying “even bad evidence could have essential knowledge behind it sometimes”.
I am not proposing wasting time on bad evidence. I am just pointing towards a problem that creates a space of difficult-to-discover truths. The strategy for dealing with this is for another post; this post is concerned with the identification of the issue.
Yes, you are. You say that if you believe bad evidence, you may end up believing something true that ranges from insignificant to essential.
This is correct. But you are conflating the identification of the issue with an action strategy that I haven’t suggested. Also, do not forget that I am talking about truths that are experientially verifiable, not just believed in.
But any belief with any evidence could range from insignificant to essential. And you aren’t mentioning them.
Of course. If there is evidence, a rational approach will lead us to the conclusion that the belief is worth exploring. I think the LW community is perfectly aware of that kind of assessment.
So you must think there’s something special about beliefs based on bad evidence, that gives you a reason to mention them.
I think there is something special about truths for which the verification is experientially available, but for which there is currently no evidence.