So, uh, is the typical claim that has an equal lack of scientific evidence true, or false?
[5.1] As ProofOfLogic indicates with his example of shamanistic scammers, the space of claims about subjective experiences is saturated with demonstrably false claims.
[5.2] This actually causes us to adjust and adopt a rule of ignoring all strange-sounding claims that require subjective evidence (unless they are trivial to test).
You are right that if the claim is true, an idealised rational assessment should be to believe the claim. But how do you make a rational assessment when you lack evidence?
(More precisely, we’d want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)
When lacking evidence, the testing process is difficult, weird, and lengthy, and in light of the ‘saturation’ mentioned in [5.1], I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
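A minimal sketch of that cost-benefit calculation, with the prior, the testing cost, and the payoff all being made-up numbers chosen only for illustration:

```python
# Cost-benefit sketch for a single strange, hard-to-test claim.
# All numbers are assumptions chosen for illustration.

prior_true = 0.02        # assumed base rate in a space saturated with false claims
benefit_if_true = 100.0  # assumed value of the claim panning out (arbitrary units)
testing_cost = 20.0      # assumed cost of the difficult, weird, lengthy test

expected_gain = prior_true * benefit_if_true - testing_cost
print(expected_gain)     # -18.0: negative, so the analysis says ignore the claim
```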
When lacking evidence, the testing process is difficult, weird, and lengthy, and in light of the ‘saturation’ mentioned in [5.1], I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but doesn’t have evidence for it and can’t easily convince someone else, you’re right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn’t start with your true claim, but start working my way through a bunch of other false claims instead.
Evidence, in the general sense of “some way of filtering out the false claims”, can take on many forms. For example, I can choose to try out lucid dreaming, not because I’ve found scientific evidence that it works, but because it’s presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me “this is a real effect and has effects you’ll find worth the cost of trying it out”, I believe them.
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but doesn’t have evidence for it and can’t easily convince someone else, you’re right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn’t start with your true claim, but start working my way through a bunch of other false claims instead.
Exactly, that is why I am pointing towards the problem. Based on our rational approach we are at a disadvantage for discovering these truths. I want to use this post as a reference for the issue, as it can become important in other subjects.
I can choose to try out lucid dreaming, not because I’ve found scientific evidence that it works, but because it’s presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me “this is a real effect and has effects you’ll find worth the cost of trying it out”, I believe them.
Yes, that is the other way in. Trust and respect. Unfortunately, I feel we tend to surround ourselves with people who are similar to us, and thus we select our acquaintances in the same way we select ideas to focus on. In my experience (which is not necessarily representative), people tend to just blank out unfamiliar information or consider it a bit of an eccentricity. In addition, as stated, if a subject requires substantial effort before you can confirm its validity, it becomes exponentially harder to communicate even in these circumstances.
Based on our rational approach we are at a disadvantage for discovering these truths.
Is that a bad thing?
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don’t see it as a problem.
The situation you’re describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you’re at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you’re also not paying the opportunity cost of trying out many unlikely ideas, most of which don’t pan out. Overall, you’re better off, because you have more time to pursue more promising ways to satisfy your goals.
(And if you’re not better off overall, there’s a different problem. Are you consistently underestimating how useful unlikely fringe beliefs that take lots of effort to test might be, if they were true? Then yes, that’s a problem that can be solved by trying out more fringe beliefs that take lots of effort to test. But it’s a separate problem from the problem of “you don’t try things that look like they aren’t worth the opportunity cost.”)
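For concreteness, a quick expected-value sketch of the lottery comparison, using illustrative numbers rather than any real lottery’s odds or prize:

```python
# Expected monetary value of a lottery ticket, with illustrative numbers
# (not any real lottery's odds or prize).

ticket_price = 2.0
jackpot = 10_000_000.0
p_win = 1 / 300_000_000

expected_money = p_win * jackpot - ticket_price
print(expected_money)    # about -1.97: each ticket loses money on average
```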
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets.
Whereas someone who understands advanced probability, particularly the value/utility distinction, might.
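A minimal sketch of that distinction, with the utility numbers entirely made up: the expected money of a ticket can be negative while the expected utility comes out positive.

```python
# Value/utility sketch: expected *money* vs. expected *utility*.
# The utility numbers are made-up assumptions; the point is only that
# a non-linear utility (plus any enjoyment of playing) can flip the sign.

ticket_price = 2.0
jackpot = 10_000_000.0
p_win = 1 / 300_000_000

expected_money = p_win * jackpot - ticket_price   # about -1.97, as above

u_jackpot = 1e9    # assumed: a life-changing win, valued far beyond its dollar amount
u_cost = 2.0       # assumed: small sums valued roughly linearly
u_playing = 1.0    # assumed: the anticipation is worth something by itself

expected_utility = p_win * u_jackpot - u_cost + u_playing
print(expected_money, expected_utility)  # money says no; on these assumptions, utility says yes
```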
The situation you’re describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you’re at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you’re also not paying the opportunity cost of trying out many unlikely ideas, most of which don’t pan out. Overall, you’re better off, because you have more time to pursue more promising ways to satisfy your goals.
So long as you can put a ceiling on possible benefits.
I propose that it is a bad thing.
Your assessment makes the assumption that the knowledge that we are missing is “not that important”. Since we do not know what the knowledge we are missing is, its significance could range from insignificant to essential. We are not at the point where we can make that distinction, so we had better start recognising the problem and working on it. That is my position.
To my eyes, your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs. Although I have not formulated a solution (I am currently just describing the problem), I can already see much more efficient ways of navigating the space. I will post when I have something more developed to say about this.
Your assessment makes the assumption that the knowledge that we are missing is “not that important”.
Better to call it a rational estimate than an assumption.
It is perfectly rational to say to oneself “but if I refuse to look into anything which takes a lot of effort to get any evidence for, then I will probably miss out.” We can put math to that sentiment and use it to help decide how much time to spend investigating unlikely claims. Solutions along these lines are sometimes called “taking the outside view”.
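A minimal sketch of putting math to it, where every number is an assumption chosen for illustration:

```python
# "Outside view" sketch: is a class of unlikely, effortful claims worth your hours?
# Every number here is an assumption chosen for illustration.

base_rate = 0.05        # assumed fraction of such claims that turn out true
payoff_if_true = 400.0  # assumed value of one true claim, in hours-equivalent
hours_per_test = 30.0   # assumed effort to test a single claim
baseline_value = 0.5    # assumed value per hour of your ordinary activities

value_per_test = base_rate * payoff_if_true        # 20.0 expected hours-equivalent
opportunity_cost = hours_per_test * baseline_value # 15.0 hours-equivalent forgone

print(value_per_test > opportunity_cost)   # True: on these numbers, testing pays
print(opportunity_cost / payoff_if_true)   # 0.0375: break-even base rate for this class
```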
To my eyes, your further analysis makes the assumption that the only strategy we can follow would be to randomly try out beliefs.
For the sake of engaging with your points 1 through 5, ProofOfLogic, Kindly, et al. are supposing the existence of a class of claims for which there exists roughly the same amount of evidence pro and con as exists for lucid dreaming. This includes how much we trust the person making the claim, how well the claim itself fits with our existing beliefs, how simple the claim is (i.e., Occam’s Razor), how many other people make similar claims, and any other information we might get our hands on. So the assumption for the sake of argument is that these claims look just about equally plausible once everything we know or even suspect is taken into account.
It seems very reasonable to conclude that the best one can do in such a case is choose randomly, if one does in fact want to test out some claim within the class.
But suggestions as to what else might be counted as evidence are certainly welcome.
That is actually very clear :) Thanks. As I was saying to ProofOfLogic, this post is about the identification of the difficult space, which I think we are all in agreement on. The way you explain it, I see why you would suggest that choosing at random is the best rational strategy. I would prefer to explore associated topics in a different post so we keep this one self-contained (and because I have to think about it!).
Thanks for engaging!
But the knowledge that you miss by wasting your time on things with bad evidence instead of spending your time on something else with good evidence could also range from insignificant to essential. And since it has good evidence, more such things are likely to pan out.
But the knowledge that you miss by wasting your time on things with bad evidence instead of spending your time on something else with good evidence could also range from insignificant to essential.
Assuming everything is instrumental, and that your goals/values themselves aren’t going to be changed by any subjective experience.
I think I should be more explicit: Saying that ignoring bad evidence could lead you to miss things “ranging from insignificant to essential”
1) is worded in a lopsided way that emphasizes “essential” too much—almost everything you’ll miss is insignificant, with the essential things being vanishingly rare.
2) is special pleading—many activities could get you to miss things “ranging from insignificant to essential”, including ignoring bad evidence, ignoring claims because they are fraudulent, or ignoring the scientific theories of a 6-year-old, and nobody bothers mentioning them.
3) is probably being said because the speaker really wants to treat his bad evidence as good evidence, and is rationalizing it by saying “even bad evidence could have essential knowledge behind it sometimes”.
I am not proposing wasting time with bad evidence. I am just pointing towards a problem that creates a space of difficult-to-discover truths. The strategy for dealing with this is for another post. This post is concerned with the identification of the issue.
Yes you are. You say that if you believe bad evidence, you may end up believing something true that ranges from insignificant to essential.
This is correct. But you are conflating the identification of the issue with an action strategy that I haven’t suggested. Also, do not forget that I am talking about truths that are experientially verifiable, not just believed in.
But any belief with any evidence could range from insignificant to essential. And you aren’t mentioning them.
Of course. If there is evidence, a rational approach will lead us to the conclusion that it is worth exploring the belief. I think the LW community is perfectly aware of that kind of assessment.
So you must think there’s something special about beliefs based on bad evidence, that gives you a reason to mention them.
I think there is something special about truths for which the verification is experientially available, but for which there is currently no evidence.
Based on our rational approach we are at a disadvantage for discovering these truths.
As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (of the general category which lucid dreaming falls into) does not make it harder—not even a little harder—to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts of claims only helps us because it allows us to make good decisions about how much of our time to spend investigating such claims.
What you seem to be missing (maybe?) is that we need to have a general policy which we can be satisfied with in “situations of this kind”. You’re saying that what we should really do is trust our friend who is telling us about lucid dreaming (and, in fact, I agree with that policy). But if it’s rational for us to ascribe a really low probability (I don’t think it is), that’s because we see a lot of similar claims to this which turn out to be false. We can still try a lot of these things, with an experimental attitude, if the payoff of finding a true claim balances well against the number of false claims we expect to sift through in the process. However, we probably don’t have the attention to look at all such cases, which means we may miss lucid dreaming by accident. But this is not a flaw in the strategy; this is just a difficulty of the situation.
I’m frustrated because it seems like you are misunderstanding a part of the response Kindly and I are making, but you’re doing a pretty good job of engaging with our replies and trying to sift out what you think and where you start disagreeing with our arguments. I’m just not quite sure yet where the gap between our views is.
I don’t think there is a gap. I am pointing towards a difficulty. If you are acknowledging the difficulty (which you are) then we are in agreement. I am not sure why it feels like a disagreement. Don’t forget that at the start you had a reason for disagreeing, which was my erroneous use of the word rationality. I have now corrected that, so maybe we are arguing from the momentum of our first disagreement :P
I think so, sorry!