“But since there is no rational evidence to support the claim of A”

A’s claim is immediate evidence, since A is more likely to make the claim if they actually have the experience than if they do not.
It is indeed anecdotal evidence. But since we live in a world where people constantly offer anecdotal evidence to support their claims of unexplained subjective experiences, rational people tend to ignore such claims. But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences, assuming that they exist.
[That said, 4 is in fact false—in all of my dreams, I always know that I am dreaming, and I never invested any kind of effort whatsoever.]
Yes, that is why I included footnote 1. I think my statements are true for most people but it is not a perfect example. Nevertheless, I feel the example is accurate enough to communicate the underlying argument.
“But since we live in a world where people constantly offer anecdotal evidence to support their claims of unexplained subjective experiences, rational people tend to ignore such claims.”
Stanley Jaki tells this story:
Laplace shouted, “We have had enough such myths,” when his fellow academician Marc-Auguste Pictet urged, in the full hearing of the Académie des Sciences, that attention be given to the report about a huge meteor shower that fell at L’Aigle, near Paris, on April 26, 1803.
I presume this would be an example of Laplace being rational and ignoring this evidence, in your view. In my view, it shows that people trying to be rational sometimes fail to be rational, and one case of this is by ignoring weak evidence, when weak evidence is still evidence. Obviously you do not assume that everything is necessarily correct: but the alternative is to take it as weak evidence, rather than ignoring it.
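The point that "weak evidence is still evidence" can be made concrete with a toy Bayesian update. The numbers below are purely illustrative assumptions, not estimates anyone in the thread has made:

```python
# Toy Bayesian update: even weak anecdotal evidence should shift belief,
# not be ignored outright. All probabilities here are illustrative.

def posterior(prior, p_report_if_true, p_report_if_false):
    """P(hypothesis | report) via Bayes' rule."""
    numerator = prior * p_report_if_true
    evidence = numerator + (1 - prior) * p_report_if_false
    return numerator / evidence

prior = 0.01               # we start out very skeptical
p_report_if_true = 0.9     # someone with the experience will likely report it
p_report_if_false = 0.3    # but people also report experiences they don't have

updated = posterior(prior, p_report_if_true, p_report_if_false)
print(updated)  # ~0.029: still unlikely, but the report roughly tripled our credence
```

The report is weak evidence (a high false-positive rate), yet the rational response is a small upward update rather than no update at all.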
But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences, assuming that they exist.
Discounting the evidence doesn’t actually make it any harder for us to discover those experiences. If we don’t want to lose out on such things, then we should try some practices which we assign low probability, to see which ones work. Assigning low probability isn’t what makes this hard—what makes this hard is the large number of similarly-goofy-sounding things which we have to choose from, not knowing which ones will work. Assigning a more accurate probability just allows us to make a more accurate cost-benefit analysis in choosing how much of our time to spend on such things. The actual amount of effort it takes to achieve the results (in cases where results are real) doesn’t change with the level of rationality of our beliefs.
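The cost-benefit framing above can be sketched as a simple expected-value triage. The practices, probabilities, payoffs, and hours below are hypothetical placeholders, chosen only to show the shape of the calculation:

```python
# Expected-value triage over goofy-sounding practices: assigning accurate
# probabilities doesn't change which practices actually work; it changes
# how we rank our bets. All figures below are hypothetical.

practices = [
    # (name, P(it works), value if it works, hours to test)
    ("lucid dreaming induction", 0.30, 100.0, 40.0),
    ("mental alert triggers",    0.20,  50.0, 10.0),
    ("mental multithreading",    0.05, 200.0, 80.0),
]

def ev_per_hour(p, value, hours):
    """Expected value gained per hour spent testing a practice."""
    return p * value / hours

# Rank practices by expected value per hour of testing effort.
ranked = sorted(practices, key=lambda x: -ev_per_hour(*x[1:]))
for name, p, value, hours in ranked:
    print(f"{name}: {ev_per_hour(p, value, hours):.2f} per hour")
```

Note that a more accurate probability only reorders the list; it never changes the actual effort a working practice requires, which is the comment's point.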
I think I see what you are saying.

I am phrasing the problem as an issue with rationality when I should have been phrasing it as a type of bias that tends to affect people with a rationality focus. Identifying the bias should allow us to choose a strategy which will, in effect, be the more rational approach.
Did I understand you correctly?
P.S.: I edited the opening paragraph and conclusion to address your and entirelyuseless’s valid criticisms.
Yes, I think that’s right. Especially among those who identify as “skeptics”, who see rationality/science as mostly heightened standards of evidence (and therefore lowered standards of disbelief), there can be a tendency to mistake “I have to assign this a low probability for now” for “I am obligated to ignore this due to lack of evidence”.
The Bayesian system of rationality rejects “rationality-as-heightened-standard-of-evidence”, instead accepting everything as some degree of evidence but requiring us to quantify those degrees. Another important distinction which bears on this point is “assuming is not believing”, discussed on Black Belt Bayesian. I can’t link to the individual post for some reason, but it’s short, so here it is quoted in full:
Assuming Is Not Believing
Suppose I’m participating in a game show. I know that the host will spin a big wheel of misfortune with numbers 1-100 on it, and if it ends on 100, he will open a hatch in the ceiling over my head and dangerously heavy rocks will fall out. (This is a Japanese game show I guess.) For $1 he lets me rent a helmet for the duration of the show, if I so choose.
Do I rent the helmet? Yes. Do I believe that rocks will fall? No. Do I assume that rocks will fall? Yes, but if that doesn’t mean I believe it, then what does it mean? It means that my actions are much more similar (maybe identical) to the actions I’d take if I believed rocks would definitely fall, than to the actions I’d take if I believed rocks would definitely not fall.
So assuming and believing (at least as I’d use the words) are two quite different things. It’s true that the more you believe P the more you should assume P, but it’s also true that the more your actions matter given P, the more you should assume P. All of this could be put into math.
Hopefully nothing shocking here, but I’ve seen it confuse people.
With some stretching you can see the assumptions made by mathematicians in the same way. When you assume, with the intent to disprove it, that there is a largest prime number, you don’t believe there is a largest prime number, but you do act like you believe it. If you believed it you’d try to figure out the consequences too. It’s been argued that scientists disagree among themselves more than Aumann’s agreement theorem condones as rational, and it’s been pointed out that if they didn’t, they wouldn’t be as motivated to explore their own new theories; if so, you could say that the problem is that humans aren’t good enough at disbelieving-but-assuming.
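The quoted post notes that "all of this could be put into math." A minimal sketch of the helmet decision, with an assumed injury cost (the post itself gives no figure), looks like this:

```python
# The wheel-of-misfortune decision as an expected-cost comparison.
# "Assuming" rocks will fall just means acting on expected costs while
# still believing they probably won't. The injury cost is an assumed
# figure for illustration only.

p_rocks = 1 / 100        # wheel lands on 100
helmet_cost = 1.0        # dollars to rent the helmet
injury_cost = 500.0      # assumed cost of being hit without a helmet

expected_cost_without = p_rocks * injury_cost   # 5.0
expected_cost_with = helmet_cost                # assume the helmet fully protects

rent_helmet = expected_cost_with < expected_cost_without
print(rent_helmet)  # True: rent it, even while believing rocks probably won't fall
```

The belief (1% probability) never changes; only the action does, because the stakes given rocks are high, which is exactly the "the more your actions matter given P, the more you should assume P" point.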
The Bayesian system of rationality rejects “rationality-as-heightened-standard-of-evidence”, instead accepting everything as some degree of evidence but requiring us to quantify those degrees. Another important distinction which bears on this point is “assuming is not believing”
I do like the flexibility of the Bayesian system of rationality and the “assuming is not believing” example you gave. But I do not (at the moment) see how it is any more efficient in cases where the evidence is not clearly quantified or is simply very weak.

Any system of rational analysis seems to me to depend on the current state of quantifiable evidence. In other words, rational analysis can go only as far as science currently has. But there is an experimental space that is open for exploration without convincing intellectual evidence. Navigating this space is a bit of a puzzle.

But this is a half-baked thought. I will post when I can express it clearly.
That’s related to Science Doesn’t Trust Your Rationality. What I’d say is this:

Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they’ve had a mental experience. I might not agree with their analysis of their mental experience; for example, if they say they’ve talked to God in a dream, then I would tend to suspect them of misinterpreting their experience. I do tend to believe that they’re honestly trying to convey an experience they had, though. And it’s plausible (though far from certain) that the steps which they took in order to get that experience will also work for me.
So, I can imagine a skeptic who brushes off a friend’s report of lucid dreaming as “unscientific”, but I have no sympathy for that stance. My model of the skeptic is: they have the crazy view that observations made by someone who has a PhD, works at a university, and publishes in an academic journal are of a different kind than observations made by other people. Perhaps the lucid-dreaming studies have some interesting MRI scans to show differences in brain activity (I haven’t read them), but they must still rely on descriptions of internal experience which come from human beings in order to establish the basic facts about lucid dreams, right? In no sense is the skeptic’s inability to go beyond the current state of science “rational”; in fact, it strikes me as rather irrational.
This is an especially easy mistake for non-Bayesian rationalists to make because they lack a notion of degrees of belief. There must be a set of trusted beliefs, and a process for beliefs to go from untrusted to trusted. It’s natural for this process to involve the experimental method and peer review. But this kind of naive scientism only makes sense for a consumer of science. If scientists used the kind of “rationality” described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can’t rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
Yes, that makes sense. I don’t think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments. I am using it as a label for a social group and you are using it as an approach to knowledge. Needless to say this is my mistake, as the whole point of this post is about improving the rational approach by becoming aware of what I think of as a difficult space of truth.
If scientists used the kind of “rationality” described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can’t rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
That, I feel, is not accurate. Don’t forget that my example assumes a world before the means to experimentally verify lucid dreaming were available. The people that in the end tested lucid dreaming were the lucid dreamers themselves. This will inevitably happen for all knowledge that can be verified. It will happen by the people that have it. I am talking about the knowledge that is currently unverifiable (except through experience).
I am using it as a label for a social group and you are using it as an approach to knowledge.
The problem is that you point to a social group that is quite different from the LW crowd that calls itself rationalists.
Let’s look at my local LW dojo. I know my own skills from first-hand experience, so they are not a good example, but there are two other people who profess to have very nonstandard mental skills (I think both are as awesome as lucid dreaming). I’m not aware of either of those skills having been described in academia. Yet none of the aspiring rationalists in our dojo doubt them when they speak about those skills.

We don’t know how any of those skills could be taught to other people. For one of them we tried a few exercises, and teaching it didn’t work. The other is likely even more complex, so we aren’t able even to create exercises to teach it.

In neither case is the lack of an academic description of the skills any concern to us, because we trust each other to be honest.
The problem is that you point to a social group that is quite different from the LW crowd that calls itself rationalists.
Indeed, I am very happy to learn that, and I have internally adjusted my use of the word ‘rationalist’ to what the community suggests and demonstrates through behaviour. I might slip into using it wrongly from time to time (out of habit), but I trust the community will correct me when that happens.
Now of course, inevitably my next question has to be: What are your two skills? :)
One is the ability to create a mental alert that pops up in a specific environment: “When entering the supermarket, I will think of the milk.” That’s quite a useful skill.
the ability to create a mental alert that pops up in a specific environment
Ah, yes. I tried to implement this kind of trigger myself, with some success, using mnemonic techniques. I visualise a strong reminder image at a location, which is then triggered when I am there. I find this works reasonably well when I first attach the image by visualising it while actually in the space, but this might be because my visualisation skills are poor. Is that what you are doing, or have you found a different way to attach the trigger? I would love to be able to do that reliably!
The second is mental multithreading.
I can see this happening on a subconscious level (the brain is ‘multithreading’ anyway) but, as far as I can tell through observing myself, the conscious stream is always one. Except if you manage to encode different threads in different senses (like imagining a scene with its sound and vision encoding different things), but I cannot see how that is possible. How do you experience the skill?
[I know we are off topic but this is really interesting. If you have a thread discussing these do point me to it.]
Is that what you are doing, or have you found a different way to attach the trigger? I would love to be able to do that reliably!
Nobody in our group managed to copy the skill and make it work for them. From the description, it seems like, in addition to visualizing, it’s important to add other senses. For the person who makes it work, I think smell is always included.
but, as far as I can tell through observing myself, the conscious stream is always one
Nope. There is a profession, simultaneous interpretation: you translate someone speaking naturally. The speaker doesn’t pause, and you speak in a different language simultaneously with listening to the speaker. Typically you’re a sentence behind. This requires having two separate threads in your consciousness.
It’s a skill that needs to be trained, not a default ability, though.
I am not sure that is necessarily simultaneous. Attention can be on the speaker with the response being produced and delivered automatically through lots and lots of practice. This is what I observe of myself during music improvisation. The automatic part can even have creative variations.
Another example would be to try to read a paragraph that is new to you while at the same time having a proper discussion, versus reading the paragraph while you sing a song you already know by heart. You can do the second thing because the delivery of the song is automatic but not the first because both processes deal with novel input/output.
It is pretty simultaneous, because you can’t afford to let any thread fall back to “automatically” for more than a second or two. There is also a recognizable sensation of having and managing two threads in your mind. You do have some main focus which flickers between the two threads depending on which needs attention more, but both stay continuous and coherent.

It actually takes hard effort to maintain the two threads and not lose one of them.
You do have some main focus which flickers between the two threads depending on which needs attention more
That is what I observe and I consider this focus to be attention. Of course it could be that I just lack the ability. If you have any kind of exercise/experiment that I can try in order to experience it please share! As long as it isn’t too much effort! (Just kidding :P)
That is what I observe and I consider this focus to be attention.
But the thing is, the focus does not switch completely, it just leans. It’s like you’re standing and shifting your weight from one foot to another, but still you never stand on one foot, you merely adjust the distribution of weight. And it takes explicit effort to keep the two threads coherent, you never “let go” of one completely.
As far as I know, the ability isn’t “natural” (or is rare); it needs to be developed and trained.
As to exercises, not sure. There are classes which teach simultaneous interpreting, but you probably need to be bilingual to start with.
The people that in the end tested lucid dreaming were the lucid dreamers themselves.
Ah, right. I agree that invalidates my argument there.
Yes, that makes sense. I don’t think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.
Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)
“…one case of this is by ignoring weak evidence, when weak evidence is still evidence.”

You are right!

I have edited the opening paragraph and conclusion to present the idea as a bias. Let me know if you notice issues with this formulation.
“How do you experience the skill?”

As I said above, it’s a skill that another person has, and I don’t have any way to copy it. https://www.facebook.com/mqrius/posts/10154824172856168?pnref=story has a write-up of the details.

Ah, sorry, I got mixed up and thought it was you who had the skill. Thanks for the link!
“There are classes which teach simultaneous interpreting, but you probably need to be bilingual to start with.”

Well, thankfully I am bilingual (my first language is Greek). I will check out the techniques they use. Thanks!