The Bayesian system of rationality rejects “rationality-as-heightened-standard-of-evidence”, instead accepting everything as some degree of evidence but requiring us to quantify those degrees. Another important distinction which bears on this point is “assuming is not believing”.
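To make “quantify those degrees” concrete, here is a minimal illustrative sketch (not from the original exchange; the prior and likelihoods are made-up numbers) of how a single friend’s report of lucid dreaming could be treated as weak but nonzero evidence via Bayes’ rule:

```python
# Illustrative sketch only: the numbers below are invented for the example,
# not claims about actual frequencies.
# Bayes' rule: P(H | report) = P(report | H) * P(H) / P(report)

prior = 0.30                # prior credence that lucid dreaming is a real, learnable skill
p_report_if_true = 0.50    # chance a friend would report it, given that it is real
p_report_if_false = 0.30   # chance of such a report anyway (misremembering, exaggeration, ...)

p_report = p_report_if_true * prior + p_report_if_false * (1 - prior)
posterior = p_report_if_true * prior / p_report

print(f"prior = {prior:.2f}, posterior after one report = {posterior:.2f}")
# Weak evidence moves the credence modestly (here roughly 0.30 -> 0.42) rather than
# flipping the belief to "trusted" or leaving it at "untrusted".
```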
I do like the flexibility of the Bayesian system of rationality and the “assuming is not believing” example you gave. But I do not (at the moment) see how it is any more effective in cases where the evidence is not clearly quantified or is simply very weak.
There seems to me to be a dependence of any system of rational analysis on the current state of quantifiable evidence. In other words, rational analysis can only go up to where science currently is. But there is an experimental space that is open for exploration without convincing intellectual evidence. Navigating this space is a bit of a puzzle.
But this is a half-baked thought. I will post when I can express it clearly.
That’s related to Science Doesn’t Trust Your Rationality.
What I’d say is this:
Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they’ve had a mental experience. I might not agree with their analysis of their mental experience; for example, if they say they’ve talked to God in a dream, then I would tend to suspect them of misinterpreting their experience. I do tend to believe that they’re honestly trying to convey an experience they had, though. And it’s plausible (though far from certain) that the steps which they took in order to get that experience will also work for me.
So, I can imagine a skeptic who brushes off a friend’s report of lucid dreaming as “unscientific”, but I have no sympathy for that position. My model of the skeptic is: they hold the crazy view that observations made by someone who has a PhD, works at a university, and publishes in an academic journal are of a different kind than observations made by other people. Perhaps the lucid-dreaming studies have some interesting MRI scans to show differences in brain activity (I haven’t read them), but they must still rely on descriptions of internal experience which come from human beings in order to establish the basic facts about lucid dreams, right? In no sense is the skeptic’s inability to go beyond the current state of science “rational”; in fact, it strikes me as rather irrational.
This is an especially easy mistake for non-Bayesian rationalists to make, because they lack a notion of degrees of belief. Without degrees of belief, there must be a set of trusted beliefs, and a process for beliefs to go from untrusted to trusted. It’s natural for this process to involve the experimental method and peer review. But this kind of naive scientism only makes sense for a consumer of science. If scientists used the kind of “rationality” described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can’t rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
Yes, that makes sense. I don’t think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments. I am using it as a label for a social group and you are using it as an approach to knowledge. Needless to say, this is my mistake, as the whole point of this post is about improving the rational approach by becoming aware of what I think of as a difficult space of truth.
If scientists used the kind of “rationality” described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can’t rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
That, I feel, is not accurate. Don’t forget that my example assumes a world before the means to experimentally verify lucid dreaming were available. The people that in the end tested lucid dreaming were the lucid dreamers themselves. This will inevitably happen for all knowledge that can be verified. It will be done by the people who have it. I am talking about the knowledge that is currently unverifiable (except through experience).
I am using it as a label for a social group and you are using it as an approach to knowledge.
The problem is that you point to a social group that is quite different from the LW crowd that calls itself rationalists.
Let’s look at my local LW dojo. I know my own skills from first-hand experience, so they are not a good example, but there are two other people who profess to have very nonstandard mental skills (I think both are as awesome as Lucid Dreaming). I’m not aware of either of those skills having been described in academia. Yet none of the aspiring rationalists in our dojo doubt them when they speak about those skills.
We don’t know how any of those skills could be taught to other people. For one of them we tried a few exercises, but teaching it didn’t work. The other is likely even more complex, so we aren’t even able to create exercises to teach it.
In neither of those cases is the lack of an academic description of the skills any concern to us, because we trust each other to be honest.
The problem is that you point to a social group that is quite different from the LW crowd that calls itself rationalists.
Indeed, I am very happy to learn that, and I have internally adjusted my use of the word ‘rationalist’ to what the community suggests and demonstrates through behaviour. I might slip into using it wrongly from time to time (out of habit), but I trust the community will correct me when that happens.
Now of course, inevitably my next question has to be: What are your two skills? :)
One is the ability to create a mental alert that pops up in a specific environment. “When entering the supermarket, I will think of the milk”. That’s quite a useful skill.
the ability to create a mental alert that pops up in a specific environment.
Ah, yes. I tried to implement this kind of trigger myself with some success using mnemonic techniques. I visualise a strong reminder image at a location, which is then triggered when I am there. I find this works kind of OK when I first attach the image by visualising it while I am actually in the space, but this might be because my visualisation skills are poor. Is that what you are doing, or have you found a different way to attach the trigger? I would love to be able to do that reliably!
The second is mental multithreading.
I can see this happening at a subconscious level—the brain is ‘multithreading’ anyway—but, as far as I can tell through observing myself, the conscious stream is always one. Except if you manage to encode different threads in different senses (like imagining a scene where sound and vision encode different things), but I cannot see how that is possible. How do you experience the skill?
[I know we are off-topic, but this is really interesting. If you have a thread discussing these, do point me to it.]
Is that what you are doing, or have you found a different way to attach the trigger? I would love to be able to do that reliably!
Nobody in our group managed to copy the skill and make it work for them. From the description of it, it seems like, in addition to visualizing, it’s important to add other senses. For the person who makes it work, I think smell is always included.
As I said above, it’s a skill that another person has and I don’t have any way to copy it. https://www.facebook.com/mqrius/posts/10154824172856168?pnref=story has a write-up of details.
Ah, sorry, I got mixed up and thought it was you who had the skill. Thanks for the link!
but, as far as I can tell through observing myself, the conscious stream is always one
Nope. There is a profession—simultaneous interpretation. Basically you translate someone speaking naturally: the speaker doesn’t pause, and you speak in a different language simultaneously with listening to the speaker. Typically you’re a sentence behind. This requires having two separate threads in your consciousness.
It’s a skill that needs to be trained, not a default ability, though.
I am not sure that is necessarily simultaneous. Attention can be on the speaker, with the response being produced and delivered automatically through lots and lots of practice. This is what I observe in myself during music improvisation. The automatic part can even have creative variations.
Another example would be trying to read a paragraph that is new to you while at the same time having a proper discussion, versus reading the paragraph while you sing a song you already know by heart. You can do the second because the delivery of the song is automatic, but not the first, because both processes deal with novel input/output.
It is pretty simultaneous because you can’t afford to let either thread fall back to “automatic” for more than a second or two. It is also a recognizable sensation of having and managing two threads in your mind. You do have some main focus which flickers between the two threads depending on which needs attention more, but both stay continuous and coherent.
It is actually hard effort to maintain the two threads and not lose one of them.
You do have some main focus which flickers between the two threads depending on which needs attention more
That is what I observe and I consider this focus to be attention. Of course it could be that I just lack the ability. If you have any kind of exercise/experiment that I can try in order to experience it, please share! As long as it isn’t too much effort! (Just kidding :P)
That is what I observe and I consider this focus to be attention.
But the thing is, the focus does not switch completely; it just leans. It’s like you’re standing and shifting your weight from one foot to another: you never stand on just one foot, you merely adjust the distribution of weight. And it takes explicit effort to keep the two threads coherent; you never “let go” of one completely.
As far as I know, the ability isn’t “natural” (or is rare); it needs to be developed and trained.
As to exercises, not sure. There are classes which teach simultaneous interpreting, but you probably need to be bilingual to start with.
Well, thankfully I am bilingual (my first language is Greek). Will check out the techniques they are using. Thanks!
The people that in the end tested lucid dreaming were the lucid dreamers themselves.
Ah, right. I agree that invalidates my argument there.
Yes, that makes sense. I don’t think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.
Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)