Seriously though, I’d love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I’ve seen some examples already, but more is good.
The biggest weakness of purely (or almost purely) probabilistic reasoning, in my opinion, is that it cannot ultimately do away with our reliance on a number of (ultimately faith/belief-based) choices about how we understand our reality.
The existence of the past and future (and, within most people’s reasoning systems, the understanding of them as linear) is ultimately a postulation that is generally accepted at face value, as is the idea that consciousness/awareness arises from matter/quantum phenomena rather than vice versa.
In your opinion, is there some other form of reasoning that avoids this weakness?
That’s a very complicated question, but I’ll do my best to answer.
Many ancient cultures used two words for the mind, or for thinking, and the distinction is still used figuratively today: “In my heart I know...”
In my opinion, what generally has a greater expected impact on the course of a given subject’s life than their understanding of Bayesian reasoning is what they ‘want’ … how they define themselves, consciously and unconsciously.
For “reasoning”, no, I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (are other people p-zombies? Am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one’s “presumptive model” of reality and of oneself (the two are tightly intertwined psychologically) should be perfected with just as much effort (if not more) as we spend perfecting our probabilistic reasoning.
Probabilities can’t cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.
When one is confident, and subconsciously/instinctively aware, that they are doing what they should be doing, thinking how they should be thinking, and that their ‘foundation’ (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.) is solid, they can be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.
Those instinctive presumptions and that life-defining self-image have a strong, quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.
Maximise your own effectiveness. Perfect how your mind works and how you think of yourself and others (again, instinctive opinions and gut feelings more than conscious thought, although conscious thought is extremely important). Then, when you start teaching it and filling it with data, you’ll make a lot fewer mistakes.
Less Wrong has some material on this topic :)
All right. Thanks for clarifying.
I think the discussion about the value of Bayesianism is good (this post and following).