That looks like losing your rationality by reading LessWrong. As does this by XiXiDu that he links to.
A couple of quotes from the latter strike me:
‘Logical implications just don’t seem enough in some cases.’
and
‘Until the above problems are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications.’
That is as it should be. Blindly following logic wherever it takes you is like strapping yourself to a rocket with no steering.
I’ve never been able to make sense of the traditional koans, not because I find them hard puzzles, but because I don’t even see what puzzle is being posed. But we have here in the LessWrong material koans aplenty.
Mentalism cannot be true! Physicalism cannot be true!
Bayesian reasoning is the only way! We cannot do Bayesian reasoning!
Aumann agreement! Dissension among rational people!
Human intelligence is possible! After sixty years of trying we haven’t the slightest idea how!
Trolley problems!
TORTURE vs. SPECKS!
Quantum suicide!
Give me all your money and I’ll repay you 3^^^3-fold!
The Utility Monster!
The Repugnant Conclusion!
You spend one dead child at Starbucks every year!
Vast stakes depend on your slightest decision! You cannot evaluate them! You must evaluate them!
You have six hours to cut down a tree! It will take twelve hours to sharpen your axe! The first god we make will torture you forever for failing!
‘Until very recently I thought it might be just me and that you people can calculate what you should do. But then I learnt that even important SI donors have similar problems. And other people as well. The problem is that all the talk about approximations is complete handwaving and that you really can’t calculate shit. And even if you could, there doesn’t seem to be anything medium-probable that you could do about it.’
‘Some years ago I was trying to decide whether or not to move to Harvard from Stanford. I had bored my friends silly with endless discussion. Finally, one of them said, “You’re one of our leading decision theorists. Maybe you should make a list of the costs and benefits and try to roughly calculate your expected utility.” Without thinking, I blurted out, “Come on, Sandy, this is serious.”’
— Persi Diaconis, in The Problem of Thinking Too Much
No argument from me on any of that. I’ve said a couple of times on LW that I don’t believe that people have or can have utility functions (I’m not alone on LW there), that approximating Solomonoff induction is impractical on at least an NP scale, and that large-world Bayesianism is tantamount to AGI, which nobody knows how to do. (Framing the AGI problem as LWB doesn’t help solve AGI, it’s an argument against expecting to succeed at LWB.)
But where does that leave us? What does one do instead?
To abandon “rationality” is to throw out the baby with the bathwater. The application of Bayes’ theorem to screening tests for rare conditions remains just as valid, just as practical, and just as essential for making correct medical decisions. Noticing that a discussion has wandered into disputing definitions remains just as valid a sign that the discussion has gone wrong. The usefulness of checklists for ensuring that complex procedures are reliably performed does not go away. When thine I offends thee, try the many practical pieces of advice for personal development that have appeared here or elsewhere and see what one can make work. And so on.
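To make the screening-test point concrete, here is a minimal sketch of the calculation; the prevalence, sensitivity, and false-positive figures are illustrative assumptions, not statistics for any real test.

```python
# Bayes' theorem applied to a screening test for a rare condition.
# All numbers below are illustrative assumptions, not real test statistics.

prevalence = 0.01        # P(disease): 1 in 100 people has the condition
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.09    # P(positive | no disease)

# P(positive), by the law of total probability
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# P(disease | positive), by Bayes' theorem
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")
# ~0.092: even after a positive result, the odds are still against
# having the condition, because the base rate is so low.
```

This is the standard base-rate calculation: with a 1% prevalence, a positive result from a 90%-sensitive test still leaves only about a 9% chance of disease, which is exactly the small-world, here-and-now use of Bayes that survives any skepticism about grander applications.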
All of this “small-world” rationality may not be as exciting, to some temperaments, as talking about AGIs, uploads, Tegmark multiverses, and 3^^^3 specks, but it has the great advantage (to my temperament) of having an actual subject matter here and now, and of not driving crazy the people who take it seriously. It works when taken seriously. And there’s this to consider: it counts against the wilder speculations of LW just as much as against kobolds.
For the rest, my rule of thumb for deciding whether any of it is worth attention (the small-world response to large-world problems, as Diaconis says in that paper) is this: if it isn’t being done with mathematics, it’s useless. A necessary condition, not a sufficient one, but it weeds out almost everything. AGI? Make one. Friendliness proof methods? Write them up in the Journal of Symbolic Logic. TDT? Ditto.
Aumann agreement! Dissension among rational people!
This one’s easy; I’m guessing this is about “rational” people (LessWrongers, for instance) disagreeing. “Rational” in the sentence above isn’t the same as “rational” as defined in Aumann’s paper.
Specifically, we’re human beings: two of us don’t necessarily have the same priors, or common knowledge of each other’s posteriors for every possible event A. So we’re bound to disagree sometimes.
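To make that concrete, here is a minimal sketch of two ideal Bayesian updaters who see the same evidence but start from different priors and so end up disagreeing; the priors and likelihoods are illustrative assumptions only.

```python
# Two "rational" agents update on the same evidence but start from
# different priors, so they end with different posteriors. Aumann's
# theorem assumes a common prior (and common knowledge of posteriors),
# which humans don't have. All numbers here are illustrative.

def posterior(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    p_e = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / p_e

# Same evidence for both: E is three times likelier under H than under not-H.
likelihood_h, likelihood_not_h = 0.6, 0.2

alice = posterior(prior=0.50, likelihood_h=likelihood_h, likelihood_not_h=likelihood_not_h)
bob = posterior(prior=0.10, likelihood_h=likelihood_h, likelihood_not_h=likelihood_not_h)

print(f"Alice: P(H|E) = {alice:.2f}")  # 0.75
print(f"Bob:   P(H|E) = {bob:.2f}")    # 0.25
# Identical evidence, flawless Bayesian updating, persistent disagreement.
```

Once the common-prior assumption goes, so does the guarantee of agreement; nothing irrational is happening in either calculation.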