Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.”
Advice more or less completely ignored by everyone, including EY himself.
To a Bayesian, on the other hand, if a hypothesis does not today have a favorable likelihood ratio over “I don’t know”, it raises the question of why you today believe anything more complicated than “I don’t know”. But I knew not the Way of Bayes, so I was not thinking about likelihood ratios or focusing probability density.
I want to point out that thinking about likelihood ratios or focusing probability density is independent of any knowledge of Bayes’ theorem. I’d be surprised if any calculation actually occurred.
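To make the contrast concrete, here is a minimal sketch, with entirely made-up numbers, of what an explicit likelihood-ratio update would look like; nothing in the quoted passage requires actually running one:

```python
# Minimal sketch of an explicit likelihood-ratio update.
# Every probability here is made up for illustration.

p_h = 0.01              # P(H): prior probability of the hypothesis
p_e_given_h = 0.30      # P(E|H): how strongly H predicts the evidence
p_e_given_not_h = 0.25  # P(E|~H): how likely the evidence is anyway

likelihood_ratio = p_e_given_h / p_e_given_not_h  # 1.2: barely favorable

prior_odds = p_h / (1 - p_h)
posterior_odds = prior_odds * likelihood_ratio
p_h_given_e = posterior_odds / (1 + posterior_odds)

print(f"LR = {likelihood_ratio:.2f}, P(H|E) = {p_h_given_e:.4f}")
# LR = 1.20, P(H|E) = 0.0120
```

A ratio that close to 1 is the quantitative version of having no favorable likelihood ratio over “I don’t know”, and one can plausibly notice that without doing any arithmetic.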
When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.
I don’t understand this argument. What does calling yourself a rationalist have to do with not ruling the world? Traditional rationality has all sorts of specialized counter-memes against political activity. Hell, even LW has counter-memes against entering politics. Isn’t it plausible that traditional rationalists eschew politics for these reasons, rather than for EY’s thesis, which always seems to be along the lines of “Traditional Rationality is good for nothing, or perhaps less than nothing”?
Traditional Rationality is taught as an art, rather than a science; you read the biographies of famous physicists describing the lessons life taught them, and you try to do what they tell you to do. But you haven’t lived their lives, and half of what they’re trying to describe is an instinct that has been trained into them.
The crushing irony of reading an autobiographical anecdote of EY’s in an essay that implicitly says “learn this intuition I stumbled across” is almost too great to bear. Certainly LW-style rationality is no more a science than traditional rationality is.
The way Traditional Rationality is designed, it would have been acceptable for me to spend 30 years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera.
I don’t think there’s enough data presented here to actually support this claim, depending upon what EY means by “acceptable.” Would the community support him financially if he spent 30 years trying to demonstrate quantum consciousness, without producing something intrinsically valuable along the way? I don’t think so; even Penrose had to produce sound mathematics to back up his crackpottery.
Traditional Rationalists can agree to disagree.
The LW advance in this area seems to consist entirely of agreeing to disagree after quoting Aumann’s theorem and continuing the argument well past the point of diminishing returns. In that respect, Traditional Rationalists win, merely because they don’t have to put up with as much bullshit from, e.g., religious nutjobs playing at rationality.
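For readers who haven’t seen it, the theorem being quoted (Aumann 1976, “Agreeing to Disagree”) says, roughly formalized: two Bayesian agents with a common prior whose posteriors for an event $A$ are common knowledge cannot disagree about $A$:

$$\mathrm{CK}\big(P_1(A \mid \mathcal{I}_1) = q_1 \,\wedge\, P_2(A \mid \mathcal{I}_2) = q_2\big) \;\Longrightarrow\; q_1 = q_2,$$

where $\mathcal{I}_i$ are the agents’ private information partitions and $\mathrm{CK}$ denotes common knowledge. The preconditions, genuinely common priors and honest common knowledge of posteriors, are exactly what forum arguments lack.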
Maybe that will be enough to cross the stratospherically high threshold required for a discipline that lets you actually get it right, instead of just constraining you into interesting new mistakes.
If modern LW is any indication, then it probably wasn’t enough. Everyone talks about Bayes, but few people do any actual math. EY wrote a whole sequence on quantum physics, writing the Schrödinger operator exactly once. If math is what will save us from making interesting new mistakes, we clearly aren’t doing enough of it.
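For the record, the operator in question appears in the time-dependent Schrödinger equation, with $\hat{H}$ the Hamiltonian:

$$i\hbar\,\frac{\partial}{\partial t}\,\psi(x,t) = \hat{H}\,\psi(x,t)$$

Writing it down a second time costs little.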
Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.”
Advice more or less completely ignored by everyone, including EY himself.
An alternative interpretation is that we should break up long chains of reasoning into individually analyzed lemmas and break up complicated plans into subgoals.
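There is a simple quantitative rationale behind that reading: conjunctions of even fairly reliable steps decay quickly. A toy calculation, with an invented per-step reliability:

```python
# Toy illustration of why long unchecked chains fail: if each
# inferential step independently holds with probability p, the
# whole chain holds with probability p**n.

p = 0.95  # invented reliability of a single step
for n in (3, 10, 20):
    print(f"{n:2d} steps -> chain holds with probability {p**n:.2f}")
# 3 steps -> 0.86, 10 steps -> 0.60, 20 steps -> 0.36
```

Verifying each lemma or subgoal separately is just a way of pushing the per-step p toward 1, the only factor in that product under one’s control.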
Hell, even LW has counter-memes against entering politics.
Avoiding discussing politics directly is not the same as not personally entering politics.
Traditional Rationalists can agree to disagree.
The LW advance in this area seems to consist entirely of agreeing to disagree after quoting Aumann’s theorem and continuing the argument well past the point of diminishing returns
It’s good advice, but only if both parties are truly following it; an admittedly implausible prospect.
If math is what will save us from making interesting new mistakes, we clearly aren’t doing enough of it.
What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post at the top level, have cool neon-colored names, etc.? Alternatively or conjunctively, breaking up the site into “fuzzy self-help” and “1337 Bayes mathhacker” sections might help.
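If one wanted to make that concrete, the tiering might look like the following sketch; every privilege name and threshold here is invented for illustration, not a real or proposed LW feature:

```python
# Hypothetical sketch of privileges gated on Project Euler problems
# solved. All names and thresholds are invented for illustration.

PRIVILEGE_THRESHOLDS = {
    "comment": 1,
    "vote": 5,
    "post_top_level": 15,
    "neon_color_name": 50,
}

def allowed(privilege: str, problems_solved: int) -> bool:
    """True iff the user has solved enough problems for the privilege."""
    return problems_solved >= PRIVILEGE_THRESHOLDS[privilege]

print(allowed("vote", 7))             # True
print(allowed("neon_color_name", 7))  # False
```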
What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post at the top level, have cool neon-colored names, etc.? Alternatively or conjunctively, breaking up the site into “fuzzy self-help” and “1337 Bayes mathhacker” sections might help.
Even assuming that this only filters out people whose contributions are unhelpful and provides useful exercise to those whose contributions are helpful, it still sounds like too much inconvenience.
It can certainly be helpful to apply actual math to a question rather than relying on vague intuitions, but if you don’t ensure that the math corresponds to reality, then calculations only provide an illusion of helpfulness, and illusory helpfulness is worse than transparent unhelpfulness.
I’d much prefer a system incentivizing actual empiricism (“I will go out and test this with reliable methodology”) rather than math with uncertain applicability to the real world.
Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.”
Advice more or less completely ignored by everyone, including EY himself.
An alternative interpretation is that we should break up long chains of reasoning into individually analyzed lemmas and break up complicated plans into subgoals.
It would be overwhelmingly excellent if people did that.
Hell, even LW has counter-memes against entering politics.
Avoiding discussing politics directly is not the same as not personally entering politics.
True, I should have said “engaging in” or similar.
If math is what will save us from making interesting new mistakes, we clearly aren’t doing enough of it.
What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post at the top level, have cool neon-colored names, etc.? Alternatively or conjunctively, breaking up the site into “fuzzy self-help” and “1337 Bayes mathhacker” sections might help.
I don’t have any data on these sorts of incentive programs yet.
I disagree that breaking up the site into multiple walled gardens would be helpful, under the principle that there are few enough of us as it is without fragmenting ourselves further.
I think ‘do not rule the world’ meant something like ‘are not highly influential in the world, being CEOs, influential politicians, directors of large scientific projects, etc.’
He means Judgment under Uncertainty: Heuristics and Biases, almost certainly. I think at one point there was a reading group surrounding it, but I don’t know what ever happened to it.