[SEQ RERUN] My Wild and Reckless Youth
Today’s post, My Wild and Reckless Youth, was originally published on 30 August 2007. A summary (taken from the LW wiki):
Traditional rationality (without Bayes’ Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Say Not “Complexity”, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
He means Judgment under Uncertainty: Heuristics and Biases, almost certainly. I think at one point there was a reading group surrounding it, but I don’t know what ever happened to it.
Advice more or less completely ignored by everyone, including EY himself.
I want to point out that thinking about likelihood ratios or focusing probability density is independent of any knowledge of Bayes’ theorem. I’d be surprised if any calculation actually occurred.
I don’t understand this argument. What does calling yourself a rationalist have to do with not ruling the world? Traditional rationality has all sorts of specialized counter-memes against political activity. Hell, even LW has counter-memes against entering politics. Isn’t it more plausible that traditional rationalists eschew politics for these reasons than because of EY’s thesis, which always seems to amount to “Traditional Rationality is good for nothing, or perhaps less than nothing”?
The crushing irony of reading an autobiographical anecdote of EY’s in an essay that implicitly says “learn this intuition I stumbled across” is almost too great to bear. Certainly LW-style rationality is no more a science than traditional rationality is.
I don’t think there’s enough data presented here to actually support this claim, depending on what EY means by “acceptable.” Would the community support him financially if he spent 30 years trying to demonstrate quantum consciousness without producing something intrinsically valuable along the way? I don’t think so; even Penrose had to produce sound mathematics to back up his crackpottery.
The LW advance in this area seems to consist entirely of agreeing to disagree after quoting Aumann’s theorem and then continuing the argument well past the point of diminishing returns. In that respect, Traditional Rationalists win, merely because they don’t have to put up with as much bullshit from, e.g., religious nutjobs playing at rationality.
If modern LW is any indication, then it probably wasn’t enough. Everyone talks about Bayes, but few people do any actual math. EY wrote a whole sequence on quantum physics while writing down the Schrödinger operator exactly once. If math is what will save us from making interesting new mistakes, we clearly aren’t doing enough of it.
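For concreteness, here is a minimal sketch of the kind of explicit calculation being asked for: a single Bayesian update expressed in odds form. The prior and likelihood ratio are made-up numbers chosen purely for illustration.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# Hypothetical numbers: prior P(H) = 0.1, and the observed evidence is
# 5 times as likely under H as under not-H (likelihood ratio = 5).
prior = 0.1
prior_odds = prior / (1 - prior)              # 1/9
post_odds = posterior_odds(prior_odds, 5.0)   # 5/9
post_prob = odds_to_prob(post_odds)           # 5/14, about 0.357
print(round(post_prob, 3))                    # prints 0.357
```

Even a back-of-the-envelope update like this forces you to commit to a prior and a likelihood ratio, which is most of the discipline being advocated.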
An alternative interpretation is that we should break up long chains of reasoning into individually analyzed lemmas and break up complicated plans into subgoals.
Avoiding discussing politics directly is not the same as not personally entering politics.
It’s good advice, but only if both parties are truly following it, an admittedly implausible prospect.
What about requiring all new users to solve some number of Project Euler problems in order to comment, vote, post at the top level, have cool neon-colored names, etc.? Alternatively or conjunctively, breaking up the site into “fuzzy self help” and “1337 Bayes mathhacker” sections might help.
Even assuming that this only filters out people whose contributions are unhelpful and provides useful exercise to those whose contributions are helpful, it still sounds like too much inconvenience.
It can certainly be helpful to apply actual math to a question rather than relying on vague intuitions, but if you don’t ensure that the math corresponds to reality, then calculations only provide an illusion of helpfulness, and illusory helpfulness is worse than transparent unhelpfulness.
I’d much prefer a system incentivizing actual empiricism (“I will go out and test this with reliable methodology”) rather than math with uncertain applicability to the real world.
It would be overwhelmingly excellent if people did that.
True, I should have said “engaging in” or similar.
I don’t have any data on these sorts of incentive programs yet.
I disagree that breaking up the site into multiple walled gardens would be helpful, under the principle that there are few enough of us as it is without fragmenting ourselves further.
Because I have nowhere better to post this:
Public key is on my wiki userpage.
I think ‘do not rule the world’ meant something like ‘are not highly influential in the world, being CEOs, influential politicians, directors of large scientific projects, etc.’
I think this is a case where Eliezer’s nontraditional career path caused him to miss out on some of the traditional guidance that young researchers get. If a graduate student tells their adviser that they want to work on some far-fetched research project like quantum neurology, the adviser will have some questions for their student like “What’s the first step that you can take to conduct research on this?”, “How likely is this to pan out?”, “What publications can you get out of this?”, and “Will this help you get a job?” Most young researchers have a mentor who is trying to help them get started on a successful career, and the mentor will steer them away from unproductive projects which don’t leave them with good answers to these questions.
This careerism has its downsides, but it sets a higher standard than mere falsifiability, which helps keep young researchers from wasting their careers pursuing some silly idea. You have to get tenure before you can do that. (The exception is when the whole field has already embraced the silly idea enough to publish articles about it in top journals and allow researchers to make a career out of it.)
This approach has tremendous downsides, because so many researchers are encouraged to focus on projects where it’s easy rather than useful to publish, and a majority of publications are hardly interesting or useful to anyone.
I happen to be in the middle of Zen and the Art of Motorcycle Maintenance right now and I’m amused that this post popped up. It seems almost to be aimed directly at Pirsig, whose primary problem seems (so far) to be that his use of traditional rationality to critique traditional rationality leads to the breaking of his mind. I find myself saying to the book, “Dissolve the question,” each time Pirsig reaches a dilemma or ponders a definition, but instead he builds towering recursive castles of thought (often grounded in nothing more than intuition) that would be heavily downvoted if posted here.
That came off as more negative than I had intended, and yet I still mean it.