Ray Dalio’s “Principles”. There’s a bunch of stuff in there that I disagree with, but overall he seems pretty serious about tackling these issues—and apparently has been very successful.
What are the optimal biases to overcome?
Use direct replies to this comment for suggesting things about tackling practical biases.
Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is “keep NASA at current funding levels and increase funding for nuclear weapons research” then you should be very suspicious.
Can you point to something I said that you think is wrong?
My understanding of the history (from reading an interview with Eliezer) is that Eliezer concluded the singularity was the most important thing to work on and then decided the best way to get other people to work on it was to improve their general rationality. But whether that’s true or not, I don’t see how that’s inconsistent with the notion that Eliezer and a bunch of people similar to him are suffering from motivated reasoning.
I also don’t see how I conflated LW and SI. I said many LW readers worry about UFAI and that SI has taken the position that the best way to address this worry is to do philosophy.
Right. I tweaked the sentence to make this more clear.
Yes, “arguing about ideas on the Internet” is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).
There’s nothing wrong with arguing on the Internet. I’m merely asking whether the belief that “arguing on the Internet is the most important thing anyone can do to help people” is the result of motivated reasoning.
A cynical explanation for why rationalists worry about FAI
On the question of the impact of rationality, my guess is that:
Luke, Holden, and most psychologists agree that rationality means something roughly like the ability to make optimal decisions given evidence and goals.
The main strand of rationality research followed by both psychologists and LWers has been focused on fairly obvious cognitive biases. (For short, let’s call these “cognitive biases”.)
Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational. For example, it’s very clear that spinning a wheel should not affect people’s estimates of how many African countries are in the UN. But do you know anyone for whom this sort of thing is really their biggest problem?
Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them. These are the tests used in the studies psychologists have done on whether rationality predicts success.
LW readers tend to be fairly good at avoiding cognitive biases (and will be even better if CFAR takes off).
But there are a whole series of much more important irrationalities that LWers suffer from. (Let’s call them “practical biases” as opposed to “cognitive biases”, even though both are ultimately practical and cognitive.)
Holden is unusually good at avoiding these sorts of practical biases. (I’ve found Ray Dalio’s “Principles”, written by Holden’s former employer, an interesting document on practical biases, although it also has a lot of stuff I disagree with or find silly.)
Holden’s superiority at avoiding practical biases is a big part of why GiveWell has tended to be more successful than SIAI. (Givewell.org has around 30x the traffic of Singularity.org according to Compete.com, and my impression is that it moves several times as much money, although I can’t find a 2011 fundraising total for SIAI.)
lukeprog has been better at avoiding practical biases than previous SIAI leadership and this is a big part of why SIAI is improving. (See, e.g., lukeprog’s debate with EY about simply reading Nonprofit Kit for Dummies.)
Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.
Then it does seem like your AI arguments are playing reference class tennis with a reference class of “conscious beings”. For me, the force of the Tool AI argument is that there’s no reason to assume that AGI is going to behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. It would be much more like Holden’s Tool AI: you’d feed it data, it’d make predictions, you could choose to use the predictions.
(This is, naturally, the view of that school of AI implementers. Scott Brown: “People often seem to conflate having intelligence with having volition. Intelligence without volition is just information.”)
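To make the distinction concrete, here’s a minimal sketch of the Tool AI pattern as I understand it (all names and numbers are made up purely for illustration, not anyone’s actual design): the system only outputs predictions, and a human decides what, if anything, to do with them.

```python
# Minimal sketch of the Tool AI pattern: the system only predicts;
# a human decides whether to act. Everything here is a hypothetical
# illustration, not an actual design.

from dataclasses import dataclass


@dataclass
class Prediction:
    action: str            # a candidate action the tool evaluated
    expected_value: float  # the tool's estimate of how well it would go


class ToolAI:
    """Consumes data and returns ranked predictions; it never acts on them."""

    def predict(self, data: dict) -> list:
        candidates = data.get("candidates", {})
        preds = [Prediction(action, score) for action, score in candidates.items()]
        return sorted(preds, key=lambda p: p.expected_value, reverse=True)


def human_operator(predictions: list) -> None:
    # Volition stays with the human: the tool's output is just information.
    for p in predictions:
        print(f"{p.action}: expected value {p.expected_value:.2f}")


tool = ToolAI()
human_operator(tool.predict({"candidates": {"fund asteroid tracking": 0.8,
                                            "do nothing": 0.1}}))
```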
“it will have conscious observers in it if it performs computations”
So your argument against Bohm depends on information functionalism?
I have not read the MWI sequence yet, but if the argument is that MWI is simpler than collapse, isn’t Bohm even simpler than MWI?
(The best argument against Bohm I can find on LW is a brief comment that claims it implies MWI, but I don’t understand how and there doesn’t seem to be much else on the Web making that case.)
Can you unpack your argument against Bohm? Why does a real guide-wave require multiple worlds?
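For concreteness, by “guide-wave” I mean the wave in the standard de Broglie–Bohm guiding equation (textbook form, included just so we’re talking about the same object): the wavefunction evolves by the Schrödinger equation, and the actual particle configuration is carried along by it.

```latex
% De Broglie-Bohm ("pilot-wave") dynamics, standard form:
% the wavefunction never collapses, and the actual particle
% configuration Q(t) is guided by it.
\begin{align}
  i\hbar \,\frac{\partial \psi(q,t)}{\partial t}
    &= \Bigl( -\sum_k \frac{\hbar^2}{2 m_k} \nabla_k^2 + V(q) \Bigr)\, \psi(q,t) \\
  \frac{dQ_k}{dt}
    &= \frac{\hbar}{m_k}\,
       \operatorname{Im}\!\left( \frac{\nabla_k \psi}{\psi} \right)
       \Bigg|_{q = Q(t)}
\end{align}
```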
I’m Aaron Swartz. I used to work in software (including as a cofounder of Reddit, whose software powers this site) and now I work in politics. I’m interested in maximizing positive impact, so I follow GiveWell carefully. I’ve always enjoyed the rationality improvement stuff here, but I tend to find the lukeprog-style self-improvement stuff much more valuable. I’ve been following Eliezer’s writing since before even the OvercomingBias days, I believe, but have recently started following LW much more carefully after a couple of friends mentioned it to me in close succession.
I found myself wanting to post but didn’t have any karma, so I thought I’d start by introducing myself.
I’ve been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It’d be a bit like the Akrasia Tactics Review, but applied to more topics.
Fantastic. Me too!
I think #5 is bad metaethics. You write: “Why punish an improvement?” and “A moral framework should make the good outcome and bad outcome as distinct as possible, not the same.”
I think this is a holdover from Judeo-Christian metaethics in which there are distinct classes of good things to do and bad things to do (and morally-neutral things in between) and then clear rewards for doing good and punishments for doing bad. In a world without God, morality isn’t about punishing or rewarding us, so a moral framework should provide an ordering over choices rather than distinct classes of good and bad with estimates of how good or bad they are. What’s useful to know is “What’s the most moral thing to do here?” not “How much will I get punished if I don’t do this?” because you simply won’t get punished.
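A toy way to see the difference between the two framings (the acts and scores below are made up purely for illustration): one sorts acts into good/bad classes, the other just orders the available options and asks which is best.

```python
# Toy contrast between the two framings; the acts and scores are
# hypothetical, purely for illustration.
choices = {
    "donate to the most effective charity": 9.0,
    "donate to a decent charity": 4.0,
    "do nothing": 1.0,
}

# Classification framing: sort acts into distinct good/bad classes
# (and then worry about reward or punishment for each class).
classified = {act: ("good" if score >= 5 else "bad") for act, score in choices.items()}

# Ordering framing: just rank the options and ask which is most moral.
best = max(choices, key=choices.get)

print(classified)
print("Most moral available option:", best)
```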
Re #2-6, I don’t think GiveWell has ever said these. Their argument is simply that if you are going to spend money doing good, they will advise you on how to be optimal at it. That’s an argument with 80000hours and (in the case of #5) Peter Singer, not with GiveWell. And #6 I think GiveWell would explicitly disagree with.
Then why don’t you spend more time on finding tactics to increase your energy level? The eight you’ve listed seem pretty good, but surely they’re just the tip of the iceberg.
lukeprog’s writings, especially Build Small Skills in the Right Order.