I found the Nonviolent Communication method extremely helpful for feeling more connected to my friends.
I’ve noticed a related phenomenon where, when someone acquires a new insight, they judge its value by how difficult it was to understand, instead of by how much it improves their model of the world. It’s the feeling of “Well, I hadn’t thought of that before, but I suppose it’s pretty obvious.” But of course this is a mistake because the important part is “hadn’t thought of that before,” no matter whether you think you could’ve realized it in hindsight. (The most pernicious version of this is “Oh, yeah, I totally knew that already. I just hadn’t made it so explicit.”)
Awesome. PM me if you want to talk more about effective altruism. (I’m currently staffing the EA Summit, so I may not reply swiftly.)
How many rationalists does it take to change a lightbulb?
Just one. They’ll take any excuse to change something.
How many effective altruists does it take to screw in a lightbulb?
Actually, it’s far more efficient if you convince someone else to screw it in.
How many Giving What We Can members does it take to change a lightbulb?
Fifteen have pledged to change it later, but we’ll have to wait until they finish grad school.
How many MIRI researchers does it take to screw in a lightbulb?
The problem is that there are multiple ways to parse that, and while it might naively seem like the ambiguity is harmless, it would actually be disastrous if any number of MIRI researchers tried to screw inside of a lightbulb.
How many CFAR instructors does it take to change a lightbulb?
By the time they’re done, the lightbulb should be able to change itself.
How many Leverage Research employees does it take to screw in a lightbulb?
I don’t know, but we have a team working to figure that out.
How many GiveWell employees does it take to change a lightbulb?
Not many. I don’t recall the exact number; there’s a writeup somewhere on their site, if you care to check.
How many cryonicists does it take to change a lightbulb?
Two; one to change the lightbulb, and one to preserve the old one, just in case.
How many neoreactionaries does it take to screw in a lightbulb?
We’d be better off returning to the dark.
The LW study hall seems relevant.
I agree when it comes to asking questions about the facts of the situation. On the other hand, asking nonjudgmental questions about the person’s feelings is a good way to establish rapport, if that’s your goal. (See also)
The counterargument would be to claim that cows > pigs > chickens in intelligence/complexity.
My understanding is that pigs > cows >> chickens. Poultry vs. mammal is a difficult question that depends on nebulous value judgments, but I thought it was fairly settled that beef causes less suffering per unit mass than other mammals.
I’ve found that the process of creating the cards is helpful because it forces me to make the book’s major insights explicit. I usually use cloze deletions to run through a book’s major points. For example, my card for The Lean Startup is:
“The Lean Startup process for continuous improvement is (1) {{c1::identify the hypothesis to test}}, (2) {{c2::determine metrics with which to evaluate the hypothesis}}, (3) {{c3::build a minimum viable product}}, (4) {{c4::use the product to get data and test the hypothesis}}.”
This isn’t especially helpful if you just remember what the four phrases are, so I use this as a cue to think briefly about each of those concepts.
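If you ever want to batch-generate cards like this instead of typing them into Anki, here’s a minimal sketch using the third-party genanki Python library. The deck ID, deck name, and output filename are arbitrary placeholders I made up:

```python
import genanki  # third-party library for building Anki decks programmatically

# One cloze note per book, using Anki's {{c1::...}} cloze syntax.
note = genanki.Note(
    model=genanki.CLOZE_MODEL,  # genanki's built-in cloze model
    fields=[
        "The Lean Startup process for continuous improvement is "
        "(1) {{c1::identify the hypothesis to test}}, "
        "(2) {{c2::determine metrics with which to evaluate the hypothesis}}, "
        "(3) {{c3::build a minimum viable product}}, "
        "(4) {{c4::use the product to get data and test the hypothesis}}.",
        "",  # the cloze model's optional "Back Extra" field
    ],
)

deck = genanki.Deck(1607392319, "Book Notes")  # arbitrary deck ID and name
deck.add_note(note)
genanki.Package(deck).write_to_file("book_notes.apkg")  # import this file in Anki
```

Typing the card straight into Anki works just as well, of course; a script only earns its keep if you’re converting a big backlog of notes at once.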
I frequently give my friends detailed feedback and analysis on their writing. They know about my speed reading thing, and none of them have noticed any change in the quality of my feedback.
This happened to me all the time before I started putting valuable insights into Anki. I find that 1 card per outstanding article or lecture and 1-3 cards per excellent book is about right. (This is the only thing I use Anki for.)
I learned from Matt Fallshaw, who IIRC was using something loosely based on the Evelyn Wood method.
My experience is that modern speed-reading techniques don’t lower comprehension unless you go extremely fast (say, 900-1500 wpm). The exception is the very early stages of practice, when comprehension does drop, so it’s good to practice on low-stakes material, e.g., mildly interesting fiction. After a couple of weeks of ~30 minutes of focused practice daily, I was reading at double my previous pace with the same comprehension.
We’re working on putting the guest list together. I’ll notify people as soon as we have definite answers.
Online stuff:
I have several friends in the DC area who I met because I made this post.
I found my job because I applied to a CFAR workshop, and that led me to attend the Effective Altruism Summit instead (funny story there), which is where I first met the team I work with.
Phil and Eliezer have critiqued my fiction, and I’ve done the same for Phil and Vaniver.
Meatspace stuff:
I met about a dozen good friends in Boston through LW meetups and lived with several of them before I moved to SF.
These days, my primary social group is maybe 50% self-identified rationalists and 100% people who are serious about existential risk and laugh at jokes about fundamental epistemology.
What I still don’t get is how to steer a conversation from the small-talk phase to more personal topics, especially in a group setting.
Rosenberg’s book gave me a framework that I use to understand the feelings someone is experiencing and to communicate my own experience, which I think is what you mean by “personal topics.” The differences between the first and second versions of Schelling Day are strongly informed by this framework, to give an (extremely mechanical and oversystematized) example.
Done!
It’s because of the peak-end rule. Last year, Boston’s potluck started out with us following up on what people had shared, and then drifted to our usual conversation topics. I think there are still good reasons to eat a meal together, and good reasons for such a meal to be a potluck, but I’d recommend doing so before the event. I’ll edit that into the post.
I’ll flag that I’m currently working on revisions to the holiday based on feedback from last time. Expect that to be posted soon.
Space is limited, so we have to be pretty selective. I’d say it’s worth taking some time to present the relevant information.
This is a difficult problem whose implications go well beyond evaluating charities. Many people seem to defer their evaluation of experts to the experts, but then you have to figure out how to qualify those experts, and I haven’t yet seen a good solution to that.
Some heuristics that I use instead:
—Does the expert produce powerful, visible effects in their domain of expertise which non-experts can’t duplicate? If so, they’re probably reliable within their domain. (For example, engineers can build bridges and authors can make compelling stories, so they’re probably reliable in those fields.) This is only useful in cases where a non-expert can evaluate the product’s quality; it won’t help a non-mathematician evaluate theoretical physics.
—Are the domain experts split into many disagreeing camps? If so, at most one camp is right, which means most of the experts are wrong, and the field isn’t reliable. (So this rules out, e.g., experts on nutrition.) This one is a tool for assessing domains of expertise, and won’t tell you much about individual experts.
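To make the decision procedure concrete, here’s a toy sketch of how the two heuristics might combine. The argument names are hypothetical labels for judgments you’d still have to make yourself; this is an illustration, not a validated procedure:

```python
from typing import Optional

def domain_seems_reliable(
    visible_effects: bool,        # do experts produce effects non-experts can't duplicate?
    laypeople_can_judge: bool,    # can a non-expert evaluate the quality of those effects?
    many_disagreeing_camps: bool, # are the experts split into many disagreeing camps?
) -> Optional[bool]:
    """Rough screen for whether a domain's experts are worth deferring to."""
    # Heuristic 2: if the camps disagree, at most one is right,
    # so most experts in the field are wrong.
    if many_disagreeing_camps:
        return False
    # Heuristic 1: powerful, visible, layperson-checkable effects
    # (bridges that stay up, stories that compel) suggest real expertise.
    if visible_effects and laypeople_can_judge:
        return True
    return None  # the heuristics don't settle it

# Examples: engineering passes, nutrition fails, theoretical physics is undetermined.
print(domain_seems_reliable(True, True, False))   # True
print(domain_seems_reliable(False, False, True))  # False
print(domain_seems_reliable(True, False, False))  # None
```

Note the asymmetry: the camps check can only disqualify a field, and the visible-effects check can only qualify one, which is why the undetermined case has to stay undetermined.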