And a friend requests an article comparing IQ and conscientiousness as predictors of different outcomes.
I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:
But there’s still plenty of room for improvement on those so I’d be curious to hear others’ suggestions.
I’ve been looking for this all my life without even knowing it. (Well, at least for half a year.)
That being said, being maximally effective at doing good is not my sole aim. I'm more interested in expressing my values in as large and impactful a way as possible—and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good.
It’s interesting to ask to what extent this is true of everyone—I think we’ve discussed this before, Matt.
Your version and phrasing of what you’re interested in is particular to you, but we could broaden the question out to ask how far people have moved away from having primarily self-centred drives which overwhelm all others when significant self-sacrifice is on the table. I think some people have gone a long way in moving away from that, but I’m sceptical that any single human being goes the full distance. Most EAs plausibly don’t make any significant self-sacrifices, if that’s measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I’ve talked about these issues a lot.
* Which doesn’t mean they haven’t done a lot of good! If people can donate 5%, 10% or 20% of their income without becoming significantly less happy, then that’s great, and convincing people to do that is low-hanging fruit that we should prioritise, rather than focusing our energies on squeezing out the extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about, after all, not the level of sacrifice they themselves are making.
People’s expectation clock starts running from the time they hit send. More importantly, deadlines related to the email’s content really set the agenda for how often to check your email.
Then change people’s expectations, including their expectations about the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer—I know this may not be feasible, and you make a fair point.)
As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ … ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.
I do know—indeed, live with :S—a couple.
Effective altruism ≠ utilitarianism
Here’s the thread on this at the EA Forum: Effective Altruism and Utilitarianism
Potentially worth actually doing—what’d be the next step in terms of making that a possibility?
Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues
Thanks, fixed, now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/
SSC Discussion: No Time Like The Present For AI Safety Work
For my part, I’m interested in the connection to GiveWell’s powerful advocacy of “cluster thinking”. I’ll think about this some more and post thoughts if I have time.
SSC discussion: “bicameral reasoning”, epistemology, and scope insensitivity
http://www.moneysavingexpert.com/ is the best way to learn about these.
Shop for Charity is much better: 5%+ goes directly to GiveWell-recommended charities, and there are browser plugins people have made that apply this every time you buy from Amazon.
Did you edit your original comment?
Not that I recall
Some people offer arguments (e.g. http://philpapers.org/archive/SINTEA-3.pdf), and for some people it’s a basic belief or value not based on argument.
This is a good solution when marginal money has roughly equal utility to Alice and Bob, but suffers otherwise.
If C doesn’t want A to play music so loud, but it’s A’s right to do so, why should A oblige? What is in it for A?
Some (myself included) would say that A should oblige if doing so would increase total utility, even if there’s nothing in it for A self-interestedly. (I’m assuming your saying A had a right to play loud music wasn’t meant to exclude this.)
Out of interest, do you think you’d use this, Owen?