This is like a whole sequence condensed into a post.
Incidentally, in case it’s useful to anyone… The way I originally processed the $112M figure (or $68M as it then was) was something along the lines of the following (the arithmetic is sketched in code after the list):
$68M pledged
apply 90% cynicism
that gives $6.8M
that’s still way too large a number to represent actual ROI from $170K worth of volunteer time
how can I make this inconvenient number go away?
aha! This is money that’s expected to roll in over the next several decades. We really have no idea what the EA movement will turn into over that time, so should apply big future discounting when it comes to estimating our impact
(note it looks like Will was more optimistic, applying 67% cynicism to get from $400 to $130)
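To make that arithmetic explicit, here’s a quick sketch; all of the inputs are the rough figures from the reasoning above, nothing official:

```python
# Rough figures from the reasoning above; none of these are official numbers.
pledged = 68_000_000        # $68M pledged (later $112M)
cynicism = 0.90             # assume 90% of pledged money never materialises
volunteer_cost = 170_000    # ~$170K worth of volunteer time

expected = pledged * (1 - cynicism)       # $6.8M expected to materialise
roi_multiple = expected / volunteer_cost  # ~40x, still uncomfortably large

# (For comparison, Will's 67% cynicism takes $400 down to roughly $130.)
print(f"expected: ${expected:,.0f}, ROI multiple: {roi_multiple:.0f}x")
```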
This implies immediately that 75-80% haven’t, and in practice that number will be higher because of the self-reporting. This substantially reduces the likely impact of 80,000 Hours as a program.
Reduces it from what? There’s a point at which it’s more cost-effective to just find new people than to carry on working to persuade existing ones. My intuition doesn’t say much about whether this happy point is above or below 25%.
Good point about self-reporting potentially exaggerating the impact though.
The pledging back-of-the-envelope calculation got me curious, because I had been assuming GWWC wouldn’t flat-out lie about how much had been pledged (they say “We currently have 291 members … who together have pledged more than 112 million dollars”, which implies an actual total, not an estimate).
On the other hand, it’s just measuring pledges; it’s not an estimate of how much money anyone expects to actually materialise. It hadn’t occurred to me that anyone would read it that way—I may be mistaken here though, in which case there’s a genuine issue with how the number is being presented.
Anyway, I still wasn’t sure the pledge number made sense so I did my own back-of-the-envelope:
£72.68M pledged
291 members
£250K pledged per person over the course of their life
40 years average expected time until retirement (this may be optimistic. I get the impression most members are young though)
£6.2K average pledged per member per year
That would mean people are expecting to make £62K per year averaged over their entire remaining career (the same arithmetic is sketched in code below), which still seems very optimistic. But:
some people will be pledging more than 10%
there might be some very high income people mixed in there, dragging the mean up.
So I think this passes the laugh test for me, as a measure of how much people might conceivably have pledged, not how much they’ll actually deliver.
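For anyone who wants the arithmetic spelled out, here is the same back-of-the-envelope as a short script; the 40-year career and 10% pledge rate are my own assumptions from above, not GWWC figures:

```python
# Back-of-the-envelope check of the GWWC pledge total.
# All inputs are the rough figures from the comment above, not official numbers.
total_pledged = 72_680_000   # £72.68M pledged in total
members = 291
career_years = 40            # assumed average time until retirement
pledge_fraction = 0.10       # the standard GWWC pledge of 10% of income

per_member = total_pledged / members                     # ≈ £250K over a lifetime
per_member_per_year = per_member / career_years          # ≈ £6.2K per year
implied_income = per_member_per_year / pledge_fraction   # ≈ £62K average income

print(f"lifetime pledge per member: £{per_member:,.0f}")
print(f"yearly pledge per member:   £{per_member_per_year:,.0f}")
print(f"implied average income:     £{implied_income:,.0f}")
```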
Meetup : Toronto—What’s all this about Bayesian Probability and stuff?!
Meetup : Toronto—Rational Debate: Will Rationality Make You Rich? … and other topics
I love the landmine metaphor—it blows up in your face and it’s left over from some ancient war.
Did he mean if they’re someone else’s fault then you have to fix the person?
You also know your own results aren’t fraudulent.
That experiment has changed Latham’s opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives.
He seems to have skipped right over the part where he wonders why he and Bargh see one thing and other people see something different. Do people update far more strongly on evidence if it comes from their own lab?
Also, yay priming! (I don’t want this comment to sound negative about priming as such)
2 sounds wrong to me—like you’re trying to explain why having a consistent internal belief structure is important to someone who already believes that.
The things which would occur to me are:
If both of you are having reactions like this then you’re dealing with status, in-group and out-group stuff, taking offense, etc. If you can make it not be about that and be about the philosophical issues—if you can both get curious—then that’s great. But I don’t know how to make that happen.
Does your friend actually have any contradictory beliefs? Do they believe that they do?
You could escalate—point out every time your friend applies a math thing to social justice. “2000 people? That’s counting. You’re applying a math thing there.” “You think this is better than that? That’s called a partial ordering and it’s a math thing”. I’m not sure I’d recommend this approach though.
This may appear self-evident to you, but not necessarily to your “socially progressive” friend. Can you make a convincing case for it?
Remember you have to make a convincing case without using stuff like logic
Not that I know of
Any advice on how to set one up? In particular, how to add entries to it retrospectively—I was thinking of searching the comments database for things like “I intend to”, “guard against”, “publication bias”, etc. and manually finding the relevant ones. This is somewhat laborious, but the effect I want to avoid is “oh, I’ve just finished my write-up (or am just about to), now I’ll go and add the original comment to the anti-publication-bias registry”.
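Something like this is the kind of search I have in mind; it’s entirely hypothetical, since I don’t know what form the comments data is actually available in, so the file name and field names are made up:

```python
# Hypothetical sketch: scan a comment export for registry-worthy phrases.
# The file name and the "author", "date", "body" fields are assumptions;
# I don't know the real schema.
import json

PHRASES = ["i intend to", "guard against", "publication bias"]

with open("comments_export.json") as f:
    comments = json.load(f)

candidates = [
    c for c in comments
    if any(phrase in c["body"].lower() for phrase in PHRASES)
]

# These would still need manual review before being added to the registry.
for c in candidates:
    print(c["date"], c["author"], c["body"][:80])
```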
On the other hand it seems like anyone can safely add anyone else’s comment to the registry as long as it’s close enough in time to when the comment was written.
Any advice? (I figured if you’re involved at CFAR you might know a bit about this stuff).
This is interesting. People who are vulnerable to the donor illusion either have some of their money turned into utilons, or are taught a valuable lesson about the donor illusion, possibly creating more utilons in the long term.
This is useful to me as I’ll be attending the March workshop. If I successfully digest any of the insights presented here, then I’ll have a better platform to start from. (Two points in particular: the material on the parasympathetic nervous system, which I’d basically never heard of before, and the connection between the concepts of “epistemic rationality” and “knowing about myself”, which is more obvious in retrospect.)
Thanks for the write-up!
And yes, I’ll stick up at least a brief write-up of my own after I’m done. Does LW have an anti-publication-bias registry somewhere?
There’s probably better stuff around, but it made me think of Hanson’s comments in this thread:
just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well
I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!
More posts like this please!
I can imagine that if you design an agent by starting off with a reinforcement learner, and then bolting some model-based planning stuff on the side, then the model will necessarily need to tag one of its objects as “self”. Otherwise the reinforcement part would have trouble telling the model-based part what it’s supposed to be optimizing for.
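Here’s a toy sketch of what I mean, with every name invented purely for illustration (this isn’t any real agent design): the planner has to ask the world model which object is “self” before it can score predicted futures with the reinforcement learner’s reward.

```python
# Toy illustration only: a reinforcement learner with a model-based planner
# bolted on, where the world model tags one object as "self" so the planner
# knows whose predicted future the reward function should score.
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    name: str
    is_self: bool = False  # the tag the planner relies on

@dataclass
class WorldModel:
    objects: list = field(default_factory=list)

    def self_object(self):
        # The planner asks the model which object is "me".
        return next(obj for obj in self.objects if obj.is_self)

class ReinforcementLearner:
    def reward(self, object_state):
        # Placeholder reward as experienced by the RL component.
        return object_state.get("resources", 0)

class ModelBasedPlanner:
    def __init__(self, model, learner):
        self.model = model
        self.learner = learner

    def evaluate_plan(self, predicted_world_state):
        # Without a "self" tag, the planner couldn't tell which part of the
        # predicted state the learner's reward function should be applied to.
        me = self.model.self_object()
        return self.learner.reward(predicted_world_state[me.name])

# Usage: the planner scores a predicted future on behalf of the "agent" object.
model = WorldModel(objects=[WorldObject("rock"), WorldObject("agent", is_self=True)])
planner = ModelBasedPlanner(model, ReinforcementLearner())
print(planner.evaluate_plan({"rock": {}, "agent": {"resources": 3}}))
```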