Bet Payoff 1: OpenPhil/MIRI Grant Increase
takes a deep breath
It has come to that most special of times: the time when a long-term bet is resolved.
Gregory Lewis (aka Thrasymachus) made a bet with me that the Open Philanthropy Project would not report increased confidence in MIRI’s impact, nor an increased grant size, within the next two years.
And today, just four months later, the Open Philanthropy Project has announced (early) a $3.75 million grant spread over three years ($1.25 million per year, which is about half of MIRI’s annual budget; it may be increased if MIRI’s budget increases), and that this was in large part due to a significant update in their views on MIRI, following a very positive review of MIRI’s work on Logical Uncertainty by an external machine learning expert.
The initial bet and wording are in my (four-month-old) Facebook post here.
Crossposted from Facebook
It’s interesting to interpret this in light of the modesty debate. Ben seems to have taken an inside view and distrusted more competent people (OpenPhil) -- and won! tips hat
(Edited to include the word “OpenPhil”)
I am not sure I interpret it quite like that. I do think (apologies to Greg if this is wrong) Gregory’s view is based on trusting ‘experts’ in math and philosophy, and on believing that their respective journals and field leaders are adequate at identifying true and useful insights, whereas I’m much more skeptical (so the lack of major publications by MIRI is stronger evidence for Greg than it is for me).
It’s important to realise this was me picking a different set of experts to trust (MIRI), and I think this shows that most of the work in Greg’s “modest epistemology” is really about figuring out which experts to trust, if you can do so at all. It wasn’t that I took an inside view while Greg took an outside one; we just have different models of who has the relevant expertise.
Hmm… that might temper my view a bit. Still, I think a non-standard set of experts qualifies as an inside view. For example, Inadequate Equilibria mentions trusting Scott Sumner over the Bank of Japan, and another example might be something like “all the professors and investors I’ve spoken to think my startup idea is stupid, but my roommate and co-founder still really believes in it”.
I’d object to the phrase ‘non-standard’. If maths journals count as the ‘standard set of experts’ in Modest Epistemology (TM Greg), then Modest Epistemology (TM Greg) is sneaking in a lot of extra empirical claims: it takes work to decide that the academic field of math is the best in the world at figuring out true math, and that conclusion is not a primitive of one’s worldview.
You have to look at where math advances have come from historically, at the incentives, at the other players, and then arrive at the (broadly correct) belief that academic math journals are the best place to get mathematical truth from. Only then can my arguments for MIRI having an edge in this market interface with your reasons. There’s no free lunch of experts to trust; it’s personal models all the way down.
One of the fundamental things wrong with Modest Epistemology (TM Greg) is that Greg’s post appears to imply that figuring out who the experts are, and what they believe, is in some way trivial. I think the alternative is not “if you think experts are wrong, that’s fine” but “please, dear god, can we build some models of why and when to trust different groups of people to figure out different things”. The correct response is not to reject experts, but to actually practice figuring out when to trust them, so that you can notice inefficiencies and biases in the market and make a profit / find free value.
It felt to me like a post about a largely empirical discussion was instead 90% abstract theory—very interesting and useful theory, but it wasn’t the topic of disagreement (to my eyes).
I agree with your conclusion, that the important takeaway is to build models of whom to trust when and on what matters.
Nonetheless, I disagree that it requires as much work to decide to trust the academic field of math as it does to trust MIRI. Whenever you use the outside view, you need to define a reference class, and I’ve never seen this raised as an objection to the outside view. That’s probably because there is often one class more salient than the others: “people who have social status within field X”. After all, one of the key explanations for the evolution of human intelligence is an arms race in social cognition. For example, studies show that people are clueless at solving logic problems unless you phrase them in terms of detecting cheaters or other breaches of social contracts (see e.g. Cheng & Holyoak, 1985 and Gigerenzer & Hug, 1992). So we should expect humans to find it easy to figure out who has status within a field, but very hard to figure out who is getting the field closer to the truth.
Isn’t this exactly why modesty is such an appealing and powerful view in the first place? Choosing the reference class is easy (it doesn’t require much object-level investigation), and experts are correct sufficiently often that any inside view is mistaken in expectation.
Right. I will say that the heuristic of trusting “people who have social status within field X” has very obvious flaws and biases, and it’s important to build models to work past those mistakes. Furthermore, in the world around us, most high-status people in institutions are messing up most of the time. The Bank of Japan is a fine example, but so are the people running my prestigious high school, my university, researchers in my field, etc. It’s important to learn where they’re inadequate so you can find extra value.
I’m not saying that, in general, “trust the high-status people” is a bad heuristic when you have zero further information, but you definitely need to start building detailed models and do better than that; otherwise you’re definitely not going to save the world.
(After talking via PM) Oh, you mean I beat OpenPhil because I was able to predict the direction of their future beliefs! Right.
I mean, I think that I didn’t beat the grantmakers with this bet. Nick (Beckstead) writes in the 2016 recommendations for personal donations that you should donate to MIRI, and one of his reasons is:
It’s not obvious to me that, four months ago, Nick wouldn’t have taken the same side of the bet as I did against Greg (although that would have made for weird bettor incentives/behaviour, for obvious reasons).
Added: I do think that my reasons were not time-dependent, and I should’ve been able to make the bet in 2016. However, note that the above link, where Nick B and Daniel D both recommend giving to MIRI, was also from 2016, so I’m still not sure I beat them to it.
I’m very confused about how to fit expected updating into a Bayesian framework. For example: a Bayesian agent should expect zero change in any particular belief, even while having high credence that they’ll change some belief or other; and a Bayesian agent can recognize that their median belief change is ≠ 0, but never that their mean belief change is ≠ 0.
On the theoretical level, I believe that it’s consistent to say “further movement is unsurprising, but I can’t predict in which direction”.
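To make the theoretical point concrete, here’s a minimal sketch of conservation of expected evidence (the numbers are made up for illustration; nothing below is OpenPhil’s actual model): your current credence must equal the probability-weighted average of your possible future credences, so the mean expected change is zero even when the median change is not.

```python
# A minimal sketch of conservation of expected evidence, with made-up
# numbers (nothing here comes from OpenPhil). H is some hypothesis,
# E is a piece of evidence you're about to observe.

prior = 0.9                # current credence in H
p_e_given_h = 0.99         # P(E | H)
p_e_given_not_h = 0.50     # P(E | not-H)

# Probability of seeing the evidence at all.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior credence in H in each branch (Bayes' rule).
post_if_e = p_e_given_h * prior / p_e                    # ~0.947
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)    # ~0.153

# Mean change is zero: the expected posterior equals the prior.
mean_change = p_e * (post_if_e - prior) + (1 - p_e) * (post_if_not_e - prior)

# Median change is positive: with probability ~0.94 you update up a
# little; with probability ~0.06 you update down a lot.
median_change = (post_if_e - prior) if p_e > 0.5 else (post_if_not_e - prior)

print(f"mean change   = {mean_change:+.6f}")   # ~0 (up to float error)
print(f"median change = {median_change:+.3f}") # +0.047
```

On these numbers you expect a small upward update about 94% of the time and a large downward update about 6% of the time, which is the same shape as the practical statement below: “I’d probably bet that I’ll increase funding, but wait for my next report.”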
On the practical level, it’s probably also consistent to say “If you forced betting odds out of me now, I’d probably bet that I’ll increase funding to MIRI, so if you’re trusting my view you should donate there yourself, but my process for increasing a grant size has more steps and deliberation and I’m not going to immediately decide to increase funding for MIRI—wait for my next report”.
I think I understand this a bit better now, given also Rob’s comment on FB.
On the theoretical level, that’s a very interesting belief to have, because sometimes it doesn’t pay rent in anticipated experience at all. Given that you cannot predict the direction of the change, it seems rational to act as if your belief will not change, despite your being very confident that it will.
Your practical example is not a change of belief. It’s rather saying “I now believe I’ll increase funding to MIRI, but my credence is still <70%, as the formal decision process usually uncovers many surprises.”