and some of my sense here is that if Paul offered a portfolio bet of this kind, I might not take it myself, but EAs who were better at noticing their own surprise might say, “Wait, that’s how unpredictable Paul thinks the world is?”
If Eliezer endorses this on reflection, that would seem to suggest that Paul actually has good models about how often trend breaks happen, and that the problem-by-Eliezer’s-lights is relatively more about either:
- Paul’s long-term predictions not adequately taking into account his good sense of short-term trend breaks; or
- Paul’s long-term predictions being actually fine and good, but his communication about them being somehow misleading to EAs.
That would be a very different kind of disagreement than I thought this was about. (Though actually kind-of consistent with the way that Eliezer previously didn’t quite diss Paul’s track-record, but instead dissed “the sort of person who is taken in by this essay [is the same sort of person who gets taken in by Hanson’s arguments in 2008 and gets caught flatfooted by AlphaGo and GPT-3 and AlphaFold 2]”?)
Also, none of this erases the value of putting forward the predictions mentioned in the original quote, since that would then be a good method of communicating Paul’s (supposedly miscommunicated) views.
Apologies for my ignorance, does EA mean Effective Altruist?
Yup. Both “Effective Altruism” and “Effective Altruist” are abbreviated as EA.