Where is the Arnold Kling quote from?
I haven’t read the other comments here and I know this post is >10yrs old, but…
For me, (what I’ll now call) effective-altruism-like values are mostly second-order, in the sense that much of my revealed behavior shows that much of the time I don’t want to help strangers, animals, future people, etc. But I think I “want to want to” help strangers, and sometimes the more goal-directed rational side of my brain wins out and I do something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don’t really detect in myself a symmetrical second-order want to NOT want to help strangers. So that’s one thing that “shut up and multiply” has over “shut up and divide,” at least for me.
That said, I realize now that I’m often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor’s occasional desire to help strangers and suggest they generalize it, but I don’t symmetrically appeal to their clearer and more common lack of interest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that’s a more complicated conversation.
Somewhat related: Estimating the Brittleness of AI.
See also e.g. Stimson’s memo to Truman of April 25, 1945.
Some other literature, off the top of my head:
Not “ideal,” but exploring what’s possible: Legal Systems Very Different from Ours
There’s a pretty large literature on various forms of “deliberative democracy,” e.g. see here and here
I would guess there have been interesting discussions of ideal governance in the context of DAOs
Lots of overlap between this concept and what Open Phil calls reasoning transparency.
The Open Philanthropy and 80,000 Hours links are for the same app, just at different URLs.
On Foretell moving to ARLIS… There’s no way you could’ve known this, but as it happens Foretell is moving from one Open Phil grantee (CSET) to another (UMD ARLIS). TBC I wasn’t involved in the decision for Foretell to make that transition, but it seems fine to me, and Foretell is essentially becoming another part of the project I funded at ARLIS.
Someone with a newsletter aimed at people interested in forecasting should let them know. :)
$40k feels like a significant fraction of all the funding available for small experiments in the forecasting space.
Seems like a fit for the EA Infrastructure Fund, no?
Previously: Model Combination and Adjustment.
Very cool that you posted these quantified predictions in advance!
Nice write-up!
A few thoughts re: Scott Alexander & Rob Wiblin on prediction.
Scott wrote that “On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).” I just want to note that while this was indeed a very failed prediction, in a sense the supers were wrong by just two days. (WHO-counted cases only reached >200k on March 18th, two days before question close.)
One interesting pre-coronavirus probabilistic forecast of global pandemic odds is this: From 2016 through Jan 1st 2020, Metaculus users made forecasts about whether there would be a large pandemic (≥100M infections or ≥10M deaths in a 12mo period) by 2026. For most of the question’s history, the median forecast was 10%-25%, and the special Metaculus aggregated forecast was around 35%. At first this sounded high to me, but then someone pointed out that 4 pandemics from the previous 100 years qualified (I didn’t double-check this), suggesting a base rate of 40% chance per decade. So the median and aggregated forecasts on Metaculus were actually lower than the naive base rate (maybe by accident, or maybe forecasters adjusted downward because we have better surveillance and mitigation tools today?), but I’m guessing still higher than the probabilities that would’ve been given by most policymakers and journalists if they were in the habit of making quantified falsifiable forecasts. Moreover, using the Tetlockian strategy of just predicting the naive base rate with minimal adjustment would’ve yielded an even more impressive in-advance prediction of the coronavirus pandemic.
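For concreteness, here’s the arithmetic behind that base rate, plus a slightly more careful version treating pandemics as a Poisson process. This is just my own sketch of the calculation, not something from Metaculus or the sources above, and it inherits the unverified “4 pandemics in 100 years” figure:

```python
import math

# Assumption (unverified, taken from the comment above): 4 qualifying
# pandemics in the previous 100 years, i.e. 10 decades.
events, decades = 4, 10

# Naive base rate: 4 events in 10 decades -> 40% per decade.
naive_base_rate = events / decades

# Modeling pandemics as a Poisson process with the same long-run rate,
# the chance of at least one event in a given decade is a bit lower:
# 1 - e^(-0.4) ~ 33%.
poisson_rate = 1 - math.exp(-events / decades)

print(f"naive per-decade base rate: {naive_base_rate:.0%}")  # 40%
print(f"Poisson P(>=1 per decade):  {poisson_rate:.0%}")     # 33%
```

Either way, the Metaculus median (10%-25%) sat below the naive base rate, while the aggregated forecast (~35%) sat close to it.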
More generally, the research on probabilistic forecasting makes me suspect that prediction polls/markets with highly-selected participants (e.g. via GJI or HyperMind), or perhaps even those without highly-selected participants (e.g. via GJO or Metaculus), could achieve pretty good calibration (though not necessarily resolution) on high-stakes questions (e.g. about low-probability global risks) with 2-10 year time horizons, though this has not yet been checked.
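To unpack the calibration/resolution distinction: a forecaster is well calibrated if events they assign 20% to happen about 20% of the time, and has high resolution to the extent their forecasts sort events from non-events better than always predicting the base rate would. One standard way to measure both at once is the Murphy decomposition of the Brier score. Below is a minimal sketch of that decomposition; the function name and binning scheme are my own illustration, not anything from GJO, Metaculus, etc.:

```python
import numpy as np

def brier_decomposition(probs, outcomes, n_bins=10):
    """Murphy decomposition of the (binned) Brier score:
    Brier ~= reliability - resolution + uncertainty.

    probs:    forecast probabilities in [0, 1]
    outcomes: 0/1 question resolutions
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    base_rate = outcomes.mean()

    # Assign each forecast to a probability bin (0-10%, 10-20%, ...).
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)

    reliability = resolution = 0.0
    for k in range(n_bins):
        mask = bins == k
        if not mask.any():
            continue
        f_k = probs[mask].mean()     # mean forecast within the bin
        o_k = outcomes[mask].mean()  # observed frequency within the bin
        w = mask.sum() / len(probs)
        reliability += w * (f_k - o_k) ** 2       # lower = better calibrated
        resolution += w * (o_k - base_rate) ** 2  # higher = more informative
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

# A forecaster who always predicts the base rate can score near-zero
# reliability (well calibrated) while having zero resolution, which is
# exactly the calibration-without-resolution gap described above.
rel, res, unc = brier_decomposition([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```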
Nice post. Were there any sources besides Wikipedia that you found especially helpful when researching this post?
If the U.S. kept racing in its military capacity after WW2, the U.S. may have been able to use its negotiating leverage to stop the Soviet Union from becoming a nuclear power: halting proliferation and preventing the build-up of world-threatening numbers of high-yield weapons.
BTW, the most thorough published examination I’ve seen of whether the U.S. could’ve done this is Quester (2000). I’ve been digging into the question in more detail and I’m still not sure whether it’s true or not (but “may” seems reasonable).
I’m very interested in this question, thanks for looking into it!
My answer from 2017 is here.
Interesting historical footnote from Louis Francini:
This issue of differing “capacities for happiness” was discussed by the classical utilitarian Francis Edgeworth in his 1881 Mathematical Psychics (pp 57-58, and especially 130-131). He doesn’t go into much detail at all, but this is the earliest discussion of which I am aware. Well, there’s also the Bentham-Mill debate about higher and lower pleasures (“It is better to be a human being dissatisfied than a pig satisfied”), but I think that may be a slightly different issue.
Cases where scientific knowledge was in fact lost and then rediscovered provide especially strong evidence about the discovery counterfactuals, e.g. Hero’s aeolipile and al-Kindi’s development of relative frequency analysis for decoding messages. We probably underestimate how common such cases are, because the knowledge of the lost discovery is itself lost — e.g. we might easily have simply not rediscovered the Antikythera mechanism.
I’d like readers to know that fortunately, this hasn’t been true for a while now. But yes, such efforts continue to be undersupplied with talent.