There’s definitely a new trend towards custom-website essays. Forethought is a website for lots of research content, though (like Epoch), not just PrepIE.
And I don’t think it’s because people are getting more productive thanks to reasoning models—AI was helpful for PrepIE, but more like a 10-20% productivity boost than a 100% boost, and I don’t think AI was used much for SA, either.
wdmacaskill
Three Types of Intelligence Explosion
Intelsat as a Model for International AGI Governance
Thanks—appreciate that! It comes up a little differently for me, but still an issue—we’ve asked the devs to fix it.
Forethought: a new AI macrostrategy group
Preparing for the Intelligence Explosion
A Critique of Functional Decision Theory
Argh! Original post didn’t go through (probably my fault), so this will be shorter than it should be:
First point:
I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.
CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff
Reason → donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It’s plausible that $1 to CEA generates significantly more than $1’s worth of x-risk-value [note: I’m a trustee and founder of CEA].
Second point:
Don’t forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I’d defer to Sean_o_h if he disagreed). Reason: marginal donations will be used to fund program management + grantwriting, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background (high profile people on the board; an already written previous grant proposal that very narrowly missed out on being successful). High leverage!
CEA and CFAR don’t do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.
People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA’s most recent hire, Owen Cotton-Barratt, will be helping with this work.
your account of effective altruism seems rather different from Will’s: “Maybe you want to do other things effectively, but then it’s not effective altruism”. This sort of mixed messaging is exactly what I was objecting to.
I think you’ve revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:
“EA is utilitarianism in disguise”, which I think is demonstrably false.
But now the post reads more like the main conclusion is:
“EA is vague on a crucial issue, namely whether the effective pursuit of non-welfarist goods counts as effective altruism”, which is a much more reasonable thing to say.
I think the simple answer is that “effective altruism” is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don’t think that here is the right place for that.
On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn’t make a claim about side-constraints; it doesn’t make a claim about whether doing good is supererogatory or obligatory; it doesn’t make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it’s important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc), and then to encourage people to give to those charities. If so, then we’ve got an important activity that people of very many different ethical backgrounds can get behind—which is great!
Hi,
Thanks for this post. The relationship between EA and well-known moral theories is something I’ve wanted to blog about in the past.
So here are a few points:
1. EA does not equal utilitarianism.
Utilitarianism makes many claims that EA does not make:
EA does not claim whether it’s obligatory or merely supererogatory to spend one’s resources helping others; utilitarianism claims that it is obligatory.
EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it’s always obligatory to act for the greater good.
EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.
EA does not make a precise claim about what promoting welfare consists in (for example, whether it’s more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.
Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven’t asked).
2. Rather, EA is something that almost every plausible moral theory is in favour of.
Almost every plausible moral theory thinks that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it’s not anti other things, and it doesn’t claim that we’re obligated to be altruistic, merely that it’s a good thing to do.
3. Is EA explicitly welfarist?
The term ‘altruism’ suggests that it is. And I think that’s fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it’s not effective altruism—it’s “effective justice”, “effective environmental preservation”, or something. Note, though, that you may well think that there are non-welfarist values—indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone—but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.
So, to answer your dilemma:
EA is not trying to be the whole of morality.
It might be the whole of morality, if being EA is the only thing that is required of one. But it’s not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality—an aspect that is very important for those living in affluent countries, and who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.
I explicitly address this in the second paragraph of the “The history of GiveWell’s estimates for lives saved per dollar” section of my post as well as the “Donating to AMF has benefits beyond saving lives” section of my post.
Not really. You do mention the flow-on benefits. But you don’t analyse whether your estimate of “good done per dollar” has increased or decreased. And that’s the relevant thing to analyse. If you argued “cost per life saved has had greater regression to your prior than you’d expected; and for that reason I expect my estimates of good done per dollar to regress really substantially” (an argument I think you would endorse), I’d accept that argument, though I’d worry about how much it generalises to cause-areas other than global poverty. (e.g. I expect there to be much less of an ‘efficient market’ for activities where there are fewer agents with the same goals/values, like benefiting non-human animals, or making sure the far future turns out well). Optimism bias still holds, of course.
You say that “cost-effectiveness estimates skew so negatively.” I was just pointing out that for me that hasn’t been the case (for good done per $), because long-run benefits strike me as swamping short-term benefits, a factor that I didn’t initially incorporate into my model of doing good. And, though I agree with the conclusion that you want as many different angles as possible (etc), focusing on cost per life saved rather than good done per dollar might lead you to miss important lessons (e.g. “make sure that you’ve identified all crucial normative and empirical considerations”). I doubt that you personally have missed those lessons. But they aren’t in your post. And that’s fine, of course, you can’t cover everything in one blog post. But it’s important for the reader not to overgeneralise.
I agree with this. I don’t think that my post suggests otherwise.
I wasn’t suggesting it does.
Good post, Jonah. You say that: “effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact”. What do you mean by “qualitative analysis”? As I understand it, your points are: i) The amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you had previously. But that doesn’t favour qualitative vs non-qualitative evidence. It favours more robust evidence of lower but good cost-effectiveness over less robust evidence of higher cost-effectiveness. The nature of the evidence could be either qualitative or quantitative, and the things you mention in “implications” are generally quantitative.
In terms of “good done per dollar”—for me that figure is still far greater than I began with (and I take it that that’s the question that EAs are concerned with, rather than “lives saved per dollar”). This is because, in my initial analysis—and in what I’d presume are most people’s initial analyses—benefits to the long-term future weren’t taken into account, or weren’t thought to be morally relevant. But those (expected) benefits strike me, and strike most people I’ve spoken with who agree with the moral relevance of them, to be far greater than the short-term benefits to the person whose life is saved. So, in terms of my expectations about how much good I can do in the world, I’m able to exceed those by a far greater amount than I’d previously thought likely. And that holds true whether it costs $2000 or $20000 to save a life. I’m not mentioning that either to criticise or support your post, but just to highlight that the lesson to take from past updates on evidence can look quite different depending on whether you’re talking about “good done per dollar” or “lives saved per dollar”, and the former is what we ultimately care about.
Final point: Something you don’t mention is that, when you find out that your evidence is crappier than you’d thought, two general lessons are to pursue things with high option value and to pay to gain new evidence (though I acknowledge that this depends crucially on how much new evidence you think you’ll be able to get). Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things.
Thanks for mentioning this—I discuss Nozick’s view in my paper, so I’m going to edit my comment to mention this. A few differences:
As crazy88 says, Nozick doesn’t think that the issue is a normative uncertainty issue—his proposal is another first-order decision theory, like CDT and EDT. I argue against that account in my paper. Second, and more importantly, Nozick just says “hey, our intuitions in Newcomb-cases are stakes-sensitive” and moves on. He doesn’t argue, as I do, that we can explain the problematic cases in the literature by appeal to decision-theoretic uncertainty. Nor does he use decision-theoretic uncertainty to respond to arguments in favour of EDT. Nor does he respond to regress worries, and so on.
Don’t worry, that’s not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories—so that you take into account uncertainty about which decision theory to use. (So, one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven’t worked through it) meta updateless decision theory.)
UDT, as I understand it (and note I’m not at all fluent in UDT or TDT) always one-boxes; whereas if you take decision-theoretic uncertainty into account you should sometimes one-box and sometimes two-box, depending on the relative value of the contents of the two boxes. Also, UDT gets what most decision-theorists consider the wrong answer in the smoking lesion case, whereas the account I defend, meta causal decision theory, doesn’t (or, at least, doesn’t, depending on one’s credences in first-order decision theories).
To illustrate, consider the case:
High-Stakes Predictor II (HSP-II) Box C is opaque; Box D, transparent. If the Predictor predicts that you choose Box C only, then he puts one wish into Box C, and also a stick of gum. With that wish, you save the lives of 1 million terminally ill children. If he predicts that you choose both Box C and Box D, then he puts nothing into Box C. Box D — transparent to you — contains an identical wish, also with the power to save the lives of 1 million children, so if one had both wishes one would save 2 million children in total. However, Box D contains no gum. One has two options only: choose Box C only, or both Box C and Box D.
In this case, intuitively, should you one-box, or two-box? My view is clear: if someone one-boxes in the above case, they have made the wrong decision. And it seems to me that this is best explained with appeal to decision-theoretic uncertainty.
Other questions: Bostrom’s parliamentary model is different. Between EDT and CDT, the intertheoretic comparisons of value are easy, so there’s no need to use the parliamentary analogy—one can just straightforwardly take an expectation over decision theories.
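To make "taking an expectation over decision theories" concrete, here is a minimal sketch, not from the paper: the predictor accuracy `p`, the CDT prior `q` that the opaque box is full, and the 50/50 credences are all assumed for illustration.

```python
def edt_values(p, opaque, transparent):
    # EDT conditions on the act: with probability p the predictor
    # matched whatever you actually choose
    return {"one": p * opaque,
            "two": (1 - p) * opaque + transparent}

def cdt_values(q, opaque, transparent):
    # CDT holds the prediction fixed: q is your prior that the
    # opaque box was filled, whatever you do
    return {"one": q * opaque,
            "two": q * opaque + transparent}

def meta_recommendation(c_edt, p, q, opaque, transparent):
    # Expected choiceworthiness of each act, weighting each theory's
    # valuation by your credence in that theory
    edt = edt_values(p, opaque, transparent)
    cdt = cdt_values(q, opaque, transparent)
    meta = {act: c_edt * edt[act] + (1 - c_edt) * cdt[act]
            for act in ("one", "two")}
    return max(meta, key=meta.get)
```

With classic Newcomb stakes (`opaque=1_000_000`, `transparent=1_000`) this recommends one-boxing at 50/50 credence, but with HSP-II-style stakes (a transparent box nearly as valuable as the opaque one) it recommends two-boxing: the meta-level recommendation is stakes-sensitive in exactly the way described.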
Pascal’s Mugging (aka the “Fanaticism” worry). This is a general issue for attempts to take normative uncertainty into account in one’s decision-making, and not something I discuss in my paper. But if you’re concerned about Pascal’s mugging and, say, think that a bounded decision theory is the best way to respond to the problem—then at the meta level you should also have a bounded decision theory (and at the meta-meta level, and so on).
Meta Decision Theory and Newcomb’s Problem
(part 3; final part)
Second: The GWWC Pledge. You say:
“The GWWC site, for example, claims that from 291 members there will be £72.68M pledged. This equates to £250K / person over the course of their life. Claiming that this level of pledging will occur requires either unreasonable rates of donation or multi-decade payment schedules. If, in line with GWWC’s projections, around 50% of people will maintain their donations, then assuming a linear drop off the expected pledge from a full time member is around £375K. Over a lifetime, this is essentially £10K / year. It seems implausible that expected mean annual earnings for GWWC members is of order £100K.”
Again, there are quite a few mistakes:
First, in comments you twice say that “£112.8M” has been pledged rather than “$112.8M”. I know that’s just a typo but it’s an important one.
Second, you say that the GWWC site claims that, “there will be £72.68M pledged” (future tense). It doesn’t, it says, “$112.8mn pledged” (past tense). It’s a pretty important difference – the pledging is something that has happened, not something that will happen. This might partly explain the confusion discussed in point 4, below.

Third, and more substantively, you don’t consider the idea, raised in other comments, that some donors might be donating considerably more than 10%, or that some donors might be donating considerably more than the mean. Both are true of GWWC pledgers.
Fourth, you seem to wilfully misunderstand the verb ‘to pledge’. I regularly make the following statement: “I have pledged to give everything I earn above £20 000 p.a. [PPP and inflation-adjusted to Oxford 2009]”. Am I lying when I say that? Using synonyms, I could have said “I promise to give…”, “I commit to give…” or “I sincerely intend to give…”. None of these entail “I am certain that I will donate everything above £20 000 p.a.”. Using my belief that I will earn on average over £42 000 p.a. [PPP and inflation-adjusted to Oxford 2009] over the course of my life, and that I will work until I’m 68, I can infer that I’ve pledged to give over £1 000 000 over the course of my life, which is also something I say. Am I lying when I say that? (Also note that if only 73 people made the same pledge as me, then we would have jointly pledged the current GWWC amount).
Fifth, I don’t know why you took us to use the $100mn pledged figure as an estimate of our impact. In fact you had evidence to the contrary. In a blog post that you cite I said: “As of last March, we’d invested $170 000’s worth of volunteer time into Giving What We Can, and had moved $1.7 million to GiveWell or GWWC top-recommended development charities, and raised a further $68 million in pledged donations. Taking into account the facts that some proportion of this would have been given anyway, there will be some member attrition, and not all donations will go to the very best charities (and using data for all these factors when possible), we estimate that we had raised $8 in realised donations and $130 in future donations for every $1’s worth of volunteer time invested in Giving What We Can.” (emphasis added).
Finally, I think that the GWWC pledge is misleading only if it’s taken to be a measure of our impact. But we don’t advertise it as that. We could try to make it some other number. We could adjust the number downwards, in order to take into account: how much would have been given anyway; member attrition; a discount rate. Or we could adjust the number upwards, in order to take into account: overgiving; real growth of salaries, and inflation. It could also be adjusted downward to take into account that not all donations are to GW or GWWC recommended charities, or (perhaps) upwards to take into account the idea that we will have better evidence about the best giving opportunities in a few years’ time, and thereby be able to donate to charities better than AMF, SCI or DtW. But any number we gave based on these adjustments would be more misleading and arbitrary than the literal amount pledged. It would also be more confusing for the large majority of our website viewers who haven’t thought about things like counterfactual giving or whether the discount rate should be positive or negative over the next few years; they’re used to the social norm which is to advertise pledges as stated. Before you, no one who understands issues such as counterfactual giving and discount rates had taken the amount pledged figure to be an impact assessment.
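Incidentally, the pledge arithmetic quoted in the fourth point above checks out under simple assumptions; the 46-year working life is my assumption, not stated there.

```python
avg_earnings = 42_000   # stated average expected earnings p.a. (PPP-adjusted)
baseline = 20_000       # give everything above this (the Further Pledge)
working_years = 46      # assumption: roughly age 22 until retirement at 68

lifetime_pledge = (avg_earnings - baseline) * working_years
# (42,000 - 20,000) * 46 = 1,012,000 — "over £1,000,000", as stated

# And ~73 such pledges do come to roughly the £72.68M total:
joint = 73 * lifetime_pledge  # ≈ £73.9M
```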
In comments there was some uncertainty about how we come up with the total pledged figure. What we do is as follows. Each member, when they return their pledge form, states a) what percentage they commit to (or, if taking the Further Pledge, the baseline income above which they give everything); b) their birthdate; c) their expected average earnings per annum. Assuming a (conservative) standard retirement age, that allows us to calculate their expected donations. In some cases, members understandably don’t want to reveal their expected earnings. What we used to do, in such cases, is to use the mean earnings of all the other members who have given their incomes. However, when, recently, one member joined with very large expected earnings (pursuing earning to give), we raised the question whether this method suffers from sample bias, because people who expect to earn a lot will be more likely to report. I’m not sure that’s true: I could imagine that people who earn more often don’t want to flaunt that fact. However, wanting to be conservative, we decided instead to use the mean earnings of the country in which the member works.
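As a sketch of the method just described — the field names, the exact retirement age, and the fallback behaviour are my guesses at an implementation, not GWWC’s actual code:

```python
RETIREMENT_AGE = 68  # "a (conservative) standard retirement age" — exact value assumed

def expected_pledge(age, expected_income=None, percentage=None,
                    baseline=None, country_mean_income=0):
    """Expected lifetime donations for one member.

    Pass `percentage` for the standard pledge, or `baseline` for the
    Further Pledge (give everything above that income). Members who
    don't disclose expected income get their country's mean income.
    """
    income = expected_income if expected_income is not None else country_mean_income
    years = max(RETIREMENT_AGE - age, 0)
    if baseline is not None:               # Further Pledge
        per_year = max(income - baseline, 0)
    else:                                  # percentage pledge
        per_year = income * percentage / 100.0
    return per_year * years

# The total pledged figure is then just the sum over members:
members = [
    dict(age=25, expected_income=40_000, percentage=10),
    dict(age=30, baseline=20_000, expected_income=42_000),    # Further Pledge
    dict(age=28, percentage=10, country_mean_income=35_000),  # income undisclosed
]
total_pledged = sum(expected_pledge(**m) for m in members)
```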
Bottom Line for Readers

If you’re interested in the question of whether 80,000 Hours and Giving What We Can have acted optimally or will act optimally in the future, the answer is simple: certainly not. We inevitably do some things worse than we could have done, and we value your input on concrete suggestions about how our organisations can improve.
If you’re interested in the question of whether $1 invested in 80,000 Hours or Giving What We Can produces more than $1’s worth of value for the best causes, read here, here, here and here and, most of all, contact me for the calculations and, if you’d like, our latest business plan, at will dot crouch at 80000hours.org. So far, I haven’t seen any convincing arguments to the conclusion that we fail to have a ROI greater than 1; however, it’s something I’d love additional input on, as the outside view makes me wary about believing that I work for the best charity I know of.
(part 2) The most important mistakes in the post
Bizarre Failures to Acquire Relevant Evidence

As lukeprog noted, you did not run this post by anyone within CEA who had sufficient knowledge to correct you on some of the matters given above. Lukeprog describes this as ‘common courtesy’. But, more than that, it’s a violation of a good epistemic principle that one should gain easily accessible relevant information before making a point publicly.
The most egregious violation of this principle is that, though you say you focus on the idea that donating to CEA has a ROI greater than 1, and though you repeatedly ask for a ‘calculation’ of impact and claim that CEA is not credible for not being able to provide such a calculation, you haven’t contacted me for the calculation of GWWC’s impact per dollar invested. This isn’t something I’ve been shy about — in a blog post that you link to (as well as elsewhere) I prominently describe this calculated impact-assessment, and invite people to contact me if they want the spreadsheet with the calculation. Insofar as this was the cornerstone of your concern, it’s odd that you didn’t contact me for the spreadsheet. Comments on that impact-assessment would have been helpful, but as far as I’m aware you haven’t read it.
Another example is where you suggest that little thought went into the change of the 80,000 Hours’ declaration of intent. Again, this is information that would have been easily accessible via a quick email to me or Ben Todd. As it happens, the declaration has gone through several iterations; there has been discussion on the core 80,000 Hours’ lists; Ben, myself and others have independently written proposals; and we commissioned one of our best interns to research the topic as part of our general marketing strategy. We concluded that having a lower initial barrier to entry was wise, because it would increase the total number of members, allow us to be more mainstream, and increase the total (though not the proportion) of members who make significant changes to their careers and thereby make the world a significantly better place. (We are also currently discussing whether to introduce a further pledge along the lines of “I intend to dedicate my life to whatever does the most good.”) It wouldn’t be an underestimate to say that several person-weeks of thought and research have gone into the pledges.
A further example is where you guess the number of researchers we have. Again, you could have e-mailed for this information, rather than trying to guess on the basis of the names listed on the website. For this reason, you substantially overestimated how many person-hours we command. Across CEA, over the last six months we have had the equivalent of 3.7 full-time staff. The first 2.6 of these started in July last year, another joined in late September and another in January. GWWC currently has the equivalent of two full-time staff; 80,000 Hours has the equivalent of two and a half full-time staff. For this reason (and perhaps also the planning fallacy), I think you severely overestimate the amount of research we could reasonably expect to deliver in that time.
Another example is where you quote the number of people we have on our mailing lists. This is a good example, because it’s one where I spoke incorrectly in Cambridge. I said that one third of Oxford students were on our mailing list; what I should have said was that about 20% of students coming through freshers’ fair were on our mailing list. It’s precisely errors like these — easy to make in the context of an impromptu group discussion — that show the value of making sure that one’s evidence is reliable.
A further example is where you say “it has been stated that GWWC has an internal price of around £1700 for new pledges” and then, in your response to my query about where this number came from, said that it came from Jacob Trefethen — a volunteer at a chapter, and not currently involved with core GWWC and 80k activities. Again, this is not the sort of evidence on which it’s rational to base a critique — when the option of simply asking me or someone else who works on strategy within CEA was merely an email away.
Another example was: “a large fraction of the people involved with 80,000 hours or GWWC behave like dilettantes”… “Nor do they seem to act as if they wish to seriously optimise the world.” But, as far as I know, you know only one person who works at CEA, Adam Casey, who is an unpaid intern, and you have about one hour’s worth of contact with me. I doubt that, if you knew us personally, and not through material written for an audience encountering the ideas of effective altruism for the first time, you would doubt our intention and commitment to “seriously optimise the world” as you put it. Seeing as this is LessWrong, I’ll quote Eliezer Yudkowsky (stated in an independent internet conversation on Ycombinator). In response to the question, “What application of $4B would, right now, generate the most utility for humanity?” he replied: “If you know the word “utility”, the people who actually seriously try to figure out the answer to that question live at:
Embarrassingly Poor Arguments

First: You ask: “For example, the world bank throws ~$43B/year around. Which is easier: To upscale GWWC by a factor of ~17000, or double the mean effectiveness of the World Bank? This should not be a hypothetical question; it should be answered.”
There are a few mistakes here:
First, your comment suggests that you know that we haven’t thought about this. But that’s misleading, because you haven’t ever asked us if we’ve thought about it.
Second, I have no idea where your numbers come from. After searching (inc. here) I still don’t know where the $43bn number comes from. And, after trying to figure it out, I also don’t know where your “17 000” figure comes from. GWWC has so far moved $2.5 million and raised $100mn in pledges. Even discounting the literal pledges by 99% and valuing them at $1mn (which would be far too steep in my view), the appropriate figure would be 12 300. So, whatever the basis, 17 000 seems too high.
Third, even neglecting the above points, your figure would only be correct if the cost-effectiveness of the World Bank’s spending were the same as the cost-effectiveness of GWWC top-recommended charities. But we think, and presumably you agree, that the cost-effectiveness of GWWC’s top-recommended charities is significantly better than the World Bank’s mean cost-effectiveness. Aside from anything else, there’s a major difference between donations and loans.

Fourth, if you want to maximize impact yours is not the correct question to ask. If it will get progressively harder to grow GWWC, and if one thinks that the likelihood of achieving either outcome is very low (both reasonable assumptions), then it could be true that (i) it is easier to double the mean effectiveness of the World Bank than to increase GWWC’s size by a factor of 17000 and (ii) that one ought to use one’s marginal time and resources to grow GWWC. The reason these could both be true is that the marginal benefits from growing GWWC are greater than the marginal benefits of trying to double the effectiveness of the World Bank. Given this, it’s unclear why this question “should be answered”.

Fifth, the question implicitly neglects the fact that growing GWWC has substantial knock-on benefits, including increasing the ability of some GWWC members to influence major international organisations like the World Bank (see the background on Toby’s activities, above).
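For what it’s worth, the “12 300” figure in the second point above can be checked directly; all numbers are the ones quoted there.

```python
world_bank = 43e9          # the quoted ~$43bn/year World Bank figure (source unclear)
money_moved = 2.5e6        # donations GWWC has already moved
pledges_discounted = 1e6   # the $100mn of literal pledges, discounted by 99%

factor = world_bank / (money_moved + pledges_discounted)
# ≈ 12,286 — i.e. roughly 12,300, not 17,000
```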
In general: i) Starting with something smaller and easier to achieve has instrumental cumulative benefits and option value in a way that staking everything on one big goal does not. ii) Directly doubling the effectiveness of the World Bank – and other similar projects – is not the comparative advantage of existing EAs in Oxford. Given our success generating and mobilising talented altruists, I think the team here will have greater success taking an indirect route than by attempting to do it directly ourselves. We can use e.g. 80,000 Hours to identify precisely those who have or could develop the requisite skills, credentials and values, and provide them the encouragement, information and practical assistance required to get into positions of major influence over aid effectiveness. Finding and convincing someone to pursue this career is much easier than dedicating your entire life to it yourself, which is what led us to set up 80,000 Hours in the first place.
That’s not to say we aren’t open to the idea. It’s one of my main concerns about my current activities. But it’s misleading to suggest that you have good evidence to believe that we haven’t considered it.
Ah, by the “software feedback loop” I mean: “At the point of time at which AI has automated AI R&D, does a doubling of cognitive effort result in more than a doubling of output? If yes, there’s a software feedback loop—you get (for a time, at least) accelerating rates of algorithmic efficiency progress, rather than just a one-off gain from automation.”
I see now why you could understand “RSI” to mean “AI improves itself at all over time”. But even so, the claim would still hold—even if (implausibly) AI gets no smarter than human-level, you’d still get accelerated tech development, because the quantity of AI research effort would increase at a growth rate much faster than the quantity of human research effort.
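A toy model of the condition described above; the exponent `r` (the returns to a doubling of cognitive effort) and all parameters are illustrative assumptions, not an attempt to model real AI R&D.

```python
def progress_path(r, steps=100, dt=0.01):
    """Software level over time, when research effort scales with the
    current software level and progress goes as effort**r.

    r = 1 gives plain exponential growth (a one-off gain from automation,
    then steady progress); r > 1 gives accelerating, super-exponential
    growth — the "software feedback loop"."""
    s = 1.0
    path = [s]
    for _ in range(steps):
        s += dt * s ** r  # forward-Euler step of ds/dt = s**r
        path.append(s)
    return path

def growth_rates(path):
    # step-to-step growth factor; constant means exponential growth,
    # increasing means accelerating progress
    return [b / a for a, b in zip(path, path[1:])]
```

With `r = 1.0` the growth factor stays constant across steps, while with `r = 1.5` it keeps rising, which is the distinction between a one-off automation gain and a genuine feedback loop.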