As someone said in another comment, there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel that accepting or rejecting them is particularly important for being an EA in the current form of the movement. We love discussing and challenging our views. Then again, I think I happen to agree with many median EA views.
which values people based on their contributions, not just their needs
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments
I think this has been mentioned in the comments, but not very directly. The median EA view may be not to bother with philosophy at all, because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over centuries, so there is little hope that an individual EA will.
However, when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I now operate. Under antirealism, moral intuitions, or some core ones anyway, are all we have, so there can be no philosophical arguments (and thus no good or bad ones) for them.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) would be the much more extraordinary claim: that I should keep the money and spend it on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower than with some top charities, and my uncertainty over what will make me happy only slightly lower.
You could break that up and note that in the end I’m not deciding just to “donate effectively” but to donate to a very specific intervention and charity, for example Animal Equality, which makes my decision much shakier again. But I’d also have to make comparably specific decisions, probably only slightly less shaky, when trying to spend the money on my own happiness.
However, the alternative might also be:
keeping your money in your piggy bank until more obvious opportunities emerge
That’s something the median EA has probably considered a good deal. Even at GiveWell there was a time, in 2013, when some of the staff pondered whether it would be better to hold off on their personal donations and donate a year later, once they had discovered better giving opportunities.
However, several of your arguments seem to stem from uncertainty in the sense of “There is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus, I don’t expect any significant decreases in uncertainty with respect to the best giving opportunities that I could wait for. There will hopefully be more opportunities with similar or only slightly greater levels of uncertainty, though.
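As a toy illustration of that shift in risk attitude, here is a minimal sketch with entirely invented numbers (not anyone’s actual cost-effectiveness estimates):

```python
# A donor choosing between a "safe" and a "speculative" giving opportunity.
# Payoff figures are invented for illustration only.
ev_safe = 1.0 * 1.0           # 1 unit of good per dollar, near-certain
ev_speculative = 0.1 * 15.0   # 10% chance of 15 units per dollar, else nothing

print(ev_safe, ev_speculative)  # 1.0 vs. 1.5
# A risk-neutral expected-utility maximizer funds the speculative opportunity
# even though it will probably achieve nothing; becoming less risk-averse with
# one's donation budget means moving toward that choice.
```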
Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”
I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
Regardless of whether you are an antirealist, not all value systems are created equal. Many people’s value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That’s a contradiction.
I just don’t think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now, that EAs don’t know about and that would cause them to update their approach if they knew about them and thought seriously about them.
Look at the argument that EAs make about ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results with their charity. When EAs talk to non-EAs, they advocate that people (a) reflect on their value systems and priorities, and (b) learn about the likely consequences of charities at an object level. I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
What is or isn’t controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. What about Soviet effective altruists: would they actually do good, or would they say “collectivize faster, comrade”? How do we know we aren’t also deluded by present-day politics?
It seems like there should be some basic moral requirement that EAs give their value system a sanity check instead of just accepting whatever the respectable politics of the time tells them. If politics indeed has a very pervasive influence on people’s knowledge and ethics, then giving your value system a sanity check would require separating out the political component of your worldview. That would require deep knowledge of politics, history, and social science, and I just don’t see most EAs or rationalists operating at this level (I’m certainly not: the more I learn, the more I realize I don’t know).
The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
I didn’t respond to your critiques that went in a more political direction because there was already discussion of those aspects that I wouldn’t have been able to add anything to. There is concern in the movement in general, and in individual EA organizations, that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists, for example, would be helpful; in the second, it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist, I’m pretty median again.
then there are lots of EA missed opportunities lying around waiting for someone to pick them up
Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem.
Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
This concern manifests in movement-building (GWWC et al.) and capacity-building (80,000 Hours, CEA, et al.). There is also a concern, which I share but which may not yet be a median EA concern, that we should focus more on movement-wide capacity-building, networking, and some sort of quality-over-quantity approach to allow the movement to become better and more widely informed. (And by “quantity” I don’t mean to denigrate anyone; I just mean more people like myself, who already feel welcome in the movement because everyone speaks their dialect, and whose peers are easily convinced too.)
Throughout the time that I’ve been part of the movement, the general sentiment, either in the movement as a whole or within my bubble of it, has shifted in some ways. One trend I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable footing has become more important. Again, this may be just my filter bubble. It is encouraging in that it shows that everyone is very much capable of updating, but it also indicates that as of one or two years ago we still had a lot to learn, even concerning rather core issues. In a few more years, I’ll probably be more confident that some core questions are no longer so much in flux that new EAs can overlook or disregard them and thereby dilute what EA currently stands for or shift it in a direction I couldn’t identify with anymore.
Again, I’m not ignoring your points on political topics; I just don’t feel sufficiently well-informed to comment. I’ve been meaning to read David Roodman’s literature review on open borders–related concerns, since I greatly enjoyed some of his other work, but I haven’t yet. (David Roodman now works for the Open Philanthropy Project.)
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
I’ve always perceived EA as whatever stands at the end of any such process, or maybe not the end but some critical threshold: the point at which a person realizes that they agree with the core tenets, namely that they value others’ well-being, and that greater well-being, or the well-being of more beings, weighs heavier than lesser well-being or the well-being of fewer. If a person reaches such a threshold, I see all three processes as relevant.
Regardless of whether you are an antirealist, not all value systems are created equal.
Of course.
Their knowledge of history, politics, and object-level social science is low. … I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
Yes, thanks! That’s why your comment interested me most in this thread; all the other comments that piqued my interest in similar ways already had comprehensive replies below them by the time I found the thread.
This needs to be turned into a concrete strategy, and I’m sure CEA is already on that: identifying exactly what sorts of expertise are in short supply in the movement and networking among the people who possess that expertise. I’ve made some minimal-effort attempts to pitch EA to economists, but inviting such people to speak at events like EA Global is surely a much more effective way of drawing them and their insights into the movement. That’s not limited to economists, of course.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
I just don’t think a lot of EAs have thought their value systems through very thoroughly
Given how many philosophers there are in the movement, this would surprise me. Is it possible that it’s more the result of the ubiquitous disagreement between philosophers?
How do we know we aren’t also deluded by present-day politics?
I’ve wondered about that in the context of moral progress. Sometimes the idea of moral progress is attacked on the grounds that proponents base their claims on how history has developed toward our current status quo, which proves nothing, since by that logic any historical trend toward the status quo would count as “moral progress.” By my moral standards, however, the status quo is far from perfect.
Analogously, the political views EAs are led to hold are so heterogeneous that some have even thought about coining new terms for this political stance (such as “newtilitarianism”), luckily only in jest. (I’m not objecting to the pun, but I’m wary of labels like that.) That these political views are at least somewhat uncommon in their combination suggests to me that we’re not falling into that trap, or are at least making an uncommonly good effort to avoid it. Since the trap is pretty much the default starting point for many of us, it’s likely we still have many legs caught in it despite this “uncommonly good effort.” The metaphor is already getting awkward, so I’ll just add that some sort of contrarian hypercorrection would of course constitute just another trap. (As it happens, there’s another discussion of the importance of diversity in the context of Open Phil in that Vox article.)
No need for you to address any particular political point I’m making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.
I’m glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only toward other eigenaltruistic people who “pay it forward” (see Scott Aaronson’s eigenmorality). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality evolved as a system of mutual obligation, and that EAs misunderstand this.
Although there is some political heterogeneity in EA, it is overwhelmed by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures paint another story.
On open borders, economic analyses like Roodman’s are just too narrow. They do not take into account all of the externalities, such as crime and changes to cultural institutions. OpenBorders.info addresses many of the objections; it sometimes does a good job of summarizing the anti-open-borders arguments but often fails to refute them, yet this lack of refutation doesn’t translate into the site updating its general stance on immigration.
If humans were interchangeable homo economicus, then open borders would be an economic and perhaps moral imperative. If human groups are in fact significantly different, such as in crime rates, then that throws a substantial wrench into open borders. If the safety of open borders is in question, then it is a risky experiment.
Some of the early indicators are scary, like the Rotherham scandal. There are reports of similar cover-ups in other areas, and economic analyses do not capture the harms to these thousands of children. High-crime areas where the police have trouble enforcing the rule of law are well documented in Europe: they are called “no-go zones” or “sensitive urban zones” (“no-go zone” is controversial because technically you can go there, but would you want to, especially if you were Jewish?). Britain literally has Sharia patrols harassing gay people and women.
These are just the tip of the iceberg of what is happening at current levels of immigration. Just imagine what happens with fully open borders. I really don’t think its advocates have grappled with this graph and what it means for Europe under open borders. No matter how generous Europe was, its institutions would never be able to handle the wave of immigrants, and open borders advocates are seriously kidding themselves if they don’t see that Europe would turn into South Africa mixed with Syria and the US would turn into Brazil. And then who would send aid to Africa?
Rule of law is slowly breaking down in the West, and elite Westerners are sitting in their filter bubbles fiddling while Rome burns. I’m not telling you to accept this scenario as likely; you would need to go do your own research at the object-level. But with even a small risk that this scenario is possible, it’s very significant for future human welfare.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
I’ll think about it. I think some of the sources I’ve cited start answering that question: finding people who are knowledgeable about the giant space of stuff that the media and academia are sweeping under the carpet for political reasons.
Rather than delay my reply until I’ve read everything you’ve linked, I’ll post a work-in-progress reply now.
Thanks for all the data! I hope I’ll have time to look into Open Borders some more in August.
Error theorists would say that the blog post “Effective Altruists are Cute but Wrong” is itself cute but wrong, but more generally the idea of using PageRank for morality is beautifully elegant (though beautifully elegant things have often turned out imperfect in practice, in my experience). I still have to read the rest of the blog post, though.
Eigendemocracy reminds me of Cory Doctorow’s whuffie idea.
An interesting case for eigenmorality is when you have distinct groups that cooperate amongst themselves and defect against others. Especially interesting is the case where there are two large, competing groups that are about the same size.
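Since the competing-groups case is easier to see concretely than in prose, here is a minimal sketch of it, with made-up group sizes and a plain power iteration standing in for PageRank; the setup is my own illustration of the idea, not code from Aaronson’s post.

```python
# Toy "eigenmorality": a person is moral to the extent that they cooperate
# with moral people, so morality scores form the principal eigenvector of a
# cooperation matrix (the PageRank-style circular definition).
import numpy as np

n_a, n_b = 12, 10                 # two groups; try n_a == n_b as well
n = n_a + n_b
C = np.zeros((n, n))
C[:n_a, :n_a] = 1.0               # group A cooperates within A,
C[n_a:, n_a:] = 1.0               # group B cooperates within B,
np.fill_diagonal(C, 0.0)          # and all cross-group entries stay 0:
                                  # the two groups defect against each other.

# Power iteration converges to the dominant eigenvector of C.
score = np.full(n, 1.0 / n)
for _ in range(200):
    score = C @ score
    score /= np.linalg.norm(score)

print("group A scores:", score[:n_a].round(3))
print("group B scores:", score[n_a:].round(3))
# With n_a > n_b, essentially all the "morality" lands on the larger bloc and
# the smaller one scores ~0. With n_a == n_b the top eigenvalue is degenerate,
# and the result depends on the starting vector: the measure cannot say which
# of two equally cohesive, mutually defecting groups is the moral one.
```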
“I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.”
Can you elaborate, please? From my perspective, just because a Western citizen is richer or more powerful doesn’t mean that helping to satisfy their preferences is more valuable in terms of indirect effects. Or are you talking about whom to persuade? Because I don’t see many EA orgs asking Dalit groups for their cash or time yet.
It’s not the preferences of the West that are inherently more valuable, it’s the integrity of its institutions, such as the rule of law, freedom of speech, etc. If the West declines, that will have negative flow-through effects for the rest of the world.
I think it’s clearer then if you say “sound institutions” rather than “the West”?
There are other countries with sound institutions, like Singapore and Japan, but I’m not so worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both those countries have declining birth rates, but they protect their own rule of law (unlike the West) and have more cautious immigration policies that keep their populations from being replaced by foreign ones (unlike the West). The West, unlike sensible Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs.
EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post documents this. A Chinese- or Russian-dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.
There are other countries with sound institutions, like Singapore and Japan, but I’m not so worried about them as I am about the West, because they have an eye towards self-preservation.
I wouldn’t be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people’s homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park.
Open borders and no immigration are like Scylla and Charybdis—neither is a particularly appealing option for a rich and aging country.
I also feel that the question “how much immigration to allow” is overrated. I consider it much less important than the question of “precisely what kind of people should we allow in”. A desirable country has an excellent opportunity to filter a part of its future population and should use it.
I agree that Japan has its own problems. No solution is particularly good if these countries can’t get their birth rates up, and Singapore also has low birth rates. What is preventing high-IQ people from reproducing might be something EAs should look into.
“How much immigration to allow” and “precisely what kind of people should we allow in” can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn’t require being against immigration in general.
As you say, a filtered immigrant population could be very valuable. For example, you could have “open borders” for educated professionals from low-crime, low-corruption countries with compatible value systems, who are encouraged to assimilate. I’m pretty sure this isn’t what most open borders advocates mean by “open borders,” though.
The left doesn’t “want” a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting bloc. And for their signaling goals, it’s much holier to invite poor, unskilled people than skilled professionals who want to assimilate.
Trading off these elements in an expected value framework … is probably again a rather personal decision
If you aren’t aware of the relevant decision theory, then I have good news for you!
I’m not sure this is true, at least in the narrow case of rationalists trying to make maximally effective decisions based on well-defined uncertainties. In principle, at least, it should be possible to calculate the value of information: decision theory has a concept called the expected value of perfect information (EVPI). If you’re not 100% sure of something, but the cost of obtaining information is high (which it generally is in philosophy, as evidenced by the somewhat slow progress over the centuries) and giving opportunities are shrinking (which they are in many areas, as conditions improve), then you probably want to risk giving sub-optimally by giving now rather than later. The price of the information is simply higher than its expected value.
Unfortunately, you might still need to make a judgement call to guesstimate the values to plug in.
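To make that concrete, here is a minimal sketch of an EVPI calculation for the give-now-vs.-wait decision; the credences, intervention labels, and payoff numbers are all guesstimates I made up for illustration.

```python
# Expected value of perfect information (EVPI) for "give now vs. research
# first." All credences and payoffs below are invented for illustration.
worlds = {"deworming is better": 0.6, "cash transfers are better": 0.4}
payoffs = {  # utility per dollar of each action in each world
    "deworming is better":       {"deworming": 3.0, "cash": 1.0},
    "cash transfers are better": {"deworming": 1.0, "cash": 3.0},
}
actions = ["deworming", "cash"]

# Acting now: commit to the single action with the highest expected utility.
ev_now = max(sum(p * payoffs[w][a] for w, p in worlds.items()) for a in actions)

# Perfect information: learn the true world first, then act optimally in it.
ev_perfect = sum(p * max(payoffs[w].values()) for w, p in worlds.items())

evpi = ev_perfect - ev_now
print(f"EV(now) = {ev_now:.2f}, EV(perfect info) = {ev_perfect:.2f}, EVPI = {evpi:.2f}")
# Here EVPI = 0.80 utility per dollar: if a year of waiting and research costs
# more than that (shrinking opportunities, forgone impact), give now.
```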
Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.