It’s easy to see why rationalists shouldn’t help develop technologies that speed AI. (On paper, even an innovation that speeds FAI twice as much as it speeds AI itself would probably be a bad idea if it weren’t completely indispensable to FAI. On the other hand, the FAI field is so small right now that even a small absolute increase in money, influence, or intellectual power for FAI should have a much larger impact on our future than a relatively large absolute increase or decrease in the rate of progress of the rest of AI research. So we should be more interested in how effective altruism impacts the good guys than how it impacts the bad guys.)
I’m having a harder time accepting that discouraging rationalists from pulling people out of poverty is a good thing. We surely don’t expect EA as currently practiced by most to have a significant enough impact on global GDP to reliably and dramatically affect AI research in the near term. If EA continues to flourish in our social circles (and neighboring ones), I would expect the good reputation EA helps build for rationality activists and FAI researchers (in the eyes of the public, politicians, charitable donors, and soft-hearted potential researchers) to have a much bigger impact on FAI prospects than how many metal roofs Kenyans manage to acquire.
(Possibly I just haven’t thought hard enough about Eliezer’s scenarios; that would explain why I find it easier to see reasoning along his lines discouraging prosaic effective altruism in the short term than having a major effect on the mid-term probability of, e.g., completely overhauling a country’s infrastructure to install self-driving electric cars. Perhaps my imagination is compromised by absurdity bias, or more generally by motivated reasoning; I don’t want our goals to diverge either.)
As it stands, I wouldn’t even be surprised if the moral/psychological benefits to rationalists of making small measurable progress in humanitarian endeavors outweighed the costs if those endeavors turned out to be slightly counterproductive in the context of FAI. Bracketing the open problems within FAI itself, the largest obstacles we’re seeing are failures of motivation (inciting heroism rather than denial or despair), of imagination (understanding the problem’s scope and our power as individuals to genuinely affect it), and of communication (getting our message out). Even (subtly) ineffective (broadly) conventional altruistic efforts seem like they could be useful ways of addressing all three of those problems.