I think a highly rational person would have high moral uncertainty at this point and not necessarily be described as “altruistic”.
Do you think the correct level of moral uncertainty would place so much probability on egoism-like hypotheses that the behavior it outputs, even after taking into account various game-theoretical concerns about cooperation as well as the surprisingly large apparent asymmetry between the size of altruistic returns available vs. the size of egoistic returns available, isn’t substantially more altruistic than how a typical human or a typical math genius is likely to behave? It seems implausible to me, but I’m not that confident, and as I’ve been saying earlier, the topic is weirdly neglected here for one with such high import.
Given a choice between a more altruistic candidate and a more rational candidate, I think SI ought to choose the latter.
Surely it depends on how much more altruistic and how much more rational.
various game-theoretical concerns about cooperation
Most people have some pre-theoretic intuitions about cooperation, which game theory may merely formalize. It’s not clear to me that familiarity with such theoretical concerns implies one ought to be more “altruistic” than average.
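To make the standard formal picture behind such concerns concrete: below is a minimal iterated prisoner’s dilemma sketch (an illustrative aside using textbook payoffs and strategies, not anything from this discussion) in which a purely self-interested agent does better by cooperating with a reciprocator than by always defecting. Nothing in the calculation requires caring about the other player’s welfare, which fits the point above that familiarity with the theory needn’t make anyone more “altruistic”.

```python
# Illustrative sketch only: payoffs and strategies are standard textbook
# assumptions for a repeated prisoner's dilemma, not drawn from this thread.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs when each strategy sees only the opponent's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(always_defect, tit_for_tat))  # (14, 9): exploiting a reciprocator pays less
```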
the surprisingly large apparent asymmetry between the size of altruistic returns available vs. the size of egoistic returns available
If someone is altruistic because they’ve maxed out their own egoistic values (or have gotten to severely diminishing returns), I certainly wouldn’t count that against their rationality. But if “egoistic returns” include abstract values that the rest of humanity doesn’t necessarily share, then the “large apparent asymmetry” is unclear to me.
as I’ve been saying earlier, the topic is weirdly neglected here for one with such high import
Where did you say that? (I wrote “Shut Up and Divide?”, which may or may not be relevant depending on what you mean by “the topic”.)
Surely it depends on how much more altruistic and how much more rational.
Why “surely”, given that I’m not a random member of humanity, and may have more values in common with a less altruistic candidate than with a more altruistic candidate?
If someone is altruistic because they’ve maxed out their own egoistic values (or have gotten to severely diminishing returns), I certainly wouldn’t count that against their rationality. But if “egoistic returns” include abstract values that the rest of humanity doesn’t necessarily share, then the “large apparent asymmetry” is unclear to me.
I just meant that it seems possible to improve a lot of other people’s expected quality of life at the cost of relatively small decreases to one’s own (though people are generally not doing so). That asymmetry seems like it should skew the output of a process with moral uncertainty between egoism and altruism toward the altruist side in some sense, though I don’t understand how to deal with moral uncertainty (if anyone else does, I’d be interested in your answers to this). If by “abstract values” you mean something like making the universe as simple as possible by setting all the bits to zero, then I agree there’s no asymmetry, but I wouldn’t call that “egoistic” as such.
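One toy way to see how that asymmetry could skew the output, assuming something like “maximize expected choiceworthiness” across the competing theories (one existing but contested approach to moral uncertainty) and assuming the two theories’ values can be put on a common scale at all, is sketched below; the specific numbers are invented purely for illustration.

```python
# Toy sketch of decision-making under moral uncertainty between egoism and
# altruism. The credence, the payoff numbers, and the intertheoretic value
# comparison are all assumptions made purely for illustration.

def expected_choiceworthiness(p_altruism, altruist_value, egoist_value):
    """Credence-weighted value of an action, assuming the two theories'
    valuations can be placed on one common scale (a big assumption)."""
    return p_altruism * altruist_value + (1 - p_altruism) * egoist_value

# Action A: give up a small amount of personal welfare to help many others.
# Altruism rates this highly; egoism rates it as a small loss.
give = expected_choiceworthiness(p_altruism=0.3, altruist_value=100.0, egoist_value=-1.0)

# Action B: keep the resources; both theories rate this as the status quo.
keep = expected_choiceworthiness(p_altruism=0.3, altruist_value=0.0, egoist_value=0.0)

print(round(give, 2), round(keep, 2))  # 29.3 0.0: even a 30% credence in altruism
# favors giving, because the stakes are so lopsided between the two hypotheses.
```

Of course, this bakes in exactly the kind of intertheoretic comparison that makes moral uncertainty hard to handle in the first place, which is part of what remains unclear above.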
Where did you say that? (I wrote “Shut Up and Divide?”, which may or may not be relevant depending on what you mean by “the topic”.)
Here. Yes, SUAD was a good and relevant contribution.
Why “surely”, given that I’m not a random member of humanity, and may have more values in common with a less altruistic candidate than with a more altruistic candidate?
You’re right that it’s not certain that altruism in an FAI team candidate is, all else equal, more desirable. I guess I’m just saying that if it is, then sufficiently large differences in altruism outweigh sufficiently small differences in rationality.
I have written a few more posts that are relevant to the “egoism vs altruism” question:
http://lesswrong.com/lw/8gk/where_do_selfish_values_come_from/
http://lesswrong.com/lw/6ta/what_if_sympathy_depends_on_anthropomorphizing/
http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/
http://lesswrong.com/lw/1mo/the_preference_utilitarians_time_inconsistency/
I guess we don’t have more discussions of altruism vs egoism because making progress on the problem is hard. Typical debates about moral philosophy are not very productive, and it’s probably fortunate that LW is good at avoiding them.
Do you agree? Do you think there are good arguments to be had that we’re not having for some reason? Does it seem to you that most LWers are just not very interested in the problem?