What drastic actions do you see other people following through on that you consider unjustified?
I am mainly worried about future actions. The perception of imminent risk from AI could create an enormous incentive to commit incredibly stupid acts.
Consider the following comment by Eliezer:
I’ll readily concede that my exact species extinction numbers were made up. But does it really matter? Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.

I believe that this argument is unwise, and that the line of reasoning is outright dangerous, because it justifies too much in the minds of certain people. Making decisions on the basis of the expected utility associated with colonizing the Hercules supercluster is a prime example of the kind of reasoning I am skeptical of.
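To make concrete why this style of argument can “justify too much”, here is a minimal sketch of the expected-utility arithmetic behind astronomical-stakes reasoning. Every number below is an assumed placeholder for illustration; none of them comes from the discussion itself.

    # Illustrative only: all probabilities and payoffs are made-up
    # placeholders, chosen to show how astronomical stakes swamp
    # every near-term consideration.

    # Assumed utility of the near-term outcomes (arbitrary units).
    u_save_80_percent = 1.0e6   # assumed value of saving 80% of species
    u_save_20_percent = 0.25e6  # assumed value of saving 20% of species

    # Assumed payoff of a colonized supercluster, and an assumed tiny
    # shift in its probability from prioritizing colonization.
    u_supercluster = 1.0e20     # assumed utility of galaxy-civilizations
    delta_p = 1.0e-9            # assumed change in probability

    gain_from_conservation = u_save_80_percent - u_save_20_percent
    gain_from_colonization = delta_p * u_supercluster

    print(gain_from_conservation)  # 750000.0
    print(gain_from_colonization)  # 100000000000.0 -- dominates by orders of magnitude

Under any assumptions of roughly this shape, the far-future term dominates regardless of how the near-term question is settled, which is precisely the property being objected to here.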
Mostly, the actions I see people taking (and exhorting others to take) on LW are “do research” and “fund others doing research,” to the negligible extent that any AI-related action is taken here at all. And you seem to support those actions.
But, sure… I guess I can see how taking a far goal seriously might in principle lead to future actions other than research, and how those actions might be negative, and I can sort of see responding to that by campaigning against taking the goal seriously rather than by campaigning against specific negative actions.
Thanks for clarifying.