I think this is more a longtermist/non-longtermist divide than a selfish/altruistic one.
But yeah, whether you buy long-term ethics or not, and how much you discount the future, makes a surprisingly large difference to how much you support AI progress. Indeed, I'd argue that a big part of the reason why LW/EA has flirted with extreme slowdowns/extreme policies on AI has to do with the overrepresentation of very, very longtermist outlooks.
One practical point is that for most purposes you should weight long-term impacts far less, even if you're a longtermist, since people are in general very bad at predicting anything beyond, say, 20 years out. The most important implication is that trying to plan over longer horizons than that gets you essentially nowhere.
This means that for practical purposes we can cut out all but one of the potential future generations (probably more than that), which radically cuts the expected value we assign to AI risk and existential risk in general.
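To make the size of that cut concrete, here's a rough back-of-the-envelope sketch. All the numbers are my own illustrative assumptions (population per generation, how many generations a "very, very longtermist" counts, years per generation), not anything from the argument above:

```python
# Rough illustration with made-up numbers: how much of the "value at stake" in
# existential risk survives if you only count what falls inside a ~20-year
# predictable horizon, versus counting astronomical numbers of future generations.

PEOPLE_PER_GENERATION = 8e9           # assumption: roughly today's population
LONGTERMIST_GENERATIONS = 1e6         # assumption: a "very, very longtermist" horizon
HORIZON_YEARS = 20                    # the rough limit of reliable prediction
YEARS_PER_GENERATION = 25             # assumption

# Generations that fall inside the predictable horizon (at most one, here).
generations_in_horizon = max(1, HORIZON_YEARS // YEARS_PER_GENERATION)

value_longtermist = PEOPLE_PER_GENERATION * LONGTERMIST_GENERATIONS
value_horizon = PEOPLE_PER_GENERATION * generations_in_horizon

print(f"Lives at stake, longtermist count:     {value_longtermist:.1e}")
print(f"Lives at stake, horizon-limited count: {value_horizon:.1e}")
print(f"Ratio: {value_longtermist / value_horizon:.0e}x")
```

Whatever numbers you plug in, the point is the same: almost all of the astronomical expected value comes from generations beyond any horizon we can actually forecast, so restricting attention to the predictable window shrinks the stakes by many orders of magnitude.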