I feel like this argument isn’t aided by people on both sides missing that it’s okay (not to mention expected) to have complicated preferences about the universe. Reason is the slave of the passions—you shouldn’t arrive at population ethics by reasoning from simple premises, because human values aren’t simple.
(Of course, given the choice between talking about the world as people care about it versus having to admit they were on the wrong track, I’m sure many philosophers would say that trying to make population ethics simple was a great idea and still is; we just need to separate the “ethics” in question from what people will actually act on or be motivated by.)
So far it probably sounds like I’m agreeing with you and dunking on those professional philosophers at FHI, but that’s not so. A lot of the common-sense problems with longtermism also only work if you assume that people can’t have complicated preferences about how the future of the universe goes. E.g., you can just not like people getting blown up by bombs; this doesn’t mean you have to want people to be packed into the universe like sardines.
You might say that this makes me an odd duck, and that I (along with CSER et al.) am no true Scotsman longtermist. I would counter that actually, pretty much everyone takes practical actions based on this correct picture, where they have complicated preferences about how they want the universe to end up, but because philosophy is confusing, many people give verbal descriptions of ethics based on wrong pictures.