This is interesting, but it’s not important to the claims I’m making here. Most people aren’t remotely utilitarian, either.
Here’s why I said utilitarianism pretty firmly implies longtermism. If you care about all humans equally, it seems like you should logically care equally about them regardless of when they happen to live. Slapping a discount on it means you’re not actually a utilitarian by that definition of the term. You’re a short-termist-utilitarian. That’s how I’ve understood the most common definition of utilitarianism, but of course people can use the term differently.
There’s a strong argument that even if you care about everyone equally, you should do some sort of discounting over time to account for how much harder it is to predict the results of current decisions on farther-future utility. But that’s different from simply applying a uniform discount to future utility.
I think it’s worth distinguishing between what I’ll call ‘intrinsic preference discounting’, and ‘uncertain-value discounting’. In the former case, you inherently care less about what happens in the (far?) future; in the latter case you are impartial but rationally discount future value based on your uncertainty about whether it’ll actually happen—perhaps there’ll be a supernova or something before anyone actually enjoys the utils! Economists often observe the latter, or some mixture, and attribute it to the former.
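To make the observational-equivalence point concrete, here’s a minimal sketch (my own toy example, not from the discussion above, with hypothetical function names) assuming a constant per-period probability `p` of a catastrophe that prevents future utility from being realized. An impartial expected-utility maximizer then weights utility `t` periods out by `(1 - p)**t`, which is numerically identical to an intrinsic exponential discount at rate `p / (1 - p)` — so an economist observing the weights alone can’t tell the two apart:

```python
def survival_weight(p: float, t: int) -> float:
    """Uncertain-value discounting: probability the world survives t periods,
    given an independent per-period catastrophe probability p."""
    return (1 - p) ** t

def intrinsic_discount(rate: float, t: int) -> float:
    """Intrinsic preference discounting: standard exponential discount factor
    with pure time-preference rate `rate`."""
    return 1 / (1 + rate) ** t

# With p = 1% per period, uncertain-value discounting exactly mimics an
# intrinsic rate of p / (1 - p) ~= 1.0101%, since 1 + p/(1-p) = 1/(1-p).
p = 0.01
equivalent_rate = p / (1 - p)
for t in (1, 10, 100):
    assert abs(survival_weight(p, t) - intrinsic_discount(equivalent_rate, t)) < 1e-12
```

The equivalence is exact here only because the catastrophe risk is constant per period; a time-varying risk would produce weights no single intrinsic rate can reproduce, which is one way the two kinds of discounting come apart empirically.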
Agreed, that’s exactly what I was trying to get across in that last paragraph.