Deontological principles often help maximize utility indirectly, as I’m sure most utilitarians agree in contexts like war and criminal justice. Still, I agree deontology can bias people in the direction of libertarian politics. On the other hand, folk economics can bias people away from libertarian politics.
Since utilitarianism values the sum of all future generations far more than it values the current generation, it seems like (if we ignore that existential risks are even more important) utilitarianism recommends whatever policies grow the economy the fastest in the long run. That might be an argument for libertarianism but it might also be an argument for governments spending lots of money subsidizing research and development.
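The arithmetic behind “grow the economy the fastest in the long run” is just compounding. A minimal sketch with made-up growth rates and units, purely to show the orders of magnitude involved (Python only for the arithmetic):

```python
# Illustrative only: two hypothetical long-run growth paths for aggregate welfare.
base = 100.0   # today's aggregate consumption, arbitrary units
years = 200    # a horizon long enough for compounding to dominate

fast = base * (1.03 ** years)  # sustained 3% growth
slow = base * (1.02 ** years)  # sustained 2% growth

print(round(fast), round(slow))  # roughly 36900 vs 5200
# One extra percentage point of sustained growth leaves the future about
# seven times better off, which dwarfs any one-time gain to the present
# generation on a total view that counts future people.
```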
Deontological principles often help maximize utility indirectly, as I’m sure most utilitarians agree in contexts like war and criminal justice. Still, I agree deontology can bias people in the direction of libertarian politics.
It seems more the other way to me—die-hard libertarians tend toward deontological positions, typically by gradual reification of consequentialist instrumental values into deontological terminal values (“free markets usually produce the best results” becomes “free markets are Good”, &c.).
On the other hand, folk economics can bias people away from libertarian politics.
This is true, of course, and it’s worth noting that I agree with a substantial majority of libertarian positions, which is part of why I find some aspects of libertarianism so irritating: they help marginalize a political outlook that could otherwise be doing some good.
Since utilitarianism values the sum of all future generations far more than it values the current generation, it seems like (if we ignore that existential risks are even more important) utilitarianism recommends whatever policies grow the economy the fastest in the long run. That might be an argument for libertarianism but it might also be an argument for governments spending lots of money subsidizing research and development.
I’d think it would more likely be an argument for both: subsidized research combined with lowered barriers to entry for innovative businesses. Essentially, tile the country with alternating universities and Silicon Valley-style startup hotbeds (see also: Paul Graham’s wet dream).
Anyway, I don’t think all forms of utilitarianism assign value to future generations that may or may not ever exist. Assigning value to potential entities seems fraught with peril.

Such as?

It would seem to support the biblical condemnation of onanism.
“Potential entities” here doesn’t mean “currently existing non-morally-significant entities that might give rise to morally significant entities”, just “entities that don’t exist yet”. A much clearer phrasing would be something like “Does my utility function aggregate over all entities existing in spacetime, or only those existing now?” IMO, the latter is obviously wrong, either being dynamically inconsistent if “now” is defined indexically, or, if “now” is some specific time, implying that we should bind ourselves not to care about people born after that time even once they do exist.
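To make the dynamic-inconsistency horn concrete, here is a toy model (hypothetical welfare numbers; Python only for illustration). An aggregator restricted to people alive “now” reverses its own ranking once new people are born, while the spacetime aggregator ranks the two policies the same way at every time:

```python
# Each person is (birth_time, welfare under policy A, welfare under policy B).
# The numbers are made up; they only need to produce a preference reversal.
people = [
    (0, 1, 2),   # alive at t=0
    (0, 1, 2),   # alive at t=0
    (1, 10, 1),  # born at t=1
    (1, 10, 1),  # born at t=1
]
A, B = 1, 2      # tuple indices for the two policies

def u_spacetime(policy):
    """Aggregate over everyone who exists anywhere in spacetime."""
    return sum(p[policy] for p in people)

def u_now(policy, now):
    """Aggregate only over people already born at time `now` (indexical)."""
    return sum(p[policy] for p in people if p[0] <= now)

assert u_spacetime(A) > u_spacetime(B)    # 22 > 6: prefers A at every time
assert u_now(B, now=0) > u_now(A, now=0)  # 4 > 2: at t=0 it prefers B...
assert u_now(A, now=1) > u_now(B, now=1)  # 22 > 6: ...and regrets it at t=1
```

The fixed-time variant fails in the complementary way: it keeps ignoring the t=1 people even after they exist.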
Combinatorial explosion, for starters. There’s a very large set of potential entities that may or may not exist, and most won’t. Assigning value to these entities seems likely to lead to absurdity. If nothing else, it seems to quickly lead to some manner of obligation to see as many entities created as possible.
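That last step can be made concrete with a toy total view (made-up welfare numbers; a sketch of the worry, not a theorem): once merely potential entities count, adding one more barely-positive life always raises the total, so the naive sum never says stop.

```python
def total_utility(population):
    # Sum welfare over everyone who counts, actual or potential.
    return sum(population)

world = [5.0, 5.0]           # two existing people, reasonably well off
for _ in range(1000):        # 1000 potential people with barely-positive lives
    candidate = world + [0.01]
    # The total view endorses creating every one of them:
    assert total_utility(candidate) > total_utility(world)
    world = candidate

print(len(world), round(total_utility(world), 2))  # 1002 people, total 20.0
```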
But not assigning value to potential entities would imply a lot of changes. Ignoring global warming, for one. Perhaps enslaving future generations?
I think it’s arguable that global warming could impact plenty of people already alive today, and I’m not sure what you mean by enslaving future generations.
But yes, assigning no value at all to potential entities may also be problematic; I’m just not sure what a reasonable balance is.