Simply put: Utilitarianism is wrong.
Slightly more wordy: The adherents of utilitarianism take as an article of faith that only these final world states matter. Just about everyone acts as if they don’t actually believe only final world states matter. The disagreements you are having here are pretty much entirely with that idea. Basically the entirety of humanity agrees with you.
Technically, that means utilitarians should change that view since humans actually value (and thus have utility for) certain paths, but they don’t, and they keep writing things that are wrong even by the coherent version of their own standards. Utilitarians are thus wrong.
Your speculation that your professed beliefs are incompatible with altruism is also wrong, though that’s trickier to show. Improving the world, as you see it, by your own lights is in fact the very basis of a large portion of altruism. There’s nothing about altruism that is only about world-states.
For instance, someone who likes the idea of orphans being taken care of might oppose taxing people to pay for orphanages, but instead found some (good ones) with their own money for that purpose. That is both an improvement to the world by their standards, and clearly altruistic.
The adherents of utilitarianism take as an article of faith that only these final world states matter. Just about everyone acts as if they don’t actually believe only final world states matter.
What are these “final world states” you’re talking about? They’re not mentioned in the OP, and utilitarians typically don’t privilege a particular point in time as being more important than another. The view and moral treatment of time is actually a defining, unusual feature of longtermism. Longtermism typically views time as linear, and the long-term material future as being valuable, perhaps with some non-hyperbolic discounting function, and influenceable to some degree through individual action.
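(To make the discounting point concrete — this is my own gloss, not something from the OP or any canonical source: exponential discounting applies a constant rate and is time-consistent, while hyperbolic discounting discounts the near future steeply and produces time-inconsistent preferences. In symbols, $D_{\mathrm{exp}}(t) = e^{-\rho t}$ versus $D_{\mathrm{hyp}}(t) = \frac{1}{1+kt}$ with $\rho, k > 0$, and the discounted value of a welfare stream is roughly $V = \int_0^{\infty} D(t)\,u(t)\,dt$. ‘Non-hyperbolic’ presumably means something like the first form with a small $\rho$, or no pure time discounting at all, so the far future isn’t discounted away merely for being far.)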
Note that most people act like only the near-term matters. Buddhists claim something like “there is only this moment.” Many religious folks who believe in an afterlife act as if only “eternity” matters.
Utilitarians should change that view since humans actually value (and thus have utility for) certain paths
In addition to its assumptions about what present-day humans value, this statement takes for granted the idea that present-day humans are the only group whose values matter. The longtermist EA-flavored utilitarianism OP is addressing rejects the second claim, holding instead that the values and utilities of future generations should also be taken into account in our decisions. Since we can’t ask them, but they will exist, we need some way of modeling what they’d want us, their ancestors, to do, if they were able to give us input.
Note that this is something we do all the time. Parents make decisions for the good of their children based on the expected future utility of that child, even prior to conception. They commonly think about it in exactly these terms. People do this for “future generations” on the level of society.
Longtermist utilitarianism is just about doing this more systematically, thinking farther into the future and about larger and more diverse groups of future people/sentient beings than is conventional. This is obviously an important difference, but the philosophy OP is expounding also has its radical elements. There’s no getting away from minority opinions in philosophy, simply because worldviews are extremely diverse.
‘Final world states’ is not the terminology used originally, but it’s what the discussion is talking about. The complaint is that utilitarianism is only concerned with the state of the world, not how it gets there. ‘Final world states’ are the heart of choosing between better or worse worlds. It’s obvious that’s both what is being talked about by the original post, and what my reply is referencing. I suspect that you have mistaken what the word ‘final’ means. Nothing in what I said is about some ‘privileged time’. I didn’t reference time at all. ‘Final’ before ‘world states’ is clearly about paths versus destinations.
Even if you didn’t get that initially, the chosen example is clearly about the path mattering, and not just the state of the world. What I wrote wasn’t even vaguely a critique of, or opposed to, the ‘long-termism’ you bring up with that statement. I have critiques of ‘long-termism’, but those are completely separate and weren’t brought up.
My critique was of Utilitarianism itself, not of any particular sub-variety of it. (And you could easily be, let’s call it, a ‘pathist long-termist’, where you care about the paths that will be available to the future rather than about a utilitarian calculus.) My critique was effectively that Utilitarians need to pay more attention to the fact that people care a lot about how we get there, and it is directly counter to people’s utility functions to ignore that, rendering actual Utilitarianism (as opposed to the concept itself) not in accordance with its own values. This isn’t the primary reason I subscribe to other ethical and moral systems, but it is a significant problem with how people actually practice it.
You also assume another non-existent time dimension when you try to critique my use of the words ‘humans actually value.’ This phrase sets up a broad truth about humans and their nature, not a time-dependent one. Past people cared. Current people care. Future people will care about how we got/get/will get there in general. You aren’t being a good long-termist if you assume away future people’s tendency to care about it (unless you are proposing altering them dramatically so they don’t care, which is an ethical and moral nightmare in the making).
I care about what happened in the past, and how we got here. I care about how we are now, and where we choose to go along the path. I care about the path we will take. So does everyone else. Even your narrative about long-termism is about what path we should take toward the future. It’s possible you got there by some utility calculation...but the chance is only slightly above nothing.
The OP is calling out trying to control people, and preventing them from taking their own paths. Nowhere does the OP say that they don’t care how it turns out...because they do care. They want people to be free. That is their value here.
Side note: I think this is the first time I’ve personally seen the karma and agreement numbers notably diverge. In a number of ways, it speaks well of the place that an argument against the predominant moral system of the place can still get noticeably positive karma.
This is a minor stylistic point, not a substantive critique, but I don’t agree that your choice of wording was clear and obvious. I think it’s widely agreed that defining your terms, using quotes, giving examples, and writing with a baseline assumption that transparency of meaning is an illusion are key to successful communication.
When you don’t do these things, and then use language like “clearly,” “obvious,” “you mistook the meaning of [word],” it’s a little off-putting. NBD, just a piece of feedback you can consider, or not.
I have written a fairly lengthy reply about why the approach I took was necessary. I tend to explain things in detail. My posts would be even longer, and thus much harder to read and understand, if I were any more specific in my explanations. You could expand every paragraph to something this long. People don’t want that level of detail. Here it is.
When you go off on completely unrelated subjects because you misunderstood what a word means in context, am I not supposed to point out that you misunderstood it? Do you not think it is worth pointing out that the entire reply was based on a false premise of what I said? Just about every point in your post was based on a clear misreading of what I wrote, and you needed to read it more carefully.
Words have specific meanings. It is ‘clear’ that ‘final world states’ means ‘the states the world ends up in as a result’. Writing can be, and often is, ambiguous, but that was not. I could literally write paragraphs to explain each and every term I used, but that would only make things less clear! It would also be a never-ending task.
In this case ‘final’ = ‘resultant’, so ‘resultant world states’ would be just as clear and just as short, but very awkward. Whether or not it was clear is important informational content, because that determines whether I should try to write that particular point differently in the future, and/or whether you need to interpret things more carefully. While in other contexts ‘final’ has a relation to time, that is only because of its role in denoting the last thing in a list or sequence (and people often use chronological lists).
It is similar with the word ‘obvious’. I am making a strong, and true, statement as to what the content of the OP’s post is by using the word ‘obvious’. It points out that you should compare that part of my statement to the meaning of the original post, and see that those pieces are the same. This is not a matter of ‘maybe this is what was meant’ or ‘there are multiple interpretations’. Those words are important parts of the message, and removing them would mean leaving out large parts of what I was saying.
I do not act like what I write is automatically transparent as to meaning, but as it was, I was very precise and my meaning was clear. There are reasons other than clarity of the writing for whether someone will or won’t understand, but I can’t control those parts.
People don’t have to like that I was sending these messages, of course. That ‘of course’ is an important part of what I’m saying too: in this case, it acknowledges the generally known fact that people can and will dislike the messages I send, at least sometimes.
Around these parts, some people like to talk about levels of epistemic belief. The words ‘clear’ and ‘obvious’ clearly and obviously convey that my level of epistemic certainty here is very high. It is ‘epistemic certainty’ here because of just how high it is. It does not denote complete certainty like I have for 2 + 2 = 4, but more like my certainty that I am not currently conversing with an AI.
If I were trying to persuade rather than inform, I would not send these messages, and I would not use these words. I would pay more attention to guessing what tone people would read into the fact that I was sending a certain message, rather than to the content of the message itself, and I might avoid words like ‘clear’, ‘obvious’, and so on. Their clarity would be a point against them.
I did include an example. It was one of the five paragraphs, no shorter than the others. That whole thing about someone caring about orphans, wanting them taken care of, being against taking taxes from people to do it, and founding orphanages with their own money? There is no interpretation of things where it isn’t an example of both paths and states mattering, and of how altruism is clearly compatible with that.
Your part about using quotes is genuinely ambiguous. Are you trying to claim I should have quoted random famous people about my argument, which is wholly unnecessary, or that you wish for me to scour the internet for quotes by Utilitarians proving they think this way, which is an impossibly large endeavor for this sort of post rather than a research paper (when even that wouldn’t necessarily be telling)? Or quote the OP, even though I was responding to the whole thing?
Utilitarianism is pretty broad! There are utilitarians who care about the paths taken to reach an outcome.
That’s what I was talking about when I said that it meant utilitarians should ‘technically change that view’ and that it was not doing so that made Utilitarianism incoherent. Do I actually know what percentage of utilitarians explicitly include the path in the calculation? No.
I could be wrong that they generally don’t do that. Whether it is true or not, it is an intuition that both the OP and I share about Utilitarianism. I don’t think I’ve ever seen a utilitarian argument that did care about the paths. My impression is that explicitly including paths is deontological or virtue-ethical in practice. It is the one premise of my argument that could be entirely false. Would you say that a significant portion of utilitarians actually care about the path rather than the results?
Would you say you are one? Assuming you are familiar enough with it and have the time, I would be interested in how you would formulate the basics of Utilitarianism in a path-dependent manner, and whether you would say that differs in actual meaning from how it is usually formulated (assuming you agree that path-dependent isn’t the usual form)?
Yes, I consider it very likely correct to care about paths. I don’t care what percentage of utilitarians have which kinds of utilitarian views because the most common views have huge problems and are not likely to be right. There isn’t that much that utilitarians have in common other than the general concept of maximizing aggregate utility (that is, maximizing some aggregate of some kind of utility). There are disagreements over what the utility is of (it doesn’t have to be world states), what the maximization is over (doesn’t have to be actions), how the aggregation is done (doesn’t have to be a sum or an average or even to use any cardinal information, and don’t forget negative utilitarianism fits in here too), which utilities are aggregated (doesn’t have to be people’s own preference utilities, nor does it have to be happiness, nor pleasure and suffering, nor does it have to be von Neumann–Morgenstern utilities), or with what weights (if any; and they don’t need to be equal). I find it all pretty confusing. Attempts by some smart people to figure it out in the second half of the 20th century seem to have raised more questions than they have produced answers. I wouldn’t be very surprised if there were people who knew the answers and they were written up somewhere, but if so I haven’t come across that yet.
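(If it helps to see those degrees of freedom in one place, here is a sketch of my own rather than any canonical formulation: most views in the family can be squeezed into the shape $\max_{x \in X} A\big(w_1 u_1(x), \ldots, w_n u_n(x)\big)$, where $X$ is what gets optimized over (acts, rules, motives, policies), each $u_i$ is whatever notion of utility is attributed to the $i$-th being (preferences, happiness, pleasure and suffering, von Neumann–Morgenstern utilities, ...), the $w_i$ are the weights, if any, and $A$ is the aggregator (a sum, an average, a negative-utilitarian or merely ordinal variant). Every slot in that expression is one of the disagreements listed above.)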
You probably don’t agree with this, but if I understand what you’re saying, utilitarians don’t really agree on anything or really have shared categories? Since utility is a nearly meaningless word outside of context due to broadness and vagueness, and they don’t agree on anything about it, Utilitarianism shouldn’t really be considered a thing itself? Just a collection of people who don’t really fit into the other paradigms but don’t rely on pure intuitions. Or in other words, pre-paradigmatic?
I don’t see “utility” or “utilitarianism” as meaningless or nearly meaningless words. “Utility” often refers to von Neumann–Morgenstern utilities and always refers to some kind of value assigned to something by some agent from some perspective that they have some reason to find sufficiently interesting to think about. And most ethical theories don’t seem utilitarian, even if perhaps it would be possible to frame them in utilitarian terms.
I can’t say I’m surprised a utilitarian doesn’t realize how vague it sounds? It is jargon taken from a word that simply means ‘able to be used widely’? Utility is an extreme abstraction, literally unassignable, and entirely based on guessing. You’ve straightforwardly admitted that it doesn’t have an agreed-upon basis. Is it happiness? Avoidance of suffering? Fulfillment of the values of agents? Etc.
Utilitarians constantly talk about monetary situations, because that is one place they can actually use it and get results? But there, it’s hardly different from ordinary statistics. Utility there is often treated as a simple function of money, but with diminishing returns. Looking up the term for the kind of utility you mentioned, it seems to once again only use monetary situations as examples, and sources claimed it was meant for lotteries and gambling.
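(As an illustration of the sort of thing I mean, with toy numbers of my own rather than anyone’s actual example: with a diminishing-returns utility like $u(m) = \log_{10} m$, a 50/50 gamble between \$100 and \$10{,}000 has expected utility $\tfrac{1}{2}(\log_{10} 100 + \log_{10} 10{,}000) = \log_{10} 1{,}000$, so it is valued like a sure \$1{,}000 even though the expected payout is \$5{,}050. That is the lotteries-and-gambling setting where the term seems to get its traction.)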
Utility as a term makes sense there, but that is the only place in your list where there is general agreement on what utility means? That doesn’t mean it is a useless term, but it is a very vague one.
Since you claim there isn’t agreement on the other aspects of the theories, that makes them more of an artificial category where the adherents don’t really agree on anything. The only real connection seems to be wanting to do math on how good things are?
The only real connection seems to be wanting to do math on how good things are?
Yes, to me utilitarian ethical theories do usually seem more interested in formalizing things. That is probably part of their appeal. Moral philosophy is confusing, so people seek to formalize it in the hope of understanding things better (that’s the good reason to do it, at least; often the motivation is instead academic, or signaling, or obfuscation). Consider Tyler Cowen’s review of Derek Parfit’s arguments in On What Matters:
Parfit at great length discusses optimific principles, namely which specifications of rule consequentialism and Kantian obligations can succeed, given strategic behavior, collective action problems, non-linearities, and other tricks of the trade. The Kantian might feel that the turf is already making too many concessions to the consequentialists, but my concern differs. I am frustrated with this very long and very central part of the book, which cries out for formalization or at the very least citations to formalized game theory.
If you’re analyzing a claim such as — “It is wrong to act in some way unless everyone could rationally will it to be true that everyone believes such acts to be morally permitted” (p.20) — words cannot bring you very far, and I write this as a not-very-mathematically-formal economist.
Parfit is operating in the territory of solution concepts and game-theoretic equilibrium refinements, but with nary a nod in their direction. By the end of his lengthy and indeed exhausting discussions, I do not feel I am up to where game theory was in 1990.
I wouldn’t be surprised if there are lots of Utilitarians who do not even consider path-dependence. Typical presentations of utility do not mention path dependence and nearly all the toy examples presented for utility calculations do not involve path dependence.
I do think that most, if prompted, would agree that utility can and in practice will depend upon path and therefore so does aggregated utility.
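(One minimal way to write the distinction down — my own formulation, just to make ‘depends upon path’ precise: endpoint-only utility is $u(s_T)$, a function of the final world state alone, while path-dependent utility is $u(s_0, s_1, \ldots, s_T)$, a function of the whole trajectory of world states. Nothing in the maximize-the-aggregate machinery breaks if utility is defined over trajectories; the toy examples just rarely bother.)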