Moral theories of this sort need an all-knowing source of perfect judgement, else they cannot resolve the following case:
Rational Agent A, surrounded by rational Collective-B, knows to the best of their ability that they can achieve Areté through life course alpha. Collective-B disagrees, knowing Agent A can only achieve Areté through life course beta.
Assuming each party reasons from an equivalent depth of information, Eudaimonic Utilitarianism cannot resolve the conflict without an Omega’s help.
Areté itself is a subjective indicator without an Omega, though I admit it seems a nice metric were an Omega present. On second thought, even with an Omega, any incongruence between the agent’s and the Omega’s value functions leads not to eudaimonic fulfilment of the agent, as the Omega, under your proposed theory, need not account for the agent’s preferences unless unfulfilled preferences would preclude achieving Areté.
Whatever the value function of a moral theory that requires an Omega, I see not how any agent with less knowledge and reasoning power than the Omega could reconcile their theory with the territory. Hence CEV’s presumed super-AGI.
Admittedly, the main challenge of Eudaimonic Utilitarianism is probably the difficulty of calculating a utility function that asks what a perfectly rational version of the agent with perfect information would do. Given that we usually only know from behaviour what an agent with bounded rationality would want, it is difficult to extrapolate without an Omega. That being said, even a rough approximation, based on what is generally known about rational agents and as much information as can reasonably be mustered, is probably better than not trying at all.
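Here is roughly how I picture such an approximation; a minimal sketch only, and every name in it (the utility and sampling functions especially) is hypothetical rather than part of the theory:

```python
def extrapolated_utility(action, known_facts, utility_given_facts,
                         sample_missing_facts, n_samples=10_000):
    """Rough stand-in for 'what a perfectly rational, perfectly informed
    version of the agent would want': average the agent's utility for
    `action` over many plausible completions of the facts it lacks.

    `utility_given_facts(action, facts)` scores an action in a fully
    specified world; `sample_missing_facts()` draws one plausible
    completion (as a dict) of the unknown facts. Both are hypothetical
    stand-ins supplied by whoever attempts the approximation.
    """
    total = 0.0
    for _ in range(n_samples):
        facts = {**known_facts, **sample_missing_facts()}
        total += utility_given_facts(action, facts)
    return total / n_samples
```

The gap between such an estimate and what an Omega would compute is exactly the gap bounded rationality leaves us with, but the estimate improves as more of the missing facts are gathered.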
If anything, it is a strong imperative to gather as much information as possible (to get as close to perfect information as you can) before making decisions. So EU would probably support Rational Agent A and Collective-B pooling their information, gathering more together, and trying to come to some consensus about alpha versus beta by approximating perfect information and perfect rationality as closely as they can.
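For instance (purely illustrative; the theory does not prescribe any particular aggregation rule, and the numbers are made up), the two parties could pool their credences that alpha rather than beta serves Agent A’s Areté:

```python
from math import exp, log

def pooled_credence(credences):
    """Average several parties' probability estimates in log-odds space;
    one simple way to form a shared starting point for further inquiry."""
    log_odds = [log(p / (1 - p)) for p in credences]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + exp(-mean))

# Agent A is fairly confident in alpha; Collective-B leans toward beta.
print(pooled_credence([0.8, 0.35]))  # ~0.59: neither party's number, and a prompt to gather more evidence
```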
It is assumed in this theory that intrinsic values would be congruent enough for the agent and the Omega to agree at a high level of abstraction, were the agent given all the information and rationality that the Omega has. Of course, the agent, lacking this information, may find what the Omega does to help it achieve Eudaimonia strange and unintuitive, but that would be due to its lack of awareness of what the Omega knows. Admittedly this can lead to some rather paternalistic arrangements, but assuming that the Omega is benevolent, this shouldn’t be too bad an arrangement for the agent.
My apologies if I’m misunderstanding what you mean by Omega.
If anything, it is a strong imperative to gather as much information as possible (to get as close to perfect information as you can) before making decisions.
This is an imperative for any rational agent insofar as the situation warrants. To assist in this process, philosophers develop decision theories. Decision theories are designed to assist an agent in processing information and deciding a course of action in furtherance of the agent’s values; they do not assist in determining what is worth valuing. Theories of proper moral conduct fill this gap.
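To make the division of labour plain (a bare sketch with hypothetical names; no particular formal decision theory is intended):

```python
def decide(actions, outcomes, probability, utility):
    """A skeletal expected-utility decision rule (hypothetical names).
    Note what it is handed: `utility` -- what the agent values -- is an
    input, not something the rule derives. The rule only processes
    information about outcomes; a moral theory must supply `utility`."""
    def expected_utility(action):
        return sum(probability(outcome, action) * utility(outcome)
                   for outcome in outcomes(action))
    return max(actions, key=expected_utility)
```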
So EU would probably support Rational Agent A and Collective-B pooling their information, gathering more together, and trying to come to some consensus about alpha versus beta by approximating perfect information and perfect rationality as closely as they can.
That does indeed seem like an intermediate course of action designed to further the values of both Collective-B and Agent A. This still feels unsatisfactory, but as I cannot reason why, I must conclude I have a true rejection somewhere that I cannot yet locate. I was going to point out that the above scenario doesn’t reflect human behaviour, but there’s no need: it demonstrates the moral ideal to which we should strive.
Perhaps I object to the coining, as it seems a formalisation of what many do anyway, yet that’s no reason to—Aha!
My true rejection lies in your theory’s potential for being abused. Were one to claim they knew better than any other what would achieve others’ Areté, they could justify behaviour that in fact infringes upon others’ quest for Areté; they could falsely assume the role of Omega.
In the counter case of Preference Utilitarianism, one must account for the preferences of others in one’s own utility calculation. Though it has a similar pitfall, wherein one claims to know that others’ ‘true’ preferences differ from their ‘manifest’ preferences.
The difference lies in each theory’s foundations. Preference utilitarianism is founded upon the empathic understanding that others pursuing their value function makes them, and thus those around them, more fulfilled. In your theory, one can always claim, “If you were only more rational, you would see I am in the right on this. Trust me.”
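To spell out that structural difference as I see it (a toy sketch; every name is hypothetical and the weighting is arbitrary):

```python
def pu_welfare(own_wellbeing, others_manifest_satisfaction, weights):
    """Preference utilitarianism, schematically: others' manifest
    preferences enter the calculation as they themselves report them."""
    return own_wellbeing + sum(w * s for w, s in
                               zip(weights, others_manifest_satisfaction))

def eu_welfare(own_wellbeing, estimated_idealised_arete, weights):
    """Eudaimonic Utilitarianism as I read it: what enters is one's own
    *estimate* of what others' perfectly informed, rational selves would
    pursue -- precisely the step a false Omega can exploit."""
    return own_wellbeing + sum(w * a for w, a in
                               zip(weights, estimated_idealised_arete))
```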
One becoming an evil overlord would also constitute a moral good in your theory, if their net capacity for achievement exceeds that of those whom they prey upon. I make no judgement on this.
Honestly though, I’m nitpicking by this point. Quite clearly written, this (setting aside the Adultery calculation), and good on you for essaying to incorporate eudaimonia into a coherent moral theory.