I contend otherwise. The utilitarian model comes down to a subjective utility calculation which is impossible (I use the word impossible realizing the extremity of the word) to do currently. This can be explicated further elsewhere, but without an unbiased consciousness, one which does not fall prey to random changes of desires, misinterpretations, or miscalculations (in other words, the AI we wish to build), there cannot be a reasonable calculation of utility that accurately models a basket of preferences. As a result it is neither a reasonable nor a reliable method for determining outcomes or understanding individual goals.
True, there may be instances in which a crude utilitarian metric can be devised that accurately represents reality at one point in time. However, the consequentialist argument seems to presume that the accumulated outcome of any specific action taken through consequentialist reasoning will align reasonably, if not perfectly, with the predicted outcome. This is how utilitarianism fails epistemologically: the outcomes are impossible to predict. Exogeny, anyone?
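To make the exogeny point concrete, here is a minimal toy sketch (my own invented numbers and parameters, not drawn from anything above): the calculation adds up the gains the agent's model foresees at each step, while the realized outcome also absorbs an exogenous shock the model does not contain, so the predicted and realized totals drift apart as the chain of consequences lengthens.

```python
import random

# Illustrative sketch only; all quantities are made up for the example.
# An agent predicts the cumulative outcome of a chain of decisions using
# its model, but at each step the world also moves by an exogenous shock
# the model does not represent.

random.seed(1)
STEPS = 20
predicted = 0.0
realized = 0.0
for step in range(1, STEPS + 1):
    modeled_gain = 1.0                  # what the utility calculation foresees
    exogenous = random.gauss(0.0, 2.0)  # what it cannot foresee
    predicted += modeled_gain
    realized += modeled_gain + exogenous
    print(f"step {step:2d}: predicted {predicted:6.1f}  realized {realized:6.1f}")
```

Run it a few times with different seeds and the gap between the two columns wanders arbitrarily far in either direction, which is the sense in which the accumulated outcome need not track the calculated one.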
In fact, what seems to hold truest to form, in terms of long-term goal setting and short-term action, is the virtue ethics which Aristotle so eloquently explicated. This is how, in my view, people come to their correct conclusions while falsely attributing their positive outcomes to other frameworks such as utilitarianism, e.g. someone thinking, “I think that the outcomes of this particular decision will be to my net benefit in the long run, because this will lead to that, and so on.” To be sure, it is possible that a utilitarian calculation could agree with the virtue of the decision if the known variables are finite and the exogenous variables are by and large irrelevant. However, it seems to me that when the variables are complicated beyond currently available calculation, understanding the virtue behind an action or behavior, or the virtues indigenous to the actor, will yield better long-term results.
It is odd because objective Bayesian probability is rooted in Aristotelian logic which is predicated on virtue ethics, and since Eliezer seems to be very focused on Bayesian probability, that would seem to conflict with consequentialist utilitarianism.
However, I may have read the whole thing wrong.
ED: If there is significant disagreement, please explicate it so I can see where my reasoning is unclear or believed to be flawed.
Whether a given process is computationally feasible or not has no bearing on whether it’s morally right. If you can’t do the right thing (whether due to computational constraints or any other reason), that’s no excuse to go pursue a completely different goal instead. Rather, you just have to find the closest approximation of right that you can.
If it turns out that, e.g., virtue ethics produces consistently better consequences than direct attempts at expected utility maximization, then that very fact is a consequentialist reason to use virtue ethics for your object-level decisions. But a consequentialist would do so knowing that it’s just an approximation, and be willing to switch if a superior heuristic ever shows up.
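As a rough illustration of that point (a toy model with invented numbers, nothing more): when your estimates of consequences are noisy enough, a fixed rule that merely correlates with good outcomes can beat picking whichever option your noisy calculation scores highest, yet with accurate estimates the direct calculation wins again, which is exactly the sense in which the rule is only an approximation.

```python
import random

# Toy comparison, purely illustrative. Two options per decision, with true
# utilities drawn at random; option 0 is usually (but not always) better.
# "Direct calculation" picks whichever option has the higher *estimated*
# utility, where the estimate is the true value plus noise (mis-prediction,
# exogenous surprises). The "fixed rule" ignores estimates and always picks
# option 0, standing in for a simple heuristic that correlates with good outcomes.

random.seed(0)
NOISE = 3.0      # how badly consequences are mis-estimated
TRIALS = 10_000

direct_total = 0.0
rule_total = 0.0
for _ in range(TRIALS):
    true_utils = [random.gauss(1.0, 1.0), random.gauss(0.0, 1.0)]
    estimates = [u + random.gauss(0.0, NOISE) for u in true_utils]
    direct_total += true_utils[estimates.index(max(estimates))]
    rule_total += true_utils[0]

print(f"avg realized utility, estimate-maximizing: {direct_total / TRIALS:.3f}")
print(f"avg realized utility, fixed rule:          {rule_total / TRIALS:.3f}")
# With NOISE this large the fixed rule tends to come out ahead; set NOISE
# near zero and the direct calculation wins, so you would switch back.
```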
See Two-Tier Rationalism for more discussion, and Ethical Injunctions for why you might want to do a little of this even if you can directly compute expected utility.
It is odd because objective Bayesian probability is rooted in Aristotelian logic which is predicated on virtue ethics
Just because Aristotle founded formal logic doesn’t mean he was right about ethics too, any more than about physics.
Rather, you just have to find the closest approximation of right that you can.
This assumes that we already know which track the right thing to do is on. You cannot approximate if you do not even know what it is you are trying to approximate.
You can infer, or simply state, that maximizing happiness is what you are trying to approximate; however, that may not in fact be the right thing.
I am familiar with Two-Tier Rationalism and all other consequentialist philosophies. All must eventually boil down to a utility calculation or to an appeal to virtue, as the second tier does. One problem with the Two-Tier solution as it is presented is that its solutions to the consequentialist problems are based on vague terms:
Must be moral principles that identify a situation or class of situations and call for an action in that/those situation(s).
OK, WHICH moral principles, and based on what? How are we to know the right action in any particular situation?
Or on virtue:
Must guide you in actions that are consistent with the expressions of virtue and integrity.
I do take issue with Alicorn’s definition of virtue-busting, as it relegates virtue to simply patterns of behavior.
Therefore, in order to be a consequentialist you must first answer “What consequence is right/correct/just?” The answer, then, is the correct philosophy, not simply how you got to it.
Consequentialism, then, may be the best guide to virtue, but it cannot stand on its own without an ideal. That ideal, in my mind, is best represented as virtue. Virtue ethics, then, supplies the values to which there may be many routes, and consequentialism may be the best of those routes.
ED: Seriously, people, if you are going to downvote my reply, then explain why.