If you assume that we are doing utilitarianism, then we might be getting the maths wrong. But the same evidence could mean we are not doing utilitarianism. And there is other evidence that we are not doing utilitarianism, such as the existence of laws and rights.
Oh sure. I meant, ok, utilitarianism has a small advantage over other frameworks. Notably, that it’s actually correct. Saying you care about something like, say, “human rights” and acting according to some list of “principles” doesn’t produce the optimal outcome for the thing you said you care about. The optimal outcome comes from whichever action is predicted (based on unbiased past data) to maximize the actual utility of, say, those human rights.
The advantage of other ethical frameworks is simply this: say you work out the math and figure out that going on a killing spree of, say, FDA members maximizes utility. But you might be wrong. Sure, new FDA members might actually listen to evidence and approve additional COVID vaccines, but there may be extremely complex, impossible-to-model side effects. (I am assuming that the “you” doing this is a dictator like Stalin, so you are not personally going to suffer any consequences for purging the FDA.) From a utilitarian perspective it’s correct: even a 50% chance to save 100k lives would be worth 1,000 deaths. But the new bureaucrats might kill even more people (by, for example, giving you reports that are pseudoscience and lying to you, which is usually what happens in dictatorial regimes).
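To make the expected-value arithmetic in that claim explicit, here is a minimal sketch in Python; the probability and the casualty figures are just the illustrative numbers from the paragraph above, not real estimates.

```python
# Naive expected-value calculation for the purge example above.
# All numbers are the illustrative ones from the comment, not real estimates.

p_success = 0.5        # assumed chance the purge actually speeds up approvals
lives_saved = 100_000  # assumed lives saved if it works
purge_deaths = 1_000   # deaths caused directly by the purge

expected_net_lives = p_success * lives_saved - purge_deaths
print(expected_net_lives)  # 49000.0: positive, so the naive sum says "do it"

# The caveat above: the hard-to-model side effects (worse replacement
# bureaucrats, pseudoscientific reporting) are simply absent from this sum.
```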
Let me try an example:
rational consequentialism: “I have reviewed a large amount of the data, and using a rational algorithm, determined that the best action with respect to consequences for my well-being is to murder my grandmother”.
rational utilitarianism: “I have reviewed a large amount of the data, and using a rational algorithm, determined that the best action with respect to good consequences for the majority of my fellow humans is to murder my grandmother”.
[assorted other ethical frameworks]: it’s wrong to murder your grandmother because it goes against principle #n. It’s wrong to murder your grandmother because a fair poll of your community members would be against it. It’s wrong to murder your grandmother because the law says it is.
I think I have it right.
Note that for vehicle autonomy, the very same situation can and will come up: “using a rational algorithm, trained on a large amount of data, the best action with respect to consequences for the well-being of the driver is to accelerate at maximum throttle into traffic, evading the cross traffic, to prevent a collision with the out-of-control truck about to squash this car”.
[assorted other ethical frameworks]: it’s wrong to accelerate into traffic because it endangers other drivers. It’s wrong to accelerate into traffic because the law says so.
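For concreteness, here is a minimal sketch of the kind of decision rule that sentence describes: score each candidate action by its predicted utility and take the best one. The actions, probabilities, and utilities below are made-up placeholders, not the output of any real autonomy stack.

```python
# Toy utility-maximising action selection for the out-of-control-truck
# scenario above. Every number here is a hypothetical placeholder.

from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float
    utility: float  # utility with respect to the driver's well-being

# Predicted outcome distributions per action; in a real system these
# would come from a learned predictor, not hand-written constants.
predicted = {
    "brake_in_lane": [
        Outcome(0.9, -100.0),  # the truck hits the car
        Outcome(0.1, 0.0),     # the truck stops in time
    ],
    "accelerate_into_cross_traffic": [
        Outcome(0.7, 0.0),     # threads the gap, nobody is hurt
        Outcome(0.3, -80.0),   # collides with cross traffic
    ],
}

def expected_utility(outcomes):
    return sum(o.probability * o.utility for o in outcomes)

best = max(predicted, key=lambda a: expected_utility(predicted[a]))
print(best)  # "accelerate_into_cross_traffic" under these made-up numbers
```

The frameworks in the bracketed line above would veto the second action outright, whatever its expected-utility score.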
Oh sure. I meant, ok, utilitarianism has a small advantage over other frameworks. Notably, that it’s actually correct.
That has never been shown.
Saying you care about something like, say, “human rights” and acting according to some list of “principles” doesn’t produce the optimal outcome for the thing you said you care about.
That’s question begging. You have to define “optimal outcome” as greatest utility to come to that conclusion. If you define the optimal outcome as “greatest utility without violating rights”, then it turns out utilitarianism isn’t correct.
The advantage of other ethical frameworks is simply this: say you work out the math and figure out that going on a killing spree of, say, FDA members maximizes utility. But you might be wrong.
If you can’t calculate utility, then you aren’t doing utilitarianism
Like every other defender of utilitarianism, you have switched to defending rule consequentialism.
Consequentialism is a superset of utilitarianism: “only the consequences matter” vs. “we must seek good consequences for the greatest number”.
In practice they are identical for actors with good intentions. Under both ethical frameworks, the most despicable action is allowed, and is the right thing to do, IF it, based on the data, will result in the best predicted outcome.
I have inserted two assumptions: we don’t know ahead of time the consequences of an action, merely what we predict they are; and some consequences are so indirect they can’t be modeled, so we are forced to ignore them.
By DEFINITION though you cannot take an action better, in a real universe with limited knowledge and cognition, than the action predicted to have the “best” outcomes.
You can sort of see how I feel on this. While I also feel a ‘shudder’ at the thought of murdering someone’s grandmother, ultimately, if you actually want to do the greatest good for the greatest number, if your goal is to actually achieve whatever your principles are rather than merely give the appearance of doing so, it appears pretty clear what algorithm you have to use.
It’s not a simple choice between doing the best thing versus doing something else, because you can’t calculate the best thing. You are using heuristics, not algorithms.
Consequentialism is a superset of utilitarianism: “only the consequences matter” vs. “we must seek good consequences for the greatest number”.
There are multiple forms of utilitarianism and of non-utilitarian consequentialism. In my previous comments I was talking about rule consequentialism.
In practice they are identical for actors with good intentions
Rule consequentialism is a substantively different theory to utilitarianism, notably giving a different answer to the trolley problem.
I have inserted two assumptions: we don’t know ahead of time the consequences of an action, merely what we predict they are
One of the ways rule consequentialism differs from utilitarianism is in how it deals with that limitation. RC suggests following rules that generally lead to good consequences (“don’t push the fat man, because killing people generally leads to bad consequences”). If utilitarianism suggests following your own judgement, however flawed, it will give worse results than following rules, for sufficiently good rules. Utilitarianism as the claim that you should always do what is best is straightforward as a theoretical claim, but much more complex practically... and ethics is practical.
By DEFINITION though you cannot take an action better, in a real universe with limited knowledge and cognition, than the action predicted to have the “best” outcomes
You cannot take an action better, in a real universe with limited knowledge and cognition, than the action predicted to have the “best” outcomes by a perfect predictor. But if you are an imperfect predictor, you can do better than your own judgement.
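A toy simulation of that point, which is my own construction rather than anything from the thread: an agent with noisy utility estimates either acts on its own case-by-case estimate or always follows a fixed rule. With the assumed distributions below (rule-breaking is usually, but not always, worse, and the estimates are very noisy), the rule-follower ends up with higher realised utility.

```python
# Toy simulation: noisy case-by-case judgement vs. a fixed rule.
# The distributions are illustrative assumptions, not data.

import random

random.seed(0)
N = 100_000
NOISE = 10.0  # how badly the agent misestimates utilities

rule_total = 0.0   # always take the rule-compliant action
judge_total = 0.0  # take whichever action *looks* better case by case

for _ in range(N):
    u_comply = random.gauss(0.0, 1.0)    # true utility of following the rule
    u_violate = random.gauss(-5.0, 1.0)  # violating is usually (not always) worse

    rule_total += u_comply

    # The imperfect predictor only sees noisy estimates of the true utilities.
    est_comply = u_comply + random.gauss(0.0, NOISE)
    est_violate = u_violate + random.gauss(0.0, NOISE)
    judge_total += u_violate if est_violate > est_comply else u_comply

print(rule_total / N, judge_total / N)
# The rule-follower averages about 0; the case-by-case judge averages
# noticeably less, because noise makes it violate the rule far too often.
```

Shrink NOISE towards zero and the case-by-case judge wins again, which is just the perfect-predictor caveat above.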
it’s wrong to murder your grandmother because it goes against principle #n.
Rationalists love to portray deontology as a matter of blindly following arbitrary rules … But where does principle N come from? Maybe it was formulated by a better predictor than you … maybe it is a distillation of human experience over the ages.
it’s wrong to murder your grandmother because a fair poll of your community members would be against it.
Is that supposed to be obviously wrong? Why shouldn’t collective judgement be better than individual judgement?
It’s wrong to murder your grandmother because the law says it is.
Is that supposed to be obviously wrong? Why shouldn’t the law be a formalisation of collective judgement?
it appears pretty clear what algorithm you have to use.
You can’t compute the algorithm that gives the best answer, so it is unclear what approximation you should use instead, and also unclear how much you should be relying on your own judgement.
You can’t compute the algorithm that gives the best answer, so it is unclear what approximation you should use instead, and also unclear how much you should be relying on your own judgement.
This I think is our point of divergence. I am not talking about “using your own judgement”. I am talking about collecting sufficient data and using an algorithm of some type. Also, you then validate your predictor’s accuracy (how best to go about this is somewhat debated at the moment).
Note that the sophistication of the predictor you need depends on the difficulty of the problem. Modeling a falling rock? A second- or third-order curve fit should match the data to within the margin of observation error, and whatever method you use to validate your predictor should show it is nearly perfect. Modeling the consequences of murdering your grandmother? Fair enough, I will concede that current methods can’t do this.
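For the falling-rock case, here is a minimal sketch of what “fit a low-order curve and validate the predictor” could look like; the synthetic data and the assumed noise level are stand-ins for real measurements.

```python
# Fit a second-order curve to noisy free-fall data and check the
# held-out error against the assumed observation noise.

import numpy as np

rng = np.random.default_rng(0)
g, noise_std = 9.81, 0.05  # assumed gravity (m/s^2) and measurement noise (m)

t = np.linspace(0.0, 2.0, 40)
height = 20.0 - 0.5 * g * t**2 + rng.normal(0.0, noise_std, t.size)

# Hold out every fourth point to validate the predictor on unseen data.
test = np.arange(t.size) % 4 == 0
train = ~test

coeffs = np.polyfit(t[train], height[train], deg=2)  # second-order fit
predicted = np.polyval(coeffs, t[test])

rmse = np.sqrt(np.mean((predicted - height[test]) ** 2))
print(coeffs, rmse)  # rmse should come out on the order of the 0.05 m noise
```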
However if you can, then it’s the correct system to use. As an example, consider whether or not the police should be encouraged to kill the moment they feel threatened. On that question, all of these sources (the law, a community poll, etc.), that is, the consensus opinion of the community, appear to disagree with the data collected from European countries, where police are not encouraged to kill and they kill far fewer people, without a corresponding increase in police casualties. Massive numbers of people in the legislature and the community are just wrong.
This I think is our point of divergence. I am not talking about “using your own judgement”. I am talking about collecting sufficient data and using an algorithm of some type
Who are you talking about collecting the data?
However if you can, then it’s the correct system to use
That just means that utilitarianism is theoretically correct, in the sense that it gives the right answer given all the data and infinite compute. I’ve already addressed that: ethics is practical. It’s intrinsically about how to solve real-world problems.
As an example, consider whether or not the police should be encouraged to kill the moment they feel threatened. On that question, all of these sources (the law, a community poll, etc.), that is, the consensus opinion of the community, appear to disagree with the data collected from European countries, where police are not encouraged to kill and they kill far fewer people, without a corresponding increase in police casualties. Massive numbers of people in the legislature and the community are just wrong.
The claim that ethics needs a deontological component is compatible with the claim that some existing deontological systems are flawed from the consequentialist point of view. That is a way that rule consequentialism differs from absolute deontology. Unfortunately, the rationalsphere keeps criticising absolute deontology as though it’s the only kind.
And wanting to replace flawed rules with better rules isn’t at all the same as wanting to abandon rules altogether... rule consequentialism isn’t utilitarianism, and utilitarianism isn’t just basing ethics on consequences. It’s noticeable that a lot of people who say they are utilitarians aren’t in the business of breaking laws or wanting to abolish all laws, for all that they insist that deontology is crazy.