Actually your deontology says you should NOT push the fat man. Consequentialism says you should.
This would still be the case even if deontology were false.
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
There is no test I can think of which would determine its veracity.
Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc.
Once again, I will ask: how would you test CEV?
Compute CEV. Then actually learn and become the better person that was modeled in order to compute the CEV. See whether you prefer the CEV over any other possible utility function.
Asymptotic estimates could also be made, if and only if utility-function spaces are continuous and can be mapped by similarity: if, as you learn more true things (drawn as a random sample and ordering of all possible true things you could learn), gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards the CEV-predicted preferences, then CEV is almost certainly correct.
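To make that asymptotic test a bit more concrete, here is a minimal toy sketch in Python. Everything in it (the vector representation of preferences, the update dynamics, the function names) is a hypothetical assumption of mine for illustration, not part of CEV itself; in particular, the toy update is built to converge, whereas the real question is whether actual human preference updates behave this way.

```python
import random

def distance(p, q):
    """Euclidean distance between two preference vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def learn_random_fact(prefs, cev_prediction, step=0.05, noise=0.02):
    """Toy update: learning one more true thing (plus extra compute and
    self-reflection) nudges each preference toward the CEV-predicted value,
    with some noise. The convergence here is assumed, not derived."""
    return [p + step * (c - p) + random.uniform(-noise, noise)
            for p, c in zip(prefs, cev_prediction)]

def tends_toward_cev(initial_prefs, cev_prediction, steps=1000, tolerance=0.1):
    """The asymptotic test described above: do preferences end up close to
    the CEV-predicted preferences as learning and reflection accumulate?"""
    prefs = list(initial_prefs)
    for _ in range(steps):
        prefs = learn_random_fact(prefs, cev_prediction)
    return distance(prefs, cev_prediction) < tolerance

# Example with a made-up 3-dimensional preference space.
print(tends_toward_cev([0.9, -0.2, 0.4], cev_prediction=[0.1, 0.7, -0.3]))
```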
If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is “correct” other than inside my own brain somewhere. This directly implies that it is “correct for Frank’s Brain”, not “correct universally” or “correct across all humans”.
Based on this reasoning, if I use my moral intuition to reason about the fat-man trolley problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let’s try it!
My primary deontological rule: When one possible counterfactual future has a lower expected number of deaths than all other possible futures, always take the course of action which leads to that lower-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing.)
A train is going to hit five people. There is a fat man whom I can push down to save the five people with 90% probability. (Let’s just assume I’m really good at quickly estimating this kind of physics within this thought experiment.)
If I don’t push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probability (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not-pushing: 0.99 × 5 = 4.95.
Expected deaths of pushing: 0.99 × 1 + 0.10 × 5 = 1.49.
I apply the deontological rule. That fat man is doomed.
Now let’s try the utilitarian vers—Oh wait. That’s already what we did. We created a deontological rule that says to pick the highest expected utility action, and that’s also what utilitarianism tells me to do.
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all “moral theories” I see philosophers arguing about produce the same output. Equal output, in fact.
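As a sanity check on that “equal output” claim, here is a small Python sketch of the scenario above (the encoding is mine, and only a sketch): the deontological rule picks the action with the fewest expected deaths, the utilitarian side maximizes a utility function defined as minus the expected deaths, and both select the same action.

```python
# Possible actions and their expected deaths, from the numbers above.
expected_deaths = {
    "push": 0.99 * 1 + 0.10 * 5,   # 1.49
    "do_not_push": 0.99 * 5,       # 4.95
}

def deontological_choice(expected_deaths):
    """The rule: take the action leading to the fewest expected deaths."""
    return min(expected_deaths, key=expected_deaths.get)

def utilitarian_choice(expected_deaths):
    """A utility function chosen to mirror the rule: utility = -expected deaths."""
    utility = {action: -deaths for action, deaths in expected_deaths.items()}
    return max(utility, key=utility.get)

# Both framings recommend the same action.
assert deontological_choice(expected_deaths) == utilitarian_choice(expected_deaths) == "push"
```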
So to return to the earlier point: D(trolley, Frank’s Rule) is correct, where trolley is the problem and Frank’s Rule is the set of rules I find most moral. U(trolley, Frank’s Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTIONS) is incorrect for me.
This means that U(trolley) and D(trolley) cannot be “correct” or “incorrect”, because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions; that is, they return a function of a certain type which takes a rule (for D) or a utility function (for U) and returns a recommendation for the trolley problem based on it.
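For illustration, a minimal sketch of that currying in Python closures (all the names here are mine): D(trolley) and U(trolley) each return another function which is still waiting for a rule or a utility function before it can produce a recommendation.

```python
def D(problem):
    """Deontological evaluator: partially applied to a problem, it returns
    a function that still needs a rule."""
    def with_rule(rule):
        return rule(problem)  # the rule itself produces the recommendation
    return with_rule

def U(problem):
    """Utilitarian evaluator: partially applied to a problem, it returns
    a function that still needs a utility function."""
    def with_utility(utility):
        # Recommend the available action with the highest utility.
        return max(problem["actions"], key=utility)
    return with_utility

trolley = {"actions": ["push", "do_not_push"]}

franks_rule = lambda p: "push"                       # stand-in for Frank's Rule
franks_utility = lambda a: 1 if a == "push" else 0   # stand-in utility function

print(D(trolley)(franks_rule))      # push
print(U(trolley)(franks_utility))   # push
```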
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: there is no single-parameter U(x) or D(x) function that returns a single truth-valuable recommendation without any rules or utility functions as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and there exists a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably—which we use is merely a matter of convenience as one will correspond to the brain’s algorithm more easily than the other.
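Here is a toy construction (mine, and only a sketch) of why such equivalents plausibly exist: any utility function induces the rule “take the highest-scoring action”, and any rule induces a utility function that scores the action the rule picks above the rest.

```python
def rule_from_utility(utility):
    """Rule equivalent to a utility function: 'take the best-scoring action'."""
    def rule(actions):
        return max(actions, key=utility)
    return rule

def utility_from_rule(rule, actions):
    """Utility function equivalent to a rule over a given set of options:
    1 for the action the rule selects, 0 for every other action."""
    chosen = rule(actions)
    return lambda action: 1 if action == chosen else 0

actions = ["push", "do_not_push"]
u = lambda a: -1.49 if a == "push" else -4.95   # toy utility = -expected deaths
r = rule_from_utility(u)
u2 = utility_from_rule(r, actions)

# Both routes recommend the same action.
assert r(actions) == max(actions, key=u2) == "push"
```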
My primary deontological rule: When one possible counterfactual future has a lower expected number of deaths than all other possible futures, always take the course of action which leads to that lower-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing.)
I suspect that defining deontology as obeying the single rule “maximize utility” would be a non-central redefinition of the term, something most deontologists would find unacceptable.
The simplified “Do Not Kill” formulation sounds very much like most deontological rules I’ve heard of (AFAIK, “Do not kill.” is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the form I’ve described; it’s not just a toy example, this is actually my primary “deontological” rule as far as I can tell.
And to me there is no difference between “Pull the trigger” and “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”. So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger.
So if for some inexplicable reason it’s really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man.
If I considered standing by and watching people die through my inaction not to count as “Kill”, then I would enforce that rule, and my utility function would also be different. And then I wouldn’t push the fat man either way, whether I calculate it with utility functions or whether I follow the rule “Do Not Kill”.
I agree that it’s non-central, but IME most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and the black-box processes really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
to me there is no difference between “Pull the trigger” and “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”.
I believe that this is where many deontologists would label you a consequentialist.
most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and the black-box processes really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
There are certainly complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, “inaction = negative action” is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I believe that this is where many deontologists would label you a consequentialist.
With all due respect to all parties involved, if that’s how it works, I would label the respective hypothetical individuals who would label me that “a bunch of hypocrites”. They’re no less consequentialist, in my view, since they hide behind words the fact that they have to assume that pulling a trigger will lead to the consequence of a bullet coming out of it, which will lead to the complex consequence of someone’s life ending.
I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I’m having in here are certainly not helping mental clarity and debiasing. (yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time)
In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I’m not sure it’s just a question of not alieving it. There are many good reasons not to believe the evidence that this will work, even more good reasons to believe there is probably a better option, and many reasons why it could be extremely detrimental to you in the long term to push a fat man onto train tracks; if push came to shove, not pushing might end up being the more rational action in a real-life situation similar to the thought experiment.
Actually your deontology says you should NOT push the fat man. Consequentialism says you should.
I’m quite aware of that.
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
At this point, I simply must tap out. I’m at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I’ll just stop trying.
moral theories are tested by their ability to match moral intuition,
Really? This is news to me. I guess Moore was right all along...
You have proof that you should push the fat man?
Lengthy breakdown of my response.
TL;DR: You should push the fat man if and only if X. You should not push the fat man if and only if ¬X.
X can be derived into a rule to use with D(X’) to compute whether you should push or not. X can also be derived into a utility function to use with U(X’) to compute whether you should push or not. The answer in either case doesn’t depend on U or D; it depends on the X’ you derive, which itself depends on X.
This follows from the assumption that for all reasonable a, there exists a g(a) such that U(a) = D(g(a)). Since, given their ambiguity and vague definitions, both U() and D() seem to cover an infinite domain and to be effectively Turing-complete, this assumption seems very natural.
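Written as a formula, in my own notation (the map h for the reverse direction is my addition, mirroring the earlier claim that every possible utility function also has an equivalent rule):

```latex
\[
\forall a \;\; \exists\, g(a) : \; U(a) = D\big(g(a)\big),
\qquad
\forall r \;\; \exists\, h(r) : \; D(r) = U\big(h(r)\big).
\]
```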