This is still true:
Trolley problems make a lot of sense in deontological ethics, to test supposedly universal moral rules in extreme situations.
Trolley problems do not make much sense in consequentialist ethics, as the optimal action for a consequentialist can differ drastically between the messy, complicated real world and the idealized world of thought experiments.
If you’re a consequentialist, trolley problems are entirely irrelevant.
The messy, complicated real world never contains situations where you can sacrifice a few people to benefit many people?
Or if it does, in such situations we’ll figure out the optimal action using completely different considerations from those we would use in the idealized case?
I don’t believe either of those.
The messy, complicated real world always contains people with different agendas, massive uncertainty and disagreement about likely outcomes, moral hazard, and affected people pushing to get their desired result by any means available.
If you assume those away, the trolley problem has nothing to do with the real world.
Exactly. The central problems of real-world morality involve dealing with the uncertainty, bias, and signaling issues of realistic high-stakes scenarios. By assuming all that complexity away, trolley problems end up about as relevant as an economics problem without money or preferences.
A more useful research program would focus on probing the effects of uncertainty and social issues on moral decision-making. But that makes for poor cocktail party conversation.
Trolley problems may be useful if you’re e.g. an extremely smart person doing Philosophy, Politics and Economics at Oxford and you’re destined for a career in politics, where dealing with real-life lose-lose situations is going to be part of the job. Or if you want to understand such people, e.g. because you’re on one of the metaphorical tracks.
Of course it does. This is why such hypotheticals are used to entrap politicians, the ones who usually have the job of making the decision.
It’s not clear to me whether the avoidance or entrapment came first.
If you’re a consequentialist, trolley problems are easy.
Only if you know whether or not someone is watching!
That is, getting caught not acting like a deontologist is a consequence that must sometimes be avoided. This becomes relevant when considering, for example, whether to murder AGI developers who have a relatively small chance of managing friendliness but a high chance of managing recursive self-improvement.
Relevant, perhaps, but if you absolutely can’t talk them out of it, the negative expected utility of allowing them to continue could greatly outweigh that of being imprisoned for murder.
Of course, it would take a very atypical person to actually carry through on that choice, but if humans weren’t so poorly built for utility calculations we might not even need AGI in the first place.
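For concreteness, here is a toy sketch of the expected-utility comparison that exchange gestures at. Every number in it is an invented assumption for illustration, not an estimate anyone in the thread endorses:

```python
# Toy expected-utility comparison for the AGI-developer scenario above.
# All probabilities and utilities are made-up assumptions for illustration.

p_takeoff = 0.90     # assumed chance the developers manage recursive self-improvement
p_friendly = 0.05    # assumed chance they also manage friendliness
u_good = 1e9         # assumed utility of a friendly AGI
u_bad = -1e12        # assumed utility of an unfriendly takeoff
u_prison = -1e3      # assumed disutility of being imprisoned for murder

# Expected utility of letting the developers continue:
eu_continue = p_takeoff * (p_friendly * u_good + (1 - p_friendly) * u_bad)

# Expected utility of stopping them and going to prison:
eu_stop = u_prison

print(f"continue: {eu_continue:.3g}, stop: {eu_stop:.3g}")
# With these invented numbers eu_continue is hugely negative, which is the
# (contested) arithmetic behind the comment above; different assumptions
# about the probabilities flip the conclusion entirely.
```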