Although I somewhat agree with the comment about style, I feel the point you’re making deserves a bit more enthusiasm. How well-recognized is this trolley problem fallacy? The way I see it, the energy spent on thinking about the trolley problem in isolation illustrates innate human short-sightedness, and perhaps a clear limit of human intelligence as well. ‘Correctly’ solving one trolley problem does not prevent you or someone else from being confronted with the next. My line of argument is that ethical decision making requires an agent to also have a proper ‘theory of mind’: if I am making this decision, what decision will the next person or agent have to deal with? If my car with four passengers chooses to avoid running over five people and hits just one instead, could it also put an oncoming car in the position of having to choose between colliding with 8 people or evading and killing 5? And of course: whose decisions resulted in the trolley problem I’m currently facing, and what is their responsibility?
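To make that concrete, here is a minimal sketch of what I mean by evaluating propagated consequences (this is purely illustrative, not from any existing framework; the class names, the harm numbers, and the assumption that every downstream agent also minimizes propagated harm are all mine):

```python
# Illustrative sketch: score an action not only by its immediate harm, but
# also by the best outcome of the decision problem it forces on the next
# agent. All names and numbers are hypothetical.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Decision:
    """A choice point faced by some agent: option label -> Outcome."""
    options: Dict[str, "Outcome"]

@dataclass
class Outcome:
    immediate_harm: int                     # e.g. people hit by this choice
    induced: Optional[Decision] = None      # decision problem created for the next agent

def propagated_harm(decision: Decision) -> Tuple[str, int]:
    """Return the option minimizing total harm, assuming each downstream
    agent also minimizes propagated harm (a strong simplifying assumption)."""
    best_label, best_total = "", float("inf")
    for label, outcome in decision.options.items():
        total = outcome.immediate_harm
        if outcome.induced is not None:
            total += propagated_harm(outcome.induced)[1]
        if total < best_total:
            best_label, best_total = label, total
    return best_label, int(best_total)

# The car scenario above, roughly: swerving to hit 1 instead of 5 may force
# an oncoming car into a choice between hitting 8 or hitting 5.
oncoming_cars_dilemma = Decision(options={
    "collide": Outcome(immediate_harm=8),
    "evade": Outcome(immediate_harm=5),
})
my_cars_dilemma = Decision(options={
    "stay_course": Outcome(immediate_harm=5),
    "swerve": Outcome(immediate_harm=1, induced=oncoming_cars_dilemma),
})

print(propagated_harm(my_cars_dilemma))
# ('stay_course', 5): swerving looks better in isolation (1 < 5), but it
# propagates into a dilemma whose best case brings the total to 6.
```

The point of the sketch is only that the ranking of options can flip once the decision problems you create for others are counted, which is exactly what evaluating the trolley problem in isolation misses.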
I recently contributed a piece that is essentially about propagating the consequences of decisions, and I’m curious how it will be received. Could it be that this is a bit of a blind spot in ethics and/or AI safety? Given the situations we’ve gotten ourselves into as a society, I feel this is also an area in which humans can very easily be outsmarted...