What are the consequences of neutral actions in ethics? After a quick perusal of Google, there doesn’t seem to be anything addressing my question, and I think there should be some discussion on this.
This question is related to a problem I’ve been having with ethics lately; namely, should one’s ethical system be viable in any kind of reality? Failing that, shouldn’t there be some omniversal meta-ethical structure?
I’ve had a few thoughts on this and played out some arguments in my head, but I want to see what others think.
What sort of consequences are you thinking of? The idea that ethics can consider two options equally preferable and not care which one you take follows from the idea of an ethical utility function (even a complicated function that only exists in an abstract mathematical sense). We don’t need to assume the utility function directly; we can get there from the Archimedean property (roughly, that crossing the street can be worth a small chance of death).
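To make the Archimedean property concrete, here is one standard way it is stated for preferences over lotteries (this is the continuity axiom of the von Neumann–Morgenstern framework; the notation below is my gloss, not something spelled out in the comment above). For outcomes A, B, C with preference relation ≻:

\[
A \succ B \succ C \;\Longrightarrow\; \exists\, p, q \in (0,1) \ \text{such that}\ 
pA + (1-p)\,C \;\succ\; B \;\succ\; qA + (1-q)\,C .
\]

Reading the street-crossing example into this: take A = "arrive across the street", B = "stay put", and C = "death". The axiom says that for a small enough chance of death (1 − p), the gamble of crossing is still preferred to staying put; no outcome is so bad that an arbitrarily small probability of it outweighs everything else.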