That you can pick hypothetical conditions where your deontological intuition is satisfied by your “utility function” tells us nothing about the situations where the intuition is in direct conflict with your “utility function”.
Let’s make this simple: if you were certain your utility function was maximized by torturing children, would you do it?
As a side note, the topic seems to be utilitarianism, not consequentialism. The terms are not interchangeable.
I am not Omega. I can’t be “certain”.