That’s basically the reason consequences appear consistent.
If the agent can change its mind, then naturally it can't prove that it won't. I was assuming a situation where the agent reasons further after it has already decided on the action, so that further moral arguments can no longer change what it does.