To lay my cards on the table, I’m basically a utilitarian. I think we should maximize happiness and minimize suffering, and am frankly shocked that anyone takes Kant seriously.
Kant’s categorical imperative is weird, and seems distantly related to some of the reasoning you employ:
Overall I’d say...
Goals*, Knowledge → Intentions*/Plans → Actions
*I’m using “goals” here, instead of the OP’s “intentions”, to refer to the desire that puppies not suffer, held prior to actually seeing the puppy suffering.
Why do consequences matter? Because our models don’t always work. What do we do about it? Fix our models, probably.
In this framework, consequences ‘should’ ‘backpropagate’ into intentions or ethics. If they don’t, then maybe something isn’t working right.
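To make the analogy concrete, here’s a toy sketch (purely illustrative; all names, numbers, and actions are hypothetical, not anything from the OP): a world-model predicts how well each action serves the goal, a plan picks the best-rated action, and an observed consequence feeds back to revise the model, which changes future plans.

```python
def choose_action(world_model):
    """Intentions/plans: pick the action the current model rates best."""
    return max(world_model, key=world_model.get)

def backpropagate(world_model, action, observed, lr=0.5):
    """'Backpropagate' an observed consequence into the model of the world."""
    predicted = world_model[action]
    world_model[action] = predicted + lr * (observed - predicted)

# Goal: minimize puppy suffering. The model scores actions by how well
# we *expect* them to serve that goal (hypothetical numbers).
world_model = {"intervene": 0.9, "do nothing": 0.1}

action = choose_action(world_model)   # plan -> action
observed = -1.0                       # the consequence turns out badly
backpropagate(world_model, action, observed)

# The revised model now rates "intervene" below "do nothing",
# so the next plan comes out differently.
print(choose_action(world_model))
```

The point of the sketch is only the loop itself: if observed consequences never flow back into the model that generated the intentions, the “something isn’t working right” case above is exactly what you get.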