Firstly, it is impossible to assign a numerical utility to each action. That simply misunderstands the human brain.
Of course, the brain isn’t perfect. The fact that humans can’t always or even can’t usually apply truths doesn’t make them untrue.
Secondly, it is impossible to sum utilities. Give me an example where summing different people's utilities makes any sense.
Pressing a button kills one person; not pressing the button kills two people. utility(1 death) + utility(1 death) < utility(1 death)
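This comparison can be made concrete with a toy sketch. The utility value of -1 per death is an arbitrary assumption chosen purely for illustration; the point is only that summing per-person utilities lets the two outcomes be compared:

```python
# Assumed, purely illustrative value: each death contributes -1 utility.
UTILITY_OF_DEATH = -1.0

def total_utility(deaths):
    """Sum the (identical) utility contribution of each death."""
    return sum(UTILITY_OF_DEATH for _ in range(deaths))

press = total_utility(1)       # pressing the button: one death
dont_press = total_utility(2)  # not pressing: two deaths

# utility(1 death) + utility(1 death) < utility(1 death)
assert dont_press < press
```

Any other negative per-death value would give the same ordering; the comparison only needs the utilities to be summable, not any particular number.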
Thirdly, it treats the action as a one-time action, but it isn't. If you teach people to push the fat man to his death, you won't just end up with three fewer people dead. You'll also have a bunch of emotionless people who think it is OK to kill people if it is for the greater good.
Assuming it's bad to teach consequentialism to people doesn't make consequentialism wrong. It's bad to teach people how to make bombs, but that doesn't mean the knowledge of how to make bombs is incorrect. See Ethical Injunctions.
Fourthly, people don't always arrive at the truth immediately. You can't say you should kill the fat man just because you really think that's going to save the other people.
Such thought experiments often make unlikely assumptions such as perfect knowledge of consequences. That doesn’t make the conclusions of those thought experiments wrong, it just constrains them to unlikely situations.
Fifthly, if utility is not quantitative, the logic of morality can’t be a computation.
Qualitative analysis is still computable. If humans can do something, it is computable.
The discovery of reality might not be a calculation, because you have to go outside and look.
Solomonoff induction is a formalized model of prediction of future events.
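As a very loose illustration of the idea (not Solomonoff induction itself, which ranges over all computable programs and is uncomputable), one can weight simple "repeat this pattern forever" hypotheses by a 2^(-length) simplicity prior, keep only those consistent with the observations, and let them vote on the next bit. All names here are made up for the sketch:

```python
from itertools import product

def pattern_hypotheses(max_len):
    """Enumerate 'repeat this bit pattern forever' hypotheses."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predict_next(observed, max_len=4):
    """Weight each consistent hypothesis by 2^(-length), then vote."""
    weights = {"0": 0.0, "1": 0.0}
    for pattern in pattern_hypotheses(max_len):
        # A hypothesis is consistent if repeating its pattern
        # reproduces the observed prefix.
        stream = pattern * (len(observed) // len(pattern) + 1)
        if stream.startswith(observed):
            next_bit = stream[len(observed)]
            # Shorter (simpler) patterns get exponentially more weight.
            weights[next_bit] += 2.0 ** (-len(pattern))
    return max(weights, key=weights.get)

print(predict_next("010101"))  # the short pattern "01" dominates: "0"
```

The real construction replaces "repeating bit patterns" with all programs for a universal Turing machine, which is what makes it a formal (if uncomputable) model of prediction.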