If I WERE a utility-function maximizer and I had proved that killing the baby reliably maximizes that function, then sure, I'd kill.
But I am not. My poorly defined, never-quite-closable morality meter will break in and demand a revision of that nice, consistent utility function I based the decision on. And so the moral agonizing begins.
Answer: I don't know, and deciding will be painful work: weighing all the pros and cons, building and checking new utility functions, rewriting the morality itself...