You say: It’s possible to specify a utility function such that, if we feed it to a strong optimization process, the result will be good.
Formally, this is trivially true even as you put it: you can encode any program with an appropriately huge utility function. Therefore, whatever way of doing things is better than using ape-brains can be represented this way.
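(A minimal sketch of the trivial encoding, with illustrative names that are mine rather than anything from the thread: for any fixed program, define a utility function that pays 1 exactly when the agent's output matches that program's output, so a maximizer of it simply reproduces the program's behavior.)

```python
def make_utility(target_program):
    """Return a utility function whose unique maximizer imitates target_program."""
    def utility(observation, action):
        # Pay 1 only for the action the encoded program would have produced.
        return 1.0 if action == target_program(observation) else 0.0
    return utility

def maximize(utility, observation, candidate_actions):
    """Stand-in for the 'strong optimization process': pick the highest-utility action."""
    return max(candidate_actions, key=lambda a: utility(observation, a))

# Example: the encoded 'program' is trivial here, but any computable policy would do.
double = lambda x: 2 * x
u = make_utility(double)
print(maximize(u, 3, range(10)))  # -> 6, i.e. the maximizer just runs the program
```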
It’s not necessarily useful to look at the problem in the way you stated it: at this point I’m doubtful that “expected utility maximization” is the form a usefully stated correct solution will take. So I speak of tools. That there are tools better than ape-brains should be intuitively obvious: one particular case of a tool is simply an ape-brain healed of all its ills, a step in the right direction that proves such steps are possible. I contend there are more steps to be taken, some of them not as gradual or obvious.
Vladimir, sorry. I noticed my mistake before you replied, and deleted my comment. Your reply is pretty much correct.