The project of moving morality from brains into tools is the same project as moving arithmetic from brains into calculators: you are more likely to get a correct answer, and you become able to answer questions that are orders of magnitude more difficult. If the tool is currently in such a state that the intuitive answer is better, then one should embrace intuitive answers (for now). The goal is to eventually get a framework that is actually better than intuitive answers in at least some nontrivial area of applicability (or to work toward this goal, while it remains unattainable).
The problem with “moral codes” is that they are mostly insane: they are overconfident, treating rather confused raw material as if it were useful answers. Trying to finally get it right is not the same as welcoming insanity, although the risk is always there.
You say: It’s possible to specify a utility function such that, if we feed it to a strong optimization process, the result will be good.
I say: Yeah? Why do you think so? What little evidence we currently have isn’t on your side.
Formally, it’s trivially true even as you put it, since you can encode any program with an appropriately huge utility function. Therefore, whatever way of doing things is better than using ape-brains can be represented this way.
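To make the “trivial encoding” point concrete, here is a minimal sketch (my own illustrative construction, not something stated in the thread): given any program mapping observation histories to actions, define a utility function that scores 1 only on the action the program would take; a maximizer of that utility then reproduces the program exactly. The names (`make_utility`, `maximize`, the toy `program`) are hypothetical.

```python
# Sketch: any program's behavior can be recovered by maximizing an indicator
# utility function that rewards exactly the action the program would have
# taken, so "there exists a utility function whose maximization gives good
# results" is, by itself, almost vacuous.

def make_utility(program):
    """Build a utility function scoring 1 only on the program's own choice."""
    def utility(history, action):
        return 1.0 if action == program(history) else 0.0
    return utility

def maximize(utility, history, actions):
    """Stand-in for a 'strong optimization process': pick the top-scoring action."""
    return max(actions, key=lambda action: utility(history, action))

# Hypothetical toy program: a fixed decision rule over observation histories.
program = lambda history: "help" if "distress" in history else "wait"
utility = make_utility(program)

# The maximizer of the constructed utility reproduces the program exactly.
assert maximize(utility, ["distress"], ["help", "wait", "harm"]) == "help"
assert maximize(utility, ["calm"], ["help", "wait", "harm"]) == "wait"
```

The point of the sketch is only that representability as utility maximization places no real constraint; it says nothing about whether feeding some particular utility function to a strong optimizer produces good results.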
It’s not necessarily useful to look at the problem in the way you stated it: at this point I’m doubtful that “expected utility maximization” is the form a usefully stated correct solution will take. So I speak of tools. That there are tools better than ape-brains should be intuitively obvious: a particular case of such a tool is just an ape-brain that has been healed of all its ills, an example of a step in the right direction, proving that steps in the right direction are possible. I contend there are more steps to be taken, some not as gradual or obvious.
Vladimir, sorry. I noticed my mistake before you replied, and deleted my comment. Your reply is pretty much correct.