Maybe you already have the answer, Wei Dai.
If we posit a putative friendly AI as one which, e.g., kills no one as a base rule AND is screened by competent AI researchers for any maximizing functions, then any remaining "nice to haves" can simply be put to a vote.
Thank you for the reply.
I don’t know if it was your wording, but parts of that went way over my head.
Can I paraphrase by breaking it down to try to understand what you're saying? Please correct me if I misinterpret.
Firstly: “An AI can be seen as just a very powerful optimization process.”
So whatever process you apply the AI to, it will make that process far more efficient and faster, for example? And thus, just by compounding, any small deviation from our utility function would rapidly grow until it dominated completely, creating problems (or extinction) for us? Doesn't that mean, however, that the AI isn't really that intelligent? It seems (perhaps naively) that any creature that seeks to tile the universe with paperclips or smiley faces isn't very intelligent at all. Are we therefore equating an AI to a superfast factory?
Secondly: “Our local reason of the universe being strongly optimal according to that process’s utility function.”
With the word "reason" in there I don't understand this sentence. Are you saying "the processes we use to utilize resources will become strongly optimal"? If not, can you break it down a little further? I'm particularly struggling with the phrase "Our local reason of the universe".
I pretty much get the rest.
One extrapolation, however, is that we ourselves aren't very optimal. If we speed up, e.g., make our resource-extraction processes more efficient without putting recycling in place, we will run out of resources much more rapidly, with a process optimizer to help us do it.