Well, yes, but I interpreted the problem of an impossibly complicated value definition as the eFAI* never coming out of its destructive phase (which does seem to be a problem with Paul's specific proposal, even if we assume that it theoretically converges to a FAI), and hence possibly just eating the universe without producing anything of value. So "destroy the world" is, in a sense, the sole manifestation of the problem with a hypothetical implementation of that proposal...
[* eFAI = eventually-Friendly AI, let’s coin this term]
Oh, please reinterpret my comment as replying to this comment of yours. (That one is specifically talking about Paul’s proposal, right?)