It’s a direct logical consequence, isn’t it? If one doesn’t have a precise understanding of AI’s goals, then whatever goals one imparts to the AI won’t be precise. And they must be precise, or (step 3) ⇒ disaster.
He doesn’t agree that they must be precise, so I guess step 3 is also out.
He can’t think that god-powerfully optimizing for a forever-fixed not-precisely-correct goal would lead to anything but disaster. Not if he ever saw a non-human optimization process at work.
So he can only think precision is not important if he believes that
(1) human values are an attractor in the goal space, and any reasonably close goals would converge there before solidifying, and/or
(2) acceptable human values form a large convex region within the goal space, and optimizing for any point within this region is correct.
Without a better understanding of AI goals, both can only be articles of faith...
From the conversation with Luke, he apparently accepts it on faith.
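For concreteness, here is a minimal toy sketch of what hypotheses (1) and (2) would amount to in a made-up two-dimensional "goal space". Everything in it is an assumption chosen for illustration (the quadratic potential, the point labelled `human_values`, and the sampled "acceptable" region), not a model of any real goal space:

```python
# Toy illustration of hypotheses (1) and (2) in an assumed 2D "goal space".
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
human_values = np.array([0.0, 0.0])  # assumed location of "true" human values

# Hypothesis (1): human values are an attractor, so any reasonably close
# starting goal drifts toward them before it "solidifies" (stops changing).
def drift_toward_attractor(goal, steps=1000, rate=0.05):
    """Gradient descent on a toy quadratic potential centred on human_values."""
    for _ in range(steps):
        goal = goal - rate * 2.0 * (goal - human_values)  # -grad of ||g - hv||^2
    return goal

nearby_goal = human_values + rng.normal(scale=0.5, size=2)  # "reasonably close"
print("(1) drifted goal:", drift_toward_attractor(nearby_goal))  # ~ [0, 0]

# Hypothesis (2): acceptable values form a large convex region, and optimizing
# for any point inside it is fine. Model the region as the convex hull of
# sampled "acceptable" value systems and test membership of a candidate goal.
acceptable_samples = rng.normal(scale=1.0, size=(50, 2))  # assumed samples
hull = Delaunay(acceptable_samples)

candidate_goal = np.array([0.3, -0.2])
inside = hull.find_simplex(candidate_goal) >= 0  # -1 means outside the hull
print("(2) candidate goal inside acceptable region:", inside)
```

Of course, neither toy says anything about whether real goal spaces actually have either property; that is exactly the article-of-faith point.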