Caledonian, I think Eliezer’s going off of his distinction (in Knowability of AI and elsewhere) between “optimal” and “optimized”, a distinction the more colloquial senses of the words don’t capture. There may be more optimal ways of achieving our goals, but that doesn’t take away from the fact that we regularly achieve results that
(1) we explicitly set out to achieve,
(2) we can clearly distinguish from other results, and
(3) we would be incredibly unlikely to hit by random effort.
I.e., this comment isn’t close to optimal, but it’s optimized enough as a coherent reply in a conversation that you’d ascribe a decent level of intelligence to whatever optimization process produced it. You wouldn’t, say, wonder if I were a spambot, let alone a random word generator.
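To put rough numbers on (3), here’s a back-of-the-envelope sketch in Python. Every quantity in it is an invented assumption for illustration (the vocabulary size, the reply length, and especially the count of word sequences that would pass as a coherent on-topic reply); the point is only that the negative log probability of random success is enormous even for a far-from-optimal comment.

```python
import math

# Toy illustration of "optimized without being optimal": measure how
# optimized a result is by how improbable it would be under random
# effort, expressed in bits. All numbers below are made up.

vocab_size = 50_000        # assumed size of an English vocabulary
reply_length = 80          # assumed word count of a short comment
acceptable_replies = 1e60  # assumed (very generous) number of distinct
                           # word sequences that would read as a coherent,
                           # on-topic reply in this conversation

# Work in log space; the raw sequence count overflows floating point.
log2_total = reply_length * math.log2(vocab_size)  # log2(all sequences)
log2_hits = math.log2(acceptable_replies)          # log2(target sequences)

# Optimization power in bits: -log2 P(random word generator succeeds).
bits = log2_total - log2_hits
print(f"~{bits:.0f} bits of optimization")  # ~1049 with these numbers
```

Even granting the random word generator a mind-bogglingly large target (10^60 acceptable replies), hitting it by chance costs on the order of a thousand bits of luck, which is the sense in which even a mediocre comment is heavily optimized.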