Pei Wang’s definition of intelligence is just “optimization process” in fancy clothes.
I’ve heard expressions such as “sufficiently powerful optimization process” around LW pretty often, too, especially in the context of sidelining metaphysical questions such as “will AI be ‘conscious’?”
(nods) I try to use “superhuman optimizer” to refer to superhuman optimizers, both to sidestep irrelevant questions about consciousness and sentience, and to sidestep irrelevant questions about intelligence. It’s not always socially feasible, though. (Or at least, I can’t always fease it socially.)