To use a drastically different pairing: if you agree that an amoeba can never comprehend a fish, that a fish can never comprehend a chimp, and that a chimp can never understand a human, then there is no reason to stop there and proclaim that humans would understand whatever intelligence comes next.
Yes, if you look through the tower of goals, more intelligent species have more complex goals.
This seems like a bogus use of the outside view. AGI is qualitatively different from evolved intelligence, in that it is not evolved, but built by a lesser intelligence. Moreover, there's a simple explanation for the observation that more intelligent animals have more complex goals: more intelligence permits more subgoals, and natural selection generally alters a species' goals by adding to them rather than by simplifying them. That explanation is pretty much totally inapplicable to a constructed AGI.
I'd love to hear what actual AGI researchers think about this, not just us idle forum dwellers.