I’ve used the same terms (horizontal and vertical generality) to refer to what I think are different concepts than what’s discussed here, but wanted to share my versions of these terms in case you see any parallels.
Horizontal generality: An intelligence’s ability to take knowledge learned from solving one problem and use it to solve other similarly-structured/isomorphic problems (e.g. a human notices that an optimal-routing problem can essentially be mapped to a graph theory problem, so solving one solves the other).
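As a side illustration of the routing example: once you notice the routing problem is isomorphic to shortest-path in a weighted graph, a single standard solver handles both framings. This is just a sketch; the network, link costs, and names are invented for illustration.

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm over an adjacency dict {node: [(neighbor, cost), ...]}.

    The same function answers both "cheapest route between routers" and
    "shortest path between graph vertices" -- the problems are isomorphic.
    """
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")  # goal unreachable

# A toy routing problem, restated as a weighted graph (costs are made up):
network = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
print(shortest_path_cost(network, "A", "D"))  # A -> B -> C -> D costs 3
```

The point is not the algorithm itself but the mapping: the work of solving the graph problem transfers for free to the routing problem once the isomorphism is seen.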
Vertical generality: An intelligence’s ability to use its existing knowledge to augment its own intelligence, whether with tools or by successfully designing smarter agents aligned to it (e.g. a human is struggling with problems in quantum mechanics, and no amount of direct effort helps; they find an alternative route by learning how to create an aligned superintelligence that solves the problems for them).
If you’re an intelligence solving problems, increasing horizontal generality helps because it lets you see how problems you’ve already solved apply to problems you previously didn’t realize they applied to. Increasing vertical generality helps because it opens an alternative route: it directly increases your effective problem-solving ability.