I feel that you are onto something here, but I have trouble coming up with an operationalization precise enough to let me make predictions from your descriptions. You clearly seem to have some operationalization in mind, though. Can you help me? For me, it starts with “define intelligence as the ability to process information and make decisions,” which is not concrete enough to give me a working model for the vertical vs. horizontal distinction.
I like to literally imagine a big list of tasks, along the lines of:
- Invent and deploy a new profitable AI system
- Build a skyscraper
- Form a cat, which catches and eats mice, mates, and raises kittens
- etc.
An operationalization of horizontal generality would then be the number of tasks on the list that something can contribute to. For instance, restricting ourselves to the first three items: a cat has horizontal generality 1 (it can contribute to forming a cat), a calculator has horizontal generality 2 (it can contribute cognitive work to the first two tasks), and a superintelligence has horizontal generality 3.
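Here is a minimal sketch of how I imagine scoring this, in Python. The capability table is a made-up illustration of the judgments above, not a measurement:

```python
# Horizontal generality: count the tasks on the list an entity can
# contribute to. The task list and the "can contribute" table are
# illustrative assumptions matching the examples in the text.

TASKS = [
    "invent and deploy a new profitable AI system",
    "build a skyscraper",
    "form a cat",
]

# For each entity, the set of tasks it can contribute to (assumed).
CAN_CONTRIBUTE = {
    "cat": {"form a cat"},
    "calculator": {"invent and deploy a new profitable AI system",
                   "build a skyscraper"},
    "superintelligence": set(TASKS),
}

def horizontal_generality(entity: str) -> int:
    """Number of tasks on the list the entity can contribute to."""
    return sum(task in CAN_CONTRIBUTE[entity] for task in TASKS)

for entity in CAN_CONTRIBUTE:
    print(entity, horizontal_generality(entity))
# cat 1, calculator 2, superintelligence 3
```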
Within each task, we can then think of the various subtasks necessary to complete it; e.g., to build a skyscraper, you need land, permits, etc., and then you need to dig, set up stuff, pour concrete, and so on (I don’t know much about skyscrapers, can you tell? 😅). Each of these subtasks needs some physical interventions (which we ignore, because this is about intelligence, though they may be relevant for evaluating the generality of robotics rather than of intelligence) and some cognitive processing. The fraction of the required cognitive subtasks that an entity can perform within a task is its vertical generality (within that specific task).
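And a corresponding sketch for vertical generality within a single task. The subtask list here is invented purely for illustration (again, I don’t know much about skyscrapers):

```python
# Vertical generality within one task: the fraction of that task's
# required *cognitive* subtasks the entity can perform. Physical
# interventions are ignored, per the text. Subtasks are assumptions.

SKYSCRAPER_COGNITIVE_SUBTASKS = [
    "evaluate candidate plots of land",
    "navigate the permitting process",
    "plan the excavation",
    "schedule the concrete pours",
]

def vertical_generality(can_do: set, subtasks: list) -> float:
    """Fraction of the task's cognitive subtasks the entity can perform."""
    return sum(s in can_do for s in subtasks) / len(subtasks)

# E.g., a hypothetical planning tool that handles only the two
# scheduling-style subtasks would score 0.5 on this task:
print(vertical_generality(
    {"plan the excavation", "schedule the concrete pours"},
    SKYSCRAPER_COGNITIVE_SUBTASKS,
))  # 0.5
```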