This is relevant to a topic I have been pondering, which is what are the differences between current AI, self-improving AI, and human-level AI. First, brief definitions:
Current AI: GPT-4, etc.
Self-improving AI: AI capable of improving its own software without direct human intervention. i.e. It can do everything OpenAI’s R&D group does, without human assistance.
Human-level AI: AI that can do everything a human does. Often called AGI (for Artificial General Intelligence).
In your framework, self-improving AI is vertically general (since it can do everything necessary for the task of AI R&D) but not horizontally general (since there are many tasks it cannot attempt, such as driving a car). Human-level AI, on the other hand, needs to be both vertically general and horizontally general, since humans are.
Here are some concrete examples of what self-improving AI doesn’t need to be able to do, yet humans can do:
Motor control. e.g. Using a spoon to eat, driving a car, etc.
Low latency. e.g. Real-time, natural conversation.
Certain input modalities might not be necessary. e.g. The ability to watch video.
Even though this list isn’t very long, lacking these abilities greatly decreases the horizontal generality of the AI.
In your framework, self-improving AI is vertically general (since it can do everything necessary for the task of AI R&D)
It might actually not be; it's sort of hard to be vertically general.
An AI needs electricity and hardware. If it gets its electricity from its human creators and needs its human creators to actively choose to maintain its hardware, then those are necessary subtasks in AI R&D which it can't solve itself.
I think it makes sense to distinguish between a self-improving AI which can handle contract negotiations etc. in order to earn the income needed to buy electricity and hire people to handle its hardware, vs. an AI that must be owned in order to achieve this.
That said, a self-improving AI may still be more vertically general than other things. I think it's sort of a continuum.
Even though this list isn’t very long, lacking these abilities greatly decreases the horizontal generality of the AI.
One thing that is special about self-improving AIs is that they are, well, self-improving. So presumably they either increase their horizontal generality, their vertical generality, or their cost-efficiency over time (or more likely, increase a combination of them).
An AI needs electricity and hardware. If it gets its electricity from its human creators and needs its human creators to actively choose to maintain its hardware, then those are necessary subtasks in AI R&D which it can't solve itself.
I think the electricity and hardware can be considered part of the environment the AI exists in. After all, a typical animal (like say a cat) needs food, water, air, etc. in its environment, which it doesn’t create itself, yet (if I understood the definitions correctly) we’d still consider a cat to be vertically general.
That said, I admit that it’s somewhat arbitrary what’s considered part of the environment. With electricity, I feel comfortable saying it’s a generic resource (like air to a cat) that can be assumed to exist. That’s more arguable in the case of hardware (though cloud computing makes it close).
I think there’s a distinction between the environment being in ~equilibrium and you wrestling a resource out from the equilibrium, versus you being part of a greater entity which wrestles resources out from the equilibrium and funnels them to your part?
That’s a good point, though I’d word it as an “uncaring” environment instead. Let’s imagine though that the self-improving AI pays for its electricity and cloud computing with money, which (after some seed capital) it earns by selling use of its improved versions through an API. Then the environment need not show any special preference towards the AI. In that case, the AI seems to demonstrate as much vertical generality as an animal or plant.
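As a toy sketch of that loop (just to make the reasoning concrete; the seed capital, compute cost, and revenue-growth numbers below are made up rather than taken from anything above), the question is whether the AI's capital ever goes negative while it pays for compute out of API revenue:

```python
# Toy model of the economic loop described above: a self-improving AI starts
# with some seed capital, pays for electricity/cloud compute each cycle, and
# earns revenue by selling API access to its improved versions. All numbers
# are hypothetical placeholders; the point is only to show when the loop is
# self-sustaining without any special preference from the environment.

def run_loop(seed_capital=100.0, compute_cost=10.0, revenue=8.0,
             improvement_rate=1.05, cycles=20):
    """Return the capital trajectory; revenue grows as the AI improves itself."""
    capital = seed_capital
    history = []
    for _ in range(cycles):
        capital -= compute_cost      # buy electricity / cloud compute at market price
        capital += revenue           # sell API access to the improved versions
        revenue *= improvement_rate  # self-improvement raises earning power
        history.append(capital)
        if capital < 0:
            break                    # ran out of money: not self-sustaining
    return history

if __name__ == "__main__":
    trajectory = run_loop()
    sustainable = all(c >= 0 for c in trajectory)
    print(f"final capital: {trajectory[-1]:.1f}, self-sustaining: {sustainable}")
```

Under these assumptions the loop closes as long as revenue growth outpaces the fixed compute cost before the seed capital runs out; nothing in the environment has to favor the AI, it just has to pay the going rate like anyone else.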
That seems reasonable to me.