In your framework, self-improving AI is vertically general (since it can do everything necessary for the task of AI R&D)
It might actually not be; it’s sort of hard to be vertically general.
An AI needs electricity and hardware. If it gets its electricity from its human creators and needs its human creators to actively choose to maintain its hardware, then those are necessary subtasks in AI R&D which it can’t solve itself.
I think it makes sense to distinguish between a self-improving AI which can handle contract negotiations etc. in order to earn the money needed to buy electricity and hire people to handle its hardware, vs an AI that must be owned in order to achieve this.
That said, a self-improving AI may still be more vertically general than other things. I think it’s sort of a continuum.
Even though this list of missing subtasks isn’t very long, lacking these abilities greatly decreases the horizontal generality of the AI.
One thing that is special about self-improving AIs is that they are, well, self-improving. So presumably they increase their horizontal generality, their vertical generality, or their cost-efficiency over time (or, more likely, some combination of the three).
An AI needs electricity and hardware. If it gets its electricity from its human creators and needs its human creators to actively choose to maintain its hardware, then those are necessary subtasks in AI R&D which it can’t solve itself.
I think the electricity and hardware can be considered part of the environment the AI exists in. After all, a typical animal (like, say, a cat) needs food, water, air, etc. in its environment, which it doesn’t create itself, yet (if I understood the definitions correctly) we’d still consider a cat to be vertically general.
That said, I admit that it’s somewhat arbitrary what’s considered part of the environment. With electricity, I feel comfortable saying it’s a generic resource (like air to a cat) that can be assumed to exist. That’s more arguable in the case of hardware (though cloud computing makes it close).
I think there’s a distinction between the environment being in ~equilibrium and you wresting a resource out of that equilibrium, versus you being part of a greater entity which wrests resources out of the equilibrium and funnels them to you?
That’s a good point, though I’d word it as an “uncaring” environment instead. Let’s imagine, though, that the self-improving AI pays for its electricity and cloud computing with money, which (after some seed capital) it earns by selling access to its improved versions through an API. Then the environment need not show any special preference towards the AI. In that case, the AI seems to demonstrate as much vertical generality as an animal or plant.
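To make that loop concrete, here’s a toy sketch in Python (the `simulate` helper and all its numbers are invented for illustration, not a model of any real system): the AI is self-sustaining exactly when its API revenue covers its electricity and compute bills, and seed capital only buys time otherwise.

```python
# Toy model of the self-funding loop: an AI with some seed capital pays
# for electricity and cloud compute out of the revenue from selling API
# access to its improved versions. All numbers are made up.

def simulate(seed_capital, monthly_revenue, monthly_costs, months=24):
    """Track capital month by month; stop early once the AI can no
    longer pay its bills."""
    capital = seed_capital
    history = [capital]
    for _ in range(months):
        capital += monthly_revenue - monthly_costs
        history.append(capital)
        if capital < 0:  # can't pay for electricity/compute anymore
            break
    return history

# Revenue covers costs: capital grows with no owner subsidizing it.
print(simulate(seed_capital=100_000, monthly_revenue=60_000, monthly_costs=50_000))

# Costs exceed revenue: seed capital only delays the point of failure.
print(simulate(seed_capital=100_000, monthly_revenue=40_000, monthly_costs=50_000))
```

The point is just that nothing in this loop requires the environment to care about the AI; it only requires that somebody is willing to trade money for API access.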
That seems reasonable to me.