Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario—it’s just using the developers’ brains as its platform, as opposed to digital hardware.
Can’t we limit the meaning of “self-improving” to at least stuff that the AI actually does? We can already say more precisely that the AI is being iteratively improved by the creators. We don’t have to go around removing the distinction between what an agent does and what the creator of the agent happens to do to it.
Yeah, I am totally on board with this suggestion.
Great. I hope I wasn’t being too pedantic there. I wasn’t trying to find technical fault with anything essential to your position.