One key difference between olalonde’s plan and SIAI’s plan is the assumption SIAI is making: they assume that any AGI will inevitably (plus or minus epsilon) self-improve to transhuman levels. Thus, from their perspective, olalonde’s step (2) above might as well say, “build a machine that’s guaranteed to eat us all”, which would clearly be a bad thing.
A good summary. I’d slightly modify it in that they would allow the possibility that a really weak AGI may not do much in the way of FOOMing, but they pretty much ignore those cases and expect such an AGI to just be a stepping stone for the developers, who would go on to make better ones. (This is just my reasoning, but I assume they would think similarly.)
Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario—it’s just using the developers’ brains as its platform, as opposed to digital hardware. I don’t know whether the SIAI folks would endorse this view, though.
Can’t we limit the meaning of “self-improving” to at least stuff that the AI actually does? We can already say more precisely that the AI is being iteratively improved by the creators. We don’t have to go around removing the distinction between what an agent does and what the creator of the agent happens to do to it.
Yeah, I am totally on board with this suggestion.
Great. I hope I wasn’t being too pedantic there. I wasn’t trying to find technical fault with anything essential to your position.