We don’t have an AGI that doesn’t kill us. Having one would be a significant step towards FAI. In fact, “a human-equivalent-or-better AGI that doesn’t do anything greatly harmful to humanity” is a pretty good definition of FAI, or maybe “weak FAI”.
If it’s a tool AGI, I don’t see how it would help with friendliness. And if it’s an active self-developing AGI, I thought the canonical position of LW was that there could be only one, and that it’s too late to do anything about friendliness at that point?
I agree there would probably only be one successful AGI, so it’s not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.
Options (b) and (c) are basically wishes, and wishes are complex X-D
“Not kill us” is an easy criterion; we already have an AI like that, and it plays Go well.