In this analogy, the relevant concern maps for me to the notion of “safety” of airplanes. And we know what “safety” for airplanes is: it means people don’t die. It’s hard to make a proper analogy, since for all ordinary technology the moral questions are easy, and you are left with technical questions. But with FAI, we also need to do something about moral questions, on an entirely new level.
I agree that solving FAI also involves solving non-technical, moral questions, and that considerable headway can probably be made on these without knowledge about AGI. I was only saying that there’s a limit on how far you can get that way.
How far or near that limit is, I don’t know. But I would expect there to be something useful in pure AGI earlier than one might naively think. E.g. the Sequences draw on plenty of math/compsci-related material, and I expect that likewise some applications and techniques from AGI will be necessary for FAI.