> The conservative assumption is that AGI is easy, and FAI is hard.
>
> I don’t know if this is actually true. I think FAI is harder than AGI, but I’m very much not a specialist in the area—either area. However, I do know that I’d very much rather overshoot the required safety margin by a mile than undershoot by a meter.
“FAI” here generally means “Friendly AGI”, which would make “FAI is harder than AGI” trivially true.
Perhaps you meant one of the following more interesting propositions:
“(The sub-problem of) Friendliness is harder than (the sub-problem of) AGI.”
“Friendliness is sufficiently hard relative to AGI that, by the time AGI is solved, Friendliness is unlikely to have been solved.”
(Assuming that even the sub-problem of Friendliness still has part or all of AGI as a prerequisite, the latter proposition implies “Friendliness is not so easy relative to AGI that progress on Friendliness will lag only insignificantly behind progress on AGI.”)
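To make the three readings easier to compare, here is one way they could be formalized. This is only a sketch under assumed notation, none of it from the original comment: $D(X)$ for the difficulty of problem $X$, $t_X$ for the time at which $X$ is first solved, and FAI understood as AGI plus Friendliness.

$$
\begin{aligned}
&\text{Trivial claim:} && D(\mathrm{FAI}) \ge D(\mathrm{AGI}), \text{ since solving FAI entails solving AGI} \\
&\text{Proposition 1:} && D(\mathrm{Friendliness}) > D(\mathrm{AGI}) \\
&\text{Proposition 2:} && \Pr\!\left(t_{\mathrm{Friendliness}} \le t_{\mathrm{AGI}}\right) \ll 1
\end{aligned}
$$

In this notation, the prerequisite assumption is $t_{\mathrm{Friendliness}} \ge t_{\mathrm{AGI}}$, and the parenthetical above amounts to the claim that the lag $t_{\mathrm{Friendliness}} - t_{\mathrm{AGI}}$ is not negligible.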