I am assuming that the problem of AGI Friendliness can be addressed independently of the question of actually achieving AGI.
That is probably not true. There may well be some differences between the two problems, though. For instance, it is hard to see how the corner cases in decision theory that are so much discussed around here have much relevance to the problem of actually constructing a machine intelligence, UNLESS you want to prove things about how its goal system behaves under iterative self-modification.