It’s not impossible that human values are themselves conflicted. The mere existence of AGI would “rob” us of achievement, because even if the AGI refrained from doing all the work for humans, it would still be “cheating”: the AGI could do all of it better, so human achievement would still be pointless.
And since we may not want to be fooled (made to believe that this is not the case), it’s possible that in this regard even the best optimisation must result in some loss.
Anyway, I can think of at least two more ways. The first is creating games that closely simulate the “joy of work”. The second, my favourite, is humans becoming part of the AGI; in other words, the AGI sharing parts of its superintelligence with humans.