If by the human valuation system this would be a loss compared to the alternative, and if the AGI accurately promoted human values, doesn’t it follow that it would not choose to so “rob” us?
Suppose we created an AGI, the greatest mind ever conceived, and we created it to solve humanity’s greatest problems. An ideal methodology for the AGI would be to ask for factories to produce physical components so it could copy itself over and over. The AGI would then network its copies all over the world, creating a global mind, and generate a horde of “mobile platforms” from which to observe, study, and experiment with the world for its designed purpose.
The “robbery” is not intentional; the machine is not trying to make mankind meaningless. It is merely meeting its objective of doing its utmost to find solutions to humanity’s problems. The horror is that as the machine mind expands, networking its copies together and sending its mobile platforms out into the world, human discovery and invention would eventually be dwarfed by this being. Unless social and political forces destroyed or dismantled the machine (quite likely), human beings would ultimately be faced with a problem: with the machine thinking of everything for us, and its creations doing all the hard work, we would really have nothing to do. In order to have anything to do, we would have to improve ourselves to at the very least have minds that can compete.
Basically this is all a look at what the world would be like if our current AGI researchers did succeed in building their ideal machine and what it would mean for humanity.
I don’t disagree with you: this would, indeed, be a sad fate for humanity, and certainly a failed utopia. But the failing here is not inherent to the idea of an AGI that takes action on its own to improve humanity; it’s a failing of one that doesn’t do what we actually want it to do, a failure to actually achieve friendliness.
Speaking of what we actually want, I want something more like what’s hinted at in the Fun Theory sequence than an AGI that only slowly improves humanity over decades, which seems to be what you’re talking about here. (Tell me if I misunderstood, of course.)
You actually hit the nail on the head in terms of understanding the AGI I was referencing.
I thought about problems such as this: why would a firm researching crop engineering to solve world hunger bother paying a full and very expensive staff? Wouldn’t an AGI that not only crunches the numbers but also manages mobile platforms for physical experimentation be more cost-effective? The AGI would be smarter and would run around the clock testing, postulating, and experimenting. Researchers would quickly find themselves out of a job if the ideal AGI were born for this purpose.
Of course, if men took on artificial enhancements, their own cognitive abilities could improve enough to compete. They could even potentially network their ideas digitally, or manage mobile robotic platforms with their minds as well. It seems, therefore, that the best solution to the potential labor-competition problem with AGI is simply to use the AGI to help with, or outright carry out, research into methods of making men mentally and physically better.
It’s not impossible that human values are themselves conflicted. The mere existence of an AGI would “rob” us of meaningful achievement, because even if the AGI refrained from doing all the work for humans, it would still be “cheating”: the AGI could do all of it better, so human achievement would still be pointless.
And since we may not want to be fooled (to be made to think that this is not the case), it is possible that in that regard even the best optimisation must result in some loss.
Anyway, I can think of at least two more ways. The first is creating games that vastly simulate the “joy of work”. The second, my favourite, is humans becoming part of the AGI; in other words, the AGI sharing parts of its superintelligence with humans.
“Instead our army of AGI has robbed us of that.”
It depends. Is the excitement of hard work a terminal or an instrumental value?