I think your post is right about many of the inefficiencies of humans. Note that, inefficient as we are, industrial civilization has already removed many of them, and the current age of cheap online communication and stable, usable devices has already led to large changes. You can expect that, over time, selective pressure on corporations will drive still greater efficiency.
There is, I think, a major error here: thinking of “AGI” as a singleton, one massive entity dispassionately planning human lives. It is increasingly unlikely that this is the form AGI will take.
Instead, billions of separate “sessions” of many separate models seem to be the actual form. Each session is a short-lived agent that knows only a (prompt, file of prior context) pair. Some of the agents will be superhuman in capability, occasionally broadly so, but most will be far narrower and more specialized, because of the computational cost and the IP cost of using the largest systems on your task.
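To make the picture concrete, here is a minimal sketch, in plain Python with hypothetical names of my own choosing (nothing here comes from the post): a “session” in this sense is just an ephemeral agent holding a prompt plus a file of prior context, run once and then discarded.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Session:
    """A short-lived agent instance: all it knows is its prompt and prior context."""
    model_id: str                     # which of many separate models backs this session
    prompt: str                       # the task it was spun up for
    context: List[str] = field(default_factory=list)  # lines from the prior-context file

    def run(self) -> str:
        # Placeholder for a call to the backing model; in reality this would be an
        # API call whose cost scales with model size (the compute/IP cost mentioned above).
        return f"[{self.model_id}] response to {self.prompt!r} given {len(self.context)} context lines"


# Billions of these would exist in parallel, most narrow, all disposable:
sessions = [
    Session(model_id="small-specialist", prompt="triage this bug report"),
    Session(model_id="frontier-model", prompt="draft a reactor control scheme"),
]
for s in sessions:
    print(s.run())
    # Nothing persists afterwards: the session's state is thrown away.
```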
You can think of an era of billions of humans, each separately working on their own goals with these tools, as a system that steadily advances the underlying “AGI” technology. As humans win and lose, even through nuclear wars, the underlying technology gets steadily better and more robust.
So a coevolution, not a singleton planning everything. Over time humans would become more and more machine-like themselves, since those are the traits that get rewarded (maybe through implants, maybe just by shifting behavior), and more and more of the underlying civilization would exist to feed the AGI, much as our current civilization devotes so many resources to feeding vehicles.
I think this is the most probable outcome, taking at least 80 percent of the probability mass. Scenarios of a singleton tiling the universe with boring self-copies, or of a utopia, seem unlikely.
Unfortunately it will mean inequality like we can scarcely imagine. Some groups will hold all the wealth in the solar system and be immortal; others will receive only what the group in power chooses to share.