This post and many of the comments are ignoring one of the main reasons that money becomes so much more critical post-AGI. It’s because of the revolution in self-modification that ensues shortly afterwards.
Pre-AGI, a person can use their intelligence to increase their money, but not the other way around; post-AGI it’s the opposite. The same applies if you swap intelligence for knowledge, health, willpower, energy, happiness set-point, or percentage of time spent awake.
This post makes half of that observation: that it becomes impossible to increase your money using your personal qualities. But it misses the other half: that it becomes possible to improve your personal qualities using your money.
The value of capital is so much higher once it can be used for self-modification.
For one thing, these modifications are very desirable in themselves. It’s easy to imagine a present-day billionaire giving up all he owns for a modest increase along just a few of these axes, say a 300% increase in intelligence and a 100% increase in energy.
But even if you trick yourself into believing that you don’t really want self-modification (most people will claim that immortality is undesirable, so long as they can’t have it, and likewise for wireheading), there are race dynamics that mean you can’t just ignore it.
People who engage in self-modification will be better equipped to influence the world, affording them more opportunities for self-modification. They will undergo recursive self-improvement similar to the kind we imagine for AGI. At some point, they will think and move so much faster than an unaugmented human that it will be impossible to catch up.
This might be okay if they respected the autonomy of unaugmented people, but all of the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities who are vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even more so to humans than to AIs, since humans were not explicitly designed to be helpful or benevolent.
You might say, “Well, there’s nothing I can do in that world anyway, because I’m always going to lose a self-modification race to the people who start as billionaires, and since it’s a winner-takes-all situation, there’s no prize for giving it a decent try.” However, this isn’t necessarily true. Once self-modification becomes possible, there will still be time to take advantage of it before things start getting out of control. It will start out very primitive, resembling curing diseases more than engineering new capabilities. In this sense, it arguably already exists in a very limited form.
In this critical early period, a person will still have the ability to author their destiny, with the degree of that ability being mostly determined by the amount of self-modification they can afford.
Under some conditions, they may be able to permanently escape the influence of a hostile superintelligence (whether artificial or a hyperaugmented human). For example, a nearly perfect escape outcome could be achieved by travelling in a straight line close to the speed of light, bringing with you sufficient resources and capabilities to:
Stay alive indefinitely
Continue the process of self-improvement
In the chaos of an oncoming singularity, it’s not unimaginable that a few people could slip away in that fashion. But it won’t happen if you’re broke.
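The escape claim rests on a quantitative point: any pursuer is also bounded by the speed of light, and accelerating expansion caps the comoving distance anyone can ever cover, so a big enough head start at a speed close enough to c can never be closed. Here is a minimal numeric sketch of that chase under a toy model; the de Sitter-style exponential expansion, the constant peculiar speed, and all the specific numbers are my own illustrative assumptions, not anything from the comment above.

```python
# Toy chase model: exponentially expanding universe with scale factor
# a(t) = exp(t), time measured in Hubble times, c = 1, and both parties
# holding a constant peculiar speed (an idealization of continuous thrust).
import numpy as np

def comoving_distance(beta, t_start, t):
    """Comoving distance covered by time t by an object that leaves the
    origin at t_start and holds peculiar speed beta (fraction of c).
    From dchi/dt = beta / a(t) with a(t) = exp(t):
        chi(t) = beta * (exp(-t_start) - exp(-t))."""
    return beta * (np.exp(-t_start) - np.exp(-np.maximum(t, t_start)))

def pursuer_ever_catches(beta_traveler, beta_pursuer, head_start, t_max=50.0):
    """True if a pursuer departing `head_start` Hubble times after the
    traveler ever reaches the traveler's comoving position."""
    t = np.linspace(head_start, t_max, 200_000)
    traveler = comoving_distance(beta_traveler, 0.0, t)
    pursuer = comoving_distance(beta_pursuer, head_start, t)
    return bool(np.any(pursuer >= traveler))

# Illustrative numbers only: a traveler at 0.99c versus a light-speed pursuer.
# With a head start of 0.02 Hubble times the gap never closes, because
# expansion caps the distance either party can ever cover; with only a
# 0.005 Hubble-time head start the pursuer still catches up.
print(pursuer_ever_catches(0.99, 1.0, head_start=0.02))   # False
print(pursuer_ever_catches(0.99, 1.0, head_start=0.005))  # True
```

The only point of the toy model is that whether you can be caught turns on how early you leave and how close to c you can sustain, and in this scenario both of those are things that capital buys.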
Notes
The line between buying an exocortex and buying/renting intelligent servants is somewhat blurred, so arguably the OP doesn’t totally miss the self-modification angle. But it should be called out a lot more explicitly, since it is one of the key changes coming down the pike.
Most of this comment doesn’t apply if AGI leads to a steady state where humans have limited agency (e.g. ruling AGIs or their owners prevent self-modification, or humans are replaced entirely by AGIs). But if that sort of outcome is coming, then our present-day actions have no positive or negative effects on our future, so there’s no point in preparing for it.
This might be okay if they respected the autonomy of unaugmented people, but all of the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities who are vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even more so to humans than to AIs, since humans were not explicitly designed to be helpful or benevolent.
I would go further and say that augmented humans are probably riskier than AIs: you can't legally run on a human much of the experimentation you can run on an AI; aligning a human to yourself is far harder and legally riskier, because it is essentially brainwashing; and it's easier to control an AI's data sources than a human's.
This is a big reason why I never really liked the human-augmentation path to solving AI alignment that people like Tsvi Benson-Tilsen favor: you now potentially have two alignment problems instead of one (link below):
https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods