How, in a post-AGI world, would you define wealth? Computational resources? Matter?
I don’t think there’s any foundation for speculation on this topic at this time.
Unless we get a hard-takeoff singleton, which is admittedly the SIAI expectation, there will be massive inequality, with a few very wealthy beings and average income barely above subsistence. Thus saith Robin Hanson, and I’ve never seen any significant holes poked in that thesis.
Robin Hanson seems to be assuming that human preferences will, in general, remain in their current ranges. This strikes me as unlikely in the face of technological self-modification.
I’ve never gotten that impression. What I’ve gotten is that evolutionary pressures will, in the long term, still exist. Even if technological self-modification leads to a population that’s 99.99% satisfied to live within strict resource consumption limits, then unless they harshly punish defectors, the 0.01% with a drive for replication or expansion will overwhelm the rest within a few millennia, until the average income is back to subsistence. This doesn’t depend on human preferences, just the laws of physics and natural selection.
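A back-of-the-envelope sketch of that takeover arithmetic (the 99.99%/0.01% split is from the comment above; the 0.1% annual growth rate is an arbitrary assumption for illustration):

```python
# Toy model of the takeover arithmetic: a static majority content to
# stay within fixed limits vs. a tiny minority that keeps expanding.
# The 0.1%-per-year growth rate is made up; any positive rate gives
# the same qualitative result, just on a different timescale.
static = 0.9999       # share of the population that never expands
expanders = 0.0001    # share with a drive to replicate or expand
growth = 0.001        # expanders grow 0.1% per year; the rest do not

years = 0
while expanders < static:
    expanders *= 1 + growth
    years += 1

print(f"Expanders outnumber everyone else after ~{years:,} years")
# With these numbers: a bit over 9,000 years -- a few millennia --
# after which per-capita resources are squeezed back toward subsistence.
```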
What evolutionary pressures? Even making the incredible assumption that we will continue to use sequences of genes as a large part of our identities, what’s to stop a singleton of some variety from eliminating drives for replication or expansion entirely?
I feel uncomfortable speculating about a post-machine-intelligence future even to this extent; this is not a realm in which I am confident about any proposition. Consequently, I view all confident conclusions with great skepticism.
You’re still not getting the breadth and generality of Hanson’s model. To use recent LW terminology, it’s an anti-prediction.
It doesn’t matter whether agents perpetuate their strategies by DNA mixing, binary fission, cellular automata, or cave paintings. Even if all but a tiny minority of posthumans self-modify not to want growth or replication, the few that don’t will soon dominate the light-cone. A singleton, like I’d mentioned, is one way to avert this. Universal extinction and harsh, immediate punishment of expansion-oriented agents are the only others I see.
You (or Robin, I suppose) are just describing a many-agent prisoner’s dilemma. If TDT agents beat the dilemma by cooperating with other TDT agents, then any agents that started out with a different decision theory will have long since self-modified to use TDT.
Alternately, if there is no best decision theoretical solution to the prisoner’s dilemma, then we probably don’t need to worry about surviving to face this problem.
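For concreteness, here is a toy version of the cooperation move being described: agents that can read each other's decision procedure and cooperate only with agents running their own. It is a crude stand-in for TDT-style reasoning, not an implementation of TDT, and it assumes agents can literally inspect each other's source.

```python
# One-shot Prisoner's Dilemma between agents that see each other's
# source code. Run as a script (inspect.getsource needs a file).
import inspect

def mirror_agent(opponent_source: str) -> str:
    """Cooperate iff the opponent runs this exact decision procedure."""
    return "C" if opponent_source == inspect.getsource(mirror_agent) else "D"

def defect_agent(opponent_source: str) -> str:
    """Ignores the opponent and always defects."""
    return "D"

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b):
    move_a = a(inspect.getsource(b))
    move_b = b(inspect.getsource(a))
    return PAYOFFS[(move_a, move_b)]

print(play(mirror_agent, mirror_agent))  # (3, 3): mutual cooperation
print(play(mirror_agent, defect_agent))  # (1, 1): no exploitation
```

Such an agent cooperates with copies of itself and still cannot be exploited by an unconditional defector, which is the property the argument above leans on.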
Now, there’s a generalized answer. It even covers the possibility of meeting aliens—finding TDT is a necessary condition for reaching the stars. Harsh punishment of inconsiderate expanders might still be required, but there could be a stable equilibrium without ever actually inflicting that punishment. That’s a new perspective for me, thanks!
To be even more general, suppose that there is at least one thing X that is universally necessary for effective superintelligences to function. X might be knowledge of the second law of thermodynamics, TDT, a computational substrate of some variety, or any number of other things. There are probably very many such X’s, many of which are entirely non-obvious to any entity that is not itself a superintelligence (i.e., us). Furthermore, there may be at least one thing Y that is universally incompatible with effective superintelligence. Y might be an absolute belief in the existence of the deity Thor, or desiring only to solve the Halting Problem using a TM-equivalent. For the Hansonian model to hold, the desire and ability to expand and/or replicate must be compatible with every X and must not depend on any Y.
This argument is generally why I dislike speculating about superintelligences. It is impossible for ordinary humans to have exhaustive (or even useful, partial) knowledge of all X and all Y. The set of all things Y in particular may not even be enumerable.
We cannot be sure whether there are difficulties beyond our comprehension, but we are certainly able to assign probabilities to that hypothesis based on what we know. I would be justifiably shocked if something we could call a superintelligence couldn’t be formed based on knowledge that is accessible to us, even if the process of putting the seed of a superintelligence together is beyond us.
Humans aren’t even remotely optimised for generalised intelligence; it’s just a trick we picked up to, crudely speaking, get laid. There is no reason that an intelligence of the form “human thinking minus the parts that suck and a bit more of the parts that don’t suck” couldn’t be created using the knowledge available to us, and that is something we can easily place a high probability on. Then you run the hardware at more than 60 Hz.
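Taking the 60 Hz figure at face value and comparing it with an ordinary ~3 GHz processor clock, purely to show the order of magnitude at stake (clock rate is of course not the same thing as thinking speed):

```python
# Back-of-the-envelope only: the raw clock-rate ratio between the 60 Hz
# figure used above and a commodity ~3 GHz processor. This ignores
# everything that actually makes the comparison hard.
brain_rate = 60          # Hz, the figure from the comment
hardware_rate = 3e9      # Hz, an ordinary CPU clock

speedup = hardware_rate / brain_rate
seconds_per_subjective_year = 365 * 24 * 3600 / speedup
print(f"Clock ratio: {speedup:.0e}x")                                  # 5e+07x
print(f"A subjective year every {seconds_per_subjective_year:.1f} s")  # ~0.6 s
```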
Oh, I agree. We just don’t know what self-modifications will be necessary to achieve non-speed-based optimizations.
To put it another way, if superintelligences are competing with each other and self-modifying in order to do so, predictions about the qualities those superintelligences will possess are all but worthless.
On this I totally agree!
Your point is spot on. Competition cannot be relied on to produce adaptation if someone wins the competition once and for all.
Control, owned by preferences.
I wasn’t trying to make an especially long-term prediction:
“We saw the first millionaire in 1716, the first billionaire in 1916 - and can expect the first trillionaire within the next decade—probably before 2016.”
Inflation.
The richest person on earth currently has a net worth of $53.5 billion.
The greatest peak net worth in recorded history, adjusted for inflation, was Bill Gates’ $101 billion, which was ten years ago. No one since then has come close. A 10-fold increase in <6 years strikes me as unlikely.
In any case, your extrapolated curve points to 2116, not 2016.
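Worked through with only the milestone years from the quote (the constant-growth framing is an assumption on my part):

```python
# Each 1000x step in record wealth took about two centuries in the
# quoted figures: $1e6 in 1716, $1e9 in 1916. Extending the same curve
# points at 2116 for the first $1e12, not 2016.
years_per_1000x = 1916 - 1716                  # 200 years per 1000x step
print(1916 + years_per_1000x)                  # -> 2116

# Equivalently, the implied growth rate of the record is modest:
annual_rate = 1000 ** (1 / years_per_1000x) - 1
print(f"{annual_rate:.1%} per year")           # -> 3.5% per year
```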
I am increasingly convinced that your comments on this topic are made in less than good faith.
Yes, the last figure looks wrong to me too—hopefully I will revisit the issue.
Update 2011-05-30: yes, 2016 was a simple math mistake! I have updated the text I was quoting from to read “later this century”.
Anyway, the huge modern wealth inequalities are well established—and projecting them into the future doesn’t seem especially controversial. Today’s winners in IT are hugely rich—and tomorrow’s winners may well be even richer. People thinking something like they will “be on the inside track when the singularity happens” would not be very surprising.
Projecting anything into a future with non-human intelligences is controversial. You have made an incredibly large assumption without realizing it. Please update.
If you actually want your questions answered, then money is society’s representation of utility—and I think there will probably be something like that in the future—no matter how far out you go. What you may not find further out is “people”. However, I wasn’t talking about any of that, really. I just meant while money and people with bank accounts are still around.
A few levels up, you said,
“My dispute is with the notion that people with bank accounts and machine intelligence will coexist for a non-trivial amount of time.”
We have been building intelligent machines for many decades now. If you are talking about something that doesn’t yet exist, I think you would be well advised to find another term for it.
Apologies; I assumed you were using “machine intelligence” as a synonym for AI, as wikipedia does.
Machine intelligence *is*, more or less, a synonym for artificial intelligence.
Neither term carries the implication of human-level intelligence.
We don’t really have a good canonical term for “AI or upload”.