If a superior alien civilization sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here—we’ll leave the lights on”? Probably not—but this is more or less what is happening with AI.
If a few decades is enough to make an FAI, we could build one and either have it deal with the aliens, or have it upload everyone, put them in static storage, and send a few von Neumann probes to galaxies that will soon be outside the aliens’ cosmological horizon, faster than it would be economical for the aliens to send probes to catch us (assuming they are interested in maximum spread rather than maximum speed).
It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own “FAIs” much older and therefore more powerful.
Regarding probes to extremely far galaxies: it might theoretically work, depending on the economics of space colonization. We would survive, at the cost of losing most of the potential colonization space. Neat.
It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own “FAIs” much older and therefore more powerful.
This needs unpacking of “deal with”. A FAI is still capable of optimizing a “hopeless” situation better than humans, so if you focus on optimizing rather than satisficing, it doesn’t matter that the absolute value of the outcome is much less than it would be without the aliens. The comparison (value with aliens vs. without) is misleading here: it’s part of the problem statement, not part of a consequentialist argument that informs some decision within that problem statement. A FAI would be preferable simply as long as it delivers more expected value than alternative plans that would use the same resources to do something else.
Apart from that general point, it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that’s expensive to take away (or take away without hurting its value), even if the opponent is a superintelligence that spent aeons working on this problem (an analogy with modern cryptography, where defense wins against much stronger offense). In that case a FAI would have something to bargain with.
A FAI is still capable of optimizing a “hopeless” situation better than humans...
This argument is not terribly convincing by itself. For example, a Neanderthal is a much better optimizer than a fruit fly, but both are almost equally powerless against an H-bomb.
...it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that’s expensive to take away...
Hmm, what about the following idea. The FAI can threaten to somehow consume (waste) a large portion of the free energy in the solar system. Assuming the 2nd law of thermodynamics is watertight, it will then be profitable for the aliens to leave us a significant fraction (1/2?) of that portion. Essentially it’s the Ultimatum game. The negotiation can be done acausally, assuming each side has sufficient information about the other (a toy sketch of this trade follows below).
Thus we remain a small civilization but survive for a long time.
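A minimal toy sketch of that threat-bargaining idea, with entirely made-up numbers (the destructible fraction and the small conflict cost are assumptions, not estimates): if carrying out the threat is even slightly costly for the aliens, their payoff-maximizing move is to concede roughly the fraction of free energy the FAI can credibly burn. A real Ultimatum-style or acausal negotiation would split the surplus differently, but the structure is the same.

```python
# Toy model of the threat-based split sketched above. All numbers are illustrative.
# `destructible` is the fraction of local free energy the FAI can credibly burn;
# `conflict_cost` is a small extra loss to the aliens if the threat is executed.

def alien_payoff(concession: float, destructible: float, conflict_cost: float = 0.02) -> float:
    """Aliens' share of the local free energy (normalized to 1) given their offer."""
    if concession >= destructible:
        return 1.0 - concession                 # FAI accepts the offered split
    return 1.0 - destructible - conflict_cost   # FAI rejects and burns the energy

def best_concession(destructible: float, grid: int = 1000) -> float:
    """Aliens' payoff-maximizing concession, found by a simple grid search."""
    offers = [i / grid for i in range(grid + 1)]
    return max(offers, key=lambda c: (alien_payoff(c, destructible), -c))

if __name__ == "__main__":
    for d in (0.1, 0.3, 0.5):
        print(f"destructible fraction {d:.1f} -> aliens concede ~{best_concession(d):.2f}")
```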
For example, a Neanderthal is a much better optimizer than a fruit fly, but both are almost equally powerless against an H-bomb.
There is no reason to expect exact equality, only close similarity. If you optimize, you still prefer something that’s a tiny bit better to something that’s a tiny bit worse. I’m not claiming that there is a significant difference. I’m claiming that there is some expected difference, all else equal, however tiny, which is all it takes to prefer one decision over another. In this case, a FAI gains you as much difference as available, minus the opportunity cost of FAI’s development (if we set aside the difficulty in predicting success of a FAI development project).
(There are other illustrations I didn’t give for how the difference may not be “tiny” in some senses of “tiny”. For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of past human history. This is large compared to the value of millions of human lives, tiny compared to the value of an uncontested future light cone.)
(I wouldn’t give a Neanderthal as a relevant example of an optimizer, as the abstract argument about FAI’s value is scrambled by the analogy beyond recognition. The Neanderthal in the example would have to be better than the fly at optimizing fly values (which may be impossible to usefully define for flies), and have enough optimization power to render the difference in bodies relatively morally irrelevant, compared to the consequences. Otherwise, the moral difference between their bodies is a confounder that renders the point about the difference in their optimization power, all else equal, moot, because all else is now significantly not equal.)
...a FAI gains you as much difference as available, minus the opportunity cost of FAI’s development...
Exactly. So for building FAI to be a good idea we need to expect its benefits to outweigh the opportunity cost (we can spend the remaining time “partying” rather than developing FAI).
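To make that comparison concrete, here is a minimal sketch of the expected-value test being described. Every number is a placeholder invented for illustration; the only point is the shape of the decision rule (build iff the expected gain beats the forgone partying).

```python
# Toy expected-value comparison: "build FAI" vs. "party until the aliens arrive".
# All probabilities and utilities below are placeholders, not estimates.

P_SUCCESS = 0.1       # assumed chance the FAI project works in time
V_SUCCESS = 1000.0    # assumed value of the strongly optimized / bargained outcome
V_FAILURE = 0.0       # assumed value if the project fails (the partying is forgone)
V_PARTY   = 10.0      # assumed value of spending the remaining decades partying

ev_build = P_SUCCESS * V_SUCCESS + (1 - P_SUCCESS) * V_FAILURE
ev_party = V_PARTY

print(f"E[build FAI] = {ev_build:.1f}, E[party] = {ev_party:.1f}")
print("Build FAI" if ev_build > ev_party else "Party")
```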
For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of past human history.
Neat. One way it might work is the FAI running much-faster-than-realtime WBEs (whole brain emulations), so that we gain a huge number of subjective years of life. This works for any inevitable impending disaster.
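Back-of-the-envelope arithmetic for the “huge number of subjective years” point; the speedup factor, the remaining wall-clock time, and the population are all invented for illustration.

```python
# Rough arithmetic behind "much-faster-than-realtime WBEs buy subjective time".
# Speedup, remaining wall-clock years, and population are assumptions, not estimates.

speedup = 1e6            # assumed emulation speed relative to realtime
wall_clock_years = 30    # assumed decades left before the aliens arrive
population = 1e10        # assumed number of emulated people

subjective_years_each = speedup * wall_clock_years
total_person_years = subjective_years_each * population

print(f"{subjective_years_each:.1e} subjective years per person")
print(f"{total_person_years:.1e} total subjective person-years before arrival")
```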
Thus we remain a small civilization but survive for a long time.
It’s not obvious that having a long time is preferable. For example, optimizing a large amount of resources in a short time might be better than optimizing a small amount of resources for a long time. Whatever’s preferable, that’s the trade that a FAI might be in a position to facilitate.
Eeeehhhhh… it’s not that surprising when you consider that billions of people really, truly believe in a form of divine-command moral realism that implies universally compelling arguments.
Hawking/Russell/Tegmark/Wilczek:
Nice.
Actually, in the alien civilization scenario we would already be screwed: there wouldn’t be much that could be done. This is not the case with AI.
Just FYI, that analogy is originally due to Russell specifically, according to an interview I saw with Norvig.
Eeeehhhhh… it’s not that surprising when you consider that billions of people really, truly believe in a form of divine-command moral realism that implies universally compelling arguments.
It is, however, worrying.