Even if AGI had only a very small comparative advantage (in skill and in ability to recursively self-improve) over humans aided by the best computer technology then available, it would eventually, probably quite fast even then, overpower humans totally and utterly. And it seems fairly likely that eventually you could build a fully artificial agent that was strictly superior to humans (or could recursively self-improve into one). This intuition is plausible given that humans were not designed to be singularity-survivors with the best possible mindware for keeping up with ever-advancing technology, and re-engineering human minds would most likely be harder than building a new AI from scratch.
So in conclusion, nothing of great importance, as far as I can tell, has changed. Any AGI that comes along still has to be Friendly, or we’re doomed. And humans are still going to be totally overpowered by that machine intelligence in a relatively short time.
Yup. Intelligence explosion is pretty much irrelevant at this point (even if in fact real). Given the moral weight of the consequences, one doesn’t need impending doom to argue for the high marginal worth of pursuing Friendly AI. Abstract arguments get stronger by discarding irrelevant detail, even correct detail.
(It’s unclear which is more difficult to argue: intelligence explosion, or the expected utility of starting work on possibly long-term Friendly AI right now. But using both abstract arguments lets you convince people even if only one of them gets accepted.)