Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time—and have been occurring for a long time already, so we have many actual real-world observations to go by. They can be addressed specifically, each passing tests 1-3, so that we can solve these problems and achieve these hopes one specialized task at a time, as well as induce general theories from these experiences (e.g. of security), without getting sucked into any of the near-infinity of Pascal scams one could dream up about the future of computing and robotics.
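To make the specialized-vs-general claim concrete, here is a minimal sketch in Python (purely illustrative, not drawn from the linked post; the functions and input are made up): a fully general membership test versus one specialized to sorted data, which buys its speed by assuming structure that the general routine is not allowed to assume.

```python
# Purely illustrative sketch of "specialized beats general":
# the specialized routine exploits an assumption (sortedness)
# that the general routine cannot make.
from bisect import bisect_left

def contains_general(items, target):
    """Works on any iterable, no assumptions: O(n) comparisons."""
    return any(x == target for x in items)

def contains_sorted(sorted_items, target):
    """Assumes sorted input: O(log n) comparisons via binary search."""
    i = bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

data = list(range(0, 1_000_000, 2))  # sorted by construction
assert contains_general(data, 123_456) == contains_sorted(data, 123_456) == True
# Roughly 20 comparisons for the specialized version here, versus up to
# 500,000 for the general one; the price is silent failure if the
# sortedness assumption is ever violated.
```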
It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.
It would be better to present, as your main reason, “the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential risk activism, will almost certainly be far inferior to specialized ones”. That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algorithms, given sufficient advantages of scale in number of problem domains, is trivial by the existence proof constituted by the human economy.
Put another way: There is some currently-unsubstitutable aspect of the economy which is contained strictly within human cognition and communication. Consider the case where the intellectual difficulties involved in understanding the essence of this unsubstitutable function were overcome, and it were implemented in silico, with an initial level of self-engineering insight already equal to that which was used to create it, and with starting capital and education sufficient to overcome transient learning-curve effects on its initial success. There would then be some fraction of the economy directed by the newly engineered process. Would this fraction of the economy inevitably be at a net competitive advantage, or disadvantage, relative to the fraction of the economy which was directed by humans?
If that fraction of the economy would have an advantage, then this would be an example of a general algorithm ultimately superior to all contemporarily-available specialized algorithms. In that case, what you claim to be the core of your argument would be defeated; the strength of your argument would instead have to come from a focus on the reasons why it were improbable that anyone had a relevant chance of ever achieving this kind of software substitute for human strategy and insight (that is, before everyone else was adequately prepared for it to prevent catastrophe), and that even to the point that supposing otherwise deserves to be tarred with a label of “scam”. And if the software-directed economy would have a disadvantage even at steady state, then this would be a peculiar fact about software and computing machinery relative to neural states and brains, and it could not be assumed without argument. Digital software and computing machinery both have properties that have made them, in most respects, much more tractable to large returns to scale from purposeful re-engineering for higher performance than neural states and brains, and this is likely to continue to be true into the future.
It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones.
I don’t understand your reasoning here. If you have a general AI, it can always choose to apply or invent a specialized algorithm when the situation calls for that, but if all you have is a collection of specialized algorithms, then you have to try to choose/invent the right algorithm yourself, and will likely do a worse (possibly much worse) job than the general AI if it is smarter than you are. So why do we not have to worry about “extreme consequences from general AI”?
Skill at making such choices is itself a specialty, and doesn’t mean you’ll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn’t make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated these distinctions will grow ever sharper (basic Adam Smith here—the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general purpose. And who will choose the choosers? No sentient entity at all—they’ll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater.
Such markets and technologies are already far beyond the ability of any single human to comprehend, and that gap between economic and technological reality and our ability to comprehend and predict it grows wider every year. In that sense, the singularity already happened, and long ago.
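As a toy illustration of the point about choosers of algorithms being themselves specialized and selected by feedback rather than by any single designer, here is a small sketch (entirely my own construction; the solvers, workload, and bandit-style scoring are invented, and a score signal is only a loose stand-in for market selection):

```python
# Toy sketch: specialized solvers competing under a feedback signal,
# with no single designer understanding or choosing the winner.
import random

def solver_assume_sorted(xs):   # specialized: cheap, but assumes sorted input
    return list(xs)

def solver_general(xs):         # general: always correct, does more work
    return sorted(xs)

solvers = [solver_assume_sorted, solver_general]
scores = {s.__name__: 0.0 for s in solvers}

def choose(epsilon=0.1):
    """Mostly route to the best performer so far; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(solvers)
    return max(solvers, key=lambda s: scores[s.__name__])

def handle(task):
    s = choose()
    result = s(task)
    scores[s.__name__] += 1.0 if result == sorted(task) else -1.0
    return result

for _ in range(1000):
    handle([random.randint(0, 9) for _ in range(5)])
print(scores)  # on this unsorted workload, the selection process favors the correct solver
```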
And who will choose the choosers? No sentient entity at all—they’ll be chosen the way they are today, by a wide variety of markets, except that there too the variety will be far greater.
Such markets and technologies are already far beyond the ability of any single human to comprehend[. . .]
Can you expand on this? The way you say it suggests that it might be your core objection to the thesis of economically explosive strong AI. Put into words, the way the emotional charge would hook into the argument here would be: “Such a strong AI would have to be at least as smart as the market, and yet it would have been designed by humans, which would mean there had to be a human at least as smart as the market: and belief in this possibility is always hubris, and is characteristically disastrous for its bearer—something you always want to be on the opposite side of an argument from”? (Where “smart” here is meant to express something metaphorically similar to a proof system’s strength: “the system successfully uses unknowably diverse strategies that a lesser system would either never think to invent or never correctly decide how much to trust”.)
I guess, for this explanation to work, it also has to be your core objection to Friendly AI as a mitigation strategy: “No human-conceived AI architecture can subsume or substitute for all the lines of innovation that the future of the economy should produce, much less control such an economy to preserve any predicate relating to human values. Any preservation we are going to get is going to have to be built incrementally from empirical experience with incremental software economic threats to those values, each of which we will necessarily be able to overcome if there had ever been any hope for humankind to begin with; and it would be hubris, and throwing away any true hope we have, to cling to a chimerical hope of anything less partial, uncertain, or temporary.”
Would you agree that humans are in general not very good at inventing new algorithms, many useful algorithms remain undiscovered, and as a result many jobs are still being done by humans instead of specialized algorithms? Isn’t it possible that this situation (i.e., many jobs still being done by humans, including the jobs of inventing new algorithms) is still largely the case by the time that a general AI smarter than human (for example, an upload of John von Neumann running at 10 times human speed) is created, which at a minimum results in many humans suddenly losing their jobs and at a maximum allows the AI or its creators to take over the world? Do you have an argument why this isn’t possible or isn’t worth worrying about (or hoping for)?
To answer from your second sentence onward: one consideration is that it is highly questionable whether scanning and uploading is even possible in any practical sense. That is how people who actually work with brain preservation on a daily basis, and who would love to be able to extract state from the preserved material, seem to see the matter: it’s “possible” philosophically, but not at all practically. This suggests that it’s low enough feasibility at present that even paying serious attention to it may be a waste of time of the “Pascal’s scam” form described in the linked post (whether the word “scam” is fair or not).
If uploads are infeasible, what about other possible ways to build AGIs? In any case, I’m responding to Nick’s argument that we do not have to worry about extreme consequences from AGIs because “specialized algorithms are generally far superior to general ones”, which seems to be a separate argument from whether AGIs are feasible.
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond any of our recognitions long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there’s already a growth trend in such jobs today).
The robot apocalypse, in other words, will arrive, and is arriving, one algorithm at a time. It’s a process we can observe unfolding, since it has been going on for a long time already, and learn from—real data rather than imagination. Targeting an imaginary future algorithm does nothing to stop it.
If, for example, you can’t make current algorithms “friendly”, it’s highly unlikely that you’re going to make the even more hyperspecialized algorithms of the future friendly either. Instead of postulating imaginary solutions to imaginary problems, it’s much more useful to work empirically, e.g. on computer security that mathematically prevents algorithms in general from violating particular desired rights. Recognize real problems and demonstrate real solutions to them.
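As a rough sketch of the kind of rights-enforcing security gestured at here (illustrative only; the names are invented, Python conventions do not actually enforce anything, and a real system would need language- or OS-level isolation): untrusted code is handed only a read-only capability, so the right it must not violate is simply not reachable from its inputs.

```python
# Illustrative only: an object-capability flavored sketch in which
# untrusted code can exercise only the rights explicitly handed to it.

class Account:
    def __init__(self, balance):
        self._balance = balance
    @property
    def balance(self):
        return self._balance
    def withdraw(self, amount):
        self._balance -= amount

class ReadOnlyView:
    """A capability granting the right to inspect a balance, not to modify it."""
    def __init__(self, account):
        self._balance = account.balance   # copy; no reference to the account retained
    @property
    def balance(self):
        return self._balance

def untrusted_audit(view):
    # Whatever this code tries, its input offers no withdraw() at all.
    return view.balance >= 0

acct = Account(100)
print(untrusted_audit(ReadOnlyView(acct)), acct.balance)  # True 100: the balance is untouched
```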
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond any of our recognitions long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there’s already a growth trend in such jobs today).
The phrasing suggests a level of certainty that’s uncalled for in a claim this detailed and given without supporting evidence. I’m not sure there is enough support for even paying attention to this hypothesis. Where does it come from?
(Obvious counterexample that doesn’t seem unlikely: AGI is invented early, so all the cultural changes you’ve listed aren’t present at that time.)
All of these kinds of futuristic speculations are stated with false certainty—especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above “see here” link—extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
Given the benefits of specialization, how do you explain the existence of general intelligence (i.e. humans)? Why weren’t all the evolutionary niches that humans currently occupy already taken by organisms with more specialized intelligence?
My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithms is hard (both for us and for evolution), so often specialized algorithms simply aren’t available. You don’t seem to have responded to this line of argument...
All of these kinds of futuristic speculations are stated with false certainty
The belief that an error is commonly made doesn’t make it OK in any particular case.
(When, for example, I say that I believe that AGI is dangerous, this isn’t false certainty, in the sense that I do believe that it’s very likely the case. If I’m wrong on this point, at least my words accurately reflect my state of belief. Holding an incorrect belief and incorrectly communicating a belief are two separate, unrelated potential errors. If you don’t believe that something is likely, but state it in language that suggests it is, you are being unnecessarily misleading.)
When some day some people (or some things) build an AGI [...] Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human
To rephrase my question, how confident are you of this, and why? It seems to me quite possible that by the time someone builds an AGI, there are still plenty of human jobs that have not been taken over by specialized algorithms due to humans not being smart enough to have invented the necessary specialized algorithms yet. Do you have a reason to think this can’t be true?
ETA: My reply is a bit redundant given Nesov’s sibling comment. I didn’t see his when I posted mine.
I am far more confident in it than I am in the AGI-is-important argument. Which of course isn’t anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.
The upload thread talks about the difficulties in making an upload of a single specific adult human, which would have the acquired memories and skills from the biological human reproduced exactly. (Admittedly, “an upload of John von Neumann”, taken literally, is exactly this.) A neuromorphic AI that skips the problem of engineering a general intelligence by copying the general structure of the human brain and running it in emulation doesn’t need to be based on any specific person, though, just a really good general understanding of the human brain, and it only needs to be built to the level of a baby with the capability to learn in place, instead of somehow having memories from a biological human transferred to it. The biggest showstopper for practical brain preservation seems to be preserving, retrieving and interpreting stored memories, so this approach seems quite a bit more viable. You could still have your von Neumann army; you’d just have to raise the first one yourself and then start making copies of him.