Would you agree that humans are in general not very good at inventing new algorithms, many useful algorithms remain undiscovered, and as a result many jobs are still being done by humans instead of specialized algorithms? Isn’t it possible that this situation (i.e., many jobs still being done by humans, including the jobs of inventing new algorithms) is still largely the case by the time that a general AI smarter than human (for example, an upload of John von Neumann running at 10 times human speed) is created, which at a minimum results in many humans suddenly losing their jobs and at a maximum allows the AI or its creators to take over the world? Do you have an argument why this isn’t possible or isn’t worth worrying about (or hoping for)?
To answer your second sentence: one consideration is that it is highly questionable whether scanning and uploading is even possible in any practical sense. People who actually work with brain preservation on a daily basis, and who would love to be able to extract state from the preserved material, seem to consider it “possible” philosophically but not at all practically. This suggests its feasibility is low enough at present that even paying serious attention to it may be a waste of time of the “Pascal’s scam” form described in the linked post (whether or not the word “scam” is fair).
If uploads are infeasible, what about other possible ways to build AGIs? In any case, I’m responding to Nick’s argument that we do not have to worry about extreme consequences from AGIs because “specialized algorithms are generally far superior to general ones”, which seems to be a separate argument from whether AGIs are feasible.
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond recognition long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there’s already a growth trend in such jobs today).
The robot apocalypse, in other words, will arrive and is arriving one algorithm at a time. It’s a process we can observe unfolding, since it has been going on for a long time already, and learn from—real data rather than imagination. Targeting an imaginary future algorithm does nothing to stop it.
If, for example, you can’t make current algorithms “friendly”, it’s highly unlikely that you’re going to make the even more hyperspecialized algorithms of the future friendly either. Instead of postulating imaginary solutions to imaginary problems, it’s much more useful to work empirically, e.g. on computer security that mathematically prevents algorithms in general from violating particular desired rights. Recognize real problems and demonstrate real solutions to them.
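To make the shape of that security approach concrete, here is a minimal illustrative sketch (the names and mechanism are my own assumptions, not anything specified above): a capability-style design in which an algorithm can touch a resource only through a handle it was explicitly granted. A real mathematical guarantee would come from a type system or a verified kernel rather than from runtime checks like these; the sketch only shows the shape of the idea.

```python
# Hypothetical sketch: rights enforced by capabilities rather than by trusting
# the algorithm's goals.  All names here are invented for illustration.
import os
import tempfile

class Capability:
    """A token granting one right (e.g. "read") on one specific resource."""
    def __init__(self, resource, right):
        self.resource = resource
        self.right = right

class Guard:
    """Mediates every access; code without a matching capability gets nothing."""
    def read(self, cap):
        if cap.right != "read":
            raise PermissionError("capability does not grant read access")
        with open(cap.resource) as f:
            return f.read()

def untrusted_algorithm(guard, cap):
    # The algorithm holds only the capability it was granted and must go
    # through the guard; operations it has no capability for are unavailable.
    return len(guard.read(cap))

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()        # throwaway stand-in for a protected resource
    os.write(fd, b"hello")
    os.close(fd)
    grant = Capability(path, "read")     # right explicitly granted by the owner
    print(untrusted_algorithm(Guard(), grant))   # -> 5
    os.remove(path)
```

The point is only that rights are enforced at the level of which operations are even expressible, independent of what any particular algorithm is trying to do.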
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond recognition long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there’s already a growth trend in such jobs today).
The phrasing suggests a level of certainty that’s unwarranted for a claim this detailed and given without supporting evidence. I’m not sure there is enough support for even paying attention to this hypothesis. Where does it come from?
(Obvious counterexample that doesn’t seem unlikely: AGI is invented early, so all the cultural changes you’ve listed aren’t present at that time.)
All of these kinds of futuristic speculations are stated with false certainty—especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above “see here” link—extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
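As a concrete illustration of that kind of comparison (a minimal sketch of my own, not taken from the linked post): a general method that assumes nothing about the data versus a specialized one that exploits extra structure, and wins decisively whenever that structure is actually present.

```python
# Specialized vs. general algorithms on the same task: membership queries.
# The general method works on any list; the specialized one requires sorted data.
import bisect
import random
import timeit

data = sorted(random.randrange(10**6) for _ in range(10**5))   # sorted integers
targets = [random.randrange(10**6) for _ in range(1000)]

def general_search():
    # General: linear membership test, valid for any list whatsoever.
    return sum(1 for t in targets if t in data)

def specialized_search():
    # Specialized: binary search, valid only because the data is sorted.
    def contains(t):
        i = bisect.bisect_left(data, t)
        return i < len(data) and data[i] == t
    return sum(1 for t in targets if contains(t))

assert general_search() == specialized_search()
print("general:     %.3f s" % timeit.timeit(general_search, number=1))
print("specialized: %.3f s" % timeit.timeit(specialized_search, number=1))
```

The specialized version is orders of magnitude faster here, but only because its precondition (sortedness) holds; where no such structure is known or no specialized algorithm has yet been invented, only the general method applies.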
Given the benefits of specialization, how do you explain the existence of general intelligence (i.e. humans)? Why weren’t all the evolutionary niches that humans currently occupy already taken by organisms with more specialized intelligence?
My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithms is hard (both for us and for evolution), so often specialized algorithms simply aren’t available. You don’t seem to have responded to this line of argument...
All of these kinds of futuristic speculations are stated with false certainty
The belief that an error is commonly made doesn’t make it OK in any particular case.
(When, for example, I say that I believe that AGI is dangerous, this isn’t false certainty, in the sense that I do believe that it’s very likely the case. If I’m wrong on this point, at least my words accurately reflect my state of belief. Having an incorrect belief and incorrectly communicating a belief are two separate, unrelated potential errors. If you don’t believe that something is likely, but state it in language that suggests it is, you are being unnecessarily misleading.)
When some day some people (or some things) build an AGI [...] Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human
To rephrase my question, how confident are you of this, and why? It seems to me quite possible that by the time someone builds an AGI, there are still plenty of human jobs that have not been taken over by specialized algorithms due to humans not being smart enough to have invented the necessary specialized algorithms yet. Do you have a reason to think this can’t be true?
ETA: My reply is a bit redundant given Nesov’s sibling comment. I didn’t see his when I posted mine.
I am far more confident in it than I am in the AGI-is-important argument. Which of course isn’t anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.
The upload thread talks about the difficulties in making an upload of a single specific adult human, which would have the acquired memories and skills from the biological human reproduced exactly. (Admittedly, “an upload of John von Neumann”, taken literally, is exactly this.) A neuromorphic AI that skips the problem of engineering a general intelligence by copying the general structure of the human brain and running it in emulation doesn’t need to be based on any specific person, though, just a really good general understanding of the human brain, and it only needs to be built to the level of a baby with the capability to learn in place, instead of somehow having memories from a biological human transferred to it. The biggest showstopper for practical brain preservation seems to be preserving, retrieving and interpreting stored memories, so this approach seems quite a bit more viable. You could still have your von Neumann army, you’d just have to raise the first one yourself and then start making copies of him.