When, someday, some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond recognition long before then. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in such jobs today).
The phrasing suggests a level of certainty that's unwarranted for a claim this detailed and given without supporting evidence. I'm not sure there is enough support to even pay attention to this hypothesis. Where does it come from?
(Obvious counterexample that doesn’t seem unlikely: AGI is invented early, so all the cultural changes you’ve listed aren’t present at that time.)
All of these kinds of futuristic speculations are stated with false certainty, especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link: extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
Given the benefits of specialization, how do you explain the existence of general intelligence (i.e., humans)? Why weren't all the evolutionary niches that humans currently occupy already taken by organisms with more specialized intelligence?
My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithms is hard (both for us and for evolution), so often specialized algorithms simply aren't available. You don't seem to have responded to this line of argument...
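To make the specialized-vs-general trade-off concrete, here's a minimal sketch (a hypothetical benchmark of my own, not from either of our links). The point is that the specialized algorithm only beats the general one when its extra precondition holds, which is exactly the sense in which specialized algorithms often "aren't available":

```python
import bisect
import random
import timeit

# 10k sorted integers; 500 lookup targets drawn from the data.
data = sorted(random.randrange(10**6) for _ in range(10_000))
targets = [random.choice(data) for _ in range(500)]

def linear_search(xs, target):
    """General algorithm: works on any list, sorted or not -- O(n)."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """Specialized algorithm: correct only if xs is sorted -- O(log n)."""
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

general = timeit.timeit(lambda: [linear_search(data, t) for t in targets], number=1)
special = timeit.timeit(lambda: [binary_search(data, t) for t in targets], number=1)
print(f"general linear search:     {general:.3f}s")
print(f"specialized binary search: {special:.3f}s")
```

The specialized search wins by orders of magnitude here, but only because the sortedness invariant holds; hand it unsorted data and it silently gives wrong answers, while the general algorithm keeps working. Inventing (or maintaining the preconditions for) the specialized version is the hard part.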
All of these kinds of futuristic speculations are stated with false certainty
The belief that an error is commonly made doesn’t make it OK in any particular case.
(When, for example, I say that I believe that AGI is dangerous, this isn't false certainty, in the sense that I do believe it's very likely the case. If I'm wrong on this point, at least my words accurately reflect my state of belief. Having an incorrect belief and incorrectly communicating a belief are two separate, unrelated potential errors. If you don't believe that something is likely, but state it in language that suggests that it is, you are being unnecessarily misleading.)