All of these kinds of futuristic speculations are stated with false certainty—especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above “see here” link—extensive economic observations have been made on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
Given the benefits of specialization, how do you explain the existence of general intelligence (i.e. humans)? Why weren’t all the evolutionary niches that humans currently occupy already taken by organisms with more specialized intelligence?
My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithms is hard (both for us and for evolution), so specialized algorithms often simply aren’t available. You don’t seem to have responded to this line of argument...
All of these kinds of futuristic speculations are stated with false certainty
The belief that an error is commonly made doesn’t make it OK in any particular case.
(When, for example, I say that I believe that AGI is dangerous, this isn’t false certainty, in the sense that I do believe it’s very likely the case. If I’m wrong on this point, at least my words accurately reflect my state of belief. Having an incorrect belief and incorrectly communicating a belief are two separate, unrelated potential errors. If you don’t believe that something is likely, but state it in language that suggests that it is, you are being unnecessarily misleading.)