It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are typically far superior to general ones.
It would be better to present your main reason as: “the kinds of general algorithms that humans are likely to develop and implement, even absent impediments caused by AI-existential-risk activism, will almost certainly be far inferior to specialized ones”. That there exist general-purpose algorithms which subsume the competitive abilities of all existing human-engineered special-purpose algorithms, given sufficient advantages of scale in the number of problem domains, follows trivially from the existence proof constituted by the human economy itself.
Put another way: there is some currently unsubstitutable aspect of the economy which is contained strictly within human cognition and communication. Suppose the intellectual difficulties involved in understanding the essence of this unsubstitutable function were overcome, and it were implemented in silico, with an initial level of self-engineering insight equal to that which was used to create it, and with starting capital and education sufficient to overcome transient learning-curve effects on its initial success. Some fraction of the economy would then be directed by the newly engineered process. Would that fraction of the economy inevitably be at a net competitive advantage, or disadvantage, relative to the fraction directed by humans?
If that fraction of the economy would have an advantage, then this would be an example of a general algorithm ultimately superior to all contemporarily available specialized algorithms. In that case, what you claim to be the core of your argument would be defeated; the strength of your argument would instead have to come from the reasons why it is improbable that anyone has a relevant chance of ever achieving this kind of software substitute for human strategy and insight (that is, before everyone else is adequately prepared for it to prevent catastrophe), improbable even to the point that supposing otherwise deserves to be tarred with the label of “scam”. And if the software-directed economy would be at a disadvantage even at steady state, then this would be a peculiar fact about software and computing machinery relative to neural states and brains, and it could not be assumed without argument. Digital software and computing machinery both have properties that have made them, in most respects, far more amenable to large returns to scale from purposeful re-engineering for higher performance than neural states and brains are, and this is likely to remain true in the future.