Alas, as seen in the above criticisms [links in a different spot in the original post], it seems far too common in the AI risk world to presume that past patterns of software and business are largely irrelevant, as AI will be a glorious new shiny unified thing without much internal structure or relation to previous things. (As predicted by far views.)
The rise of deep learning in recent years seems to be evidence in favor of [AI will be a glorious new shiny thing without much relation to previous things] (assuming “previous things” here is limited to things that affected markets at the time).
The history of vastly overestimating the ease of making huge firms in capitalism, and the similar typical newbie error of overestimating the ease of making large unstructured software systems, are seen as largely irrelevant.
While I see how conventional economic models are obviously useful here, I do not see how they can be useful for predicting the performance of “novel computations” (e.g. a computation that uses 1,000,000 GPU hours and a shiny new neural architecture), or for predicting some critical technical properties of the development of transformative systems (e.g. “is there a secret sauce that a top AI lab will suddenly find?”).