effective debate notes: I’ve read the main points of every first-level comment in this thread and the author’s clarification.
epistemic status: This argument is mostly about values. I hope we can all agree on the facts mentioned here, while also considering this alternative framework, which I believe is a “better map of reality”.
I disagree with your conclusion because I disagree with your classification of tech frontiers. In the body of this article and in most comments, people seem to accept your division of technology into 6 parts. Here’s why I think this model might not be very useful: it assumes every kind of innovation is “fundamentally” or “canonically” the same.
Specifically, it ignores each area’s differing “transferability” or “interconnectivity” with other fields. For example, innovations in manufacturing/agriculture/energy/transportation/medicine generally cannot be transferred to one another directly, while innovations in information can be transferred to other fields easily. Search Google Scholar for “machine learning” plus any of the other five fields and you’ll find plenty of literature reviews.
It also ignores how much different fields matter to us on a “meta” level. One defining characteristic of humans is the use of complex language and theory of mind. None of the other 5 fields in the original framework contributes directly to language and its use, while information technology by definition includes making a wide variety of discussions and knowledge-sharing more efficient and accessible. You might already agree with this point, judging by the first half of the article. However, the impact of this aspect of information technology can be much bigger than that of the other fields in the original classification, including:
Simply more potential for progress: if you had told scholars 50 years ago, in their narrowly subdivided fields, that one day they would have more preprints to read than they could finish even without sleeping or eating, they’d probably ask “what’s a preprint?” before even saying “no way!”. Internet-based tools keep increasing the accessibility, and thus the quantity, of research, while simultaneously increasing its speed by streamlining the whole research process, from knowledge acquisition to reproducibility to peer review. Results may take a while to propagate from the science world to the tech world, but this level of meta-scientific acceleration is unprecedented since the invention of the printing press (and I know you’re not talking about the scientific frontier here, so this is just an example rather than a point).
From nation states to human civilization: sure, technologies like social media divide us, and that’s a huge problem we should think hard about, but instant communication across the globe and very good machine translation are already transforming a large part of the human population into a group with shared values and fundamentally compatible ways of thinking (I’m roughly gesturing at Enlightenment-era thinking without coining a specific name for it; people outside the Western tradition don’t call it that, but that doesn’t mean they don’t think this way), and it seems clear that this brings technological progress.
Economically, bits > atoms. That sentence is metaphorical; what I’m trying to convey is that bits are ideas. Ideas can be copied from person to person, or from machine to machine, many times faster than physical objects can be copied, and at (mostly) zero cost. This makes even a tiny innovation in IT matter as much as a huge innovation in a traditional industry: an entire field can spend countless hours on experimentally determining protein structures, while a machine learning model (AlphaFold 2) can match lab performance at much lower cost and much better scalability (I know far less about biology than about machine learning, so please correct me if I’m wrong!).
Cost and scalability are my central point here. One of the most important innovations in my book is the public cloud industry led by Amazon Web Services. Hiring more research assistants has diseconomies of scale: coordination, management, and communication (in a PHYSICAL environment! make sure you have no OSHA violations, lawyer up, secure your lab equipment, maybe add a guard, etc.) all get harder the more people you hire. However, if you’re just adding another 100 GPUs to your infrastructure, you don’t need “middle management” for them; you need their software counterpart, which is much cheaper, and much of it is open-source. A minimal sketch of what this looks like is below.
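To make the “no middle management” point concrete, here is a minimal sketch of what “adding another 100 GPUs” can look like with the AWS SDK for Python (boto3); the AMI ID is a placeholder and the instance type is just one of AWS’s GPU offerings, so treat this as an illustration rather than a recipe:

```python
import boto3

ec2 = boto3.client("ec2")

# Requesting 100 GPU machines is a single API call; the coordination
# overhead does not grow with the count the way hiring 100 people would.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder: a GPU-ready machine image
    InstanceType="p3.2xlarge",  # one of AWS's GPU instance types
    MinCount=100,
    MaxCount=100,
)

print(f"Launched {len(response['Instances'])} instances")
```

The “software counterpart” of middle management here is orchestration tooling (schedulers, cluster managers), and much of that is indeed open-source.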
So my main point is that the information revolution should really be considered a printing-press-level innovation; comparing it to electricity or the steam engine misses a lot of important, fundamental differences of IT, and these unique characteristics are already manifesting themselves everywhere. So here’s my alternative framework for the original categories (roughly):
All important technological innovation categories (impact* from least to most):

- Helping us enhance reality
  - Manufacturing & construction: concrete, civil engineering, skyscrapers
  - Energy: non-renewable energy, fission, renewable energy, fusion
  - Transportation: highways, containerization, international shipping, self-driving cars
- Helping us enhance ourselves (but physically)
  - Agriculture (not as important technology-wise, since we can already meet all our needs; it’s just a matter of redistribution): genetically-modified crops, genetically-engineered crops
  - Transportation (partly): subways, intercontinental flights, self-driving cars
  - Medicine/bio-*: polio vaccines, CRISPR, COVID-19 vaccines, immortality
- Helping us enhance ourselves (but conceptually)
  - Information: alphabets, printing press, internet
Prescriptively (more of “predicting the future”), my belief is that although in past decades our focus has shifted from heavy-industrial innovations to more “meta” and indirect ones (including non-technical ones like communication theory), the latter have more potential than the former, for the reasons above.
*: Since no innovation comes alone, our value function for the importance and impact of an innovation should include not only its immediate impacts, but also potential ones that may take longer to fully manifest but that we can already see coming.
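As a toy formalization of this footnote (my own sketch, not anything from the original post), such a value function might look like

$$V(i) = I_0(i) + \sum_{t=1}^{T} \gamma^{t}\, \mathbb{E}\!\left[I_t(i)\right],$$

where $I_0(i)$ is the immediate impact of innovation $i$, $I_t(i)$ is the impact realized $t$ years out (including the follow-on innovations it enables), and the discount factor $\gamma$ is kept close to 1 so that foreseeable-but-slow effects still count for nearly as much as immediate ones.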
Edit 1: add epistemic status
Interesting, but I think you’re underestimating the impact of other general-purpose technologies, such as in energy or manufacturing. New energy sources can be applied broadly across many areas, for instance.