Honestly I think globalism is ending it all on its own. Free trade has had the side effect of creating multiple economic powers in competition: China, the USA, the EU, Taiwan, Japan, Korea. And we don’t need every technology to move forward to end stagnation. We need just one: a form of AI that can accelerate its own R&D.
And then we would very rapidly find a world where at least one power has self-replicating robotics swarms, each producing design-optimized products, including more of themselves and, of course, weapons. Any other power, if it doesn’t want to eventually be defeated, will have to invest in the same technology. (At a certain scale, self-replicating robotics will trump nuclear weapons, because you could mass-manufacture sufficient defenses and bunkers to fight a nuclear war and win.)
Typically those who subscribe to the premise of The Great Stagnation have a dim view of progress in the world of bits. The argument goes that digital progress is overhyped in the present day because it’s one of the few industries, if not the only one, still exhibiting fast progress.
Zooming out to the wider economy, GDP growth rates have slowed enormously in the developed world since the early 70s and total factor productivity growth has declined to rates possibly not seen since the dawn of the industrial revolution. If these measurements are indeed still a reasonable metric of technological progress in the digital era then advancements in computing have not managed to match midcentury levels of progress.
If you accept this analysis but still bank on AI as the solution moving forward, that means accepting the notion that computing progress hasn’t lived up to expectations in the past but will lead to tremendous advancements just on the horizon. Given how many times AI has been hyped only to fall into a stagnant winter soon after, I’m not so sure about this.
Also, I don’t have any reason to accept “bankunderground” as a credible and accurate measurement of progress. Especially as I can just “look out the window” and see that, somehow, all of China’s absurd growth in per-citizen productivity just... doesn’t show up in the data. Huh.
The data pertains to Britain, not the developing world, and it comes from the Bank of England. Obviously, tremendous economic progress has been made globally since the 70s. But that economic progress is mostly “catching up”, i.e. developing countries adopting existing technologies. Far less development has happened on the frontier.
Oh. Well, focusing on just Britain is meaningless. Why not focus on Cuba? Or one household down your block? The point is that a “small” country can easily do mediocre for any number of reasons while absurd runaway progress happens elsewhere.
The same trend can be found in every country which was developed by the 70s. Britain is simply a particularly good example because of the amount of record keeping they performed in the 18th and 19th centuries compared to other countries.
However, just looking at data from the 20th century onward, accessible for any developed country, growth at the technological frontier has slowed tremendously since the 70s. Jason Crawford already aggregated a lot of data pertaining to the slowdown here. Long story short, not much technological progress has been made outside of computing in decades.
I thought about this problem a bit more. Let’s drop speculation about what may or may not be possible in the future and just talk about specific professions over the last 50 years.
Primary schoolteacher. That person has to give a lesson in front of a limited number of students, since children need personalized attention. So you need roughly 1 teacher per 20 students, give or take 10, and that’s their full-time job. Computers make it easier for the teacher to fill out their paperwork, but there is more of it, so it’s about a wash. Maybe they do a slightly better job than in the 1970s, but the productivity is the same.
Janitor. You have a mop, a broom, brushes, a cart full of supplies. For large open areas there are electric floor-washing machines of various types. Pretty sure all of this was available by the 70s. The only improvement I know of is that companies no longer keep their own staff to do it; they outsource, and the outsourcing firms may get slightly more labor per work week out of their workers.
Retail clerk. Once the barcode, and a register able to scan the code, look up the price, and add it to a total, were readily available (the late 70s at the latest), that was about the limit. A clerk has to scan each item, and credit cards take about the same time as cash.
Restaurant waitstaff, cook. Totally unchanged.
Accountant. Computers have automated huge swaths of the work, but companies are far more complex than they were.
Anyway, we can go down the list and find a long list of jobs that have changed minimally, if at all. And therefore productivity per worker cannot be expected to improve if the amount of human labor needed hasn’t shrunk. Some jobs, like nuclear plant worker, have even become less productive over time, as more workers are needed per megawatt of power due to more and more long-tail risks being discovered.
And then you can ask how you might get a meaningful increase in productivity from each of these roles. And, well, it’s all coming up AI. I know of no other way. You must build an automated system able to perform most of the task. Some (like schoolteacher) are nearly impossible, others, like janitor, are surprisingly hard, and some are being automated as we speak (Amazon Go for retail clerks).
It’s not just about discovering more tail risks but about those companies having a different culture around risk. One example someone from the industry gave me is that in yearly seminars they tell their workers how to avoid cutting themselves with paper.
Right. So this is one of those anti-progress patterns I see around. What happens internally is that over the years the Very Serious People create some Very Serious Internal Processes, like There Shall Be Risk Management Training (on the prevention of papercuts). And anyone suggesting that maybe the company could run more efficiently by skipping this training has to argue either that (1) the company’s elders were wrong to institute such training, or (2) they (personally) are pro-risk.
It’s hard to be “pro-risk” in the same way that, if you speak against, say, diversity quotas, you are by definition for discrimination.
So over time the company adds more and more cruft, while not really deleting much, making it less and less economically efficient. This is why big rich companies constantly have to buy startups for their technology: they are unable to get their own engineers to develop the same tech (because those engineers are too busy, or beaten down by mandatory training). And it’s why eventually most big rich companies fail, and their assets and technology get bought up by younger companies who, when the merger goes well, basically throw out the legacy trash. (Except when the opposite happens, as in the Boeing-McDonnell Douglas merger.)
Ordering at McDonald’s is very different than it was in the past. You can now both order and pay digitally.

For cooks, Googling finds https://magazine.rca.asn.au/kitchen-innovations/ . According to it there are various innovations in commercial kitchens, like induction cooking.
Sure. The point is that this lets you go from 10 workers in a restaurant to 9.5, or other small increments. It’s not like the tractor, fertilizer, and other innovations, which reduced farmers from 50% of the population (1900) to 2% (today).
To get this with restaurants, the only way is intelligent robotics, at least the only way I can see. Other than just “everyone stops eating restaurant food and starts eating homogeneous soylent packets,” which we could automate fully with today’s tech. Then a restaurant with 10 workers gets replaced with 0.4 workers, who work offsite and respond to elevated customer service calls and elevated maintenance issues. (“Elevated” means the autonomy already tried and failed to solve the issue. While automated maintenance isn’t too common, Amazon is experimenting with automated customer service, where in my experience a bot will basically just give a refund if you have any complaint at all about an order.)
Ok, so first: you aren’t really talking about progress, you are linking data on productivity per worker. Which has gone up over the decades, but at a slower pace. Why is that?
Well, the simplest theory: suppose there is a class of tasks [A] that is easy to automate, a second class [B] that is hard but feasible to automate with simple computers, and a set [C] of modestly complex tasks with hundreds of thousands of edge cases.
Well, today, almost none of the improvements in AI you have read about are being used where it counts: in factories, warehouses, and mines, and to control trucks. This is for several reasons, the biggest being that for a “small” niche market the engineering investment currently isn’t worth it; the money is going into autonomous cars, and those aren’t finished either.
So set [A] got automated in the 1970s. Set [B] gets automated slowly, but only where demand for a product made this way is extremely high and where the cost of the automation is less than paying thousands of Chinese factory workers instead (they have gotten more expensive). Set [C] is all done by humans, but over time small tricks have reduced how many humans are required. So that would explain the observation.
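That three-class story can be made concrete with a toy model: if [A] is automated at once, [B] gets automated a few percent a year, and [C] is only chipped away at, the aggregate labor requirement falls fast early on and then ever more slowly. Every share and rate below is invented purely for illustration:

```python
# Toy model of the three task classes; all shares and rates are invented.
# [A] = 30% of labor, automated immediately; [B] = 40%, automated ~5%/yr
# where demand justifies it; [C] = 30%, trimmed ~0.5%/yr by small tricks.
def labor_needed(year):
    """Fraction of the original labor still required `year` years in."""
    a_left = 0.0                    # class A: gone by the 1970s
    b_left = 0.4 * 0.95 ** year     # class B: slow, demand-driven automation
    c_left = 0.3 * 0.995 ** year    # class C: humans plus small tricks
    return a_left + b_left + c_left

for y in (0, 10, 50):
    print(y, round(labor_needed(y), 3))  # drops quickly, then ever more slowly
```

The numbers don’t matter; the shape does: measured productivity growth decelerates as the easy classes get exhausted.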
TFP doesn’t mean productivity per worker. It’s designed to identify economic progress which can’t be attributed to increases in labor or capital intensification, i.e. technological progress applied to make an economy more efficient. Advances in automation should be captured under such a measurement.
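For concreteness, the standard way to extract TFP is the Solow residual: posit a production function Y = A·K^α·L^(1−α) and attribute to A whatever output growth isn’t explained by capital and labor growth. A minimal sketch; the 0.3 capital share and the growth rates are illustrative, not taken from the Bank of England data:

```python
# Solow residual: TFP growth = output growth not explained by capital/labor.
# alpha is the capital share of income; all numbers are illustrative.
def tfp_growth(y_growth, k_growth, l_growth, alpha=0.3):
    """Growth accounting: dA/A = dY/Y - alpha*dK/K - (1-alpha)*dL/L."""
    return y_growth - alpha * k_growth - (1 - alpha) * l_growth

# Output grows 3%/yr, capital stock 4%/yr, labor force 1%/yr:
g = tfp_growth(0.03, 0.04, 0.01)
print(f"implied TFP growth: {g:.1%}")  # 1.1% per year
```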
You are saying “improvements in output not accomplished by spending more real dollars on equipment or having more people working”.
Hypothetically, if we had sentient robots tomorrow, they would initially be priced extremely high, such that the total cost of ownership (TCO) over time of such a system is only slightly less than a worker. Are you positive your metric would correctly account for such a change? This would be a revolutionary improvement that would eventually change everything, but in year 1 the new sentient robots are just doing existing jobs with less labor and very high capital costs.
No, it wouldn’t. TFP is, in a sense, a lagging indicator: it captures the economic benefits of technological progress but does not evaluate emerging technologies which have yet to make an economic imprint. That said, no AI I’m aware of that presently exists is remotely comparable to human-level AI. Level 5 self-driving doesn’t even exist yet, and once the computational power used for AI catches up with Moore’s Law, the field seems due for a slowdown.
I think the most succinct argument I can make for why I bank on “AI as the solution moving forward” is this: I just took a break from my day job. Without revealing any proprietary information: to do just basic tasks, analyzing video frames in real time, at low resolution, for objects of interest (e.g. ResNet-50 and similar), requires teraflops. We casually talk about how many “TOPS” a given workload needs, i.e. trillions of operations per second. And it takes hundreds of them to do anything semi-good enough to be useful.
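Rough arithmetic behind that claim. The ~4 GFLOPs-per-inference figure for ResNet-50 at 224×224 input is a commonly cited ballpark; the stream count and overhead multiplier below are made-up stand-ins for a realistic pipeline:

```python
# Back-of-envelope compute budget for real-time vision; figures approximate.
gflops_per_inference = 4   # ResNet-50 at 224x224, commonly cited ballpark
fps = 30                   # frames per second per camera stream
streams = 8                # number of camera feeds (made up)
overhead = 3               # detection heads, tracking, pre/post-processing (made up)

tflops_needed = gflops_per_inference * fps * streams * overhead / 1000
print(f"~{tflops_needed:.1f} TFLOPs sustained")  # ~2.9 TFLOPs
```

Scale the resolution up or add more streams and you land in the hundreds-of-TOPS regime quickly.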
That simply didn’t exist during past AI hype cycles. It was flat-out impossible. There was no world in which the researchers looking into AI in the 1960s, or in 2010, could have gotten the results we get today (which I am sure you will point out are still mediocre; autonomous cars now drive themselves, but only when all the conditions are just right).
So I am just going to treat any past hype as the “rantings of an uncredible madman”, regardless of which Ivy League lab it came out of, and go with recent results as my barometer for when AI is going to really take off. Which are getting rather good.
Anyway, the other piece of this is that all the trends you are linking are observing a process where:
a. technology keeps getting more complicated
b. the human beings trying to improve it are not getting smarter very fast (if at all), and they live finite lives.
So it’s perfectly reasonable for real progress to slow over time, except in fields where the technology can help you develop itself, which in some domains of computing has clearly happened. (Succinct example: frameworks have made highly sophisticated apps and websites, which 20 years ago would have required an entire studio months of effort, doable by one person in a week.)
No doubt very significant advances in AI have occurred within the past decade or so. AlphaFold practically “solving” the problem of protein folding, for example, is a hopeful glimmer of technological progress and the promise of artificial intelligence.
However, it remains an open question how far AI will advance before it runs out of track, because it does appear to be approaching a wall. OpenAI observes that the computational power being supplied to create ever more advanced AI models is growing far faster than Moore’s Law: it doubles every 3.4 months. This can’t be sustained for much longer.
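The arithmetic makes the unsustainability obvious. (The 3.4-month doubling time is OpenAI’s estimate for 2012–2018; the extrapolation here is purely illustrative.)

```python
# Compounding check on a 3.4-month compute doubling time.
def growth_factor(years, doubling_months=3.4):
    """How much compute grows over `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

print(f"after 1 year:   ~{growth_factor(1):.0f}x")   # ~12x
print(f"after 10 years: ~{growth_factor(10):.1e}x")  # ~4e10 x
```

No budget, fab capacity, or power grid sustains a tens-of-billions-fold increase per decade, so the trend must break well before then.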
Meanwhile, many of the advancements in the quality of the algorithms themselves seem to be ephemeral. Numerous studies have found that many of the models used in AI today aren’t objectively better than those which already existed years ago.
Given that the quality of models is progressing relatively slowly and the brute-force method of adding more computational power isn’t sustainable, another AI winter is very much in the cards.
To drag civilization out of technological stagnation, AI doesn’t need to reach human level, but it does need to be able to do much more than it can today. Enabling level 5 autonomous vehicles would probably be a feat on the same scale as the triumphs of the 20th century, but so far AI has failed to deliver complete self-driving, and it isn’t guaranteed to manage it before hitting the winter.
Solving protein folding and related problems might be enough to create a new era of progress. Look at the issues we have with vaccines. If we could run computer simulations that tell us what kind of antibodies the body creates when given different antigens we would gain a lot in vaccine design. That means we could make a lot of progress on a universal flu vaccine and an AIDS vaccine.
Designing proteins to catalyse various chemical processes would also be a huge win.
If AlphaFold 2 is as accurate as its creators have claimed it undoubtedly represents an enormous technical leap. However, it remains to be seen how regulatory constraints and IP laws will erode its value. mRNA vaccines also represent a huge advancement, but were it not for COVID-19 lowering normal regulatory barriers (and providing a deluge of capital), they would still be a decade away, despite most of the fundamental technology already being ready.
With or without advanced protein folding simulations, so long as we remain in an environment where it costs billions to get a novel medical treatment to mass market, there’s little doubt the full potential of this breakthrough will not be realized anytime soon. The question is how much progress remains possible within these constraints. I still expect it will aid in numerous future medical breakthroughs, but I dunno about it unilaterally ushering in a new era of progress.
I don’t think the regulatory barriers against mRNA vaccines were completely unreasonable. Look, for example, at a 2019 paper that describes the problems with PEGylating anything (including mRNA):
The administration of PEGylated drugs can lead to the production of anti-PEG antibodies (anti-PEG immunoglobulin M (IgM)) and immune response (Figure 1) [30]. Due to these phenomena, the PEG-conjugation of drugs/NPs often only provides a biological advantage during the first dose of a treatment course. By the second dose, the PEGylated agents have been recognized by the mononuclear phagocyte system in the spleen and liver and are rapidly cleared from circulation.
We now have a way to deliver mRNA to a person a few times, but outside of pandemic conditions it’s unclear why you would want to spend those few effective mRNA doses on a vaccine that likely could be made just as well via established methods, at the cost of maybe not being able to give that person an mRNA cancer treatment later because of too much PEG immunogenicity.
The side effects of the second dose of the mRNA vaccines we see currently are higher than the side effects we see from our well-tested vaccine formulations.
I think globalization is actually detrimental to progress. In a globalized world, technological innovations spread around so quickly that whoever fronts the initial investment in capital and effort will be the sucker left standing in the rain. China’s entire success story is based on this.
You’re looking at the fate of individuals (both people and companies). The overall system seems to be flourishing. China has seen more growth in absolute terms than any nation in history, and very high growth in relative terms as well. You might notice how Amazon and Alibaba are now flooded with China giving back in the form of innovations. Yes, these products are frequently of questionable quality, but even that is a form of experimentation (how cheap can we make it while it still gets sales?).
Not necessarily. Globalization has had many negative second-order effects. For example: as much as air connectivity has helped us travel across the world, it has also allowed infections to travel longer distances more quickly than under a localism-based model. If we are epistemically humble, it is not difficult to see that many COVID-like events might have happened in the past in various isolated parts of the world, which we never learned of because they never ravaged the entire world.
Globalization has benefits, not saying that it is not useful, but describing progress as a function of globalization is what I take issue with. Progress is a multiscale phenomenon. You need a strong localism-based core for innovation and you also need decentralization to accentuate the process, and then you can use globalization to scale. And then there is also the part where you need a lot of wisdom to know what should be scaled and what shouldn’t be.
I stated that real global progress had been made. China has not “taken” business away from the USA in some mercantile “zero-sum game”. They have taken a lot, the USA has gained some, and the global economy is bigger than ever.
Honestly I think globalism is ending it all on its own. Free trade has had the side effect of creating multiple economic powers in competition. China, USA, EU, taiwan, japan, korea—multiple powers. And we don’t need every technology to move forward to end stagnation. We need just one, a form of AI that can accelerate R&D in itself.
And then we would very rapidly find a world where at least one power has self replicating robotics swarms, each producing design optimized products including more of themselves and of course weapons. And any other power, if they don’t want to eventually be defeated will have to invest in the same technology. (At a certain level of scale self replicating robotics will trump nuclear weapons because you could mass manufacture sufficient defenses and bunkers to fight a nuclear war and win)
Typically those who subscribe to the premise of The Great Stagnation have a dim perception of progress in the world of bits. The argument goes that digital progress is overhyped in the present day because it’s one of few, if not the only industry still exhibiting fast progress.
Zooming out to the wider economy, GDP growth rates have slowed enormously in the developed world since the early 70s and total factor productivity growth has declined to rates possibly not seen since the dawn of the industrial revolution. If these measurements are indeed still a reasonable metric of technological progress in the digital era then advancements in computing have not managed to match midcentury levels of progress.
If you accept this analysis but still bank on AI as the solution moving forward, that means accepting the notion that computing progress hasn’t lived up to expectations in the past but will lead to tremendous advancements just on the horizon. Given how many times AI has been hyped only to fall into a stagnant winter soon after, I’m not so sure about this.
Also I don’t have any reason to accept “bankunderground” as a credible and accurate measurement of progress. Especially as I can just “look out the window” and see that somehow this means all of China’s absurd growth in per citizen productivity just...doesn’t show up in the data. Huh.
The data pertains to Britain, not the developing world and the data comes from The Bank of England. Obviously, tremendous economic progress has been made globally since the 70s. But that economic progress is mostly “catching-up” aka developing countries adopting existing technologies. Far less development has happened on the frontier.
Oh. Well focusing on just britain is meaningless. Why not focus on cuba. Or one household down your block. Point is a “small” country can easily do mediocre for any number of reasons while absurd runaway progress can happen elsewhere.
The same trend can be found in every country which was developed by the 70s. Britain is simply a particularly good example because of the amount of record keeping they performed in the 18th and 19th centuries compared to other countries.
However, just looking at data from the 20th century onward, accessible for any developed country, growth at the technological frontier has slowed tremendously since the 70s. Jason Crawford already aggregated a lot of data pertaining to the slowdown here. Long story short, not much technological progress has been made outside of computing in decades.
I thought about this problem a bit more, and let’s drop speculation about what may or may not be possible in the future. And just talk about specific professions over the last 50 years.
Primary schoolteacher. That person has to give a lesson, in front of a limited number of students as children need personalized attention. So you need 1 teacher per about ~20 students, give or take 10, and that’s their full time job. Computers make it where the teacher can fill out their paperwork more easily, but there is more of it, so it’s about the same. Maybe they do slightly better a job than in the 1970s but the productivity is the same.
Janitor. You have a mop, a broom, brushes, a cart full of supplies. For large open areas there are electric floor washing machines of various types. Pretty sure all of this was available by 70s. Only improvement I know of is that companies don’t have their own staff to do it, they outsource, and the outsourcing firms may get slightly more labor per work week out of their workers.
Retail clerk. Once the bar code and register able to scan the code, look up the price, and add it to a total was readily available, late 70s at the latest, that’s about the limit. A clerk has to scan each item, and credit cards take about the same time as cash.
restaurant waitstaff, cook—totally unchanged
accountant—computers have automated huge swaths of it, but companies are far more complex than they were.
Anyways we can go down the list and find a long list of jobs that have changed minimally if at all. And therefore the productivity per worker cannot be expected to improve if the amount of human labor needed hasn’t shrunk. Some tasks, like nuclear plant worker, over time they have become less productive as more of them are needed per megawatt of power, due to more and more long tail risks being discovered.
And then you can talk about how you might get a meaningful increase in productivity from each of these roles. And, well, it’s all coming up AI. I know of no other way. You must build an automated system able to perform most of the task. Some (like schoolteacher) are nearly impossible, others like janitor are surprisingly hard, and some are being automated as we speak. (Amazon Go for retail clerks)
It’s not just about discovering more tail risks but about having a different culture on risk in those companies. One example someone from the industry gave me is that they tell their workers in yearly seminars about how to avoid cutting themselves with paper.
Right. So this is one of those anti-progress patterns I see around. What happens internally to the company is that over the Very Serious People create some Very Serious Internal Processes like There Shall be Risk Management Training (on the prevention of papercuts). And anyone suggesting that maybe they could run the company more efficiently by skipping this training has to argue either (1) elders in the company were wrong to institute such training or (2) they (personally) are pro risk.
Hard to be “pro risk” in the same way if you spoke against, say, diversity quotas by definition you are for discrimination.
So over time the company adds more and more cruft—while not really deleting much—making it less and less economically efficient. This is why big rich companies have to constantly be buying startups for the technology, because they are unable to get their own engineers to develop the same tech (because those engineers are too busy/beat down by mandatory training). And why eventually most big rich companies fail, and their assets and technology get bought up by younger companies who, when the merger goes well, basically throw out the legacy trash. (except when the opposite happens, like in the boeing mcdonnell douglas merger)
Ordering at MacDonalds is very different then it was in the past. You can now both order and pay digitally.
For cooks Googling finds https://magazine.rca.asn.au/kitchen-innovations/ . According to it there are various innovations in commencial kitchens like induction cooking.
Sure. Point is that this lets you go from 10 workers in a restaurant to 9.5 or other small increments. It’s not like the innovation of the tractor and fertilizer and other innovations, which have reduced farmers from 50% of the population (1900) to 2% (today).
To get this with restaurants the only way is intelligent robotics, at least the only way I can see. Other than just “everyone stops eating restaurant food and starts eating homogenous soylent packets.” We could automate that fully with today’s tech. Where today a restaurant with 10 workers gets replaced with 0.4 workers, who work offsite, and respond to elevated customer servers calls and elevated maintenance issues. (‘elevated’ means the autonomy tried and failed to solve the issue already. While automated maintenance isn’t too common, Amazon is experimenting with automated customer service, where in my experience a bot will basically just give a refund if you have any complaint at all about an order)
Ok, so first you aren’t talking about progress really, you are linking data on productivity per worker. Which has gone up over the decades but at a slower pace. Why is that?
Well, the simplest theory is that suppose there is a class of tasks that are easy to automate, a second class that is hard but feasible to automate with simple computers, and a set of modestly complex tasks with hundreds of thousands of edge cases.
Well, today, almost none of the improvements in AI you have read about are being used where it counts, in factories and warehouses and mines and to control trucks. This is for several reasons, the biggest one being that for a “small” niche market it isn’t currently worth the engineering investment, the money is going into autonomous cars, and those aren’t finished, either.
So set [A] got automated in the 1970s. Set [B] gets automated slowly but only where the demand is extremely high for a product using this method, and where the cost of the automation is less than paying thousands of chinese factory workers instead. (they have gotten more expensive). Set [C] is all done by humans, but over time small tricks have reduced how many humans are required.
So that would explain the observation.
TFP doesn’t mean productivity per worker. It’s designed to identify economic progress which can’t be attributed to increases in labor or capital intensification aka technological progress applied to make an economy more efficient. Advances in automation should be captured under such a measurement.
You are saying “improvements in output not accomplished by spending more real dollars in equipment or having more people working”.
Hypothetically if we had sentient robots tommorow they would initially be priced extremely high, where the TCO over time of such a system is only slightly less than a worker. Are you positive your metric would correctly account for such a change? This would be a revolutionary improvement that would eventually change everything but in year 1 the new sentient robots are just doing existing jobs with less labor and very high capital costs
No it wouldn’t. TFP is in a sense, a lagging indicator. It captures economic benefits of technological progress but does not evaluate emerging technologies which have yet to make an economic imprint. That said, no AI I’m aware of that presently exists is remotely comparable to a human level AI. Level 5 self driving doesn’t even exist yet and once the computational power used to power AI catches up with Moore’s Law, the field seems due for a slowdown.
I think the most succinct argument I can make as to why I bank on “AI as the solution moving forward” is I just took a break from my day job. And without revealing any proprietary information, to do just basic tasks and analyze video frames in real time, at low resolution, for objects of interest (aka resnet-50, etc) requires teraflops. We trivially talk about how many “TOPs” a given workload is, aka trillion operations per second. And it takes hundreds to do anything semi-good enough to be useful.
That simply didn’t exist during past hype days of AI. It was flat out impossible. There was no future world where the researchers looking into AI in the 1960s could have gotten the results we get today. (which I am sure you will point out are still mediocre, autonomous cars now drive themselves but only when all the conditions are just right). Or in 2010.
So I am just going to take any past hype as the ‘rantings of a uncredible madman’, regardless of which Ivy League lab they were working out of, and go with recent results as my barometer for when AI is going to really takeoff. Which are getting rather good.
Anyways the other piece of this is all your trends you are linking are observing a process where:
a. technology keeps getting more complicated
b. the human beings trying to improve it are not getting smarter very fast (if at all), and they live finite lives.
So it’s perfectly reasonable for real progress to slow over time, except in fields where the technology can help you develop itself, which in some domains of computers has clearly happened. (succinct example: frameworks have made highly sophisticated apps and websites, that would have required an entire studio months of effort 20 years ago to create, doable by one person in a week).
No doubt very significant advances in AI have occurred within the past decade or so. AlphaFold practically “solving” the problem of protein folding for example, is a hopeful glitter of technological progress and the promise of artificial intelligence.
However, it remains an open question how far AI will advance before it runs out of track. Because does appear to be approaching a wall. OpenAI observes that the rate at which added computational power is being supplied to create ever more advanced AI models is far outstripping Moore’s Law. It doubles every 3.4 months. This can’t be sustained for much longer.
Meanwhile, many of the advancements in the quality of the actual algorithms utilized seem to be ephemeral. Numerous studies have discovered that many of the actual models used by AI today aren’t objectively better than those which already existed years ago.
Given that the quality of models is improving relatively slowly and the brute-force method of adding more computational power isn’t sustainable, another AI winter is well within the cards.
To drag civilization out of technological stagnation, AI doesn’t need to reach human level, but it does need to be able to do much more than it can today. Enabling level 5 autonomous vehicles would probably be a feat on the same scale as the triumphs of the 20th century, but so far AI has failed to deliver complete self-driving, and it isn’t guaranteed to manage it before hitting the winter.
Solving protein folding and related problems might be enough to create a new era of progress. Look at the issues we have with vaccines. If we could run computer simulations that tell us what kind of antibodies the body creates when given different antigens we would gain a lot in vaccine design. That means we could make a lot of progress on a universal flu vaccine and an AIDS vaccine.
Designing proteins to catalyse various chemical processes would also be a huge win.
If AlphaFold 2 is as accurate as its creators have claimed, it undoubtedly represents an enormous technical leap. However, it remains to be seen how much regulatory constraints and IP laws will erode its value. mRNA vaccines also represent a huge advancement, but were it not for COVID-19 lowering normal regulatory barriers (and providing a deluge of capital), they would still be a decade away, despite most of the fundamental technology already being ready.
With or without advanced protein folding simulations, so long as we remain in an environment where it costs billions to get a novel medical treatment to mass market, there’s little doubt the full potential of this breakthrough will not be realized anytime soon. The question is how much progress will remain possible within these constraints. I still expect it will aid in numerous future medical breakthroughs, but I dunno about it unilaterally ushering in a new era of progress.
I don’t think the regulatory barriers against mRNA vaccines were completely unreasonable. Consider, for example, a 2019 paper that describes the problems with PEGylated anything (including mRNA):
We now have a way to deliver mRNA to a person a few times, but outside of pandemic conditions it’s unclear why you would spend one of those few effective doses on a vaccine that likely could be made just as well via established methods, at the cost of maybe not being able to give that person an mRNA cancer treatment later because of too much PEG immunogenicity.
The side effects of the second dose of the mRNA vaccines we see currently are higher than the side effects we see from our well-tested vaccine formulations.
I think globalization is actually detrimental to progress. In a globalized world, technological innovations spread around so quickly that whoever fronts the initial investment in capital and effort will be the sucker left standing in the rain. China’s entire success story is based on this.
You’re looking at the fate of individuals (both people and companies). The overall system seems to be flourishing. China has seen more growth in absolute terms, and among the highest rates in relative terms, of any nation in history. You might notice how Amazon and Alibaba are now flooded with innovations flowing back from China. Yes, these products are frequently of questionable quality, but even that is a form of experimentation (how cheap can we make it while it still gets sales...).
Not necessarily. Globalization has had many negative second-order effects. For example: as much as air connectivity has helped us travel across the world, it has also let infections spread farther and faster than they would under a localism-based model. If we are epistemically humble enough, it is not difficult to see how many COVID-like events might have happened in the past in various isolated parts of the world, events we do not know of because they never ravaged the entire world.
Globalization has benefits, not saying that it is not useful, but describing progress as a function of globalization is what I take issue with. Progress is a multiscale phenomenon. You need a strong localism-based core for innovation and you also need decentralization to accentuate the process, and then you can use globalization to scale. And then there is also the part where you need a lot of wisdom to know what should be scaled and what shouldn’t be.
I can’t see any structured reasoning steps in your argument.
I stated that real global progress had been made. China has not “taken” business away from the USA in a sort of mercantile “zero sum game”. They have taken a lot, the USA has gained some, and the global economy is bigger than ever.
At this point good faith has broken in this argument, we should stop.