Human spaceflight: you're correct.
Superconductors: there is a large amount of scientific and research equipment that relies on superconductors, the simplest example being NMR magnets. Chemists would not be as productive without them, though you can argue that in a universe with no superconductivity but all our other exploitable natural phenomena, they would have turned to alternative technologies we never fully developed. So semi-correct.
Genetic engineering: were you aware that most of the US corn crop is GMO, along with many other deliberately engineered organisms, and that the mRNA vaccines were all built on genetic engineering?
I suspect you meant to make narrower claims:
(1) superconductivity for power transmission and maglev transport
(2) human genetic engineering
I would agree completely with your narrower claims.
And would then ask you to examine the "business case" in the era these things were first discovered. Explain how:
1. Human spaceflight
2. Superconducting power transmission / maglev transport
3. Human genetic engineering
would ever, even at their heyday after discovery, provide ROI. Think of ROI in terms of real resources and labor instead of money if that helps. Assume the government is willing to loan limitless money at low interest if that helps; the investment just has to eventually produce ROI.
Finally, look at the business case for AI.
These are not the same class of technology. Human spaceflight has zero ROI. Superconductors for those two purposes increase efficiency, but often only on the order of 10-20%, so unless the value of the energy saved exceeds the cost of the equipment, there is no ROI. And human genetic engineering takes too long to give any ROI: even with low interest rates, you basically have to wait for the edited humans to grow into productive adults, and you pay a human cost for every failure, which carries enormous reputational costs for any institution involved.
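To make the superconductor arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (throughput, loss fraction, electricity price, equipment cost, lifetime) is an invented placeholder rather than real grid data; the point is only the shape of the comparison: value of energy saved versus cost of the superconducting equipment. With these particular placeholders the savings fall far short, which is the scenario described above.

```python
# Toy ROI check for superconducting power transmission.
# Every number below is an illustrative placeholder, not real grid data.

energy_delivered_mwh_per_year = 1_000_000    # assumed line throughput, MWh/year
conventional_loss_fraction    = 0.06         # assumed resistive losses on a conventional line
loss_reduction                = 0.8          # assumed share of those losses a superconducting line removes
electricity_price_per_mwh     = 50.0         # assumed wholesale price, $/MWh
equipment_cost                = 500_000_000  # assumed cost of superconducting cable + cryogenics, $
equipment_lifetime_years      = 30

energy_saved_per_year = energy_delivered_mwh_per_year * conventional_loss_fraction * loss_reduction
value_saved_per_year  = energy_saved_per_year * electricity_price_per_mwh
lifetime_value_saved  = value_saved_per_year * equipment_lifetime_years

print(f"Value of energy saved per year: ${value_saved_per_year:,.0f}")
print(f"Lifetime value of energy saved: ${lifetime_value_saved:,.0f}")
print(f"Equipment cost:                 ${equipment_cost:,.0f}")
print("Positive ROI" if lifetime_value_saved > equipment_cost else "No ROI")
```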
AI has explosive, self-amplifying ROI: a real business case for effectively "unfathomably large" amounts of money and material resources. This holds even under very conservative assumptions about what AI can be trusted to do (i.e., even if you only trusted narrow, sandboxed AIs to perform well-defined tasks and to shut down whenever conditions drift more than slightly outside the training environment).
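To illustrate what "self-amplifying" means in contrast to a one-time efficiency gain, here is a minimal sketch with invented numbers: a fixed-efficiency technology returns the same surplus every year, while a technology whose output can be reinvested into more capacity compounds.

```python
# Contrast a fixed annual saving with a return that is reinvested into more capacity.
# All numbers are invented for illustration.

years = 20
fixed_saving_per_year = 10.0   # an efficiency upgrade: the same surplus every year
reinvestment_return   = 0.5    # assumed: each unit of AI capacity yields 0.5 new units per year

fixed_total = 0.0
capacity = 10.0                # starting "AI capacity" in arbitrary resource units
for _ in range(years):
    fixed_total += fixed_saving_per_year
    capacity    += capacity * reinvestment_return  # the surplus is reinvested, so growth compounds

print(f"Fixed-efficiency technology after {years} years: {fixed_total:.0f} units")
print(f"Reinvested, self-amplifying ROI after {years} years: {capacity:.0f} units")
```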
My claim is different: there is no defined threshold for significance, but on the spectrum from useless to world-changing, some technologies that looked very promising decades ago still lie closer to the lower end. So it is possible that in 2053 AI products will be about as important as MRI scanners and GMO crops are in 2023.
Ok, but how? GMO crops at their theoretical limit cannot fix carbon any faster than thermodynamics allows. Given that all the parts the edited genes specify come from nature's existing codon space, what is the ceiling here, maybe a 100 percent gain?
So you might get double the growth rate, probably with tradeoffs that make the crop more fragile and more expensive to grow.
MRI lets you crudely see inside the human body in a different way than X-rays. It lets you watch helplessly as tumors kill someone; it provides no tooling to do anything about it. Presumably, with the right contrast dyes and alternative techniques like CT scanning, you could learn roughly the same information.
Please try to explain how AI, with its demonstrated abilities, fits into the above. Does it not let you build self-replicating robots? Why not?
“human genetic engineering”
If you mean human genetic enhancement like designer babies, then sure. Not much impact because ethical concerns prevent it. However, the advent of tech like CRISPR allows for significant impact like gene therapy, though this is still an emerging field. (Just ask someone with sickle cell disease if they think a cure would be significant.)
Lalartu’s claim was that the technology offered no major benefit so far.
Note that treating a few people with severe genetic disease provides no ROI.
This is because those people are rare (most will have died), and there simply isn't a large enough market to support the expensive effort of developing a treatment. This is why gene therapy efforts are limited.
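As a rough illustration of that market-size problem, here is a sketch with invented figures (not actual drug-development or patient numbers): the development cost has to be recovered from a very small patient pool.

```python
# Why a rare-disease gene therapy struggles to recover its development cost.
# All figures are invented placeholders, not real industry data.

development_cost    = 1_000_000_000  # assumed cost to develop and approve the therapy, $
treatable_patients  = 5_000          # assumed worldwide patient pool
price_per_treatment = 100_000        # assumed price payers will actually bear, $

total_revenue = treatable_patients * price_per_treatment
print(f"Total possible revenue: ${total_revenue:,}")
print(f"Development cost:       ${development_cost:,}")
print("Covers development" if total_revenue > development_cost else "Does not cover development")
```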
Treating diseases isn't much of a positive feedback loop, but claiming "no ROI" strikes me as extremely callous towards those afflicted. Maybe it doesn't affect enough people to be sufficiently "significant" in this context, but it's certainly not zero return on investment unless reducing human suffering has no value.
Unfortunately, for our purposes it kinda is. There are 2 issues:
1. Most people don't have diseases that can be cured or prevented this way.
2. CRISPR is actually quite limited, and in particular the fact that germline edits only affect your children basically makes it a dealbreaker for human genetic engineering, especially if you're trying to make superpowered people.
Genetic engineering for humans needs to be both substantially better and able to edit somatic cells as well as gamete cells, or it doesn't matter.
I don't dispute this, and there are publicly funded efforts that, at a small scale, do help people where there isn't ROI. A few people with blindness or paralysis have received brain implants. A few people have received gene therapies. But the question in this thread is whether the technology is significant. Is it mainstream, with massive amounts of sales and R&D effort going into improving it? Is it benefiting most living humans? The answer is no and no. The brain implants and gene therapies are not very good: frankly, they are crap, because there are not enough resources going into making them better.
And from a utilitarian perspective this is correct: in a world with very finite resources, most of those resources should be spent on activities that give ROI, meaning you end up with more resources than you started with. This may sound "callous", but having more resources ultimately allows more people to benefit overall.
This is why AI and AGI are so different: they trivially give ROI. Even the current LLMs produce more value per dollar, on the subset of tasks they are able to do, than any educated human, even from the lowest-wage countries.
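One crude way to frame "value per dollar": compare the cost of having an LLM complete a bounded task against the loaded cost of a human doing the same task. All figures below are illustrative assumptions, not actual API prices or wage data.

```python
# Crude value-per-dollar comparison for a bounded task an LLM can actually do.
# Every figure is an illustrative assumption, not a real price or wage.

llm_cost_per_task    = 0.05  # assumed API cost to complete one task, $
human_hourly_cost    = 20.0  # assumed fully loaded hourly cost of an educated worker, $
human_tasks_per_hour = 4     # assumed human throughput on the same task

human_cost_per_task = human_hourly_cost / human_tasks_per_hour

print(f"LLM cost per task:   ${llm_cost_per_task:.2f}")
print(f"Human cost per task: ${human_cost_per_task:.2f}")
print(f"Under these assumptions the LLM is ~{human_cost_per_task / llm_cost_per_task:.0f}x cheaper per task")
```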