Lalartu’s claim was that the technology offered no major benefit so far.
Note that treating a few people with severe genetic disease provides no ROI.
This is because those people are rare (most will have died), and there simply isn't a market large enough to support the expensive effort of developing a treatment. This is why gene therapy efforts are limited.
Treating diseases isn’t much of a positive feedback loop, but claiming “no ROI” strikes me as extremely callous towards those afflicted. Maybe it doesn’t affect enough people to be sufficiently “significant” in this context, but it’s certainly not zero return on investment unless reducing human suffering has no value.
Unfortunately, for our purposes it kinda is. There are two issues:
1. Most people don’t have diseases that can be cured or prevented this way.
2. CRISPR is actually quite limited, and in particular the fact that practical edits essentially have to be made at the germline stage, so they only affect your children, basically makes it a dealbreaker for human genetic engineering, especially if you’re trying to make superpowered people.
Genetic engineering for humans needs to be both seriously better and able to edit somatic cells as well as germline (gamete) cells, or it doesn’t matter.
I don’t dispute this, and there are publicly funded efforts that, at a small scale, do help people where there isn’t ROI. A few people with blindness or paralysis have received brain implants. A few people have received gene therapies. But the overall question is whether it is significant. Is the technology mainstream, with massive amounts of sales and R&D effort going into improving it? Is it benefiting most living humans? The answer is no and no. The brain implants and gene therapies are not very good: they are frankly crap, for the simple reason that there are not enough resources to make them better.
And from a utilitarian perspective this is correct: in a world of very finite resources, most of those resources should be spent on activities that give ROI, as in producing more resources than you started with. This may sound “callous”, but having more resources ultimately allows more people to benefit overall.
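To make the compounding point concrete, here is a minimal toy sketch. All numbers in it are made up purely for illustration (an assumed 20% return per round, a 5% giveaway rate, and so on); it only compares spending a fixed pool on no-ROI work immediately against reinvesting most of the pool at a positive return and giving away a small fraction each round.

```python
# Toy illustration only: every number below is hypothetical,
# not a real cost or return for any actual medical or AI program.

def give_everything_now(pool: float) -> float:
    """Spend the entire resource pool on no-ROI (purely humanitarian) work right away."""
    return pool  # total benefit delivered equals the starting pool, once


def reinvest_then_give(pool: float, roi: float, rounds: int, give_fraction: float) -> float:
    """Each round, give away a small fraction of the pool and reinvest the rest
    at an assumed positive ROI; return the total given away over all rounds."""
    total_given = 0.0
    for _ in range(rounds):
        given = pool * give_fraction
        total_given += given
        pool = (pool - given) * (1.0 + roi)  # the remainder compounds
    return total_given


if __name__ == "__main__":
    # Hypothetical parameters: 100 units of resources, 20% return per round,
    # 20 rounds, 5% of the pool given to no-ROI work each round.
    print(give_everything_now(100.0))  # 100 units of benefit, delivered once
    print(round(reinvest_then_give(100.0, roi=0.20, rounds=20, give_fraction=0.05), 1))
    # With these toy numbers the reinvestment strategy gives away roughly 455 units
    # in total (and still ends with a large remaining pool).
```

The sketch only captures the arithmetic shape of the argument; how to weigh present suffering against future capacity is exactly the disagreement above.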
This is why AI and AGI are so different: they trivially give ROI. Even the current LLMs produce more value per dollar, on the subset of tasks they are able to do, than any educated human, even one from the cheapest countries.