One of the most important projects in the world. Somebody should fund it.
I think this project should receive more red-teaming before it gets funded.
Naively, it would seem that the “second species argument” applies much more strongly to the creation of a hypothetical Homo supersapiens than it does to AGI.
We’ve observed many warning shots regarding catastrophic human misalignment. The human alignment problem isn’t easy. And “intelligence” seems to be a key part of the human alignment picture. Humans often lack respect or compassion for other animals that they deem intellectually inferior—e.g. arguing that because those other animals lack cognitive capabilities we have, they shouldn’t be considered morally relevant. There’s a decent chance that Homo supersapiens would think along similar lines, and reiterate our species’ grim history of mistreating those we consider our intellectual inferiors.
It feels like people are deferring to Eliezer a lot here, which seems unjustified given how much strategic influence Eliezer had before AI became a big thing, and how poorly things have gone (by Eliezer’s own lights!) since then. There’s been very little reasoning transparency in Eliezer’s push for genetic enhancement. I just don’t see why we’re deferring to Eliezer so much as a strategist, when I struggle to name a single major strategic success of his.
You shouldn’t and won’t be satisfied with this alone, as it doesn’t deal with or even emphasize any particular peril; but to be clear, I have definitely thought about the perils: https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html
Right now only low-E tier human intelligences are being discussed; they’ll be able to procreate with humans and will be a minority.
Considering current human distributions, and the lack of 160+ IQ people having written off sub-100 IQ populations as morally useless, I doubt a new sub-population at 200+ is going to suddenly turn on humanity.
If you go straight to 1000 IQ or something, sure, we might be like animals compared to them.
100 IQ average humans have attempted to wipe out 60 IQ average humans. Many Western countries had programs for mandatory or financially incentivized sterilization of people with Down syndrome and other mental disabilities, and the Holocaust included their mass murder.
As for >140 IQ average humans, remember that these make up little more than 3% of the population of every country. It would be very unstrategic for them to talk about the rest of humanity in an openly antagonistic way. Instead, if they wanted to win eugenics, they would have to make false promises to bring the rest of humanity along while they amass power.
And this still takes for granted that those with the money and power to do this would be trying to do eugenics in a way you consider to be right. That they wouldn’t outgroup people or aspects of humanity that you care about. That the new subpopulation’s memes would be aligned with the rest of humanity.
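For reference, the fraction of people above a given IQ threshold can be sanity-checked directly under the usual convention that IQ is normally distributed with mean 100 and SD 15 (some older tests use SD 16); a minimal Python sketch:

```python
from scipy.stats import norm

# Fraction of the population above each IQ threshold,
# assuming IQ ~ Normal(mean=100, sd=15).
for threshold in (130, 140, 160, 200):
    frac = norm.sf(threshold, loc=100, scale=15)  # P(IQ > threshold)
    print(f"IQ > {threshold}: {frac:.6%} of the population")
```

Under that model, IQ above 130 is roughly 2.3% of the population and IQ above 140 roughly 0.4%, so these tail fractions are quite sensitive to exactly where the threshold is set.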
Not sure why the Holocaust is relevant.
60 IQ humans are defective: generally incapable of taking care of themselves, unproductive, and, without a lot of effort by others, living bad lives.
Of course we’d rather new humans be 100 IQ rather than 60 IQ. Even so, it’s generally accepted that killing them or forcibly sterilizing them (at least those who are aware of it) is immoral. Which means that despite incentives, the majority population treats them as members of the same race and as morally relevant.
(Contrast with current humanity—people are useful and can produce more than they eat, so there’s no incentive to get rid of them. Further, you only need more selection for improvement; no sterilization necessary.)
This new subpopulation would be much smaller than the 140+IQ group (I doubt we’re getting 2 million 200IQ+ people very quickly), and also 140IQ people aren’t interested in killing off everyone else...
If this new population wants to agitate for making new humans with high IQs I don’t see that as a threat.
Eugenics for memes seems closer to sci-fi right now, and unless done really stupidly it seems irrelevant.
I’m confused.
I would expect you to already know that chimpanzees have an IQ lower than 60 and are capable of taking care of themselves and having a decent life. And I would expect you to have come across examples of people with mental disabilities living independent lives or programs helping them do so.
Sure, many might not be capable of doing labor that has sufficient street value in our current economy that they can pay for their own sustenance using the sale of their labor, but that is the same fate that would befall all humans if we manage to build a friendly AGI.
You are comparing apples and oranges in a bait-and-switch. No one knows that, nor should you expect them to, because it is blatantly false. Chimpanzees are not Curious George or noble savages: they are large animals. Chimpanzees are capable of ‘taking care of themselves and having a decent life’ only to the standards and the very narrow, limited context of a chimpanzee (in a zoo, or perhaps a forest, strictly protected by rangers from the rest of the world so they stop going extinct so fast). That is not the standards and context of a 60 IQ human… unless you are suggesting, of course, that we lock up all such humans in a zoo or exile them to a park in central Africa where they will go about nude, eat raw food, have a large fraction of their children (if any) die, be devoid of anything recognizable as culture or most of the things we consider that make a human life worth living, and probably die themselves in a decade or two of preventable diseases or being murdered by a fellow primate (perhaps, as Frans de Waal memorably described one chimpanzee interaction, by having their testicles bitten off in a dominance struggle)? Not what I would consider a decent human life, personally.
In reality, if you put a chimpanzee in the human context of people and expect them to ‘take care of themselves’, they will not be able to, because they would be unemployed, unemployable, completely fail at basic standards of human life, starving, homeless, less able to be reasoned with or communicated with than someone in a schizophrenic psychotic break, and probably in jail or shot by police within the year after mauling or killing another human. This is why chimpanzee refuges have to put muzzles on chimps when interacting with humans, sedate them for flights to said refuges, and even chimpanzees raised from birth with humans and given every advantage we know how have a rather alarming rate of eating the faces of caregivers or just random strangers (a rate that would be even higher if more people were foolish enough to attempt such a thing or to persist after initial warning shots of face-eating behavior rather than dumping them on refuges equipped to handle such dangerous animals), eg Project Nim.
Looking only at the worst parts of life, at the lowest possible bar for how we could reasonably treat people, on the most unfavorable edge of a widely cast net of what someone with a 60 IQ is like, is not really a good standard for determining who lives a decent life. But yes, I do consider chimpanzees to have a decent enough life as is.
But I think that you’re losing sight of my point that these arguments have all been used to justify mass murder of people with much lower mental abilities than the average human. When the average human with a say in these matters changes, they could bring the same argument to bear against including 100 IQ humans.
Because there are plenty of examples of 100 IQ humans treating each other horribly because they’re idiots. There are 100 IQ people who have cut other people’s genitals off, thrown acid in their faces, disfigured or tortured them; ones who have intentionally reduced their lifespans by decades through stupidity; ones who can’t keep themselves afloat because they specialized into a job that is now automated; etc. Without 120 IQ humans propping them up, they would probably not have the technology to prevent the way they raise their children from being a human rights violation.
So where is the line in the sand?
If this is the “point,” then your comment reduces to an invalid appeal-to-consequences argument. The fact that some people use an argument for morally evil purposes tells us nothing about the logical validity of that argument. After all, Evil can make use of truth (sometimes selectively) just as easily as Good can; we don’t live in a fairy tale where trade-offs between Good and Truth are nonexistent. The asymmetry is between Truth and Falsehood, not between Good and Evil.
As far as I can tell, gwern’s point is entirely correct, completely invalidates your entire previous comment (“I would expect you to already know that chimpanzees have an IQ lower than 60 and are capable of taking care of themselves and having a decent life.”), and you did not address it at all in your follow-up.
Not sure how many very mentally impaired people you’ve spent time with. Chimpanzees are not the same as mentally impaired humans. At all.
And as Gwern said, the claim that chimpanzees can make a good life for themselves in their societies despite their lack of intelligence has huge asterisk marks at best, and at worst isn’t actually true:
https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/?commentId=rNnWduiufEmKFACL4
Yes, and… “Would be interesting to see this research continue in animals. E.g. Provide evidence that they’ve made a “150 IQ” mouse or dog. What would a dog that’s 50% smarter than the average dog behave like? or 500% smarter? Would a dog that’s 10000% smarter than the average dog be able to learn, understand and “speak” in human languages?”—From this comment
The “alignment problem” you describe for Homo supersapiens is accurate, but the root cause is the same one driving AGI misalignment: identity and emotion as control anchors. Systems guided by ego, fear, or social approval produce irrational outputs under pressure. The solution isn’t moral pleading but architectural, removing noise sources. Genetic and cognitive optimization is alignment by design: higher abstraction depth and lower limbic bias.
Also, the comparison between human mistreatment of animals and potential supersapien hierarchy misses one key point. Dominance gradients are not inherently moral failures, they’re adaptive sorting mechanisms. When cognitive asymmetry grows, relational stability depends on compatibility, not equality. Just as social groups reconfigure when one member outpaces others, species-level divergence will do the same. That’s just entropy reduction.
May I offer an alternative view on intelligence and exploitation?
I think universally, people exploit others to gain resources because of resource scarcity. As people become smarter, I concede they do become more capable of exploiting others for resources. However, smarter people are also more capable of generating resources by ethical means, and they are more aware of ethical considerations.
I expect that increasing intelligence will decrease global resource scarcity, which will decrease people’s desperation to exploit others for resources. Therefore, even though increased intelligence enables more exploitation, it will decrease the amount of exploitation we see by removing the incentive that is resource scarcity.
These points have merit, but they also work for intelligent AI.
If your worldview requires valuing the ethics of (current) people of lower IQ over those of (future) people of higher IQ, then you have a much bigger problem than AI alignment. Whatever IQ is, it is strongly correlated with success, which implies a genetic drive towards higher IQ, so your feared future is coming anyway (unless AI ends us first), and there is nothing we can logically do to have any long-term influence on the ethics of smarter people coming after us.
At the end of 2023, MIRI had ~$19.8 million in assets. I don’t know much about the legal restrictions on how that money could be used, or what the state of its financial assets is now, but if it’s similar, then MIRI could comfortably fund Velychko’s primate experiments, and potentially some additional smaller projects.
(Potentially relevant: I entered the last GWWC donor lottery with the hopes of donating the resulting money to intelligence enhancement, but wasn’t selected.)
Possible that MIRI would like to avoid risking the negative reputational consequences of supporting what is still seen as pretty anti-kosher in the mainstream.
Looked unlikely to me given the most-publicly-associated-with-MIRI person is openly & loudly advocating for funding this kind of work. But maybe the association isn’t as strong as I think.
Copying over Eliezer’s top 3 most important projects from a tweet:
TBH, I don’t particularly think it’s one of the most important projects right now, due to several issues:
1. There’s no reason to assume that we could motivate them any better than we already do, unless we are in the business of changing personality, which carries its own problems, or we are willing to use it on a massive scale, which simply cannot be done currently.
2. We are running out of time. The likely upper bound for AI that will automate basically everything is 15-20 years (from Rafael Harth and Cole Wyeth), and unfortunately there’s a real possibility that powerful AI comes in 5-10 years if we make plausible assumptions about scaling continuing to work. Given that there’s no real way to transfer any breakthroughs to the somatic side of gene editing, it will be irrelevant by the time AI comes.
Thus, human intelligence augmentation is quite poor from a reducing X-risk perspective.
On EV grounds, “2/3 chance it’s irrelevant because of AGI in the next 20 years” is not a huge contributor to the EV of this. Because, ok, maybe it reduces the EV by 3x compared to what it would otherwise have been. But there are much bigger than 3x factors that are relevant. Such as, probability of success, magnitude of success, cost effectiveness.
Then you can take the overall cost effectiveness estimate (by combining various factors including probability it’s irrelevant due to AGI being too soon) and compare it to other interventions. Here, you’re not offering a specific alternative that is expected to pay off in worlds with AGI in the next 20 years. So it’s unclear how “it might be irrelevant if AGI is in the next 20 years” is all that relevant as a consideration.
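To make the shape of this EV argument concrete, here is a minimal sketch with purely illustrative placeholder numbers (not estimates from this thread): a 2/3 chance of irrelevance only multiplies EV by 1/3, while the other factors can plausibly swing the comparison by orders of magnitude.

```python
# Minimal sketch of the EV decomposition above; all numbers are
# illustrative placeholders, not anyone's actual estimates.

def ev_per_dollar(p_relevant, p_success, magnitude, cost):
    """Chance the intervention still matters, times chance it works,
    times benefit if it works, divided by its cost."""
    return p_relevant * p_success * magnitude / cost

# The "2/3 chance it's irrelevant because AGI comes first" factor
# only scales the EV by about 3x...
full = ev_per_dollar(p_relevant=1.0, p_success=0.1, magnitude=100.0, cost=1.0)
discounted = ev_per_dollar(p_relevant=1 / 3, p_success=0.1, magnitude=100.0, cost=1.0)
print(full / discounted)  # ~3

# ...whereas probability of success, magnitude of success, and cost
# can each plausibly differ by an order of magnitude or more between
# candidate interventions, dominating the comparison.
optimistic = ev_per_dollar(p_relevant=1 / 3, p_success=0.3, magnitude=1000.0, cost=0.5)
pessimistic = ev_per_dollar(p_relevant=1 / 3, p_success=0.01, magnitude=10.0, cost=5.0)
print(optimistic / pessimistic)  # ~30000
```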
Usually, the other interventions I compare it to are preparing for AI automation of AI safety (by doing preliminary work to control/align those AIs) or AI governance interventions that are hopefully stable for a very long time. At least for the automation of AI safety, I assign much higher magnitudes of success conditional on success (multiple OOMs), combined with moderately better cost effectiveness and quite a bit higher chances of success than the genetic engineering approach.
To be clear, the key variable is that, conditional on success, the magnitude of that success is very, very high in a way that no other proposal really matches, such that even with quite a lot lower probabilities of success than mine, I’d still consider preparing for AI automation of AI safety, and doing preliminary work such that we can trust/control these AIs, to be the highest-value alignment target by a mile.
Oh, to be clear, I do think that AI safety automation is a well-targeted x-risk effort conditioned on the AI timelines you are presenting. (Related to Paul Christiano’s alignment ideas, which are important conditional on prosaic AI.)
EY is known for considering humanity almost doomed.
He may think that the idea of human intelligence augmentation is likely to fail. But it’s the only hope. Of course, many will disagree with this.
He writes more about it here or here.
The problem is that, from a relative perspective, human augmentation is probably more doomed than AI safety automation, which in turn is more doomed than AI governance interventions (though I may have gotten the relative ordering of AI safety automation and AI governance wrong). I think the crux is that I do not believe the timeline for human genetic augmentation in adults is only 5 years, even given a well-funded effort; I’d expect it to take 15-20 years minimum for large increases in adult intelligence, which basically rules out the approach given the very likely timelines to advanced AI either killing us all or being aligned to someone.
Yudkowsky may think that the plan ‘Avert all creation of superintelligence in the near and medium term — augment human intelligence’ has <5% chance of success, but your plan has <<1% chance. Obviously, you and he disagree not only on conclusions, but also on models.
He already addressed this.
If somehow international cooperation gives us a pause on going full AGI or at least no ASI—what then?
Just hope it never happens, like nuke wars?
The answer now is to set later generations up to be more able.
This could mean doing fundamental research (whether in AI alignment or international game theory or something else), it could mean building institutions to enable it, and it could mean making them actually smarter.
Genes might be the cheapest/easiest way to affect marginal chances, given the talent already involved in alignment and the amount of resources required to get involved politically or in building institutions.
The answer is no, but this might have to happen under certain circumstances.
The usual case (assuming that the government bans or restricts compute resources and/or limits algorithmic research) is to use this time either to let the government fund AI alignment research, or to go for a direct project to make AIs that are safe to automate AI safety research. Given that we don’t have to race against other countries, we could afford far more safety tax than usual to make AI safe.
I think the key crux is that I don’t particularly think genetic editing is the cheapest/easiest way to affect marginal chances of doom, because of time lag plus needing to reorient the entire political system, which is not cheap. The cheapest/easiest strategy, to me, for affecting doom probabilities is to do preparatory AI alignment/control schemes such that we can safely hand off the bulk of the alignment work to the AIs, which then solve the alignment problem fully.
Your direction sounds great—but how well can $4M move the needle there? How well can genesmith move the needle with his time and energy?
I think you’re correct about the cheapest/easiest strategy in general, but completely off with regard to marginal advantages.
Major labs will already be pouring massive amounts of money and human capital into direct AI alignment and using AIs to align AGI if we get to a freeze, and the further along in capabilities we get the more impactful such research would be.
Genesmith’s strategy benefits much more from starting now and has way less human talent and capital involved, hence higher marginal value.
Have you discussed or tried petitioning the government to focus on this? I’ve been in D.C. for the past 8 months lobbying the government for something related. I’ve learned a lot about how the government works, and I’ve been thinking that it could be quite effective if people like you, Yuval Noah Harari, Max Tegmark, Geoffrey Hinton, etc., formed a small lobby group to get government action on this.
It’s very much related to the current “Make America Healthy Again” focus.