Couple of points:
If we screw this up, there are over eight billion people on the planet, and countless future humans who might then either die or never get a chance to be born. Even if you literally don’t care about future people, the lives of everybody currently on the planet are a serious consideration and should guide the calculus. Just because those dying now are more salient to us does not mean that we’re doing the right thing by shoving these systems out the door.
If embryo selection just doesn’t happen, or gets outlawed when someone does launch the service, assortative mating will probably continue to guarantee that there are as many, if not more, people available to research AI in the future. The right tail of the bell curve is fattening over time, not thinning. Unless you expect some sort of complete political collapse within the next 30 years because the general public lost an average of 2 IQ points, dysgenics isn’t a serious issue.
My guess is that within the next 30 years embryo selection for intelligence will be available in certain countries, which will completely dominate any default 1 IQ point per generation loss that’s happening now. The tech is here, it’s legal, and you can do it if you’re knowledgeable enough today. We are already in a “hardware overhang” with regard to genetic enhancement and are just waiting for someone to launch the service for normies.
“e/acc” is a grifter twitter club. Like most twitter clubs, its purpose is to inflate the follower counts of core users, and in this case help certain people in tech justify what they were going to do anyways. They are not even mainstream among AI researchers, certainly not AI researchers at top labs working on AGI.
It’s ultimately a question of probabilities, isn’t it? If the risk is ~1%, we mostly all agree Yudkowsky’s proposals are deranged. If 50%+, we all become Butlerian Jihadists.
My point is I and people like me need to be convinced it’s closer to 50% than to 1%, or failing that we at least need to be “bribed” in a really big way.
I’m somewhat more pessimistic than you on civilizational prospects without AI. As you point out, bioethicists and various ideologues have some chance of tabooing technological eugenics. (I don’t understand your point about assortative mating; yes, there’s more of it, but does it now cancel out regression to the mean?). Meanwhile, in a post-Malthusian economy such as ours, selection for natalism will be ultra-competitive. The combination of these factors would logically result in centuries of technological stagnation and a population explosion that brings the world population back up to the limits of the industrial world economy, until Malthusian constraints reassert themselves in what will probably be quite a grisly way (pandemics, dearth, etc.), until Clarkian selection for thrift and intelligence reasserts itself. It will also, needless to say, be a few centuries in which other forms of existential risks will remain at play.
PS. Somewhat of an aside but don’t think it’s a great idea to throw terms like “grifter” around, especially when the most globally famous EA representative is a crypto crook (who literally stole some of my money, small % of my portfolio, but nonetheless, no e/acc person has stolen anything from me).
Uhh… No, we don’t? 1% of 8 billion people is 80 million people, and AI risk involves more at stake if you loop in the whole “no more new children” thing. I’m not saying that “it’s a small chance of a very bad thing happening so we should work on it anyways” is a good argument, but if we’re taking as a premise that the chance of failure is 1%, that’d be sufficient to justify several decades of safety research. At least IMO.
https://en.wikipedia.org/wiki/Coming_Apart_(book)
AI research is pushed mostly by people at the tails of intelligence, not by lots of small contributions from people with average intelligence. It’s true that currently smarter people have slightly fewer children, but now more than ever smarter people are having children with each other, and so the number of very smart people is probably increasing over time, at least by Charles Murray’s analysis. Whatever happens now, it’s very unlikely we will lose the human capital necessary to develop AGI, and we certainly wouldn’t lose it in less than thirty years. Regression to the mean is a thing but doesn’t prevent this trend.
Who said anything about several centuries? I’m one of the most radical people on this forum and I probably wouldn’t want to commit to more than thirty years, not specifically because of dysgenic considerations, but just to prevent something weird from happening in the meantime. I’m sure there are people here who disagree with me though.
For what it’s worth I think virtually every “alignment person” right now would be in favor of giving you the life extension research funding that you want, and was already in favor of it. I don’t think we’ll be in a position to trade, but if we could, I struggle to think of anybody who would disagree in practice.
Fair, I guess.
Note that your “30 years” hypothetical has immense cost for those who have a very high discount rate.
Say your discount rate is high. This means that essentially you place little value on the lives of people who will be alive after you anticipate being dead, and high value on stopping the constant deaths of people you know now.
Also, if you have a more informed view of the difficulty of all medical advances, you might conclude that life extension is not happening without advanced AGI to push it. That it becomes essentially infeasible to expect human clinicians to life-extend people: the treatment is too complex, has too many subtle places where a mistake will be fatal, too many edge cases where you would need to understand medicine better than any living human to know what to do to save the patient.
If you hold both beliefs (high discount rate, life extension requires ASI), you would view a 30 year ban as mass manslaughter, maybe mass murder. As many counts of it as there are aging deaths worldwide over 30 years: somewhere between 1.9 billion and 3.8 billion people.
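For scale, that range can be backed out into the annual rates it implies (the snippet below is just that back-calculation; for reference, total worldwide deaths from all causes today are on the order of 60 million a year):

```python
# Back out the annual aging-death rates implied by the quoted 30-year totals.
YEARS = 30
for total in (1.9e9, 3.8e9):
    per_year = total / YEARS
    print(f"{total / 1e9:.1f}B over {YEARS} years -> {per_year / 1e6:.0f}M aging deaths/year")
```

So the upper figure assumes annual deaths roughly doubling over the period as populations age; that is the comment’s framing, not an established demographic projection.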
Not saying you should believe this, but you should as a rationalist be willing to listen to arguments for each point above.
I am definitely willing to listen to such arguments, but ATM I don’t actually believe in “discount rates” on people, so ¯\_(ツ)_/¯
The discount rate is essentially how much less you value a future person’s life relative to a current one.
I realize, and my “discount rate” under that framework is zero.
Nobody’s discount rate can be literally zero, because that leads to absurdities if actually acted upon.
Like what?
Variants of Pascal’s mugging.
Infinite regress.
etc.
Even with a zero discount rate, the problem simplifies to your model of how much knowledge a “30 year pause” world would gain when it cannot build large AGI to determine how such systems work and what their actual failure modes are. If you believe from the history of human engineering that the gain would be almost nothing, then it ends up being a bad bet, because it has a large cost (all the deaths) and no real gain.
It seems you see technical alignment advances as the only thing to be gained from a pause. But I want to point out that safety comes from solving two problems: the governance problem and the technical problem. And we need a lot of time to get the governance ironed out. The way I see it, misaligned AGI or ASI is the most dangerous thing ever, so we need the best regulation ever. The best safety and testing requirements. The best monitoring by governments of AI groups for unsafe actions, the best awareness among politicians and among the public. And if one country gets great governance figured out, it takes years or decades for that level of excellence to be applied globally.
Do you know of examples of this? I don’t know of cases of good government or good engineering or good anything arising without feedback, where feedback means the failures that prove the government or engineering is bad.
That’s the history of human innovation. I suspect that a pause would gain nothing except extending the lives of currently living humans by the length of the pause.
I do not have good examples, no. You are right that normally there is learning from failure cases. But we should still try. Right now we have nothing in place that could prevent an AGI breakout. Nick Bostrom wrote in Superintelligence, for example, that we could implement tripwires and honeypot situations in virtual worlds that would trigger a shutdown. We can think of things that are better than nothing.
I don’t think we should try. I think the potential benefits of tinkering with AGI are worth some risks, and if EY is right and it’s always uncontrollable and will turn against us then we are all dead one way or another anyways. If he’s wrong we’re throwing away the life of every living human being for no reason.
And there is reason to think EY is wrong. CAIS and careful control of what gets rewarded in training could lead to safe enough AGI.
That is a very binary assessment. You make it seem like Safety is either impossible or easy. If impossible, we could save everyone by not building AGI. If we knew it to be easy, I agree, we should accelerate. But the reality is that we do not know, and it could be anywhere on the spectrum from easy to impossible. And since everything is on the line, including your life, better safe than sorry is to me the obvious approach. Do I see correctly that you think the pausing-AGI situation is not ‘safe’ because, if all went well, the AGI could be used to make humans immortal?
One hidden assumption here: I think a large hidden component of safety is a constant factor.
So pSafe has two major components: natural law and human efforts.
“Natural law” is equivalent to the question of “will a fission bomb ignite the atmosphere”. In this context it would be “will a smart enough superintelligence be able to trivially overcome governing factors?”
Governing factors include: a lack of compute (overcome by inventing more efficient algorithms and switching to those), a lack of money (by somehow manipulating the economy to give itself large amounts of money), a lack of robotics (via some shortcut to nanotechnology), a lack of data (via better analysis of existing data, or see robotics), and so on, up to the point of essentially “magic”; see the sci-fi story The Metamorphosis of Prime Intellect.
In worlds where intelligence scales high enough, the machine basically always breaks out and does what it will. Humans are too stupid to ever have a chance. Not just as individuals but organizationally stupid. Slowing things down does not do anything but delay the inevitable. (And if fission devices ignited the atmosphere, same idea. Almost all world lines end in extinction)
This is why EY is so despondent: if intelligence is this powerful there probably exists no solution.
In worlds where aligning AI is easy, because machines need rather expensive and obviously easy-to-control amounts of compute before their capabilities become interesting, and the machines are not particularly hard to corral into doing what we want, alignment efforts don’t matter either.
I don’t know how much probability mass lies in the “in between” region. Right now, I believe the actual evidence is heavily in favor of “trivial alignment”.
“Trivial alignment” is “stateless microservices with an in-distribution detector in front of the AGI”. This is an architecture production software engineers are well aware of.
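The comment doesn’t spell the architecture out, so here is a minimal sketch of the idea, with a toy score function and threshold of my own invention standing in for a real out-of-distribution detector: each request is scored against training statistics first, and the model is invoked statelessly only when the request looks in-distribution.

```python
import math

def in_distribution_score(request_features, train_mean, train_std):
    """Toy OOD score: RMS z-distance of the request from training statistics.
    A real detector would use something stronger (density models, ensembles)."""
    z = [(x - m) / s for x, m, s in zip(request_features, train_mean, train_std)]
    return math.sqrt(sum(v * v for v in z) / len(z))

def gated_call(model, request_features, train_mean, train_std, threshold=3.0):
    """Stateless gate: refuse anything the detector flags as out of distribution.
    The model is called with no memory of previous requests."""
    if in_distribution_score(request_features, train_mean, train_std) > threshold:
        return None  # refuse / escalate to a human instead of answering
    return model(request_features)

# Toy usage: a "model" that sums its inputs, "trained" on features near 0.
model = lambda feats: sum(feats)
print(gated_call(model, [0.1, -0.2], [0.0, 0.0], [1.0, 1.0]))  # in distribution -> answers
print(gated_call(model, [9.0, 12.0], [0.0, 0.0], [1.0, 1.0]))  # far out -> None
```

The design point is that the gate and the model are separate components, so the gate can be audited and tested independently of whatever the model does.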
Nevertheless, “slow down” is almost always counterproductive. In world lines where AGI can be used in our favor, or where hostile AGI also exists, it is a weapon we have to have on our side or we will be defeated. Pauses disempower us. In world lines where alignment is easy, pauses kill everyone who isn’t life-extended with better medicine. In world lines where alignment can’t be done by human beings, it doesn’t matter.
The world lines of “AI is extremely dangerous” and “humans can contain it if they collaborate smartly and internationally and very carefully inch forward in capabilities and they SUCCEED” may not exist. This is I think the crux of it. The probability of this combination of events may be so low no worldline within the permutation space of the universe contains this particular combination of events.
Notice it’s a series probability: a demon-like AGI that can escape anything, yet we can be very careful not to give it too much capability, and “international agreement” holds.
Thank you for your comments and explanations! Very interesting to see your reasoning. I have not seen evidence of trivial alignment. I hope the mass is in the in-between region. I want to point out that I think you do not need your “magic” level of intelligence for a world takeover. Just high human level, at digital speed and working with your copies, is likely enough, I think. My blurry picture is that the AGI would only need a few robots in a secret company and some paid humans to work on a >90% mortality virus, where the humans are not aware of what the robots are doing. And my hope for international agreement comes not so much from a pause as from a safe virtual testing environment that I am thinking about.
We are not in an overhang for serious IQ selection based on my understanding of what people doing research in the field are saying.
Define “serious”. You can get lifeview to give you embryo raw data and then run published DL models on those embryos and eke out a couple of IQ points that way. That’s a serious enough improvement over the norm that it would counterbalance the trend akarlin speaks of by several times. Perhaps no one will ever industrialize that service or improve current models, but then that’s another argument.
The marginal personal gain of 2 points comes with a risk of damage from mistakes by the gene editing tool used. Mistakes that can lead to lifetime disability, early cancer etc.
You probably would need a “guaranteed top 1 percent” outcome for both IQ and longevity and height and beauty and so on to be worth the risk, or far more reliable tools.
There’s no gene editing involved. The technique I just described works solely on selection. You create 10 embryos, use DL to identify the one that looks smartest, implant that one. That’s the service lifeview provides, only for health instead of psychometrics. I think it’s only marginally cost effective because of the procedures necessary, but the baby is fine.
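A toy simulation shows why picking the best-predicted of 10 embryos yields a couple of points even from a weak predictor; the predictor correlation and sibling SD below are illustrative assumptions, not published estimates.

```python
import random
import statistics

def selection_gain(n_embryos=10, predictor_r=0.3, sibling_sd=7.5, trials=20000):
    """Expected IQ gain over the sibling average from implanting the embryo
    with the highest predicted score.
    sibling_sd: SD of true IQ among sibling embryos (assumed).
    predictor_r: correlation between the DL predictor and true IQ (assumed)."""
    random.seed(0)
    gains = []
    for _ in range(trials):
        # True genetic values, plus noisy predictor scores with correlation r.
        true = [random.gauss(0, sibling_sd) for _ in range(n_embryos)]
        noise_sd = sibling_sd * (1 - predictor_r**2) ** 0.5
        pred = [predictor_r * t + random.gauss(0, noise_sd) for t in true]
        gains.append(true[pred.index(max(pred))])  # implant best-predicted embryo
    return statistics.mean(gains)

print(selection_gain())  # a few IQ points above the sibling average
```

Note the gain scales with both the predictor’s accuracy and the number of embryos, which is why “industrializing” the service (better models, cheaper cycles) matters more than the current marginal numbers.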
OK, that works, and yes, it already exists as a service or soon will. The issue is that it’s not very powerful. It certainly doesn’t make humans competitive in an AI future; most parents, even with 10 rolls of the dice, won’t have the gene pool for a top 1 percent human in any dimension.
I think you are misunderstanding me. I’m not suggesting that any amount of genetic enhancement is going to make us competitive with a misaligned superintelligence. I’m responding to the concern akarlin raised about pausing AI development by pointing out that if this tech is industrialized it will outweigh any natural problems caused by smart people having fewer children today. That’s all I’m saying.
Sure. I concede if by some incredible global coordination humans managed to all agree and actually enforce a ban on AGI development, then in far future worlds they could probably still do it.
What will probably ACTUALLY happen is humans will build AGI. It will behave badly. Then humans will build restricted AGI that is not able to behave badly. This is trivial, and there are many descriptions here of how a restricted AGI would be built.
The danger of course is deception. If the unrestricted AGI acts nice until it’s too late, then that’s a loss scenario.
IQ is highly heritable. If I understand this presentation by Steven Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ. E.g. Dad at +1 SD and Mom at +3 SD gives children averaging about 0.6*(1+3)/2 = +1.2 SD. This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 SD by using +4 SD sperm donors. There is no other tech (yet) that can produce such gains as old-fashioned selective breeding.
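That rule of thumb is simple enough to write out directly; the only assumption carried over is the ~60% regression factor attributed to Hsu’s slide.

```python
def expected_child_sd(dad_sd, mom_sd, regression_factor=0.6):
    """Expected child IQ in SDs above the population mean, per the
    ~60%-of-midparent rule cited from Hsu's slides."""
    midparent = (dad_sd + mom_sd) / 2
    return regression_factor * midparent

print(expected_child_sd(1, 3))  # Dad +1 SD, Mom +3 SD -> +1.2 SD
print(expected_child_sd(4, 0))  # +4 SD donor, average mother -> +1.2 SD
```

The second line is the sperm-donor scenario: a +4 SD donor with an average mother gives the same +1.2 SD expected child as the +1/+3 couple, since only the midparent value enters the rule.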
It also explains why rich dynasties can maintain an average IQ about +1 SD above the population in their children: by always being able to marry highly intelligent mates (attracted to the money/power/prestige).
Or, it might be that high-IQ parents raise their children in a way that’s different from low-IQ parents, and it has nothing to do with genetics at all?
Heritability is measured in a way that rules that out. See e.g. Judith Harris or Bryan Caplan for popular expositions about the relevant methodologies & fine print.