Taiwan could possibly be an option. They have quite inexpensive egg retrieval (something like $3-4k all in). But you’d have to look into data sharing and/or shipping eggs abroad.
TsviBT
Regarding the last section, I think you’re being quite dismissive, i.e. not addressing their concerns and acting as though they don’t have legitimate concerns (while I think probably not in fact understanding their concerns). For example,
If you don’t see the difference between that and embryo selection
I mean, did they say “there is no difference between embryo selection and state-enforced murder”? I think you’re strawmanning them.
If you don’t want to deal with these sorts of comments, fine, that’s understandable, and there are a lot of other valuable things that you do, such that you don’t need to work on addressing these sorts of comments with more attention. As I’ve said repeatedly, I AM VOLUNTEERING TO GIVE THOUGHTFUL RESPECTFUL REAL ANSWERS TO MORAL/ETHICAL CONCERNS ABOUT REPROGENETICS. Please just tag me instead!

I imagine (not confidently, but this is my top guess when I quickly try to empathize with you) that you’re doing a social motion that’s something like demonstrating+performing confidence / power, like “yeah actually I’m right, I know I’m right, I know lots of other people agree with me, and I’m expecting lots of people to back me up on this, now and even more in the future”. I think that’s fine, and in some cases good to do, as a general category. But I think that doing it by strawmanning and dismissing is bad.

I think that you think (or act as though) if someone can’t express their concern very clearly, so that you can give a shallow counterargument they can’t quickly give a compelling response to, then that’s a win. I think it’s sometimes a win and sometimes a loss. If the version you’re doing is strawmanning them, then they have a concern which you haven’t addressed, but you’ve put them in a position where their (fumbling) attempts to get their concern (however coherent or not it may be) addressed are met with dismissal or even derision, with no easy recourse for more helpful engagement.
Options are great, as long as you can predict the long-term group-consequences of your individual preferences.
We don’t apply this standard to other technologies or other sectors of life such as public policy, state structure, social norms, language, institutions, diet, etc. What seems to work are things like:
Empowering individuals, e.g. with more health, longevity, capacity to pursue things.
Setting up structures that create liberal containers in which free thinking and agency can grow many strands of progress.
These don’t involve directly predicting long-term group consequences, because that’s infeasible. Instead you heal what’s in front of you.
We didn’t get a certain distribution of traits by accident; it is part of an evolutionarily proven model of distributing properties among people so that, on average, we’ll be making progress.
It sounds like you think there’s something especially good or ideal about the default evolutionary pressures. Is that right? If so, why do you think so? It seems fairly unlikely. I mean, there’s clearly something kinda good about them, in that e.g. they tend toward empowering people at least somewhat, and on some very long timescale we’d expect some degree of niche-filling. But there could just as well be poor incentives. Human-evolution, like all species-evolutions, just greedily picks allele-frequency-increasing alleles given the current environment and gene pool; no strong reason for that to be aligned with our humane values.
So, if we tweak the distribution of traits, we might end up in a not easily reversed suboptimal situation.
I think this is possible in theory, and we should avoid it (e.g. with strong norms against such genomic choices, and maybe international treaties about it). But I think it’s quite unlikely, because (1) we don’t know much about the genetics of personality; (2) even if we did, personality is probably quite variable anyway; (3) there will be a huge spread in who uses reprogenetics at all and in what genomic choices parents make, both in a given year and also as time goes on; (4) the gene pool has a huge reservoir of variance; (5) what you do with reprogenetics can usually be reversed to a significant extent; and (6) we can get multigenerational feedback. This adds up to “effects on the mean traits of populations are quite weak for a long time, except for increasing some tails (and, possibly, with very large uptake of basic reprogenetics (e.g. embryo screening), decreasing some downside tails (e.g. severe monogenic diseases))”.
A society with all leaders or all scientists would be likely pretty horrible.
Eh, IDK about horrible. Seems not ideal, sure. Scientists can lead and leaders can science.
For practical reason, most people need to be followers. You need a reserve of psychopaths for when shit hits the fan (societally speaking).
I’m not convinced; why do you think this (compared to hypothetical alternatives, such as figuring out good healthy sane humane competent leadership)?
Also, I don’t think you can eliminate suffering in general; you can only shift the boundaries of what’s considered suffering.
That’s all well and good, but I think quite a lot of people, myself included, would rather set up future children for the kind of suffering involved in not knowing which groundbreaking intellectual or artistic effort to invest in, rather than the kind of suffering involved in cystic fibrosis and Alzheimer’s.
(Assuming this is true) One would wish for LW to be one of the earliest places for Truth to get its pants on, but this hasn’t happened quite yet...
Open to suggestions that are better than some long phrase, but I don’t think “superbabies” is a good term. See: https://www.lesswrong.com/posts/PPLHfFhNWMuWCnaTt/the-practical-guide-to-superbabies-3?commentId=hFj8yTRaMCRo6XpJt
“Genius babies” is better IMO if you must have a term, though it’s of course silly (but so is “superbabies”); that’s what I say when I’m reaching for this term. “Children whose parents/makers used reprogenetics to make them have very high expected intelligence” is the very long phrase; I’d want a good term for “reprogenetics children”, and separately a term like “IQ-increased children”, and “superbabies” could be replaced by just combining those two words, though that might be too long. One could use some more florid term like “indigo children”. Another issue is that “reprogenetics IQ-increased children” isn’t actually all that relevant besides the facts that (1) most very very intelligent kids will likely be reprogenetics IQ-increased kids, later on; and (2) reprogenetics is the way to increase that category.
Quoting myself from here:
The field of advanced reprotech and reprogenetics is not for intelligence amplification, existential risk reduction, or anything about AGI.
(I get, and agree, that legibility is good, but that balances against these other considerations; better lexicogenesis is tugging the rope sideways.)
Great post, thanks for your work!
(As I have told GeneSmith, superbabies is also IMO a mildly toxic term, as it’s a bad concept to apply to children. For example, it makes it seem like there’s some category here, which there isn’t much of, and it makes it seem like a product you’re buying (“designer babies”), which it isn’t (they are people), and it subtly bakes in a universal notion of good (similar to “enhancement”), which there shouldn’t be, and it kinda instrumentalizes / objectifies kids, which you shouldn’t do.)
Maybe; but that’s also an especially difficult task, as it’s especially difficult to measure.
Thanks. I don’t think that’s quite hitting on the same thing though? I didn’t read the full post, but the quick take and the diagram and a cmd-F for “review” and “validate” don’t seem to talk about “we expected to have to go back and look at a bunch of superficially automated stuff in order to understand things well enough to get past deeper bottlenecks/obstacles”.
Cool, thanks. Of course r and c would depend quite a lot on the task. It’s also an ontology that would diverge significantly from the reality in some important cases. In particular, r is described as a fraction of the automated work, but what are we counting? Is it tokens generated? The human still has to read the judge’s ruling. So we could include tokens processed or whatever? But for a software project, the human has to decide what ze even wants, which can take a lot of thinking, and has to decide some deep architecture stuff, which can also take a lot of thinking; and neither of those are really measurable as a fraction of automated work, if you see what I mean.
Anyway, I think that in some cases the effective r*c constant could be quite high, like .5 or more, leading to less than a 2x speedup. Think for example of generating art. Yes, you could make something ok really fast. But the process of painstakingly going over each bit of the artwork, which could apparently be superfast automated, is actually in many cases an integral part of meditating on what you want, running your fingers (metaphorically or literally) over each square millimeter of the artwork to familiarize yourself with it and with the obstacles and opportunities there.
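The arithmetic here can be made concrete with an Amdahl’s-law-style toy model (my framing for illustration, treating r·c as a single combined parameter; the parent comment’s exact ontology may differ): a fraction of the task is automated, but the human still effectively re-processes the automated work at combined fraction-times-cost r·c.

```python
def effective_speedup(automatable: float, rc: float) -> float:
    """Toy model: fraction `automatable` of a task is automated, but the
    human still spends time on the automated portion at combined
    review-fraction-times-relative-cost `rc` (= r * c)."""
    human_residual = 1.0 - automatable   # work that was never automatable
    review_cost = automatable * rc       # human time spent on "automated" work
    return 1.0 / (human_residual + review_cost)

# Naive reading: 95% automatable -> 20x speedup.
print(effective_speedup(0.95, 0.0))   # 20.0
# With r*c = 0.5, the speedup collapses to under 2x.
print(effective_speedup(0.95, 0.5))
```

Under these assumed numbers, 95% automation with r·c = 0.5 yields 1/(0.05 + 0.475) ≈ 1.9x, matching the “less than a 2x speedup” claim above.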
Cf. https://www.lesswrong.com/posts/yCjDGmwQhS7hjEKk5/the-ease-disease
A thing I imagine some people miss about near-term non-X-risk impacts of gippities: If 95% of some task is automatable, that doesn’t necessarily mean you can speed it up 20x using automation. That may sound strange, but consider this: there’s a lot of hidden value in humans having context loaded up. If the non-automatable 5% is really important, then you still want the human doing that 5% well. For the human to do it well, the human may have to have deeply reviewed many parts of the 95%. For example, even if some judge’s ruling will only directly warrant some obvious automated next response, that doesn’t mean the human can just skip reading the ruling; some aspects of it may inform the deeper legal strategy. Or something. Similarly, if the human needs to make many of the deepest architectural choices in a big software project, the human may have to be well familiar with the constraints of many specific elements of the project, even if the immediate functionality of those elements could easily be implemented by gippity coding.
(None of this strongly implies there won’t be some huge effects from gippities, and none of this bears much on actual AGI.)
Herasight Health is Herasight’s new “health only” product that is less expensive than their standard $50k product, but doesn’t include screening for IQ,
Noting that this is a red flag, according to me. Acknowledging that this is easy for me to say as someone who is not trying to make a startup work, I think it’s bad for reprogenetics clinics in general to be withholding something major like IQ screening from the (“mere”) $20k customers.
This is much more important in the long run compared to the short run, so assuming this is a temporary thing that will change soon, it’s not that big of a deal. I’ve always assumed that, initially, advanced reprotech will be quite expensive, and wealthy people will get access first, and then over time the price will drop due to innovation; and furthermore, innovation includes paying off previous expenditures for research, as well as innovation in building out the business in general (not just the science).
However, a big part of the social contract around reprogenetics is that it will be maximally accessible in the long-run, and in particular, it won’t create increasing inequality due to economically important traits like intelligence being hoarded. For this reason, assuming it’s the case that IQ screening is basically zero additional marginal cost (for a client who’s already purchasing the Health product), and given that IQ is such an important trait, the lion’s share of the benefits of IQ screening should be made as accessible as possible as soon as possible. I think that making a sharp cutoff, where anyone below the $50k level gets no IQ screening, is probably unnecessary (admitting that I don’t actually know the business situation) and is probably too costly in terms of acting out what appear to be the beginnings of a bad longer-term policy.
Even if the business situation does make a sharp cutoff necessary for now, that would be hard for outsiders to know; and in any case, I would like to apply societal pressure to reprogenetics clinics to get rid of the cutoff ASAP. In general, there should be a culture of innovation and accessibility.
(To be clear, it’s not solely the fact that it’s zero marginal cost; it seems perfectly fine for a company to make money by charging for access to software etc. But IQ specifically, as well as the largest expected impact diseases, shouldn’t be sharply withheld.)
It seems, from the sidelines, like there should be a lot of options whereby a reprogenetics company could upsell to people with money. Besides the normal stuff (concierge and custom service, better sequencing and phenotyping, etc.), they could e.g. offer the most up-to-date IQ predictors in the most expensive product, and have a previous open-source model used for the less expensive product that gets most but not all of the predictive power.
tradeoff choices
This doesn’t help directly, but I just wanted to note that, as stronger reprogenetics in general is developed, most tradeoffs will go away. (Cf. https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html#strong-gv-and-why-it-matters ) Many / most of the traits of interest (disease traits, cognitive capacities) are uncorrelated / weakly correlated with each other, and most of the weak correlations are in the non-antagonistic direction (e.g. low disease risk usually correlates slightly positively between different diseases). That means you can just get very low disease risk across the board, and whatever cognitive capacities can be upregulated, all at the same time.
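The claim that weak, non-antagonistic correlations don’t force tradeoffs can be sketched with a toy simulation (my illustration only; the trait count, the 0.1 correlation, and the 10-embryo batch are arbitrary assumptions, not estimates from real genetic data): selecting the best-overall of N simulated embryo trait scores improves every trait at once, on average.

```python
import math
import random

random.seed(0)
N_TRAITS, CORR, N_EMBRYOS, N_FAMILIES = 10, 0.1, 10, 2000

def embryo_scores():
    # Equicorrelated trait scores: a shared factor plus independent noise
    # gives pairwise correlation CORR between any two traits.
    shared = random.gauss(0, 1)
    return [math.sqrt(CORR) * shared + math.sqrt(1 - CORR) * random.gauss(0, 1)
            for _ in range(N_TRAITS)]

totals = [0.0] * N_TRAITS
for _ in range(N_FAMILIES):
    embryos = [embryo_scores() for _ in range(N_EMBRYOS)]
    best = max(embryos, key=sum)  # select the best-overall embryo
    totals = [t + b for t, b in zip(totals, best)]

avg_gain = [t / N_FAMILIES for t in totals]
# With weakly positively correlated traits, every trait improves on
# average relative to the random-draw baseline of 0.
print(all(g > 0 for g in avg_gain))
```

With antagonistic (negative) correlations this would not hold, which is why the direction of the weak correlations matters for the “no tradeoffs across the board” point.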
There would still be some tradeoffs:
It might be very hard to know all of the important effects of genes. There might therefore be some downside risk to affecting too many genes (hence moving outside of the envelope of natural human genomes). If you push some trait around, you could be causing some weird bad effect that wasn’t measured or that only shows up when you’re pushing fairly far.
Some traits are intrinsically pleiotropic for life outcomes. E.g. if you increase a future child’s likely degree of interest in math, you’re kinda necessarily slightly relatively decreasing their interest in other things.
being the reason his life is worse.
One way to think about it is comparing to the alternative of not choosing. There are tradeoffs, but it’s a net improvement over the alternative of a random genome. By going through substantial inconvenience in order to make some genomic choices on behalf of your future child, you’re giving them that gift. Maybe it’s not the optimal gift, but it’s still a supererogatory gift.
If I pick the former and the die roll goes badly, I know I’m directly responsible for that, for the rest of my life. If I pick the latter, then, every single time my kid misses the game-winning catch by a hair, or gets waitlisted to his dream school, I think about my paranoia being the reason his life is worse.
Are you making sure to also account (both in reasoning and in intuition / emotion) for the upsides in both cases? If not, then you’re applying a sort of Copenhagen ethics, where you’re automatically punished for taking responsibility, compared to leaving it up to random chance.
I think a lot of people do apply Copenhagen ethics, and I sometimes do too, so it’s not crazy. But my experience has been that sometimes it becomes just totally untenable to have no one taking responsibility, so you have to do it yourself. And then you have a bunch of difficult fast-paced choices where any option can be criticized. I think this situation is probably basically necessary as a parent (though I’m not a parent)? Like, there’s just a ton of choices (what food? what water filter? what books? where live? what about phone / screens? school? rules? enforcement? etc. etc.), and you have to make some choice, and they all have flaws, but it’s ok, you’re doing your best—that’s the standard, not “was I totally blameless for all bad outcomes”.
being the reason his life is worse.
Another way to think about it is to make your best guess at what he would want. Of course, this doesn’t really answer any concrete question, but it’s maybe a slightly different stance. If you could, you should give him control over his own genome; but that doesn’t make sense because he doesn’t exist yet. ( https://berkeleygenomics.org/articles/Genomic_emancipation.html#appendix-the-origins-of-souls ) One way to try to compute what he’d want is to ask what you’d wish your parents would have done for you.
I guess generally, ideally, there’d be lots of events generally falling under “really good debates trying to innovate on having good debates”, and they could try different things. Personally I wouldn’t want to enforce a strict, totalizing schedule in most events, because I think there’s some really important value when one or both interlocutors get frustrated with the discourse and then jump out of the loops and are like “wait stop, stop everything, let’s clarify THIS”, and then you might make progress.
That said, I like the idea of having some timeboxes thrown in, but I would do them somewhat more like Congressional hearings or cross-examinations. Each interlocutor gets one or two turns to cross-examine the other. During that 20 minutes, they have total control, in the sense that they can cut off the other speaker, speak as much or as little as they want, and ask questions / direct the conversation as they like. (But importantly it is of course symmetric, i.e. they take turns.)
prevent most large-scale investment into this kind of research
Seems plausible. At a guess, the effect of that would be to barely or at best somewhat slow down the actual core research leading to AGI, while greatly slowing down any visible impacts and last-mile research. (I’m unsure because IDK how much of the current influx of resources gets directed to [real capabilities research, as defined presuming my mainline model where learning programs as of 2026 aren’t all that relevant to AGI], and I’m unsure what the effect on research is of current big labs blundering around and showing more clearly what the limits of current learning programs are.)
Ot1h, you would get more warning of AGI coming in the sense that people who deeply understand the related fields (e.g. let’s say, vaguely, cognitive science, “technical philosophy/epistemology”, algorithms) might be able to call that we’re actually getting most of the relevant understanding. Ototh, you would get less warning in the sense that you wouldn’t be getting big new coalescences of economic productivity or scary demos.
Cf. Red vs. Blue here: https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts#Red_vs__Blue_AGI_capabilities_research
I kinda agree, and in general I definitely agree it’s probably much harder than in the case of pandemics. Partly I think this is something we can change (by explaining risks and offering alternatives). On the other hand, as Critch points out, we do have quite strong and functionally operative intuitions around cutting down the tall poppies. In many cases this is a bad intuition, but it’s not 100% wholly without merit in general, and in this case it would fit well.
I slightly agree that the term and concept “superbabies” has some affinity with eugenical thinking, and is kinda bad (see my other comments on this post).
Note that it is, and very much should be, parents who are ranking the embryos. Clinics provide predictions broken out by each specific condition or trait; parents make choices based on that, however they see fit.
I’m not totally sure what you’re saying here, and would be curious for you to unpack it. A guess I’d make is that you’re thinking something like: “Social/economic incentives about traits/types of people will push parents to make genomic choices for their future children that respond to those incentives; in particular, whatever the strongest incentives are, those incentives would induce all parents to make the same genomic choices in response. In effect, this is some single force (the aggregate of the main economic incentives) making a decision about what sort of person should exist that gets applied uniformly across all of society. That’s basically eugenics.” Is that close? How would you correct this?
If that’s roughly what you’re thinking: I almost agree that this is a kind of eugenics, but I don’t quite agree. Eugenics was a highly varied ideology so it’s hard to analyze, but my attempt is here: https://www.lesswrong.com/posts/yH9FtLgPJxbimamKg/genomic-emancipation-contra-eugenics#The_Eugenical_Maxim_as_the_shared_moral_core_of_eugenics_ My hypothesis there is basically that eugenics largely boils down to “There are Good and Bad traits (a single universal concept); they’re important; so we should push for all children to have Good traits.”. So, the uniformity is important, but at least according to me, truly eugenical thinking is about justifying a universal application of one standard of Good traits based on conceiving of a single universal notion of Good traits. Responding to incentives could (if extremely pervasive) be a uniform/universal application of one standard, but it’s not necessarily being justified that way. This matters because the justification is where much of the really bad stuff comes from. If you put a lot of stock in your single universal notion of Good traits, then you can justify imposing that notion on other people, even using force.
That said, a free market could result in that universal notion of Good traits being justified as such—in other words, people could start blaming children for being economically unproductive / burdensome because they weren’t sufficiently genomically optimized. That would start to shade into “soft eugenics”, which is still less bad than coercive eugenics, but is bad, and could then shade into coercive eugenics. For example, people could say “why should we provide you healthcare, when your disease is genetically preventable and your parents refused to prevent it for no good reason”. (In an absolute sense, in the grand scheme of things, I don’t think this is that much of a problem; there’s really a ton of latitude (legal and social) in our society to take a huge variety of approaches toward life and child-rearing, though things seem like they’ve gotten significantly worse in the past decades—but as long as there is this store of liberty, I think the consequences aren’t that bad. I’m also skeptical that how much healthcare our society provides is all that linked to which options specific parents would have had to avert sickness.)
Now, even if you agree with my analysis about eugenics and eugenical thinking, you might still be concerned simply about any sort of pressure that applies some human-judgement standard to the genomes of future children. E.g. you might think that our notions of “intelligence” are so deeply flawed that a kid selected to have a higher IQ in expectation would also tend to have some bad quality in expectation. Or you might be concerned about negative consequences of uniformity, regardless of the attendant social attitudes (eugenical or not). E.g. you might think that selecting for IQ would make kids who all have “the same kind of intelligence”, and that would be bad. Are these your concerns?