Thoughts on minimizing designer baby drama
I previously wrote a post hypothesizing that inter-group conflict is more common when most humans belong to readily identifiable, discrete factions.
This seems relevant to the recent human gene editing advance. Full human gene editing capability probably won’t come soon, but this got me thinking anyway. Consider the following two scenarios:
1. Designer babies become socially acceptable and widespread some time in the near future. Because our knowledge of the human genome is still maturing, they initially aren’t that much different than regular humans. As our knowledge matures, they get better and better. Fortunately, there’s a large population of “semi-enhanced” humans from the early days of designer babies to keep the peace between the “fully enhanced” and “not at all enhanced” factions.
2. Designer babies are considered socially unacceptable in many parts of the world. Meanwhile, the technology needed to produce them continues to advance. Eventually people start having them anyway. By then the technology has matured to the point where designer babies clearly outclass regular babies at everything, and there’s a schism between “fully enhanced” and “not at all enhanced” humans.
Of course, there’s another scenario where designer babies just never become widespread. But that seems like an unstable equilibrium given the 100+ sovereign countries in the world, each with their own set of laws, and the desire of parents everywhere to give birth to the best kids possible.
We already see tons of drama related to the current inequalities between individuals, especially inequality that’s allegedly genetic in origin. Designer babies might shape up to be the greatest internet flame war of this century. This flame war could spill over into real-world violence. But since one of the parties hasn’t arrived at the flame war yet, maybe we can prepare.
One way to prepare might be differential technological development. In particular, maybe it’s possible to decrease the cost of gene editing/selection technologies while retarding advances in our knowledge of which genes contribute to intelligence. This could allow designer baby technology to become socially acceptable and widespread before “fully enhanced” humans were possible. Just as with emulations, a slow societal transition seems preferable to a fast one.
Other ideas (edit: speculative!): extend the benefits of designer babies to everyone for free regardless of their social class. Push for mandatory birth control technology so unwanted and therefore unenhanced babies are no longer a thing. (Imagine how lousy it would be to be born as an unwanted child in a world where everyone was enhanced except you.) Require designer babies to possess genes for compassion, benevolence, and reflectiveness by law, and try to discover those genes before we discover genes for intelligence. (Edit: leaning towards reflectiveness being the most important of these.) (Researching the genetic basis of psychopathy to prevent enhanced psychopaths also seems like a good idea… although I guess this would also create the knowledge necessary to deliberately create psychopaths?) Regulate the modification of genes like height if game theory suggests allowing arbitrary modifications to them would be a bad idea.
I don’t know very much about the details of these technologies, and I’m open to radically revising my views if I’m missing something important. Please tell me if there’s anything I got wrong in the comments.
That’s not going to help you much. Some mothers want their daughters to have really big brains, and some mothers want their daughters to have really big boobs...
Oh, dear. Let’s not go there. I bet the first genes “required by law” will be loyalty and obedience to authority.
That isn’t what they’ll be called. They’ll be “just basic fixes to prevent antisocial personality disorder.” And whenever there’s a new fix that needs to be marketed, the obvious route will be to pathologize the current state (if it isn’t already).
That kind of approach is not just good for publicity, it helps pressure health insurers to pay for the procedures.
Of course. See this :-)
Remember there’s not a trade-off here. I think most people want their kids to be at least a little bit smarter than they are. And if parents know that all the other parents are maxing out the “intelligence” dial, I think they will want to as well. Parents compete against one another to have the best kids. Maxing out the “intelligence” dial is an easy way to compete.
It’s true that any laws about what genes to give your baby might end up being bad laws. So it’s possible that we’d be better off with no laws at all. I think most parents would prefer not to give birth to a psychopath, so maybe it’s not necessary to have a law against giving birth to a psychopath. I know I would prefer to have a kid who is more compassionate and benevolent than I am, but I’m not sure if this tendency is universal.
One idea is to make certain changes mandatory only if you’re having a designer baby, and not make having a designer baby mandatory.
I don’t know about that. I feel pretty certain there will be trade-offs, we don’t know yet of what kind and how severe.
So, I switch on a TV and lo and behold! The Kardashian sisters! They clearly won at life. So if I want my daughters to win at life like them, what do I maximize? X-)
In your social circle intelligence is paramount, in others—not necessarily.
The opposite of obedient to authority is not “psychopath”, it’s “troublemaker”. Like Richard Feynman, for example.
It looks as if you’re responding to JM4 as if he’s claiming “it’s OK to have laws requiring babies to be tweaked for obedience to authority, because that’s how we avoid getting psychopaths”. But it looks to me as if he’s actually claiming “There won’t be laws requiring babies to be tweaked for obedience to authority—the populace wouldn’t stand for it. What there might be is laws requiring them to be tweaked for not-being-psychopaths, and that would be OK”.
It’s perfectly reasonable to be concerned that what there would actually be is a law called the Psychopathy Prevention Act whose text, when read carefully, in fact requires babies to be tweaked for obedience to authority. But if indeed that’s what would happen then JM4′s problem is not that he approves of enforcing obedience to authority but that he is making a wrong prediction about future politics.
I mostly have slippery slope concerns here. Once you’ve given the government the right to choose some—any—characteristics of your children, I fear the list of these characteristics can only expand.
You propose several ways to “minimize designer baby drama”, but you have ignored the potential costs of these interventions.
I see a few large costs of your proposals:
1) Reducing the potential intelligence of a number of future individuals
...
2) Monetary cost and deadweight loss from taxes
3) Parent preference frustration
...
...
Still, probably better than civil war.
Better than what probability of civil war?
Civil wars have been fought over class before, and I think different classes diverging into different species would massively raise the probability of this. I’m really not sure how to put a number to this, but I’d guess somewhere around 10% for the US, and around 90% for terrorism, but not tanks on the street.
The probability would be higher for less stable countries.
The main thing I’m optimizing for is minimizing existential risk. I acknowledge the costs you describe are significant.
Does anybody have a guess as to when the technology will reach a stage where a human designer baby with genes added to a single chromosome will cost $20,000 or less, without meaningful medical problems from the procedure?
When? At least ten years ago.
The dairy industry routinely carries out genomic selection on cow embryos resulting in vastly modified individuals showing traits that would rarely, if ever, be expressed in the wild: http://www.sciencedirect.com/science/article/pii/S0022030209703479
If carried out in humans this type of radical genetic engineering would be considered ‘designer baby’ technology in every meaningful sense. How much does it cost? Not $20,000. More like $20. By the way, in-vitro-fertilization is done as well, and that’s also cheap (and no, this isn’t because of lowered safety or hygiene standards—the labs in which these procedures are carried out are as modern, safe, and sterile as any human-oriented lab; and they’d have to be since cattle are fairly expensive assets).
If you’re just talking about designer babies, then we can already do that. If you’re talking about the specific issue of splicing genes, we might still be a ways off, depending how you define ‘meaningful.’
In mice, of course, gene splicing is commonplace and pretty cheap, although in the case of laboratory mice there is the freedom that you can abort the fetus at any stage if it shows signs of improper development, and in gene splicing experiments you typically have to abort a lot of fetuses.
I don’t think of selection as genetic design. Sperm banks have done selection since their inception. It would surprise me if there isn’t a sperm bank out there that already does 23andMe-type genetic screening.
Still, $20 doesn’t buy you an hour of qualified physician time, so the price of the procedure is likely higher.
Around five years ago I heard from a professor that the cost of creating a new gene-manipulated mouse is roughly a PhD project, and mice die in the process.
I suggest reading the link I posted plus other material on genomic selection. The type of genomic selection used in the dairy industry is very different from simple 23andMe type genetic screening.
I read the abstract. The link says they cover 50,000 SNPs with their chip in the dairy industry. 23andMe covers hundreds of thousands of SNPs, according to their website.
They do some statistics with that data to see which SNPs are important, but otherwise I don’t see what they do much differently.
High SNP numbers are misleading and have little predictive value. But that’s not the important bit; the important bit is how this information is actually used.
Poll? (set 2015, 9999 if you just want to see results; 9999 also stands for “never”).
Year I expect this technology earliest [pollid:958]
Year I expect this technology latest [pollid:959]
Thanks for including a “just want to see the results” option, though a separate question for that might have been better. I assume the 9999s are screwing up the means.
Would anyone who thought the tech would be available in the fairly near future be willing to explain their line of thought?
http://lesswrong.com/lw/m6b/thoughts_on_minimizing_designer_baby_drama/cd1x
I thought about that—but then some value still would be needed for the numeric result.
The mean you mean I guess.
I recommend looking at the raw numbers—it is easy.
You’re right. I meant the mean—I’ve corrected it.
Some results after looking at the data (29 votes):
Two people (7%) apparently believe that the technology is possibly already there (by giving least values <2015).
Two people (7%) believe that this technology will never be possible, even under the best circumstances (by giving a least estimate of 9999).
About 30% (10/29) believe that this technology might never arrive (by giving an upper bound of 9999).
At the beginning the effect of designer babies will be unclear. It might take two decades to see whether they are much more capable than other humans.
Different designers are also likely to try different ways to enhance humans. Even if every human is enhanced, that doesn’t mean there are no differences. Some designers will argue that humans should be able to produce their own Vitamin C, while others will consider Vitamin C production superfluous.
There will be a trade off between experimental design where the result isn’t clear beforehand and safer approaches.
I found an article that suggests that removing mutational load would significantly improve traits, especially intelligence. Mutational load is all the random mutations you carry, many of which cause slight detrimental effects.
I don’t think this would be anywhere near as controversial, and wouldn’t decrease the existing genetic variation.
I don’t know if that has been pointed out, but it has been done only recently and with moderately bad results …
It could become a thing if every human on the planet didn’t go crazy at every mention of “gene editing” (or simply “gene,” for that matter, as 80% of Americans support the labeling of DNA-containing foods …).
This kind of development would be … strange. The generations of semi-enhanced humans would indeed feel rather strange.
But I have to point out one thing: genes don’t work like that. You don’t have one (or a few) genes for height, genes for psychopathy, genes for intelligence, compassion, benevolence, or reflectiveness … Each of those groups has more than one single effect. Modifying such genes (or the corresponding regulatory regions) has far more than one effect, and the result would be much harder to guess than our imaginations suggest when we hear “gene editing.”
I’m dubious about designer babies being an existential threat (is anyone arguing that?), even after they grow up. Designer microbes are much scarier.
However, one real problem might be just having more variation among people, and having to invent ways for them to get along well with each other, if this is even possible.
How about people who aren’t susceptible to superstimuli? That could lead to quite a cultural divide.
As technology advances, I expect wars will get more devastating. So things that cause divisive conflict, e.g. trouble in the Middle East, will constitute stronger existential threats. I figure LW isn’t well positioned to tug ropes in the Middle East, but there might be opportunities to gain leverage over designer baby related stuff if no one is paying attention because it’s still decades out.
Interesting, want to elaborate? When I think of modern people who aren’t susceptible to superstimuli, I think of Buddhist monks. It doesn’t seem like they are causing too many problems.
The -potential- for war to be more devastating will increase (the link does a good job justifying this position, and there are other reasons to believe this), but this is not the same as war actually getting more devastating.
Sure.
I didn’t think it out in detail, but it seems to me that people bond over superstimuli, so people who are immune to superstimuli and people who are subject to superstimuli will have a harder time feeling connected to each other.
I think it’s more likely to be possible to create people who are immune to a list of superstimuli rather than people who are immune to all superstimuli.
People bond over strong shared emotions.
If you are willing to equate superstimuli to strong emotions then “people who are immune to superstimuli” are common—we call them “depressed”, “anhedonic”, and “possessing a flat affect”.
I expect that after an initial short transient, widespread availability of human genetic enhancement will lead to less variability rather than more: some modifications will be mandated by the law, other will be prohibited, and among those which are voluntary, most people will go for some popular set of modifications.
Increased variation could be possible if there is a high price difference between techniques (this could be caused by patents, for instance). In that cases, the rich will become superhumans while the poor remain unenhanced or only get basic enhancements. After some time, the rich and the poor could effectively become separate species.
It’s hard to say because of advantages to specialization. I think it’s very unlikely that you would be able to just get all the gains without trading off something in other areas. This implies (to no surprise) that people genetically specialized for a particular class of tasks will be superior at them. And this implies that people, given the chance, will genetically specialize. To what degree is an interesting question.
It can’t, actually. Medical patents are already borderline in terms of “political viability”. A system of patents that gave the rich this kind of advantage would result in the end of patents. Heck, it is already law in many places that you cannot hold IP in human genes.
Perhaps. But never underestimate the political clout of the wealthy.
In this case they would have to change already existing law in a way that is blatantly against the interests of the majority, and manage to do it globally—because if any country defects from a policy of limiting top mods to the upper class, every country has to, or get buried 20 years later. This is not a winnable political struggle.
150,000 people die each day, which is a strong reason to speed up technological progress. The problem with EMs is that they are likely to quickly lead to a fairly fast take-off, possibly leading to an unfriendly singleton. But this is not a problem with designer babies.
I can’t imagine mandatory designer babies coming first, and I personally would be morally uncomfortable with this.
I think it more likely that designer babies will first be illegal/unregulated, with legalisation/normalisation coming once these children become part of society.
Even at a late stage mandatory enhancement is unlikely—by comparison, vaccines are not mandatory. But, having a certificate that you do not have genes for violence could be attractive to, for instance, romantic partners.
The exception to this would be if the ‘archipelago’ or ‘patchwork’ model of government as city-states takes off, in which case the minority of people who believe in mandatory DBs might start their own city-state.
If this were a sci-fi story, the highly advanced culture of pacifism would be attacked by the less advanced people who are willing to fight. Of course, by this time warfare would be largely automated anyway, so this probably wouldn’t be a problem.
Yep. But the universe is huge, and it will be around for a long time, which, in my mind, is an even stronger reason to get technological progress right and not destroy ourselves. That’s why I consider conflict avoidance to be a higher priority than speed of technological advance.
That’s fair. I added that bit as an afterthought without thinking about it as hard.
Indeed, however this is dependent upon utility function—many people value the people who are alive now to an extent that cannot be compensated by future lives, even if there may be many orders of magnitude more people in the future. If everyone could co-ordinate and decide to develop disruptive technologies slowly then the future would be a lot safer, but realistically this is unlikely to happen in most cases.
AGI might be an exception, as it might be such a hard problem that anyone who might solve it will understand the danger it poses. There’s no first-mover advantage to being the first to develop clippy. But genetic engineering is far simpler and far safer, and since some actor is bound to develop it, it’s in everyone’s interests to develop it first.
So I see what you mean in principle, but in practice I think the co-ordination problem is too hard.
Sure. One thing I might mention to someone with that utility function is that if humanity gets destroyed by an enhanced psychopath, that will probably happen right around the same time that enhanced scientists would be working to speed technological progress. So even someone with a relatively myopic utility function will in many cases favor caution.
Clearly there are a lot of people very interested in the ethics of genetic enhancement. The current consensus among the scientific community in the West seems to be that enhancing kids is totally unethical, and gene modification techs should only be used to fix genetic diseases. In other words, currently in the West at least, there is a very strong (and effective, within the West) attempt being made to enforce coordination on this problem.
I think the current coordination strategy is a fairly hopeless one, for reasons I outlined in my post. All I’m trying to do is improve on it. Do you think I’ve succeeded there? Can you think of an even better coordination strategy than mine? The thing I like about my idea is that it doesn’t require total coordination. It just requires that some things get discovered before other things, which is something that individual scientists can affect.
I agree that affecting the future is hard. But from my perspective (and the perspective of many other people who do think future lives are very important), it’s worth attempting even if it’s hard. If you’re the kind of person who gives up when faced with hard challenges, that’s fine; I guess we’re just different in that way. “Shut up and do the impossible” and all that—the logic is similar to that of FAI. (Challenges can be exciting; easy video games aren’t always very fun.) And in some cases things can be surprisingly possible (for example, it’s surprisingly easy to find the email addresses of prominent scientists online, and they also have office hours).
I appreciate specific criticisms but if you’re just going to be generically demoralizing, I don’t usually find myself getting a lot out of that.
Why are you calling your suggestions a “coordination strategy”? As far as I can see you are suggesting top-down policies enforced by the usual state enforcement mechanisms. You are talking in the language of “require”, “forbid”, “regulate”—that’s not coordination, that’s the usual command-and-control.
Connotations again...
If the cooperative thing to do is to have a nice medium-height kid, and the selfish thing to do is to have a mean tall one, then in principle you can “command-and-control” people to cooperate. Standard prisoner’s dilemma scenario.
I didn’t think about the legal part very hard; it was an off-the-cuff idea. Feel free to come up with something better or explain why laws are unnecessary.
For example, maybe people will choose the benevolence of their kid in far mode and make them nice because that’s socially desirable and an easier job for them as a parent.
LW is a biased sample but it’s better than nothing. I would prefer to have a kid that’s...
[pollid:963]
No. You can force people to do something you want. That’s not cooperation at all, that’s just plain-vanilla coercion.
I’m using the word “cooperate” in the technical sense of “cooperate in a prisoner’s dilemma”. In this sense it’s possible for an outside force to coerce cooperation, in the same way that e.g. the government forces your neighbor to cooperate rather than defect and steal your stuff, or anti-doping agencies force athletes to cooperate in the prisoner’s dilemma of whether to use performance-enhancing drugs.
For the technical sense of “cooperate in a prisoner’s dilemma” you need to have a prisoner’s dilemma situation to start with. Once you coerce cooperation you have effectively changed the payoffs in the matrix—the “defect” cell now has a huge negative number in it, that’s what coercion means. It’s not a prisoner’s dilemma any more.
Huh? Why do you think I’m in a prisoner’s dilemma situation with my neighbour?
If you make your child taller, your child is better off (+competitive advantages, -other disadvantages of being taller) and your neighbor’s child is worse off (-competitive advantages).
If your neighbor makes his child taller, his child is better off and yours is worse off.
If you both make your children taller, the competitive advantages cancel out and you each have only the disadvantages.
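The “taller child” scenario above has the standard prisoner’s dilemma structure, which can be sketched in a few lines. The specific payoff values here are made-up assumptions chosen only to reproduce the ordering described (a competitive edge over a shorter rival, minus a fixed cost of extra height):

```python
# Illustrative prisoner's dilemma for the "taller child" arms race.
# Payoff values are hypothetical: being tall gives a competitive edge
# over a short rival, but carries a fixed cost regardless of the rival.

COMPETITIVE_EDGE = 2
HEIGHT_COST = 1

def payoff(my_choice, their_choice):
    """Return my child's payoff; choices are 'tall' or 'short'."""
    score = 0
    if my_choice == "tall":
        score -= HEIGHT_COST
        if their_choice == "short":
            score += COMPETITIVE_EDGE
    elif their_choice == "tall":
        score -= COMPETITIVE_EDGE  # rival's edge is my child's loss
    return score

# Whatever the neighbor does, choosing "tall" pays better...
assert payoff("tall", "short") > payoff("short", "short")
assert payoff("tall", "tall") > payoff("short", "tall")
# ...yet mutual "tall" leaves both worse off than mutual "short":
assert payoff("tall", "tall") < payoff("short", "short")
```

A “command-and-control” law in this sketch just subtracts a large penalty from the “tall” rows, which removes the incentive to defect.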
Being tall is not a disadvantage even if you take away “competitive advantages” (normally tall, not freakishly tall). An arms race is a different situation than a prisoner’s dilemma.
The original claim was that the neighbor might “steal your stuff” which isn’t a prisoner’s dilemma either.
And most importantly, I do have neighbors. I don’t feel I am in a prisoner’s dilemma situation with them and I suspect they don’t feel it either.
Because the government altered the payoff matrix making cooperation individually preferable to defection.
Imagine you were a hunter-gatherer: within your tribe, a system of reputation and customs, with associated punishments for defectors, tended to enforce cooperation; but different tribes occupying neighboring areas typically recognized no social obligations toward each other, and as a result all encounters were tense and very often violent, and warfare and marauding were endemic.
With a modern government you can interact with most strangers from your country or most other countries with a reasonable expectation that the interaction will be peaceful and productive.
It wasn’t a prisoner’s dilemma to start with. Hunter-gatherers do not live in constant prisoner’s dilemma situations.
I don’t get the LW’s obsession with the prisoner’s dilemma. It’s a very specific kind of situation, rare in normal life. If you have a choice between cooperation and non-cooperation that does not automatically mean you’re in a prisoner’s dilemma.
Prisoner’s dilemma is the simplest idealized form of all scenarios where a group of agents prefer that everyone cooperates with each other rather than everyone defects to each other, but for each individual agent, whatever the other agents do, it has an incentive to defect.
There are other common types of scenarios, of course: in zero-sum scenarios cooperation is not possible: a hunter and their prey can’t cooperate to split calories between each other in a way that benefits both.
In other scenarios, cooperation is trivially the best choice: if Alice and Bob want to move a heavy object from point A to point B and neither is strong enough to move it alone, but they can move it with their combined strength, then they have an incentive to cooperate, and neither has an incentive to defect, since if either defects the heavy object doesn’t reach point B.
These scenarios are trivial from a game-theoretical perspective. The simplest and arguably the most practically relevant scenario where coordination is beneficial but can’t be trivially achieved is the prisoner’s dilemma.
Stag hunts (which are not the same as the hunter/prey scenarios discussed elsewhere in this thread) are another theoretically nontrivial category of coordination games with interesting social/behavioral implications—arguably more than the prisoner’s dilemma, though that probably depends on what kind of life you happen to find yourself in. I don’t know why they don’t get much exposure on LW, but it might have something to do with the fact that they don’t have the PD’s historical links to AI.
I agree that Stag hunt is theoretically and practically interesting, but I would say that it is not as interesting as the Prisoner’s dilemma.
In order to “solve” a Stag hunt (in the sense of realizing the Pareto-optimal outcome), all you need is a communication channel between the players; even a one-shot, one-way channel suffices.
In a Prisoner’s dilemma, communication is not enough, you need either to iterate the game or to modify the payoff matrix.
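The contrast can be checked mechanically with best-reply analysis. The payoff matrices below are illustrative assumptions (standard textbook orderings, not values from this thread): in the stag hunt, hunting stag is a best reply to a partner who hunts stag, so a message “I’ll hunt stag” is self-enforcing; in the prisoner’s dilemma, defection beats cooperation against every opponent move, so a promise alone changes nothing:

```python
# Row player's payoffs, keyed by (my_move, their_move).
# Values are illustrative textbook payoffs, not derived from the thread.
stag_hunt = {("stag", "stag"): 3, ("stag", "hare"): 0,
             ("hare", "stag"): 1, ("hare", "hare"): 1}
pd        = {("coop", "coop"): 2, ("coop", "defect"): 0,
             ("defect", "coop"): 3, ("defect", "defect"): 1}

def best_reply(game, their_move):
    """Return my payoff-maximizing move given the opponent's move."""
    moves = {m for m, _ in game}
    return max(moves, key=lambda m: game[(m, their_move)])

# Stag hunt: cooperation is a best reply to cooperation, so a credible
# message is enough to reach the Pareto-optimal (stag, stag) outcome.
assert best_reply(stag_hunt, "stag") == "stag"
# PD: defection is the best reply to everything, so talk can't help.
assert best_reply(pd, "coop") == "defect"
assert best_reply(pd, "defect") == "defect"
```

This is why iterating the PD or altering its payoff matrix (the “government’s hand” discussed above) is needed, while the stag hunt only requires coordination.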
There are other games that have significant practical applicability, such as Chicken/Volunteer’s dilemma and Ultimatum.
I’m not aware of these links, do you have a reference?
Not offhand, but the PD (specifically, the iterated version) is a classic exercise to motivate prediction and interaction between software agents. I wrote a few in school, though I was better at market simulations. Believe LW ran a PD tournament at some point, too, though I didn’t participate in that one.
I believe it’s because it is at the same time very simple to explain and very interesting.
I think they ran two variations of program-equilibrium PD. I participated in the last one.
I understand that the prisoner’s dilemma is interesting and non-trivial from the game-theoretic perspective. That does not contradict my point that it’s rare in normal life and that most choices people actually make are not in this framework.
Unless the object weighs exactly enough that it requires both of their full strength to move, they both have an incentive to defect (not to put their full effort in, and let the other work harder). Mutual defection then results in the object not reaching point B.
Most scenarios involve some variation. Even the hunter-prey scenario; the herd or hunters could deliberately choose a sacrifice, saving both hunters and prey from running and expending additional calories on all sides, and reducing the number of prey animals, overall, that the hunters would need to eat. (Consider a real-life example of this—human herders, and their herds. Human-herd relationships are more complex than that, but it could be modeled that way.)
Hunter A steals Hunter B’s kills/wives/whatever. Defection pays off. Cooperation always pays more overall, defection pays the defector better. “Government” in this case is tribal; we’ll kill or exile defectors. (Exile is probably the genetically preferable option, since it may result in some of your genes being spread to other tribes, assuming you share more genetics with in-tribe than with out-tribe individuals; a prisoner’s dilemma in itself.)
Pretty much every situation in real life involves some variant on the prisoner’s dilemma, almost always with etiquette, ethical, or legal prohibitions against defection.
Nonsense. First, cooperation does not always pay more, and second, the whole point of the prisoner’s dilemma is that cooperation pays each agent better, conditional on them cooperating. “Overall” is a very nebulous concept, anyway, unless you take the hard utilitarian position and start adding up utils.
If cooperation were that beneficial, unconditional cooperation would have been hardwired in our genes.
Nope, I strongly disagree. To take a trivial example, Alice doesn’t steal Bob’s car because she thinks she’ll be caught and sent to prison. Alice is NOT “cooperating” with Bob, she is reacting to incentives (in this case, threat of imprisonment) which have nothing to do with the prisoner’s dilemma.
“Overall” means “Combining the utility-analog of both parties”, not “More utility-analog for a given party”. With only one hunter, there are fewer kills/less meat overall, at the least.
The incentives are the product of breaking the prisoner’s dilemma—the “government altered the payoff matrix” and all that. Etiquette, ethics, and law are all increasing levels of rules, and punishment for those rules, whose core purpose is to alter payoffs for defection; from as subtle as the placement of utensils at a dinner table to prohibit subtle threats to other guests, and less desirable seat placements as punishments for not living up to standards of etiquette, to shooting somebody for escalating a police situation one time too many in an attempt to escape punishment.
I am not a utilitarian. I don’t understand how you are going to combine the utils of both parties.
With one hunter less, there are fewer kills but fewer mouths to feed as well.
If it’s broken, it’s not a prisoner’s dilemma situation any more. If you want to argue that it exists as a counterfactual I’ll agree and point out that a great variety of things (including ravenous pink unicorns with piranha teeth) exist as a counterfactual.
I’m also not a utilitarian, and at this point you’re just quibbling over semantics rather than making any kind of coherent point. Of course you can’t combine the utils, that’s the -point- of the problem. Arguing that cooperation-defection results in the most gain for the defector is just repeating part of the problem statement of the prisoner’s dilemma.
Please, if you would, maintain the context of the conversation taking place. This gets very tedious when I have to repeat everything that was said in every previous comment. http://lesswrong.com/lw/m6b/thoughts_on_minimizing_designer_baby_drama/cdaa ← This is where this chain of conversation began. If this is your response, you’re doing nothing but conceding the point in a hostile and argumentative way.
Then I have no idea what you meant by “Cooperation always pays more overall, defection pays the defector better”—what is the “more overall” bit?
Yes, and I still don’t get LW’s obsession with it. You are just providing supporting examples by claiming that everything is PD and only the government’s hand saves us from an endless cycle of defections.
I will repeat my assertion that in real life, the great majority of choices people make are NOT in the PD context. This might or might not be different in the counterfactual anarchy case where there is no government, but in reality I claim that PD is rare and unusual.
So Lumifer, I appreciate the time you’ve taken to engage on this thread. I think the topic is an important one and it’s great to see more people discussing it. But...
I agree with OrphanWilde that you would be more pleasant to engage with if you tried to meet people halfway during discussions. Have you read Paul Graham on disagreement? The highest form of disagreement is to improve your opponent’s argument, then refute it. If we’re collaborating to figure out the truth, it’s possible for me to skip spelling out a particular point I’m making in full detail and trust that you’re a smart person and you can figure out that part of the argument. (That’s not to say that there isn’t a flaw in that part of the argument. If you understand the thrust of the argument and also notice a flaw, pointing out the flaw is appreciated.) Being forced to spell things out, especially repeatedly, can be very tedious. Assume good faith, principle of charity, construct steel men instead of straw men, etc. I wrote more on this.
You seem like a smart guy, and I appreciate the cynical perspective you have to offer. But I think I could get even more out of talking to you if you helped me make my arguments for me, e.g. the way I tried to do for you here and here. Let’s collaborate and figure out what’s true!
I value speaking plainly and clearly.
In real life (aka meatspace) I usually have to control my speech for nuances, implications, connotations, etc. It is not often that you can actually tell a fucking idiot that he is a fucking idiot.
One of the advantages of LW is that I can call a “digging implement named without any disrespect for oppressed people of color” a “spade” and be done with it. I value this advantage and use it. Clarity of speech leads to clarity of thought.
If I may make a recommendation about speaking to me, it would be useful to assume I am not stupid (most of the time, that is :-/). If I’m forcing you to “spell things out” that’s because there is a point to it which you should be able to discover after a bit of thought and just shortcut to the end. If I’m arguing with you this means I already disagree with some issue and the reason for the arguments is to figure out whether it’s a real (usually value-based) disagreement, a definition problem, or just a misunderstanding. A lot of my probing is aimed at firming up and sharpening your argument so that we can see where in that amorphous mass the kernel of contention is. I do steelman the opponents’ position, but if the steelman succeeds, I usually just agree and move to the parts where there is still disagreement or explicitly list the conditions under which the steelman works.
In arguments I mostly aim to define, isolate, and maximally sharpen the point of disagreement—because only then can you really figure out what the disagreement is about and whether it’s real or imaginary. I make no apologies for that—I think it’s good practice.
Cool, it sounds like we’re mostly on the same page about how disagreements should proceed, in theory at least. I’m a bit surprised when you say that your disagreements are usually values-based. It seems like in a lot of cases when I disagree with people it’s because we have different information, and over the course of our conversation, we share information and often converge on the same conclusion.
So maybe this is what frustrated me about our previous discussion… I think I would have appreciated a stronger pointer from you as to where our actual point of disagreement might lie. I’d rather you explain your perceived weakness in my argument rather than forcing me to discover it for myself. (Having arguments is frustrating enough without adding a puzzle-solving aspect.) For example, if you had said something like “communism was a movement founded by people with genes for altruism, and look where that went” earlier in our discussion, I think I would have appreciated that.
If you want, try predicting how I feel about communism, then rot13 the rest of this paragraph. V guvax pbzzhavfz vf n snyfvsvrq ulcbgurfvf ng orfg. Fbpvrgl qrfvta vf n gevpxl ceboyrz, fb rzcvevpvfz vf xrl. Rzcvevpnyyl, pbzzhavfg fbpvrgvrf (bapr gurl fpnyr cnfg ivyyntr-fvmrq) qba’g frrz irel shapgvbany, juvpu vf fgebat rivqrapr gung pbzzhavfz vf n onq zbqry. V qba’g guvax jr unir n inyhrf qvfnterrzrag urer—jr frrz gb or va nterrzrag gung pbzzhavfz naq eryngrq snvyher zbqrf ner onq bhgpbzrf. Engure, V guvax jr unq na vasb qvfpercnapl, jvgu lbh univat gur vafvtug gung nygehvfz trarf zvtug yrnq gb pbzzhavfz naq zr ynpxvat vg. Gur vyyhfvba bs genafcnerapl zvtug unir orra va bcrengvba urer.
I don’t know if they are “usually” value-based, but those are the serious, unresolvable ones. If the disagreement is due to miscommunication (e.g. a definitions issue), it’s easy to figure out once you get precise. If the disagreement is about empirical reality, well, you should stop arguing and go get a look at the empirical reality. But if it’s value-based, there is not much you can do.
Besides, a lot of value-based disagreements masquerade as arguments about definitions or data.
Mea culpa. I do have a tendency to argue by questions—which I’m generally fine with—but sometimes it gets… excessive :-) I know it can be a problem.
Well, it’s 2015 and you’re an American, I think, so it’s highly unlikely you have (or are willing to admit) a liking for communism :-)
But the issue here is this: some people argue that communism failed, yes, but is was a noble and righteous dream which was doomed by imperfect, selfish, nasty people. If only the people were better (higher level of consciousness and all that), communism would work and be just about perfect.
Now, if you can genetically engineer people to be suitable for communism...
Judging by the reactions of some people in this thread, for a lot of LWers, their knowledge of game theory starts and ends with PD.
The total payoff—the combined benefits both players receive—is better. This -matters-, because it’s possible to -bribe- cooperation. So one hunter pays the other hunter meat -not- to kill him and take his wife, or whatever. Extortionate behavior is itself another level of PD that I don’t care to get into.
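To make the “total payoff” point concrete, here is a toy sketch in Python. The payoff numbers are hypothetical; only the standard PD ordering matters (temptation > reward > punishment > sucker):

```python
# Toy payoffs (hypothetical numbers): (row, column) utility for each
# pair of choices. Mutual cooperation has the highest COMBINED payoff
# even though defection pays the defector better against a cooperator.
payoffs = {
    ("C", "C"): (3, 3),  # total 6
    ("C", "D"): (0, 5),  # total 5
    ("D", "C"): (5, 0),  # total 5
    ("D", "D"): (1, 1),  # total 2
}

totals = {moves: sum(utils) for moves, utils in payoffs.items()}
assert max(totals, key=totals.get) == ("C", "C")

# The cooperative surplus is what makes side payments ("bribes") possible:
# if Row transfers 2.5 units to Column, and the transfer is enforceable,
# Column's payoff for cooperating (3 + 2.5 = 5.5) now beats the 5 it
# would get by defecting against a cooperator.
assert payoffs[("C", "C")][1] + 2.5 > payoffs[("C", "D")][1]
```

This is only a sketch: it assumes the transfer can be made binding, which is exactly the enforcement problem the surrounding discussion is about.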
Okay. This conversation? This is a PD. You’re defecting while I’m cooperating. You’re changing the goalposts and changing the conversational topic in an attempt to try to be right about something, violating the implicit rules of a conversation, while I’ve been polite and not calling you out on it; since this is an iterated Prisoner’s Dilemma, I can react to your defection by defecting myself. The karma system? It’s the government. It changes the payoffs. So what’s the relevance? It helps us construct better rules and plan for behaviors.
Do you also show up to parties uninvited? Yell at managers until they give in to your demands? Make shit up about people so you have something to add to conversations? Refuse to tip waitstaff, or leave subpar tips? These are all defections in variations on the Prisoner’s Dilemma, usually asymmetric variations.
And I will repeat my assertion that in this conversation, we aren’t having that discussion. It -might- matter in a counterfactual case where we were talking about whether or not PD payoff matrices are a good model for a society with a government, but your actual claim was that PD didn’t apply in the first place, not that it doesn’t apply now.
Sigh. So you’re looking at combined benefits, aka “utility-analog of both parties”, aka utils, about which you just said “of course you can’t combine the utils”.
Bullshit.
Instead of handwaving at each other, let’s define PD and then see what qualifies. I can start.
I’ll generalize PD—since we’re talking about social issues—to multiple agents (and call it GPD).
So, a prisoner’s dilemma is a particular situation that is characterized by the following:
Multiple agents (2 or more) have to make a particular choice after which they receive the payoffs.
All agents know they are in the GPD. There are no marks, patsies, or innocent bystanders.
All agents have to make a choice between the two alternatives, conventionally called cooperate (C) or defect (D). They have to make a choice—not making a choice is not an option, and neither is picking E. In some situations it doesn’t matter (when D is defined as not-C), in some it does.
All agents make their choice without knowing what other agents chose and before anyone receives the payoff.
For each agent the payoff from choosing D is known and fixed: decisions of other agents do not change it. In other words, if any agent chooses D, he is guaranteed to receive the D payoff known to him.
For each agent the payoff from choosing C varies depending on the decisions of other agents. If many other agents also chose C, the C payoff is high, more than D. If only a few other agents chose C, the C payoff is low, less than D (this is the generalization to multiple agents).
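The definition above can be sketched as a payoff function. The numbers here are made up; the only structural requirements are the fixed D payoff and a C payoff that rises with the number of other cooperators:

```python
def gpd_payoff(choice, others, d_payoff=2.0):
    """Payoff under the generalized PD sketched above (hypothetical numbers).

    Defection pays a fixed, guaranteed amount regardless of what others do.
    Cooperation pays in proportion to how many of the other agents also
    cooperated: above d_payoff when most cooperate, below it when few do.
    """
    if choice == "D":
        return d_payoff
    coop_share = others.count("C") / len(others) if others else 0.0
    return 4.0 * coop_share  # exceeds d_payoff when more than half cooperate

# With most others cooperating, C beats the fixed D payoff:
assert gpd_payoff("C", ["C", "C", "C", "D"]) > gpd_payoff("D", ["C", "C", "C", "D"])
# With few cooperators, C pays less than the guaranteed D payoff:
assert gpd_payoff("C", ["D", "D", "D", "C"]) < 2.0
```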
Given this definition (which is, more or less, the basis on which I am arguing in this subthread), this conversation (or any single comment) is nowhere near a PD. Nor are the great majority of real-life situations calling for a choice.
Chicken comes up fairly often and there mutual defection is by far the worst outcome for either party (i.e. if you knew the other guy wanted to defect, you’d cooperate).
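The structural difference from the PD can be checked with toy payoff matrices (row player’s payoffs only; the numbers are hypothetical, only the orderings matter):

```python
# Row player's payoff indexed by (row_choice, col_choice).
pd      = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
chicken = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0}

# In a PD, even if you KNEW the other side would defect, you'd still defect:
assert pd[("D", "D")] > pd[("C", "D")]
# In Chicken, if you knew they'd defect, you'd rather swerve (cooperate),
# because mutual defection is the worst outcome for both:
assert chicken[("C", "D")] > chicken[("D", "D")]
```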
In an even simpler case, if you are a business, trying to cooperate instead of “defecting” will get you charged with anti-trust violations.
True. But challenging somebody to a Chicken-like game in the first place can be modeled as a Defection in a prisoner’s dilemma; you win if they Cooperate and refuse, and both of you are worse off if they also Defect, and agree to the game.
No, it can not—in a PD you make your decision not knowing the other party’s decision. Here if you challenge, the other party already knows your choice before having to make its own.
So get a reputation for being revengeBot?
You’ve Defected, and they’ve Cooperated, the moment you issued your challenge, and they didn’t. They’re now in a disadvantageous position, and you’re in an advantageous position; their subsequent Defection is in a different game with altered payoffs, but it also qualifies as a PD. (You could, after all, Cooperate in the subsequent game, and retract your challenge.)
Prisoner’s Dilemma is generally iterative in real life.
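Since iteration came up: a minimal sketch of an iterated PD, with a tit-for-tat strategy facing an unconditional defector. The payoff numbers are the usual toy values, and the strategies are the textbook ones, not anything from this thread:

```python
def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# (row, column) payoffs with the standard PD ordering.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Against a defector, tit-for-tat is only exploited in the first round:
assert play(tit_for_tat, always_defect) == (9, 14)
```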
Actually some of the disadvantages of being tall would disappear (in the longish run) if everybody was tall. For example, if the average person was 1.90 m, cars would be designed accordingly and wouldn’t be as uncomfortable for people 1.90 m tall.
Top-down policies enforced by the usual state enforcement mechanisms are the typical way people implement coordination.
Err… No.
Top-down policies happen when voluntary coordination fails. They’re generally a sign of disagreement and mistrust: building an edifice of bureaucracy so that everyone knows exactly what they’re expected to do and giving others recourse when they fail to do it.
But voluntary coordination is hard, especially when it involves large groups of people, which is why we invented governments.
I get the idea that FAI takes more intelligence than AGI, as AGI might be brute-forced by reverse-engineering the brain or by evolutionary approaches, whereas de novo AI is far harder, let alone FAI. This would mean that increasing intelligence would make the world safer. I don’t see why enhanced psychopaths are more likely than enhanced empaths.
No, I’m certainly not, however I am realistic and I do prioritise. I don’t think the risk from genetic enhancement is all that great, and indeed it may be a net positive.
Anyway, so I think that mandatory enhancement is not going to be popular. However, other ideas do seem more plausible:
So, this is a reasonable idea. Governments could prioritise research into stopping diseases above increasing intelligence, and indeed this is likely to be the case anyway, as this is less controversial. Increasing compassion or even docility could also be prioritised above increasing intelligence.
This is also a good idea. It seems inevitable that some of the rich will be early adopters before the technology is cheap enough to be made free to all. However, the cost of sequencing has been going down 5x per year, meaning that it is likely to quickly become widely available.
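For scale, compounding the quoted 5x-per-year decline (the starting cost is a made-up round number, and the real decline rate has varied over time):

```python
# Cost after each year at a constant 5x/year decline, starting from a
# hypothetical $1000 procedure.
costs = [1000.0 / 5**year for year in range(5)]
# year 0..4: [1000.0, 200.0, 40.0, 8.0, 1.6]
assert costs[4] == 1.6
```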
Overall, I would say the best strategy seems to be to take a more libertarian than authoritarian approach, but try to funnel money into researching the genetics of various antisocial personality disorders, try to make the technology free, and either don’t patent the genes or ensure that the patents don’t last that long.
I think sequencing is what lets you measure genes, not modify them.
Indeed, but I think it depends whether you used germline selection or germline modification. IIRC, in germline selection you create many embryos, sequence the genes, and select the embryo with the best genes.
Also, if the cost of sequencing goes down very fast, I would have thought this provides some evidence that the cost of modification would drop at a similar rate. Of course, there is already genetic modification of crops—do you know how that has changed in cost over time?
Good point. I don’t know about crops.
Apologies if I’ve been sounding demoralising, that’s not my intention. I think your comments on this subject are interesting, and I’ve upvoted them, but since I find I have more to say about points I disagree with than points I agree with, in general I might tend to sound more critical than I actually am.
I’ll reply to the rest of your comment later, and find something positive to say.
Imagine the same people who say that anyone who doesn’t support a minimum wage or anyone who supports sweatshops lacks compassion for the poor, also deciding that babies have to be designed with extra compassion.
Ideally we can convince thoughtful people involved in this field to suggest a set of regulatory guidelines.
Take a look at real-life political processes.
Especially processes in countries like China that are likely more open to experimentation.
http://slatestarcodex.com/2015/03/27/highlights-from-my-notes-from-another-psychiatry-conference/
There are lots of people who view scientists favorably. Global warming would be a prominent case of scientists saying “we need to do X” and a large social movement forming around their advice. It used to be illegal to teach evolution in some states; now it’s illegal to teach creationism. You probably think the pro-science side is weak because you (like me) are a member of the pro-science superorganism, and one of the things superorganisms do in order to fire their members up is paint themselves as underdogs who are struggling against overwhelming forces of evil. If we were religious we’d probably be complaining about the War on Christmas or something.
Do you want to be specific about what failure modes you foresee? It’s frustrating to see generic cynicism rather than concrete failure modes. Generic cynicism is just demoralizing. Concrete failure modes can be debugged.
I don’t think the “sides” here will have “pro-science” and “anti-science” labels. The dispute is going to be, as usual, about power, money, and values.
Sure. CASSANDRA MODE! X-)
I foresee first a blanket prohibition based on “we have a nice business going on here, this shit sounds scary and, worse, capable of inducing serious socio-political changes—let’s just forbid it all”. Then I foresee a gray market developing for the children of the rich and famous, with the regulators turning a blind eye to it. Then I foresee the gray market becoming so widespread it will be impossible to ignore any more, so the regulators will come up with regulations to safeguard public safety and morality. These will initially take the negative form of bans on certain types of modifications. Eventually they will add mandatory modifications (“This is just like vaccines! Are you anti-science?! Don’t you want the best for your children?”), which is where things will start to get really iffy. If society manages to get through this without the wheels flying off (I’m not holding my breath), we’ll probably get to the “everything not forbidden is mandatory” stage.
Is that specific enough?
I think it’s pretty likely that things will play out the way you describe. I’m wondering how it’s possible to improve on that, if at all.
Of course the “pro-science” side looks like it’s winning. One aspect of winning is the ability to have yourself declared “pro-science” and even have government agencies re-write past data to support your argument.
To get a more objective view, let’s look at past government interventions. The history of government nutrition policy, for example, is full of the “pro-science” side winning with bad science and imposing it on the public.
Searching for other takes on that link shows that that rewriting of historical temperatures was announced in advance, is routinely done, and didn’t actually affect how much evidence there was for global warming in general, even if it seemed to do so in a cherry-picked case.
Well that lowers how much credit I assign atmospheric science even more.
A skeptical take on that link
Having thought about this for a while, I think a moderately safe thing to do (once it is actually possible to control the biological outcomes with some vague notion of the likely long-term phenotypic results) is to offer financial subsidies to help improve the future generations of those least well off under the current regime, especially for any of a large list of pro-social improvements (which parents get to choose between, and will hopefully choose in different ways, preventing a monoculture).
Also, figuring out the phenotypic results is likely to involve mistakes that cost entire human lives in forgone potential, and there should probably be a way to protect against this downside in advance, for example with bonds or insurance purchased prior to birth, to make sure we have budgeted to properly care for people who might have been amazing but turned out slightly handicapped.
I expect that genetic enhancements will be treated similarly as schooling and vaccinations: there will be a government-mandated baseline, probably provided for free, and then there will be options for those who can afford it, within a system of complex government regulation.
EDIT:
Gattaca
This sort of thing will never be common. The real question is not about implanting traits in large groups but the possibility of introduced traits spreading through the population through normal reproduction.
Why’s that?
Hm. Maybe we are really socially isolated then, but as a couple we were never really interested in what other people would think if we did something (we are both the not-having-many-friends type), and we would have jumped at the option of having an easy baby: no learning difficulties, no crying during the night, and of course perfectly healthy. Granted, we would not edit things like hair or eye color, because that would feel like an unwelcome intrusion into another person’s individuality, even when the person does not exist yet. But we would edit out the potential problems.
I remember how we were full of fears of getting a Down’s case or worse. Quite simply, we were not 100% sure of our ability to give a fully healthy child the time investment she needs; we would not have been able to deal with a disabled one who needs much more. Thankfully we have a healthy baby, although she is developing smaller than usual, but the fear was there and we would have gladly accepted the option not to have it. I don’t understand why there would be a social stigma against e.g. fixing Down’s. Of course, things like customizing hair color are a bit too frivolous for me too, but that is a different story. I would also not add something like musical talent, because we cannot know whether it would lead to problems down the road, like having a calling to something else yet choosing to work in the talent because that is the safer career.
One problem with this perspective is that not everyone is agreed on what is a “potential problem” and what falls into “[an]other person’s individuality”. Deafness springs to mind as an example, and in the other direction, what if ginger hair would increase the odds that your child got bullied?
Bullying is AFAIK based on perceived weakness, being a good victim candidate. Granted, being “weird” and thus seen as not having many allies, easy to single out, is a perceived weakness. Still I would probably tackle the problem by other means (like convincing ginger kids to always protect each other). Deafness is clearly a defect, I don’t really care about the deaf hamstering about how it is a culture. It is a culture made to deal with a defect, and as such it is a very respectable one, but it is just like the culture of grieving, if we become immortal we will not miss spectacular tombstones.
But sure, on the meta level I do agree that not all people will agree with me here. But there is an obvious solution of leaving the corner cases to the parents’ jurisdiction.
What I would want to avoid is arms races really. Such as in height (being important for the sex appeal of men).
You seem to have a very bizarre idea of what constitutes someone’s “individuality” since you appear to be more concerned about superficial things like eye and hair color as opposed to things like personality and learning style.
Precisely—individuality is in things that don’t matter. Learning style is something we should simply be efficient at. Parallel: it would be good if we all ate the One Perfect Nutritious Diet but customize it with sauces and spices: being individual in the things that don’t matter. More realistic: we all ideally dress according to the weather as not being cold or hot matters, but choose colors of clothes that express our individuality because color does not matter. What is bizarre about it?
That’s not what I mean by “individuality”, and I suspect it’s not what most people mean by it either. How about you explain why your definition of “individuality” is something anyone should care about.
Also, would you really not mind if I forcibly overrode your memories and personality as long as your hair and eye color stayed the same?
At this point I should admit I have not really invested an immense amount of time in figuring this out, but this may still be useful: when we value something merely because it is different, or because it was freely chosen, that suggests it is not much better than the alternatives, or else we would like it for being actually better. So people may customize their body or car to look a bit different from most others, to show they are distinct individuals (cue Life of Brian here) or that they choose some things autonomously instead of accepting the default choice, but all these things are not so important.
However, when something is important we usually want it done the best way. We don’t want 100 different ways of manufacturing nails just so that every factory can play special snowflake; we want the one most efficient way, and every factory adopting it. When things matter, and some things are better than others, reasonable people don’t play special snowflake and don’t go for an individual, custom solution just to show off their non-conformism.
So I have a generic vision of reasonable people wanting to do things the best way, uniformly; but when something does not matter much, or one way is not much better than the others, they relax the uniformity, and it is okay to play special snowflake, customize everything, and express our oh-so-awesome nonconformism.
I think this must make sense; I am just not so sure anymore that it is really relevant to the question, as I have to admit things like memories do not fall into either of these buckets. As for overriding my personality: if highly inefficient ways of dealing with some problems (i.e. habits) are considered part of one’s personality, why do you think I am even here on LW? That is pretty much the whole point. This whole website is about trying to get as close to a self-rewriting AI as our biological hardware allows.
Assuming we know in advance what that is, and that it won’t change as circumstances change. The point of individuality is that in general the only way to find out the efficiency of a method is by trying it. Hence if a factory owner has a new crazy idea for how to manufacture nails, let him try it (without having to convince a panel in “nail manufacturing experts” first).
“Special snowflakeness” is what happens when “individuality” becomes a lost purpose. The “individuality only in irrelevant things” that you are arguing for is what happens when it becomes a really lost purpose.
“Designer babies” is an ambiguous term. You’re talking about fixing defects, while the original post is more about enhancements.
There is no clear zero bound.
Define a “defect” as something where
-- an overwhelming majority of people agree on how to determine who has it (which may include deferring to doctors, as long as they don’t defer to different sets of doctors)
-- most people do not have it
-- an overwhelming majority of people without it think it’s a bad idea to have personally, and a good idea to eliminate from society
Fixing those should not lead to the problems that making enhancements does.
Go back a couple of hundred years. Define the defect as “lack of belief in Jesus Christ”. It qualifies under your criteria.
No, it doesn’t. That’s utterly absurd; are you seriously suggesting that there was ever a time when an overwhelming majority of all people was Christian? You do realize that just because your history book includes mostly Christians doesn’t mean there aren’t non-European places with non-Christian inhabitants, right?
At any rate, I don’t claim and don’t believe that this would work for times in the past.
I understand “most people” locally—that’s most of those people who form your society and who influence your culture and political decisions. Were you thinking of some sort of global referendums and, by implication, a global government?
Our present will be the past in the immediate future :-P
If you don’t trust the “past” people to change your gene pool, what makes you think “future” people will trust you to change their gene pool?
No, I was thinking of ideas that are so universal that they’re not even culture-dependent or politics-dependent to any significant degree. Pretty much every sighted person thinks blindness is bad.
Even though present-day people aren’t perfect and can make mistakes, the overall trend is asymptotically towards making fewer mistakes and the difference between past and present should not be the same as the difference between present and future. Furthermore, I don’t claim that what everyone agrees on would be exactly correct, only that the risk of being over-inclusive is not significant.
Besides fixing gross genetic abnormalities (e.g. cleft palate and such), I am not sure what kinds of universally acceptable traits you can gene-engineer.
8-0 That’s a huge claim that I don’t see much evidence for. Not to mention that it assumes objective unchanging criteria of what a “mistake” is. I smell hubris.
“Besides that, how did you like the play, Mrs. Lincoln?”
Of course, that kind of abnormality is just what I was referring to.
So, nothing that touches the mind?
Some types of mental retardation are such that everyone who does not have them would agree that they are bad to have.
On the other hand, psychopathy would fail on criterion 1; it can’t be defined well enough. (You could avoid mentioning a specific phenotype in your definition and instead define it as “has genes X, Y, and Z”, but it would then fail on criterion 3 since people would have little reason to oppose an arbitrary list of genes that is not connected to a specific phenotype.)
What problem do you see with the Hare?
I have no idea what you are talking about.
The Hare checklist is the standard instrument for measuring psychopathy.
The Wikipedia article for it has a criticism section.
Also, giving and analyzing the test seems to involve lots of human judgment. Which means that in order for point 1 to be true, everyone will have to trust the judgment of test-givers. I don’t think that’s going to happen.
Do you consider lack of vitamin C production a possible problem that you would want to fix?
Dunno. It seems it became a non-problem: cheap pills, and even those are almost unnecessary with a decent fresh diet. My gut instinct would be to fix only those problems that don’t have such easy, convenient external solutions. For example, I would not want to be able to run 30 km/h for two hours just to save the cost of a bicycle. I don’t have very rational arguments for this, just the basic instinct to conserve as much of our humanity as we know it as we can, unless the cost is too high. Getting vitamin pills, eating fresh food, or buying a bike is not too high a cost. Perhaps it can be justified on the basis of not trading something that works without side effects for something that may have them.
Our ancestors did produce their own vitamin C, but there are a bunch of deletions in our copy of the gene, so it doesn’t work anymore. Fixing it up wouldn’t mean moving that far away from what it means to be human.
Also if we decide that we don’t need the gene, why not get rid of it completely? Why carry around a broken vitamin C gene? It seems stupid from a design perspective.
Vitamin C is just one example that’s nice because it’s an ability we lost in evolution, but there are many small issues. For a lot of enzymes, different species have an enzyme that serves the same function. Some of them, however, have better enzymes that work more efficiently. Yeast has had a lot more evolutionary cycles than us and might simply be better at a lot of housekeeping genes. For every gene we could search for the best one that’s out there and exchange the human version for it.
Of course we will first do it on pigs that we want to eat. But if the pigs are much better when you gave them the best version of every housekeeping gene that’s out there, some humans will also want to have the best ones.
Then it sounds like you don’t really want designer babies.
There’s no 30 km/h running gene that you could possibly put somewhere. To get that outcome you would need to modify a bunch of genes and as a result have more fit people.
This would be an extraordinarily bad idea especially from yeast in particular. ‘Best’ is contextual.
It’s contextual but that doesn’t mean that “best” doesn’t exist.
At the beginning it’s unlikely to be tried on humans, but when it comes to farm animals I would expect people to experiment with it. Maybe a pig with yeast mitochondria does better than a regular pig because the yeast mitochondria had more time to be highly optimized by evolution. It might be that you need to make a few additional changes to have the pig deal with the yeast mitochondria, but people will experiment.
If you get an enzyme that’s twice as efficient at catalyzing some reaction, you have to downregulate its expression.
If there is a social consensus against designer babies, they get outlawed.
Do people really have so few libertarian-ish instincts that there isn’t at least some gray zone of “disliked but not outlawed”? I guess cigarettes are in this zone.
Would you count “legal but with Pigouvian taxes on it” as “libertarian-ish”?
Yes. At least a choice is offered. Current EU-level taxes, although fairly insane (€3-€7 a pack on average, and without the taxes it would be under 50 cents), are still low enough to compete with the black market; the black market has not gotten very big yet. So this is more or less inside normalcy.
When a choice is not offered, such as with the categorical ban on smoking in bars in most EU countries, the typical choices are either to engage in something illegal and black-marketish, or to obey. And obeying has two versions: go there for a drink and don’t smoke, or don’t go at all.
The difference between the two is that the outcomes of the first are fairly calculable, predictable, and easily amendable. You can notch up a Pigouvian tax until you notice the black market is too big of an annoyance, then turn it down a notch or two. The second option leads to unpredictable chaos, anything from bars closing down to public parks becoming impromptu drinking and smoking venues.
So for the sake of a predictable order, it would be safer if bans were replaced with special taxes that allow more granularity, such as allowing smoking in a bar that pays hazard pay and extra health insurance to the waitstaff. The market can price that in, while non-smokers can enjoy lower prices in the smoke-free establishments, which can compete better this way. Again, the goal would be to fine-tune it until you reach a balance where both types of establishments flourish.
Now I realize there is something weird about calling a plan that involves governmental micromanagement of the market libertarian-ish, but the point is that it is still more so, still more market-oriented, than categorical bans.
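The notch-up, notch-down tuning described above is essentially a feedback loop. A minimal sketch, where the `black_market_share` response function and all of the numbers are hypothetical, purely to illustrate the process:

```python
def tune_tax(black_market_share, start=0.5, step=0.5, tolerance=0.15, max_tax=10.0):
    """Toy model of Pigouvian tax tuning: keep notching the per-pack tax
    up while the observed black-market share stays tolerable, and stop
    (i.e. 'turn it down a notch') as soon as the next notch would push
    the share past the tolerance.  All parameters are made up."""
    tax = start
    while tax + step <= max_tax and black_market_share(tax + step) <= tolerance:
        tax += step
    return tax
```

With a made-up response curve where the black-market share grows with the tax, the loop settles at the highest tax the market tolerates; a real regulator would of course be iterating against noisy observations rather than a known curve.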
If you have a minimum wage, that might not work. What if the free market price of bar staff is $X per hour, the free market price of bar staff under poor health conditions is $Y, but the minimum wage is greater than both X and Y?
There’s something really weird going on—I received this message in my inbox, too!
Also the legal use being restricted from more and more spaces.
Of course, the fully libertarian thing would be allowing the owner of each bar to decide whether or not to forbid patrons from smoking; allowing them would drive certain prospective patrons away and forbidding them would drive other prospective patrons away, and it is in the interest of each bar owner to figure out which ones outnumber the others.
I find that less Pigouvian and less libertarian-ish. Bit of an analysis here: http://lesswrong.com/lw/m6b/thoughts_on_minimizing_designer_baby_drama/cdfp
It is odd—I received this message in my inbox.
In the parent of that comment, is the little envelope green? If so, it means that accidentally or deliberately, you asked to be notified of replies to that comment.
Thank you. I mostly come here from my smartphone, and sometimes miss the correct buttons. Sorry for the trouble.
Gradually in the process of moving into the “outlawed” zone.
Don’t focus on the abstract but on the actual issue.
Given that I can’t even legally grow genetically engineered plants on my balcony in most of Europe, growing genetically engineered humans is likely not something we will allow.
That’s why genetically engineered Chinese will soon buy Europe and turn it into a theme park. With authentically unenhanced natives, no less.
Regardless of the law, would it be far-fetched to say that a certain percent of the population would be enhanced anyway?
In the beginning stages it’s quite easy to write laws that make it a disadvantage to be genetically modified. Bruce Sterling’s novel Distraction deals with the protagonist having a “personal background problem” because he’s genetically modified in a world where that’s outlawed. As a result he can’t run for office and just does PR for a politician.
It’s easy to write the laws, but it may be hard to enforce them.
It isn’t easy to identify people who are just modified to be in the upper end of normal human capacities.
People normally have parents. It’s easy to tell when the genes of the parents don’t correspond to the genes of a child.
Apart from that I think you underrate the ease of doing genetic engineering without leaving traces. Especially with a decade between the moment of birth and the moment that someone analyses the DNA for traces of manipulation.
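The parent/child comparison in this exchange amounts to a simple Mendelian consistency check: at each locus, the child must carry one allele drawable from each parent. A toy sketch, ignoring de novo mutation, sequencing error, and everything else that matters in practice:

```python
from itertools import product

def mendelian_consistent(mother, father, child):
    """Check whether the child's genotype at every locus could have been
    inherited from these parents.  Genotypes are sequences of
    (allele, allele) pairs, one pair per locus.  Purely illustrative."""
    for m, f, c in zip(mother, father, child):
        # All genotypes a child of these parents could have at this locus:
        # one allele from the mother, one from the father (order ignored).
        possible = {tuple(sorted(pair)) for pair in product(m, f)}
        if tuple(sorted(c)) not in possible:
            return False  # this locus cannot be explained by inheritance
    return True
```

A child carrying an allele neither parent has (say, an edited-in variant) fails the check at that locus, which is why undetectable enhancement would have to stay within alleles the parents could plausibly have contributed.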
Is there anything that would prevent that number from increasing?
Based on current trends, I’d expect the first thing would be a requirement for docility and obedience. They’re already using drugs to instill these into children and I don’t expect designer babies to fare much differently.
I expect parents to be a strong special interest group in the development of laws related to this. My guess is that parents are more willing to have a kid that’s mandated to be compassionate than mandated to be docile.
It wouldn’t be called “docile”, of course. It will be called “socially well-adjusted” and “free of criminal tendencies”.
Who exactly in society is supposed to have an incentive to produce “docile” people? Following Epictetus, do you foresee teachers lobbying the government to mandate the production of obedient children to make their lives easier? That doesn’t seem very plausible to me.
If politicians want to be in charge of a “docile” citizenry, changing the genes of newborn babies is not going to help with that, since by the time the newborns are grown up the politicians will be out of office.
Also, you haven’t explained why a “docile” citizenry is a bad thing. Personally, I’m leaning towards it being a good thing, especially if enforced universally. [Edit: to clarify, I stated this because I perceived “docile” people to be more altruistic. Troublemaking altruists are just as good.] I expect the people reading Less Wrong are substantially more “docile” than members of the general public in the sense that they prefer reading over loud parties and rarely get into fights. And Less Wrong is also 30% effective altruist, a rate much higher than the population at large.
Those who have power.
I think we are going to have a radical values-based disagreement about that.
That’s not what “docile” means.
Sounds horrible to me, but you know what? I’ll exacerbate your optics problem and just call the end result “genetic slaves”.
OK, so let’s think about this. Here are some types of powerful people:
Politicians
Movie stars
CEOs
Professional athletes
Religious leaders
Some other group?
Politicians have little incentive to produce docile designer babies as I already described. CEOs might have wanted docile workers in a previous era, but the best knowledge workers aren’t especially docile. I can’t think of any incentives for movie stars or professional athletes. Religious leaders might be incentivized to produce docile citizens, but for a bunch of reasons I’m not especially worried: religiosity is on the decline, religious leaders seem likely to be too horrified to participate in the discussion of what characteristics to push for, and only the most sociopathic religious leaders are likely to consciously realize that they would benefit from docility.
It sounds to me as though you are rounding to the nearest sci-fi cliche here instead of thinking things through logically. It’s fine if you want to be inspired by sci-fi in your dystopic musings, but ultimately it’s only persuasive if you can explain the incentive structure that would cause a particular dystopia to arise. We’re trying to tell accurate stories here, not horrifying or entertaining ones.
That sounds entirely connotational with no denotation.
If you don’t mind, maybe we could put aside the whole question of what’s politically viable for a minute and just talk about what our ideal outcome is. Psychologists have studied the construct of agreeableness. Wikipedia lists 6 “facets”:
Trust
Straightforwardness
Altruism
Compliance
Modesty
Tender-Mindedness
I suspect you and I are actually on the same page in the sense that we’d prefer to live in a society of straightforward and altruistic people rather than a society of manipulative and selfish ones. It’s not clear to me whether living in a society of highly compliant people would be desirable or not. “Docility” has connotations of trust, altruism, compliance, modesty, and tender-mindedness in my estimation. Our discussion might be simplified by tabooing this word, since it’s actually a bundle of a bunch of concepts.
Heh. We do have a major mismatch :-) Your list I would call “people you’re likely to see on TV”. Let me offer you a few other examples.
Your neighborhood cop is powerful. He can kill you and stands a good chance of escaping the usual consequences. He can easily make your life very unpleasant and painful, if only for a while, and have zero consequences for that.
Rich people willing to use their money can be powerful—and these are not usually the celebrities you’ve mentioned. Some guy who ran some unknown hedge fund for a while, invested his money into a private equity deal, sold it successfully, and is now a multimillionaire living in a nondescript mansion in Connecticut—he’s never been on TV and outside of his circle of friends no one would recognize his name—he could be powerful if he wanted to.
Unelected bureaucrats are very powerful. Politicians in democracies come and go, but civil servants stay and build their influence and their networks. They are the professionals of governing (politicians are professionals of marketing).
Warlords are powerful. Power still comes out of the barrel of a gun and you don’t need to be a politician to run a place. Mafia/cartel/gang/etc. leaders are here as well.
Social groups (“tribes”) can be powerful—or powerless. These, by the way, tend to have long-term interests.
Not a sci-fi cliche, but real human history. The ruling classes have always preferred a docile population and I see no reason for that to change. The traditional way to enforce docility was to kill all the troublemakers, but unless you do it on a sufficiently massive scale to impact the gene pool, it only lasts for a generation. A population with forced permanent docility—guess whose dream that would be?
Sure. My connotations of “docility” differ (what are altruism or modesty doing in there??), but let’s just taboo the word.
What I mean is basically obedience to authority plus willingness to please. When told to sit down and shut up you say “Yes, sir”, sit down, and shut up—and you like it. When told “Go do this” you go and do this. It’s the difference between wolves and dogs.
Thanks for clarifying your position. I’ll use the word “submissiveness” to refer to this if you don’t mind.
All the people you describe are powerful, but cops and warlords have only limited ability to affect legislation in democratic countries. So I’ll focus on rich people and bureaucrats.
Some thoughts:
The time horizon is long here… most people aren’t sufficiently good at delayed gratification to plan on a 20 year timescale, and that’s how long it takes for children to grow up. And then it takes another 20 years or so for them to be a large fraction of the adult population. That’s half a lifetime.
More importantly, the classic hypocrisy scenario is when people are benevolent when a question is presented in a way that primes far mode (“Corruption is wrong”) but somehow their preferences change when a short-term opportunity presents itself (“Man, I really need some money to cover my loans… I’ll ask for a bribe just this once. It’s not like I’m doing anything wrong really, just offering them the ability to accelerate their application.”) Things that affect events 20 years out are more likely to prime far mode.
This plan has social desirability bias working against it. Joe Bureaucrat goes up to his colleague and says “Hey Liz, the citizenry will be far easier to subjugate 20 years down the line if we write submissiveness in to this new law.” Mr. Burns steeples his fingertips and chuckles: “My portfolio companies will find themselves profiting nicely once everyone is a submissive little consumer who buys everything they see on TV.” The perpetrators will need to coordinate with scientists in order to draft their legislation, and they’ll need a plausible rationalization for why they’re mandating submissiveness (rather than, say, other crime reduction options like altruism) in order to coordinate on the effort effectively.
In principle, any law passed regarding this would also affect the children of rich people and bureaucrats. So they’d either have to deal with the fact that their kids would also be submissive or find some way around the law, probably by traveling. Traveling could allow them to circumvent other restrictions too. One failure mode would be a society where most are trusting and submissive, but many foreign-born individuals are dominant and sociopathic. This could also arise if different countries had different restrictions and unrestricted immigration was allowed between countries. This gives every country an incentive to put at least some steel in the spine of their citizenry.
It’s not exactly clear the degree of control we have here. Ultimately we’re having this discussion in the hopes that our conclusions will be implemented somewhere and somehow… perhaps by scientists working on genomics, perhaps by some altruistically motivated lobbyist, etc. My guess would be that anyone advocating for legislation could also make it clear what the legislation shouldn’t do, but it’s possible that their control wouldn’t be that fine-grained, and they’d find themselves initially pushing for legislation that they eventually didn’t endorse.
By the way, I noticed that you’ve been a bit antagonistic and cynical during this discussion, with a strong “us vs them” type framing. I’m wondering if anyone (including you) has any thoughts on how I could have presented this issue in a way that made it less likely to get politicized. It seems like once an issue gets politicized it’s tough to un-politicize it. I have half a mind to delete my post for that reason; maybe a different discussion on a different day will turn out better if everyone just forgets about this one.
You are looking for basically political answers so I can’t see how are you going to avoid politics. Your OP mentions as potential solutions things like “mandatory birth control technology” and “require designer babies to possess genes for...” These are coercive political solutions.
Yes. I’m usually cynical as I find this to be the epistemically correct posture.
As to antagonistic, I did mention at one point that we have a radical value-level disagreement. Perhaps you didn’t notice that; permit me to elaborate.
I don’t like the world that you’re proposing. I wouldn’t want to live in it and I would work to prevent it from happening. This is not a minor disagreement about which of the nice adjectives are nicer.
You’re arguing for the world where everyone is made docile with the “connotations of trust, altruism, compliance, modesty, and tender-mindedness”. I am none of these things, literally, not a single one. I will not fit in your world, I will not like being in it and I won’t like most of the people there. If I were to have children and someone would insist on making them genetically docile, I would object very strongly and very forcefully.
Does that clarify things?
I guess I have this naive idea that on Less Wrong we can have friendly, thoughtful discussions of politics without getting divided in to tribes. Does this seem like an ideal worth aiming for?
You misread me, or I miscommunicated, or something :) Let me clarify: I have no proposals regarding trust, compliance, modesty, or tender-mindedness. And I didn’t mean to communicate any such proposals. When I said I was “leaning towards [docility] being a good thing”, I said that mainly because I perceived the word “docility” to have altruistic connotations.
I think we can both agree that enhanced psychopaths seem like a bad thing, right? So then the question is whether it makes sense to take measures to prevent people from engineering their babies to be enhanced psychopaths. I’m currently leaning towards no, in part through objections you’ve raised and in part through my guess that few people would deliberately choose to have an antisocial baby.
I think maybe our discussion hit a snag at some point because you incorrectly diagnosed me as someone who had values significantly different than yours. At this point you (probably rationally) decided to take an antagonistic pose in order to try to speak out against those values of mine that you disagreed with, and it became harder for us to toss ideas around, stay curious, and share evidence about things. (These are all things you do in a collaborative discussion with someone who shares your values, but are arguably counterproductive for achieving one’s goals in a discussion with someone who doesn’t share your values.) Hopefully the last paragraph clarified things some.
In any case: I’m a consequentialist utilitarian, so I care about everyone’s preferences when designing my utopias, which includes yours. I don’t think I’m your enemy. When it comes to government regulation, I’m a pragmatist: I’m in favor of whatever seems likely to work. And yes, failures of previous regulations contribute to that estimate.
But altruism was in his list along with trust, compliance, etc. So I don’t think you actually answered his objections.
If I tell you I don’t want to eat foods made using vomit, excrement, or bile, and you tell me “well, the food doesn’t contain any vomit or bile”, that’s not really very comforting.
This discussion was reasonably friendly by internet standards. No one called anyone a troll, accused him of lying, or decided to discuss his sexual peculiarities :-) No one got doxed or swatted :-D
I also don’t see much of a division into tribes. Two individuals can perfectly well have a political disagreement without serving merely as representatives of warring tribes. I, for one, don’t believe I am representing any tribe or any political consensus here, it’s just my own beliefs and viewpoints.
But you would prefer the population to move (or be moved) in that general direction? You said, and I quote:
and later you explained what you meant by “docile”. Did you change your mind?
Why do you believe the diagnosis was incorrect?
Yes :) We’re not doing that badly.
Did you see the clarifying edit in this comment? After thinking a little harder, I realized that the only reason docile people seemed good is because the term suggested altruism to me.
I’m hoping that “more babies should be born altruists” is something almost everyone can agree on. It seems like a proposal even a sociopath could get behind: more suckers to take advantage of :P (Note that it’s possible to be a disagreeable/heretical/cynical altruist; in fact, I know a couple.)
Brainstorming reasons why people wouldn’t like living in a world with lots of young altruists:
The young altruists will badger older folks to change their behavior, e.g. switch to a vegan diet, embrace the cause du jour, or just imply that they’re bad people since they don’t devote lots of time and resources to altruism related stuff (or act morally superior).
The older non-altruists would like to make friends with other non-altruists. Although there are lots of non-altruists who are their age to make friends with, maybe they would like to make friends with young people for some reason. Maybe to broaden their horizons, or maybe they’ve accumulated grudges against most non-altruists their age by this point, or some other reason.
It turns out to be impossible to genetically enhance altruism on its own without enhancing all the other facets of agreeableness along with it, which cause their own set of problems.
These seem like reasons to think that the socially ideal center of the altruism distribution should trend a bit more towards selfishness than it would otherwise.
Do any of these apply to you? Can you think of others?
Sorry, nope :-/
Don’t think so. I don’t foresee any difficulties in sitting in my wheelchair, shaking my cane and yelling “You kids get off my lawn!” :-D And I rather doubt the conversion to altruism is going to be that total that I won’t be able to find anyone to be friends with.
But yes, I suspect that a world full of altruists is going to have a few unpleasant failure modes.
First, what are we talking about? The opposite of “altruistic” is “selfish”—so we are talking about people who don’t care much about their personal satisfaction, success, or well-being, but care greatly about the well-being of some greater community. There are other words usually applied to such people. If we approve of them and their values (and, by implication, goals) we call them “idealists”. If we disapprove of them, we call them “fanatics”.
Early communists, for example, were altruists—they were building a paradise for all workers everywhere. That didn’t stop them from committing a variety of atrocities and swiftly evolving into the most murderous regimes in human history.
The problem, basically, is that if you think that the needs and wants of an individual are insignificant in the face of the good that can accrue to the larger community, you are very willing to sacrifice individuals for that greater good. That is a well-trod path and we know where it leads.
Do you think the effective altruist movement is likely to run in to the same failure modes that the communist movement ran in to?
If it gets sufficient amount of power (which I don’t anticipate happening) then yes.
Sure… and if they operate using reason and evidence, we call them “scientists”, “economists”, etc. (Making the world better is an implicit value premise in lots of academic work, e.g. there’s lots of Alzheimer’s research being done because an aging population is going to mean lots of Alzheimer’s patients. Most economists write papers on how to facilitate economic growth, not economic crashes. Etc.) I agree that releasing a bunch of average intelligence, average reflectiveness altruists on the world is not necessarily a good idea and I didn’t propose it.
I mean, the Allied soldiers that died during WWII were sacrificed for the greater good in a certain sense, right? I feel like the real problem here might be deeper, e.g. willingness of the population to accept any proposal that authorities say is for the greater good (not necessarily quite the same thing as altruism… see below).
I think there are a bunch of related but orthogonal concepts that it’s important to separate:
Individualism vs collectivism (as a sociological phenomenon, e.g. “America’s culture is highly individualistic”). Maybe the only genetic tinkering that’s possible would also increase collectivism and cause problems.
Looking good vs being good. Maybe due to the conditions human altruism evolved in (altruistic punishment etc.), altruists tend to be more interested in seeming good (e.g. obsess about not saying anything offensive) than being good (e.g. figure out who’s most in need and help that person without telling anyone). It could be that you are sour on altruism because you associate it with people who try to look good (self-proclaimed altruists), which isn’t necessarily the same group as people who actually are altruists (anything from secretly volunteering at an animal shelter to a Fed chairman who thinks carefully, is good at their job, and helps more poor people than 100 Mother Teresas). Again, in principle it seems like these axes are orthogonal but maybe in practice they’re genetically related.
Utilitarianism vs deontology (do you flip the lever in the trolley problem). EY wrote a sequence about how these are a useful safeguard on utilitarianism. I specified that my utopia would have people who were highly reflective, so they should understand this suggestion and either follow it or improve on it.
Whatever dimension this quiz measures. Orthogonal in theory, maybe related in practice.
A little knowledge is a dangerous thing—sometimes people are just wrong about things. Even non-communists thought communist economies would outdo capitalist ones. I think in a certain sense the failure of communism says more about the fact that society design is a hard problem than about the dangers of altruism. Probably a good consideration against tinkering with society in general, which includes genetic engineering. However, it sounds like we both agree that genetic engineering is going to happen, and the default seems bad. I think the fundamental consideration here is how much to favor the status quo vs some new unproven but promising idea. Again, seems theoretically orthogonal to altruism but might be related in practice.
Gullibility. I’d expect that agreeable people are more gullible. Orthogonal in theory, maybe related in practice.
And finally, altruism vs selfishness (insofar as one is a utilitarian, what’s the balance of your own personal utility vs that of others). I don’t think making people more altruistic along this axis is problematic ceteris paribus (as long as you don’t get in to pathological self-sacrifice territory), but maybe I’m wrong.
This is a useful list of failure modes to watch for when modifying genes that seem to increase altruism but might change other stuff, so thanks. Perhaps it’d be wise to prioritize reflectiveness over altruism. (Need for cognition might be the construct we want. Feel free to shoot holes in that proposal if you want to continue talking :P)
I am relieved :-P
And yes, I think the subthread has drifted sufficiently far so I’ll bow out and leave you to figure out by yourself the orthogonality of being altruistic and being gullible :-)
It would, if it didn’t keep getting disproven.
That said, designer babies aren’t an issue that I would have thought to be more politically sensitive than average as transhumanist topics go—it’s a touchy subject in the mainstream, but not in a Blue/Green way, more in a “prone to generalization from fictional evidence” way. Learn something new every day, I guess.
It brings up memories of other movements, backed by influential scientists, that gained widespread political appeal. Specifically, eugenics. Long before the term was associated with the Nazis, there were sincere eugenics movements in the United States that sought to improve the gene pool. There were laws on the books that provided for forced sterilization of the “unfit”. It just so happened that the victims of these policies were poor and/or minorities.
Since then, there’s always been a segment of the mainstream predisposed to distrust any talk of “improving” the population through scientific means. Any discussion of that topic will thus raise the Blue/Green specter.
As I mentioned in the reply to parent, I don’t see tribal warfare here. We are not two faceless champions of the enemy tribes duking it out, we are just two individuals who disagree. Not every political disagreement must represent tribal affiliation.
Tribal warfare looks like this.
And “designer babies” is not a particularly politically sensitive topic. However control of how babies get designed, especially through laws and regulations, certainly is.
Not fictional evidence, real world evidence of how powerful groups have sought to control, marginalize, destroy the agency of, and even eliminate marginal groups all across history including recent history.
So, in other words, it is a Blue/Green issue. Well, I’ve been wrong before.
True, except the people under discussion are not like that. You don’t get to be a millionaire by having small time horizons.
Yes, and Epictetus and Lumifer have explained how they will go about rationalizing this upthread (several times). If you want a civil discussion you could start by actually paying attention to what the people you’re talking to are saying.
Or simply make the law sufficiently convoluted that it’s possible to get out of it by jumping through bureaucratic hoops.
That’s why I added the point about altruism being an alternative to submissiveness. But I agree that their point is basically a good one.
(In general you might read all my comments in this thread as just bringing up considerations that might be relevant so they can get discussed. I haven’t come to any firm conclusions about this subject and don’t intend to any time soon. Sometimes I don’t bother writing my current belief state because I’m still updating and that would make my comments longer and less content-dense.)
My worry is less bureaucrats, for some of the reasons you describe (delayed gratification is not a characteristic of bureaucrats), but well-meaning social reformers. You can find an unending stream of people who will tell you that doing X or even believing X is antisocial behavior and a portion of them will want the next generation to be genetically programmed to avoid X and do Y instead—for everyone’s own good, of course.
Good point.
It’s times like this that I find what certain subgroups of LW don’t find ‘political’ fascinating and hilarious. It says everything about the bubble from which they come.
I don’t think your model of politicians is very good if you think that they only care about what happens when they are in office.
Reducing testosterone would be a way to make people less aggressive. Even if it does reduce criminality, there’s a price to pay.
China has already used selective breeding to breed very tall basketball players.
Shouldn’t enjoyment of reading and loud parties be largely orthogonal? Personally I enjoy reading and raves and fighting (martial arts grappling, not pub brawls). But more to the point, I’m not sure this is the right definition of docile—books are far more dangerous than rock concerts.
I do, however, agree that it’s not obvious whether a “docile” citizenry is a bad thing.
I have noticed a negative correlation between “life of the mind” pursuits and “life of the party” pursuits. How much of that is because of skill specialization or deliberate signalling, rather than actual preference differences? Hard to say.
Sure, from the perspective of public safety or homeland security or whatever your favorite euphemism is. But I expect it’s more likely for parents to be doing this sort of selection than governments, and from a parental perspective Das Kapital (or for that matter Atlas Shrugged) keeps your kids out of your hair and improves expected future grades rather better than whatever teen pop group is popular right now.
Governments will likely regulate it and can freely say that if a company makes designer babies it has to include certain genes.
Parents also won’t be able to effectively interpret the evidence but will have to take advice from experts. Experts who design gene cocktails that they believe to be beneficial.
I wouldn’t expect a regulatory body to have the expertise necessary to come up with its own set of mandatory gene variants to produce a politically convenient population, and I wouldn’t expect a Western regulatory body to be able to get away with blatantly optimizing for political docility based on other people’s results, not in a field that a lot of people are already skeptical about. It’s reasonable to expect some slant in that direction, and I might also expect to see variants banned if they were positively associated with things like aggression or criminality, but that’s a much weaker form of optimization. Basically, I think we’d end up with something that looks more like the FDA than the Thought Police, or even like the FCC.
Experts would exist, of course, but I have no reason to believe that their incentives would point strongly in the direction of social control.
That sounds to me like “that won’t happen because of the objections of people like you. Because it won’t happen, there’s no need to object to it”.
I’m saying what I think would happen, not what should happen.
You can object to what you want, but a statement that starts “Governments will likely regulate it...” is a prediction, not an objection.
It’s a prediction based on the existence of objections. If you use that prediction to then argue against the objections, it becomes self-defeating, since successfully using the prediction that way destroys the basis for being able to make the prediction.
I am not arguing against objections to government-mandated genetic modification. I am arguing that, as a matter of fact, Western governments in the near future are unlikely to fully exploit that kind of mandate, partly because those objections are common.
Analogously, I don’t believe Western governments are likely, at the moment, to burn opposition literature en masse. It does not therefore follow that arguments for free speech aren’t worth taking seriously—just that the existence of a valid underlying principle doesn’t imply imminent dystopian peril.
I suspect that by the time designer babies become an issue the biotechnology will advance far enough to allow genetic modifications in adults on the fly, including natural sex change, tinkering with intelligence and abilities, strength, speed, etc. So the issue will be not so much designer babies, but designer self.
Why would you assume that? That’s like saying by the time we can manufacture a better engine we’ll be able to replace a running one with the new design.
For example, evolution has optimized and delivered a mechanism for turning gene edits in a fertilized egg into a developed brain. It has not done the same for incorporating after-the-fact edits into an existing brain. So in the adult case we have to do an extra giga-evolve-years of optimization before it works.
First, assume != suspect (the latter is what I said). Second, gene therapy is already a reality.
Which has severe limits.
Sex differentiation mostly occurs during embryonic/fetal development. If you were to somehow switch the sex chromosomes of all cells of an adult human, or even a newborn human, it wouldn’t magically transform a penis into a vagina or vice versa. All you would accomplish would be to screw up that person’s endocrine system, causing infertility and other health problems.
Thank god I wasn’t the only one who thought of this. There’s going to be a roughly 17-year lag between when the babies are designed and when they enter the workforce, so there’s lots of time to figure out how to do the same thing in adults. It seems unlikely that we will have very advanced genetic manipulation technology while gene therapy is still in its infancy and no one has cybernetic implants.
After a bit of Googling, I found a company that manages to clone 500 pigs per year. Within a decade they might try the same techniques on humans.