I may have mentioned this in some earlier discussion, so maybe it’s not worth rehashing in detail, but…
I strongly feel that laws are downstream of culture. Instead of asking which laws are best, it seems worthwhile to me to ask which culture is best. The First Amendment in the US is protected by culture rather than just by law; if the culture changed, then so would the laws. Same here with genomic liberty. Laws can be changed, and so can their day-to-day enforcement. (Every country has examples of laws that exist on the books but don’t get enforced in practice.)
(And if you do spend time thinking about what the ideal culture looks like, then I’ll have my next set of objections about why you personally can’t decide the ideal culture of a civilisation either; how that gets decided is more complicated. But to have that discussion, we first have to agree that culture is important.)
I appreciate you for thinking about these topics. I just think reality is likely to look very different from what you’re currently imagining.
I agree that culture is important and that I contribute a very small amount to deciding what culture looks like. What do you think I’m imagining that reality will look different from?
I’m unsure what the theory of change behind your LW post is. If you had a theory of change for it that also made sense to me, my guess is you’d focus a lot more on cultural attitudes and incentives, and a lot less on legality or technical definitions.
The process for getting a certain desirable future is, imo, likely not going to be that you create the law first and everyone complies with it later when the tech is deployed.
It’ll look more like this: biotech companies deploy the tech in a certain way, a bunch of citizens get used to using it that way (and don’t have many complaints), a certain form of usage gets normalised, and only after that can you make a law codifying what is and isn’t allowed.
Until society has consensus agreement on certain ways of doing things, and experience doing them in practice (not just in theory), I don’t think it’ll be politically viable to pass a legal ban (that doesn’t, say, get overturned soon after).
The other way around is possible: a law can declare something disallowed before it has actually been done. Historically, societies have often been bad at this sort of thing. (Often a mishap needs to happen before a law banning something is politically viable.) But cultures in general are averse to change, so that alone can be enough to make a ban politically viable.
This makes more sense for a blanket ban, though; it makes less sense for the kind of targeted ban on certain types of interventions. Culture as of 2025 does not already encode the stuff in your post; what you’re proposing is novel.
Yeah I’m not, like, trying to sneak this in as a law or something. It’s a proposed policy principle, i.e. a proposed piece of culture.
My main motive here is just to figure out what a good world with germline engineering could/would look like, and a little bit to start promoting that vision as something to care about and work towards. I agree that practical technology will push the issue, but I think it’s good to think about how to make the world with this technology good, rather than just deferring that question.

Besides the first-order thing where you’re just supposed to try to make technology go well, it’s also good to think about the question for cooperative reasons. For one thing, pushing technology ahead without thinking about whether or how it will turn out well is reckless / defecty, and separately it looks reckless / defecty. That would justify people pushing against accelerating the technology, and would give people reason to feel skittish about the area (because it contains people being reckless / defecty). For another thing, having a vision of a good world seems like it ought to be motivating to scientists and technologists.

Got it!
I haven’t spent a lot of time thinking about this myself, but here’s one exercise I’d recommend:
For any idea you have, also imagine twenty neighbouring ideas: ideas that are superficially similar but ultimately not the same.
The reason I’m suggesting this exercise is that ideas keep mutating. If you try to popularise any set of ideas, people are going to come up with every possible modification and interpretation of them, and eventually some of those will become more popular than others.
For example, with the “no removing a core aspect of humanity” principle, imagine someone who values fairness and equality highly deciding that this value is a core aspect of humanity and then thinking through the implications. Or take “parents have a strong right to propagate their own genes”: a hardcore libertarian takes this very seriously and wants to work out the edge case of exactly how many “bad” genes they are allowed to transmit to their child before running afoul of the “aimed at giving their child a life of wellbeing” principle.
You can come up with a huge number of such permutations.