Nina Panickssery
presumably ancient people weren’t constantly sick.
I think you presume incorrectly. People in primitive cultures spend a lot of time with digestive issues, and these are a major cause of discomfort, illness, and death.
On people’s arguments against embryo selection
A recent NYT article about Orchid’s embryo selection program triggered a backlash on X that surprised me, with people expressing disgust and moral disapproval at the idea of embryo selection. The arguments generally fell into two categories:
(1) “The murder argument”: Embryo selection is bad because it involves creating and then discarding embryos, which is like murdering whole humans. This argument implies that regular IVF, without selection, is also bad. Most proponents of this argument believe that fertilization is the point at which the entity starts to have moral value, i.e. they don’t ascribe the same value to sperm and eggs.
(2) “The egalitarian argument”: Embryo selection is bad because the embryos are not granted the equal chance of being born that they deserve. “Equal chance” here is probably not quite the correct phrase/is a bit of a strawman (of course fitter embryos have a naturally higher chance of being born). Proponents of this argument believe that intervening on the natural probability of any particular embryo being born is anti-egalitarian, and that this is bad. By selecting for certain traits, we are saying that people with those traits are more deserving of life, which is unethical/wrong.
At face value, both of these arguments are valid. If you buy the premises (“embryos have the moral value of whole humans”, “egalitarianism is good”) then the arguments make sense. However, I think it’s hard to justify moral value beginning at the point of fertilization.
On argument (1):
If we define murder as “killing live things” and decide that murder is bad (an intuitive decision), then “the murder argument” holds up. However, I don’t think we actually think of murder as “killing live things” in real life. We don’t condemn killing bacteria as murder. The anti-IVF people don’t condemn killing sperm or egg cells as murder. So the crux here is not whether the embryo is alive, but whether it has moral value. Proponents of this argument claim that the embryo is basically equivalent to a full human life. But to make this claim, you must appeal to its potential. It’s clear that in its current state, an embryo is not a full human. The bundle of cells has no ability to function as a human: no sensations, no thoughts, no pain, no happiness, no ability to survive or grow on its own. We just know that, given the right conditions, the potential for a human life exists. But as soon as we start arguing that the potential of something grants it moral value, it becomes difficult to non-arbitrarily draw the line at fertilization. From the point of view of potential humans, you can’t deny sperm and eggs moral value. In fact, every moment a woman spends not pregnant is a moment she is ridding the world of potential humans.
On argument (2):
If you grant the premise that any purposeful intervention on the probabilities of embryos being born is unethical because it violates some sacred egalitarian principle, then it’s hard to refute argument (2). Scott Alexander has argued that encouraging a woman to rehabilitate from alcoholism before getting pregnant is equivalent to preferring the healthy baby over the baby with fetal alcohol syndrome, something argument (2) proponents oppose. However, I think this is a strawman. The egalitarians think every already-produced embryo should be given as equal a chance as possible; they are not discussing identity changes of potential embryos. However, we again run into the “moral value from potential” problem. Sure, you can claim that embryos have moral value for some magical God-given reason. But my intuition is that, in their hearts, the embryo-valuers are using some notion of potential full human life to ground their assessment. In which case we again run into the arbitrariness of the fertilization cutoff point.
So in summary, I think it’s difficult to justify valuing embryos without appealing to their potential, which leads us to also value earlier stages of potential humans. Under this view, it’s a moral imperative not to prevent the existence of any potential human, which looks like maximizing the number of offspring you have. Or, as stated in this xeet:
every combo of sperm + egg that can exist should exist. we must get to the singularity so that we can print out all possible humans and live on an incredibly alive 200 story high coast to coast techno favela
Couldn’t have said it better!
I also separately don’t buy that it’s riskier to build AIs that are sentient
Interesting! Aside from the implications for human agency/power, this seems worse because of the risk of AI suffering—if we build sentient AIs we need to be way more careful about how we treat/use them.
Not any more risky than bringing in humans.
Humans are more likely to be aligned with humanity as a whole compared to AIs, even if there are exceptions.
Many existing humans want their descendants to exist, so they are fulfilling the preferences of today’s humans.
If there are lots of other sentient beings in existence with their own preferences and values, then it makes sense that they should have their own resources and have agency over themselves rather than us having agency over them.
Perhaps yes (although I’d say it depends on what the trade-offs are), but the situation is different if we have a choice in whether or not to bring said sentient beings with different preferences into existence in the first place. Doing so on purpose seems pretty risky to me (as opposed to minimizing the sentience, independence, and agency of AI systems as much as possible, and instead directing the technology to promote “regular” human flourishing/our current values).
I think it’s more likely than not that “crazy enlightened beings doing crazy transhuman stuff” will be bad for “regular” biological humans (i.e. it’ll decrease our number, QoL, and agency, and pose existential risks).
What do you do, out of curiosity?
And good decaf black tea is even harder to get…
Counterargument: sure, good decaf coffee exists, but it’s harder to get hold of. Because it’s less popular, the decaf beans at cafés are often less fresh or from a worse supplier, and some places don’t stock decaf at all. So if you like the taste of good coffee, relying on caffeine pills (and therefore decaf) may limit how much good coffee you can actually find and drink without exceeding your desired caffeine dose.
On optimizing for intelligibility to humans (copied from substack)
One risk of “vibe-coding” a piece of software with an LLM is that it gets you 90% of the way there, but then you’re stuck—the last 10% of bug fixes, performance improvements, or additional features is really hard to figure out because the AI has written messy, verbose code that both of you struggle to work with. Nevertheless, delegating software engineering to AI tools is more tempting than ever. Frontier models can spit out almost-perfect complex React apps in just a minute, something that would have taken you hours in the past. And despite the risks, it’s often the right decision to prioritize speed, especially as models get smarter.
There is, of course, a middle ground between “vibe-coding” and good old-fashioned typing-every-character-yourself. You could use LLMs for smart autocomplete, occasionally asking for help with specific functions or decisions, or for small and targeted edits. But models don’t seem optimized for this use case, and optimizing for it is genuinely harder—it’s one thing to build an RL environment where the goal is to write code that passes some tests or gets a high preference score. It’s another thing to build an RL environment where the model has to guide a human through a task, write code that’s easy for humans to build on, or ensure the solution is maximally legible to a human.
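As a toy illustration of that contrast (purely hypothetical, not a claim about how any lab actually trains models): a reward that only checks tests treats opaque and legible solutions identically, while a reward that also weights some legibility rating—however it’s obtained—does not. The function names and the legibility score below are invented for the sketch.

```python
# Toy sketch only: contrasting the two kinds of reward signal described above.
# All names and the legibility_score input are hypothetical, not a real setup.

def reward_tests_only(tests_passed: int, tests_total: int) -> float:
    """Reward based purely on functional correctness.

    Verbose, opaque code that passes the tests scores exactly as well as
    clean, human-readable code.
    """
    return tests_passed / tests_total


def reward_with_legibility(
    tests_passed: int,
    tests_total: int,
    legibility_score: float,  # hypothetical 0-1 rating, e.g. from human raters or a judge model
    weight: float = 0.3,
) -> float:
    """Reward that also pays for how easy the code is for a human to build on."""
    correctness = tests_passed / tests_total
    return (1 - weight) * correctness + weight * legibility_score


# Example: a solution that passes every test but is hard to read loses reward
# relative to a slightly less complete but legible one.
print(reward_tests_only(10, 10))                             # 1.0
print(reward_with_legibility(10, 10, legibility_score=0.2))  # 0.76
print(reward_with_legibility(9, 10, legibility_score=0.9))   # 0.9
```

The hard part, of course, is everything hidden inside `legibility_score`—getting a reliable signal for “easy for a human to work with” is much messier than counting passing tests, which is roughly why the incentive gradient points toward the test-only version.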
Will it become a more general problem that the easiest way for an AI to solve a problem is to produce a solution that humans find particularly hard to understand or work with? Some may say this is not a problem at the limit, when AIs are robustly superhuman at the task, but until then there is a temporary period of slop. Personally, I think this is a problem even when AIs are superhuman because of the importance of human oversight. Optimizing for intelligibility to humans is important for robustness and safety—at least some people should be able to understand and verify AI solutions, or intervene in AI-automated systems when needed.
People talk about meditation/mindfulness practices making them more aware of physical sensations. In general, having “heightened awareness” is often associated with processing more raw sense data but in a simple way. I’d like to propose an alternative version of “heightened awareness” that results from consciously knowing more information. The idea is that the more you know, the more you notice. You spot more patterns, make more connections, see more detail and structure in the world.
Compare two guys walking through the forest: one is the classically “mindful” type: he is very aware of the smells and sounds and sensations, but the awareness is raw; it doesn’t come with a great deal of conscious thought. The second is an expert in botany and birdwatching. Every plant and bird in the forest has interest and meaning to him. The forest smells help him predict what grows around the corner; the sounds connect to his mental map of birds’ migratory routes.
Sometimes people imply that AI is making general knowledge obsolete, but they miss this angle—knowledge enables heightened conscious awareness of what is happening around you. The fact that you can look stuff up on Google, or ask an AI assistant, does not actually lodge that information in your brain in a way that lets you see richer structure in the world. Only actually knowing does that.
In case someone wants a more extreme version of this post: https://ninapanickssery.substack.com/p/stormicism
I’d guess that you can suffer quite severe impairment from only a small amount of physical brain damage if the damage occurs in locations important for connecting different brain areas/capabilities. Information being “not lost, just inaccessible” seems realistic to me. However, I wouldn’t base this intuition on cases of terminal lucidity.
I am not saying care and compassion are incompatible with rationality and high-quality writing.
Yes, perhaps it’s reasonable to require some standard, but personally I think there’s a place for events where that standard is as permissive as, or more permissive than, it is at LessOnline. This is my subjective opinion and preference, but I would not be surprised if many LessWrong readers shared it.
It’s of course reasonable to skip an event because people you don’t like will be there.
However, it’s clear that many people have the opposite preference and wouldn’t want LessOnline attendees or invited guests to have to meet a “standard of care and compassion,” especially one set wherever you’re putting it.
LessOnline seems to be about collecting people interested in and good at rationality and high-quality writing, not about collecting people interested in care and compassion. For the latter I’d suggest one go to something like EA Global or church…
This, and several of the passages in your original post such as, “I agree such a definition of moral value would be hard to justify,” seem to imply some assumption of moral realism that I sometimes encounter as well, but have never really found convincing arguments for. I would say that the successionists you’re talking to are making a category error, and I would not much trust their understanding of ‘should’-ness outside normal day-to-day contexts.
I broadly agree.
I am indeed being a bit sloppy with the moral language in my post. What I mean to say is something like “insofar as you’re trying to describe a moral realist position with a utility function to be optimized for, it’d be hard to justify valuing your specific likeness”.
In a similar fashion, I prefer and value my family more than your family but it’d be weird for me to say that you also should prefer my family to your own family.
However, I expect our interests and preferences to align when it comes to preferring that we have the right to prefer our own families, or preferring that our species exists.
(Meta: I am extremely far from an expert on moral philosophy or philosophy in general, but I do aspire to improve how rigorously I am able to articulate my positions.)
Not a fair trade, but also present-day “Mundane Mandy” does not want to risk everything she cares about to give “Galaxy-brain Gavin” the small chance of achieving his transhumanist utopia.
There’s no reason for me to think that my personal preferences (e.g. that my descendants exist) are related to the “right thing to do”, and so there’s no reason for me to think that optimizing the world for the “right things” will fulfil my preference.
I think most people share similar preferences to me when it comes to their descendants existing, which is why I expect my sentiment to be relatable and hope to collaborate with others on preventing humanity’s end.
Thank you for writing this. I think it’s a bit humorous that the people complaining about too much fearmongering re: kids out on their own are themselves probably engaging in too much fearmongering about police/CPS.