I think this post suffers pretty badly from the Typical Mind Fallacy. This thinking isn’t alien to me; I used to think exactly like this 8 years ago, but since getting married and having a kid I now disagree with basically every point.
One claim that is hopefully uncontroversial: Humans are not literally optimizing for IGF,
I think this is controversial because it’s basically wrong :)
First, it’s not actually obvious what “definition” of IGF you are using. If you talk about animals, the definition that might fit is “number of genes in the next generation”. However, if you talk about humans, we care about both “number of genes in the next generation” and “resources given to the children”. Humans can see “one step ahead” and know the rough prospects their children will have in the dating market. “Resources” is not just money; it is also knowledge, beauty, etc.
Given this, if someone decides to have two children instead of four, it might just mean they don’t trust their ability to equip the kids with the necessary tools to succeed.
Now, different people ALSO have different weights for the quantity vs. quality of offspring. See Shoshannah Tekofsky’s comment (unfortunately disagreed with) for the female perspective on this. Evolutionary theory might predict that males are more prone to maximize quantity and satisfice quality, while females are more prone to satisfice quantity and maximize quality. That is, “optimization” is not the same as “maximization”. There can also be satisfice / maximize mixes, where each additional unit of quality or quantity still has value, but that value falls off.
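To make that distinction concrete, here is a toy formalization (my own illustration with made-up functional forms, not something from the post): a pure maximizer’s utility grows without bound in offspring quantity $n$, a pure satisficer’s utility is capped at some threshold $n^*$, and a mix keeps growing but with diminishing returns:

$$U_{\text{maximize}}(n) = n, \qquad U_{\text{satisfice}}(n) = \min(n, n^*), \qquad U_{\text{mix}}(n) = \log(1+n)$$

The same shapes apply to quality; the male/female asymmetry above just swaps which argument gets the capped curve and which gets the unbounded one.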
Would you give up your enjoyment of visual stimuli then, like an actual IGF optimizer would?
If you gave me a choice between having 10 extra kids with my current wife painlessly, plus sufficient resources for a good head start for them, I would consider giving up my enjoyment of visual stimuli. The only hesitation is that I don’t like “weird hypotheticals” in general, and I suspect “human preference architectures” may not be as easily “modularizable” as computer architectures. This giving-up could also have all sorts of negative effects beyond losing the “qualia” of visualness, like losing the capacity for spatial reasoning. However, if the “only” thing I lose is qualia and not any cognitive capacities, then this is an easy choice.
But, do you really fundamentally care that your kids have genomes?
Yes, obviously I do. I don’t consider “genomeless people” to be a thing, I dislike genetic engineering and over-cyborgization, and I don’t think uploads are even possible.
Or, an even sharper proposal: how would you like to be killed right now, and in exchange you’ll be replaced by an entity that uses the same atoms to optimize as hard as those atoms can optimize, for the inclusive genetic fitness of your particular genes. Does this sound like practically the best offer that anyone could ever make you? Or does it sound abhorrent?
This hypothetical is too abstract to be answerable, but if I were to answer a hypothetical with a similar vibe: many people do in fact die for potential benefits to the inclusive fitness of their families; we call them soldiers / warriors / heroes. Now, sometimes their government deceives them about whether or not their sacrifice actually helps their nation, but the underlying psychology seems easily consistent with “IGF-optimization”.
My point today is that the observation “humans care about their kids” is not in tension with the observation “we aren’t IGF maximizers”,
I think this is where the difference between the terms “optimizer” and “maximizer” is important. It’s also important to understand what sort of constraints most people in fact operate under. Most people seem to act AS IF they are IGF satisficers: they get up to a certain level of quantity / quality and seem to slow down after that. However, it’s hard to infer the exact values, because very specific subconscious / conscious beliefs could be influencing the strategy.
For example, I could argue that, secretly, many people want to be maximizers; however, this thing we call civilization is effectively an agreement between maximizers to forgo certain maximization tactics and stick to being satisficers. So people might avoid “overly aggressive” maximization because they are correctly worried it would be perceived as “defection” and end up backfiring. Given that the current environment is very different from the ancestral environment, this particular machinery might be malfunctioning, leading people to subconsciously perceive having any children as defection. However, I suspect humanity will adapt within a small number of generations.
Humans are not literally optimizing for IGF, and regularly trade other values off against IGF.
Sort of true. The main value people seem to trade off is “physical pain”. Humans are also resource- and computation-constrained, and implementing “proper maximization” in a heavily resource-constrained computation may not even be possible.
Introspecting on my thinking before and after kids, I have a theory that the process of finding a mate, prior to “settling down”, tends to block certain introspection into one’s motivations. It’s easier to appreciate art if you are not thinking “oh, I am looking at art I like because art provides Bayesian evidence about lifestyle choices to potential mates”. Thinking this way can appear low-status, which is itself a bad sign. So the brain is more prone to lying to itself that “there is enjoyment for its own sake”. After having a kid, the mental “block” is lifted, and it is sort of obvious that this is what I was doing and why.
Robin is a libertarian; Nate used to be, but after the calls to “bomb datacenters” and the vague calls for “regulation” from that camp, I don’t buy the libertarian credentials.
The specific term “cradle bags of meat” is dehumanization. Many people view dehumanization as evidence of violent intentions. I understand you do not, but can you step back and realize that some people are quite sensitive to this phrasing?
Moreover, when I say “forcefully do stuff to people they don’t like”, this is a general problem. You seem to interpret it as only talking about “forcing people to be uploaded”, which is a specific sub-problem. There are many other instances of the general problem I refer to, such as:
a) forcing people to take care of economically unproductive digital minds.
It’s clear that Nate views “American tax-payers” with contempt. However, depending on the specific economics of digital minds, people are wary of being forced to give up resources for something they don’t care about.
or
b) ignoring existing people’s will with regard to dividing the cosmic endowment.
In your analogy, if a small group of people wants to go to Mars and take a small piece of it, that’s ok. However, if they wish to go to Mars and then forcefully prevent other people from going there at all, because they claim they need all of Mars to run some computation, that is not ok.
Judging from the interstellar probes comment, it’s clear Nate doesn’t see any problem with that.
Again, the general issue here is that I am AWARE of the disagreement about whether or not it is ok to create digital minds (I am not talking about uploads, but other, simpler-to-make categories of digital minds). Despite being in the majority position of “only create tool-AIs”, I acknowledge that people might disagree. There are ways to resolve this peacefully (divide the star systems between groups). However, despite being in the minority position, LW seems to wish to IGNORE everyone else’s vision of the future and hurl insults at them like “bags of meat” and “carbon-chauvinists”.