There are two different considerations at play here:

1. Whether global birth rates/total human population will decline.
2. Whether that decline will be a “bad” thing.
In the case of the former:
I think that a “business as usual” or naive extrapolation of demographic trends is a bad idea when AGI is imminent. In the case of population, it’s less bad than usual, at least compared to things like GDP. As far as I’m concerned, the majority of the probability mass can be divvied up between “baseline human population booms” and “all humans die”.
Why might it boom? (The bust case doesn’t need to be restated on LW of all places).
To the extent that humans consider reproduction to be a terminal value, AI will make it significantly cheaper and easier. AI-assisted creches, or reliable robo-nannies that don’t let their wards succumb to the posited ills of too much screen time or improper socialization, will mean that much of the unpleasantness of raising a child can be delegated, in much the same manner that a billionaire faces no real constraint on their QOL from having a nigh-arbitrary number of kids when they can afford as many nannies as they please. You hardly need to be a billionaire to achieve that: it’s within reach of UMC Third Worlders because of income inequality, and while more expensive in the West, hardly insurmountable for successful DINKs. The fertility-versus-wealth curve is currently highest among the poor, drops precipitously with income, and then rises again among the super-wealthy.
What this arrangement retains are the aspects of raising a child that most people consider universally cherished, be it the warm fuzzy glow of interacting with them, watching them grow and develop, or the more general sense of satisfaction the whole endeavor entails.
If, for some reason, more resource-rich entities like governments desire more humans around, advances like artificial wombs and the aforementioned creches would allow large population cohorts to be raised without most of the usual drawbacks we see today in the dysfunction of orphanages. This serves as a fallback measure in case the average human simply can’t be bothered to reproduce.
The kind of abundance/bounded post-scarcity we can expect will mean no significant downsides from the idle desire to have kids.
Not all people succumb to hyper-stimuli replacements, and the ones who don’t will have far more resources to indulge their natal instincts.
As for the latter:
Today, and for most of human history, population growth has robustly correlated with progress and invention, cultural and especially technological. That will almost certainly cease to be true once we have non-human intelligences, or even superintelligences, about that can replace the cognitive or physical labour that currently requires humans.
It costs far less to spool up a new instance of GPT-4 than it does to conceive and then raise a child to be a productive worker.
You won’t need human scientists, or artists, or anything else, really; AI can and will fill those roles better than we can.
I’m also bullish on the potential for anti-aging therapy, even if our current progress on AGI were to suddenly halt indefinitely. Mere baseline human intelligence seems sufficient for the task within the nominal life expectancy of most people reading this, as it does for interplanetary colonization or constructing Dyson Swarms. AI would just happen to make it all faster, and potentially unlock options unavailable to less intelligent entities, but even we could make post-scarcity happen over the scale of a century, let alone achieve a form of recursive self-improvement through genetic engineering or cybernetics.
From the perspective of a healthy baseliner living in a world with AGI, you won’t notice any of the issues currently plaguing demographically senile or contracting populations, such as failing infrastructure, unsustainable healthcare costs, a loss of impetus in advancing technology, or fewer people around to make music/art/culture/ideas. Whether there are a billion, ten billion, or a trillion other biological humans around will be utterly irrelevant, at least to the deep-seated biological desires we developed in an ancestral environment where we lived and died in the company of about 150 others.
You won’t be lonely. You won’t be living in a world struggling to maintain the pace of progress you once took for granted, or worse, watching everything slowly decay around you.
As such, I personally don’t consider demographic changes worth worrying about, really. On long enough time scales, evolutionary pressures will ensure that pro-natal populations reach carrying capacity. In the short or medium term, on median AGI timelines, it’s exceedingly unlikely that most current countries with sub-replacement TFR will suffer outright, in the sense that their denizens will notice a reduced QOL. Sure, in places like China, Korea, or Japan, where such issues are already pressing, they might have to weather a decade or so at most, but even they will benefit heavily from automation rendering the shortage of humans moot.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
Absolute napkin math while I’m sleep-deprived at the hospital, but you’re looking at something around 86 trillion ML neurons, or about 516 quadrillion parameters, to emulate the human brain. That’s... a lot.
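For the curious, here is roughly how those figures fall out. The assumptions are mine and very loose: ~86 billion biological neurons (the standard estimate), ~1,000 artificial units to reproduce a single neuron’s input/output behavior (the deep-network result discussed in the linked Quanta article), and ~6,000 parameters per artificial unit, which is simply backed out from the quoted totals rather than taken from any paper.

```python
# Napkin math behind the "86 trillion ML neurons / 516 quadrillion parameters" figures.
# All three constants below are rough assumptions, not established facts.

BIO_NEURONS = 86e9        # biological neurons in a human brain (standard estimate)
UNITS_PER_NEURON = 1_000  # artificial units to mimic one neuron's I/O behavior
PARAMS_PER_UNIT = 6_000   # weights per artificial unit, backed out from the totals

ml_neurons = BIO_NEURONS * UNITS_PER_NEURON   # total artificial units needed
parameters = ml_neurons * PARAMS_PER_UNIT     # total trainable parameters

print(f"ML neurons: {ml_neurons:.1e}")   # ~8.6e13, i.e. 86 trillion
print(f"Parameters: {parameters:.1e}")   # ~5.2e17, i.e. ~516 quadrillion
```

For scale, that parameter count is several orders of magnitude beyond the largest language models, which is the point: brute-force neuron-level emulation is an enormous lift.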
Now, I am a doctor, but I’m certainly no neurosurgeon. That being said, I’m not sure it’s particularly conducive to the functioning of a human brain to stuff it full of metallic wires. Leaving aside that Neuralink and co are very superficial and don’t penetrate particularly deep into the cortex (do they even have to? I don’t know; the grey matter is on the outside anyway), it strikes me as an electrical engineer’s nightmare to get this even remotely wired up and working. The crosstalk. The sheer disruption to homeostasis...
If I had to bet on mind uploading, the first step would be creating an AGI. To make that no longer my headache, of course.
Not an option? Eh, I’d look for options significantly more lossy than hooking up every neuron. I think it would be far easier to feed behavioral and observational data, alongside tamer BCIs, into training a far more tractably sized model to mimic me, to a degree indistinguishable to a (blinded) outside observer. It certainly beats being the world’s Literal Worst MRI Candidate, and probably won’t kill you outright. I’m not sure the brain will be remotely close to functional by the time you’re done skewering it like that, which makes me assume that the data you collect any significant way into the process will be garbage from dying neuronal tissue.