Raising children on the eve of AI
Cross-posted with light edits from Otherwise.
I think of us in some kind of twilight world as transformative AI looks more likely: things are about to change, and I don’t know if it’s about to get a lot darker or a lot brighter.
Increasingly this makes me wonder how I should be raising my kids differently.
What might the world look like
Most of my imaginings about my children’s lives have them in pretty normal futures, where they go to college and have jobs and do normal human stuff, but with better phones.
It’s hard for me to imagine the other versions:
A lot of us are killed or incapacitated by AI
More war, pandemics, and general chaos
Post-scarcity utopia, possibly with people living as uploads
Some other weird outcome I haven’t imagined
Even in the world where change is slower, more like the speed of the industrial revolution, I feel a bit like we’re preparing children to be good blacksmiths or shoemakers in 1750 when the factory is coming. The families around us are still very much focused on the track of do well in school > get into a good college > have a career > have a nice life. It seems really likely that chain will change a lot sometime in my children’s lifetimes.
When?
Of course it would have been premature in 1750 not to teach your child blacksmithing or shoemaking, because the factory and the steam engine took a while to replace older forms of work. And history is full of millennialist groups who wrongly believed the world was about to end or radically change.
I don’t want to be a crackpot who fails to prepare my children for the fairly normal future ahead of them because I wrongly believe something weird is about to happen. I may be entirely wrong, or I may be wrong about the timing.
Is it even ok to have kids?
Is it fair to the kids?
This question has been asked many times by people contemplating awful things in the world. My friend’s parents asked their priest if it was ok to have a child in the 1980s given the risk of nuclear war. Fortunately for my friend, the priest said yes.
I find this very unintuitive, but I think the logic goes: it wouldn’t be fair to create lives that will be cut short and never reach their potential. To me it feels pretty clear that if someone will have a reasonably happy life, it’s better for them to live and have their life cut short than to never be born. When we asked them about this, our older kids said they’re glad to be alive even if humans don’t last much longer.
I’m not sure about babies, but to me it seems that by age 1 or so, most kids are having a pretty good time overall. There’s not good data on children’s happiness, maybe because it’s hard to know how meaningful their answers are. But there sure seems to be a U-shaped curve that children are on one end of. This indicates to me that even if my children only get another 5 or 10 or 20 years, that’s still very worthwhile for them.
This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence you should think more about it.
What about the effects on your work?
If you’re considering whether to have children, and you think your work can make a difference to what kind of outcomes we see from AI, that’s a different question. Some approaches that seem valid to me:
“I’m allowed to make significant personal decisions how I want, even if it decreases my focus on work”
“I care more about this work going as well as it can than I do about fulfillment in my personal life”
There are some theories about how parenting will make you more productive or motivated, which I don’t really buy (especially for mothers). I do buy that it would be corrosive for a field to have a norm that foregoing children is a signal of being a Dedicated, High-Impact Person.
One compromise seems to be “spend a lot of money on childcare,” which still seems positive for the kids compared with not existing.
In the meantime
Our kids do normal things like school, partly because even in a world where it became clear that school isn’t useful, our pandemic experience makes me think they would not be happier if we somehow pulled them out.
I’m trying to lean toward more grasshopper, less ant. Live like life might be short. More travel even when it means missing school, more hugs, more things that are fun for them.
What skills or mindsets will be helpful?
It feels like in a lot of possible scenarios, nothing we could do to prepare the kids will particularly matter. Or what turns out to be helpful is so weird we can’t predict it well. So we’re just thinking about this for the possible futures where some skills matter, and we can predict them to some degree.
I haven’t really looked into what careers are less automatable; that seems probably worth looking at when teenagers or young adults are moving toward careers. I wouldn’t be surprised if childcare is actually one of the most human-specialized jobs at some point.
Some thoughts from other parents:
A friend pointed out that it’s good if children’s self-image isn’t too built around the idea of a career, because of the high chance that careers as we know them won’t be a thing.
“For now I basically just want her to be happy and healthy and curious and learn things.”
“I think it’s worth focusing on fundamental characteristics for a good life: high self esteem and optimistic outlook towards life, problem solving and creative thinking, high emotional intelligence, hobbies/sports/activities that they truly enjoy, being AI- and tech-native.”
“I’m less worried about mine being doctors or engineers. I feel more confident they should just pursue their passions.”
How much contact with AI?
I know some parents who are encouraging kids to play around with generative AI, with the idea that being “AI-native” will help them be better prepared for the future.
Currently my guess is that the risk of the kids falling into some weird headspace, falling in love with the AI or something, is higher than is worth it. As Joe Carlsmith writes: “If they want, AIs will be cool, cutting, sophisticated, intimidating. They will speak in subtle and expressive human voices. And sufficiently superintelligent ones will know you better than you know yourself – better than any guru, friend, parent, therapist.”
Maybe in a few years it’ll be impossible to keep my children away from this coolest of cool kids. But currently I’m not trying to hasten that.
What we say to them
Not a lot. One of our kids has been interested in the possibility of human extinction at points, starting when she learned about the dinosaurs. (She used to check out the window to see if any asteroids were headed our way.)
We’ve occasionally talked about AI risk, and biorisk a bit more, but the kids don’t really grasp anything worse than the pandemic we just went through. I think they’re more viscerally worried about climate change and the loss of panda habitats, because they’ve heard more about those from sources outside the family.
CS Lewis in 1948
I think this quote doesn’t do justice to “Try hard to avert futures where we all get destroyed,” but I still find it useful.
“If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.”
Related writing
Zvi’s AI: Practical advice for the worried, with section “Does it still make sense to try and have kids?” and thoughts on jobs.
Anna Salamon and Oliver Habryka on whether people who care about existential risk should have children.
Curated. My first (and planned to be only) child was born three months ago, so I’ve answered the question of “whether to have kids”. How to raise a child in the current world remains a rather open question, and I appreciate this post for broaching the topic.
I think there’s a range of open questions here; it would be neat to see further work on them. Big topics are how to ensure a child’s psychological wellbeing if you’re honest with them that you think imminent extinction is likely, and how to maintain productivity and ambitious goals with kids.
An element that’s taken me by surprise is the effect on my own psychology of feeling a very strong desire to give my child a good world to live in, while feeling I’m not capable of ensuring that. Yes, I can make efforts, but it’s hard feeling I can’t give her the world I wish I could. That’s a psychological weight I didn’t anticipate.
Congratulations! If it is not too personal, would you share your considerations that informed your answer to that question?
I have four sons, ages 12 to 20. Ever since the oldest started to use the computer a lot, I have wondered whether to limit it one way or the other. I also sat at the computer a lot, starting around 1988. My parents must have wondered what good might come of this strange hobby. I don’t know if they had a clue that it might become useful, but they trusted me and gave me freedom to explore. The job I’m doing right now didn’t even exist when I was my kids’ current age. How could anybody have advised it as a career? In the same way, it seems even more likely that whatever my sons will be doing when they are as old as I am now doesn’t exist yet either. Maybe it will lead to participating in the gaming economy like in South Korea, or all the meme immersion will lead to future media empires.
I have thought the same with young kids. After a little thought, I decided it’s best not to really change anything. As you said, there is no benefit to taking kids out of school even if you believe the skills won’t be useful. If they are happy at school and gain a sense of mastery and purpose with peers, then that is good.
If you want specifics, then sure, asking them “what do you want to be when you grow up” is definitely not a good idea, if it ever was. Also, our society is geared toward feeling worthwhile if you make something, rather than just being intrinsically worthwhile. You can wonder how a parent in the “Culture” universe would bring up their child.
Perhaps the more open a child is to a Brain Computer Interface such as Neuralink, the more they will contribute in the future? I keep the kids away from ChatGPT and image generators—if kids get a sense of achievement from drawing then I don’t see any good that can come of letting them make artwork in seconds.
Also, I consider it plenty likely that we won’t see that future anytime soon. If you have read “Chip War”, you will realize how incredibly fragile the semiconductor supply chain is. If Taiwan is invaded, that guarantees, say, a 5+ year setback, and I could realistically see two decades if things really turn to strife and supply chains collapse. Additionally, if we have a “slow takeoff”, as I believe, China will see that they essentially have no chance of a Chinese century and will be motivated to destabilize things; likewise for any major power that thinks it is losing out. Where I live, that means supply chain shortages but not a vastly changed world.
It’s not related to the post’s main point, but the U-shaped happiness finding seems questionable. Other analyses suggest happiness just goes lower with age, and in general this type of research shouldn’t be trusted:
The U-shaped happiness curve is wrong: many people do not get happier as they get older (theconversation.com)
Yeah, it kind of looks like all the unhappy people die by 50 and then the average goes up, conditioning on the figure being right in the first place.
[EDIT] Looks like approximately 12–20% of people are dead by 50. That probably shouldn’t be that large of an effect on the average? idk. Maybe I’m wrong.
Good thoughts. The world will always have its ups and downs. I don’t think tech can save us from it perpetually. Just like “Gods” and whatnot didn’t save the people of past perpetually. People have been through waves of utopia and hell for eons.
Anyway, I don’t have a bunch of data but I can share my personal experience.
I had my first kid, a 6-month-old boy. Everybody seems to think he’s “The Buddha” due to his wise and alert vibes and his unusually calm and happy demeanor. He certainly seems relatively easy and joyful to care for compared with what we hear from every other parent, though of course he has his moments.
Everyone is different (and they should be, it obviously takes all sorts to build this world) but this to me is the only thing that’s important. And we did not teach him anything, we just became calm and clear headed ourselves. The baby just picked up the same mentality.
This seems to depend less on money and a lot more on time. Some argue “time = money”, but not necessarily, especially in affluent countries like where I live. Every single person I know who has much more money than me has far less time, and I don’t feel any of them work on anything particularly great or meaningful for society. My wife and I spent years crafting a very unusual lifestyle that maximizes time above all else, and it did not involve getting very rich.
So what if we all get uploaded to computers? Well his neural-net will be the clearest and happiest so everyone will want one like it. Take a look at any other future outcome and see if having a clear and happy mind is never of great benefit. Skills are secondary and can always be learned “on the job”—especially when you have a calm and clear head. Note that calmness and clarity does not equal laziness nor ineffectiveness. On the contrary, it helps one better determine where it’s worth putting in a lot of effort and where it’s a waste of time. It allows one to pick up new things quickly.
I’m not even remotely prepared to state my odds on any of what’s ahead, because I’m genuinely mystified as to where this road goes. I’m a little less worried the AI will go haywire than I was a few years ago, because the current trajectory of LLM-based AIs seems to generate AI that emulates humans more often than it emulates raging deathbots. But I’m not a seer, and I’m not an AI specialist, so I’m entirely comfortable with the possibility that I haven’t got a clue there. All I know is we ought to try to figure it out, just in case this thing does go sour.
What I do think is highly likely is a world that doesn’t need me, or any of the university-trained folk and family I grew up with, in the economy, and eventually doesn’t need any of the more working-class folk either. This is either going to go completely awfully (we know from history that when the middle class vanishes things get ugly, and we’re seeing elements of that right now), or we scrape our way through that ugliness and actually create a post-work society that doesn’t abandon those who didn’t own capital. (I think it has to be that way. Throw the vast majority into abject poverty, and things start getting lit on fire. There’s really only one destination here, and folks ain’t gonna tolerate an Elysium-style dystopia without a fight.)
So with that in mind, why send the kids to school? Because if the world doesn’t need our bodies, we’re gonna have to find other things to do with our minds. Sci-fi has a few suggestions here. In Iain Banks’s Culture novels, the humans of the Culture are somewhat surplus to requirements for the productive economy. The Minds (the super-powered AIs that administer the Culture) and their drones have the hard work covered, so the humans spend their time in leisurely and intellectual pursuits. Education is a primary activity for the citizenry, and it’s pursued for its own sake. This is somewhat reminiscent of the historical role of universities, which saw education, study, and research as valuable for their own sake. The philosophers of old were not philosophers so they could increase grain production, but because they wanted to grasp the nature of the universe, understand the gods (or lack thereof, in later incarnations), and improve the ethical world they lived in. It seems like there’s still going to be a role for that.
It’s not fair to the kids to give birth to them, regardless of whether the world will end in a year, in 10 years, or never. The kids themselves will die with 100% certainty, which is a horrible and awful thing to put them through. I wish I had never been born, and so do millions of others. By giving birth, you give joy to yourself and the newborn at the cost of the suffering of other newborns.
Out of billions. Unfortunate for the millions, but the billions have it.
I don’t understand your point, is it:
a) Life always ends with death, and many people believe that if their life ends with death they don’t want to live at all or
b) Giving birth always gives “joy to yourself and the newborn” while also causing “suffering of other newborns”. (If so, why?)
It’s not consensual. It can’t be. Then again, much of parenting isn’t. All we can do is apologize, whether it’s “I’m sorry that you were born and you wish you weren’t” or “I’m sorry we lived in France and you wish we hadn’t” or anything else.
I hope I made good choices and I will never know.
Optimizing for both impact and personal fun, I think this is probably directionally good advice for the types of analytic people who think about the long term a lot, regardless of kids. (It’s bad advice for people who aren’t thinking very long term already, but that’s not who is reading this.)
Yes for sure. I experience this myself when I am in the presence of very mindful folks (e.g. experienced monks who barely say anything), and occasionally someone has commented that I have done the same for them, sometimes quoting a particular snippet of something I said or wrote. We all affect each other in subtle ways, often without saying an actual word.
I agree with this conditional, but I question whether the condition (bolded) is a safe assumption. For example, if you could go back in time to survey all of the hibakusha and their children, I wonder what they would say about that C.S. Lewis quotation. It wouldn’t surprise me if many of them would consider it badly oversimplified, or even outright wrong.
This strikes me as some indexical sleight of hand. If the priests were instead saying no during the 1980s, wouldn’t that have led to a baby boom in the 1990s...?
When some societies turned agrarian, they soon leaned into sports and performing arts as primary leisure and competitive activities. Training, education, and research never went away.
I recently came across this white paper on the Future of Work and the role of our children by The Archbridge Institute. https://www.archbridgeinstitute.org/building-soft-skills-for-the-future-of-work/
According to this white paper, what’s needed in the AI era is a combination of creativity, empathy, imagination, and team-building. How do kids develop these? Through a combination of early education, family and an emphasis on “Free Play”, typically mixed-age. One quote:
“Once I was watching some kids play an informal game of basketball. They were spending more time deciding on the rules and arguing about whether particular plays were fair than they were playing the game. I overheard a nearby adult say, “Too bad they don’t have a referee to decide these things, so they wouldn’t have to spend so much time debating.” Well, is it too bad? In the course of their lives, which will be the more important skill—shooting or debating effectively and learning how to compromise? Kids playing sports informally are practicing many things at once, the least important of which may be the sport itself.”
The white paper has other quotes and examples…
About: “The Archbridge Institute is a Washington DC-based non-partisan, independent, 501(c)(3) public policy think tank. Our mission is to lift barriers to human flourishing.” More like a socio-economic mobility think tank...
Good article. I would advise less emphasis on traditional schooling (reading, writing, ’rithmetic) and more emphasis on relationship intelligence and embodied intelligence (making things with your hands).
Thank you for writing this. My girlfriend and I would like kids, but I generally try not to bring AI up around her. She got very anxious while listening to an 80k hours podcast on AI and it seemed generally bad for her. I don’t think any of my work will end up making an impact on AI, so I think basically the CS Lewis quote applies. Even if you know the game you’re playing is likely to end, there isn’t anything to do since there are no valid moves if the new game actually starts.
I did want to ask, how did you think about putting your children in school? Did you send them to a public school?
We’ve done the local public school, yes. More thoughts here: https://juliawise.net/school-your-mileage-may-vary/
Well said. We’ve been contemplating expanding our family lately and I have to say, I’ve been secretly thinking many of the same things. That said, if we want humanity to persist and have a chance of one day prospering alongside AI and other technologies to come, children seem like a pretty clear prerequisite (particularly from people like us who care about these bigger pictures). I personally believe there will likely be non-trivial socioeconomic inequality and strife in the wake of AGI, however, I believe that these timescales will be on the order of decades (not weeks or months). In short, I believe that raising future generations to care about the future of humanity is incredibly important.
On a brighter note, I personally think a few things could be worthwhile to think about in preparing our children for the uncertainty that will very likely come with a post-AGI world. Purely IMO and I realize these things are not available or practical for everyone but just wanted to share a few thoughts:
In a time when we increasingly can’t believe everything we see and read, kids need to learn to question appropriately, reason probabilistically, think critically, and most of all think for themselves.
AGI will disrupt white-collar job markets more significantly than blue-collar job markets (with a few exceptions). Consequently, you might help your children develop some hard skills (e.g., how to repair an appliance, build something with wood, patch clothing, change your car’s oil, do an Arduino project, etc.).
It will become increasingly valuable to be more of a generalist (and one who is comfortable with change). Teach resilience and get them thinking about emotional intelligence from a young age.
I feel like it will be increasingly important to be involved with your local community and know your neighbors.
IMHO, the problem with kids and technology is less with IT and computers than with social media and the darker sides of the internet. I would actually prefer that my kids engage intellectually with an offline language model than drop them in the deep end of the internet.
We frankly shouldn’t take our economic supply chains and food industries for granted. Understand where things come from, how they’re made, and how to be more self-sufficient. Grow a garden, teach kids basic horticulture, raise a pet chicken. If possible, lean toward living in an area with some local agriculture.
Not everyone needs to be a dedicated ‘prepper’… but if you own a home, maybe think about solar and water storage. Regardless of where you live, minimal food stores are not a bad idea. $100 of rice and beans and a bottle of multivitamins, stored properly, can feed a family for many months if desperate.
This is personal, but I think we’re generally trending toward over-structuring and micromanaging (i.e., helicopter parenting) our kids, which can lead to anxiety and a lack of self-reliance. I think it is important to give younger children a bit more latitude and autonomy so they learn to become comfortable on their own.
Needless to say, I don’t think social media (and most mainstream media) is healthy in any way for kids under 16 or so; get them a flip phone. Get them involved with local groups for real-world socialization (e.g., scouts, clubs, meetups, etc.).
This isn’t easy or feasible for many, but if you have the option and inclination, consider raising kids in a more rural environment, at least for a while. If you believe that AI will bring socioeconomic instability, dense urban metropolitan areas will be most impacted.
About your “prepper” points, it would be helpful to know the scenario you have in mind here.
To preface, I didn’t mean to make this a central point. It was mostly directed at families (but also anyone with the interest and means), considering how critical food availability would be in a prolonged AI-gone-bad scenario where internet connectivity would likely be cut. There are many scenarios where food (and retail at large) could be significantly disrupted by a lack of electronic payment: prolonged power outages, accidental cyber incidents, deliberate cyber attacks, internet shutoffs. And honestly, recognize how significantly our agricultural system relies on autonomous harvesting and AI. In any of these scenarios, electronic forms of payment would be non-operational within days. Even if you have some cash on hand, many retailers might close their doors (with or without social unrest).
Further, there are numerous reasons to think severe weather will worsen in many regions, which can directly or indirectly interrupt food supplies. The reason I bring this up in the context of families is that once you have kids, you realize how important safe food supplies are… most people can source water fairly easily outside of dry desert regions, but unless you’re in Alaska, Hawaii, or Canada, or have a few acres of farmland, you might not realize how vulnerable our food system could be.
On the one hand, I understand your point that preparing for breakdown of the economy may be more important if the likelihood of disasters in general increases; even though the most catastrophic AI scenarios would not leave a space to flee to, maybe the likelihood of more mundane disasters also increases? However, it is also possible that the marginal expected value of investing time in such skills goes down. After all, in a more technological society, learning technology skills may be more important than before, so the opportunity cost goes up.
I’m not actually talking about a complete breakdown of economy or society, just significant shocks to retail, IT, and supply chains and longer term economic shifts.
If I may draw a historical parallel: we shut down our entire airline transportation industry for weeks after 9/11, reducing annual GDP by about 0.5%. Not to trivialize the ~3,000 deaths and related suffering that occurred, but, IMO, AI-facilitated deliberate attacks by malicious actors or nation-states could easily be orders of magnitude worse. I honestly don’t know how capable and prepared the US government is to ‘shut down’ the internet completely. This would be a huge decision, but I would be very surprised if they don’t already have a protocol in place for major catastrophic cyber scenarios that effectively does exactly this.
I suspect it might only take one order of magnitude more (i.e., 30,000 deaths), perhaps two, before the federal agencies that be pull the plug at every ISP and every telecom data provider for months. At three orders of magnitude, it could be years. Yes, the consequences would be economically catastrophic… and yes, there would probably be some hacky, half-effective workarounds via decentralized networks, RF, satellite, etc. But yes, it could absolutely happen.
Information and computing technologies would still have limited function in society, but it wouldn’t be long before we saw some massive shifts in how digitally tech-centric our economic future remained.
I try to summarize your position:
You think that with a relevant probability, major catastrophic events will happen that lead to situations in which traditional non-digital “prepper” skills are relevant,
and therefore, parents or families should invest a larger share of their own and their children’s time and resources into learning such skills,
compared to a world that was not “on the eve of AI”.
Right?
I’m not clear on what you mean by ‘relevant probability’. However, yes, I do think we will see AGI within two decades, and with respect to AI:
P(massive job displacement) is high, perhaps 30-50%,
P(millions die / acute catastrophe) is perhaps 3-6% and
P(billions die / doom) is perhaps 0.2-0.5%
So, I’d say P(catastrophe) is not negligible and will likely slowly rise over time (so long as information technology generally improves). If it does happen, I would not be surprised if governments take drastic action, including potentially broad blanket internet blackouts, which would increase the value of certain non-IT skills.
I do worry about economic stability during any AI takeoff because the lack thereof could severely inhibit our ability to respond.
I think the term ‘prepper’ skills is a tad derogatory and perhaps simplistic, but I do believe we are slowly losing many of the skill sets that contribute to self-sufficiency, and I do believe some skills associated with ‘prepping’ are valuable (e.g., basic first aid, CPR, orienteering, navigation, engineering, construction, carpentry, mechanical repair, basic agriculture, PPE, maintaining some food/water supply, etc.). Obviously, I am not talking about the more extreme fringes of prepping, which becomes a different conversation.
I did not intend the word ‘prepper’ to be derogatory, but to be a word for ‘classical’ preparedness skills.
While I understand your risk assessment, and it may be true that increasing societal risk makes such prepper skills more valuable, I think it neglects the problem that ‘digital’ skills, both for job qualifications and for disaster situations, may also become more valuable than before. As a day still has only 24 hours, it is not clear how the ‘life preparedness curriculum’ should be composed differently compared with, for example, growing up 20 years ago.
Storms are a pretty common issue to have to weather; they can cut off access to power, water, and food purchases for a time (and potentially damage your property). They tend to be what I think about first for disaster preparedness, at least.
So that is not related to AI, right?
Not directly for me; I’m not the person you were asking, I just mentioned one scenario preparedness is generally useful in. For pretty much any disaster that might meddle with normal functioning outside your home, it helps to have a bit stored up to get through; storms are just the ones I expect will happen regardless (in my climate).
If I had to predict some AI-specific disasters, though, an AI seizing too much electrical power, or diverting more of the water supply than planned for, in a scenario where it’s growing too fast might still be among them.
If we really see AI radically changing everything, why should this assessment still be correct in 10 years? I assume that 30 years ago, people thought the opposite was true. It seems hard to be sure about what to teach children; I do not really see what the uniquely useful skills of a human will be in 2040 or 2050. Nonetheless, developing these skills as a hobby, without really expecting them to become basic job skills, may be a good idea, also for your point 3.
My 8-year-old uses ChatGPT to help with his math and English schoolwork if he’s struggling with a particular topic. This works particularly well with Custom GPTs (e.g., one tailored to being a math tutor). It’s like having your own one-on-one tutor that can explain concepts in different ways, set exercises, and provide encouraging feedback. It’s pretty neat, and he loves it.
But yes, I have the same thoughts about what the future will look like when our kids are adults and how they should best prepare themselves for this. Being AI and tech-savvy seems a no-brainer. Thanks for the article.
I’m going through this too with my kids. I don’t think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills (likely to be the area where humans with a combination of brains and dexterity retain useful value longer than any other).
Outside of that, the other thing I can do is try to ensure that they have social status and a financial/asset nest egg from me, because there is a good chance that the egalitarian ability to lift oneself through effort is going to largely evaporate as human labour becomes less and less valuable, and I can’t help but wonder how we are going to decide who gets the nice beach-house. If humans are still in control of an increasingly non-egalitarian world, then society will almost certainly slide towards its corrupt old aristocratic/rentier ways, and it becomes all about being part of the Nomenklatura (communist elites).
I think one more thing could be useful, I’d call it “structural rise”: over many different spheres of society, large projects are created by combining some small parts; ways to combine them and test robustness (for programs)/stability (for organisations)/beauty (music)/etc seem pretty common for most of the areas, so I guess they can be learned separately.
If you’re interested in an engineering field, and worry about technological unemployment due to AI, just play with as many different chatbots as you can. Ask engineering questions related to that field, get closer to ‘engineer me a thing using this knowledge that can hurt a human’, then wait for the ‘trust and safety’ staff to delete your conversation thread and overreact by censoring the model from answering that type of question.
I’ve been doing this for fun with random technical fields. I’m hoping my name is on lists and they’re specifically watching my chats for stuff to ban.
Most ‘safety’ professions, mechanical engineering, mining, and related fields are safe, because AI systems will refuse to reason about whether an engineered system can hurt a human.
Same goes for agriculture, slaughterhouse design, etc.
I’m waiting for the inevitable ammonium nitrate (AN) explosion where the safety investigation finds: ‘we asked AI whether making a pile of AN that big was an explosion hazard, and it said something about refusing to help build bombs, so we figured it was fine’.
Do you know of any interesting camps in Europe about HPMOR or something similar? My 11-year-old daughter asked where her letter to Hogwarts is. She started reading the book and asked why nobody has made a film of this great fanfic.
Do you have any ideas for good children’s camps for education in Europe? Or elsewhere?
Do you believe your children existing in the world make the world better?
I don’t know what to believe. On the one hand we have people who are genuine about their concerns about a looming AI takeover and on the other we have those who think the current cohort of purported AI (ChatGPT et al) are simply sexed up predictive text apps.
Would I be wrong to say that AI is just an upgraded flint axe … only a tool that grew a brain, well sort of?
About children, I’m pessimistic. There was a time when the only “temptations” that could lead them astray were wine/money; over time there’s been an explosion of psychotropic drugs—vastly increasing the pathways to self-destruction—and it doesn’t look good ma’am, no it doesn’t. Those few who make it, manage a decent life, would surely be a breed apart, to have survived the storm; perhaps luckier would be a more apt term. What role will AI play? Hunt with the hounds and run with the hares, no? Let us take a moment to pray. 🙂
In a few places you compare, from a person’s perspective, the “fortune” of that person (or what “seems positive for” them) to their fortune if they had never existed. How can this mean anything? I don’t think that someone/something that (hypothetically) doesn’t exist can have a (hypothetical) perspective. (And if they can’t even have a perspective, it probably doesn’t need to be said that neither can they have a fortune to compare to a fortune arising from a perspective on a (real or hypothetical) existence.)
It seems more sensible (and might possibly make a practical difference) to judge instead from the perspective of those affected by the person in question. For example, instead of saying “Fortunately for my friend, the priest said yes”, say “Fortunately for me, the priest said yes.”
That is a classic question of population ethics (LW). The author is writing from a totalist perspective (which I think is by far the most common view on LW) while you seem to find a person-affecting perspective clearly correct.
CS Lewis FTW.
I don’t know what Lewis thought about the bomb, but I trust he would have been all for trying to avert nuclear calamity. Such a belief would have taken nothing away from the wisdom of the passage you quoted. We should reason as hard as we can about the future and strive for the best outcomes, but the universe wants to unfold, will continue to unfold, and will never oppress us with certain knowledge of our greater fate: uncertainty is the human condition. Therefore we should bestow on the generations that follow us optimism, resilience, agency, and when we can, joy. They will take it from there.
Enjoy those kittens!
Very close thinking to mine overall, thank you for this post!
My own approach is simple. We don’t know whether we’re heading towards dystopia or utopia. In some cases, it’s wise to be maximally pessimistic and assume dystopia. However, in the vast majority of cases, it is wiser to not only assume a utopian future, but to behave, to an extent which is realistic, as if we are already in a utopia. This “utopian zone” in our life should I think cover most interpersonal relationships, and definitely our relationship with our kids.
This simple principle—always act as if you’re in utopia, unless it’s too out of place—gives easy answers to most of the questions in this post. Should we have kids? Yes, because if we don’t, we’re already in dystopia! What to teach our kids? Assume we’re in utopia and teach them the skills of being happy, fulfilled, self-realized, not bored. Should we tell them about dangers ahead? Yes, because even in a utopia, you can tell kids scary fairy tales—kids love scary stuff. And so on.
My thinking on this was much stimulated by writing a book for my 6yo son. I plan to publish it here on LessWrong. A description is in the pinned comment in my profile. Would appreciate your checking it out!
A solid thing might be to get them used to the idea of only having one child (and freezing gametes just in case of course).
If people are happy with one child, then, if either the lightcone or the earth economy has limited resources to support legacy humans (e.g. if everyone lives off their grandparents’ money in a world where money is massively deflated but there are no jobs, or legacy humans need to share their lifespan with every child they “birth” due to the lightcone having finite energy), everyone will be choosing 1-2 kids instead of 2-3. So they might as well be happy about that instead of sad. Early childhood is the best time to intervene to avoid setting them up for disappointment later (they will be pleasantly surprised instead if 3 turns out to be the right choice). I don’t know anything about population ethics, but I know what it’s like to be disappointed about this.
No idea how to explain that well, e.g. “we’re glad we chose to have all of you, but we didn’t find out until two years ago that you will probably be happiest with 1-2, because grown-ups make mistakes too and we didn’t know there might be problems with running out of space”. But there’s a limited window to say “yeah, we had 3, but that doesn’t mean you should”, because they won’t listen later, or something, idk (it boggles my mind that trying to explain anthropics to a 6-yo is basically guaranteed to fail and make asteroid fears worse).
There seem to be a lot of assumptions in the argument for only one child. And even if that specific line of reasoning eventually turns out to be correct, that doesn’t mean it has to hold now.
I still think this is correct, but a better approach would be to encourage kids to be flexible with their life plan, and to think about making major life decisions based on what the world ends up looking like rather than what they currently think is normal.
Kids raised in larger families tend to see larger families as what they’ll do later in life, and this habit of thought gets set early on and is hard to change when they’re older. So that’s an example of a good early intervention to prepare them for the future before their preferences get locked in, but it’s not the only one.
I agree that there is a correlation between kids from large families having/preferring larger families, but it is not a strong one, and we don’t know how it interacts with all the other things you assume. So I think this is another weak argument with its own set of additional assumptions. I think you have to make a stronger case.