I make this point not to argue against finding love or starting a family, but to argue against a mindset that treats AGI and daily life as more or less two different magisteria….
It still doesn’t feel to me like it’s fully speaking as though the two worlds are one world.
The situation is tricky, IMO. There is, of course, at the end of the day only one world. If we want to have kids who can grow up to adulthood, and who can have progeny of their own, this will require that there be a piece of universe hospitable to human life where they can do that growing up.
At the same time:
a) IMO, there is a fair amount of “belief in belief” about AI safety and adjacent things. In particular, I think many people believe they ought to believe that various “safety” efforts help, without really anticipating-as-if this sort of thing can help.
(The argument for “be worried about the future” is IMO simpler, more obvious, and more likely to make it to the animal than particular beliefs that particular strategies have much of a shot. I’m not sure how much this is or isn’t about AI; many of my Uber drivers seem weirdly worried about the future.)
a2) Also, IMO, a fair number of people’s beliefs (or “beliefs in belief”) about AI safety are partly downstream of others’ political goals, e.g. of others’ social incentives for those people to believe particular narratives about AI safety and about how working at place X can help with AI safety. This can accentuate the “belief in belief” thing.
b) Also, even where people have honest/deep/authentic verbal-level beliefs about a thing, it often doesn’t percolate all the way down into the animal. For example, a friend reports having interviewed a number of people about some sex details, and coming out believing that some people do and some people don’t have a visceral animal-level understanding that birth control prevents pregnancy, and reports furthermore that such animal-level beliefs often backpropagate to people’s desires or lack of desires for different kinds of sex. I believe my friend here, although this falls under “hard to justify personal beliefs.”
c) As I mentioned in the OP, I am worried that when a person “gives up on” their romantic and/or reproductive goals (or other goals that are as deeply felt and as close to the center of a person, noting that the details here vary by individual AFAICT), this can mess up their access to caring and consciousness in general (in Divia’s words, can risk “some super central sign error deep in their psychology”).
a-c, in combination, leave me nervous/skeptical about people saying that their plan is to pursue romance/children “after the Singularity,” especially if they’re already nearing the end of their biological window. And especially if it’s prompted by that being a standard social script in some local circle. I am worried that people may say this, intend it with some portion of themselves, but have their animal hear “I’m going to abandon these goals and live in belief-in-beliefs.”
I have a personal thing up for me about this one. Back in ~2016, I really wanted to try to have a kid, but thought that short timelines plus my own ability to contribute to AI safety efforts meant I probably shouldn’t. I dialoged with my system 1 using all the tools in the book. I consulted all the people near me who seemed like they might have insight into the relevant psychology. My system 1 / animal-level orientation, after dialog, seemed like it favored waiting, hoping for a kid on the other side of the singularity. I mostly passed other people’s sanity checks, both at the time and a few years later when I was like “hey, we were worried about this messing with my psyche, but it seems like it basically worked, right? what is your perception?” And even so, IMO, I landed after a while in a weird no man’s land of a sort of bleached-out depression and difficulty caring about anything. It was kinda downstream of this choice, but very hard for me to perceive directly, and it made it harder to really mean anything and easier to go through the motions of looking like I was trying.
The take-away I’m recommending from this is something like: “be careful about planning on paths toward your deepest, animal-level goals that your animal doesn’t buy. And note that it can be hard to robustly know what your animal is or isn’t buying.” Also, while there’s only one magisterium, if people are animal-level and speech-level reasoning as though there are several, that’s a real and confusing-to-me piece of context for how we’re living here.
That makes sense to me, and it updates me toward your view on the kid-having thing. (Which wasn’t the focus of my comment, but is a thing I was less convinced of before.) I feel sad about that having happened. :( And curious about whether I (or other people I know) are making a similar mistake.
(My personal state re kids is that it feels a bit odd/uncanny when I imagine myself having them, and I don’t currently viscerally feel like I’m giving something up by not reproducing. Though if I lived for centuries, I suspect I’d want kids eventually in the same way I’d want to have a lot of other cool experiences.)
I feel kinda confused about how “political” my AGI-beliefs are. The idea of dying to AGI feels very sensorily-real to me — I feel like my brain puts it in the same reference class as ‘dying from a gunshot wound’, which is something I worry about at least a little in my day-to-day life (even though I live in a pretty safe area by US megalopolis standards), have bad dreams about, semi-regularly idly imagine experiencing, etc. I don’t know how that relates to the “is this belief political?” question, or how to assess that.
Regardless, I like this:
I am worried that people may say this, intend it with some portion of themselves, but have their animal hear “I’m going to abandon these goals and live in belief-in-beliefs.”
I’d also assume by default that ‘the animal discounts more heavily than the philosopher does’ is a factor here...? And/or ‘the animal is better modeled on this particular question as an adaptation-executer rather than a utility-maximizer, such that there’s no trade you can make that will satisfy the animal if it involves trading away having-kids-before-age-45’?
It could be wise to have kids, for the sake of harmony between your parts/subgoals, even if you judge that having kids isn’t directly x-risk-useful and your animal seems to have a pretty visceral appreciation for x-risk—just because that is in fact what an important part of you wants/needs/expects/etc.