Two things:
ONE: I love how “should I learn to drive for this trip right here?” cascades into this vast set of questions about possible future history, and AGI, and so on <3
Another great place for linking “right now practical” questions with “long term civilizational” questions is retirement. If you have no cached thoughts on retirement, you might profitably apply the same techniques used for car stuff to “being rich if or when the singularity happens” and see if either thought changes the other?
TWO: I used to think “I want to live this year” and “If I want to live in year Y then I will also want to live in year Y+1”. Then, by induction: “I will want to live forever”.
However, I then noticed that this model wasn’t probabilistic, and was flinching away from possibly the deepest practical question in philosophy, which is “suicide”. Figuring out the causes and probabilities of people changing from “I do NOT want to kill myself in year Y” to “I DO want to kill myself in year Y+1” suggests a target for modeling, one which would end up probabilistic?
Occam (applied to modeling) says that the simplest possible model is univariate, so maybe there is some value P which is the annual probability of “decaying into suicidality that year”? I do mean decay here, sadly. Tragically, it looks to me like suicide rates go up late in life… and some suicides might be hiding in “accidental car deaths” for insurance reasons? So maybe the right thing is not a univariate model with a constant rate, but a model where the probability goes up the older you get?
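To make that toy model concrete: with a constant annual hazard P, the expected number of remaining years works out to roughly 1/P (a geometric distribution), while an age-increasing hazard pulls that number down further. Here is a minimal sketch of both versions; the specific numbers and the exponential growth form for the age-dependent hazard are illustrative placeholders, not anything claimed in the comment above.

```python
# Toy versions of the two models sketched above (illustrative numbers only).

def expected_years_constant(p, horizon=10_000):
    """Expected remaining years when each year has a fixed probability p of
    'decaying' into not wanting to live; roughly 1/p (geometric distribution)."""
    survival, total = 1.0, 0.0
    for _ in range(horizon):
        survival *= (1.0 - p)   # probability of still wanting to live at this age
        total += survival       # adds P(survive at least this many years)
    return total

def expected_years_age_increasing(p0, growth, horizon=10_000):
    """Same, but the annual hazard grows with age: p(t) = min(1, p0 * growth**t).
    The exponential form and the growth rate are assumptions for illustration."""
    survival, total = 1.0, 0.0
    for t in range(horizon):
        p_t = min(1.0, p0 * growth ** t)
        survival *= (1.0 - p_t)
        total += survival
    return total

print(expected_years_constant(0.01))               # ~99 years, i.e. about 1/p
print(expected_years_age_increasing(0.01, 1.05))   # noticeably fewer years
```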
This approach, for me, put bounds on the value of my life (lowering the expected value of cryonics, for example) and caused me to be interested in authentic durable happiness, in general, in humans, and also a subject I invented for myself that I call “gerontopsychology” (then it turned out other people thought of the same coinage, but they aren’t focused on the generalizable causes of suicidal ideation among the elderly the way I am).
ONE: I love how “should I learn to drive for this trip right here?” cascades into this vast set of questions about possible future history, and AGI, and so on <3
Yeah, it is interesting, isn’t it? Personally, I would actually prefer it if it were a more mundane decision, like whether or not I want to deal with the traffic or something :)
Another great place for linking “right now practical” questions with “long term civilizational” questions is retirement. If you have no cached thoughts on retirement, you might profitably apply the same techniques used for car stuff to “being rich if or when the singularity happens” and see if either thought changes the other?
Wow, this is a very good point. Thank you! I had thought about it briefly in the past, but it hadn’t hit me until now that similar logic of “small chance of moderate utility over an extremely long time horizon” might apply to things like “be rich when the singularity happens”. I had always imagined that singularity = amazingness forever for everyone, but I think my inner Professor Quirrell must have fallen asleep. He’s awake now and is yelling at me.
Do you have any thoughts in particular on this? For me personally, I plan on being rich anyway so that I can fund FAI research or something, but I still think it’s worth thinking about.
TWO: I used to think “I want to live this year” and “If I want to live in year Y then I will also want to live in year Y+1”. Then, by induction: “I will want to live forever”.
My thought here is that the post-singularity future will be a black swan, and that this natural decay you’re describing wouldn’t apply. It’ll be an amazing place to live where people won’t want to die. Hopefully.
I know there might be normal features of human psychology that would make people get bored and stop enjoying life at, e.g., their 10,000th birthday, but in a post-singularity world it seems quite likely that we’d be able to overcome this. I guess that gets a little wirehead-y, but it’s not excessive for my taste.
Maybe I’m not thinking about this properly, but it seems to me like more of a yes-or-no sort of thing than a progressive-decay sort of thing. E.g., it doesn’t get more likely that people will want to die as time goes on in a post-singularity world; it’s just a question of whether the post-singularity world has solved that problem or not.
caused me to be interested in authentic durable happiness, in general, in humans, and also a subject I invented for myself that I call “gerontopsychology”
I have a weird feeling you’d be interested in Michael Plant’s work (in the EA community) on “ordinary human unhappiness”. It’s something I’ve always thought about as a worthwhile idea to pursue, too.
Ok three things...
THREE: I drive <3
Haha, to each their own :)