I put some probabilistic weight on radical-short-term-change (15-40 years) scenarios, but I think you’re making a mistake to put ALL your belief in that, with none (or almost none) on more gradual changes. Even if the gradual-change probability isn’t over 50% (and I think it IS, though probably not over 70%), a status-quo-ish future is the most likely single kind of future, and it’s the one in which your property, investments, and knowledge remain valuable.
For the radical changes, it’s worth sorting them into categories. “Disasters I can’t really do much about” can be removed from your calculations. “Disasters I can help avert” and “disasters I can prepare for” are worth an expected-value calculation: what do you give up in the most likely (status-quo-ish) worlds to have better experiences in those sets of worlds, multiplied by the probability of each? Likewise, “utopias where I win regardless of today’s behavior” get ignored, while “utopias I can help cause” and “utopias I can improve my experience in” get evaluated in terms of cost-benefit.
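To make that comparison concrete, here’s a minimal sketch in Python; every scenario name, probability, and payoff below is a made-up placeholder, not anyone’s actual forecast:

```python
# Toy expected-value comparison of "prepare" vs. "don't prepare".
# All probabilities and payoffs are illustrative placeholders.

scenarios = [
    # (name, probability, payoff_if_prepared, payoff_if_unprepared)
    ("status-quo-ish",             0.55,  -1.0,   0.0),   # prep costs a little here
    ("disaster I can prepare for", 0.10,   5.0, -10.0),   # prep pays off a lot here
    ("disaster nothing helps",     0.15, -10.0, -10.0),   # same either way
    ("utopia I win regardless",    0.20,  10.0,  10.0),   # same either way
]

def ev(prepared: bool) -> float:
    """Probability-weighted payoff across all scenarios."""
    return sum(p * (if_prep if prepared else if_not)
               for _name, p, if_prep, if_not in scenarios)

print(f"EV if prepared:   {ev(True):+.2f}")
print(f"EV if unprepared: {ev(False):+.2f}")
```

Scenarios your action can’t affect contribute the same amount to both sides, so they cancel out of the comparison, which is exactly why they can be dropped from the calculation.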
For me (and the children and younger people I’m involved with), the standard “live a good life, and prepare to continue that as long as possible, within the somewhat-predictable variations and complications of modern life” advice holds. Monetary investments are a little more barbell-shaped than 25 years ago, and the risky/speculative portions need to be reviewed more often. Among my circle, education is mostly seen as desirable on its own, more than as a required hurdle to pass, and a focus on breadth and problem-solving has always been a critical part of it (both in school and out), so that doesn’t change much either.
My suggestions regarding the epistemics of the original post are fairly in line with your first paragraph. Allocating decision weight in proportion to the expected impact each scenario has on your life seems like the correct approach. Generating scenarios and forecasting their likelihood is difficult, and there is also a great deal of uncertainty about how you should change your behavior in light of them. Making peace with the outcomes of disastrous scenarios that neither you nor humanity can avoid seems like a healthy way to process the uncontrollable ones.

As for scenarios you can prepare for (the effects of climate change, shallow AI, embryo selection / gene-editing, and other forms of gradual technological progress, among other things), one useful exercise might be determining what you value and want if you could only live, or live comfortably, for the next 5, 10, 15, 20, or 30 years, since each of these horizons (e.g., only living 5 more years vs. only living 10 more years vs. only 5 more years of global business-as-usual) might lead you to take different actions.

I am in a similar boat to you, in that I believe the nature of human activity in the world will change significantly, and on many fronts, in the coming years. I am in my early 20s; I have been doing some remote work / research in the areas of forecasting and ML; I want to make contributions to AI Safety; I want to have children with my partner (in around 6 years); and I do not know where I would like to live, what my investment behavior should be, or what proportion of my time should be spent doing such things as reading, programming, exercising, etc. A useful heuristic for me has been to worry less. Moving away from people and living closer to the wilderness has benefited me as well; my current location seems robust to climate change and to mass exoduses from cities (should they ever occur), has few natural disasters, has good air quality, is generally peaceful and quiet, and is agriculturally robust with sources of water.

Perhaps finding some location or set of habits in line with “what I hoped to retire into / do in a few years” or “what I’ve always desired for myself” might make for a strong remainder-of-life / remainder-of-business-as-usual, whichever you attach more weight to.
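One hypothetical way to run that horizon exercise on paper is to weight each candidate plan by how likely each horizon seems and how well the plan does under it. The horizons, probabilities, and plan scores below are all invented for illustration:

```python
# Toy version of the "how long does business-as-usual last?" exercise.
# Horizon probabilities and plan ratings are hypothetical placeholders.

# P(roughly this many more years of business-as-usual); sums to 1.
horizons = {5: 0.15, 10: 0.25, 20: 0.35, 40: 0.25}

# Subjective 0-10 rating of each plan if a given horizon turns out right.
plans = {
    "spend down, experiences now":     {5: 9, 10: 6, 20: 3, 40: 1},
    "standard career and retirement":  {5: 2, 10: 4, 20: 7, 40: 9},
    "barbell: safe base + long shots": {5: 5, 10: 6, 20: 7, 40: 7},
}

for name, scores in plans.items():
    weighted = sum(p * scores[h] for h, p in horizons.items())
    print(f"{name:32s} expected score: {weighted:.2f}")
```

If one plan scores well across most horizons, the exact forecast matters less; if the ranking flips between horizons, the forecast is doing real work and deserves more attention.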
But if I assume doom, my safe withdrawal rate gets so high!