Even if you don’t have exact values, it’s possible to model the distribution of peak heights and flood depths, to determine how many peaks you’d need to see before reaching a given confidence that you’re high enough. And then your search mechanism becomes “don’t climb any peak fully: set a path to see as many peaks as possible before committing to one, then climb the best one you know”. Or, if the flood is slow, you might get stuck on a peak during exploration, so it reduces to the secretary problem: https://en.wikipedia.org/wiki/Secretary_problem
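To make “how many peaks before a given confidence” concrete (the specific distributions below are purely illustrative assumptions, not part of the original scenario): if the $k$ inspected peak heights are i.i.d. with CDF $F$ and the flood depth $D$ is independent with CDF $G$, then

$$P(\text{best of the } k \text{ peaks clears the flood}) = 1 - \mathbb{E}\left[F(D)^k\right].$$

If both heights and depth are uniform on $[0,1]$, this equals $1 - \tfrac{1}{k+1} = \tfrac{k}{k+1}$, so reaching 95% confidence takes $k \ge 19$ peaks.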
The question of whether it’s better for the entire group to take its best chance on one peak (all live or all die), or whether it’s best to spread out, making it almost certain that some will die and others will live, is rather distinct from the best search strategy. I weakly believe that there is no preference aggregation under which it makes sense to treat “group agency” as a distinct thing from a “set of individual agents”. So it will depend on the altruism of the individuals whether they want the best chance of individual survival (by following the best searcher) or whether they accept a lower chance of their own survival to get a higher chance that SOMEONE survives.
Ah, I never thought about this being a secretary problem.
Well, initially I used it as an analogy for evolution and didn’t think too much about memorising/backtracking.
Oh wait, if the mountaineer has memory of each peak he saw, then he should go back to one of the high peaks he encountered before (assuming the flood hasn’t mopped the floor yet, which is a given since he is still exploring); there are probably no irrecoverable rejections here, unlike in the secretary problem.
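A toy simulation of that difference (the uniform heights/flood depth and the exploration budget are made-up assumptions, just to show the mechanic): with recall, “inspect the peaks you have time for, then go back to the best one seen” can only do as well as or better than a secretary-style no-recall rule, since a rejection is no longer irrevocable.

```python
import random

def simulate(explore_budget=10, trials=100_000, seed=0):
    """Toy comparison of two search rules before a flood of random depth.

    Peak heights and the flood depth are drawn from Uniform(0, 1) -- an
    arbitrary modelling assumption.  The climber can inspect `explore_budget`
    peaks before the water arrives, then must stand on one of them.

    - 'recall':    remember every inspected peak and go back to the highest.
    - 'secretary': no-recall stopping rule: skip roughly the first third of
                   the budget, then commit to the first peak that beats all
                   of those (or get stuck on the last peak if none does).
    """
    rng = random.Random(seed)
    survived = {"recall": 0, "secretary": 0}
    for _ in range(trials):
        peaks = [rng.random() for _ in range(explore_budget)]
        flood = rng.random()

        # With recall: simply climb the best peak inspected.
        recall_choice = max(peaks)

        # Without recall: secretary-style stopping rule.
        cutoff = explore_budget // 3
        best_early = max(peaks[:cutoff]) if cutoff else float("-inf")
        secretary_choice = peaks[-1]  # fallback: stuck on the last peak seen
        for height in peaks[cutoff:]:
            if height > best_early:
                secretary_choice = height
                break

        survived["recall"] += recall_choice > flood
        survived["secretary"] += secretary_choice > flood

    return {rule: count / trials for rule, count in survived.items()}

print(simulate())  # recall survives at least as often as secretary, by construction
```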
The second choice is a strange one. I think the entire group taking its best chance on one peak ALSO maximises the expected number of survivors, together with maximising each individual’s chance of survival.
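A quick way to see both halves of the trade-off (assuming, purely for illustration, that peak $i$ stays above the flood independently with probability $p_i$, where $p_1 \ge p_2 \ge \dots$): if $n_i$ of the $N$ people go to peak $i$, then

$$\mathbb{E}[\text{survivors}] = \sum_i n_i p_i \le N p_1,$$

with equality when everyone follows the best searcher to peak 1, whereas

$$P(\text{someone survives}) = 1 - \prod_{i\,:\,n_i > 0} (1 - p_i)$$

only grows as people spread over more peaks. So concentrating maximises expected survivors and each follower’s individual chance, while spreading maximises the chance that at least one person survives.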
But it still seems that “a higher chance that someone survives” is something we want to take into the utility calculation when humanity makes choices in the face of a catastrophe.
For example, suppose a coming disaster gives us two choices:
(a): 50% chance that humans will go extinct, 50% chance nothing happens.
(b): 90% chance that 80% of humans will die.
The expected number of deaths under (b) significantly exceeds that under (a), and (a) also has the greater expected number of survivors. But I guess many will agree that (b) is the better option to choose.
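Spelling out the arithmetic (assuming that the remaining 10% in (b) means nothing happens, and normalising the current population to 1):

$$\mathbb{E}[\text{deaths}]_{(a)} = 0.5 \times 1 = 0.5, \qquad \mathbb{E}[\text{deaths}]_{(b)} = 0.9 \times 0.8 = 0.72,$$
$$\mathbb{E}[\text{survivors}]_{(a)} = 0.5, \qquad \mathbb{E}[\text{survivors}]_{(b)} = 0.9 \times 0.2 + 0.1 \times 1 = 0.28,$$
$$P(\text{someone survives})_{(a)} = 0.5, \qquad P(\text{someone survives})_{(b)} = 1.$$

So (b) loses on expected survivors but wins decisively on the probability that humanity survives at all.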
The key is that “humanity” doesn’t make decisions. Individuals do. The vast majority of individuals care more about themselves than about strangers, or about the statistical future masses. Public debate is mostly about signaling, so it will be split between (a) and (b), depending on cultural/political affiliation. Actual behavior is generally selfish, so most will choose (a), maximizing their personal chances.
Epistemic status: elaborating on a topic by using math on it; making the implicit explicit
From a collective standpoint, the utility function over #humans looks like this: it starts at 0 when there are 0 humans, rises slowly until it reaches “recolonization potential”, then rapidly shoots up, eventually slowing down to roughly linear growth. However, from an individual standpoint, the utility function is just 0 for death, 1 for life. Because of the shape of the collective utility function, you want to “disentangle” deaths, but the individual doesn’t have the same incentive.
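A minimal sketch of those two shapes (the threshold, slopes, and functional form are invented placeholders; only the qualitative shape comes from the description above):

```python
def collective_utility(n_humans: int, recolonization_threshold: int = 10_000) -> float:
    """Illustrative collective utility over population size: near-zero below a
    recolonization threshold, a sharp jump around it, then roughly linear growth.
    The threshold and slopes are arbitrary placeholders."""
    if n_humans <= 0:
        return 0.0
    if n_humans < recolonization_threshold:
        # slow rise while recovery of civilisation is still unlikely
        return 0.1 * n_humans / recolonization_threshold
    # rapid jump once recovery is plausible, then ~linear in population
    return 1.0 + (n_humans - recolonization_threshold) / recolonization_threshold

def individual_utility(alive: bool) -> float:
    """Individual utility: 1 for life, 0 for death."""
    return 1.0 if alive else 0.0
```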
Oh yes! This makes more sense now.
#humans has decreasing marginal returns, since the main concern for #humanity is really the ability to recover, and while that increases with #humans, it is not linear.
I do think individuals have “some” concern about whether humanity in general will survive: since all humans still share *some* genes with each individual, the survival and propagation of strangers can still have some utility for a human individual (I’m not sure where I’m going with this...)
I agree that #humans has decreasing marginal returns at these scales—I meant linear in the asymptotic sense. (This is important because large numbers of possible future humans depend on humanity surviving today; if the world was going to end in a year then (a) would be better than (b). In other words, the point of recovering is to have lots of utility in the future.)
I don’t think most people care about their genes surviving into the far future. (If your reasoning is evolutionary, then read this if you haven’t already.) I agree that many people care about the far future, though.