Ah, I never thought about this being a secretary problem.
Well, initially I used it as an analogy for evolution and didn’t think too much about memorising/backtracking.
Oh wait, since the mountaineer remembers each peak he has seen, he can go back to one of the high peaks he encountered before (assuming the flood hasn’t mopped the floor yet, which is a given since he is still exploring), so there are probably no irrecoverable rejections here, unlike in the secretary problem.
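To make the “memory means no irrecoverable rejections” point concrete, here is a minimal sketch (the function name and numbers are purely illustrative, not from the original discussion): with memory, exploration reduces to tracking the running maximum, which the searcher can always backtrack to.

```python
def explore_with_memory(peak_heights, steps_before_flood):
    """Visit peaks in order; when the flood forces a stop, backtrack to the best peak seen."""
    best_peak, best_height = None, float("-inf")
    for i, height in enumerate(peak_heights[:steps_before_flood]):
        if height > best_height:  # remember every peak; keep the best so far
            best_peak, best_height = i, height
    return best_peak, best_height

# Example: the mountaineer manages to see 6 peaks before the water rises.
print(explore_with_memory([3, 7, 2, 9, 5, 6, 12, 4], steps_before_flood=6))  # -> (3, 9)
```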
The second choice is a strange one. I think the entire group taking the best chance on one peak ALSO maximises the expected number of survivors, together with maximising each individual’s chance of survival.
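A quick check, under the simplifying assumption that $n_i$ people go to peak $i$ and each survives independently with probability $p_i$:

$$E[\text{survivors}] = \sum_i n_i p_i \;\le\; \Big(\sum_i n_i\Big)\max_i p_i,$$

so sending everyone to the peak with the highest $p_i$ maximises the expected number of survivors and each individual’s survival chance at the same time.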
But it still seems that “a higher chance that someone survives” is something we want to factor into the utility calculation when humanity makes choices in the face of a catastrophe.
For example, suppose a coming disaster gives us two choices:
(a): 50% chance that humans will go extinct, 50% chance nothing happens.
(b): 90% chance that 80% of humans will die.
The expected number of deaths under (b) significantly exceeds that under (a), and (a) gives a higher expected number of survivors. But I guess many will agree that (b) is the better option to choose.
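To make the arithmetic explicit (assuming a current population of $N$, and that the remaining 10% branch of (b) means nothing happens, which the option leaves implicit):

$$E[\text{deaths}_{(a)}] = 0.5N, \qquad E[\text{deaths}_{(b)}] = 0.9 \times 0.8N = 0.72N,$$

$$E[\text{survivors}_{(a)}] = 0.5N, \qquad E[\text{survivors}_{(b)}] = 0.9 \times 0.2N + 0.1 \times N = 0.28N,$$

yet (a) carries a 50% chance of extinction while (b) carries essentially none.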
The key is that “humanity” doesn’t make decisions. Individuals do. The vast majority of individuals care more about themselves than about strangers, or about the statistical future masses. Public debate is mostly about signaling, so will be split between (a) and (b), depending on cultural/political affiliation. Actual behavior is generally selfish, so most will choose (a), maximizing their personal chances.
Epistemic status: elaborating on a topic by using math on it; making the implicit explicit
From a collective standpoint, the utility function over #humans looks like this: it starts at 0 when there are 0 humans, slowly rises until it reaches “recolonization potential”, then rapidly shoots up, eventually slowing down but still linear. However, from an individual standpoint, the utility function is just 0 for death, 1 for life. Because of the shape of the collective utility function, you want to “disentangle” deaths, but the individual doesn’t have the same incentive.
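A toy sketch of that shape (the parametrisation and threshold value are purely illustrative assumptions, not anything claimed in the thread): near zero below a recolonization threshold, a steep rise around it, then roughly linear growth afterwards.

```python
import math

# Hypothetical parameters, chosen only to illustrate the described shape.
RECOLONIZATION_THRESHOLD = 10_000   # assumed "minimum viable population"
STEEPNESS = 0.001                   # how sharply utility jumps around the threshold
LINEAR_SLOPE = 1.0                  # roughly linear value of additional people afterwards

def collective_utility(n_humans: float) -> float:
    """0 at 0 humans, slow rise, sharp jump near the threshold, then ~linear."""
    if n_humans <= 0:
        return 0.0
    # Logistic term captures the jump around "recolonization potential";
    # the linear term captures the still-increasing value of more survivors.
    jump = 1.0 / (1.0 + math.exp(-STEEPNESS * (n_humans - RECOLONIZATION_THRESHOLD)))
    return jump * (1.0 + LINEAR_SLOPE * n_humans / RECOLONIZATION_THRESHOLD)

def individual_utility(alive: bool) -> float:
    """From the individual's standpoint: 0 for death, 1 for life."""
    return 1.0 if alive else 0.0

for n in (0, 1_000, 10_000, 100_000, 1_000_000):
    print(n, round(collective_utility(n), 2))
```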
Oh yes! This can make more sense now.
#humans has decreasing marginal returns, since the main concern for humanity is really the ability to recover, and while that increases with #humans, it is not linear.
I do think individuals have “some” concern about whether humanity in general will survive: since all humans still share *some* genes with each individual, the survival and propagation of strangers can still have some utility for a human individual (I’m not sure where I’m going here...)
I agree that #humans has decreasing marginal returns at these scales—I meant linear in the asymptotic sense. (This is important because large numbers of possible future humans depend on humanity surviving today; if the world was going to end in a year then (a) would be better than (b). In other words, the point of recovering is to have lots of utility in the future.)
I don’t think most people care about their genes surviving into the far future. (If your reasoning is evolutionary, then read this if you haven’t already.) I agree that many people care about the far future, though.