I agree with this take. I already have four children, and I wouldn’t decide against children because of AI risks.
Did you take such things into account when you made the decision, or decisions?
Not AI risk specifically. But I had lengthy discussions with a friend about the general question of whether it is ethical to have children. The concerns in our discussions were overpopulation and how bad the world is in general and in Germany in particular. These concerns, much weaker than extinction, were enough for him not to have children. He also mentioned The Voluntary Human Extinction Movement. We still disagree on this. Mostly we disagree on how bad and failed the world is. I think it is not worse than it has been most of the time since forever. Maybe because I perceive less suffering (in myself and in others) than he does. We also disagree on how to deal with overpopulation. Whether to take local population into account. Whether to weigh by consumption. Whether to see this as an individual obligation, or a collective one. Or as an obligation at all. Still, we are good friends. Maybe that tells you something.
So you thought that overpopulation is not much of a concern and the world is not so bad, right? But if you had thought that overpopulation (or something else) was a really serious problem and would also have had very bad effects on your children (for example, if you had expected with high probability that their life would have been a permanent postapocalyptic fight for food), would that have affected your decision?
Very likely. People have, at different times and places, reacted to perceived overpopulation with reduced fertility, if needed by infanticide (Robin Hanson discusses this sometimes, e.g., here). Though what exactly was considered “too many children” was probably very different each time. It wasn’t always “bad effects on children”. A “permanent postapocalyptic fight for food” doesn’t seem like the strongest argument. People have lived in that state for most of human history.
Sorry, I don’t fully understand the answer. 1) You think that you would have reacted to perceived overpopulation by having fewer children, but 2) at the same time, you think that expecting your children to have to live in a permanent postapocalyptic fight for food is not such a strong argument because that was normal in former times. But if the consideration in point 2 had not mattered, why would the perception of overpopulation have mattered to you?
First, local overpopulation has a much stronger influence than global overpopulation. I think it is arguable that one should do something about Korea’s shrinking population even if one is worried that there are not enough resources globally. In general, what is true for the average is not true for everybody.
Second, just because I, as an individual, (feel that I) can’t afford to have children, e.g., because society imposes costs on me for having children, that doesn’t mean that I wish such children not to exist (it might mean that, but it’s a different claim).
Maybe I am not clear enough or we are talking past each other. So forget about the overpopulation. Suppose you lived in a universe where in the year in which you decided to have your first child some omniscient demon appeared and told you that
with probability p, some disaster happens 10 years later that causes everybody to die of starvation with certainty (with all the societal and psychological side effects that such a famine implies),
with probability 1-p life in your country remains forever as it was in that year (it’s hypothetical, so please do not question whether that is possible).
So my questions:
Would there be some p where you would have decided not to have children?
How would the quality / kind of the disaster affect the decision?
How would the time horizon (10 years in my example) affect the decision?
Are there other societal or other global conditions where you think people should not have children?
I reject the usefulness of the thought experiment. In practice, there is almost always a possibility to affect the outcome. And then the outcome is also almost never absolute (“starve with certainty”). And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children. Thus, I wonder what you want to gain from this thought experiment? What is your crux?
But fine, let’s come up with a scenario that might fit the bill: It must be something that can’t be influenced, so natural causes are out. So aliens have sent us a message that they plan to do that and have proven that they have the technological means, and because of the means of transportation/communication we can’t send them a message back to convince them otherwise or something. And I guess I can work with the 1-p as it just means “stable comparable utility.”
What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children. But note that simply posing the scenario with a non-zero p slightly alters the decision for children: The presence of aliens would alter society and people might want to do something about it. Also, decision for children is a shared decision, so I additionally assume the partner would match this. But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At p=~80% there wouldn’t be much left to have children.
First of all, thanks for the detailed answer. I do not fully understand your position here, but the clarity of the answer to the thought experiment was helpful.
You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that “in practice, there is almost always a possibility to affect the outcome” and that “the outcome is also almost never absolute”. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given. With respect to whether the outcome is “absolute”, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that? Then you say: “And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.” I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?
You ask what I want to gain from this thought experiment.
Following lesswrong or EA community discussions about decisions about having children, I get the impression that the factors that influence the decision seem to be:
potentially reduced productivity (less time and energy for saving the world?),
immediate happiness / stress effect on the parents.
However, the ethics of bringing children into the world seem to be touched only superficially. This seems strange to me for a community in which thinking about ethics and thinking about the future are seen as valuable. @Julia Wise, writing about “Raising children on the eve of AI”, says: “This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence you should think more about it.” At the same time, the median community member’s expectation about the future seems very gloomy to me (though there are also people who seem very excited about a future of mind uploading, turning the world into a holodeck, or whatever).
I am confused about this attitude, and I try to determine whether
I just do not understand whether people on lesswrong expect the future to be bad or good,
people think even in case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
people (who have children) have not thought about this in detail,
people do not think that any of this matters for some reason I overlook,
people tend to be taken in by motivated reasoning,
or something else.
So I tried to design a clear scenario to understand some parameters driving the decisions.
Why did I ask you about it? You have four children, you take part in discussions about the topic, you also write about alignment / AI risk.
“You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that ‘in practice, there is almost always a possibility to affect the outcome’ and that ‘the outcome is also almost never absolute’. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given.”
I agree that, as an individual, one cannot affect most outcomes significantly. But if everybody assumes that everybody else does the same, then nobody does anything, and thus nothing gets done. Everybody contributes small parts, but those aggregate into change, because somebody will be at the right place at the right time to do something or ask the right question or bring the right people together etc. By ruling out that possibility, you take this effect away, and I have to price that into my model. If you or society wants to achieve something, you have to convince large numbers of people that change is possible and that it is important that everybody contributes. In management, that is called “building momentum.”
“With respect to whether the outcome is ‘absolute’, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that?”
You only added a binary probability between two options, keeping both individually rigid. It would have worked better to provide distributions for the number of people suffering or the effectiveness of influence etc., but because I didn’t know your intention with the thought experiment, I couldn’t just assume those.
“Then you say: ‘And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.’ I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?”
No, I don’t want to make that specific implication. Maybe powerful people have a different interest in having children, but I don’t know those forces and wouldn’t make a confident prediction either way.
But if I personally can’t influence results, I have to make assumptions as to why I can’t. Maybe I’m sick, maybe I’m legally limited in some way in your hypothetical. Such reasons would surely influence my desire to have children.
“Following lesswrong or EA community discussions about decisions about having children, I get the impression that the factors that influence the decision seem to be: potentially reduced productivity (less time and energy for saving the world?), immediate happiness / stress effect on the parents. ...”
I think that there are many more reasons than this, including the ecological footprint of a child, personal reasons, general ethical reasons, and others. But I agree that there is no coherent picture. The community hasn’t come to terms with this; it is more a marketplace of EA/LW-flavoured ideas. What else do you expect of a young and preparadigmatic field? People try to think hard about it, but it is, well, hard.
“I am confused about this attitude, and I try to determine whether I just do not understand whether people on lesswrong expect the future to be bad or good,”
More bad than good, I guess. But it is a distribution, as you can look up on Metaculus.
“people think even in case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,”
Some will think that and be worried. That’s what the s-risk sub-community is about, but I get the impression that it is a small part. And then there is the question of what suffering is and whether it is “bad” or a problem to begin with (though most agree on that).
“people (who have children) have not thought about this in detail,”
Unsurprising, as having babies has always been and always will be (until/when/if uploading/bio-engineering) a normal part of life. Normal is normal. People do think about how many children they want to have, but rarely whether.
“people do not think that any of this matters for some reason I overlook, people tend to be taken in by motivated reasoning, or something else.”
Sure, some, but I don’t think it is as bad as you seem to think.
“So I tried to design a clear scenario to understand some parameters driving the decisions.”
And here I think things went wrong. I think the scenario wasn’t good. It was unrealistic, carving out too small a part of what you seem to be interested in.
Thank you.
About your reaction to the thought experiment:
“But fine, let’s come up with a scenario that might fit the bill: It must be something that can’t be influenced, so natural causes are out.”
No, maybe I was not clear enough: The scenario is just about something that I cannot influence to a relevant extent. It does not matter whether mankind together is theoretically able to mitigate the disaster, because that is not directly relevant to individual decisions about having children.
“And I guess I can work with the 1-p as it just means ‘stable comparable utility.’”
I am not sure whether I understand what you mean, but I just meant a world that can develop in one of two directions, and you have subjective probabilities about it.
“What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children.”
Yes, technically you might be able to do something relevant, but this is not why I came up with the thought experiment. So you can just assume that “you” in this scenario will not be able to save the world.
“But note that simply posing the scenario with a non-zero p slightly alters the decision for children: The presence of aliens would alter society and people might want to do something about it.”
How would that affect the decision, in your opinion?
“Also, decision for children is a shared decision, so I additionally assume the partner would match this.”
Even if something is a shared decision, you can always first think about your own preferences.
“But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At p=~80% there wouldn’t be much left to have children.”
Thanks for the precise answer!
I am surprised about the 2%. May I ask
what your expectations about global catastrophic risks are for the next decades? (No extremely precise answer necessary.)
whether “would start to draw more capacity” implies that the whole expectation would only affect your decisions because you believe you would invest your time into saving the world, but not because of the effect of the expected future development on your (hypothetical) child’s life?
Thank you for continuing to engage in earnest dialogue. I’m currently traveling, but you deserve an answer. I will reply later. Feel free to elaborate in the meantime.
While I don’t have much to elaborate, maybe the following headline captures the relevant mood: https://unchartedterritories.tomaspueyo.com/p/what-would-you-do-if-you-had-8-years
This comment is just to note that I’d still be happy about an answer.
“what your expectations about global catastrophic risks are for the next decades? (No extremely precise answer necessary.)”
~~10%, mostly from AI. Note that my comments about my responses to this probability are different from actual responses to having a baby because the scenario is very different.
“whether ‘would start to draw more capacity’ implies that the whole expectation would only affect your decisions because you believe you would invest your time into saving the world, but not because of the effect of the expected future development on your (hypothetical) child’s life?”
Not only, but mostly, yes.
Thanks. I don’t understand the sentence “Note that my comments about my responses to this probability are different from actual responses to having a baby because the scenario is very different.” Would you be willing to elaborate?
In your hypothetical scenario I had to come up with very specific worlds that include a lot of suffering. With the “regular” existential risk (mostly from AI), that distribution is different, so having babies is affected differently.
Thanks for the reminder.
“Yes, technically you might be able to do something relevant, but this is not why I came up with the thought experiment. So you can just assume that ‘you’ in this scenario will not be able to save the world.”
I think I don’t understand what you want to do with the thought experiment. In many cases it is not necessary to understand the purpose of a thought experiment, and I think thought experiments are often useful, including ones like this that reduce the matter to a yes/no question. But this one is special.
I cannot “just” “assume” that “I” will not be able to do something. These words mean something, and that plays a role here. To assume means that I restrict all possible worlds to those where the scenario holds. That is not easy here: the hypothetical worlds are rare and difficult to make concrete enough to ask my intuition about, so I cannot “just” do it. Asking me to “just” do it means letting my intuition use heuristics that are mostly based on our current world and that will pattern-match to things that probably do not correlate with what you are interested in. I’d rather refuse the question. I generally like answering polls and online questions. I think it is a good social habit to cultivate, to source input about a wide range of topics. But I refuse to answer those that are ill-posed. I say: “Mu.”
I assume we are either lost in translation (which means I cannot phrase my thoughts clearly or am unable to put myself in your shoes) or you do not want to think about the question for some reason. I think I have to give up here. Nonetheless, thank you very much for the answer.
I guess “I don’t want to answer the question” is a decent summary. I spent so much time answering why because I felt that there were too many assumptions in the scenario.
“Even if something is a shared decision, you can always first think about your own preferences.”
But my preferences are entangled with everybody else’s preferences. If you put me into hypothetical worlds where those people prefer differently, so do I.
People may disagree with this entanglement, but they are likely wrong. Our preferences are largely shaped by our upbringing, our peers, and society at large. And this is necessary: people cannot grow up into functioning adults without this learning environment, as is shown, e.g., by feral children.
Of course, preferences are shaped by your social environment, but I assume that in any given situation you could still state a preference on the basis of which you would then enter into an exchange with the other relevant people?
Yes, that would be possible, but it would lead to different results.