First of all, thanks for the detailed answer. I do not fully understand your position here, but the clarity of the answer to the thought experiment was helpful.
You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that “in practice, there is almost always a possibility to affect the outcome” and that “the outcome is also almost never absolute”. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given. With respect to whether the outcome is “absolute”, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that? Then you say: “And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.” I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?
You ask what I want to gain from this thought experiment.
Following LessWrong or EA community discussions about having children, I get the impression that the factors influencing the decision seem to be:
potentially reduced productivity (less time and energy for saving the world?),
immediate happiness / stress effect on the parents.
However, the ethics of bringing children into the world seem to be touched on only superficially. This seems strange to me for a community in which thinking about ethics and thinking about the future are seen as valuable. @Julia Wise, writing about “Raising children on the eve of AI”, says: “This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence you should think more about it.” At the same time, the median community member’s expectation about the future seems very gloomy to me (though there are also people who seem very excited about a future of mind uploading, turning the world into a holodeck, or whatever).
I am confused about this attitude, and I am trying to determine whether
I just do not understand whether people on LessWrong expect the future to be bad or good,
people think that even in the case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
people (who have children) have not thought about this in detail,
people do not think that any of this matters for some reason I overlook,
people tend to be taken in by motivated reasoning,
or something else.
So I tried to design a clear scenario to understand some parameters driving the decisions.
Why did I ask you about it? You have four children, you take part in discussions about the topic, you also write about alignment / AI risk.
You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that “in practice, there is almost always a possibility to affect the outcome” and that “the outcome is also almost never absolute”. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given.
I agree that, as an individual, one cannot affect most outcomes significantly. But if everybody assumes that their own contribution doesn’t matter, then nobody does anything, and then definitely nothing happens. Everybody contributes small parts, and those aggregate into change, because somebody will be in the right place at the right time to do something, ask the right question, bring the right people together, etc. By ruling out that possibility, you take this effect away, and I have to price that into my model. If you or society wants to achieve something, you have to convince large numbers of people that change is possible and that it is important that everybody contributes. In management, that is called “building momentum.”
With respect to whether the outcome is “absolute”, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that?
You only added a binary probability between two options, keeping both individually rigid. It would have worked better to provide distributions for the number of people suffering, the effectiveness of influence, etc. - but because I didn’t know your intention with the thought experiment, I couldn’t just assume those.
Then you say: “And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.” I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?
No, I did not mean to imply that specifically. Maybe powerful people have a different interest in having children, but I don’t know those forces and wouldn’t make a confident prediction either way.
But if I personally can’t influence results, I have to make assumptions as to why I can’t. Maybe I’m sick, or maybe I’m legally limited in some way in your hypothetical. Such reasons would surely influence my desire to have children.
Following LessWrong or EA community discussions about having children, I get the impression that the factors influencing the decision seem to be:
potentially reduced productivity (less time and energy for saving the world?),
immediate happiness / stress effect on the parents.
...
I think that there are many more reasons than this, including the ecological footprint of a child, personal reasons, general ethical reasons, and others. But I agree that there is no coherent picture. The community hasn’t come to terms with this, and it is more a marketplace of EA/LW-flavoured ideas. What else do you expect of a young and preparadigmatic field? People try to think hard about it, but it is, well, hard.
I am confused about this attitude, and I am trying to determine whether
I just do not understand whether people on LessWrong expect the future to be bad or good,
More bad than good, I guess. But it is a distribution, as you can look up on Metaculus.
people think that even in the case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
Some will think that and be worried. That’s what the s-risk sub-community is about, but I get the impression that it is a small part. And then there is the question of what suffering is and whether it is “bad” or a problem to begin with (though most agree that it is).
people (who have children) have not thought about this in detail,
Unsurprising, as having babies has always been, and always will be (at least until/unless uploading or bio-engineering changes that), a normal part of life. Normal is normal. People do think about how many children they want to have, but rarely about whether to have them at all.
people do not think that any of this matters for some reason I overlook,
people tend to be taken in by motivated reasoning,
or something else.
Sure, some, but I don’t think it is as bad as you seem to think.
So I tried to design a clear scenario to understand some parameters driving the decisions.
And here I think things went wrong. I think the scenario wasn’t good. It was unrealistic, cutting out too small a part of what you seem to be interested in.
Why did I ask you about it? You have four children, you take part in discussions about the topic, you also write about alignment / AI risk.
Thank you.