What are your expectations about global catastrophic risks for the next decades? (No extremely precise answer necessary.)
~10%, mostly from AI. Note that my comments about my responses to this probability are different from my actual responses to having a baby, because the scenario is very different.
Does “would start to draw more capacity” imply that this whole expectation would affect your decisions only because you believe you would invest your time into saving the world, and not because of the effect of the expected future development on your (hypothetical) child’s life?

Not only, but mostly, yes.
Thanks. I don’t understand the sentence “Note that my comments about my responses to this probability are different from my actual responses to having a baby, because the scenario is very different.” Would you be willing to elaborate?
In your hypothetical scenario I had to come up with very specific worlds involving a lot of suffering. With the “regular” existential risk (mostly from AI), that distribution is different, so having babies is affected differently.