I liked reading your article; very interesting!
One point I figured I should x-post from our DMs: IMO, if one cares about future lives (as much as present ones), then the question stops really being about expected lives and starts just being about whether an action increases or decreases x-risks. I think a lot/all of the tech you described also has a probability of causing an x-risk if it's not implemented. I don't think we can really determine whether the probability of some of those x-risks is low enough in absolute terms, as those probabilities would need to be unreasonably low, leading to full paralysis, and full paralysis could itself lead to x-risk. I think instead someone with those values (i.e. caring about unborn people) should compare the probability of x-risks if a tech gets developed vs. not developed (or whatever else is being evaluated).