Consider this hypothetical situation:
Medical-grade nanobots capable of rendering people immortal exist. They're a one-time injection that protects you from all disease forever. Do you and your family accept the treatment? If so, you're essentially guaranteeing your family will survive until the singularity, at which point a malevolent singleton might take over the universe and do all sorts of nasty things to you.
I agree that cryonics is scarier than the hypothetical, but the issue at hand isn’t actually different.
Children are only helpless for about 10 years. If the singleton came without warning within 10 years of my child's birth, it would be awful but not my fault. If I had any warning of it coming, and I still chose to have children that then came to harm, it would be my fault.
Why does fault matter?
Good question. The reason is that this has recently become an ethical problem for me rather than an optimization problem. Perhaps that is why I think of it in far mode, if that is what I'm doing. But I do know that in ethical mode, it can be the case that you're no longer allowed to base a decision on the computed "average value" … even small risks or compromises might be unacceptable. If allowing my child to come to harm is something I'm not permitted to do, then it doesn't matter what advantage I'm gambling for. I believe that at a certain age they can make their own decision, and then, with relief, I may sign them up for cryonics at their request.