Doesn’t any such argument also imply that you should commit suicide?
Not necessarily
Suicide will not save you from all sources of s-risk and may make some of them worse, for example if quantum immortality is true. If resurrection is possible, things become even more complicated.
The possibility of extremely large amounts of value should also be considered. If alignment is solved and we all get to live in a utopia, then killing yourself could deprive you of billions of years or more of happiness.
I would also argue that choosing to stay alive when you know of the risk is different from inflicting the risk on a new being you have created.
With that being said, suicide is a conclusion you could come to. To be completely honest, it is an option I heavily consider. I fear that LessWrong and the wider alignment community may have underestimated the likelihood of s-risks by a considerable amount.
AI may kill all humans, but it will preserve all our texts forever; it will even internalise them as training data. Thus it is rational either to publish as much as possible, or to write nothing.
--
A cowardly AI could create my possible children even if I don’t have children.