If you were the friendly AI and Alicorn failed to make a fast friend as predicted and that resulted in suicidal depression, would that depression be defined as mental illness and treated as such? Would recent wake-ups have the right to commit suicide? I think that’s an incredibly hard question so please don’t answer if you don’t want to.
Have you written anything on suicide in the metaethics sequence or elsewhere?
And the relevant question extends to the assumption behind the phrase ‘and treated as such’. Do people have the right to be nuts in general?
I suppose having to rigorously prove the mathematics behind these questions is why Eliezer is so much more pessimistic about the probability of AI killing us than I am.