I could sorta understand this if we were talking about one person you couldn’t live without; it’s the idea of worrying about not having any deep friends in general that’s making me blink.
Some people are convinced they’ll have to live without the strangest things after the Singularity… having encountered something possibly similar before, I do seriously wonder if you might be suffering from a general hope-in-the-future deficiency.
PS/Edit: Spider Robinson’s analogy, not mine.
If you were the Friendly AI, and Alicorn failed to make a fast friend as predicted and that resulted in suicidal depression, would that depression be defined as mental illness and treated as such? Would recent wake-ups have the right to commit suicide? I think that’s an incredibly hard question, so please don’t answer if you don’t want to.
Have you written anything on suicide in the metaethics sequence or elsewhere?
And the relevant question extends to the assumption behind the phrase ‘and treated as such’. Do people have the right to be nuts in general?
I suppose having to rigorously prove the mathematics behind these questions is why Eliezer is so much more pessimistic about the probability of AI killing us than I am.
I have only managed to live without particular people who’ve departed from my life, for whatever reason, by virtue of already having other people to console me.
That said, there are a handful of people whose loss would be especially terrible, but I could survive it with someone else around to grieve with.