Premise of comment: Eliezer Yudkowsky’s view is correct. We are doomed, along with everything we value, barring a miracle. This comment takes that seriously and literally.
(I don’t know my personal belief on this, and I don’t think it’s of interest here.)
Proposed action: Cancel Cryonics Subscription. If I am doomed, then cryonics is a poor use of my resources, whether selfish or altruistic. From “You only live twice”:
“The second statement is that you have at least a little hope in the future. Not faith, not blind hope, not irrational hope—just, any hope at all.”
Also, from a security mindset, cryonics is bad if it places my preserved brain under the control of an unaligned AI. Waking into a dystopia was already a risk of cryonics, and being doomed increases that risk.
Relatedly, don’t have children because you want grandchildren.
Proposed action: Civilizational Hospice. When people have multiple terminal illnesses, they often get hospice care instead of curative care: they are kept as comfortable as possible while alive, and allowed to die with as little pain as possible. Our civilization has multiple terminal illnesses. Previously there was hope that Friendly AI would provide a cure, but it turns out that AI is just another terminal illness.
To keep civilization as comfortable as possible, we can work on short-termist causes like GiveWell’s top charities. Their numbers used to look smaller than existential risk’s, but it turns out the existential-risk numbers are all zero, and preventing some malaria deaths is the most valuable thing I did this year.
Achieving a clean civilizational death could mean increasing the least-bad existential risks, so that we die quickly instead of tiling the light-cone with dystopia. So we would promote gain-of-function research, nanotech, fossil fuels, nuclear proliferation, etc. Insert joke here about a political party you dislike. This requires persuasion, to reduce conflict with others who are still trying to reduce these risks; families can have similar conflicts during hospice care.
In hospice care, the normal rule is to enter hospice when you are expected to die within six months. Civilizational hospice faces a similar timing question. Delaying unfriendly AI by a year is still worth a lot, but hospice should become the focus as soon as delaying the end of the world becomes intractable, and potentially before then. Since this is the first and last time we will do civilizational hospice, we should try to figure it out.
Proposed action: Avoid Despair. Since we are doomed, we have always been doomed. Many people have believed in similar dooms, religious or secular, and many still do. They were still able to enjoy their lives before doom arrived, and so are we.