A booster for the case for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.
To illustrate: you solve poverty, you still have to face climate change; you solve climate change, you still have to face biopathogens; you solve biopathogens, you still have to face nanotech; you solve nanotech, you still have to face superintelligence (SI).
Solve SI correctly, and the rest are all handled. For people who use the cui bono argument, I think this answer is usually the best one to give.
This assumes that you get a very strong singularity with either a hard takeoff or a fairly fast takeoff. If someone doesn't assign a high probability to AI engaging in recursive self-improvement, this argument will be unpersuasive.