There is no right way to emotionally respond to the reality of approaching superintelligent AI, our collective responsibility to align it with our values, or the fact that we might not succeed.
Just wanted to mention that it is by no means a “reality” but a hotly debated conjecture, in case it helps someone Basilisked by Doomerism.
It still seems pretty likely, but I really appreciate your articulating this and trying to push back against insularity and echo chamber-ness.