(PSA:) Hey you, whoever is reading this comment: this post is not an excuse to skip working on alignment. I can fully relate to the fear of death here, and my own tradeoff is to focus hard on instrumental goals such as my physical health and nutrition (including supplements) to delay death and pick up some nice productivity benefits along the way. But none of that stops an AI from killing you within 15 years, so skipping alignment work is most likely not even a defection in a tragedy of the commons; working on it is paramount to your own future success at staying alive.
(Also, if we solve alignment, then we get a pretty OP AGI that can help us out with all the other stuff, so really it's very much a win-win in my mind.)
After all, inter-agent and interspecies alignment is simply an instrumental goal on the way to an artificial intelligence that can deliver both biological and information-theoretic immortality.