In the counterfactual world where Eliezer was totally happy continuing to write articles like this and being seen as the “voice of AI Safety”, would you still agree that it’s important to have a dozen other people also writing similar articles?
I’m genuinely lost on the value of having a dozen similar papers—I don’t know of a dozen different versions of fivethirtyeight.com or GiveWell, and it never occurred to me to think that the world is worse for only having one of those.
Here’s my answer: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities?commentId=LowEED2iDkhco3a5d
We have to actually figure out how to build aligned AGI, and the details are crucial. If you're modeling this as just another blog post aimed at persuading people to care about this cause area (a "voice of AI safety" type task), then sure: the details are less important, and it's not so clear that Yet Another Marginal Blog Post Arguing For "Care About AI Stuff" matters much.
But humanity also has to do the task of actually figuring out and implementing alignment. If not here, then where, and when? If here—if this is an important part of humanity’s process of actually figuring out the exact shape of the problem, clarifying our view of what sorts of solutions are workable, and solving it—then there is more of a case that this is a conversation of real consequence, and having better versions of this conversation sooner matters.