a mistake to leave you as the main “public advocate / person who writes stuff down” for the cause.
It sort of sounds like you’re treating him as the sole “person who writes stuff down”, not just the “main” one. Noam Chomsky might have been the “main linguistics guy” in the late 20th century, but people didn’t expect him to write more than a trivial fraction of the field’s output, either in terms of high-level overviews or in-the-trenches research.
I think EY was pretty clear in the OP that this is not how things go on earths that survive. Even if there aren’t many who can write high-level alignment overviews today, more people should make the attempt and try to build skill.
In the counterfactual world where Eliezer was totally happy continuing to write articles like this and being seen as the “voice of AI Safety”, would you still agree that it’s important to have a dozen other people also writing similar articles?
I’m genuinely lost on the value of having a dozen similar articles. I don’t know of a dozen different versions of fivethirtyeight.com or GiveWell, and it never occurred to me to think that the world is worse for only having one of those.
Here’s my answer: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities?commentId=LowEED2iDkhco3a5d
We have to actually figure out how to build aligned AGI, and the details are crucial. If you’re modeling this as a random blog post aimed at persuading people to care about this cause area, a “voice of AI safety” type task, then sure, the details are less important and it’s not so clear that Yet Another Marginal Blog Post Arguing For “Care About AI Stuff” matters much.
But humanity also has to do the task of actually figuring out and implementing alignment. If not here, then where, and when? If here—if this is an important part of humanity’s process of actually figuring out the exact shape of the problem, clarifying our view of what sorts of solutions are workable, and solving it—then there is more of a case that this is a conversation of real consequence, and having better versions of this conversation sooner matters.