I might also add that Eliezer Yudkowsky, despite his many other contributions, has made only minor direct contributions to technical AI Alignment research. [His indirect contribution, by highlighting and popularising the work of others, is high-EV impact.]
I don’t think this is true at all. Like, even prosaic alignment researchers care about things like corrigibility, which is an Eliezer-idea.