I hope I am not mistaken about this, but it seems to me that MIRI and CFAR were separated because the former focuses on “Friendly AI” and the latter on “raising the sanity waterline”. It’s not just a difference in topic; the topic also determines the tools and strategy. To research Friendly AI, you need to find good mathematicians, develop a mathematical theory, convince AI researchers of its seriousness, publish in peer-reviewed journals, and ultimately build the machine. To raise the sanity waterline, you need to find good teachers, develop a curriculum, educate people, and measure the impact. Obviously, Eliezer cares mostly about the former, and I believe even the author of the video would agree with that.
So, quite likely, Eliezer is not the most involved person in CFAR. I don’t know enough about CFAR’s internals to say precisely who that person is. Perhaps many people contribute significantly in ways that can’t be directly compared; is it more important to research the curriculum, write the textbooks, test the curriculum, connect people, or keep everything running smoothly? Maybe it’s not Julia, but that doesn’t mean it’s Eliezer.
I guess CFAR could also send Anna Salamon, Michael Smith, Andrew Critch, or anyone else from their team to Skepticon. Would that be better? Or, unless it is Eliezer personally, will it always seem like the dark overlord Eliezer is hiding behind someone else’s face? (Actually, I wouldn’t mind if Eliezer went to Skepticon, if he thought that was the best use of his time.) How about all of them going to Skepticon together—would that be acceptable? Or is it: anyone but Julia?
By the way, I really liked Julia’s Straw Vulcan lecture, and sent a few people a link to it. So she has some interesting things to say, too. And those things are completely relevant to CFAR’s goals.